path: root/accel
2023-05-18  accel/tcg: Fix append_mem_cb  (Richard Henderson)

In fcdab382c8b9 we removed a tcg_gen_extu_tl_i64 from gen_empty_mem_cb, and failed to adjust the associated copy, leading to a failed assert.

Fixes: fcdab382c8b9 ("accel/tcg: Widen plugin_gen_empty_mem_callback to i64")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230518145813.2940745-1-richard.henderson@linaro.org>
2023-05-18  tcg: round-robin: do not use mb_read for rr_current_cpu  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18  kvm: Enable dirty ring for arm64  (Gavin Shan)

arm64 has a different capability from x86 for enabling the dirty ring: KVM_CAP_DIRTY_LOG_RING_ACQ_REL. Besides, arm64 also needs the backup bitmap extension (KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) when the 'kvm-arm-gicv3' or 'arm-its-kvm' device is enabled.

Here the extension is always enabled, which introduces unnecessary overhead in the last stage of dirty log synchronization when those two devices aren't used; the overhead should be very small and acceptable. The benefit is that future cases where those two devices are used are covered without modifying the code.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20230509022122.20888-5-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
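A minimal sketch of the probe-and-enable sequence this entry describes, using the kvm_vm_check_extension()/kvm_vm_enable_cap() helpers from QEMU's kvm-all.c; the surrounding function shape and error handling are illustrative assumptions, not the actual patch:

```
/* Hedged sketch: prefer the plain dirty-ring capability (x86), fall
 * back to the acquire/release variant (arm64), then opt in to the
 * backup-bitmap extension when available. Assumes the usual KVM
 * headers ("sysemu/kvm_int.h", <linux/kvm.h>, <errno.h>). */
static int dirty_ring_enable_sketch(KVMState *s, uint32_t ring_bytes)
{
    int cap = KVM_CAP_DIRTY_LOG_RING;

    if (!kvm_vm_check_extension(s, cap)) {
        cap = KVM_CAP_DIRTY_LOG_RING_ACQ_REL;           /* arm64 */
        if (!kvm_vm_check_extension(s, cap)) {
            return -ENOTSUP;
        }
    }
    if (kvm_vm_enable_cap(s, cap, 0, ring_bytes) < 0) {
        return -EIO;
    }
    if (kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP)) {
        /* Covers pages dirtied outside a running vcpu, e.g. by the
         * in-kernel GICv3/ITS ('kvm-arm-gicv3', 'arm-its-kvm'). */
        kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP, 0, 1);
    }
    return 0;
}
```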
2023-05-18  kvm: Add helper kvm_dirty_ring_init()  (Gavin Shan)

There are multiple capabilities associated with the dirty ring on different architectures: KVM_CAP_DIRTY_LOG_RING for x86 and KVM_CAP_DIRTY_LOG_RING_ACQ_REL for arm64. More will need to be done in order to support the dirty ring for arm64, so let's add the helper kvm_dirty_ring_init() to enable the dirty ring. With this, the code looks a bit cleaner.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-4-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18  kvm: Synchronize the backup bitmap in the last stage  (Gavin Shan)

In the last stage of live migration or memory slot removal, the backup bitmap needs to be synchronized when it has been enabled.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-3-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-18  migration: Add last stage indicator to global dirty log  (Gavin Shan)

The global dirty log synchronization is used when KVM and the dirty ring are enabled. There is a particularity for ARM64, where the backup bitmap is used to track dirty pages in non-running-vcpu situations. It means the dirty ring works with a combination of the ring buffer and the backup bitmap. The dirty bits in the backup bitmap need to be collected in the last stage of live migration.

In order to identify the last stage of live migration and pass it down, an extra parameter is added to the relevant functions and callbacks. This last stage indicator isn't used until the dirty ring is enabled in the subsequent patches.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-2-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
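A hedged sketch of the signature change this entry describes; the exact QEMU prototypes may differ, and the function name is taken from the commit's description of the global dirty log path:

```
/* Before: callers cannot distinguish periodic syncs from the final
 * one performed while the VM is stopped. */
void memory_global_dirty_log_sync(void);

/* After (sketch): the last-stage flag is threaded through, so arm64
 * can additionally collect the backup bitmap at the very end. */
void memory_global_dirty_log_sync(bool last_stage);
```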
2023-05-16  tcg: Add tlb_dyn_max_bits to TCGContext  (Richard Henderson)

Disconnect guest tlb parameters from TCG compilation.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Add page_bits and page_mask to TCGContext  (Richard Henderson)

Disconnect guest page size from TCG compilation. While this could be done via exec/target_page.h, we want to cache the value across multiple memory access operations, so we might as well initialize this early.

The changes within tcg/ are entirely mechanical:

    sed -i s/TARGET_PAGE_BITS/s->page_bits/g
    sed -i s/TARGET_PAGE_MASK/s->page_mask/g

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Add addr_type to TCGContext  (Richard Henderson)

This will enable replacement of TARGET_LONG_BITS within tcg/.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Widen plugin_gen_empty_mem_callback to i64  (Richard Henderson)

Since we do this inside gen_empty_mem_cb anyway, let's do this earlier inside tcg expansion.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Merge do_gen_mem_cb into caller  (Richard Henderson)

As do_gen_mem_cb is called once, merge it into gen_empty_mem_cb.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Merge gen_mem_wrapped with plugin_gen_empty_mem_callback  (Richard Henderson)

As gen_mem_wrapped is only used in plugin_gen_empty_mem_callback, we can avoid the curiosity of union mem_gen_fn by inlining it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Widen helper_atomic_* addresses to uint64_t  (Richard Henderson)

Always pass the target address as uint64_t.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Widen helper_{ld,st}_i128 addresses to uint64_t  (Richard Henderson)

Always pass the target address as uint64_t.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Widen tcg-ldst.h addresses to uint64_t  (Richard Henderson)

Always pass the target address as uint64_t. Adjust tcg_out_{ld,st}_helper_args to match.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Widen gen_insn_data to uint64_t  (Richard Henderson)

We already pass uint64_t to restore_state_to_opc; this changes all of the other uses from insn_start through the encoding to decoding.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Remove helper_unaligned_{ld,st}  (Richard Henderson)

These functions are now unused.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  meson: Detect atomic128 support with optimization  (Richard Henderson)

There is an edge condition prior to gcc13 for which optimization is required to generate 16-byte atomic sequences. Detect this.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
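An illustrative compile-and-link probe of the kind this detection implies; the exact test program in meson.build may differ:

```
/* Illustrative 16-byte atomics probe: with pre-gcc-13 compilers this
 * may only link without libatomic when built with optimization, which
 * is why the detection must run with -O enabled. */
int main(void)
{
    unsigned __int128 x = 0, y;

    __atomic_load(&x, &y, __ATOMIC_RELAXED);
    __atomic_store(&x, &y, __ATOMIC_RELAXED);
    return 0;
}
```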
2023-05-16  tcg: Add 128-bit guest memory primitives  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Implement helper_{ld,st}*_mmu for user-only  (Richard Henderson)

TCG backends may need to defer to a helper to implement the atomicity required by a given operation. Mirror the interface used in system mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Unify helper_{be,le}_{ld,st}*  (Richard Henderson)

With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions.

Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Honor atomicity of stores  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  accel/tcg: Honor atomicity of loads  (Richard Henderson)

Create ldst_atomicity.c.inc.

Not required for user-only code loads, because we've ensured that the page is read-only before beginning to translate code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  accel/tcg: Reorg system mode store helpers  (Richard Henderson)

Instead of trying to unify all operations on uint64_t, use mmu_lookup() to perform the basic tlb hit and resolution. Create individual functions to handle access by size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  accel/tcg: Reorg system mode load helpers  (Richard Henderson)

Instead of trying to unify all operations on uint64_t, pull out mmu_lookup() to perform the basic tlb hit and resolution. Create individual functions to handle access by size.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  accel/tcg: Introduce tlb_read_idx  (Richard Henderson)

Instead of playing with offsetof in various places, use MMUAccessType to index an array. This is easily defined instead of the previous dummy padding array in the union.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
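A self-contained sketch of the indexing idea; the real CPUTLBEntry layout in QEMU differs in detail, so the union below is an illustrative assumption:

```
#include <stdint.h>

typedef enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_INST_FETCH } MMUAccessType;

/* Sketch: the three comparators alias an array, so an access type can
 * index them directly, replacing per-case offsetof() arithmetic. */
typedef union {
    struct {
        uint64_t addr_read;
        uint64_t addr_write;
        uint64_t addr_code;
    };
    uint64_t addr_idx[3];    /* indexed by MMUAccessType */
} TLBEntrySketch;

static inline uint64_t tlb_read_idx(const TLBEntrySketch *e,
                                    MMUAccessType type)
{
    return e->addr_idx[type];
}
```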
2023-05-11  accel/tcg: Add cpu_in_serial_context  (Richard Henderson)

Like cpu_in_exclusive_context, but also true if there is no other cpu against which we could race.

Use it in tb_flush as a direct replacement. Use it in cpu_loop_exit_atomic to ensure that there is no loop against cpu_exec_step_atomic.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
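A plausible shape for the helper, inferred from the description above; the exact condition in QEMU may differ:

```
/* Sketch: true when no other vcpu can race with us -- either we hold
 * exclusive execution, or this cpu's TBs were not compiled for
 * parallel execution in the first place. */
static bool cpu_in_serial_context_sketch(CPUState *cs)
{
    return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
```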
2023-05-11  accel/tcg/tcg-accel-ops-rr: ensure fairness with icount  (Jamie Iles)

The round-robin scheduler will iterate over the CPU list with an assigned budget until the next timer expiry and may exit early because of a TB exit. This is fine under normal operation but with icount enabled and SMP it is possible for a CPU to be starved of run time and the system live-locks.

For example, booting a riscv64 platform with '-icount shift=0,align=off,sleep=on -smp 2' we observe a livelock once the kernel has timers enabled and starts performing TLB shootdowns. In this case we have CPU 0 in M-mode with interrupts disabled sending an IPI to CPU 1. As we enter the TCG loop, we assign the icount budget up to the next timer interrupt to CPU 0 and begin executing. The guest sits in a busy loop, exhausting all of the budget before we try to execute CPU 1, which is the target of the IPI but is left with no budget with which to execute, and the process repeats.

We try here to add some fairness by splitting the budget across all of the CPUs on the thread fairly before entering each one. The CPU count is cached on the CPU list generation ID to avoid iterating the list on each loop iteration. With this change it is possible to boot an SMP rv64 guest with icount enabled and no hangs.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230427020925.51003-3-quic_jiles@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
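A hedged sketch of the fairness fix: divide the budget up to the next timer deadline evenly across the CPUs on the round-robin thread. Function and variable names here are illustrative, not QEMU's:

```
#include <stdint.h>

/* Each CPU gets an equal share of the instructions remaining before
 * the next timer deadline; never hand out a zero budget, or a CPU
 * could be starved exactly as described above. */
static int64_t rr_per_cpu_budget(int64_t deadline_budget,
                                 unsigned int cpu_count)
{
    int64_t share = deadline_budget / (cpu_count ? cpu_count : 1);

    return share > 0 ? share : 1;
}
```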
2023-05-11  accel/tcg: Fix atomic_mmu_lookup for reads  (Richard Henderson)

A copy-paste bug had us looking at the victim cache for writes.

Cc: qemu-stable@nongnu.org
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 08dff435e2 ("tcg: Probe the proper permissions for atomic ops")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230505204049.352469-1-richard.henderson@linaro.org>
2023-05-08  tb-maint: do not use mb_read/mb_set  (Paolo Bonzini)

The load side can use a relaxed load, which will surely happen before the work item is run by async_safe_run_on_cpu() or before double-checking under mmap_lock. The store side can use an atomic RMW operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
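A sketch of the pattern, using QEMU's qatomic helpers; the flag below is an illustrative stand-in for the tb-maint field:

```
#include "qemu/atomic.h"

static int tb_flush_pending_sketch;   /* illustrative stand-in flag */

static void maint_store_side(void)
{
    /* Store side: an atomic read-modify-write replaces mb_set(). */
    (void)qatomic_xchg(&tb_flush_pending_sketch, 1);
}

static bool maint_load_side(void)
{
    /* Load side: a relaxed load is enough; ordering comes from
     * async_safe_run_on_cpu() or from re-checking under mmap_lock. */
    return qatomic_read(&tb_flush_pending_sketch);
}
```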
2023-05-05  tcg: Widen helper_*_st[bw]_mmu val arguments  (Richard Henderson)

While the old type was correct in the ideal sense, some ABIs require the argument to be zero-extended. Using uint32_t for all such values is a decent compromise.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-02  accel/tcg: Add cpu_ld*_code_mmu  (Richard Henderson)

At least RISC-V has the need to be able to perform a read using execute permissions, outside of translation. Add helpers to facilitate this.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Tested-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230325105429.1142530-9-richard.henderson@linaro.org>
Message-Id: <20230412114333.118895-9-richard.henderson@linaro.org>
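A hedged usage sketch; the argument order is assumed to follow QEMU's cpu_ld*_mmu family, so verify against include/exec/cpu_ldst.h before relying on it:

```
/* Sketch: fetch a 32-bit value with execute (code-fetch) permission,
 * outside of translation. 'oi' packs the MemOp and the mmu index. */
static uint32_t fetch_insn_word_sketch(CPUArchState *env, vaddr addr,
                                       int mmu_idx, uintptr_t retaddr)
{
    MemOpIdx oi = make_memop_idx(MO_TEUL, mmu_idx);

    return cpu_ldl_code_mmu(env, addr, oi, retaddr);
}
```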
2023-05-02  tcg: Add tcg_gen_gvec_andcs  (Nazar Kazakov)

Add tcg expander and helper functions for and-complement of a vector with a scalar operand.

Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Message-Id: <20230428144757.57530-10-lawrence.hunter@codethink.co.uk>
[rth: Split out of larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
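A hedged usage sketch of the new expander; the parameter order is assumed to match the other tcg_gen_gvec_*s scalar helpers, which is worth checking against tcg/tcg-op-gvec.h:

```
/* d = a & ~scalar for each 64-bit element; dofs/aofs are offsets into
 * the cpu env's vector register file, oprsz/maxsz the usual gvec
 * operation and register sizes. */
tcg_gen_gvec_andcs(MO_64, dofs, aofs, scalar, oprsz, maxsz);
```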
2023-05-02  accel/tcg: Uncache the host address for instruction fetch when tlb size < 1  (Weiwei Li)

When a PMP entry overlaps part of the page, we'll set the tlb_size to 1, which will make the address in the tlb entry be set with TLB_INVALID_MASK, so that the next access will again go through tlb_fill. However, this way does not work in tb_gen_code() => get_page_addr_code_hostp(): the TLB host address will be cached, and the following instructions can use this host address directly, which may lead to the bypass of PMP-related checks.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1542

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230422130329.23555-6-liweiwei@iscas.ac.cn>
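A hedged sketch of the fix's idea, as a guard inside a code-fetch path; the flag and field names follow QEMU but the surrounding control flow is illustrative:

```
/* When the TLB entry is only valid for a single access
 * (TLB_INVALID_MASK set because, e.g., a PMP region covers only part
 * of the page), do not hand back a cached host pointer for code
 * fetch -- force the slow path through tlb_fill instead. */
if (unlikely(entry->addr_code & TLB_INVALID_MASK)) {
    if (hostp) {
        *hostp = NULL;          /* uncache: refetch via tlb_fill */
    }
    return -1;                  /* illustrative "no direct host address" */
}
```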
2023-05-02  accel/tcg: Report one-insn-per-tb in 'info jit', not 'info status'  (Peter Maydell)

Currently we report whether the TCG accelerator is in 'one-insn-per-tb' mode in the 'info status' output. This is a pretty minor piece of TCG-specific information, and we want to deprecate the 'singlestep' field of the associated QMP command. Move the 'one-insn-per-tb' reporting to 'info jit'.

We don't need a deprecate-and-drop period for this because the HMP interface has no stability guarantees.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230417164041.684562-8-peter.maydell@linaro.org
2023-05-02  accel/tcg: Use one_insn_per_tb global instead of old singlestep global  (Peter Maydell)

The only place left that looks at the old 'singlestep' global variable is the TCG curr_cflags() function. Replace the old global with a new 'one_insn_per_tb' which is defined in tcg-all.c and declared in accel/tcg/internal.h. This keeps it restricted to the TCG code, unlike 'singlestep' which was available to every file in the system and defined in multiple different places for softmmu vs linux-user vs bsd-user.

While we're making this change, use qatomic_read() and qatomic_set() on the accesses to the new global, because TCG will read it without holding a lock.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230417164041.684562-4-peter.maydell@linaro.org
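A hedged sketch of the unlocked access pattern the commit describes; the exact cflags bits set by curr_cflags() are illustrative:

```
/* In tcg-all.c per the commit; file placement and bodies are a sketch. */
static bool one_insn_per_tb_sketch;

static void property_setter_side(bool value)
{
    qatomic_set(&one_insn_per_tb_sketch, value);
}

static uint32_t curr_cflags_side(uint32_t cflags)
{
    /* TCG reads the flag without holding any lock. */
    if (qatomic_read(&one_insn_per_tb_sketch)) {
        cflags |= 1;    /* request a single insn per TB (illustrative) */
    }
    return cflags;
}
```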
2023-05-02  make one-insn-per-tb an accel option  (Peter Maydell)

This commit adds 'one-insn-per-tb' as a property on the TCG accelerator object, so you can enable it with:

    -accel tcg,one-insn-per-tb=on

It has the same behaviour as the existing '-singlestep' command line option. We use a different name because 'singlestep' has always been a confusing choice, because it doesn't have anything to do with single-stepping the CPU. What it does do is force TCG emulation to put one guest instruction in each TB, which can be useful in some situations (such as analysing debug logs).

The existing '-singlestep' commandline options are decoupled from the global 'singlestep' variable and instead now are syntactic sugar for setting the accel property. (These can then go away after a deprecation period.)

The global variable remains for the moment as:
 * what the TCG code looks at to change its behaviour
 * what HMP and QMP use to query and set the behaviour

In the following commits we'll clean those up to not directly look at the global variable.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230417164041.684562-2-peter.maydell@linaro.org
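A hedged sketch of registering such a property on the accelerator class, reusing the flag from the sketch above; the getter and setter bodies are illustrative, while object_class_property_add_bool() is the standard QOM entry point for a bool property:

```
static bool one_insn_per_tb_sketch;     /* as in the previous sketch */

static bool tcg_get_one_insn_per_tb(Object *obj, Error **errp)
{
    return qatomic_read(&one_insn_per_tb_sketch);
}

static void tcg_set_one_insn_per_tb(Object *obj, bool value, Error **errp)
{
    qatomic_set(&one_insn_per_tb_sketch, value);
}

static void tcg_accel_class_init_sketch(ObjectClass *oc, void *data)
{
    /* Makes "-accel tcg,one-insn-per-tb=on" parse and apply. */
    object_class_property_add_bool(oc, "one-insn-per-tb",
                                   tcg_get_one_insn_per_tb,
                                   tcg_set_one_insn_per_tb);
}
```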
2023-04-04  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Peter Maydell)

Fix race condition that can cause a crash at startup.

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  kvm: dirty-ring: Fix race with vcpu creation

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-04-04  kvm: dirty-ring: Fix race with vcpu creation  (Peter Xu)

It's possible that we want to reap a dirty ring on a vcpu that is still being created, because the vcpu is put onto the list (CPU_FOREACH visible) before initialization of the structures. In this case:

    qemu_init_vcpu
        x86_cpu_realizefn
            cpu_exec_realizefn
                cpu_list_add                <---- can be probed by CPU_FOREACH
        qemu_init_vcpu
            cpus_accel->create_vcpu_thread(cpu);
                kvm_init_vcpu
                    map kvm_dirty_gfns      <---- kvm_dirty_gfns valid

Don't try to reap the dirty ring on vcpus during creation or it'll crash.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2124756
Reported-by: Xiaohui Li <xiaohli@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1d14deb6684bcb7de1c9633c5bd21113988cc698.1676563222.git.huangy81@chinatelecom.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
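A hedged sketch of the guard; kvm_dirty_gfns is the per-vcpu mapping of the ring and cpu->created flags a fully initialized vcpu, while the reap helper name is illustrative:

```
CPUState *cpu;

CPU_FOREACH(cpu) {
    if (!cpu->created || !cpu->kvm_dirty_gfns) {
        continue;   /* vcpu visible on the list but not ready yet */
    }
    kvm_dirty_ring_reap_one_sketch(s, cpu);
}
```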
2023-04-04  accel/tcg: Fix jump cache set in cpu_exec_loop  (Richard Henderson)

Assign pc and use store_release to assign tb.

Fixes: 2dd5b7a1b91 ("accel/tcg: Move jmp-cache `CF_PCREL` checks to caller")
Reported-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-04  accel/tcg: Fix overwrite problems of tcg_cflags  (Weiwei Li)

CPUs often set CF_PCREL in tcg_cflags before qemu_init_vcpu(), after which tcg_cflags would be overwritten by tcg_cpu_init_cflags().

Fixes: 4be790263ffc ("accel/tcg: Replace `TARGET_TB_PCREL` with `CF_PCREL`")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Message-Id: <20230331150609.114401-6-liweiwei@iscas.ac.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
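A hedged sketch of the fix: OR the computed flags into tcg_cflags instead of assigning over them, so bits like CF_PCREL set by the target before qemu_init_vcpu() survive. The exact flags computed are illustrative:

```
static void tcg_cpu_init_cflags_sketch(CPUState *cpu, bool parallel)
{
    uint32_t cflags = cpu->cluster_index << CF_CLUSTER_SHIFT;

    cflags |= parallel ? CF_PARALLEL : 0;
    cflags |= icount_enabled() ? CF_USE_ICOUNT : 0;

    /* OR into, rather than assign over, any pre-set bits. */
    cpu->tcg_cflags |= cflags;   /* was: cpu->tcg_cflags = cflags; */
}
```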
2023-03-28  accel/tcg: Pass last not end to tb_invalidate_phys_range  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
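This entry and the five that follow share the same rationale; a tiny self-contained example of why an inclusive 'last' avoids the overflow that an exclusive 'end' hits on the final page, illustrated with a 32-bit address space:

```
#include <stdint.h>
#include <assert.h>

static void last_vs_end_example(void)
{
    /* Touching the very last 4 KiB page of a 32-bit address space: */
    uint32_t start = 0xfffff000u;
    uint32_t len   = 0x1000u;

    uint32_t end  = start + len;       /* wraps to 0: "addr < end" fails */
    uint32_t last = start + len - 1;   /* 0xffffffff: "addr <= last" works */

    assert(end == 0);
    assert(last == UINT32_MAX);
}
```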
2023-03-28  accel/tcg: Pass last not end to tb_invalidate_phys_page_range__locked  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Properly truncate tb_last to the end of the page; the comment about tb_end being past the end of the page being ok is not correct, considering overflow.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-28  accel/tcg: Pass last not end to page_collection_lock  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Fixes a bug in the loop comparison where "<= end" would lock one more page than required.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-28  accel/tcg: Pass last not end to PAGE_FOR_EACH_TB  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-28  accel/tcg: Pass last not end to page_reset_target_data  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-28  accel/tcg: Pass last not end to page_set_flags  (Richard Henderson)

Pass the address of the last byte to be changed, rather than the first address past the last byte. This avoids overflow when the last page of the address space is involved.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1528

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-28  tcg: use QTree instead of GTree  (Emilio Cota)

qemu-user can hang in a multi-threaded fork. One common reason is that when creating a TB, between fork and exec we manipulate a GTree whose memory allocator (GSlice) is not fork-safe.

Although POSIX does not mandate it, the system's allocator (e.g. tcmalloc, libc malloc) is probably fork-safe.

Fix some of these hangs by using QTree, which uses the system's allocator regardless of the Glib version that we used at configuration time.

Tested with the test program in the original bug report, i.e.:

```
/* Includes added here so the reproducer compiles standalone. */
#include <cstdlib>
#include <thread>
#include <sys/wait.h>
#include <unistd.h>

void garble() {
  int pid = fork();
  if (pid == 0) {
    exit(0);
  } else {
    int wstatus;
    waitpid(pid, &wstatus, 0);
  }
}

void supragarble(unsigned depth) {
  if (depth == 0)
    return;

  std::thread a(supragarble, depth - 1);
  std::thread b(supragarble, depth - 1);
  garble();
  a.join();
  b.join();
}

int main() {
  supragarble(10);
}
```

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/285
Reported-by: Valentin David <me@valentindavid.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Emilio Cota <cota@braap.org>
Message-Id: <20230205163758.416992-3-cota@braap.org>
[rth: Add QEMU_DISABLE_CFI for all callback using functions.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
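A hedged usage sketch, assuming QTree mirrors the GTree entry points it replaces, as the commit implies; verify the names against include/qemu/qtree.h:

```
#include "qemu/qtree.h"

/* Sketch: QTree keeps GTree's balanced-tree semantics but allocates
 * through the system allocator, which is fork-safe in practice. */
static gint ptr_cmp_sketch(gconstpointer a, gconstpointer b)
{
    return a < b ? -1 : (a > b ? 1 : 0);
}

static void qtree_example(void *key, void *value)
{
    QTree *t = q_tree_new(ptr_cmp_sketch);

    q_tree_insert(t, key, value);
    void *found = q_tree_lookup(t, key);
    (void)found;
    q_tree_destroy(t);
}
```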
2023-03-23  accel/xen: Fix DM state change notification in dm_restrict mode  (David Woodhouse)

When dm_restrict is set, QEMU isn't permitted to update the XenStore node to indicate its running status. Previously, the xs_write() call would fail but the failure was ignored.

However, in refactoring to allow for emulated XenStore operations, a new call to xs_open() was added. That one didn't fail gracefully, causing a fatal error when running in dm_restrict mode.

Partially revert the offending patch, removing the additional call to xs_open() because the global 'xenstore' variable is still available; it just needs to be used with qemu_xen_xs_write() now instead of directly with the xs_write() libxenstore function.

Also make the whole thing conditional on !xen_domid_restrict. There's no point even registering the state change handler to attempt to update the XenStore node when we know it's destined to fail.

Fixes: ba2a92db1ff6 ("hw/xen: Add xenstore operations to allow redirection to internal emulation")
Reported-by: Jason Andryuk <jandryuk@gmail.com>
Co-developed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Message-Id: <1f141995bb61af32c2867ef5559e253f39b0949c.camel@infradead.org>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
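A hedged sketch of the conditional registration described above; the handler name is illustrative, and xen_domid_restrict is the existing flag the commit refers to:

```
if (!xen_domid_restrict) {
    /* No point registering: the XenStore write would be denied. */
    qemu_add_vm_change_state_handler(xen_state_change_sketch, NULL);
}
```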
2023-03-22  *: Add missing includes of qemu/plugin.h  (Richard Henderson)

This had been pulled in from hw/core/cpu.h, but that will be removed.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230310195252.210956-6-richard.henderson@linaro.org>
[AJB: also syscall-trace.h]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230315174331.2959-16-alex.bennee@linaro.org>
Reviewed-by: Emilio Cota <cota@braap.org>