path: root/accel/tcg
Age  Commit message  Author
2024-04-09  accel/tcg: Improve can_do_io management  (Richard Henderson)
We already attempted to set and clear can_do_io before the first and last insns, but only used the initial value of max_insns and the call to translator_io_start to find those insns. Now that we track insn_start in DisasContextBase, and now that we have emit_before_op, we can wait until we have finished translation to identify the true first and last insns and emit the sets of can_do_io at that time. This fixes the case of a translation block which crossed a page boundary, and for which the second page turned out to be mmio. In this case we truncate the block, and the previous logic for can_do_io could leave a block with a single insn with can_do_io set to false, which would fail an assertion in cpu_io_recompile. Reported-by: Jørgen Hansen <Jorgen.Hansen@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Tested-by: Jørgen Hansen <Jorgen.Hansen@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-04-09  accel/tcg: Add insn_start to DisasContextBase  (Richard Henderson)
This is currently target-specific for many targets; begin making it target independent. Tested-by: Jørgen Hansen <Jorgen.Hansen@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-04-02  accel/tcg/plugin: Remove CONFIG_SOFTMMU_GATE definition  (Philippe Mathieu-Daudé)
The CONFIG_SOFTMMU_GATE definition was never used, remove it. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240313213339.82071-2-philmd@linaro.org>
2024-03-29  accel/tcg: Use CPUState.get_pc in cpu_io_recompile  (Richard Henderson)
Using log_pc produces the pc at the beginning of TB, not the actual pc installed by cpu_restore_state_from_tb, which could be any of the guest instructions within TB. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-03-12  bulk: Call in place single use cpu_env()  (Philippe Mathieu-Daudé)
Avoid CPUArchState local variable when cpu_env() is used once.

Mechanical patch using the following Coccinelle spatch script:

    @@
    type CPUArchState;
    identifier env;
    expression cs;
    @@
    {
    -   CPUArchState *env = cpu_env(cs);
        ... when != env
    -   env
    +   cpu_env(cs)
        ... when != env
    }

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240129164514.73104-5-philmd@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com>
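As a rough C illustration of what that script does (the helper name below is made up for the example, not taken from the patch):

    /* Before: a single-use CPUArchState local. */
    static void example_cb_before(CPUState *cs)
    {
        CPUArchState *env = cpu_env(cs);
        do_something_with(env);             /* hypothetical single use of env */
    }

    /* After: the local is dropped and cpu_env() is called in place. */
    static void example_cb_after(CPUState *cs)
    {
        do_something_with(cpu_env(cs));
    }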
2024-03-06  plugins: cleanup codepath for previous inline operation  (Pierrick Bouvier)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Message-Id: <20240304130036.124418-13-pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20240305121005.3528075-26-alex.bennee@linaro.org>
2024-03-06  plugins: add inline operation per vcpu  (Pierrick Bouvier)
Extends the API with three new functions: qemu_plugin_register_vcpu_{tb, insn, mem}_exec_inline_per_vcpu(). These functions take a qemu_plugin_u64 as input. This allows a thread-safe and type-safe version of inline operations. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Message-Id: <20240304130036.124418-5-pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20240305121005.3528075-18-alex.bennee@linaro.org>
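A sketch of how a plugin might use the per-vCPU variant, assuming the scoreboard helpers from the same series provide the qemu_plugin_u64 handle (the surrounding plugin is omitted and the call sites are illustrative):

    #include <qemu-plugin.h>

    static struct qemu_plugin_scoreboard *counts;  /* one uint64_t slot per vCPU */

    static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
    {
        qemu_plugin_u64 insn_count = qemu_plugin_scoreboard_u64(counts);

        /* Inline add of this TB's insn count to the current vCPU's slot:
         * no callback, and no races between vCPUs. */
        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
            tb, QEMU_PLUGIN_INLINE_ADD_U64, insn_count,
            qemu_plugin_tb_n_insns(tb));
    }

where counts would be created at plugin install time with qemu_plugin_scoreboard_new(sizeof(uint64_t)).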
2024-03-06  plugins: implement inline operation relative to cpu_index  (Pierrick Bouvier)
Instead of working on a fixed memory location, allow addressing it based on cpu_index, an element size and a given offset. Result address: ptr + offset + cpu_index * element_size. With this, we can target a member in a struct array from a base pointer. The current semantics are not modified, so inline operations still always target the same memory location. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Message-Id: <20240304130036.124418-4-pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20240305121005.3528075-17-alex.bennee@linaro.org>
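The address computation described above, written out as a small helper (illustrative only, not code from the patch):

    #include <stddef.h>

    /* Per-vCPU slot address: base pointer, fixed offset, cpu_index stride. */
    static inline void *inline_op_addr(void *ptr, size_t offset,
                                       unsigned int cpu_index,
                                       size_t element_size)
    {
        return (char *)ptr + offset + (size_t)cpu_index * element_size;
    }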
2024-03-05  accel/tcg: Add TLB_CHECK_ALIGNED  (Richard Henderson)
This creates a per-page method for checking alignment. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240301204110.656742-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-03-05  accel/tcg: Add tlb_fill_flags to CPUTLBEntryFull  (Richard Henderson)
Allow the target to set tlb flags to apply to all of the comparators. Remove MemTxAttrs.byte_swap, as the bit is not relevant to memory transactions, only the page mapping. Adjust target/sparc to set TLB_BSWAP directly. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240301204110.656742-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
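A minimal sketch of what a target's TLB fill code might do now that CPUTLBEntryFull carries tlb_fill_flags (the surrounding function and the condition are illustrative, not the actual target/sparc change):

    /* Request that all accesses to this page be byte-swapped by the TLB. */
    static void fill_one_page(CPUTLBEntryFull *full, bool invert_endian)
    {
        if (invert_endian) {
            full->tlb_fill_flags |= TLB_BSWAP;
        }
    }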
2024-02-29  accel/tcg: Disconnect TargetPageDataNode from page size  (Richard Henderson)
Dynamically size the node for the runtime target page size. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Helge Deller <deller@gmx.de> Message-Id: <20240102015808.132373-29-richard.henderson@linaro.org>
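A hedged sketch of the idea: size the per-page data node with a flexible array whose element size is only known at runtime, instead of baking a compile-time page size into the type (the struct layout below is illustrative, not the real TargetPageDataNode):

    typedef struct ExamplePageDataNode {
        long first_page;
        char data[];                  /* pages * bytes_per_page, runtime sized */
    } ExamplePageDataNode;

    static ExamplePageDataNode *page_data_node_new(size_t pages,
                                                   size_t bytes_per_page)
    {
        /* g_malloc0 so freshly allocated page data reads as zero */
        return g_malloc0(sizeof(ExamplePageDataNode) + pages * bytes_per_page);
    }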
2024-02-29  cpu: Remove page_size_init  (Richard Henderson)
Move qemu_host_page_{size,mask} and HOST_PAGE_ALIGN into bsd-user. It should be removed from bsd-user as well, but defer that cleanup. Reviewed-by: Warner Losh <imp@bsdimp.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Helge Deller <deller@gmx.de> Message-Id: <20240102015808.132373-28-richard.henderson@linaro.org>
2024-02-29  accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect  (Richard Henderson)
Use qemu_real_host_page_size instead. Except for the final mprotect within page_protect, we already handled host < target page size. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Helge Deller <deller@gmx.de> Message-Id: <20240102015808.132373-2-richard.henderson@linaro.org>
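A rough sketch of the resulting pattern in page_protect-style code: align the range to the real host page size before calling mprotect (illustrative helper, not the actual patch):

    #include <stdint.h>
    #include <sys/mman.h>

    static int protect_range(void *start, size_t len, int prot)
    {
        uintptr_t pagesize = qemu_real_host_page_size();
        uintptr_t begin = (uintptr_t)start & -pagesize;
        uintptr_t end = ((uintptr_t)start + len + pagesize - 1) & -pagesize;

        return mprotect((void *)begin, end - begin, prot);
    }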
2024-02-29  tcg: Avoid double lock if page tables happen to be in mmio memory  (Jonathan Cameron)
On i386, after fixing the page walking code to work with pages in MMIO memory (specifically CXL emulated interleaved memory), a crash was seen in an interrupt handling path.

Useful part of the backtrace:

    7  0x0000555555ab1929 in bql_lock_impl (file=0x555556049122 "../../accel/tcg/cputlb.c", line=2033) at ../../system/cpus.c:524
    8  bql_lock_impl (file=file@entry=0x555556049122 "../../accel/tcg/cputlb.c", line=line@entry=2033) at ../../system/cpus.c:520
    9  0x0000555555c9f7d6 in do_ld_mmio_beN (cpu=0x5555578e0cb0, full=0x7ffe88012950, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2033
    10 0x0000555555ca0fbd in do_ld_8 (cpu=cpu@entry=0x5555578e0cb0, p=p@entry=0x7ffff4efd1d0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
    11 0x0000555555ca341f in do_ld8_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=19595792376, oi=oi@entry=52, ra=0, ra@entry=52, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
    12 0x0000555555ca5f59 in cpu_ldq_mmu (ra=52, oi=52, addr=19595792376, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:169
    13 cpu_ldq_le_mmuidx_ra (env=0x5555578e3470, addr=19595792376, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:301
    14 0x0000555555b4b5fc in ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:98
    15 ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:93
    16 mmu_translate (env=env@entry=0x5555578e3470, in=0x7ffff4efd3e0, out=0x7ffff4efd3b0, err=err@entry=0x7ffff4efd3c0, ra=ra@entry=0) at ../../target/i386/tcg/sysemu/excp_helper.c:174
    17 0x0000555555b4c4b3 in get_physical_address (ra=0, err=0x7ffff4efd3c0, out=0x7ffff4efd3b0, mmu_idx=0, access_type=MMU_DATA_LOAD, addr=18446741874686299840, env=0x5555578e3470) at ../../target/i386/tcg/sysemu/excp_helper.c:580
    18 x86_cpu_tlb_fill (cs=0x5555578e0cb0, addr=18446741874686299840, size=<optimized out>, access_type=MMU_DATA_LOAD, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:606
    19 0x0000555555ca0ee9 in tlb_fill (retaddr=0, mmu_idx=0, access_type=MMU_DATA_LOAD, size=<optimized out>, addr=18446741874686299840, cpu=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1315
    20 mmu_lookup1 (cpu=cpu@entry=0x5555578e0cb0, data=data@entry=0x7ffff4efd540, mmu_idx=0, access_type=access_type@entry=MMU_DATA_LOAD, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:1713
    21 0x0000555555ca2c61 in mmu_lookup (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, type=type@entry=MMU_DATA_LOAD, l=l@entry=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1803
    22 0x0000555555ca3165 in do_ld4_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2416
    23 0x0000555555ca5ef9 in cpu_ldl_mmu (ra=0, oi=32, addr=18446741874686299840, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:158
    24 cpu_ldl_le_mmuidx_ra (env=env@entry=0x5555578e3470, addr=addr@entry=18446741874686299840, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:294
    25 0x0000555555bb6cdd in do_interrupt64 (is_hw=1, next_eip=18446744072399775809, error_code=0, is_int=0, intno=236, env=0x5555578e3470) at ../../target/i386/tcg/seg_helper.c:889
    26 do_interrupt_all (cpu=cpu@entry=0x5555578e0cb0, intno=236, is_int=is_int@entry=0, error_code=error_code@entry=0, next_eip=next_eip@entry=0, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1130
    27 0x0000555555bb87da in do_interrupt_x86_hardirq (env=env@entry=0x5555578e3470, intno=<optimized out>, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1162
    28 0x0000555555b5039c in x86_cpu_exec_interrupt (cs=0x5555578e0cb0, interrupt_request=<optimized out>) at ../../target/i386/tcg/sysemu/seg_helper.c:197
    29 0x0000555555c94480 in cpu_handle_interrupt (last_tb=<synthetic pointer>, cpu=0x5555578e0cb0) at ../../accel/tcg/cpu-exec.c:844

Peter identified this as being due to the BQL already being held when the page table walker encounters MMIO memory and attempts to take the lock again. There are other examples of similar paths in TCG, so this follows the approach taken in those: check whether the lock is already held and, if it is, don't take it again. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-Id: <20240219173153.12114-4-Jonathan.Cameron@huawei.com> [rth: Use BQL_LOCK_GUARD] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
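A sketch of the locking pattern the fix relies on, using the bql_locked()/bql_lock() API named elsewhere in this log (the MMIO access itself is a placeholder, not the real cputlb.c code):

    static uint64_t mmio_read_under_bql(void)
    {
        bool need_lock = !bql_locked();
        uint64_t val;

        if (need_lock) {
            bql_lock();
        }
        val = do_mmio_access();          /* hypothetical access needing the BQL */
        if (need_lock) {
            bql_unlock();
        }
        return val;
    }

The committed fix uses BQL_LOCK_GUARD() as noted above; the sketch only spells out the check-before-lock idea.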
2024-02-29  accel/tcg: Set can_do_io at start of lookup_tb_ptr helper  (Peter Maydell)
If a page table is in IO memory and lookup_tb_ptr probes the TLB, it can result in a page table walk for the instruction fetch. If this hits IO memory, io_prepare falsely assumes it needs to do a TLB recompile. Avoid that by setting can_do_io at the start of lookup_tb_ptr. Link: https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-Id: <20240219173153.12114-2-Jonathan.Cameron@huawei.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-28  plugins: create CPUPluginState and migrate plugin_mask  (Alex Bennée)
As we expand the per-vCPU data for plugins we don't want to pollute CPUState. For now this just moves the plugin_mask (renamed to event_mask) as the memory callbacks are accessed directly by TCG generated code. Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20240227144335.1196131-23-alex.bennee@linaro.org>
2024-02-28  plugins: Use different helpers when reading registers  (Akihiko Odaki)
This avoids optimizations that are incompatible with reading registers. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Message-Id: <20231213-gdb-v17-12-777047380591@daynix.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20240227144335.1196131-21-alex.bennee@linaro.org>
2024-02-20  accel/tcg: correct typos  (Manos Pitsidianakis)
Correct typos automatically found with the `typos` tool <https://crates.io/crates/typos> Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-02-03  include/exec: Change cpu_mmu_index argument to CPUState  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  target/i386: Extract x86_cpu_exec_halt() from accel/tcg/  (Philippe Mathieu-Daudé)
Move this x86-specific code out of the generic accel/tcg/. Reported-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240124101639.30056-10-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Introduce TCGCPUOps::cpu_exec_halt() handler  (Philippe Mathieu-Daudé)
In order to make accel/tcg/ target agnostic, introduce the cpu_exec_halt() handler. Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240124101639.30056-9-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Inline need_replay_interrupt  (Richard Henderson)
The function is now trivial, and with inlining we can re-use the calling function's tcg_ops variable. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  target/i386: Extract x86_need_replay_interrupt() from accel/tcg/  (Philippe Mathieu-Daudé)
Move this x86-specific code out of the generic accel/tcg/. Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240124101639.30056-8-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Introduce TCGCPUOps::need_replay_interrupt() handler  (Philippe Mathieu-Daudé)
In order to make accel/tcg/ target agnostic, introduce the need_replay_interrupt() handler. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru> Message-Id: <20240124101639.30056-7-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Use CPUState.cc instead of CPU_GET_CLASS in cpu-exec.c  (Richard Henderson)
CPU_GET_CLASS does runtime type checking; use the cached copy of the class instead. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Un-inline icount_exit_request() for clarity  (Philippe Mathieu-Daudé)
Convert packed logic to dumb icount_exit_request() helper. No functional change intended. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Message-Id: <20240124101639.30056-5-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Rename tcg_cpus_exec() -> tcg_cpu_exec()  (Philippe Mathieu-Daudé)
tcg_cpus_exec() operates on a single vCPU, rename it as 'tcg_cpu_exec'. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Message-Id: <20240124101639.30056-4-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Rename tcg_cpus_destroy() -> tcg_cpu_destroy()  (Philippe Mathieu-Daudé)
tcg_cpus_destroy() operates on a single vCPU, rename it as 'tcg_cpu_destroy'. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240124101639.30056-3-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Rename tcg_ss[] -> tcg_specific_ss[] in meson  (Philippe Mathieu-Daudé)
tcg_ss[] source set contains target-specific units. Rename it as 'tcg_specific_ss[]' for clarity. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Message-Id: <20240124101639.30056-2-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Move perf and debuginfo support to tcg/  (Ilya Leoshkevich)
tcg/ should not depend on accel/tcg/, but perf and debuginfo support provided by the latter are being used by tcg/tcg.c. Since that's the only user, move both to tcg/. Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231212003837.64090-5-iii@linux.ibm.com> Message-Id: <20240125054631.78867-5-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Remove #ifdef TARGET_I386 from perf.c  (Ilya Leoshkevich)
Preparation for moving perf.c to tcg/. This affects only the profiling of guest code that runs in a non-zero-based segment, e.g. 16-bit code, which is not particularly important. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231212003837.64090-4-iii@linux.ibm.com> Message-Id: <20240125054631.78867-4-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg: Make use of qemu_target_page_mask() in perf.c  (Ilya Leoshkevich)
Stop using TARGET_PAGE_MASK in order to make perf.c more target-agnostic. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20231212003837.64090-2-iii@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240125054631.78867-2-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  accel/tcg/cpu-exec: Use RCU_READ_LOCK_GUARD  (Philippe Mathieu-Daudé)
Replace the manual rcu_read_(un)lock calls in cpu_exec(). Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240124074201.8239-2-philmd@linaro.org> [rth: Use RCU_READ_LOCK_GUARD not WITH_RCU_READ_LOCK_GUARD] Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29  cpu-exec: simplify jump cache management  (Paolo Bonzini)
Unless I'm missing something egregious, the jmp cache is only ever populated with a valid entry by the same thread that reads the cache. Therefore, the contents of any valid entry are always consistent and there is no need for any acquire/release magic. Indeed ->tb has to be accessed with atomics, because concurrent invalidations would otherwise cause data races. But ->pc is only ever accessed by one thread, and accesses to ->tb and ->pc within tb_lookup can never race with another tb_lookup. While the TranslationBlock (especially the flags) could be modified by a concurrent invalidation, store-release and load-acquire operations on the cache entry would not add any additional ordering beyond what you get from performing the accesses within a single thread. Because of this, there is really nothing to gain by splitting the CF_PCREL and !CF_PCREL paths. It is easier to just always use the ->pc field in the jump cache. I noticed this while working on splitting commit 8ed558ec0cb ("accel/tcg: Introduce TARGET_TB_PCREL", 2022-10-04) into multiple pieces, for the sake of finding a more fine-grained bisection result for https://gitlab.com/qemu-project/qemu/-/issues/2092. It does not (and does not intend to) fix that issue; therefore it may make sense to not commit it until the root cause of issue #2092 is found. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240122153409.351959-1-pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
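A small sketch of the single-reader lookup this describes: ->tb is still read atomically because invalidation can clear it concurrently, while ->pc needs no extra ordering (entry layout and names are illustrative, not the actual CPUJumpCache):

    typedef struct JCEntryExample {
        TranslationBlock *tb;
        vaddr pc;
    } JCEntryExample;

    static TranslationBlock *jc_lookup_example(JCEntryExample *e, vaddr pc)
    {
        TranslationBlock *tb = qatomic_read(&e->tb);

        /* Only this vCPU thread ever wrote e->pc, so a plain read is fine. */
        if (tb && e->pc == pc) {
            return tb;    /* caller still re-validates flags etc. */
        }
        return NULL;
    }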
2024-01-19  system/watchpoint: Move TCG specific code to accel/tcg/  (Philippe Mathieu-Daudé)
Keep system/watchpoint.c accelerator-agnostic by moving TCG specific code to accel/tcg/watchpoint.c. Update meson. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240111162032.43378-1-philmd@linaro.org>
2024-01-19  util/async: Only call icount_notify_exit() if icount is enabled  (Philippe Mathieu-Daudé)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20231208113529.74067-6-philmd@linaro.org>
2024-01-19  system/cpu-timers: Introduce ICountMode enumerator  (Philippe Mathieu-Daudé)
Rather than having to look up what the 0, 1, 2, ... icount values mean, use an enum definition. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231208113529.74067-4-philmd@linaro.org>
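Roughly the shape of such an enumerator (the member names here are assumptions, not necessarily the ones in the patch):

    typedef enum {
        ICOUNT_DISABLED = 0,   /* icount not in use */
        ICOUNT_PRECISE,        /* fixed shift, deterministic execution */
        ICOUNT_ADAPTATIVE,     /* "auto": shift adapts to wall-clock time */
    } ICountMode;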
2024-01-19  system/cpu-timers: Have icount_configure() return a boolean  (Philippe Mathieu-Daudé)
Following the example documented since commit e3fe3988d7 ("error: Document Error API usage rules"), have icount_configure() return a boolean indicating whether an error is set or not. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20231208113529.74067-2-philmd@linaro.org>
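The calling convention this refers to, sketched with an illustrative failure condition (not the function's real logic):

    bool icount_configure_example(QemuOpts *opts, Error **errp)
    {
        if (!opts) {                                 /* illustrative error case */
            error_setg(errp, "icount: no options given");
            return false;
        }
        /* ... parse and apply the icount configuration ... */
        return true;                                 /* success: errp untouched */
    }

Callers can then test the return value directly instead of checking a local Error pointer.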
2024-01-19  accel/tcg: Remove tb_invalidate_phys_page() from system emulation  (Philippe Mathieu-Daudé)
Since the previous commit, tb_invalidate_phys_page() is no longer used in system emulation. Make it static for user emulation and remove its public declaration in "exec/translate-all.h". Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20231130205600.35727-1-philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-09  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging  (Peter Maydell)
Pull request

    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmWcJMUACgkQnKSrs4Gr
    # c8hh/Qf/Wt177UlhBR49OWmmegs8c8yS1mhyawo7YIJM4pqoXCYLaACpcKECXcGU
    # rlgyR4ow68EXnnU8+/s2cp2UqHxrla+E2eNqBoTDmkNt3Cko5sJn5G5PM5EYK+mO
    # JjFRzn7awRyxD6mGOuaMVoj6OuHbAA/U4JF7FhW0YuRl8v0/mvAxRSfQ4U6Crq/y
    # 19Aa1CXHD1GH2CUJsMCY8zT47Dr4DJcvZx5IpcDFaHaYDCkktFwNzdo5IDnCx2M2
    # xnP37Qp/Q93cu12lWkVOu8HCT6yhoszahyOqlBxDmo7QeGkskrxGbMyE+vHM3fFI
    # aGSxiw193U7/QWu+Cq2/727C3YIq1g==
    # =pKUb
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Mon 08 Jan 2024 16:37:25 GMT
    # gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
    # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
    # gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
    # Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8

    * tag 'block-pull-request' of https://gitlab.com/stefanha/qemu:
      Rename "QEMU global mutex" to "BQL" in comments and docs
      Replace "iothread lock" with "BQL" in comments
      qemu/main-loop: rename qemu_cond_wait_iothread() to qemu_cond_wait_bql()
      qemu/main-loop: rename QEMU_IOTHREAD_LOCK_GUARD to BQL_LOCK_GUARD
      system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
      iothread: Remove unused Error** argument in aio_context_set_aio_params

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-08Replace "iothread lock" with "BQL" in commentsStefan Hajnoczi
The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL) in their names. Update the code comments to use "BQL" instead of "iothread lock". Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Message-id: 20240102153529.486531-5-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  qemu/main-loop: rename qemu_cond_wait_iothread() to qemu_cond_wait_bql()  (Stefan Hajnoczi)
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL) instead, it is already widely used and unambiguous. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-4-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()  (Stefan Hajnoczi)
The Big QEMU Lock (BQL) has many names and they are confusing. The actual QemuMutex variable is called qemu_global_mutex but it's commonly referred to as the BQL in discussions and some code comments. The locking APIs, however, are called qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread(). The "iothread" name is historic and comes from when the main thread was split into KVM vcpu threads and the "iothread" (now called the main loop thread). I have contributed to the confusion myself by introducing a separate --object iothread, a separate concept unrelated to the BQL. The "iothread" name is no longer appropriate for the BQL. Rename the locking APIs to:

- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches will rename them. There are also comments and documentation that will be updated in later patches. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Fabiano Rosas <farosas@suse.de> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Cédric Le Goater <clg@kaod.org> Acked-by: Peter Xu <peterx@redhat.com> Acked-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Acked-by: Hyman Huang <yong.huang@smartx.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-2-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  replay: stop us hanging in rr_wait_io_event  (Alex Bennée)
A lot of the hangs I see are when we end up spinning in rr_wait_io_event for an event that will never come in playback. Add a new check function which can see if we are in PLAY mode and kick us out of the wait function so the event can be processed. This fixes most of the failures in replay_kernel.py. Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2013 Cc: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20231211091346.14616-12-alex.bennee@linaro.org>
2023-12-31  configure, meson: rename targetos to host_os  (Paolo Bonzini)
This variable is about the host OS, not the target. It is used a lot more since the Meson conversion, but the original sin dates back to 2003. Time to fix it. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-31  meson: remove OS definitions from config_targetos  (Paolo Bonzini)
CONFIG_DARWIN, CONFIG_LINUX and CONFIG_BSD are used in some rules, but only CONFIG_LINUX has substantial use. Convert them all to if...endif. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-11-14  accel/tcg: Forward probe size on to notdirty_write  (Jessica Clarke)
Without this, we just dirty a single byte, and so if the caller writes more than one byte to the host memory then we won't have invalidated any translation blocks that start after the first byte and overlap those writes. In particular, AArch64's DC ZVA implementation uses probe_access (via probe_write), and so we don't invalidate the entire block, only the TB overlapping the first byte (and, in the unusual case an unaligned VA is given to the instruction, we also probe that specific address in order to get the right VA reported on an exception, so will invalidate a TB overlapping that address too). Since our IC IVAU implementation is a no-op for system emulation that relies on the softmmu already having detected self-modifying code via this mechanism, this means we have observably wrong behaviour when jumping to code that has been DC ZVA'ed. In practice this is an unusual thing for software to do, as in reality the OS will DC ZVA the page and the application will go and write actual instructions to it that aren't UDF #0, but you can write a test that clearly shows the faulty behaviour. For functions other than probe_access it's not clear what size to use when 0 is passed in. Arguably a size of 0 shouldn't dirty at all, since if you want to actually write then you should pass in a real size, but I have conservatively kept the implementation as dirtying the first byte in that case so as to avoid breaking any assumptions about that behaviour. Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com> Message-Id: <20231104031232.3246614-1-jrtc27@jrtc27.com> [rth: Move the dirtysize computation next to notdirty_write.] Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
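A sketch of the point being made: a probe made for an N-byte host write should mark all N bytes dirty so every overlapping TB gets invalidated, with a zero-size probe conservatively treated as one byte (the helper below only mirrors internal cputlb.c behaviour and is not the actual patch):

    static void dirty_probed_range(CPUState *cpu, vaddr addr, int size,
                                   CPUTLBEntryFull *full, uintptr_t ra)
    {
        int dirtysize = size == 0 ? 1 : size;   /* conservative for size == 0 */

        notdirty_write(cpu, addr, dirtysize, full, ra);  /* invalidates TBs */
    }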
2023-11-14  accel/tcg: Remove CF_LAST_IO  (Richard Henderson)
In cpu_exec_step_atomic, we did not set CF_LAST_IO, which led to a loop with cpu_io_recompile. But since 18a536f1f8 ("Always require can_do_io") we no longer need a flag to indicate when the last insn should have can_do_io set, so remove the flag entirely. Reported-by: Clément Chigot <chigot@adacore.com> Tested-by: Clément Chigot <chigot@adacore.com> Reviewed-by: Claudio Fontana <cfontana@suse.de> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1961 Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-11-07  accel/tcg: Factor tcg_cpu_reset_hold() out  (Philippe Mathieu-Daudé)
Factor the TCG specific code from cpu_common_reset_hold() to tcg_cpu_reset_hold() within tcg-accel-ops.c. Since this file is sysemu specific, we can inline tcg_flush_softmmu_tlb(), removing its declaration in "exec/cpu-common.h". Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230918104153.24433-4-philmd@linaro.org>
2023-11-07  accel: Introduce cpu_exec_reset_hold()  (Philippe Mathieu-Daudé)
Introduce cpu_exec_reset_hold(), which calls an accelerator-specific AccelOpsClass::cpu_reset_hold() handler. Define a stub on TCG user emulation, because CPU reset is irrelevant there. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230918104153.24433-3-philmd@linaro.org>
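A hedged sketch of how an accelerator might wire up the new hook (the handler body and the registration function are illustrative; per the entry above, the real TCG version also absorbs the old softmmu TLB flush):

    static void tcg_cpu_reset_hold_example(CPUState *cpu)
    {
        tlb_flush(cpu);              /* flush this vCPU's softmmu TLB on reset */
    }

    static void tcg_accel_ops_init_example(AccelOpsClass *ops)
    {
        ops->cpu_reset_hold = tcg_cpu_reset_hold_example;
    }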