path: root/target/i386/tcg/sysemu
Age  Commit message  Author
2024-03-27  target/i386/tcg: Enable page walking from MMIO memory  (Gregory Price)

CXL emulation of interleave requires read and write hooks due to the
requirement for subpage granularity. The Linux kernel stack now enables
using this memory as conventional memory in a separate NUMA node. If a
process is deliberately forced to run from that node

  $ numactl --membind=1 ls

the page table walk on i386 fails.

Useful part of backtrace:

  (cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p") at ../../cpu-target.c:359
  (retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>) at ../../accel/tcg/cputlb.c:1339
  (cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
  (cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
  (cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
  at ../../accel/tcg/ldst_common.c.inc:301
  at ../../target/i386/tcg/sysemu/excp_helper.c:173
  (err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0) at ../../target/i386/tcg/sysemu/excp_helper.c:578
  (cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604

Avoid this by plumbing the address all the way down from
x86_cpu_tlb_fill(), where it is available as retaddr, to the actual
accessors, which provide it to probe_access_full(), which already
handles MMIO accesses.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 9dab7bbb017d11b64c52239fa4e2f910a6a004f2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
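A minimal sketch of the plumbing idea, with simplified structure and
helper names (not the actual patch): the walker context carries the
retaddr received by x86_cpu_tlb_fill() down to the page-table loads, so
MMIO-backed page tables can be read without cpu_io_recompile().

    /* Illustrative only: field and helper names are simplified. */
    typedef struct PTETranslate {
        CPUX86State *env;
        hwaddr gaddr;      /* address of the PTE being read */
        int ptw_idx;       /* MMU_PHYS_IDX or MMU_NESTED_IDX */
        uintptr_t ra;      /* retaddr forwarded from x86_cpu_tlb_fill() */
    } PTETranslate;

    static inline uint64_t ptw_ldq(const PTETranslate *in)
    {
        /* Passing in->ra (instead of 0) lets the softmmu load helper
         * handle an MMIO-backed page table entry. */
        return cpu_ldq_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, in->ra);
    }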
2024-03-21  target/i386: use separate MMU indexes for 32-bit accesses  (Paolo Bonzini)

Accesses from a 32-bit environment (32-bit code segment for instruction
accesses, EFER.LMA==0 for processor accesses) have to mask away the
upper 32 bits of the address. While a bit wasteful, the easiest way to
do so is to use separate MMU indexes; these days QEMU is compiled with
a fixed value for NB_MMU_MODES anyway.

Split MMU_USER_IDX, MMU_KSMAP_IDX and MMU_KNOSMAP_IDX in two.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 90f641531c782c873a05895f411c05fbbbef3c49)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing
 v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")
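A hedged sketch of how the split can be used; the index names and their
ordering below are assumptions, not the real enum:

    /* Hypothetical: each existing index gains a 32-bit twin, and the
     * truncation is keyed off the index rather than off CPU state. */
    static inline bool mmu_index_is_32bit(int mmu_idx)
    {
        return mmu_idx >= MMU_USER32_IDX;   /* assumes 32-bit twins sort last */
    }

    static vaddr mmu_clean_addr(vaddr addr, int mmu_idx)
    {
        /* 32-bit code segments / EFER.LMA==0 wrap at 4 GiB. */
        return mmu_index_is_32bit(mmu_idx) ? (uint32_t)addr : addr;
    }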
2024-03-21  target/i386: introduce function to query MMU indices  (Paolo Bonzini)

Remove knowledge of specific MMU indexes (other than MMU_NESTED_IDX and
MMU_PHYS_IDX) from mmu_translate(). This will make it possible to split
32-bit and 64-bit MMU indexes.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5f97afe2543f09160a8d123ab6e2e8c6d98fa9ce)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: context fixup in target/i386/cpu.h due to other changes in that area)
2024-02-28  target/i386: leave the A20 bit set in the final NPT walk  (Paolo Bonzini)

The A20 mask is only applied to the final memory access. Nested page
tables are always walked with the raw guest-physical address.

Unlike the previous patch, in this one the masking must be kept, but it
was done too early.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b5a9de3259f4c791bde2faff086dd5737625e41e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-02-28  target/i386: remove unnecessary/wrong application of the A20 mask  (Paolo Bonzini)

If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already
applied in get_physical_address(), which is called via
probe_access_full() and x86_cpu_tlb_fill().

If ptw_translate() on the other hand does a MMU_NESTED_IDX access, the
A20 mask must not be applied to the address that is looked up in the
nested page tables; it must be applied only to the addresses that hold
the NPT entries (which is achieved via MMU_PHYS_IDX, per the previous
paragraph).

Therefore, we can remove A20 masking from the computation of the page
table entry's address, and let get_physical_address() or
mmu_translate() apply it when they know they are returning a
host-physical address.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a28fe7dc1939333c81b895cdced81c69eb7c5ad0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
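A minimal sketch of where the mask ends up being applied, assuming the
variable and helper names below (illustrative, not the exact diff):

    /* At the end of mmu_translate(), once the walk has produced a
     * host-physical address, apply the A20 wrap exactly once. */
    out->paddr = (pte & PG_ADDRESS_MASK & ~(page_size - 1))
                 | (addr & (page_size - 1));
    out->paddr &= x86_get_a20_mask(env);
    out->page_size = page_size;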
2024-02-28  target/i386: Fix physical address truncation  (Paolo Bonzini)

The address translation logic in get_physical_address() will currently
truncate physical addresses to 32 bits unless long mode is enabled.
This is incorrect when using physical address extensions (PAE) outside
of long mode, with the result that a 32-bit operating system using PAE
to access memory above 4G will experience undefined behaviour.

The truncation code was originally introduced in commit 33dfdb5 ("x86:
only allow real mode to access 32bit without LMA"), where it applied
only to translations performed while paging is disabled (and so cannot
affect guests using PAE).

Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
rearranged the code such that the truncation also applied to the use of
MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use
atomic operations for pte updates") brought this truncation into scope
for page table entry accesses, and is the first commit for which a
Windows 10 32-bit guest will reliably fail to boot if memory above 4G
is present.

The truncation code however is not completely redundant. Even though
the maximum address size for any executed instruction is 32 bits,
helpers for operations such as BOUND, FSAVE or XSAVE may ask
get_physical_address() to translate an address outside of the 32-bit
range, if invoked with an argument that is close to the 4G boundary.
Likewise for processor accesses, for example TSS or IDT accesses, when
EFER.LMA==0.

So, move the address truncation in get_physical_address() so that it
applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and
MMU_NESTED_IDX.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: drop unrelated change in target/i386/cpu.c)
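A sketch of the relocated check, assuming the simplified control flow
below (the real get_physical_address() has more cases):

    switch (mmu_idx) {
    case MMU_PHYS_IDX:
    case MMU_NESTED_IDX:
        /* Physical / nested-guest addresses: never truncate here. */
        break;
    default:
        if (!(env->hflags & HF_LMA_MASK)) {
            /* 32-bit MMU indexes: addresses wrap at 4 GiB. */
            addr = (uint32_t)addr;
        }
        break;
    }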
2024-02-28  target/i386: check validity of VMCB addresses  (Paolo Bonzini)

MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the
maximum physical address width of the processor. Setting them to 1
causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual).

The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE, even
though the manual is not clear on that.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d09c79010ffd880dc69e7a21e3cfdef90b928fb8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
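A hedged sketch of such a check; the helper name and the exact
exception plumbing are assumptions, not the actual patch:

    static void svm_check_block_addr(CPUX86State *env, uint64_t addr,
                                     uintptr_t ra)
    {
        /* Bits 0-11 and the bits above the CPU's physical address
         * width are reserved; any of them being set yields #GP(0). */
        uint64_t reserved = 0xfffULL |
            ~((1ULL << env_archcpu(env)->phys_bits) - 1);

        if (addr & reserved) {
            raise_exception_err_ra(env, EXCP0D_GPF, 0, ra);
        }
    }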
2024-02-28  target/i386: mask high bits of CR3 in 32-bit mode  (Paolo Bonzini)

CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level paging
or PAE paging). Do this in mmu_translate() to remove the last case
where get_physical_address() meaningfully drops the high bits of the
address.

Cc: qemu-stable@nongnu.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 68fb78d7d5723066ec2cacee7d25d67a4143b42f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
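A minimal sketch of the masking, assuming mmu_translate() reads CR3
into a local before starting the walk (illustrative):

    uint64_t cr3 = env->cr[3];

    if (!(pg_mode & PG_MODE_LMA)) {
        /* Outside long mode, CR3 bits 63:32 are ignored. */
        cr3 = (uint32_t)cr3;
    }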
2023-10-04  accel/tcg: Replace CPUState.env_ptr with cpu_env()  (Richard Henderson)

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-26  target/i386/svm_helper: eliminate duplicate local variable  (Paolo Bonzini)

This shadows an outer "cs" variable that is initialized to the same
expression.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-09-07  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)

* only build util/async-teardown.c when system build is requested
* target/i386: fix BQL handling of the legacy FERR interrupts
* target/i386: fix memory operand size for CVTPS2PD
* target/i386: Add support for AMX-COMPLEX in CPUID enumeration
* compile plugins on Darwin
* configure and meson cleanups
* drop mkvenv support for Python 3.7 and Debian10
* add wrap file for libblkio
* tweak KVM stubs

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmT5t6UUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroMmjwf+MpvVuq+nn+3PqGUXgnzJx5ccA5ne
# O9Xy8+1GdlQPzBw/tPovxXDSKn3HQtBfxObn2CCE1tu/4uHWpBA1Vksn++NHdUf2
# P0yoHxGskJu5iYYTtIcNw5cH2i+AizdiXuEjhfNjqD5Y234cFoHnUApt9e3zBvVO
# cwGD7WpPuSb4g38hHkV6nKcx72o7b4ejDToqUVZJ2N+RkddSqB03fSdrOru0hR7x
# V+lay0DYdFszNDFm05LJzfDbcrHuSryGA91wtty7Fzj6QhR/HBHQCUZJxMB5PI7F
# Zy4Zdpu60zxtSxUqeKgIi7UhNFgMcax2Hf9QEqdc/B4ARoBbboh4q4u8kQ==
# =dH7/
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 07 Sep 2023 07:44:37 EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (51 commits)
  docs/system/replay: do not show removed command line option
  subprojects: add wrap file for libblkio
  sysemu/kvm: Restrict kvm_pc_setup_irq_routing() to x86 targets
  sysemu/kvm: Restrict kvm_has_pit_state2() to x86 targets
  sysemu/kvm: Restrict kvm_get_apic_state() to x86 targets
  sysemu/kvm: Restrict kvm_arch_get_supported_cpuid/msr() to x86 targets
  target/i386: Restrict declarations specific to CONFIG_KVM
  target/i386: Allow elision of kvm_hv_vpindex_settable()
  target/i386: Allow elision of kvm_enable_x2apic()
  target/i386: Remove unused KVM stubs
  target/i386/cpu-sysemu: Inline kvm_apic_in_kernel()
  target/i386/helper: Restrict KVM declarations to system emulation
  hw/i386/fw_cfg: Include missing 'cpu.h' header
  hw/i386/pc: Include missing 'cpu.h' header
  hw/i386/pc: Include missing 'sysemu/tcg.h' header
  Revert "mkvenv: work around broken pip installations on Debian 10"
  mkvenv: assume presence of importlib.metadata
  Python: Drop support for Python 3.7
  configure: remove dead code
  meson: list leftover CONFIG_* symbols
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-01  target/i386: raise FERR interrupt with iothread locked  (Paolo Bonzini)

Otherwise tcg_handle_interrupt() triggers an assertion failure:

  #5  0x0000555555c97369 in tcg_handle_interrupt (cpu=0x555557434cb0, mask=2) at ../accel/tcg/tcg-accel-ops.c:83
  #6  tcg_handle_interrupt (cpu=0x555557434cb0, mask=2) at ../accel/tcg/tcg-accel-ops.c:81
  #7  0x0000555555b4d58b in pic_irq_request (opaque=<optimized out>, irq=<optimized out>, level=1) at ../hw/i386/x86.c:555
  #8  0x0000555555b4f218 in gsi_handler (opaque=0x5555579423d0, n=13, level=1) at ../hw/i386/x86.c:611
  #9  0x00007fffa42bde14 in code_gen_buffer ()
  #10 0x0000555555c724bb in cpu_tb_exec (cpu=cpu@entry=0x555557434cb0, itb=<optimized out>, tb_exit=tb_exit@entry=0x7fffe9bfd658) at ../accel/tcg/cpu-exec.c:457

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1808
Reported-by: NyanCatTW1 <https://gitlab.com/a0939712328>
Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-08-31  target/translate: Include missing 'exec/cpu_ldst.h' header  (Philippe Mathieu-Daudé)

All these files access the CPU LD/ST API declared in "exec/cpu_ldst.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230828221314.18435-4-philmd@linaro.org>
2023-06-26  target/i386: implement SYSCALL/SYSRET in 32-bit emulators  (Paolo Bonzini)

AMD supports both 32-bit and 64-bit SYSCALL/SYSRET, but TCG only
exposes them for 64-bit targets. For system emulation just reuse the
helper; for user-mode emulation the ABI is the same as "int $80".

The BSDs do not support any fast system call mechanism in 32-bit mode,
so add to bsd-user the same stub that FreeBSD has for 64-bit
compatibility mode.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-06-20  meson: Replace softmmu_ss -> system_ss  (Philippe Mathieu-Daudé)

We use the user_ss[] array to hold the user emulation sources, and the
softmmu_ss[] array to hold the system emulation ones. Hold the latter
in the 'system_ss[]' array for parity with user emulation.

Mechanical change doing:

  $ sed -i -e s/softmmu_ss/system_ss/g $(git grep -l softmmu_ss)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20  meson: Replace CONFIG_SOFTMMU -> CONFIG_SYSTEM_ONLY  (Philippe Mathieu-Daudé)

Since we *might* have user emulation with softmmu, use the clearer
'CONFIG_SYSTEM_ONLY' key to check for system emulation.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-20  target/i386: Avoid unreachable variable declaration in mmu_translate()  (Peter Maydell)

Coverity complains (CID 1507880) that the declaration "int error_code;"
in mmu_translate() is unreachable code. Since this is only a
declaration, this isn't actually a bug, but:

* it's a bear-trap for future changes, because if it was changed to
  include an initialization 'int error_code = foo;' then the
  initialization wouldn't actually happen (being dead code)
* it's against our coding style, which wants declarations to be at the
  start of blocks
* it means that anybody reading the code has to go and look up exactly
  what the C rules are for skipping over variable declarations using a
  goto

Move the declaration to the top of the function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230406155946.3362077-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-28  accel/tcg: Add 'size' param to probe_access_full  (Richard Henderson)

Change to match the recent change to probe_access_flags. All existing
callers updated to supply 0, so no change in behaviour.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-01  target/i386: Always completely initialize TranslateFault  (Richard Henderson)

In get_physical_address, the canonical address check failed to set
TranslateFault.stage2, which resulted in an uninitialized read from the
struct when reporting the fault in x86_cpu_tlb_fill.

Adjust all error paths to use structure assignment so that the entire
struct is always initialized.

Reported-by: Daniel Hoffman <dhoff749@gmail.com>
Fixes: 9bbcf372193a ("target/i386: Reorg GET_HPHYS")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221201074522.178498-1-richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1324
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
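A sketch of the structure-assignment style, assuming field names
similar to the ones in excp_helper.c (check_canonical() is a
hypothetical helper used only for illustration):

    if (!check_canonical(addr, pg_mode)) {
        *err = (TranslateFault) {
            .exception_index = EXCP0D_GPF,
            .error_code = 0,
            .cr2 = addr,
            /* Every member not named here, including .stage2, is
             * zero-initialized, so nothing reaches x86_cpu_tlb_fill()
             * uninitialized. */
        };
        return false;
    }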
2022-11-03  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)

* bug fixes
* reduced memory footprint for IPI virtualization on Intel processors
* asynchronous teardown support (Linux only)

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNiVykUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroN0Swf/YxjphCtFgYYSO14WP+7jAnfRZLhm
# 0xWChWP8rco5I352OBFeFU64Av5XoLGNn6SZLl8lcg86lQ/G0D27jxu6wOcDDHgw
# 0yTDO1gevj51UKsbxoC66OWSZwKTEo398/BHPDcI2W41yOFycSdtrPgspOrFRVvf
# 7M3nNjuNPsQorZeuu8NGr3jakqbt99ZDXcyDEWbrEAcmy2JBRMbGgT0Kdnc6aZfW
# CvL+1ljxzldNwGeNBbQW2QgODbfHx5cFZcy4Daze35l5Ra7K/FrgAzr6o/HXptya
# 9fEs5LJQ1JWI6JtpaWwFy7fcIIOsJ0YW/hWWQZSDt9JdAJFE5/+vF+Kz5Q==
# =CgrO
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 02 Nov 2022 07:40:25 EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386: Fix test for paging enabled
  util/log: Close per-thread log file on thread termination
  target/i386: Set maximum APIC ID to KVM prior to vCPU creation
  os-posix: asynchronous teardown for shutdown on Linux
  target/i386: Fix calculation of LOCK NEG eflags

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-11-02  target/i386: Fix test for paging enabled  (Richard Henderson)

If CR0.PG is unset, pg_mode will be zero, but it will also be zero for
non-PAE/non-PSE page tables with CR0.WP=0. Restore the correct test for
paging enabled.

Fixes: 98281984a37 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1269
Reported-by: Andreas Gustafsson <gson@gson.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221102091232.1092552-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-11-01  accel/tcg: Remove will_exit argument from cpu_restore_state  (Richard Henderson)

The value passed is always true, and if the target's
synchronize_from_tb hook is non-trivial, not exiting may be erroneous.

Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-18  target/i386: Use probe_access_full for final stage2 translation  (Richard Henderson)

Rather than recurse directly on mmu_translate, go through the same
softmmu lookup that we did for the page table walk. This centralizes
all knowledge of MMU_NESTED_IDX, with respect to setup of
TranslationParams, to get_physical_address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use atomic operations for pte updates  (Richard Henderson)

Use probe_access_full in order to resolve to a host address, which then
lets us use a host cmpxchg to update the pte.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/279
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
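A minimal sketch of the accessed/dirty-bit update pattern, assuming
pte_haddr is the host pointer obtained via probe_access_full and a
restart_walk label exists; names are illustrative, and plain C11
atomics stand in for QEMU's own qatomic_cmpxchg wrapper:

    uint64_t old = pte;
    uint64_t new = pte | PG_ACCESSED_MASK | (is_write ? PG_DIRTY_MASK : 0);

    if (old != new &&
        !__atomic_compare_exchange_n((uint64_t *)pte_haddr, &old, new,
                                     false, __ATOMIC_SEQ_CST,
                                     __ATOMIC_SEQ_CST)) {
        /* The PTE changed under us (another vCPU or DMA): redo the walk. */
        goto restart_walk;
    }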
2022-10-18  target/i386: Combine 5 sets of variables in mmu_translate  (Richard Henderson)

We don't need one variable set per translation level, which requires
copying into pte/pte_addr for huge pages. Standardize on pte/pte_addr
for all levels.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use MMU_NESTED_IDX for vmload/vmsave  (Richard Henderson)

Use MMU_NESTED_IDX for each memory access, rather than just a single
translation to physical. Adjust svm_save_seg and svm_load_seg to pass
in mmu_idx.

This removes the last use of get_hphys so remove it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX  (Richard Henderson)

These new mmu indexes will be helpful for improving paging and code
throughout the target.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Reorg GET_HPHYS  (Richard Henderson)

Replace with PTE_HPHYS for the page table walk, and a direct call to
mmu_translate for the final stage2 translation. Hoist the check for
HF2_NPT_MASK out to get_physical_address, which avoids the recursive
call when stage2 is disabled.

We can now return all the way out to x86_cpu_tlb_fill before raising an
exception, which means probe works.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Introduce structures for mmu_translate  (Richard Henderson)

Create TranslateParams for inputs, TranslateResults for successful
outputs, and TranslateFault for error outputs; return true on success.

Move stage1 error paths from handle_mmu_fault to x86_cpu_tlb_fill;
reorg the rest of handle_mmu_fault into get_physical_address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Direct call get_hphys from mmu_translate  (Richard Henderson)

Use a boolean to control the call to get_hphys instead of passing a
null function pointer.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use MMUAccessType across excp_helper.c  (Richard Henderson)

Replace int is_write1 and magic numbers with the proper MMUAccessType
access_type and enumerators.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  x86: Implement MSR_CORE_THREAD_COUNT MSR  (Alexander Graf)

Intel CPUs starting with Haswell-E implement a new MSR called
MSR_CORE_THREAD_COUNT which exposes the number of threads and cores
inside of a package.

This MSR is used by XNU to populate internal data structures, and not
implementing it prevents virtual machines with more than 1 vCPU from
booting if the emulated CPU generation is at least Haswell-E.

This patch propagates the existing hvf logic from patch 027ac0cb516
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to TCG.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-2-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
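A hedged sketch of what a TCG rdmsr case for MSR 0x35 can look like;
the bit layout (core count in bits 31:16, thread count in bits 15:0)
and the topology fields used are assumptions based on the hvf
precedent cited above:

    case MSR_CORE_THREAD_COUNT: {
        CPUState *cs = env_cpu(env);

        /* bits 15:0  = total threads in the package,
         * bits 31:16 = total cores in the package */
        val = (cs->nr_threads * cs->nr_cores) |
              ((uint32_t)cs->nr_cores << 16);
        break;
    }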
2022-09-18  target/i386: Raise #GP on unaligned m128 accesses when required.  (Paolo Bonzini)

Many instructions which load/store 128-bit values are supposed to raise
#GP when the memory operand isn't 16-byte aligned. This includes:

- Instructions explicitly requiring memory alignment (Exceptions Type 1
  in the "AVX and SSE Instruction Exception Specification" section of
  the SDM)
- Legacy SSE instructions that load/store 128-bit values (Exceptions
  Types 2 and 4).

This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and
cpu_do_unaligned_access hooks that simulate a #GP exception in
qemu-user and qemu-system, respectively.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
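A hedged sketch of the system-emulation side of such a hook; the exact
signature registered with the accelerator ops is an assumption here:

    /* Misaligned 16-byte SSE operands architecturally raise #GP(0),
     * so the unaligned-access hook reports that instead of SIGBUS. */
    G_NORETURN void x86_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
                                                MMUAccessType access_type,
                                                int mmu_idx, uintptr_t ra)
    {
        X86CPU *cpu = X86_CPU(cs);

        raise_exception_err_ra(&cpu->env, EXCP0D_GPF, 0, ra);
    }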
2022-06-06  target/i386/tcg: Fix masking of real-mode addresses with A20 bit  (Stephen Michael Jothen)

The correct A20 masking is done if paging is enabled (protected mode)
but it seems to have been forgotten in real mode. For example from the
AMD64 APM Vol. 2 section 1.2.4:

> If the sum of the segment base and effective address carries over into
> bit 20, that bit can be optionally truncated to mimic the 20-bit
> address wrapping of the 8086 processor by using the A20M# input signal
> to mask the A20 address bit.

Most BIOSes will enable the A20 line on boot, but I found that after
disabling the A20 line, the correct wrapping wasn't taking place.

`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be
the culprit. In real mode, it fills the TLB with the raw unmasked
address. However, for the protected mode, the `mmu_translate' function
does the correct A20 masking.

The fix then should be to just apply the A20 mask in the first branch
of the if statement.

Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
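A sketch of the shape of the fix in the no-paging branch of
handle_mmu_fault, assuming the variable names below (illustrative, not
the literal diff):

    if (!(env->cr[0] & CR0_PG_MASK)) {
        /* No paging: linear address == physical address, but the A20
         * wrap-around must still be honoured. */
        pte = addr & env->a20_mask;      /* previously: pte = addr; */
        prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
        page_size = 4096;
    }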
2022-04-21  compiler.h: replace QEMU_NORETURN with G_NORETURN  (Marc-André Lureau)

G_NORETURN was introduced in glib 2.68; fall back to G_GNUC_NORETURN in
glib-compat.

Note that this attribute must be placed before the function declaration
(bringing a bit of consistency in qemu codebase usage).

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Message-Id: <20220420132624.2439741-20-marcandre.lureau@redhat.com>
2022-03-15  target/i386: Throw a #SS when loading a non-canonical IST  (Gareth Webb)

Loading a non-canonical address into rsp when handling an interrupt or
performing a far call should raise a #SS, not a #GP.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/870
Signed-off-by: Gareth Webb <gareth.webb@umbralsoftware.co.uk>
Message-Id: <164529651121.25406.15337137068584246397-0@git.sr.ht>
[Move get_pg_mode to seg_helper.c for user-mode emulators. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-15  target/i386: only include bits in pg_mode if they are not ignored  (Paolo Bonzini)

LA57/PKE/PKS is only relevant in 64-bit mode, and NXE is only relevant
if PAE is in use. Since there is code that checks PG_MODE_LA57 to
determine the canonicality of addresses, make sure that the bit is not
set by mistake in 32-bit mode.

While it would not be a problem because 32-bit addresses by definition
fit in both 48-bit and 57-bit address spaces, it is nicer if
get_pg_mode() actually returns whether a feature is enabled, and it
allows a few simplifications in the page table walker.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-06  target/i386/tcg/sysemu: Include missing 'exec/exec-all.h' header  (Philippe Mathieu-Daudé)

excp_helper.c requires "exec/exec-all.h" for tlb_set_page_with_attrs()
and misc_helper.c for tlb_flush().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-8-f4bug@amsat.org>
2022-02-21  exec/exec-all: Move 'qemu/log.h' include in units requiring it  (Philippe Mathieu-Daudé)

Many files use "qemu/log.h" declarations but neglect to include it
(they inherit it via "exec/exec-all.h"). "exec/exec-all.h" is a core
component and shouldn't be used that way. Move the "qemu/log.h"
inclusion locally to each unit requiring it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220207082756.82600-10-f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-02-09  target/i386: use CPU_LOG_INT for IRQ servicing  (Alex Bennée)

I think these have been wrong since f193c7979c (do not depend on
thunk.h - more log items). Fix them so as not to confuse other
debugging.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204204335.1689602-26-alex.bennee@linaro.org>
2021-11-08  target-i386: mmu: fix handling of noncanonical virtual addresses  (Paolo Bonzini)

mmu_translate is supposed to return an error code for page faults; it
is not able to handle other exceptions. The #GP case for noncanonical
virtual addresses is not handled correctly, and incorrectly raised as a
page fault with error code 1. Since it cannot happen for nested page
tables, move it directly to handle_mmu_fault, even before the
invocation of mmu_translate.

Fixes: #676
Fixes: 661ff4879e ("target/i386: extract mmu_translate", 2021-05-11)
Cc: qemu-stable@nongnu.org
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
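A sketch of the canonicality test involved, with an illustrative helper
name: a 64-bit linear address is canonical when bits 63 down to the top
implemented bit are a sign-extension of that bit (47 for 4-level
paging, 56 with LA57).

    static bool linear_addr_is_canonical(uint64_t addr, int pg_mode)
    {
        int shift = 64 - ((pg_mode & PG_MODE_LA57) ? 57 : 48);

        /* Shift the address up so the top implemented bit becomes the
         * sign bit, then arithmetic-shift back; canonical addresses
         * survive the round trip unchanged. */
        return ((int64_t)(addr << shift) >> shift) == (int64_t)addr;
    }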
2021-11-08  target-i386: mmu: use pg_mode instead of HF_LMA_MASK  (Paolo Bonzini)

Correctly look up the paging mode of the hypervisor when it is using
64-bit mode but the guest is not.

Fixes: 68746930ae ("target/i386: use mmu_translate for NPT walk", 2021-05-11)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-14  target/i386: Move x86_cpu_exec_interrupt() under sysemu/ folder  (Philippe Mathieu-Daudé)

Following the logic of commit 30493a030ff ("i386: split seg_helper into
user-only and sysemu parts"), move x86_cpu_exec_interrupt() under
sysemu/seg_helper.c.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-By: Warner Losh <imp@bsdimp.com>
Message-Id: <20210911165434.531552-12-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-13  target/i386: Added vVMLOAD and vVMSAVE feature  (Lara Lazier)

The feature allows the VMSAVE and VMLOAD instructions to execute in
guest mode without causing a VMEXIT. (APM2 15.33.1)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: Added changed priority check for VIRQ  (Lara Lazier)

Writes to cr8 affect v_tpr. This could set or unset an interrupt
request, as the priority might have changed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: Added ignore TPR check in ctl_has_irq  (Lara Lazier)

The APM2 states that if V_IGN_TPR is nonzero, the current virtual
interrupt ignores the (virtual) TPR.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: Added VGIF V_IRQ masking capability  (Lara Lazier)

VGIF provides masking capability for when virtual interrupts are
taken. (APM2)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: Moved int_ctl into CPUX86State structure  (Lara Lazier)

Moved int_ctl into the CPUX86State structure. It removes some
unnecessary stores and loads, and prepares for tracking the vIRQ state
even when it is masked due to vGIF.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: Added VGIF feature  (Lara Lazier)

VGIF allows STGI and CLGI to execute in guest mode and control virtual
interrupts in guest mode.

When the VGIF feature is enabled then:
* executing STGI in the guest sets bit 9 of the VMCB offset 60h.
* executing CLGI in the guest clears bit 9 of the VMCB offset 60h.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210730070742.9674-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13  target/i386: VMRUN and VMLOAD canonicalizations  (Lara Lazier)

APM2 requires that VMRUN and VMLOAD canonicalize (sign extend to 63
from 48/57) all base addresses in the segment registers that have been
respectively loaded.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210804113058.45186-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
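A minimal sketch of that sign extension, with an illustrative helper
name; the virtual-address width (48 or 57 bits) would be chosen from
the active paging mode:

    static uint64_t svm_canonicalize(uint64_t addr, int va_bits)
    {
        int shift = 64 - va_bits;

        /* Replicate the top implemented bit into bits 63:va_bits. */
        return (uint64_t)(((int64_t)addr << shift) >> shift);
    }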