path: root/target
2024-03-27  target/riscv: always clear vstart in whole vec move insns  (Daniel Henrique Barboza)

These insns have 2 paths: either vstart is already cleared (vstart_eq_zero), or we emit a brcond that checks vstart >= maxsz before calling the 'vmvr_v' helper. The helper clears vstart if it executes to the end, or if vstart >= vl.

For starters, the check itself is wrong: we're comparing vstart against maxsz, when in fact we should use vstart in bytes ('startb', as 'vmvr_v' calls it) for the comparison. But even after fixing the comparison we would still need to clear vstart at the end, which isn't happening either. Make the helpers responsible for managing vstart, including these corner cases:

- remove the wrong vstart >= maxsz cond from the translation;
- add a 'startb >= maxsz' cond in 'vmvr_v', and clear vstart if that happens.

This way vstart is guaranteed to be cleared at the end of execution, regardless of the path taken.

Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 7e53e3ddf6dff200098e112c5370ab16d2d5dbd1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

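As a self-contained model of the fixed control flow (illustrative C, not the actual QEMU code; the copy itself is shown as a plain memcpy, whose host-endianness wrinkle is the subject of the next entry):

    #include <stdint.h>
    #include <string.h>

    /* Model of the fixed helper: the helper owns vstart on every path.
     * 'sewb' is the element width in bytes, 'maxsz' the bytes to move. */
    static void vmvr_v_model(uint8_t *vd, const uint8_t *vs2,
                             uint32_t *vstart, uint32_t sewb, uint32_t maxsz)
    {
        uint32_t startb = *vstart * sewb;       /* vstart in bytes */

        if (startb < maxsz) {
            memcpy(vd + startb, vs2 + startb, maxsz - startb);
        }
        *vstart = 0;    /* cleared even when startb >= maxsz */
    }
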
2024-03-27  target/riscv/vector_helper.c: fix 'vmvr_v' memcpy endianness  (Daniel Henrique Barboza)

vmvr_v isn't handling the case where the host might be big endian and the bytes to be copied aren't sequential.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 768e7b329c0be22035da077fe76221dd0a47103b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

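A self-contained sketch of the issue: on a big-endian host QEMU stores vector elements with swizzled byte indices (the H*() macros), so a raw memcpy over a byte range that is not 8-byte aligned pairs up the wrong host bytes. Copying through the index macro is endian-safe (HOST_BIG_ENDIAN and the H1 definition mirror QEMU's convention; treat as illustrative):

    #include <stdint.h>

    #ifdef HOST_BIG_ENDIAN
    #define H1(x)  ((x) ^ 7)    /* byte-index swizzle within 8-byte groups */
    #else
    #define H1(x)  (x)
    #endif

    /* Endian-safe byte copy for the tail of a whole-register move. */
    static void vmvr_copy(uint8_t *vd, const uint8_t *vs2,
                          uint32_t startb, uint32_t maxsz)
    {
        for (uint32_t i = startb; i < maxsz; i++) {
            vd[H1(i)] = vs2[H1(i)];
        }
    }
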
2024-03-27  trans_rvv.c.inc: set vstart = 0 in int scalar move insns  (Daniel Henrique Barboza)

trans_vmv_x_s, trans_vmv_s_x, trans_vfmv_f_s and trans_vfmv_s_f aren't setting vstart = 0 after execution. This is usually done by a helper in vector_helper.c, but these functions don't use helpers. Set vstart after any potential 'over' brconds; that also mandates a mark_vs_dirty() call.

Fixes: dedc53cbc9 ("target/riscv: rvv-1.0: integer scalar move instructions")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 0848f7c18ef50de9f955e7eeb4363d92766a41bf)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

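The shape of the fix in each trans_* function is roughly the following, emitted after the 'over' branch (a sketch based on the names in the message, not the exact patch):

    /* after the brcond that skips the insn: completion clears vstart */
    tcg_gen_movi_tl(cpu_vstart, 0);
    mark_vs_dirty(s);   /* vstart is part of the vector state */
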
2024-03-27  target/riscv/vector_helper.c: set vstart = 0 in GEN_VEXT_VSLIDEUP_VX()  (Daniel Henrique Barboza)

The helper isn't setting env->vstart = 0 after its execution, as is expected from every vector instruction that completes successfully.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240314175704.478276-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit d3646e31ce6d1e02e46e6eabdbc2e637c0cbece7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-27  target/i386/tcg: Enable page walking from MMIO memory  (Gregory Price)

CXL emulation of interleave requires read and write hooks due to the requirement for subpage granularity. The Linux kernel stack now enables using this memory as conventional memory in a separate NUMA node. If a process is deliberately forced to run from that node

    $ numactl --membind=1 ls

the page table walk on i386 fails.

Useful part of the backtrace:

    (cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p") at ../../cpu-target.c:359
    (retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>) at ../../accel/tcg/cputlb.c:1339
    (cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
    (cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
    (cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
    at ../../accel/tcg/ldst_common.c.inc:301
    at ../../target/i386/tcg/sysemu/excp_helper.c:173
    (err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0) at ../../target/i386/tcg/sysemu/excp_helper.c:578
    (cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604

Avoid this by plumbing the address all the way down from x86_cpu_tlb_fill(), where it is available as retaddr, to the actual accessors, which provide it to probe_access_full(); that function already handles MMIO accesses.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 9dab7bbb017d11b64c52239fa4e2f910a6a004f2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-25  target/s390x: Use mutable temporary value for op_ts  (Ido Plat)

Otherwise TCG assumes the register that holds t1 is constant and reuses it whenever it needs that value.

Cc: qemu-stable@nongnu.org
Fixes: f1ea739bd598 ("target/s390x: Use tcg_constant_* in local contexts")
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[iii: Adjust a newline and capitalization, add tags]
Signed-off-by: Ido Plat <ido.plat@ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-ID: <20240318202722.20675-1-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit 272fba9779af0bb1c29cd30302fc1e31c59274d0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

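A sketch of the pattern (illustrative): give the operation a mutable copy, because a tcg_constant_* temporary must never be written to.

    TCGv_i64 t1 = tcg_temp_new_i64();          /* fresh, writable temp */
    tcg_gen_mov_i64(t1, tcg_constant_i64(0));  /* copy the constant in */
    /* ... later ops may now safely overwrite t1 ... */
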
2024-03-25  target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'  (Song Gao)

qemu-system-loongarch64 asserts with the option '-d int': helper_idle() raises the exception EXCP_HLT, but the exception name is undefined.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240321123606.1704900-1-gaosong@loongson.cn>
(cherry picked from commit 1590154ee4376819a8c6ee61e849ebf4a4e7cd02)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/loongarch: Fix qemu-loongarch64 hang when executing 'll.d $t0, $t0, 0'  (Song Gao)

In gen_ll(), if a->imm is zero, make_address_x() returns src1, but the load into the destination may clobber src1. Use a new destination temporary to fix this.

Fixes: c5af6628f4be (target/loongarch: Extract make_address_i() helper)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240320013955.1561311-1-gaosong@loongson.cn>
(cherry picked from commit 77642f92c0b71a105aba2a4d03bc62328eae703b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

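Illustrative shape of the fix (simplified; the real gen_ll() also records the lock address and value): load into a scratch temporary so the address register stays intact until it is no longer needed.

    TCGv dest = tcg_temp_new();                 /* not rd itself */
    tcg_gen_qemu_ld_tl(dest, t0, ctx->mem_idx, mop);
    /* t0/src1 are still valid here (e.g. to record the lock address) */
    gen_set_gpr(a->rd, dest, EXT_NONE);         /* commit to rd last */
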
2024-03-22  target/hppa: fix do_stdby_e()  (Sven Schnelle)

stdby,e,m was writing data from the wrong half of the register into memory for cases 0-3.

Fixes: 25460fc5a71 ("target/hppa: Implement STDBY")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-7-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 518d2f4300e5c50a3e6416fd46e58373781a5267)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: mask privilege bits in mfia  (Sven Schnelle)

mfia should return only the iaoq bits without privilege bits.

Fixes: 98a9cb792c8 ("target-hppa: Implement system and memory-management insns")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Message-Id: <20240319161921.487080-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit b5e0b3a53c983c4a9620a44a6a557b389e589218)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: exit tb on flush cache instructions  (Sven Schnelle)

When the guest modifies the TB it is currently executing from, it executes a fic instruction. Exit the TB on such instructions, otherwise we might execute stale code.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20240319161921.487080-5-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit ad1fdacd1b936557514dd72c2079a80be0c2dfb4)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: fix access_id check  (Sven Schnelle)

PA2.0 provides 8 instead of 4 PID registers.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-4-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit ae157fc25053917830c3b581bc282f906e6d95d3)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: fix shrp for wide mode  (Sven Schnelle)

Fixes: f7b775a9c075 ("target/hppa: Implement SHRPD")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Message-Id: <20240319161921.487080-3-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit d37fad0ae5bd2c544fdb0f2eff6acdb28a155be0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: ldcw,s uses static shift of 3  (Sven Schnelle)

Fixes: 96d6407f363 ("target-hppa: Implement loads and stores")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit c3ea1996a14d5dbbedb3f9036f7ebec4395dc889)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_12a insns for wide mode  (Richard Henderson)

Tested-by: Helge Deller <deller@gmx.de>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 46174e140d274385b1255bc7f16a5a711853053f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_11a insns for wide mode  (Richard Henderson)

Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 4768c28edd4097ebef42822e15b4a43026b15376)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_16 insns for wide mode  (Richard Henderson)

Reported-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 72bace2d13cb427fde3bb50ae1a71a2abe9acc0f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-21  target/i386: Revert monitor_puts() in do_inject_x86_mce()  (Tao Su)

monitor_puts() doesn't check the monitor pointer, but do_inject_x86_mce() may be passed a NULL monitor pointer. Revert the use of monitor_puts() in do_inject_x86_mce() to fix this; as a bonus, the fact that we send the same message to monitor and log is again more obvious.

Fixes: bf0c50d4aa85 (monitor: expose monitor_puts to rest of code)
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320083640.523287-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7fd226b04746f0be0b636de5097f1b42338951a0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-21  target/i386: fix direction of "32-bit MMU" test  (Paolo Bonzini)

The low bit of MMU indices for x86 TCG indicates whether the processor is in 32-bit mode, and therefore linear addresses have to be masked to 32 bits. However, the index was computed incorrectly, leading to possible conflicts in the TLB for any address above 4G.

Analyzed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: b1661801c18 ("target/i386: Fix physical address truncation", 2024-02-28)
Fixes: a28b6b4e743 ("target/i386: Fix physical address truncation" in stable-8.2)
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2206
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2cc68629a6fc198f4a972698bdd6477f883aedfb)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")

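A self-contained model of the encoding (constants illustrative, not QEMU's): the low bit of the MMU index flags a 32-bit mode, and the bug was an inverted ternary when composing the index.

    #include <stdbool.h>

    /* Even base index = 64-bit variant, odd = 32-bit variant. */
    static int compose_mmu_index(int base, bool long_mode)
    {
        return base + (long_mode ? 0 : 1);  /* the bug had the arms swapped */
    }

    static bool is_mmu_index_32(int mmu_idx)
    {
        return (mmu_idx & 1) != 0;  /* mask linear addresses to 32 bits */
    }
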
2024-03-21  target/i386: use separate MMU indexes for 32-bit accesses  (Paolo Bonzini)

Accesses from a 32-bit environment (32-bit code segment for instruction accesses, EFER.LMA==0 for processor accesses) have to mask away the upper 32 bits of the address. While a bit wasteful, the easiest way to do so is to use separate MMU indexes. These days, QEMU anyway is compiled with a fixed value for NB_MMU_MODES. Split MMU_USER_IDX, MMU_KSMAP_IDX and MMU_KNOSMAP_IDX in two.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 90f641531c782c873a05895f411c05fbbbef3c49)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")

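Illustrative layout of the split (names patterned on the message; the exact identifiers and values in target/i386/cpu.h may differ):

    enum {
        MMU_KSMAP64_IDX,    /* kernel, SMAP enforced, full linear addresses */
        MMU_KSMAP32_IDX,    /* kernel, SMAP enforced, masked to 32 bits */
        MMU_USER64_IDX,
        MMU_USER32_IDX,
        MMU_KNOSMAP64_IDX,  /* kernel, SMAP overridden (implicit accesses) */
        MMU_KNOSMAP32_IDX,
    };
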
2024-03-21  target/i386: introduce function to query MMU indices  (Paolo Bonzini)

Remove knowledge of specific MMU indexes (other than MMU_NESTED_IDX and MMU_PHYS_IDX) from mmu_translate(). This will make it possible to split 32-bit and 64-bit MMU indexes.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5f97afe2543f09160a8d123ab6e2e8c6d98fa9ce)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: context fixup in target/i386/cpu.h due to other changes in that area)

2024-03-09  target/arm: Fix 32-bit SMOPA  (Richard Henderson)

While the 8-bit input elements are sequential in the input vector, the 32-bit output elements are not sequential in the output matrix. Do not attempt to compute two 32-bit outputs at the same time.

Cc: qemu-stable@nongnu.org
Fixes: 23a5e3859f5 ("target/arm: Implement SME integer outer product")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2083
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240305163931.242795-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit d572bcb222010b38b382871a23b2f38e2c3f4d2d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: leave the A20 bit set in the final NPT walk  (Paolo Bonzini)

The A20 mask is only applied to the final memory access. Nested page tables are always walked with the raw guest-physical address. Unlike the previous patch, in this one the masking must be kept, but it was done too early.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b5a9de3259f4c791bde2faff086dd5737625e41e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: remove unnecessary/wrong application of the A20 mask  (Paolo Bonzini)

If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already applied in get_physical_address(), which is called via probe_access_full() and x86_cpu_tlb_fill(). If ptw_translate() on the other hand does a MMU_NESTED_IDX access, the A20 mask must not be applied to the address that is looked up in the nested page tables; it must be applied only to the addresses that hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the previous paragraph).

Therefore, we can remove A20 masking from the computation of the page table entry's address, and let get_physical_address() or mmu_translate() apply it when they know they are returning a host-physical address.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a28fe7dc1939333c81b895cdced81c69eb7c5ad0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: Fix physical address truncation  (Paolo Bonzini)

The address translation logic in get_physical_address() will currently truncate physical addresses to 32 bits unless long mode is enabled. This is incorrect when using physical address extensions (PAE) outside of long mode, with the result that a 32-bit operating system using PAE to access memory above 4G will experience undefined behaviour.

The truncation code was originally introduced in commit 33dfdb5 ("x86: only allow real mode to access 32bit without LMA"), where it applied only to translations performed while paging is disabled (and so cannot affect guests using PAE). Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX") rearranged the code such that the truncation also applied to the use of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use atomic operations for pte updates") brought this truncation into scope for page table entry accesses, and is the first commit for which a Windows 10 32-bit guest will reliably fail to boot if memory above 4G is present.

The truncation code however is not completely redundant. Even though the maximum address size for any executed instruction is 32 bits, helpers for operations such as BOUND, FSAVE or XSAVE may ask get_physical_address() to translate an address outside of the 32-bit range, if invoked with an argument that is close to the 4G boundary. Likewise for processor accesses, for example TSS or IDT accesses, when EFER.LMA==0.

So, move the address truncation in get_physical_address() so that it applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: drop unrelated change in target/i386/cpu.c)

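The resulting rule fits in a few lines (self-contained model, not the actual code): truncate for 32-bit MMU indexes only, never for physical or nested-page-table lookups.

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t linear_addr_for_walk(uint64_t addr, bool is_32bit_mmu_index)
    {
        /* MMU_PHYS_IDX / MMU_NESTED_IDX addresses keep their high bits
         * (PAE PTEs can point above 4G); a 32-bit helper access near the
         * 4G boundary (e.g. FSAVE at 0xFFFFFFF8) wraps after masking. */
        return is_32bit_mmu_index ? (uint32_t)addr : addr;
    }
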
2024-02-28  target/i386: check validity of VMCB addresses  (Paolo Bonzini)

MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the maximum physical address width of the processor. Setting them to 1 causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual). The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE, even though the manual is not clear on that.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d09c79010ffd880dc69e7a21e3cfdef90b928fb8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: mask high bits of CR3 in 32-bit mode  (Paolo Bonzini)

CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level paging or PAE paging). Do this in mmu_translate() to remove the last place where get_physical_address() meaningfully drops the high bits of the address.

Cc: qemu-stable@nongnu.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 68fb78d7d5723066ec2cacee7d25d67a4143b42f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

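A sketch of the masking (illustrative, not the actual code): drop CR3[63:32] in 32-bit modes before applying the mode-specific alignment mask (bits 31:12 for legacy paging, 31:5 for PAE).

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t page_table_root(uint64_t cr3, bool lma, bool pae)
    {
        uint64_t base = lma ? cr3 : (uint32_t)cr3;     /* ignore 63:32 */
        return base & (pae && !lma ? ~0x1fULL : ~0xfffULL);
    }
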
2024-02-24  target/ppc: Fix crash on machine check caused by ifetch  (Nicholas Piggin)

is_prefix_insn_excp() loads the first word of the instruction at the address which caused an exception, to determine whether or not it was prefixed, so the prefix bit can be set in [H]SRR1. This works if the instruction image can be loaded, but if the exception was caused by an ifetch, this load could fail and cause a recursive exception and crash.

Machine checks caused by ifetch are not excluded from the prefix check and can crash (see issue 2108 for an example). Fix this by excluding machine checks caused by ifetch from the prefix check.

Cc: qemu-stable@nongnu.org
Acked-by: Cédric Le Goater <clg@kaod.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2108
Fixes: 55a7fa34f89 ("target/ppc: Machine check on invalid real address access on POWER9/10")
Fixes: 5a5d3b23cb2 ("target/ppc: Add SRR1 prefix indication to interrupt handlers")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit c8fd9667e5975fe2e70a906e125a758737eab707)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-24  target/ppc: Fix lxv/stxv MSR facility check  (Nicholas Piggin)

The move to decodetree flipped the inequality test for the VEC / VSX MSR facility check. This caused application crashes under Linux, where these facility-unavailable interrupts are used for lazy switching of VEC/VSX register sets. Getting the incorrect interrupt would result in the wrong registers being loaded, potentially overwriting live values and/or exposing stale ones.

Cc: qemu-stable@nongnu.org
Reported-by: Joel Stanley <joel@jms.id.au>
Fixes: 70426b5bb738 ("target/ppc: moved stxvx and lxvx from legacy to decodtree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1769
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Tested-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit 2cc0e449d17310877fb28a942d4627ad22bb68ea)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-20  target/i386: Generate an illegal opcode exception on cmp instructions with lock prefix  (Ziqiao Kong)

As specified by Intel Manual Vol. 2 3-180, cmp instructions are not allowed to have a lock prefix and a #UD should be raised. Without this patch, s1->T0 will be uninitialized and used in the case OP_CMPL.

Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-ID: <20240215095015.570748-2-ziqiaokong@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 99d0dcd7f102c07a510200d768cae65e5db25d23)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-20  i386/cpuid: Move leaf 7 to correct group  (Xiaoyao Li)

CPUID leaf 7 was grouped together with SGX leaf 0x12 by commit b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM") by mistake. SGX leaf 0x12 has its own logic to check whether a subleaf (starting from 2) is valid, by checking whether bits 0:3 of the corresponding EAX are 1. Leaf 7 instead follows the logic that EAX of subleaf 0 enumerates the maximum valid subleaf.

Fixes: b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240125024016.2521244-4-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0729857c707535847d7fe31d3d91eb8b2a118e3c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

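The two validity rules from the message as a small self-contained sketch (helper names hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    /* Leaf 7: subleaf-0 EAX enumerates the maximum valid subleaf. */
    static bool leaf7_subleaf_valid(uint32_t subleaf, uint32_t subleaf0_eax)
    {
        return subleaf <= subleaf0_eax;
    }

    /* SGX leaf 0x12: a subleaf >= 2 is valid when bits 0:3 of its own
     * EAX are 1, per the commit text. */
    static bool sgx12_subleaf_valid(uint32_t subleaf, uint32_t eax)
    {
        return subleaf < 2 || (eax & 0xf) == 1;
    }
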
2024-02-20  i386/cpuid: Decrease cpuid_i when skipping CPUID leaf 1F  (Xiaoyao Li)

Existing code misses a decrement of cpuid_i when skipping leaf 0x1F. This leaves a blank CPUID entry (with leaf and subleaf as 0, and all fields stuffed with 0s) in the CPUID array, which conflicts with the correct CPUID leaf 0.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240125024016.2521244-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 10f92799af8ba3c3cef2352adcd4780f13fbab31)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

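A minimal self-contained model of the bug and fix (names hypothetical): the loop reserves an array slot before deciding to skip, so skipping must give the slot back.

    #include <stdint.h>

    struct entry { uint32_t leaf; };

    static int fill(struct entry *entries, const uint32_t *leaves,
                    int n_leaves, int skip_1f)
    {
        int cpuid_i = 0;
        for (int i = 0; i < n_leaves; i++) {
            struct entry *c = &entries[cpuid_i++];
            if (leaves[i] == 0x1f && skip_1f) {
                cpuid_i--;   /* the missing decrement: free the slot */
                continue;    /* otherwise a zeroed hole aliases leaf 0 */
            }
            c->leaf = leaves[i];
        }
        return cpuid_i;
    }
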
2024-02-20  i386/cpu: Mask with XCR0/XSS mask for FEAT_XSAVE_XCR0_HI and FEAT_XSAVE_XSS_HI leafs  (Xiaoyao Li)

The values of the FEAT_XSAVE_XCR0_HI leaf and the FEAT_XSAVE_XSS_HI leaf also need to be masked by the XCR0 and XSS masks respectively, to make them logically correct.

Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-3-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a11a365159b944e05be76f3ec3b98c8b38cb70fd)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-20  i386/cpu: Clear FEAT_XSAVE_XSS_LO/HI leafs when CPUID_EXT_XSAVE is not available  (Xiaoyao Li)

Leafs FEAT_XSAVE_XSS_LO and FEAT_XSAVE_XSS_HI also need to be cleared when CPUID_EXT_XSAVE is not set.

Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 81f5cad3858f27623b1b14467926032d229b76cc)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-16  target/arm: Don't get MDCR_EL2 in pmu_counter_enabled() before checking ARM_FEATURE_PMU  (Peter Maydell)

It doesn't make sense to read the value of MDCR_EL2 on a non-A-profile CPU, and in fact if you try to do it we will assert:

    #6  0x00007ffff4b95e96 in __GI___assert_fail (assertion=0x5555565a8c70 "!arm_feature(env, ARM_FEATURE_M)", file=0x5555565a6e5c "../../target/arm/helper.c", line=12600, function=0x5555565a9560 <__PRETTY_FUNCTION__.0> "arm_security_space_below_el3") at ./assert/assert.c:101
    #7  0x0000555555ebf412 in arm_security_space_below_el3 (env=0x555557bc8190) at ../../target/arm/helper.c:12600
    #8  0x0000555555ea6f89 in arm_is_el2_enabled (env=0x555557bc8190) at ../../target/arm/cpu.h:2595
    #9  0x0000555555ea942f in arm_mdcr_el2_eff (env=0x555557bc8190) at ../../target/arm/internals.h:1512

We might call pmu_counter_enabled() on an M-profile CPU (for example from the migration pre/post hooks in machine.c); this should always return false because these CPUs don't set ARM_FEATURE_PMU.

Avoid the assertion by not calling arm_mdcr_el2_eff() before we have done the early return for "PMU not present".

This fixes an assertion failure if you try to do a loadvm or savevm for an M-profile board.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2155
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240208153346.970021-1-peter.maydell@linaro.org
(cherry picked from commit ac1d88e9e7ca0bed83e91e07ce6d0597f10cc77d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

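The function now begins roughly like this (a sketch, not the full set of checks):

    static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
    {
        if (!arm_feature(env, ARM_FEATURE_PMU)) {
            return false;                       /* before arm_mdcr_el2_eff() */
        }
        uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);  /* safe: A-profile only */
        /* ... the existing enable checks follow ... */
    }
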
2024-02-16  target/arm: Fix SVE/SME gross MTE suppression checks  (Richard Henderson)

The TBI and TCMA bits are located within mtedesc, not desc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 855f94eca80c85a99f459e36684ea2f98f6a3243)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-16  target/arm: Handle mte in do_ldrq, do_ldro  (Richard Henderson)

These functions "use the standard load helpers", but fail to clean_data_tbi or populate mtedesc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 623507ccfcfebb0f10229ae5de3f85a27fb615a7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-16  target/arm: Split out make_svemte_desc  (Richard Henderson)

Share code that creates mtedesc and embeds it within simd_desc.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 96fcc9982b4aad7aced7fbff046048bbccc6cb0c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-16  target/arm: Adjust and validate mtedesc sizem1  (Richard Henderson)

When we added SVE_MTEDESC_SHIFT, we effectively limited the maximum size of MTEDESC. Adjust SIZEM1 to consume the remaining bits (32 - 10 - 5 - 12 == 5). Assert that the data to be stored fits within the field (expecting 8 * 4 - 1 == 31, exact fit).

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit b12a7671b6099a26ce5d5ab09701f151e21c112c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-16  target/arm: Fix nregs computation in do_{ld,st}_zpa  (Richard Henderson)

The field is encoded as [0-3], which is convenient for indexing our array of function pointers, but the true value is [1-4]. Adjust before calling do_mem_zpa. Add an assert, and move the comment re passing ZT to the helper back next to the relevant code.

Cc: qemu-stable@nongnu.org
Fixes: 206adacfb8d ("target/arm: Add mte helpers for sve scalar + int loads")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 64c6e7444dff64b42d11b836b9aec9acfbe8ecc2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-09  target/arm: Reinstate "vfp" property on AArch32 CPUs  (Peter Maydell)

In commit 4315f7c614743 we restructured the logic for creating the VFP related properties to avoid testing the aa32_simd_r32 feature on AArch64 CPUs. However in the process we accidentally stopped exposing the "vfp" QOM property on AArch32 TCG CPUs.

This mostly hasn't had any ill effects because not many people want to disable VFP, but it wasn't intentional. Reinstate the property.

Cc: qemu-stable@nongnu.org
Fixes: 4315f7c614743 ("target/arm: Restructure has_vfp_d32 test")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2098
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240126193432.2210558-1-peter.maydell@linaro.org
(cherry picked from commit 185e3fdf8d106cb2f7d234d5e6453939c66db2a9)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-09  target/arm: fix exception syndrome for AArch32 bkpt insn  (Jan Klötzke)

Debug exceptions that target AArch32 Hyp mode are reported differently than on AArch64. Internally, QEMU uses the AArch64 syndromes. Therefore such exceptions need to be either converted to a prefetch abort (breakpoints, vector catch) or a data abort (watchpoints).

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Klötzke <jan.kloetzke@kernkonzept.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240127202758.3326381-1-jan.kloetzke@kernkonzept.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit f670be1aad33e801779af580398895b9455747ee)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-01-27  target/arm: Fix incorrect aa64_tidcp1 feature check  (Peter Maydell)

A typo in the implementation of isar_feature_aa64_tidcp1() means we were checking the field in the wrong ID register, so we might have provided the feature on CPUs that don't have it and not provided it on CPUs that should have it. Correct this bug.

Cc: qemu-stable@nongnu.org
Fixes: 9cd0c0dec97be9 "target/arm: Implement FEAT_TIDCP1"
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2120
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123160333.958841-1-peter.maydell@linaro.org
(cherry picked from commit ee0a2e3c9d2991a11c13ffadb15e4d0add43c257)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-01-27  target/arm: Fix A64 scalar SQSHRN and SQRSHRN  (Peter Maydell)

In commit 1b7bc9b5c8bf374dd we changed handle_vec_simd_sqshrn() so that instead of starting with a 0 value and depositing in each new element from the narrowing operation, it instead started with the raw result of the narrowing operation of the first element.

This is fine in the vector case, because the deposit operations for the second and subsequent elements will always overwrite any higher bits that might have been in the first element's result value in tcg_rd. However in the scalar case we only go through this loop once. The effect is that for a signed narrowing operation, if the result is negative then we will now return a value where the bits above the first element are incorrectly 1 (because the narrowfn returns a sign-extended result, not one that is truncated to the element size).

Fix this by using an extract operation to get exactly the correct bits of the output of the narrowfn for element 1, instead of a plain move.

Cc: qemu-stable@nongnu.org
Fixes: 1b7bc9b5c8bf374dd3 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
(cherry picked from commit 6fffc8378562c7fea6290c430b4f653f830a4c1a)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

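The scalar path now extracts exactly the element-sized bits instead of moving the whole sign-extended result; the change is essentially one line (variable names follow translate-a64.c; treat as a sketch):

    /* element 0 of the scalar result: keep only esize bits */
    tcg_gen_extract_i64(tcg_final, tcg_rd, 0, esize);
    /* previously a plain tcg_gen_mov_i64(tcg_final, tcg_rd), which kept
     * the sign-extended high bits of a negative narrowed value */
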
2024-01-27  target/xtensa: fix OOB TLB entry access  (Max Filippov)

r[id]tlb[01], [iw][id]tlb opcodes use a TLB way index passed in a register by the guest. The host uses 3 bits of the index for ITLB indexing and 4 bits for DTLB, but there are only 7 entries in the ITLB array and 10 in the DTLB array, so a malicious guest may trigger out-of-bound access to these arrays.

Change split_tlb_entry_spec return type to bool to indicate whether the TLB way passed to it is valid. Change get_tlb_entry to return NULL in case an invalid TLB way is requested. Add an assertion to xtensa_tlb_get_entry that the requested TLB way and entry indices are valid. Add checks to the [rwi]tlb helpers that the requested TLB way is valid and return 0 or do nothing when it's not.

Cc: qemu-stable@nongnu.org
Fixes: b67ea0cd7441 ("target-xtensa: implement memory protection options")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20231215120307.545381-1-jcmvbkbc@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 604927e357c2b292c70826e4ce42574ad126ef32)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

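A self-contained model of the added check (way counts from the message; helper name illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define ITLB_WAYS 7     /* but 3 index bits can address 0..7  */
    #define DTLB_WAYS 10    /* but 4 index bits can address 0..15 */

    /* Returns false when the guest-supplied way index is out of bounds. */
    static bool tlb_way_valid(uint32_t v, bool dtlb, uint32_t *wi)
    {
        *wi = v & (dtlb ? 0xf : 0x7);
        return *wi < (dtlb ? DTLB_WAYS : ITLB_WAYS);
    }
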
2024-01-20  target/i386: pcrel: store low bits of physical address in data[0]  (Paolo Bonzini)

For PC-relative translation blocks, env->eip changes during the execution of a translation block. Therefore, QEMU must be able to recover an instruction's PC just from the TranslationBlock struct and the instruction data. Because a TB will not span two pages, QEMU stores all the low bits of EIP in the instruction data and replaces them in x86_restore_state_to_opc. Bits 12 and higher (which may vary between executions of a PCREL TB, since these only use the physical address in the hash key) are kept unmodified from env->eip. The assumption is that these bits of EIP, unlike bits 0-11, will not change as the translation block executes.

Unfortunately, this is incorrect when the CS base is not aligned to a page. Then the linear address of the instructions (i.e. the one with the CS base added) indeed will never span two pages, but bits 12+ of EIP can actually change. For example, if the CS base is 0x80262200 and EIP = 0x6FF4, the first instruction in the translation block will be at linear address 0x802691F4. Even a very small TB will cross to EIP = 0x7xxx, while the linear addresses remain comfortably within a single page.

The fix is simply to use the low bits of the linear address for data[0], since those don't change. Then x86_restore_state_to_opc uses tb->cs_base to compute a temporary linear address (referring to some unknown instruction in the TB, but with the correct values of bits 12 and higher); the low bits are replaced with data[0], and EIP is obtained by subtracting the CS base again.

Huge thanks to Mark Cave-Ayland for the image and initial debugging, and to Gitlab user @kjliew for help with bisecting another occurrence of (hopefully!) the same bug. It should be relatively easy to write a testcase that performs MMIO on an EIP with different bits 12+ than the first instruction of the translation block; any help is welcome.

Fixes: e3a79e0e878 ("target/i386: Enable TARGET_TB_PCREL", 2022-10-11)
Cc: qemu-stable@nongnu.org
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1759
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1964
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2012
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 729ba8e933f8af5800c3a92b37e630e9bdaa9f1e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

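The recovery arithmetic, checked against the numbers in the message (self-contained C; the real x86_restore_state_to_opc differs in detail, and cur_eip here is a made-up EIP inside the TB):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK ((uint64_t)-4096)

    int main(void)
    {
        uint64_t cs_base = 0x80262200;   /* from the example above */
        uint64_t cur_eip = 0x7008;       /* hypothetical EIP inside the TB */
        uint64_t data0   = 0x1f4;        /* low 12 bits of the linear PC */

        /* bits 12+ come from the current linear address, bits 0-11 from
         * data[0]; both are stable while the TB executes */
        uint64_t linear = ((cur_eip + cs_base) & TARGET_PAGE_MASK) | data0;
        uint64_t eip    = linear - cs_base;

        assert(linear == 0x802691f4);    /* first insn of the TB */
        assert(eip == 0x6ff4);
        return 0;
    }
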
2024-01-20  target/i386: fix incorrect EIP in PC-relative translation blocks  (guoguangyao)

The PCREL patches introduced a bug when updating EIP in the !CF_PCREL case. Using s->pc in gen_update_eip_next() solves the problem.

Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Signed-off-by: guoguangyao <guoguangyao18@mails.ucas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240115020804.30272-1-guoguangyao18@mails.ucas.ac.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2926eab8969908bc068629e973062a0fb6ff3759)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-01-20  target/i386: Do not re-compute new pc with CF_PCREL  (Richard Henderson)

With PCREL, we have a page-relative view of EIP, and an approximation of PC = EIP+CSBASE that is good enough to detect page crossings. If we try to recompute PC after masking EIP, we will mess up that approximation and write a corrupt value to EIP.

We already handled masking properly for PCREL, so the fix in b5e0d5d2 was only needed for the !PCREL path.

Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240101230617.129349-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a58506b748b8988a95f4fa1a2420ac5c17038b30)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-01-17  target/hppa: Fix IOR and ISR on error in probe  (Helge Deller)

Put correct values (depending on CPU arch) into IOR and ISR on fault.

Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 31efbe72c6cc54b9cbc2505d78870a8a87a8d392)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-01-17  target/hppa: Fix IOR and ISR on unaligned access trap  (Helge Deller)

Put correct values (depending on CPU arch) into IOR and ISR on fault.

Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 910ada0225d17530188aa45afcb9412c17267f46)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>