path: root/target

2024-04-30  target/loongarch/cpu.c: typo fix: expection  (Michael Tokarev)

Fixes: 1590154ee437 ("target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'")
Fixes: ef9b43bb8e2d (in stable-8.2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 0cbb322f70e8a87e4acbffecef5ea8f9448f3513)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-04-27  target/riscv/kvm: change timer regs size to u64  (Daniel Henrique Barboza)

KVM_REG_RISCV_TIMER regs are always u64 according to the KVM API, but at this moment we'll return u32 regs if we're running a RISCV32 target. Use the kvm_riscv_reg_id_u64() helper in RISCV_TIMER_REG() to fix it.

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 10f86d1b845087d14b58d65dd2a6e3411d1b6529)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-04-27  target/riscv/kvm: change KVM_REG_RISCV_FP_D to u64  (Daniel Henrique Barboza)

KVM_REG_RISCV_FP_D regs are always u64 size. Using kvm_riscv_reg_id() in RISCV_FP_D_REG() ends up encoding the wrong size if we're running with TARGET_RISCV32. Create a new helper that returns a KVM ID with u64 size and use it with RISCV_FP_D_REG().

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 450bd6618fda3d2e2ab02b2fce1c79efd5b66084)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-04-27  target/riscv/kvm: change KVM_REG_RISCV_FP_F to u32  (Daniel Henrique Barboza)

KVM_REG_RISCV_FP_F regs have u32 size according to the API, but by using kvm_riscv_reg_id() in RISCV_FP_F_REG() we're returning u64 sizes when running with TARGET_RISCV64. The most likely reason why no one noticed this is because we're not implementing kvm_cpu_synchronize_state() in RISC-V yet. Create a new helper that returns a KVM ID with u32 size and use it in RISCV_FP_F_REG().

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 49c211ffca00fdf7c0c29072c224e88527a14838)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

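Taken together, these three fixes replace the target-width kvm_riscv_reg_id() with size-explicit helpers. A minimal sketch of the idea, assuming the KVM_REG_* encoding constants from the kernel UAPI headers (not the verbatim QEMU patch):

    /* Encode a RISC-V KVM register ID with an explicit size instead of
     * inheriting the build-time target register width. */
    static uint64_t kvm_riscv_reg_id_u32(uint64_t type, uint64_t idx)
    {
        return KVM_REG_RISCV | KVM_REG_SIZE_U32 | type | idx;
    }

    static uint64_t kvm_riscv_reg_id_u64(uint64_t type, uint64_t idx)
    {
        return KVM_REG_RISCV | KVM_REG_SIZE_U64 | type | idx;
    }

    /* FP_F regs are u32 per the KVM API; FP_D and TIMER regs are u64. */
    #define RISCV_FP_F_REG(idx)  kvm_riscv_reg_id_u32(KVM_REG_RISCV_FP_F, idx)
    #define RISCV_FP_D_REG(idx)  kvm_riscv_reg_id_u64(KVM_REG_RISCV_FP_D, idx)
    #define RISCV_TIMER_REG(name) \
        kvm_riscv_reg_id_u64(KVM_REG_RISCV_TIMER, KVM_REG_RISCV_TIMER_REG(name))
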
2024-04-10  target/m68k: Map FPU exceptions to FPSR register  (Keith Packard)

Add helpers for reading/writing the 68881 FPSR register so that changes in floating point exception state can be seen by the application. Call these helpers in pre_load/post_load hooks to synchronize exception state.

Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230803035231.429697-1-keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 5888357942da1fd5a50efb6e4a6af8b1a27a5af8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

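A sketch of what such a hook pair can look like; the helper names match current QEMU, but the vmstate wiring here is an assumption, not the committed patch:

    /* Fold live softfloat exception flags into the architectural FPSR
     * before the register is saved, and push the saved FPSR back into
     * softfloat state after load. */
    static int fpu_pre_save(void *opaque)
    {
        M68kCPU *cpu = opaque;

        cpu->env.fpsr = cpu_m68k_get_fpsr(&cpu->env);
        return 0;
    }

    static int fpu_post_load(void *opaque, int version_id)
    {
        M68kCPU *cpu = opaque;

        cpu_m68k_set_fpsr(&cpu->env, cpu->env.fpsr);
        return 0;
    }
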
2024-04-10  target/sh4: add missing CHECK_NOT_DELAY_SLOT  (Zack Buhman)

CHECK_NOT_DELAY_SLOT is correctly applied to the branch-related instructions, but not to the PC-relative mov* instructions. I verified the existence of an illegal slot exception on a SH7091 when any of these instructions are attempted inside a delay slot. This also matches the behavior described in the SH-4 ISA manual.

Signed-off-by: Zack Buhman <zack@buhman.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240407150705.5965-1-zack@buhman.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
(cherry picked from commit b754cb2dcde26a7bc8a9d17bb6900a0ac0dd38e2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

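The guard in question is a translate-time macro; a sketch consistent with target/sh4/translate.c (the flag name is taken from current QEMU and is an assumption here):

    /* Abort translation with an illegal-slot exception when the insn
     * being decoded sits in a delay slot; this commit applies the guard
     * to the PC-relative mov instructions as well as the branches. */
    #define CHECK_NOT_DELAY_SLOT \
        if (ctx->envflags & TB_FLAG_DELAY_SLOT_MASK) { \
            goto do_illegal_slot; \
        }
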
2024-04-10  target/sh4: Fix mac.w with saturation enabled  (Zack Buhman)

The saturation arithmetic logic in helper_macw is not correct. I tested and verified this behavior on a SH7091.

Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Zack Buhman <zack@buhman.org>
Message-Id: <20240405233802.29128-3-zack@buhman.org>
[rth: Reformat helper_macw, add a test case.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 7227c0cd506eaab5b1d89d15832cac7e05ecb412)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

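A behavioral sketch of the corrected semantics, assuming QEMU's CPUSH4State fields and SR_S bit definition (the committed helper is written differently and was reformatted by rth):

    void helper_macw(CPUSH4State *env, int32_t arg0, int32_t arg1)
    {
        /* 16 x 16 -> 32 signed multiply; operands arrive sign-extended. */
        int64_t prod = (int64_t)(int16_t)arg0 * (int16_t)arg1;

        if (env->sr & (1u << SR_S)) {
            /* Saturating mode: 32-bit accumulate in MACL only; MACH
             * merely records that an overflow happened. */
            int64_t res = (int64_t)(int32_t)env->macl + prod;
            if (res != (int32_t)res) {
                res = res < 0 ? INT32_MIN : INT32_MAX;
                env->mach = 1;
            }
            env->macl = (uint32_t)res;
        } else {
            /* Normal mode: full 64-bit accumulate across MACH:MACL. */
            uint64_t mac = ((uint64_t)env->mach << 32) | env->macl;
            mac += (uint64_t)prod;
            env->mach = mac >> 32;
            env->macl = (uint32_t)mac;
        }
    }
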
2024-04-10  target/sh4: Fix mac.l with saturation enabled  (Zack Buhman)

The saturation arithmetic logic in helper_macl is not correct. I tested and verified this behavior on a SH7091.

Signed-off-by: Zack Buhman <zack@buhman.org>
Message-Id: <20240404162641.27528-2-zack@buhman.org>
[rth: Reformat helper_macl, add a test case.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit c97e8977dcacb3fa8362ee28bcee75ceb01fceaa)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-04-10  target/sh4: Merge mach and macl into a union  (Richard Henderson)

Allow host access to the entire 64-bit accumulator.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 7d95db5e78a24d3315e3112d26909a7262355cb7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

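Inside CPUSH4State the union looks roughly like this (a sketch; HOST_BIG_ENDIAN is QEMU's host-endianness define):

    /* Alias the 64-bit accumulator over the two architectural 32-bit
     * halves; field order follows host endianness so that 'mac' reads
     * back as MACH:MACL either way. */
    union {
        uint64_t mac;
        struct {
    #if HOST_BIG_ENDIAN
            uint32_t mach, macl;
    #else
            uint32_t macl, mach;
    #endif
        };
    };
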
2024-04-10  target/sh4: mac.w: memory accesses are 16-bit words  (Zack Buhman)

Before this change, executing a code sequence such as:

        mova   tblm,r0
        mov    r0,r1
        mova   tbln,r0
        clrs
        clrmac
        mac.w  @r0+,@r1+
        mac.w  @r0+,@r1+

        .align 4
    tblm:
        .word  0x1234
        .word  0x5678
    tbln:
        .word  0x9abc
        .word  0xdefg

does not result in correct behavior:

Expected behavior:

    first macw : macl = 0x1234 * 0x9abc + 0x0        mach = 0x0
    second macw: macl = 0x5678 * 0xdefg + 0xb00a630  mach = 0x0

Observed behavior (qemu-sh4eb, prior to this commit):

    first macw : macl = 0x5678 * 0xdefg + 0x0  mach = 0x0
    second macw: (unaligned longword memory access, SIGBUS)

Various SH-4 ISA manuals also confirm that `mac.w` is a 16-bit word memory access, not a 32-bit longword memory access.

Signed-off-by: Zack Buhman <zack@buhman.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240402093756.27466-1-zack@buhman.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit b0f2f2976b4db05351117b0440b32bf0aac2c5c6)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

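On the translator side the fix amounts to using 16-bit memory ops; a sketch with QEMU's MemOp constants and the register accessors from target/sh4/translate.c (assumed context, not the verbatim diff):

    /* mac.w @Rm+,@Rn+: two sign-extended 16-bit loads, each pointer
     * post-incremented by 2 -- previously these were 32-bit loads
     * incremented by 4, hence the wrong operands and SIGBUS above. */
    tcg_gen_qemu_ld_i32(arg0, REG(B7_4), ctx->memidx, MO_TESW);
    tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 2);
    tcg_gen_qemu_ld_i32(arg1, REG(B11_8), ctx->memidx, MO_TESW);
    tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 2);
    gen_helper_macw(tcg_env, arg0, arg1);
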
2024-04-09  target/arm: Use correct SecuritySpace for AArch64 AT ops at EL3  (Peter Maydell)

When we do an AT address translation operation, the page table walk is supposed to be performed in the context of the EL we're doing the walk for, so for instance an AT S1E2R walk is done for EL2. In the pseudocode an EL is passed to AArch64.AT(), which calls SecurityStateAtEL() to find the security state that we should be doing the walk with.

In ats_write64() we get this wrong, instead using the current security space always. This is fine for AT operations performed from EL1 and EL2, because there the current security state and the security state for the lower EL are the same. But for AT operations performed from EL3, the current security state is always either Secure or Root, whereas we want to use the security state defined by SCR_EL3.{NS,NSE} for the walk. This affects not just guests using FEAT_RME but also ones where EL3 is Secure state and the EL3 code is trying to do an AT for a NonSecure EL2 or EL1.

Use arm_security_space_below_el3() to get the SecuritySpace to pass to do_ats_write() for all AT operations except the AT S1E3* operations.

Cc: qemu-stable@nongnu.org
Fixes: e1ee56ec2383 ("target/arm: Pass security space rather than flag for AT instructions")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2250
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240405180232.3570066-1-peter.maydell@linaro.org
(cherry picked from commit 19b254e86a900dc5ee332e3ac0baf9c521301abf)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

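A sketch of the corrected selection in ats_write64(); the structure is assumed (the real code dispatches per AT op) and the predicate name is hypothetical:

    ARMSecuritySpace ss;

    if (at_op_is_s1e3(ri)) {       /* hypothetical: AT S1E3R / S1E3W */
        /* EL3's own space: Secure or Root. */
        ss = arm_security_space(env);
    } else {
        /* Walk in the space SCR_EL3.{NS,NSE} defines for the lower ELs. */
        ss = arm_security_space_below_el3(env);
    }
    par64 = do_ats_write(env, value, mmu_idx, ss);
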
2024-04-02  target/arm: take HSTR traps of cp15 accesses to EL2, not EL1  (Peter Maydell)

The HSTR_EL2 register allows the hypervisor to trap AArch32 EL1 and EL0 accesses to cp15 registers. We incorrectly implemented this so they trap to EL1 when we detect the need for a HSTR trap at code generation time. (The check in access_check_cp_reg() which we do at runtime to catch traps from EL0 is correctly routing them to EL2.) Use the correct target EL when generating the code to take the trap.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2226
Fixes: 049edada5e93df ("target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240325133116.2075362-1-peter.maydell@linaro.org
(cherry picked from commit fbe5ac5671a9cfcc7f4aee9a5fac7720eea08876)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-04-01  target/hppa: Clear psw_n for BE on use_nullify_skip path  (Richard Henderson)

Along this path we have already skipped the insn to be nullified, so the subsequent insn should be executed.

Cc: qemu-stable@nongnu.org
Reported-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 4a3aa11e1fb25c28c24a43fd2835c429b00a463d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-27  target/riscv/kvm: fix timebase-frequency when using KVM acceleration  (Yong-Xuan Wang)

The timebase-frequency of the guest OS should be the same as the host machine's. When using KVM acceleration, the timebase-frequency value in the DTS should be obtained from the hypervisor.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Message-ID: <20240314061510.9800-1-yongxuan.wang@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 385e575cd5ab2436c123e4b7f8c9b383a64c0dbe)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: context fix due to missing other changes in this area in 8.2.x)

2024-03-27  target/riscv: Fix mode in riscv_tlb_fill  (Irina Ryapolova)

The mmu_idx needs to be converted to a privilege mode before it is passed to the PMP function.

Signed-off-by: Irina Ryapolova <irina.ryapolova@syntacore.com>
Fixes: b297129ae1 ("target/riscv: propagate PMP permission to TLB page")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240320172828.23965-1-irina.ryapolova@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit e06adebb08325c39e4c9b652139426c10f021abb)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

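The conversion itself is small; a sketch assuming the mmuidx_priv() helper from target/riscv/internals.h:

    /* In riscv_cpu_tlb_fill(): PMP functions reason about privilege
     * modes, not TLB MMU indexes. The MMU index already encodes the
     * effective privilege (including MPRV redirection), so extract it
     * before any PMP query. */
    int mode = mmuidx_priv(mmu_idx);
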
2024-03-27  target/riscv: rvv: Remove the dependency of Zvfbfmin to Zfbfmin  (Max Chou)

According to the Zvfbfmin definition in the RISC-V BF16 extensions spec, the Zvfbfmin extension only requires either the V extension or the Zve32f extension.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240321170929.1162507-1-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit c9b07fe14d3525cd3f2fc01f46eeb3d4ed7c3603)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

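A sketch of the relaxed dependency check in the CPU validation path (error wording assumed):

    /* Zvfbfmin needs a vector FP base: V (which implies Zve32f in
     * QEMU's config model) or Zve32f itself -- not scalar Zfbfmin. */
    if (cpu->cfg.ext_zvfbfmin && !cpu->cfg.ext_zve32f) {
        error_setg(errp, "Zvfbfmin extension depends on V or Zve32f extensions");
        return;
    }
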
2024-03-27  target/riscv/vector_helper.c: optimize loops in ldst helpers  (Daniel Henrique Barboza)

Change the for loops in ldst helpers to do a single increment in the counter, and assign it to env->vstart, to avoid re-reading vstart every time.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-11-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 0a11629c915f61df798919db51a18ffe4649cb65)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

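The rewritten loop shape (a sketch; the per-element load/store body is elided):

    /* One increment serves both the loop counter and env->vstart, so
     * the helper no longer re-reads vstart from env on each pass. */
    for (i = env->vstart; i < env->vl; env->vstart = ++i) {
        /* ... access element i ... */
    }
    env->vstart = 0;
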
2024-03-27  target/riscv/vector_helpers: do early exit when vstart >= vl  (Daniel Henrique Barboza)

We're going to make changes that will require each helper to be responsible for the 'vstart' management, i.e. we will drop the 'vstart < vl' assumption that helpers have today.

Helpers are usually able to deal with vstart >= vl, i.e. doing nothing aside from setting vstart = 0 at the end, but the tail update functions will update the tail regardless of vstart being valid or not. Unifying the tail update process in a single function that would handle the vstart >= vl case isn't trivial (see [1] for more info).

This patch takes a blunt approach: do an early exit in every single vector helper if vstart >= vl, unless the helper is guarded with vstart_eq_zero in the translation. For those cases the helper is ready to deal with cases where vl might be zero, i.e. throwing exceptions based on it like vcpop_m() and vfirst_m().

Helpers that weren't changed:

- vcpop_m(), vfirst_m(), vmsetm(), GEN_VEXT_VIOTA_M(): these are guarded directly with vstart_eq_zero;
- GEN_VEXT_VCOMPRESS_VM(): guarded with vcompress_vm_check() that checks vstart_eq_zero;
- GEN_VEXT_RED(): guarded with either reduction_check() or reduction_widen_check(), both check vstart_eq_zero;
- GEN_VEXT_FRED(): guarded with either freduction_check() or freduction_widen_check(), both check vstart_eq_zero.

Another exception is vext_ldst_whole(), which operates on an effective vector length regardless of the current settings in vtype and vl.

[1] https://lore.kernel.org/qemu-riscv/1590234b-0291-432a-a0fa-c5a6876097bc@linux.alibaba.com/

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-7-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit df4252b2ecaf93b601109373a17427d1867046e8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

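The guard placed at the top of each affected helper looks like this (sketch):

    /* vstart >= vl: nothing to execute, but the helper still owns the
     * architectural obligation to leave vstart cleared. */
    if (env->vstart >= env->vl) {
        env->vstart = 0;
        return;
    }
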
2024-03-27  target/riscv: always clear vstart in whole vec move insns  (Daniel Henrique Barboza)

These insns have 2 paths: we'll either have vstart already cleared if vstart_eq_zero, or we'll do a brcond to check if vstart >= maxsz to call the 'vmvr_v' helper. The helper will clear vstart if it executes until the end, or if vstart >= vl.

For starters, the check itself is wrong: we're checking vstart >= maxsz, when in fact we should use vstart in bytes, or 'startb' as the 'vmvr_v' helper calls it, to do the comparison. But even after fixing the comparison we'll still need to clear vstart in the end, which isn't happening either.

We want to make the helpers responsible for managing vstart, including these corner cases, precisely to avoid these situations:

- remove the wrong vstart >= maxsz cond from the translation;
- add a 'startb >= maxsz' cond in 'vmvr_v', and clear vstart if that happens.

This way we're now sure that vstart is being cleared in the end of the execution, regardless of the path taken.

Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 7e53e3ddf6dff200098e112c5370ab16d2d5dbd1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

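Inside the helper the new corner case reads roughly as follows (the vstart-to-bytes scaling mirrors the helper's existing startb computation and is assumed here):

    uint32_t startb = env->vstart * sewb;   /* vstart scaled to bytes */

    if (startb >= maxsz) {
        /* Whole-register move entirely skipped: still clear vstart. */
        env->vstart = 0;
        return;
    }
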
2024-03-27  target/riscv/vector_helper.c: fix 'vmvr_v' memcpy endianness  (Daniel Henrique Barboza)

vmvr_v isn't handling the case where the host might be big endian and the bytes to be copied aren't sequential.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 768e7b329c0be22035da077fe76221dd0a47103b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-27  trans_rvv.c.inc: set vstart = 0 in int scalar move insns  (Daniel Henrique Barboza)

trans_vmv_x_s, trans_vmv_s_x, trans_vfmv_f_s and trans_vfmv_s_f aren't setting vstart = 0 after execution. This is usually done by a helper in vector_helper.c but these functions don't use helpers. We'll set vstart after any potential 'over' brconds, and that will also mandate a mark_vs_dirty().

Fixes: dedc53cbc9 ("target/riscv: rvv-1.0: integer scalar move instructions")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 0848f7c18ef50de9f955e7eeb4363d92766a41bf)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

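A sketch of the added translate-time epilogue (cpu_vstart is the TCG global backing env->vstart):

    /* These trans functions never call a vector helper, so they must
     * clear vstart and mark the vector state dirty themselves, after
     * any 'over' branch has been resolved. */
    tcg_gen_movi_tl(cpu_vstart, 0);
    mark_vs_dirty(s);
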
2024-03-27  target/riscv/vector_helper.c: set vstart = 0 in GEN_VEXT_VSLIDEUP_VX()  (Daniel Henrique Barboza)

The helper isn't setting env->vstart = 0 after its execution, as it is expected from every vector instruction that completes successfully.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240314175704.478276-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit d3646e31ce6d1e02e46e6eabdbc2e637c0cbece7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-27  target/i386/tcg: Enable page walking from MMIO memory  (Gregory Price)

CXL emulation of interleave requires read and write hooks due to its requirement for subpage granularity. The Linux kernel stack now enables using this memory as conventional memory in a separate NUMA node. If a process is deliberately forced to run from that node

    $ numactl --membind=1 ls

the page table walk on i386 fails.

Useful part of backtrace:

    (cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p") at ../../cpu-target.c:359
    (retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>) at ../../accel/tcg/cputlb.c:1339
    (cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
    (cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
    (cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
    at ../../accel/tcg/ldst_common.c.inc:301
    at ../../target/i386/tcg/sysemu/excp_helper.c:173
    (err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0) at ../../target/i386/tcg/sysemu/excp_helper.c:578
    (cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604

Avoid this by plumbing the address all the way down from x86_cpu_tlb_fill(), where it is available as retaddr, to the actual accessors, which provide it to probe_access_full(), which already handles MMIO accesses.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 9dab7bbb017d11b64c52239fa4e2f910a6a004f2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

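The plumbing ends in the PTE accessors; a sketch of the 8-byte one, with types as in target/i386/tcg/sysemu/excp_helper.c (close to, but not guaranteed to be, the committed code):

    /* With a host pointer we can read the PTE from RAM directly;
     * otherwise fall back to the MMU-index load that knows how to
     * access MMIO, passing the original fault return address rather
     * than 0 so the memory core can unwind properly. */
    static uint64_t ptw_ldq(const PTETranslate *in, uint64_t ra)
    {
        if (likely(in->haddr)) {
            return ldq_p(in->haddr);
        }
        return cpu_ldq_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, ra);
    }
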
2024-03-25  target/s390x: Use mutable temporary value for op_ts  (Ido Plat)

Otherwise TCG would assume the register that holds t1 is constant and reuse it whenever the value it holds is needed.

Cc: qemu-stable@nongnu.org
Fixes: f1ea739bd598 ("target/s390x: Use tcg_constant_* in local contexts")
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[iii: Adjust a newline and capitalization, add tags]
Signed-off-by: Ido Plat <ido.plat@ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-ID: <20240318202722.20675-1-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit 272fba9779af0bb1c29cd30302fc1e31c59274d0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

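The fix boils down to materializing the value into a private temporary (sketch):

    /* op_ts stores a result back into t1, so it must not be a
     * tcg_constant_i64() value: constants may be shared by the
     * optimizer and must never be written to. */
    TCGv_i64 t1 = tcg_temp_new_i64();
    tcg_gen_movi_i64(t1, 0);
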
2024-03-25  target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'  (Song Gao)

qemu-system-loongarch64 asserts with the option '-d int': helper_idle() raises an EXCP_HLT exception, but that exception's name is undefined.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240321123606.1704900-1-gaosong@loongson.cn>
(cherry picked from commit 1590154ee4376819a8c6ee61e849ebf4a4e7cd02)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/loongarch: Fix qemu-loongarch64 hang when executing 'll.d $t0, $t0, 0'  (Song Gao)

In gen_ll, if a->imm is zero, make_address_x returns src1, but the load into the destination may clobber src1. Use a new destination to fix this problem.

Fixes: c5af6628f4be (target/loongarch: Extract make_address_i() helper)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240320013955.1561311-1-gaosong@loongson.cn>
(cherry picked from commit 77642f92c0b71a105aba2a4d03bc62328eae703b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

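A sketch of the fixed gen_ll() body (accessor names as in target/loongarch's translate code; the exact committed diff may differ):

    TCGv dest = gpr_dst(ctx, a->rd, EXT_NONE);
    TCGv src1 = gpr_src(ctx, a->rs1, EXT_NONE);
    TCGv t0 = tcg_temp_new();
    TCGv taddr = make_address_i(ctx, src1, a->imm);  /* may be src1 itself */

    /* Load into a scratch temp first: if taddr aliases both src1 and
     * dest, loading straight into dest would clobber the address that
     * is still needed for the lladdr bookkeeping below. */
    tcg_gen_qemu_ld_tl(t0, taddr, ctx->mem_idx, mop);
    tcg_gen_st_tl(taddr, tcg_env, offsetof(CPULoongArchState, lladdr));
    tcg_gen_mov_tl(dest, t0);
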
2024-03-22  target/hppa: fix do_stdby_e()  (Sven Schnelle)

stdby,e,m was writing data from the wrong half of the register into memory for cases 0-3.

Fixes: 25460fc5a71 ("target/hppa: Implement STDBY")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-7-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 518d2f4300e5c50a3e6416fd46e58373781a5267)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: mask privilege bits in mfia  (Sven Schnelle)

mfia should return only the iaoq bits without privilege bits.

Fixes: 98a9cb792c8 ("target-hppa: Implement system and memory-management insns")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Message-Id: <20240319161921.487080-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit b5e0b3a53c983c4a9620a44a6a557b389e589218)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

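The mask is a single AND over the front IA queue value (a sketch; on PA-RISC the low two bits of the IA offset hold the privilege level):

    /* Return only the instruction-offset bits; bits 1:0 are the
     * privilege level and must read as zero from mfia. */
    tcg_gen_andi_i64(dest, dest, ~3ULL);
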
2024-03-22  target/hppa: exit tb on flush cache instructions  (Sven Schnelle)

When the guest modifies code in the TB it is currently executing from, it executes a fic instruction to flush the instruction cache. Exit the TB on such an instruction, otherwise we might execute stale code.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20240319161921.487080-5-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit ad1fdacd1b936557514dd72c2079a80be0c2dfb4)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: fix access_id check  (Sven Schnelle)

PA2.0 provides 8 instead of 4 PID registers.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-4-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit ae157fc25053917830c3b581bc282f906e6d95d3)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: fix shrp for wide mode  (Sven Schnelle)

Fixes: f7b775a9c075 ("target/hppa: Implement SHRPD")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Message-Id: <20240319161921.487080-3-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit d37fad0ae5bd2c544fdb0f2eff6acdb28a155be0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: ldcw,s uses static shift of 3  (Sven Schnelle)

Fixes: 96d6407f363 ("target-hppa: Implement loads and stores")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit c3ea1996a14d5dbbedb3f9036f7ebec4395dc889)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_12a insns for wide mode  (Richard Henderson)

Tested-by: Helge Deller <deller@gmx.de>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 46174e140d274385b1255bc7f16a5a711853053f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_11a insns for wide mode  (Richard Henderson)

Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 4768c28edd4097ebef42822e15b4a43026b15376)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-22  target/hppa: Fix assemble_16 insns for wide mode  (Richard Henderson)

Reported-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 72bace2d13cb427fde3bb50ae1a71a2abe9acc0f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-21  target/i386: Revert monitor_puts() in do_inject_x86_mce()  (Tao Su)

monitor_puts() doesn't check the monitor pointer, but do_inject_x86_mce() may have a parameter with a NULL monitor pointer. Revert the use of monitor_puts() in do_inject_x86_mce() to fix this; the fact that we send the same message to the monitor and the log then becomes obvious again.

Fixes: bf0c50d4aa85 (monitor: expose monitor_puts to rest of code)
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320083640.523287-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7fd226b04746f0be0b636de5097f1b42338951a0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-03-21  target/i386: fix direction of "32-bit MMU" test  (Paolo Bonzini)

The low bit of MMU indices for x86 TCG indicates whether the processor is in 32-bit mode and therefore linear addresses have to be masked to 32 bits. However, the index was computed incorrectly, leading to possible conflicts in the TLB for any address above 4G.

Analyzed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: b1661801c18 ("target/i386: Fix physical address truncation", 2024-02-28)
Fixes: a28b6b4e743 ("target/i386: Fix physical address truncation" in stable-8.2)
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2206
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2cc68629a6fc198f4a972698bdd6477f883aedfb)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")

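The fix is one flipped conditional; a sketch of the 8.2-era cpu_mmu_index() shape:

    /* An odd MMU index means "32-bit: mask linear addresses"; the bad
     * version returned 1 for 64-bit code segments instead. */
    int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 0 : 1;

    return mmu_index_base + mmu_index_32;
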
2024-03-21  target/i386: use separate MMU indexes for 32-bit accesses  (Paolo Bonzini)

Accesses from a 32-bit environment (32-bit code segment for instruction accesses, EFER.LMA==0 for processor accesses) have to mask away the upper 32 bits of the address. While a bit wasteful, the easiest way to do so is to use separate MMU indexes. These days, QEMU anyway is compiled with a fixed value for NB_MMU_MODES.

Split MMU_USER_IDX, MMU_KSMAP_IDX and MMU_KNOSMAP_IDX in two.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 90f641531c782c873a05895f411c05fbbbef3c49)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")

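A sketch of the doubled index space (values illustrative, not necessarily the committed ones):

    #define MMU_KSMAP64_IDX    0
    #define MMU_KSMAP32_IDX    1
    #define MMU_USER64_IDX     2
    #define MMU_USER32_IDX     3
    #define MMU_KNOSMAP64_IDX  4
    #define MMU_KNOSMAP32_IDX  5

    /* With even/odd pairing, "does this index mask addresses to
     * 32 bits?" is just the low bit. */
    static inline bool is_mmu_index_32(int mmu_index)
    {
        return mmu_index & 1;
    }
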
2024-03-21  target/i386: introduce function to query MMU indices  (Paolo Bonzini)

Remove knowledge of specific MMU indexes (other than MMU_NESTED_IDX and MMU_PHYS_IDX) from mmu_translate(). This will make it possible to split 32-bit and 64-bit MMU indexes.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5f97afe2543f09160a8d123ab6e2e8c6d98fa9ce)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: context fixup in target/i386/cpu.h due to other changes in that area)

2024-03-09  target/arm: Fix 32-bit SMOPA  (Richard Henderson)

While the 8-bit input elements are sequential in the input vector, the 32-bit output elements are not sequential in the output matrix. Do not attempt to compute 2 32-bit outputs at the same time.

Cc: qemu-stable@nongnu.org
Fixes: 23a5e3859f5 ("target/arm: Implement SME integer outer product")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2083
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240305163931.242795-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit d572bcb222010b38b382871a23b2f38e2c3f4d2d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: leave the A20 bit set in the final NPT walk  (Paolo Bonzini)

The A20 mask is only applied to the final memory access. Nested page tables are always walked with the raw guest-physical address. Unlike the previous patch, in this one the masking must be kept, but it was done too early.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b5a9de3259f4c791bde2faff086dd5737625e41e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: remove unnecessary/wrong application of the A20 mask  (Paolo Bonzini)

If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already applied in get_physical_address(), which is called via probe_access_full() and x86_cpu_tlb_fill().

If ptw_translate() on the other hand does a MMU_NESTED_IDX access, the A20 mask must not be applied to the address that is looked up in the nested page tables; it must be applied only to the addresses that hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the previous paragraph).

Therefore, we can remove A20 masking from the computation of the page table entry's address, and let get_physical_address() or mmu_translate() apply it when they know they are returning a host-physical address.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a28fe7dc1939333c81b895cdced81c69eb7c5ad0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: Fix physical address truncation  (Paolo Bonzini)

The address translation logic in get_physical_address() will currently truncate physical addresses to 32 bits unless long mode is enabled. This is incorrect when using physical address extensions (PAE) outside of long mode, with the result that a 32-bit operating system using PAE to access memory above 4G will experience undefined behaviour.

The truncation code was originally introduced in commit 33dfdb5 ("x86: only allow real mode to access 32bit without LMA"), where it applied only to translations performed while paging is disabled (and so cannot affect guests using PAE). Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX") rearranged the code such that the truncation also applied to the use of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use atomic operations for pte updates") brought this truncation into scope for page table entry accesses, and is the first commit for which a Windows 10 32-bit guest will reliably fail to boot if memory above 4G is present.

The truncation code however is not completely redundant. Even though the maximum address size for any executed instruction is 32 bits, helpers for operations such as BOUND, FSAVE or XSAVE may ask get_physical_address() to translate an address outside of the 32-bit range, if invoked with an argument that is close to the 4G boundary. Likewise for processor accesses, for example TSS or IDT accesses, when EFER.LMA==0.

So, move the address truncation in get_physical_address() so that it applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: drop unrelated change in target/i386/cpu.c)

2024-02-28  target/i386: check validity of VMCB addresses  (Paolo Bonzini)

MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the maximum physical address width of the processor. Setting them to 1 causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual). The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE, even though the manual is not clear on that.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d09c79010ffd880dc69e7a21e3cfdef90b928fb8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-28  target/i386: mask high bits of CR3 in 32-bit mode  (Paolo Bonzini)

CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level paging or PAE paging). Do this in mmu_translate() to remove the last place where get_physical_address() meaningfully drops the high bits of the address.

Cc: qemu-stable@nongnu.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 68fb78d7d5723066ec2cacee7d25d67a4143b42f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

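A sketch of the masking at the start of the page walk in mmu_translate() (flag name from QEMU's hflags; exact placement assumed):

    /* Outside long mode, hardware ignores CR3 bits 63:32 for both
     * legacy 2-level and PAE paging. */
    uint64_t cr3 = env->cr[3];
    if (!(env->hflags & HF_LMA_MASK)) {
        cr3 &= 0xffffffffull;
    }
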
2024-02-24  target/ppc: Fix crash on machine check caused by ifetch  (Nicholas Piggin)

is_prefix_insn_excp() loads the first word of the instruction address which caused an exception, to determine whether or not it was prefixed, so the prefix bit can be set in [H]SRR1. This works if the instruction image can be loaded, but if the exception was caused by an ifetch, this load could fail and cause a recursive exception and crash.

Machine checks caused by ifetch are not excluded from the prefix check and can crash (see issue 2108 for an example). Fix this by excluding machine checks caused by ifetch from the prefix check.

Cc: qemu-stable@nongnu.org
Acked-by: Cédric Le Goater <clg@kaod.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2108
Fixes: 55a7fa34f89 ("target/ppc: Machine check on invalid real address access on POWER9/10")
Fixes: 5a5d3b23cb2 ("target/ppc: Add SRR1 prefix indication to interrupt handlers")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit c8fd9667e5975fe2e70a906e125a758737eab707)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-24  target/ppc: Fix lxv/stxv MSR facility check  (Nicholas Piggin)

The move to decodetree flipped the inequality test for the VEC / VSX MSR facility check. This caused application crashes under Linux, where these facility-unavailable interrupts are used for lazy switching of VEC/VSX register sets. Getting the incorrect interrupt would result in wrong registers being loaded, potentially overwriting live values and/or exposing stale ones.

Cc: qemu-stable@nongnu.org
Reported-by: Joel Stanley <joel@jms.id.au>
Fixes: 70426b5bb738 ("target/ppc: moved stxvx and lxvx from legacy to decodtree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1769
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Tested-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit 2cc0e449d17310877fb28a942d4627ad22bb68ea)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-20  target/i386: Generate an illegal opcode exception on cmp instructions with lock prefix  (Ziqiao Kong)

As specified by Intel Manual Vol2 3-180, cmp instructions are not allowed to have a lock prefix, and a `UD` should be raised. Without this patch, s1->T0 will be uninitialized and used in the case OP_CMPL.

Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-ID: <20240215095015.570748-2-ziqiaokong@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 99d0dcd7f102c07a510200d768cae65e5db25d23)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

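The fix is an early #UD check in the arithmetic-op decoder; a sketch using the decoder's existing names (exact placement assumed):

    /* CMP never writes its destination, so LOCK CMP is undefined;
     * raise #UD before s1->T0 is consumed uninitialized. */
    if (op == OP_CMPL && (s1->prefix & PREFIX_LOCK)) {
        goto illegal_op;
    }
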
2024-02-20  i386/cpuid: Move leaf 7 to correct group  (Xiaoyao Li)

CPUID leaf 7 was grouped together with SGX leaf 0x12 by commit b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM") by mistake. SGX leaf 0x12 has its own logic for deciding whether a subleaf (starting from 2) is valid: bits 3:0 of the corresponding EAX must be non-zero. Leaf 7 follows the logic that EAX of subleaf 0 enumerates the maximum valid subleaf.

Fixes: b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240125024016.2521244-4-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0729857c707535847d7fe31d3d91eb8b2a118e3c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

2024-02-20  i386/cpuid: Decrease cpuid_i when skipping CPUID leaf 1F  (Xiaoyao Li)

Existing code misses a decrement of cpuid_i when it skips leaf 0x1F. This leaves a blank CPUID entry (with leaf and subleaf as 0, and all fields stuffed with 0s) in the CPUID array, which conflicts with the correct CPUID leaf 0.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240125024016.2521244-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 10f92799af8ba3c3cef2352adcd4780f13fbab31)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

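A sketch of the corrected skip inside the kvm_arch_init_vcpu() CPUID loop (the topology predicate shown is the 8.2-era one and is an assumption here):

    case 0x1f:
        if (env->nr_dies < 2) {
            /* Leaf 0x1F not exposed: hand back the array slot the
             * loop pre-allocated, or a zeroed entry that aliases
             * leaf 0 is left behind. */
            cpuid_i--;
            break;
        }
        /* fall through to the common sub-leaf handling */
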