path: root/target
Age  Commit message  Author
2024-06-07  target/loongarch: fix a wrong print in cpu dump  (lanyanzhi)
description: loongarch_cpu_dump_state() wants to dump all LoongArch CPU state registers, but there is a tiny typographical error when printing "PRCFG2".

Cc: qemu-stable@nongnu.org Signed-off-by: lanyanzhi <lanyanzhi22b@ict.ac.cn> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Song Gao <gaosong@loongson.cn> Message-Id: <20240604073831.666690-1-lanyanzhi22b@ict.ac.cn> Signed-off-by: Song Gao <gaosong@loongson.cn> (cherry picked from commit 78f932ea1f7b3b9b0ac628dc2a91281318fe51fa) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-06  target/i386: fix SSE and SSE2 feature check  (Xinyu Li)
The feature checks for CPUID_SSE and CPUID_SSE2 should use cpuid_features rather than cpuid_ext_features.

Signed-off-by: Xinyu Li <lixinyu20s@ict.ac.cn> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Message-ID: <20240602100904.2137939-1-lixinyu20s@ict.ac.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit da7c95920d027dbb00c6879c1da0216b19509191) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-06  target/i386: fix xsave.flat from kvm-unit-tests  (Paolo Bonzini)
xsave.flat checks that "executing the XSETBV instruction causes a general- protection fault (#GP) if ECX = 0 and EAX[2:1] has the value 10b". QEMU allows that option, so the test fails. Add the condition. Cc: qemu-stable@nongnu.org Fixes: 892544317fe ("target/i386: implement XSAVE and XRSTOR of AVX registers", 2022-10-18) Reported-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 7604bbc2d87d153e65e38cf2d671a5a9a35917b1) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
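The architectural rule being added here can be illustrated with a few lines of standalone C; xcr0_write_faults() is an invented name for this sketch, not QEMU code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define XCR0_SSE (1u << 1)   /* EAX bit 1 */
    #define XCR0_AVX (1u << 2)   /* EAX bit 2 */

    /* EAX[2:1] == 10b, i.e. AVX state enabled without SSE state: #GP. */
    static bool xcr0_write_faults(uint64_t newval)
    {
        return (newval & XCR0_AVX) && !(newval & XCR0_SSE);
    }

    int main(void)
    {
        printf("%d\n", xcr0_write_faults(XCR0_AVX));            /* 1: must fault */
        printf("%d\n", xcr0_write_faults(XCR0_SSE | XCR0_AVX)); /* 0: allowed */
        return 0;
    }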
2024-06-05  target/riscv/kvm.c: Fix the hart bit setting of AIA  (Yong-Xuan Wang)
In the AIA spec, each hart (or each hart within a group) has a unique hart number to locate the memory pages of interrupt files in the address space. The number of bits required to represent any hart number is equal to ceil(log2(hmax + 1)), where hmax is the largest hart number among groups.

However, if the largest hart number among groups is a power of 2, QEMU will pass an inaccurate hart-index-bit setting to Linux. For example, when the guest OS has 4 harts, only ceil(log2(3 + 1)) = 2 bits are sufficient to represent 4 harts, but we pass 3 to Linux. The code needs to be updated to ensure accurate hart-index-bit settings.

Additionally, a Linux patch[1] is necessary to correctly recover the hart index when the guest OS has only 1 hart, where the hart-index-bit is 0.

[1] https://lore.kernel.org/lkml/20240415064905.25184-1-yongxuan.wang@sifive.com/t/

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240515091129.28116-1-yongxuan.wang@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 190b867f28cb5781f3cd01a3deb371e4211595b1) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
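The bit-width calculation described above can be sketched in plain C; hart_index_bits() below is an illustrative name, not a QEMU function:

    #include <stdio.h>

    /* Smallest number of bits such that 2^bits >= hmax + 1. */
    static int hart_index_bits(unsigned long hmax)
    {
        int bits = 0;
        while ((1UL << bits) < hmax + 1) {
            bits++;
        }
        return bits;
    }

    int main(void)
    {
        /* 4 harts -> hart numbers 0..3 -> hmax = 3 -> 2 bits, not 3. */
        printf("%d\n", hart_index_bits(3));  /* prints 2 */
        printf("%d\n", hart_index_bits(4));  /* 5 harts would need 3 bits */
        return 0;
    }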
2024-06-05  target/riscv: rvzicbo: Fixup CBO extension register calculation  (Alistair Francis)
When running the instruction

```
cbo.flush 0(x0)
```

QEMU would segfault. The issue was in cpu_gpr[a->rs1], as QEMU does not have cpu_gpr[0] allocated.

In order to fix this let's use the existing get_address() helper. This also has the benefit of performing pointer mask calculations on the address specified in rs1. The pointer masking specification specifically states:

"""
Cache Management Operations: All instructions in Zicbom, Zicbop and Zicboz
"""

So this is the correct behaviour and we previously have been incorrectly not masking the address.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reported-by: Fabian Thomas <fabian.thomas@cispa.de> Fixes: e05da09b7cfd ("target/riscv: implement Zicbom extension") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240514023910.301766-1-alistair.francis@wdc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit c5eb8d6336741dbcb98efcc347f8265bf60bc9d1) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: do not set mtval2 for non guest-page faults  (Alexei Filippov)
The previous patch fixed the PMP priority in raise_mmu_exception(), but we're still setting mtval2 incorrectly. In riscv_cpu_tlb_fill(), after the PMP check in the two-stage translation part, mtval2 will be set when the two-stage translation succeeds but the PMP check fails. In that case we set mtval2 from env->guest_phys_fault_addr in the context of riscv_cpu_tlb_fill(), as if this had been a guest page fault, but it wasn't, and mtval2 should be zero, according to the RISC-V privileged spec sect. 9.4.4:

    When a guest page-fault is taken into M-mode, mtval2 is written with either zero or guest physical address that faulted, shifted by 2 bits. *For other traps, mtval2 is set to zero...*

Signed-off-by: Alexei Filippov <alexei.filippov@syntacore.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20240503103052.6819-1-alexei.filippov@syntacore.com> Cc: qemu-stable <qemu-stable@nongnu.org> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 6c9a344247132ac6c3d0eb9670db45149a29c88f) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: prioritize pmp errors in raise_mmu_exception()  (Daniel Henrique Barboza)
raise_mmu_exception(), as it is today, prioritizes guest page faults by checking first if virt_enabled && !first_stage, and only then considering the regular inst/load/store faults. There's no mention in the spec of guest page faults having a higher priority than PMP faults. In fact, privileged spec section 3.7.1 says:

    "Attempting to fetch an instruction from a PMP region that does not have execute permissions raises an instruction access-fault exception. Attempting to execute a load or load-reserved instruction which accesses a physical address within a PMP region without read permissions raises a load access-fault exception. Attempting to execute a store, store-conditional, or AMO instruction which accesses a physical address within a PMP region without write permissions raises a store access-fault exception."

So, in fact, we're doing it wrong: PMP faults should always be thrown, regardless of also being a first or second stage fault. The way riscv_cpu_tlb_fill() and get_physical_address() work is adequate: a TRANSLATE_PMP_FAIL error is immediately reported and reflected in the 'pmp_violation' flag. What we need is to change raise_mmu_exception() to prioritize it.

Reported-by: Joseph Chan <jchan@ventanamicro.com> Fixes: 82d53adfbb ("target/riscv/cpu_helper.c: Invalid exception on MMU translation stage") Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20240413105929.7030-1-alexei.filippov@syntacore.com> Cc: qemu-stable <qemu-stable@nongnu.org> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 68e7c86927afa240fa450578cb3a4f18926153e4) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
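A rough illustration of the new ordering follows; the enum and helper names are invented for this sketch, the real logic is the raise_mmu_exception() change described above:

    #include <stdbool.h>
    #include <stdio.h>

    enum cause { ACCESS_FAULT, GUEST_PAGE_FAULT, PAGE_FAULT };

    /* PMP violations win over guest-page faults, which win over
     * ordinary page faults. */
    static enum cause pick_cause(bool pmp_violation, bool virt_enabled,
                                 bool first_stage)
    {
        if (pmp_violation) {
            return ACCESS_FAULT;
        }
        if (virt_enabled && !first_stage) {
            return GUEST_PAGE_FAULT;
        }
        return PAGE_FAULT;
    }

    int main(void)
    {
        /* A PMP failure during the second stage is still an access fault. */
        printf("%d\n", pick_cause(true, true, false) == ACCESS_FAULT);      /* 1 */
        printf("%d\n", pick_cause(false, true, false) == GUEST_PAGE_FAULT); /* 1 */
        return 0;
    }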
2024-06-05  target/riscv: rvv: Remove redundant SEW checking for vector fp narrow/widen instructions  (Max Chou)
If the checking functions check both the single and double width operators at the same time, then the single width operator checking functions (require_rvf[min]) will check whether the SEW is 8.

Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240322092600.1198921-5-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 93cb52b7a3ccc64e8d28813324818edae07e21d5) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: rvv: Check single width operator for vfncvt.rod.f.f.w  (Max Chou)
The opfv_narrow_check needs to check the single width float operator by require_rvf. Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240322092600.1198921-4-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 692f33a3abcaae789b08623e7cbdffcd2c738c89) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: rvv: Check single width operator for vector fp widen instructions  (Max Chou)
The require_scale_rvf function only checks the double width operator for the vector floating point widen instructions, so most of the widen checking functions need to add require_rvf for single width operator. The vfwcvt.f.x.v and vfwcvt.f.xu.v instructions convert single width integer to double width float, so the opfxv_widen_check function doesn’t need require_rvf for the single width operator(integer). Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240322092600.1198921-3-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 7a999d4dd704aa71fe6416871ada69438b56b1e5) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: rvv: Fix Zvfhmin checking for vfwcvt.f.f.v and vfncvt.f.f.w instructions  (Max Chou)
According to the V spec section 18.4, only the vfwcvt.f.f.v and vfncvt.f.f.w instructions are affected by the Zvfhmin extension. And the vfwcvt.f.f.v and vfncvt.f.f.w instructions only support the conversions of:

* From 1*SEW(16/32) to 2*SEW(32/64)
* From 2*SEW(32/64) to 1*SEW(16/32)

Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240322092600.1198921-2-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 17b713c0806e72cd8edc6c2ddd8acc5be0475df6) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv/cpu.c: fix Zvkb extension config  (Yangyu Chen)
This code has a typo that writes zvkb to zvkg, so users can't enable zvkb through the config. This patch fixes that.

Signed-off-by: Yangyu Chen <cyy@cyyself.name> Fixes: ea61ef7097d0 ("target/riscv: Move vector crypto extensions to riscv_cpu_extensions") Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Max Chou <max.chou@sifive.com> Reviewed-by: Weiwei Li <liwei1518@gmail.com> Message-ID: <tencent_7E34EEF0F90B9A68BF38BEE09EC6D4877C0A@qq.com> Cc: qemu-stable <qemu-stable@nongnu.org> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit ff33b7a9699e977a050a1014c617a89da1bf8295) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv: Fix the element agnostic function problem  (Huang Tao)
In RVV and vcrypto instructions, the masked and tail elements are set to 1s using the vext_set_elems_1s function if the vma/vta bit is set; this is the element agnostic policy. However, this function can't deal with the big-endian situation. This patch fixes the problem by adding handling for that case.

Signed-off-by: Huang Tao <eric.huang@linux.alibaba.com> Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240325021654.6594-1-eric.huang@linux.alibaba.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 75115d880c6d396f8a2d56aab8c12236d85a90e0) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-06-05  target/riscv/kvm: tolerate KVM disable ext errors  (Daniel Henrique Barboza)
Running a KVM guest using a 6.9-rc3 kernel, in a 6.8 host that has zkr enabled, will fail with a kernel oops SIGILL right at the start. The reason is that we can't expose zkr without implementing the SEED CSR. Disabling zkr in the guest would be a workaround, but if the KVM doesn't allow it we'll error out and never boot. In hindsight this is too strict. If we keep proceeding, despite not disabling the extension in the KVM vcpu, we'll not add the extension in the riscv,isa. The guest kernel will be unaware of the extension, i.e. it doesn't matter if the KVM vcpu has it enabled underneath or not. So it's ok to keep booting in this case. Change our current logic to not error out if we fail to disable an extension in kvm_set_one_reg(), but show a warning and keep booting. It is important to throw a warning because we must make the user aware that the extension is still available in the vcpu, meaning that an ill-behaved guest can ignore the riscv,isa settings and use the extension. The case we're handling happens with an EINVAL error code. If we fail to disable the extension in KVM for any other reason, error out. We'll also keep erroring out when we fail to enable an extension in KVM, since adding the extension in riscv,isa at this point will cause a guest malfunction because the extension isn't enabled in the vcpu. Suggested-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Cc: qemu-stable <qemu-stable@nongnu.org> Message-ID: <20240422171425.333037-2-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 1215d45b2aa97512a2867e401aa59f3d0c23cb23) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
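The resulting policy reads roughly like the sketch below; kvm_set_ext() and apply_ext() are placeholder names for illustration, not QEMU's KVM interface:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stub standing in for the real set-one-reg call. */
    static int kvm_set_ext(const char *name, bool enable)
    {
        (void)name;
        return enable ? 0 : -EINVAL;
    }

    static void apply_ext(const char *name, bool enable)
    {
        int ret = kvm_set_ext(name, enable);

        if (ret == 0) {
            return;
        }
        if (!enable && ret == -EINVAL) {
            /* KVM refused to disable the extension: warn and keep booting.
             * The extension is simply not advertised in riscv,isa. */
            fprintf(stderr, "warning: KVM did not disable %s\n", name);
            return;
        }
        /* Failing to enable an extension (or any other error) stays fatal. */
        fprintf(stderr, "error: could not set %s (%d)\n", name, ret);
    }

    int main(void)
    {
        apply_ext("zkr", false);   /* tolerated: prints a warning */
        return 0;
    }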
2024-06-01  target/arm: Disable SVE extensions when SVE is disabled  (Marcin Juszkiewicz)
Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2304 Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> Message-id: 20240526204551.553282-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit daf9748ac002ec35258e5986b6257961fd04b565) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-30  hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers  (Zenghui Yu)
We wrongly encoded ID_AA64PFR1_EL1 using {3,0,0,4,2} in hvf_sreg_match[] so we fail to get the expected ARMCPRegInfo from cp_regs hash table with the wrong key. Fix it with the correct encoding {3,0,0,4,1}. With that fixed, the Linux guest can properly detect FEAT_SSBS2 on my M1 HW. All DBG{B,W}{V,C}R_EL1 registers are also wrongly encoded with op0 == 14. It happens to work because HVF_SYSREG(CRn, CRm, 14, op1, op2) equals to HVF_SYSREG(CRn, CRm, 2, op1, op2), by definition. But we shouldn't rely on it. Cc: qemu-stable@nongnu.org Fixes: a1477da3ddeb ("hvf: Add Apple Silicon support") Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev> Reviewed-by: Alexander Graf <agraf@csgraf.de> Message-id: 20240503153453.54389-1-zenghui.yu@linux.dev Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 19ed42e8adc87a3c739f61608b66a046bb9237e2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target/i386: no single-step exception after MOV or POP SS  (Paolo Bonzini)
Intel SDM 18.3.1.4 "If an occurrence of the MOV or POP instruction loads the SS register executes with EFLAGS.TF = 1, no single-step debug exception occurs following the MOV or POP instruction." Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit f0f0136abba688a6516647a79cc91e03fad6d5d7) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target/i386: disable jmp_opt if EFLAGS.RF is 1  (Paolo Bonzini)
If EFLAGS.RF is 1, special processing in gen_eob_worker() is needed and therefore goto_tb cannot be used. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 8225bff7c5db504f50e54ef66b079854635dba70) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target-i386: hyper-v: Correct kvm_hv_handle_exit return value  (donsheng)
This bug fix addresses the incorrect return value of kvm_hv_handle_exit for KVM_EXIT_HYPERV_SYNIC, which should be EXCP_INTERRUPT. Handling of KVM_EXIT_HYPERV_SYNIC in QEMU needs to be synchronous. This means that async_synic_update should run in the current QEMU vCPU thread before returning to KVM, returning EXCP_INTERRUPT to guarantee this. Returning 0 can cause async_synic_update to run asynchronously. One problem (kvm-unit-tests's hyperv_synic test fails with timeout error) caused by this bug: When a guest VM writes to the HV_X64_MSR_SCONTROL MSR to enable Hyper-V SynIC, a VM exit is triggered and processed by the kvm_hv_handle_exit function of the QEMU vCPU. This function then calls the async_synic_update function to set synic->sctl_enabled to true. A true value of synic->sctl_enabled is required before creating SINT routes using the hyperv_sint_route_new() function. If kvm_hv_handle_exit returns 0 for KVM_EXIT_HYPERV_SYNIC, the current QEMU vCPU thread may return to KVM and enter the guest VM before running async_synic_update. In such case, the hyperv_synic test’s subsequent call to synic_ctl(HV_TEST_DEV_SINT_ROUTE_CREATE, ...) immediately after writing to HV_X64_MSR_SCONTROL can cause QEMU’s hyperv_sint_route_new() function to return prematurely (because synic->sctl_enabled is false). If the SINT route is not created successfully, the SINT interrupt will not be fired, resulting in a timeout error in the hyperv_synic test. Fixes: 267e071bd6d6 (“hyperv: make overlay pages for SynIC”) Suggested-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com> Message-ID: <20240521200114.11588-1-dongsheng.x.zhang@intel.com> Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 84d4b72854869821eb89813c195927fdd3078c12) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target/i386: fix feature dependency for WAITPKG  (Paolo Bonzini)
The VMX feature bit depends on general availability of WAITPKG, not the other way round. Fixes: 33cc88261c3 ("target/i386: add support for VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE", 2023-08-28) Cc: qemu-stable@nongnu.org Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit fe01af5d47d4cf7fdf90c54d43f784e5068c8d72) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target/i386: rdpkru/wrpkru are no-prefix instructions  (Paolo Bonzini)
Reject 0x66/0xf3/0xf2 in front of them. Cc: qemu-stable@nongnu.org Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 40a3ec7b5ffde500789d016660a171057d6b467c) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-27  target/i386: fix operand size for DATA16 REX.W POPCNT  (Paolo Bonzini)
According to the manual, 32-bit vs 64-bit is governed by REX.W and REX ignores the 0x66 prefix. This can be confirmed with this program:

    #include <stdio.h>
    int main() {
        int x = 0x12340000;
        int y;
        asm("popcntl %1, %0" : "=r" (y) : "r" (x)); printf("%x\n", y);
        asm("mov $-1, %0; .byte 0x66; popcntl %1, %0" : "+r" (y) : "r" (x)); printf("%x\n", y);
        asm("mov $-1, %0; .byte 0x66; popcntq %q1, %q0" : "+r" (y) : "r" (x)); printf("%x\n", y);
    }

which prints 5/ffff0000/5 on real hardware and 5/ffff0000/ffff0000 on QEMU.

Cc: qemu-stable@nongnu.org Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 41c685dc59bb611096f3bb6a663cfa82e4cba97b) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: drop removal of mo_64_32() helper function in target/i386/tcg/translate.c due to missing-in-8.2 v9.0.0-542-gaef4f4affde2 "target/i386: remove now-converted opcodes from old decoder" which removed other user of it)
2024-05-13  target/sparc: Fix FMUL8x16  (Richard Henderson)
This instruction has f32 as source1, which alters the decoding of the register number, which means we've been passing the wrong data for odd register numbers. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240502165528.244004-4-richard.henderson@linaro.org> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (cherry picked from commit 9157dccc7e71f7c94581c38f38acbef9a21bbe9a) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-13  target/sparc: Fix FEXPAND  (Richard Henderson)
This is a 2-operand instruction, not 3-operand. Worse, we took the source from the wrong operand. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240502165528.244004-3-richard.henderson@linaro.org> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (cherry picked from commit 7b616f36de0bde126e1ba6b0793ed26fc414a1ff) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-13  target/i386: Give IRQs a chance when resetting HF_INHIBIT_IRQ_MASK  (Ruihan Li)
When emulated with QEMU, interrupts will never come in the following loop. However, if the NOP instruction is uncommented, interrupts will fire as normal.

    loop:
        cli
        call do_sti
        jmp loop

    do_sti:
        sti
        # nop
        ret

This behavior is different from that of a real processor. For example, if KVM is enabled, interrupts will always fire regardless of whether the NOP instruction is commented or not. Also, the Intel Software Developer Manual states that after the STI instruction is executed, the interrupt inhibit should end as soon as the next instruction (e.g., the RET instruction if the NOP instruction is commented) is executed.

This problem is caused because the previous code may choose not to end the TB even if the HF_INHIBIT_IRQ_MASK has just been reset (e.g., in the case where the STI instruction is immediately followed by the RET instruction), so that IRQs may not have a chance to trigger. This commit fixes the problem by always terminating the current TB to give IRQs a chance to trigger when HF_INHIBIT_IRQ_MASK is reset.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn> Message-ID: <20240415064518.4951-4-lrh2000@pku.edu.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 6a5a63f74ba5c5355b7a8468d3d814bfffe928fb) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-05-04  target/sh4: Fix SUBV opcode  (Philippe Mathieu-Daudé)
The documentation says:

    SUBV Rm, Rn    Rn - Rm -> Rn, underflow -> T

The overflow / underflow can be calculated as:

    T = ((Rn ^ Rm) & (Result ^ Rn)) >> 31

However we were using the incorrect:

    T = ((Rn ^ Rm) & (Result ^ Rm)) >> 31

Fix by using the Rn register instead of Rm. Add tests provided by Paul Cercueil.

Cc: qemu-stable@nongnu.org Fixes: ad8d25a11f ("target-sh4: implement addv and subv using TCG") Reported-by: Paul Cercueil <paul@crapouillou.net> Suggested-by: Paul Cercueil <paul@crapouillou.net> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2318 Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> Message-Id: <20240430163125.77430-3-philmd@linaro.org> (cherry picked from commit e88a856efd1d3c3ffa8e53da4831eff8da290808) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
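The difference between the two formulas can be checked with a few lines of standalone C; subv_t_fixed() and subv_t_broken() are names made up for this worked example, not QEMU helpers:

    #include <stdint.h>
    #include <stdio.h>

    /* Correct: T = ((Rn ^ Rm) & (Result ^ Rn)) >> 31 */
    static unsigned subv_t_fixed(uint32_t rn, uint32_t rm)
    {
        uint32_t res = rn - rm;
        return ((rn ^ rm) & (res ^ rn)) >> 31;
    }

    /* What QEMU computed before the fix: Result ^ Rm instead of Result ^ Rn. */
    static unsigned subv_t_broken(uint32_t rn, uint32_t rm)
    {
        uint32_t res = rn - rm;
        return ((rn ^ rm) & (res ^ rm)) >> 31;
    }

    int main(void)
    {
        /* 0x80000000 - 1 underflows the signed range, so T must be 1. */
        printf("fixed=%u broken=%u\n",
               subv_t_fixed(0x80000000u, 1),    /* 1 */
               subv_t_broken(0x80000000u, 1));  /* 0, the bug */
        return 0;
    }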
2024-05-04  target/sh4: Fix ADDV opcode  (Philippe Mathieu-Daudé)
The documentation says:

    ADDV Rm, Rn    Rn + Rm -> Rn, overflow -> T

But QEMU implementation was:

    ADDV Rm, Rn    Rn + Rm -> Rm, overflow -> T

Fix by filling the correct Rm register. Add tests provided by Paul Cercueil.

Cc: qemu-stable@nongnu.org Fixes: ad8d25a11f ("target-sh4: implement addv and subv using TCG") Reported-by: Paul Cercueil <paul@crapouillou.net> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2317 Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> Message-Id: <20240430163125.77430-2-philmd@linaro.org> (cherry picked from commit c365e6b0705788866a65e7b8206bd4c5332595cd) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-30  target/loongarch/cpu.c: typo fix: expection  (Michael Tokarev)
Fixes: 1590154ee437 ("target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'") Fixes: ef9b43bb8e2d (in stable-8.2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 0cbb322f70e8a87e4acbffecef5ea8f9448f3513) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-27  target/riscv/kvm: change timer regs size to u64  (Daniel Henrique Barboza)
KVM_REG_RISCV_TIMER regs are always u64 according to the KVM API, but at this moment we'll return u32 regs if we're running a RISCV32 target. Use the kvm_riscv_reg_id_u64() helper in RISCV_TIMER_REG() to fix it. Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-4-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 10f86d1b845087d14b58d65dd2a6e3411d1b6529) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-27  target/riscv/kvm: change KVM_REG_RISCV_FP_D to u64  (Daniel Henrique Barboza)
KVM_REG_RISCV_FP_D regs are always u64 size. Using kvm_riscv_reg_id() in RISCV_FP_D_REG() ends up encoding the wrong size if we're running with TARGET_RISCV32. Create a new helper that returns a KVM ID with u64 size and use it with RISCV_FP_D_REG(). Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-3-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 450bd6618fda3d2e2ab02b2fce1c79efd5b66084) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-27  target/riscv/kvm: change KVM_REG_RISCV_FP_F to u32  (Daniel Henrique Barboza)
KVM_REG_RISCV_FP_F regs have u32 size according to the API, but by using kvm_riscv_reg_id() in RISCV_FP_F_REG() we're returning u64 sizes when running with TARGET_RISCV64. The most likely reason why no one noticed this is because we're not implementing kvm_cpu_synchronize_state() in RISC-V yet. Create a new helper that returns a KVM ID with u32 size and use it in RISCV_FP_F_REG(). Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-2-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 49c211ffca00fdf7c0c29072c224e88527a14838) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
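For background, the three size fixes above all concern the size field of a KVM one-reg ID. The tiny standalone illustration below assumes the usual layout (log2 of the byte size stored in bits 52..55); real code should take the constants from <linux/kvm.h> rather than redefining them:

    #include <stdint.h>
    #include <stdio.h>

    #define REG_SIZE_SHIFT 52
    #define REG_SIZE_U32   (2ULL << REG_SIZE_SHIFT)   /* 4-byte register */
    #define REG_SIZE_U64   (3ULL << REG_SIZE_SHIFT)   /* 8-byte register */

    static unsigned reg_size_bytes(uint64_t id)
    {
        return 1u << ((id >> REG_SIZE_SHIFT) & 0xf);
    }

    int main(void)
    {
        /* FP_F register IDs must carry the u32 size, FP_D and TIMER register
         * IDs the u64 size, independent of the target's XLEN. */
        printf("%u\n", reg_size_bytes(REG_SIZE_U32));  /* 4 */
        printf("%u\n", reg_size_bytes(REG_SIZE_U64));  /* 8 */
        return 0;
    }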
2024-04-10  target/m68k: Map FPU exceptions to FPSR register  (Keith Packard)
Add helpers for reading/writing the 68881 FPSR register so that changes in floating point exception state can be seen by the application. Call these helpers in pre_load/post_load hooks to synchronize exception state. Signed-off-by: Keith Packard <keithp@keithp.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230803035231.429697-1-keithp@keithp.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 5888357942da1fd5a50efb6e4a6af8b1a27a5af8) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-10  target/sh4: add missing CHECK_NOT_DELAY_SLOT  (Zack Buhman)
CHECK_NOT_DELAY_SLOT is correctly applied to the branch-related instructions, but not to the PC-relative mov* instructions. I verified the existence of an illegal slot exception on a SH7091 when any of these instructions are attempted inside a delay slot. This also matches the behavior described in the SH-4 ISA manual. Signed-off-by: Zack Buhman <zack@buhman.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240407150705.5965-1-zack@buhman.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewd-by: Yoshinori Sato <ysato@users.sourceforge.jp> (cherry picked from commit b754cb2dcde26a7bc8a9d17bb6900a0ac0dd38e2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-10  target/sh4: Fix mac.w with saturation enabled  (Zack Buhman)
The saturation arithmetic logic in helper_macw is not correct. I tested and verified this behavior on a SH7091. Reviewd-by: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Zack Buhman <zack@buhman.org> Message-Id: <20240405233802.29128-3-zack@buhman.org> [rth: Reformat helper_macw, add a test case.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit 7227c0cd506eaab5b1d89d15832cac7e05ecb412) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-10  target/sh4: Fix mac.l with saturation enabled  (Zack Buhman)
The saturation arithmetic logic in helper_macl is not correct. I tested and verified this behavior on a SH7091. Signed-off-by: Zack Buhman <zack@buhman.org> Message-Id: <20240404162641.27528-2-zack@buhman.org> [rth: Reformat helper_macl, add a test case.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit c97e8977dcacb3fa8362ee28bcee75ceb01fceaa) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-10  target/sh4: Merge mach and macl into a union  (Richard Henderson)
Allow host access to the entire 64-bit accumulator. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 7d95db5e78a24d3315e3112d26909a7262355cb7) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-10  target/sh4: mac.w: memory accesses are 16-bit words  (Zack Buhman)
Before this change, executing a code sequence such as:

        mova tblm,r0
        mov r0,r1
        mova tbln,r0
        clrs
        clrmac
        mac.w @r0+,@r1+
        mac.w @r0+,@r1+

        .align 4
    tblm:
        .word 0x1234
        .word 0x5678
    tbln:
        .word 0x9abc
        .word 0xdefg

does not result in correct behavior:

    Expected behavior:
        first macw : macl = 0x1234 * 0x9abc + 0x0
                     mach = 0x0
        second macw: macl = 0x5678 * 0xdefg + 0xb00a630
                     mach = 0x0

    Observed behavior (qemu-sh4eb, prior to this commit):
        first macw : macl = 0x5678 * 0xdefg + 0x0
                     mach = 0x0
        second macw: (unaligned longword memory access, SIGBUS)

Various SH-4 ISA manuals also confirm that `mac.w` is a 16-bit word memory access, not a 32-bit longword memory access.

Signed-off-by: Zack Buhman <zack@buhman.org> Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240402093756.27466-1-zack@buhman.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit b0f2f2976b4db05351117b0440b32bf0aac2c5c6) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-09  target/arm: Use correct SecuritySpace for AArch64 AT ops at EL3  (Peter Maydell)
When we do an AT address translation operation, the page table walk is supposed to be performed in the context of the EL we're doing the walk for, so for instance an AT S1E2R walk is done for EL2. In the pseudocode an EL is passed to AArch64.AT(), which calls SecurityStateAtEL() to find the security state that we should be doing the walk with. In ats_write64() we get this wrong, instead using the current security space always. This is fine for AT operations performed from EL1 and EL2, because there the current security state and the security state for the lower EL are the same. But for AT operations performed from EL3, the current security state is always either Secure or Root, whereas we want to use the security state defined by SCR_EL3.{NS,NSE} for the walk. This affects not just guests using FEAT_RME but also ones where EL3 is Secure state and the EL3 code is trying to do an AT for a NonSecure EL2 or EL1. Use arm_security_space_below_el3() to get the SecuritySpace to pass to do_ats_write() for all AT operations except the AT S1E3* operations. Cc: qemu-stable@nongnu.org Fixes: e1ee56ec2383 ("target/arm: Pass security space rather than flag for AT instructions") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2250 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240405180232.3570066-1-peter.maydell@linaro.org (cherry picked from commit 19b254e86a900dc5ee332e3ac0baf9c521301abf) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-02  target/arm: take HSTR traps of cp15 accesses to EL2, not EL1  (Peter Maydell)
The HSTR_EL2 register allows the hypervisor to trap AArch32 EL1 and EL0 accesses to cp15 registers. We incorrectly implemented this so they trap to EL1 when we detect the need for a HSTR trap at code generation time. (The check in access_check_cp_reg() which we do at runtime to catch traps from EL0 is correctly routing them to EL2.) Use the correct target EL when generating the code to take the trap. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2226 Fixes: 049edada5e93df ("target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240325133116.2075362-1-peter.maydell@linaro.org (cherry picked from commit fbe5ac5671a9cfcc7f4aee9a5fac7720eea08876) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-04-01  target/hppa: Clear psw_n for BE on use_nullify_skip path  (Richard Henderson)
Along this path we have already skipped the insn to be nullified, so the subsequent insn should be executed. Cc: qemu-stable@nongnu.org Reported-by: Sven Schnelle <svens@stackframe.org> Tested-by: Sven Schnelle <svens@stackframe.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 4a3aa11e1fb25c28c24a43fd2835c429b00a463d) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv/kvm: fix timebase-frequency when using KVM acceleration  (Yong-Xuan Wang)
The timebase-frequency of the guest OS should be the same as the host machine's. The timebase-frequency value in the DTS should be obtained from the hypervisor when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com> Message-ID: <20240314061510.9800-1-yongxuan.wang@sifive.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 385e575cd5ab2436c123e4b7f8c9b383a64c0dbe) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: context fix due to missing other changes in this area in 8.2.x)
2024-03-27  target/riscv: Fix mode in riscv_tlb_fill  (Irina Ryapolova)
Need to convert mmu_idx to privilege mode for PMP function. Signed-off-by: Irina Ryapolova <irina.ryapolova@syntacore.com> Fixes: b297129ae1 ("target/riscv: propagate PMP permission to TLB page") Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-ID: <20240320172828.23965-1-irina.ryapolova@syntacore.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit e06adebb08325c39e4c9b652139426c10f021abb) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv: rvv: Remove the dependency of Zvfbfmin to Zfbfmin  (Max Chou)
According to the Zvfbfmin definition in the RISC-V BF16 extensions spec, the Zvfbfmin extension only requires either the V extension or the Zve32f extension. Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20240321170929.1162507-1-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit c9b07fe14d3525cd3f2fc01f46eeb3d4ed7c3603) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv/vector_helper.c: optimize loops in ldst helpers  (Daniel Henrique Barboza)
Change the for loops in the ldst helpers to do a single increment of the counter and assign it to env->vstart, to avoid re-reading vstart every time.

Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240314175704.478276-11-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 0a11629c915f61df798919db51a18ffe4649cb65) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv/vector_helpers: do early exit when vstart >= vl  (Daniel Henrique Barboza)
We're going to make changes that will require each helper to be responsible for the 'vstart' management, i.e. we will relieve the 'vstart < vl' assumption that helpers have today. Helpers are usually able to deal with vstart >= vl, i.e. doing nothing aside from setting vstart = 0 at the end, but the tail update functions will update the tail regardless of vstart being valid or not. Unifying the tail update process in a single function that would handle the vstart >= vl case isn't trivial (see [1] for more info).

This patch takes a blunt approach: do an early exit in every single vector helper if vstart >= vl, unless the helper is guarded with vstart_eq_zero in the translation. For those cases the helper is ready to deal with cases where vl might be zero, i.e. throwing exceptions based on it like vcpop_m() and first_m().

Helpers that weren't changed:

- vcpop_m(), vfirst_m(), vmsetm(), GEN_VEXT_VIOTA_M(): these are guarded directly with vstart_eq_zero;
- GEN_VEXT_VCOMPRESS_VM(): guarded with vcompress_vm_check() that checks vstart_eq_zero;
- GEN_VEXT_RED(): guarded with either reduction_check() or reduction_widen_check(), both check vstart_eq_zero;
- GEN_VEXT_FRED(): guarded with either freduction_check() or freduction_widen_check(), both check vstart_eq_zero.

Another exception is vext_ldst_whole(), which operates on the effective vector length regardless of the current settings in vtype and vl.

[1] https://lore.kernel.org/qemu-riscv/1590234b-0291-432a-a0fa-c5a6876097bc@linux.alibaba.com/

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Acked-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20240314175704.478276-7-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit df4252b2ecaf93b601109373a17427d1867046e8) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
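A minimal sketch of the early-exit pattern under these assumptions follows; the struct and helper below are invented stand-ins, not the actual vector_helper.c code:

    #include <stdint.h>
    #include <stdio.h>

    struct toy_env { uint32_t vstart, vl; };

    static void toy_vector_helper(struct toy_env *env)
    {
        if (env->vstart >= env->vl) {   /* early exit: nothing to do */
            env->vstart = 0;
            return;
        }
        for (uint32_t i = env->vstart; i < env->vl; i++) {
            /* per-element work would go here */
        }
        /* tail update runs only when vstart < vl held on entry */
        env->vstart = 0;
    }

    int main(void)
    {
        struct toy_env env = { .vstart = 8, .vl = 4 };
        toy_vector_helper(&env);
        printf("%u\n", env.vstart);   /* 0 */
        return 0;
    }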
2024-03-27  target/riscv: always clear vstart in whole vec move insns  (Daniel Henrique Barboza)
These insns have 2 paths: we'll either have vstart already cleared if vstart_eq_zero, or we'll do a brcond to check if vstart >= maxsz to call the 'vmvr_v' helper. The helper will clear vstart if it executes until the end, or if vstart >= vl.

For starters, the check itself is wrong: we're checking vstart >= maxsz, when in fact we should use vstart in bytes, or 'startb' as 'vmvr_v' calls it, to do the comparison. But even after fixing the comparison we'd still need to clear vstart at the end, which isn't happening either.

We want to make the helpers responsible for managing vstart, including these corner cases, precisely to avoid these situations:

- remove the wrong vstart >= maxsz cond from the translation;
- add a 'startb >= maxsz' cond in 'vmvr_v', and clear vstart if that happens.

This way we're now sure that vstart is being cleared at the end of the execution, regardless of the path taken.

Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR") Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240314175704.478276-5-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 7e53e3ddf6dff200098e112c5370ab16d2d5dbd1) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv/vector_helper.c: fix 'vmvr_v' memcpy endianness  (Daniel Henrique Barboza)
vmvr_v isn't handling the case where the host might be big endian and the bytes to be copied aren't sequential. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR") Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240314175704.478276-4-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 768e7b329c0be22035da077fe76221dd0a47103b) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  trans_rvv.c.inc: set vstart = 0 in int scalar move insns  (Daniel Henrique Barboza)
trans_vmv_x_s, trans_vmv_s_x, trans_vfmv_f_s and trans_vfmv_s_f aren't setting vstart = 0 after execution. This is usually done by a helper in vector_helper.c but these functions don't use helpers. We'll set vstart after any potential 'over' brconds, and that will also mandate a mark_vs_dirty() too. Fixes: dedc53cbc9 ("target/riscv: rvv-1.0: integer scalar move instructions") Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20240314175704.478276-3-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 0848f7c18ef50de9f955e7eeb4363d92766a41bf) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/riscv/vector_helper.c: set vstart = 0 in GEN_VEXT_VSLIDEUP_VX()  (Daniel Henrique Barboza)
The helper isn't setting env->vstart = 0 after its execution, as it is expected from every vector instruction that completes successfully. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-ID: <20240314175704.478276-2-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit d3646e31ce6d1e02e46e6eabdbc2e637c0cbece7) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-03-27  target/i386/tcg: Enable page walking from MMIO memory  (Gregory Price)
CXL emulation of interleave requires read and write hooks due to the requirement for subpage granularity. The Linux kernel stack now enables using this memory as conventional memory in a separate NUMA node. If a process is deliberately forced to run from that node

    $ numactl --membind=1 ls

the page table walk on i386 fails.

Useful part of backtrace:

    (cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p") at ../../cpu-target.c:359
    (retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>) at ../../accel/tcg/cputlb.c:1339
    (cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
    (cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
    (cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
    at ../../accel/tcg/ldst_common.c.inc:301
    at ../../target/i386/tcg/sysemu/excp_helper.c:173
    (err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0) at ../../target/i386/tcg/sysemu/excp_helper.c:578
    (cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604

Avoid this by plumbing the address all the way down from x86_cpu_tlb_fill(), where it is available as retaddr, to the actual accessors, which provide it to probe_access_full(), which already handles MMIO accesses.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220 Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Gregory Price <gregory.price@memverge.com> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit 9dab7bbb017d11b64c52239fa4e2f910a6a004f2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>