path: root/target
Age         Commit message                                                Author
2023-12-20  target/arm/helper: Propagate MDCR_EL2.HPMN into PMCR_EL0.N  (Jean-Philippe Brucker)
MDCR_EL2.HPMN allows a hypervisor to limit the number of PMU counters available to EL1 and EL0 (to keep the others to itself). QEMU already implements this split correctly, except for PMCR_EL0.N reads: the number of counters read by EL1 or EL0 should be the one configured in MDCR_EL2.HPMN. Cc: qemu-stable@nongnu.org Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20231215144652.4193815-2-jean-philippe@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 6980c31dec42b6daebf7fec13b2d39ed87bb4766) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
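As an illustration of the HPMN split described above (a minimal, self-contained sketch, not the actual QEMU helper), a read of PMCR_EL0.N from EL1 or EL0 can be modelled as clamping the implemented counter count to MDCR_EL2.HPMN:

    #include <stdint.h>

    /* Hypothetical sketch: number of PMU counters reported in PMCR_EL0.N.
     * 'impl_n' is the number of counters the CPU implements, 'hpmn' is
     * MDCR_EL2.HPMN, 'el' is the exception level doing the read. */
    static unsigned pmcr_n_for_el(unsigned impl_n, unsigned hpmn,
                                  int el2_enabled, int el)
    {
        if (el2_enabled && el < 2 && hpmn < impl_n) {
            return hpmn;    /* EL1/EL0 only see the first HPMN counters */
        }
        return impl_n;      /* EL2/EL3 (or no EL2) see all counters */
    }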
2023-12-20  target/arm: Disable SME if SVE is disabled  (Peter Maydell)
There is no architectural requirement that SME implies SVE, but our implementation currently assumes it. (FEAT_SME_FA64 does imply SVE.) So if you try to run a CPU with e.g. "-cpu max,sve=off" you quickly run into an assert when the guest tries to write to SMCR_EL1:

  #6  0x00007ffff4b38e96 in __GI___assert_fail (assertion=0x5555566e69cb "sm", file=0x5555566e5b24 "../../target/arm/helper.c", line=6865, function=0x5555566e82f0 <__PRETTY_FUNCTION__.31> "sve_vqm1_for_el_sm") at ./assert/assert.c:101
  #7  0x0000555555ee33aa in sve_vqm1_for_el_sm (env=0x555557d291f0, el=2, sm=false) at ../../target/arm/helper.c:6865
  #8  0x0000555555ee3407 in sve_vqm1_for_el (env=0x555557d291f0, el=2) at ../../target/arm/helper.c:6871
  #9  0x0000555555ee3724 in smcr_write (env=0x555557d291f0, ri=0x555557da23b0, value=2147483663) at ../../target/arm/helper.c:6995
  #10 0x0000555555fd1dba in helper_set_cp_reg64 (env=0x555557d291f0, rip=0x555557da23b0, value=2147483663) at ../../target/arm/tcg/op_helper.c:839
  #11 0x00007fff60056781 in code_gen_buffer ()

Avoid this unsupported and slightly odd combination by disabling SME when SVE is not present.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2005
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231127173318.674758-1-peter.maydell@linaro.org
(cherry picked from commit f7767ca301796334f74b9b642b395a4bd3e3dbac)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
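A small, runnable model of the dependency being enforced (hypothetical field names, not the QEMU property code):

    #include <stdbool.h>
    #include <stdio.h>

    struct cpu_features { bool sve; bool sme; };

    /* Sketch: when finalizing the feature set, drop SME if SVE was
     * disabled, mirroring "-cpu max,sve=off" no longer asserting. */
    static void finalize_features(struct cpu_features *f)
    {
        if (!f->sve) {
            f->sme = false;
        }
    }

    int main(void)
    {
        struct cpu_features f = { .sve = false, .sme = true };
        finalize_features(&f);
        printf("sve=%d sme=%d\n", f.sve, f.sme);  /* sve=0 sme=0 */
        return 0;
    }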
2023-12-05  target/arm: Set IL bit for pauth, SVE access, BTI trap syndromes  (Peter Maydell)
The syndrome register value always has an IL field at bit 25, which is 0 for a trap on a 16 bit instruction, and 1 for a trap on a 32 bit instruction (or for exceptions which aren't traps on a known instruction, like PC alignment faults). This means that our syn_*() functions should always either take an is_16bit argument to determine whether to set the IL bit, or else unconditionally set it. We missed setting the IL bit for the syndrome for three kinds of trap: * an SVE access exception * a pointer authentication check failure * a BTI (branch target identification) check failure All of these traps are AArch64 only, and so the instruction causing the trap is always 64 bit. This means we can unconditionally set the IL bit in the syn_*() function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20231120150121.3458408-1-peter.maydell@linaro.org Cc: qemu-stable@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 11a3c4a286d5dc603582ea0a1fca62c2ec0a1aee) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
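For reference, IL is bit 25 of the ESR_ELx syndrome, with EC in bits [31:26] and ISS in bits [24:0]. A self-contained sketch of a syndrome builder for traps that can only come from 32-bit AArch64 instructions (the macro names are illustrative, not QEMU's):

    #include <stdint.h>

    #define EC_SHIFT   26          /* ESR_ELx.EC is bits [31:26] */
    #define IL_BIT     (1u << 25)  /* 1 = 32-bit instruction */
    #define ISS_MASK   0x1ffffffu  /* ESR_ELx.ISS is bits [24:0] */

    /* Sketch: these traps are AArch64-only, so IL is set unconditionally. */
    static uint32_t syn_aa64_only_trap(uint32_t ec, uint32_t iss)
    {
        return (ec << EC_SHIFT) | IL_BIT | (iss & ISS_MASK);
    }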
2023-11-22  target/arm: Fix SME FMOPA (16-bit), BFMOPA  (Richard Henderson)
Perform the loop increment unconditionally, not nested within the predication. Cc: qemu-stable@nongnu.org Fixes: 3916841ac75 ("target/arm: Implement FMOPA, FMOPS (widening)") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1985 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20231117193135.1180657-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 3efd8495735c69b863476e9003e624877382a72d) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-19  target/tricore: Rename tricore_feature  (Bastian Koppelmann)
This name is used by capstone and will lead to a build failure of QEMU when capstone is enabled. So we rename it to tricore_has_feature(), to match has_feature() in translate.c. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1774 Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Message-Id: <20230721060605.76636-1-kbastian@mail.uni-paderborn.de> (cherry picked from commit f8cfdd2038c1823301e6df753242e465b1dc8539) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: update context in target/tricore/cpu.c, target/tricore/op_helper.c, drop chunks in target/tricore/helper.c)
2023-11-19  target/s390x: Fix LAALG not updating cc_src  (Ilya Leoshkevich)
LAALG uses op_laa() and wout_addu64(). The latter expects cc_src to be set, but the former does not do it. This can lead to assertion failures if something sets cc_src to neither 0 nor 1 before. Fix by introducing op_laa_addu64(), which sets cc_src, and using it for LAALG. Fixes: 4dba4d6fef61 ("target/s390x: Use atomic operations for LOAD AND OP") Cc: qemu-stable@nongnu.org Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231106093605.1349201-4-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit bea402482a8c94389638cbd3d7fe3963fb317f4c) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-19  target/mips: Fix TX79 LQ/SQ opcodes  (Philippe Mathieu-Daudé)
The base register address offset is *signed*. Cc: qemu-stable@nongnu.org Fixes: aaaa82a9f9 ("target/mips/tx79: Introduce LQ opcode (Load Quadword)") Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230914090447.12557-1-philmd@linaro.org> (cherry picked from commit 18f86aecd6a1bea0f78af14587a684ad966d8d3a) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-19  target/mips: Fix MSA BZ/BNZ opcodes displacement  (Philippe Mathieu-Daudé)
The PC offset is *signed*. Cc: qemu-stable@nongnu.org Reported-by: Sergey Evlashev <vectorchiefrocks@gmail.com> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1624 Fixes: c7a9ef7517 ("target/mips: Introduce decode tree bindings for MSA ASE") Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230914085807.12241-1-philmd@linaro.org> (cherry picked from commit 04591b3ddd9a96b9298a1dd437a6464ab55e62ee) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
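Both MIPS fixes above are the same class of bug: an immediate field that must be sign-extended was treated as unsigned. A self-contained illustration for a 16-bit field (the field width here is just for the example):

    #include <stdint.h>
    #include <stdio.h>

    static int64_t offset_unsigned(uint32_t insn) { return (uint16_t)insn; }
    static int64_t offset_signed(uint32_t insn)   { return (int16_t)insn; }

    int main(void)
    {
        uint32_t insn = 0xffff;   /* offset field encoding -1 */
        printf("%lld vs %lld\n",
               (long long)offset_unsigned(insn),  /* 65535: jumps/loads forward */
               (long long)offset_signed(insn));   /* -1: correct negative offset */
        return 0;
    }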
2023-11-03  target/arm: Correctly propagate stage 1 BTI guarded bit in a two-stage walk  (Peter Maydell)
In a two-stage translation, the result of the BTI guarded bit should be the guarded bit from the first stage of translation, as there is no BTI guard information in stage two. Our code tried to do this, but got it wrong, because we currently have two fields where the GP bit information might live (ARMCacheAttrs::guarded and CPUTLBEntryFull::extra::arm::guarded), and we were storing the GP bit in the latter during the stage 1 walk but trying to copy the former in combine_cacheattrs(). Remove the duplicated storage, and always use the field in CPUTLBEntryFull; correctly propagate the stage 1 value to the output in get_phys_addr_twostage(). Note for stable backports: in v8.0 and earlier the field is named result->f.guarded, not result->f.extra.arm.guarded. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1950 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20231031173723.26582-1-peter.maydell@linaro.org (cherry picked from commit 4c09abeae8704970ff03bf2196973f6bf08ab6f9) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: replace f.extra.arm.guarded -> f.guarded due to v8.1.0-1179-ga81fef4b64)
2023-11-03  target/arm: Fix handling of SW and NSW bits for stage 2 walks  (Peter Maydell)
We currently don't correctly handle the VSTCR_EL2.SW and VTCR_EL2.NSW configuration bits. These allow configuration of whether the stage 2 page table walks for Secure IPA and NonSecure IPA should do their descriptor reads from Secure or NonSecure physical addresses. (This is separate from how the translation table base address and other parameters are set: an NS IPA always uses VTTBR_EL2 and VTCR_EL2 for its base address and walk parameters, regardless of the NSW bit, and similarly for Secure.) Provide a new function ptw_idx_for_stage_2() which returns the MMU index to use for descriptor reads, and use it to set up the .in_ptw_idx wherever we call get_phys_addr_lpae(). For a stage 2 walk, wherever we call get_phys_addr_lpae(): * .in_ptw_idx should be ptw_idx_for_stage_2() of the .in_mmu_idx * .in_secure should be true if .in_mmu_idx is Stage2_S This allows us to correct S1_ptw_translate() so that it consistently always sets its (out_secure, out_phys) to the result it gets from the S2 walk (either by calling get_phys_addr_lpae() or by TLB lookup). This makes better conceptual sense because the S2 walk should return us an (address space, address) tuple, not an address that we then randomly assign to S or NS. Our previous handling of SW and NSW was broken, so guest code trying to use these bits to put the s2 page tables in the "other" address space wouldn't work correctly. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230504135425.2748672-3-peter.maydell@linaro.org (cherry picked from commit fcc0b0418fff655f20fd0cf86a1bbdc41fd2e7c6) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-03  target/arm: Don't allow stage 2 page table walks to downgrade to NS  (Peter Maydell)
Bit 63 in a Table descriptor is only the NSTable bit for stage 1 translations; in stage 2 it is RES0. We were incorrectly looking at it all the time. This causes problems if: * the stage 2 table descriptor was incorrectly setting the RES0 bit * we are doing a stage 2 translation in Secure address space for a NonSecure stage 1 regime -- in this case we would incorrectly do an immediate downgrade to NonSecure A bug elsewhere in the code currently prevents us from getting to the second situation, but when we fix that it will be possible. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230504135425.2748672-2-peter.maydell@linaro.org (cherry picked from commit 21a4ab8318ba6f049aac244e237cd1557586e216) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-03  target/arm: Don't access TCG code when debugging with KVM  (Fabiano Rosas)
When TCG is disabled this part of the code should not be reachable, so wrap it with an ifdef for now. Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 0d3de77a07f4f774f7a9248afa8ea497ad5f2ae5) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: trivial change which makes two subsequent cherry-picks to apply cleanly)
2023-10-04  target/i386: fix memory operand size for CVTPS2PD  (Paolo Bonzini)
CVTPS2PD only loads a half-register for memory, unlike the other operations under 0x0F 0x5A. "Unpack" the group into separate emission functions instead of using gen_unary_fp_sse. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit abd41884c530aa025ada253bf1a5bd0c2b808219) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-04  target/i386: generalize operand size "ph" for use in CVTPS2PD  (Paolo Bonzini)
CVTPS2PD only loads a half-register for memory, like CVTPH2PS. It can reuse the "ph" packed half-precision size to load a half-register, but rename it to "xh" because it is now a variation of "x" (it is not used only for half-precision values). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit a48b26978a090fe1f3f3e54319902d4ab56a6b3a) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-04  target/i386: Fix exception classes for MOVNTPS/MOVNTPD.  (Ricky Zhou)
Before this change, MOVNTPS and MOVNTPD were labeled as Exception Class 4 (only requiring alignment for legacy SSE instructions). This changes them to Exception Class 1 (always requiring memory alignment), as documented in the Intel manual. Message-Id: <20230501111428.95998-3-ricky@rzhou.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 8bf171c2d126aea6b60b818f1cee7e0e9eef0390) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-04  target/i386: Fix exception classes for SSE/AVX instructions.  (Ricky Zhou)
Fix the exception classes for some SSE/AVX instructions to match what is documented in the Intel manual. These changes are expected to have no functional effect on the behavior that qemu implements (primarily >= 16-byte memory alignment checks). For instance, since qemu does not implement the AC flag, there is no difference in behavior between Exception Classes 4 and 5 for instructions where the SSE version only takes <16 byte memory operands. Message-Id: <20230501111428.95998-2-ricky@rzhou.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit cab529b0dc15746b270e87d77e1dd12c6216807c) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-04  target/i386: Fix and add some comments next to SSE/AVX instructions.  (Ricky Zhou)
Adds some comments describing what instructions correspond to decoding table entries and fixes some existing comments which named the wrong instruction. Message-Id: <20230501111428.95998-1-ricky@rzhou.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit afa94dabc52b17e340975e158d5a816ec2b2de23) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-04  target/i386: fix operand size of unary SSE operations  (Paolo Bonzini)
VRCPSS, VRSQRTSS and VCVTSx2Sx have a 32-bit or 64-bit memory operand, which is represented in the decoding tables by X86_VEX_REPScalar. Add it to the tables, and make validate_vex() handle the case of an instruction that is in exception type 4 without the REP prefix and exception type 5 with it; this is the case of VRCP and VRSQRT. Reported-by: yongwoo <https://gitlab.com/yongwoo36> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1377 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 3d304620ec6c95f31db17acc132f42f243369299) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-25  target/arm: Don't skip MTE checks for LDRT/STRT at EL0  (Peter Maydell)
The LDRT/STRT "unprivileged load/store" instructions behave like normal ones if executed at EL0. We handle this correctly for the load/store semantics, but get the MTE checking wrong. We always look at s->mte_active[is_unpriv] to see whether we should be doing MTE checks, but in hflags.c when we set the TB flags that will be used to fill the mte_active[] array we only set the MTE0_ACTIVE bit if UNPRIV is true (i.e. we are not at EL0). This means that a LDRT at EL0 will see s->mte_active[1] as 0, and will not do MTE checks even when MTE is enabled. To avoid the translate-time code having to do an explicit check on s->unpriv to see if it is OK to index into the mte_active[] array, duplicate MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false. (This isn't a very serious bug because generally nobody executes LDRT/STRT at EL0, because they have no use there.) Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230912140434.1333369-2-peter.maydell@linaro.org (cherry picked from commit 903dbefc2b6918c10d12d9aafa0168cee8d287c7) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: before v7.2.0-1636-g671efad16a this code was in target/arm/helper.c)
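A minimal sketch of the duplication described above (flag bit positions are made up; only the logic matters):

    #include <stdint.h>

    #define F_UNPRIV       (1u << 0)   /* illustrative flag bits */
    #define F_MTE_ACTIVE   (1u << 1)
    #define F_MTE0_ACTIVE  (1u << 2)

    /* Sketch: make mte_active[is_unpriv] safe to index even at EL0 by
     * mirroring MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false. */
    static uint32_t fixup_tb_flags(uint32_t flags)
    {
        if (!(flags & F_UNPRIV) && (flags & F_MTE_ACTIVE)) {
            flags |= F_MTE0_ACTIVE;
        }
        return flags;
    }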
2023-09-13  target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes  (Leon Schuermann)
When the rule-lock bypass (RLB) bit is set in the mseccfg CSR, the PMP configuration lock bits must not apply. While this behavior is implemented for the pmpcfgX CSRs, this bit is not respected for changes to the pmpaddrX CSRs. This patch ensures that pmpaddrX CSR writes work even on locked regions when the global rule-lock bypass is enabled. Signed-off-by: Leon Schuermann <leons@opentitan.org> Reviewed-by: Mayuresh Chitale <mchitale@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20230829215046.1430463-1-leon@is.currently.online> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> (cherry picked from commit 4e3adce1244e1ca30ec05874c3eca14911dc0825) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
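A self-contained sketch of the check this implies on a pmpaddrX write (structure and field names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define PMPCFG_LOCK (1u << 7)   /* L bit in the pmpXcfg byte */

    struct pmp_entry { uint8_t cfg; uint64_t addr; };

    /* Sketch: a locked entry normally ignores pmpaddrX writes, but
     * mseccfg.RLB (rule-lock bypass) overrides the lock. */
    static bool pmpaddr_write(struct pmp_entry *e, uint64_t val, bool rlb)
    {
        if ((e->cfg & PMPCFG_LOCK) && !rlb) {
            return false;   /* write ignored */
        }
        e->addr = val;
        return true;
    }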
2023-09-13  arm64: Restore trapless ptimer access  (Colton Lewis)
Due to recent KVM changes, QEMU is setting a ptimer offset resulting in unintended trap and emulate access and a consequent performance hit. Filter out the PTIMER_CNT register to restore trapless ptimer access. Quoting Andrew Jones: Simply reading the CNT register and writing back the same value is enough to set an offset, since the timer will have certainly moved past whatever value was read by the time it's written. QEMU frequently saves and restores all registers in the get-reg-list array, unless they've been explicitly filtered out (with Linux commit 680232a94c12, KVM_REG_ARM_PTIMER_CNT is now in the array). So, to restore trapless ptimer accesses, we need a QEMU patch to filter out the register. See https://lore.kernel.org/kvmarm/gsntttsonus5.fsf@coltonlewis-kvm.c.googlers.com/T/#m0770023762a821db2a3f0dd0a7dc6aa54e0d0da9 for additional context. Cc: qemu-stable@nongnu.org Signed-off-by: Andrew Jones <andrew.jones@linux.dev> Signed-off-by: Colton Lewis <coltonlewis@google.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Colton Lewis <coltonlewis@google.com> Message-id: 20230831190052.129045-1-coltonlewis@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 682814e2a3c883b27f24b9e7cab47313c49acbd4) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  target/ppc: Flush inputs to zero with NJ in ppc_store_vscr  (Richard Henderson)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1779 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org> (cherry picked from commit af03aeb631eeb81a44d2c0ff5b429cd4b5dc2799) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  target/arm: Fix 64-bit SSRA  (Richard Henderson)
Typo applied byte-wise shift instead of double-word shift. Cc: qemu-stable@nongnu.org Fixes: 631e565450c ("target/arm: Create gen_gvec_[us]sra") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1737 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230821022025.397682-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit cd1e4db73646006039f25879af3bff55b2295ff3) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
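SSRA is a signed shift-right-and-accumulate; the bug used the byte-element shift for the 64-bit element size. A self-contained model of the double-word element operation, as far as this example needs to go:

    #include <stdint.h>

    /* Sketch: one 64-bit SSRA element. 'shift' is 1..64 for the D size;
     * a shift of 64 leaves only sign bits to accumulate. */
    static uint64_t ssra_d(uint64_t acc, uint64_t src, unsigned shift)
    {
        int64_t s = (int64_t)src;
        s >>= (shift >= 64) ? 63 : shift;
        return acc + (uint64_t)s;
    }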
2023-09-11  target/arm: Fix SME ST1Q  (Richard Henderson)
A typo, noted in the bug report, resulting in an incorrect write offset. Cc: qemu-stable@nongnu.org Fixes: 7390e0e9ab8 ("target/arm: Implement SME LD1, ST1") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1833 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230818214255.146905-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 4b3520fd93cd49cc56dfcab45d90735cc2e35af7) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  accel/kvm: Specify default IPA size for arm64  (Akihiko Odaki)
Before this change, the default KVM type, which is used for non-virt machine models, was 0. The kernel documentation says:

> On arm64, the physical address size for a VM (IPA Size limit) is
> limited to 40bits by default. The limit can be configured if the host
> supports the extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
> KVM_VM_TYPE_ARM_IPA_SIZE(IPA_Bits) to set the size in the machine type
> identifier, where IPA_Bits is the maximum width of any physical
> address used by the VM. The IPA_Bits is encoded in bits[7-0] of the
> machine type identifier.
>
> e.g, to configure a guest to use 48bit physical address size::
>
>     vm_fd = ioctl(dev_fd, KVM_CREATE_VM, KVM_VM_TYPE_ARM_IPA_SIZE(48));
>
> The requested size (IPA_Bits) must be:
>
>   ==  =========================================================
>    0  Implies default size, 40bits (for backward compatibility)
>    N  Implies N bits, where N is a positive integer such that,
>       32 <= N <= Host_IPA_Limit
>   ==  =========================================================
>
> Host_IPA_Limit is the maximum possible value for IPA_Bits on the host
> and is dependent on the CPU capability and the kernel configuration.
> The limit can be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the
> KVM_CHECK_EXTENSION ioctl() at run-time.
>
> Creation of the VM will fail if the requested IPA size (whether it is
> implicit or explicit) is unsupported on the host.

https://docs.kernel.org/virt/kvm/api.html#kvm-create-vm

So if Host_IPA_Limit < 40, specifying 0 as the type will fail. This actually confused libvirt, which uses the "none" machine model to probe KVM availability, on an M2 MacBook Air.

Fix this by using Host_IPA_Limit as the default type when KVM_CAP_ARM_VM_IPA_SIZE is available.

Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-3-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 1ab445af8cd99343f29032b5944023ad7d8edebf)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
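A sketch of the probing logic described above, using the documented ioctls (arm64 hosts only; error handling omitted; this is not the QEMU code itself):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Sketch: choose the default VM type. If the host reports its IPA
     * limit via KVM_CAP_ARM_VM_IPA_SIZE, encode that limit into the
     * machine type; otherwise fall back to 0 (the 40-bit legacy default). */
    static int default_arm_vm_type(int kvm_fd)
    {
        int ipa = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);

        return ipa > 0 ? KVM_VM_TYPE_ARM_IPA_SIZE(ipa) : 0;
    }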
2023-09-11  kvm: Introduce kvm_arch_get_default_type hook  (Akihiko Odaki)
kvm_arch_get_default_type() returns the default KVM type. This hook is particularly useful to derive a KVM type that is valid for "none" machine model, which is used by libvirt to probe the availability of KVM. For MIPS, the existing mips_kvm_type() is reused. This function ensures the availability of VZ which is mandatory to use KVM on the current QEMU. Cc: qemu-stable@nongnu.org Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20230727073134.134102-2-akihiko.odaki@daynix.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: added doc comment for new function] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit 5e0d65909c6f335d578b90491e165440c99adf81) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  target/s390x: Check reserved bits of VFMIN/VFMAX's M5  (Ilya Leoshkevich)
VFMIN and VFMAX should raise a specification exception when bits 1-3 of M5 are set. Cc: qemu-stable@nongnu.org Fixes: da4807527f3b ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230804234621.252522-1-iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 6a2ea6151835aa4f5fee29382a421c13b0e6619f) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  target/s390x: Fix VSTL with a large length  (Ilya Leoshkevich)
The length is always truncated to 16 bytes. Do not probe more than that. Cc: qemu-stable@nongnu.org Fixes: 0e0a5b49ad58 ("s390x/tcg: Implement VECTOR STORE WITH LENGTH") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230804235624.263260-1-iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 6db3518ba4fcddd71049718f138552999f0d97b4) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-11  target/s390x: Use a 16-bit immediate in VREP  (Ilya Leoshkevich)
Unlike most other instructions that contain an immediate element index, VREP's one is 16-bit, and not 4-bit. The code uses only 8 bits, so using, e.g., 0x101 does not lead to a specification exception. Fix by checking all 16 bits. Cc: qemu-stable@nongnu.org Fixes: 28d08731b1d8 ("s390x/tcg: Implement VECTOR REPLICATE") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230807163459.849766-1-iii@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 23e87d419f347b6b5f4da3bf70d222acc24cdb64) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
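A self-contained sketch of the widened check (element counts for a 16-byte vector, with element size code 0 = byte through 3 = doubleword):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: VREP element index validation. All 16 bits of the
     * immediate must be checked, not just the low 8 (or 4). */
    static bool vrep_index_valid(uint16_t i2, unsigned es)
    {
        unsigned elements = 16u >> es;   /* 16, 8, 4 or 2 */

        return i2 < elements;
    }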
2023-09-11  target/s390x: Fix the "ignored match" case in VSTRS  (Ilya Leoshkevich)
Currently the emulation of VSTRS recognizes partial matches in presence of \0 in the haystack, which, according to PoP, is not correct: If the ZS flag is one and a zero byte was detected in the second operand, then there can not be a partial match ... Add a check for this. While at it, fold a number of explicitly handled special cases into the generic logic. Cc: qemu-stable@nongnu.org Reported-by: Claudio Fontana <cfontana@suse.de> Closes: https://lists.gnu.org/archive/html/qemu-devel/2023-08/msg00633.html Fixes: 1d706f314191 ("target/s390x: vxeh2: vector string search") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230804233748.218935-3-iii@linux.ibm.com> Tested-by: Claudio Fontana <cfontana@suse.de> Acked-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 791b2b6a930273db694b9ba48bbb406e78715927) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-05  target/i386: Check CR0.TS before enter_mmx  (Matt Borgerson)
When CR0.TS=1, execution of x87 FPU, MMX, and some SSE instructions will cause a Device Not Available (DNA) exception (#NM). System software uses this exception event to lazily context switch FPU state. Before this patch, enter_mmx helpers may be generated just before #NM generation, prematurely resetting FPU state before the guest has a chance to save it. Signed-off-by: Matt Borgerson <contact@mborgerson.com> Message-ID: <CADc=-s5F10muEhLs4f3mxqsEPAHWj0XFfOC2sfFMVHrk9fcpMg@mail.gmail.com> Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit b2ea6450d8e1336a33eb958ccc64604bc35a43dd) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-05  target/ppc: Fix VRMA page size for ISA v3.0  (Nicholas Piggin)
Until v2.07s, the VRMA page size (L||LP) was encoded in LPCR[VRMASD]. In v3.0 that moved to the partition table PS field. The powernv machine can now run KVM HPT guests on POWER9/10 CPUs with this fix and the patch to add ASDR. Fixes: 3367c62f522b ("target/ppc: Support for POWER9 native hash") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-ID: <20230730111842.39292-1-npiggin@gmail.com> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> (cherry picked from commit 0e2a3ec36885f6d79a96230f582d4455878c6373) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-05  target/ppc: Fix pending HDEC when entering PM state  (Nicholas Piggin)
HDEC is defined to not wake from PM state. There is a check in the HDEC timer to avoid setting the interrupt if we are in a PM state, but no check on PM entry to lower HDEC if it already fired. This can cause a HDECR wake up and QEMU abort with unsupported exception in Power Save mode. Fixes: 4b236b621bf ("ppc: Initial HDEC support") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-ID: <20230726182230.433945-4-npiggin@gmail.com> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> (cherry picked from commit 9915dac4847f3cc5ffd36e4c374a4eec83fe09b5) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-05  target/ppc: Implement ASDR register for ISA v3.0 for HPT  (Nicholas Piggin)
The ASDR register was introduced in ISA v3.0. It has not been implemented for HPT. With HPT, ASDR is the format of the slbmte RS operand (containing VSID), which matches the ppc_slb_t field. Fixes: 3367c62f522b ("target/ppc: Support for POWER9 native hash") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-ID: <20230726182230.433945-2-npiggin@gmail.com> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> (cherry picked from commit 9201af096962a1967ce5d0b270ed16ae4edd3db6) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-04  target/hppa: Move iaoq registers and thus reduce generated code size  (Helge Deller)
On hppa the Instruction Address Offset Queue (IAOQ) registers specify the addresses of the next instructions to be executed. Each generated TB writes those registers at least once, so those registers are used heavily in generated code.

Looking at the generated assembly, for an x86-64 host this code is generated to write the address $0x7ffe826f into iaoq_f:

  0x7f73e8000184: c7 85 d4 01 00 00 6f 82  movl $0x7ffe826f, 0x1d4(%rbp)
  0x7f73e800018c: fe 7f
  0x7f73e800018e: c7 85 d8 01 00 00 73 82  movl $0x7ffe8273, 0x1d8(%rbp)
  0x7f73e8000196: fe 7f

With the trivial change of moving the variables iaoq_f and iaoq_b to the top of struct CPUArchState, the offset to %rbp is reduced (from 0x1d4 to 0), which allows the x86-64 tcg to generate 3 bytes less of generated code per move instruction:

  0x7fc1e800018c: c7 45 00 6f 82 fe 7f  movl $0x7ffe826f, (%rbp)
  0x7fc1e8000193: c7 45 04 73 82 fe 7f  movl $0x7ffe8273, 4(%rbp)

Overall this is a reduction of generated code size (not a reduction of the number of instructions). A test run which checks the generated code size by running "/bin/ls" with qemu-user shows that the code size shrinks from 1616767 to 1569273 bytes, which is ~97% of the former size.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: qemu-stable@nongnu.org
(cherry picked from commit f8c0fd9804f435a20c3baa4c0c77ba9a02af24ef)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-03  target/m68k: Fix semihost lseek offset computation  (Peter Maydell)
The arguments for deposit64 are (value, start, length, fieldval); this appears to have thought they were (value, fieldval, start, length). Reorder the parameters to match the actual function. Cc: qemu-stable@nongnu.org Fixes: 950272506d ("target/m68k: Use semihosting/syscalls.h") Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230801154519.3505531-1-peter.maydell@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit 8caaae7319a5f7ca449900c0e6bfcaed78fa3ae2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-03  target/nios2: Fix semihost lseek offset computation  (Keith Packard)
The arguments for deposit64 are (value, start, length, fieldval); this appears to have thought they were (value, fieldval, start, length). Reorder the parameters to match the actual function. Signed-off-by: Keith Packard <keithp@keithp.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Fixes: d1e23cbaa403b2d ("target/nios2: Use semihosting/syscalls.h") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20230731235245.295513-1-keithp@keithp.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit 71e2dd6aa1bdbac19c661638a4ae91816002ac9e) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
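Both semihosting lseek fixes above involve the same mistake with deposit64(), whose parameter order the commit messages spell out. A self-contained equivalent, shown composing a 64-bit offset from two 32-bit guest arguments (the helper name make_offset() is just for the example):

    #include <stdint.h>

    /* deposit64(value, start, length, fieldval): replace 'length' bits of
     * 'value', starting at bit 'start', with the low bits of 'fieldval'. */
    static uint64_t deposit64(uint64_t value, int start, int length,
                              uint64_t fieldval)
    {
        uint64_t mask = (~0ULL >> (64 - length)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

    /* Correct use: the high word lands in bits [63:32] of the offset. */
    static uint64_t make_offset(uint32_t lo, uint32_t hi)
    {
        return deposit64(lo, 32, 32, hi);
    }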
2023-08-03  target/nios2: Pass semihosting arg to exit  (Keith Packard)
Instead of using R_ARG0 (the semihost function number), use R_ARG1 (the provided exit status). Signed-off-by: Keith Packard <keithp@keithp.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20230801152245.332749-1-keithp@keithp.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> (cherry picked from commit c11d5bdae79a8edaf00dfcb2e49c064a50c67671) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-08-01  target/ppc: Disable goto_tb with architectural singlestep  (Richard Henderson)
The change to use translator_use_goto_tb went too far, as the CF_SINGLE_STEP flag managed by the translator only handles gdb single stepping and not the architectural single stepping modeled in DisasContext.singlestep_enabled. Fixes: 6e9cc373ec5 ("target/ppc: Use translator_use_goto_tb") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1795 Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 2e718e665706d5fcc3e3501bda26f277f055ed85) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-31  target/arm: Avoid writing to constant TCGv in trans_CSEL()  (Peter Maydell)
In commit 0b188ea05acb5 we changed the implementation of trans_CSEL() to use tcg_constant_i32(). However, this change was incorrect, because the implementation of the function sets up the TCGv_i32 rn and rm to be either zero or else a TCG temp created in load_reg(), and these TCG temps are then in both cases written to by the emitted TCG ops. The result is that we hit a TCG assertion: qemu-system-arm: ../../tcg/tcg.c:4455: tcg_reg_alloc_mov: Assertion `!temp_readonly(ots)' failed. (or on a non-debug build, just produce a garbage result) Adjust the code so that rn and rm are always writeable temporaries whether the instruction is using the special case "0" or a normal register as input. Cc: qemu-stable@nongnu.org Fixes: 0b188ea05acb5 ("target/arm: Use tcg_constant in trans_CSEL") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230727103906.2641264-1-peter.maydell@linaro.org (cherry picked from commit 2b0d656ab6484cae7f174e194215a6d50343ecd2) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: context fixup in target/arm/tcg/translate.c)
2023-07-31  target/loongarch: Fix the CSRRD CPUID instruction on big endian hosts  (Thomas Huth)
The test in tests/avocado/machine_loongarch.py is currently failing on big endian hosts like s390x. By comparing the traces between running the QEMU_EFI.fd bios on a s390x and on a x86 host, it's quickly obvious that the CSRRD instruction for the CPUID is behaving differently. And indeed: The code currently does a long read (i.e. 64 bit) from the address that points to the CPUState->cpu_index field (with tcg_gen_ld_tl() in the trans_csrrd() function). But this cpu_index field is only an "int" (i.e. 32 bit). While this dirty pointer magic works on little endian hosts, it of course fails on big endian hosts. Fix it by using a proper helper function instead. Message-Id: <20230720175307.854460-1-thuth@redhat.com> Reviewed-by: Song Gao <gaosong@loongson.cn> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit c34ad459926f6c600a55fe6782a27edfa405d60b) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
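For readers unfamiliar with the endianness trap described above, a small, runnable demonstration of why a 64-bit load from the address of a 32-bit field only happens to work on little-endian hosts:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct { int cpu_index; int other; } cpu = { 3, 42 };
        uint64_t v;

        /* 64-bit read from a pointer to a 32-bit field: on little-endian
         * hosts the low half of v is cpu_index; on big-endian hosts it is
         * 'other', which is the bug the helper-based fix avoids. */
        memcpy(&v, &cpu.cpu_index, sizeof(v));
        printf("0x%016llx\n", (unsigned long long)v);
        return 0;
    }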
2023-07-31  target/s390x: Fix assertion failure in VFMIN/VFMAX with type 13  (Ilya Leoshkevich)
Type 13 is reserved, so using it should result in specification exception. Due to an off-by-1 error the code triggers an assertion at a later point in time instead. Cc: qemu-stable@nongnu.org Fixes: da4807527f3b ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)") Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-8-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit ff537b0370ab5918052b8d8a798e803c47272406) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-31  target/s390x: Make MC raise specification exception when class >= 16  (Ilya Leoshkevich)
MC requires bit positions 8-11 (upper 4 bits of class) to be zeros, otherwise it must raise a specification exception. Cc: qemu-stable@nongnu.org Fixes: 20d143e2cab8 ("s390x/tcg: Implement MONITOR CALL") Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-6-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 9c028c057adce49304c6e4a51f6b426bd4f8f6b8) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: context edit in target/s390x/tcg/translate.c)
2023-07-31  target/s390x: Fix ICM with M3=0  (Ilya Leoshkevich)
When the mask is zero, access exceptions should still be recognized for 1 byte at the second-operand address. CC should be set to 0. Cc: qemu-stable@nongnu.org Fixes: e023e832d0ac ("s390x: translate engine for s390x CPU") Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-5-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit a2025557ed4d8d5e6a4d0dd681717c390f51f5be) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-31  target/s390x: Fix CONVERT TO LOGICAL/FIXED with out-of-range inputs  (Ilya Leoshkevich)
CONVERT TO LOGICAL/FIXED deviate from IEEE 754 in that they raise an inexact exception on out-of-range inputs. float_flag_invalid_cvti aligns nicely with that behavior, so convert it to S390_IEEE_MASK_INEXACT. Cc: qemu-stable@nongnu.org Fixes: defb0e3157af ("s390x: Implement opcode helpers") Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-4-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 53684e344a27da770acc9012740334154ddea24f) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-31  target/s390x: Fix CLM with M3=0  (Ilya Leoshkevich)
When the mask is zero, access exceptions should still be recognized for 1 byte at the second-operand address. CC should be set to 0. Cc: qemu-stable@nongnu.org Fixes: defb0e3157af ("s390x: Implement opcode helpers") Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-3-iii@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 4b6e4c0b8223681ae85462794848db4386de1a8d) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-31  target/s390x: Make CKSM raise an exception if R2 is odd  (Ilya Leoshkevich)
R2 designates an even-odd register pair; the instruction should raise a specification exception when R2 is not even. Cc: qemu-stable@nongnu.org Fixes: e023e832d0ac ("s390x: translate engine for s390x CPU") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Message-Id: <20230724082032.66864-2-iii@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> (cherry picked from commit 761b0aa9381e2f755b9b594f7f3033d564561751) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-06-22  target/arm: Return correct result for LDG when ATA=0  (Peter Maydell)
The LDG instruction loads the tag from a memory address (identified by [Xn + offset]), and then merges that tag into the destination register Xt. We implemented this correctly for the case when allocation tags are enabled, but didn't get it right when ATA=0: instead of merging the tag bits into Xt, we merged them into the memory address [Xn + offset] and then set Xt to that. Merge the tag bits into the old Xt value, as they should be. Cc: qemu-stable@nongnu.org Fixes: c15294c1e36a7dd9b25 ("target/arm: Implement LDG, STG, ST2G instructions") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> (cherry picked from commit 7e2788471f9e079fff696a694721a7d41a451839) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
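Allocation tags sit in bits [59:56] of an address or register. A self-contained sketch of the correct merge, where the target of the merge is the old Xt value (with ATA=0 the tag merged in is simply 0):

    #include <stdint.h>

    /* Sketch: merge a 4-bit allocation tag into the old Xt value. */
    static uint64_t ldg_merge_tag(uint64_t old_xt, uint64_t tag)
    {
        const uint64_t tag_mask = 0xfull << 56;

        return (old_xt & ~tag_mask) | ((tag << 56) & tag_mask);
    }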
2023-06-22  target/arm: Fix return value from LDSMIN/LDSMAX 8/16 bit atomics  (Peter Maydell)
The atomic memory operations are supposed to return the old memory data value in the destination register. This value is not sign-extended, even if the operation is the signed minimum or maximum. (In the pseudocode for the instructions the returned data value is passed to ZeroExtend() to create the value in the register.) We got this wrong because we were doing a 32-to-64 zero extend on the result for 8 and 16 bit data values, rather than the correct amount of zero extension. Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data sizes rather than ext32u. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org (cherry picked from commit 243705aa6ea3465b20e9f5a8bfcf36d3153f3c10) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
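A self-contained model of the required result handling: the old memory value is zero-extended from the access size, even when the operation itself is a signed minimum or maximum:

    #include <stdint.h>

    /* Sketch: value placed in the destination register after an atomic
     * memory op of the given size; always a zero-extension of the old
     * data, never sign-extended and never a blanket 32->64 extension. */
    static uint64_t atomic_old_value(uint64_t old_data, unsigned size_bytes)
    {
        switch (size_bytes) {
        case 1:  return (uint8_t)old_data;
        case 2:  return (uint16_t)old_data;
        case 4:  return (uint32_t)old_data;
        default: return old_data;
        }
    }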
2023-06-11  target/ppc: Fix PMU hflags calculation  (Nicholas Piggin)
Some of the PMU hflags bits can go out of sync, for example a store to MMCR0 with PMCjCE=1 fails to update hflags correctly and results in an hflags mismatch: qemu: fatal: TCG hflags mismatch (current:0x2408003d rebuilt:0x240a003d) This can be reproduced by running perf on a recent machine. Some of the fragility here is the duplication of PMU hflags calculations. This change consolidates that into a single place which updates the PMU-related hflags, to be called after well-defined state changes. The post-load PMU update is pulled out of the MSR update because it does not depend on the MSR value. Fixes: 8b3d1c49a9f0 ("target/ppc: Add new PMC HFLAGS") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Message-Id: <20230530130447.372617-1-npiggin@gmail.com> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> (cherry picked from commit 6494d2c1fd4ebc37b575130399a97a1fcfff1afc) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>