path: root/target/arm
Age  Commit message  Author
2024-08-13  target/arm: Fix usage of MMU indexes when EL3 is AArch32  (Peter Maydell)

Our current usage of MMU indexes when EL3 is AArch32 is confused. Architecturally, when EL3 is AArch32, all Secure code runs under the Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its own translation regime, and EL1 and EL0 (whether AArch32 or AArch64) have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't do anything special about Secure PL0, which meant it used the same ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug where arm_sctlr() incorrectly picked the NonSecure SCTLR as the controlling register when in Secure PL0, which meant we were spuriously generating alignment faults because we were looking at the wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that we wouldn't honour the PAN bit for Secure PL1, because there's no equivalent _PAN mmu index for it.

We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This matches how we use indexes for the AArch64 regimes, and preserves properties like being able to determine the privilege level from an MMU index without any other information. However it would add two MMU indexes (we can share one with ARMMMUIdx_EL3), and we are already using 14 of the 16 that the core TLB code permits.
 * The more complicated approach is the one we take here. We use the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0 as we do for NonSecure PL1&0. This saves on MMU indexes, but means we need to check in some places whether we're in the Secure PL1&0 regime or not before we interpret an MMU index.

The changes in this commit were created by auditing all the places where we use specific ARMMMUIdx_ values, and checking whether they needed to be changed to handle the new index value usage.

Note for potential stable backports: taking also the previous (comment-change-only) commit might make the backport easier.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
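The second approach implies checks of the following shape before interpreting an E10_* index. This is an illustrative sketch built from existing QEMU predicates, not the exact helper the patch adds:

    /* Illustrative only: decide whether an E10_* index currently means
     * the Secure PL1&0 regime (EL3 is AArch32 and we are Secure)
     * rather than NonSecure PL1&0. */
    static bool mmu_idx_is_secure_pl10(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        return arm_feature(env, ARM_FEATURE_EL3)
            && !arm_el_is_aa64(env, 3)
            && arm_is_secure(env)
            && (mmu_idx == ARMMMUIdx_E10_0
                || mmu_idx == ARMMMUIdx_E10_1
                || mmu_idx == ARMMMUIdx_E10_1_PAN);
    }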
2024-08-13  target/arm: Update translation regime comment for new features  (Peter Maydell)

We have a long comment describing the Arm architectural translation regimes and how we map them to QEMU MMU indexes. This comment has got a bit out of date:
 * FEAT_SEL2 allows Secure EL2 and corresponding new regimes
 * FEAT_RME introduces Realm state and its translation regimes
 * We now model the Cortex-R52 so that is no longer a hypothetical
 * We separated Secure Stage 2 and NonSecure Stage 2 MMU indexes
 * We have an MMU index per physical address space

Add the missing pieces so that the list of architectural translation regimes matches the Arm ARM, and the list and count of QEMU MMU indexes in the comment matches the enum.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-2-peter.maydell@linaro.org
2024-08-13  target/arm: Clear high SVE elements in handle_vec_simd_wshli  (Richard Henderson)

AdvSIMD instructions are supposed to zero bits beyond 128. Affects SSHLL, USHLL, SSHLL2, USHLL2.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240717060903.205098-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
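A plausible shape of the fix, assuming the existing clear_vec_high() helper in translate-a64.c is what zeroes the upper SVE bytes (sketch, not the verbatim patch):

    /* in handle_vec_simd_wshli, after the widened elements are
     * written back to Vd: */
    clear_vec_high(s, true, rd);    /* zero vector bits above 127 */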
2024-08-09  target/arm: Fix BTI versus CF_PCREL  (Richard Henderson)

With pcrel, we cannot check the guarded page bit at translation time, as different mappings of the same physical page may or may not have the GP bit set.

Instead, add a couple of helpers to check the page at runtime, after all other filters that might obviate the need for the check.

The set_btype_for_br call must be moved after the gen_a64_set_pc call to ensure the current pc can still be computed.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240802003028.795476-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-08-03  hvf: arm: Fix hvf_sysreg_read_cp() call  (Akihiko Odaki)

Commit e9e640148c changed val from uint64_t to a pointer to uint64_t in hvf_sysreg_read(), but didn't update its usage in the hvf_sysreg_read_cp() call.

Fixes: e9e640148c ("hvf: arm: Raise an exception for sysreg by default")
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240802-hvf-v1-1-e2c0292037e5@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-08-01  target/arm: Handle denormals correctly for FMOPA (widening)  (Peter Maydell)

The FMOPA (widening) SME instruction takes pairs of half-precision floating point values, widens them to single-precision, does a two-way dot product and accumulates the results into a single-precision destination.

We don't quite correctly handle the FPCR bits FZ and FZ16 which control flushing of denormal inputs and outputs. This is because at the moment we pass a single float_status value to the helper function, which then uses that configuration for all the fp operations it does. However, because the inputs to this operation are float16 and the outputs are float32 we need to use the fp_status_f16 for the float16 input widening but the normal fp_status for everything else. Otherwise we will apply the flushing control FPCR.FZ16 to the 32-bit output rather than the FPCR.FZ control, and incorrectly flush a denormal output to zero when we should not (or vice-versa).

(In commit 207d30b5fdb5b we tried to fix the FZ handling but didn't get it right, switching from "use FPCR.FZ for everything" to "use FPCR.FZ16 for everything".)

Pass the CPU env to the sme_fmopa_h helper instead of an fp_status pointer, and have the helper pass an extra fp_status into the f16_dotadd() function so that we can use the right status for the right parts of this operation.

Cc: qemu-stable@nongnu.org
Fixes: 207d30b5fdb5 ("target/arm: Use FPST_F16 for SME FMOPA (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2373
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-29  target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled  (Peter Maydell)

When determining the current vector length, the SMCR_EL2.LEN and SVCR_EL2.LEN settings should only be considered if EL2 is enabled (compare the pseudocode CurrentSVL and CurrentNSVL which call EL2Enabled()).

We were checking against ARM_FEATURE_EL2 rather than calling arm_is_el2_enabled(), which meant that we would look at SMCR_EL2/SVCR_EL2 when in Secure EL1 or Secure EL0 even if Secure EL2 was not enabled.

Use the correct check in sve_vqm1_for_el_sm().

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-5-peter.maydell@linaro.org
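The shape of the fix in sve_vqm1_for_el_sm(), sketched as a diff (both predicates are existing QEMU helpers; the surrounding condition is illustrative):

    -    if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
    +    if (el <= 2 && arm_is_el2_enabled(env)) {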
2024-07-29  target/arm: Avoid shifts by -1 in tszimm_shr() and tszimm_shl()  (Peter Maydell)

The function tszimm_esz() returns a shift amount, or possibly -1 in certain cases that correspond to unallocated encodings in the instruction set. We catch these later in the trans_ functions (generally with an "a->esz < 0" check), but before we do the decodetree-generated code will also call tszimm_shr() or tszimm_shl(), which will use the tszimm_esz() return value as a shift count without checking that it is not negative, which is undefined behaviour.

Avoid the UB by checking the return value in tszimm_shr() and tszimm_shl().

Cc: qemu-stable@nongnu.org
Resolves: Coverity CID 1547617, 1547694
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-4-peter.maydell@linaro.org
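A sketch of the guarded helpers; the shift formulas follow the usual SVE tszimm decode in translate-sve.c, and the early-return value only needs to be non-negative since the trans_ function rejects the encoding anyway (assumed shape, not the verbatim patch):

    static int tszimm_shr(DisasContext *s, int x)
    {
        int esz = tszimm_esz(s, x);
        if (esz < 0) {
            return 0;   /* unallocated encoding; trans_ fn will reject */
        }
        return (16 << esz) - x;
    }

    static int tszimm_shl(DisasContext *s, int x)
    {
        int esz = tszimm_esz(s, x);
        if (esz < 0) {
            return 0;   /* unallocated encoding; trans_ fn will reject */
        }
        return x - (8 << esz);
    }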
2024-07-29  target/arm: Fix UMOPA/UMOPS of 16-bit values  (Peter Maydell)

The UMOPA/UMOPS instructions are supposed to multiply unsigned 8 or 16 bit elements and accumulate the products into a 64-bit element. In the Arm ARM pseudocode, this is done with the usual infinite-precision signed arithmetic. However our implementation doesn't quite get it right, because in the DEF_IMOP_64() macro we do:

    sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0);

where NTYPE and MTYPE are uint16_t or int16_t. In the uint16_t case, the C usual arithmetic conversions mean the values are converted to "int" type and the multiply is done as a 32-bit multiply. This means that if the inputs are, for example, 0xffff and 0xffff then the result is 0xFFFE0001 as an int, which is then promoted to uint64_t for the accumulation into sum; this promotion incorrectly sign extends the multiply.

Avoid the incorrect sign extension by casting to int64_t before the multiply, so we do the multiply as 64-bit signed arithmetic, which is a type large enough that the multiply can never overflow into the sign bit.

(The equivalent 8-bit operations in DEF_IMOP_32() are fine, because the 8-bit multiplies can never overflow into the sign bit of a 32-bit integer.)

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2372
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-3-peter.maydell@linaro.org
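The promotion bug is easy to reproduce in isolation. A self-contained demo (ordinary C, not QEMU code):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t n = 0xffff, m = 0xffff;
        uint64_t bad = 0, good = 0;

        /* Both operands promote to int, so this is a 32-bit signed
         * multiply; 0xffff * 0xffff formally overflows int (itself UB),
         * and on common targets yields 0xFFFE0001, which is negative as
         * an int and sign-extends when accumulated into the uint64_t. */
        bad += (uint16_t)n * (uint16_t)m;

        /* Casting to int64_t first gives a 64-bit multiply, which can
         * never overflow into the sign bit for 16-bit inputs. */
        good += (int64_t)n * (int64_t)m;

        printf("bad  = 0x%016" PRIx64 "\n", bad);   /* 0xfffffffffffe0001 */
        printf("good = 0x%016" PRIx64 "\n", good);  /* 0x00000000fffe0001 */
        return 0;
    }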
2024-07-29  target/arm: Don't assert for 128-bit tile accesses when SVL is 128  (Peter Maydell)

For an instruction which accesses a 128-bit element tile when the SVL is also 128 (for example MOV z0.Q, p0/M, ZA0H.Q[w0,0]), we will assert in get_tile_rowcol():

    qemu-system-aarch64: ../../tcg/tcg-op.c:926: tcg_gen_deposit_z_i32: Assertion `len > 0' failed.

This happens because we calculate

    len = ctz32(streaming_vec_reg_size(s)) - esz;

but if the SVL and the element size are the same len is 0, and the deposit operation asserts.

In this case the ZA storage contains exactly one 128 bit element ZA tile, and the horizontal or vertical slice is just that tile. This means that regardless of the index value in the Ws register, we always access that tile. (In pseudocode terms, we calculate (index + offset) MOD 1, which is 0.)

Special case the len == 0 case to avoid hitting the assertion in tcg_gen_deposit_z_i32().

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-2-peter.maydell@linaro.org
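A sketch of the special case in get_tile_rowcol(), with assumed local variable names (the real deposit arguments differ):

    len = ctz32(streaming_vec_reg_size(s)) - esz;
    if (len == 0) {
        /* SVL == element size: ZA holds exactly one tile, so any
         * index selects offset 0 -- (index + offset) MOD 1 == 0. */
        tcg_gen_movi_i32(tmp, 0);
    } else {
        tcg_gen_deposit_z_i32(tmp, index, pos, len);
    }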
2024-07-29  hvf: arm: Do not advance PC when raising an exception  (Akihiko Odaki)

This is identical with commit 30a1690f2402 ("hvf: arm: Do not advance PC when raising an exception") but for writes instead of reads.

Fixes: a2260983c655 ("hvf: arm: Add support for GICv3")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-29  hvf: arm: Properly disable PMU  (Akihiko Odaki)

Setting the pmu property used to have no effect for hvf, so fix it.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-29  hvf: arm: Raise an exception for sysreg by default  (Akihiko Odaki)

Any sysreg access results in an exception unless defined otherwise, so we should raise an exception by default.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-29  target/arm/kvm: Do not silently remove PMU  (Akihiko Odaki)

kvm_arch_init_vcpu() used to remove the PMU when it is not available, even if the CPU model needs one. That is semantically incorrect, and may continue execution on a misbehaving host that advertises a CPU model while lacking its PMU. Keep the PMU when the CPU model needs one, and let kvm_arm_vcpu_init() fail if the KVM implementation does not match our expectation.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-29  target/arm/kvm: Set PMU for host only when available  (Akihiko Odaki)

target/arm/kvm.c checked PMU availability but unconditionally set the PMU feature flag for the host CPU model, which is confusing. Set the feature flag only when available.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-23  bsd-user: Make compile for non-linux user-mode stuff  (Warner Losh)

We include the files that define PR_MTE_TCF_SHIFT only on Linux, but use them unconditionally. Restrict their use to Linux only. "It's ugly, but it's not actually wrong."

Signed-off-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23  bsd-user: Hard wire aarch64 to be 4k pages only  (Warner Losh)

Only support 4k pages for aarch64 binaries. The variable page size stuff isn't working just yet, so put in this lesser-of-evils kludge until that is complete.

Signed-off-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23  Merge tag 'pull-tcg-20240723' of https://gitlab.com/rth7680/qemu into staging  (Richard Henderson)

accel/tcg: Export set/clear_helper_retaddr
target/arm: Use set_helper_retaddr for dc_zva, sve and sme
target/ppc: Tidy dcbz helpers
target/ppc: Use set_helper_retaddr for dcbz
target/s390x: Use set_helper_retaddr in mem_helper.c

* tag 'pull-tcg-20240723' of https://gitlab.com/rth7680/qemu:
  target/riscv: Simplify probing in vext_ldff
  target/s390x: Use set/clear_helper_retaddr in mem_helper.c
  target/s390x: Use user_or_likely in access_memmove
  target/s390x: Use user_or_likely in do_access_memset
  target/ppc: Improve helper_dcbz for user-only
  target/ppc: Merge helper_{dcbz,dcbzep}
  target/ppc: Split out helper_dbczl for 970
  target/ppc: Hoist dcbz_size out of dcbz_common
  target/ppc/mem_helper.c: Remove a conditional from dcbz_common()
  target/arm: Use set/clear_helper_retaddr in SVE and SME helpers
  target/arm: Use set/clear_helper_retaddr in helper-a64.c
  accel/tcg: Move {set,clear}_helper_retaddr to cpu_ldst.h

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23  target/arm: Use set/clear_helper_retaddr in SVE and SME helpers  (Richard Henderson)

Avoid a race condition with munmap in another thread. Use around blocks that exclusively use "host_fn". Keep the blocks as small as possible, but without setting and clearing for every operation on one page.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-23  target/arm: Use set/clear_helper_retaddr in helper-a64.c  (Richard Henderson)

Use these in helper_dc_zva and the FEAT_MOPS routines to avoid a race condition with munmap in another thread.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-22  gdbstub: Re-factor gdb command extensions  (Alex Bennée)

Coverity reported a memory leak (CID 1549757) in this code and its admittedly rather clumsy handling of extending the command table. Instead of handing over a full array of the commands, let's use the lighter-weight GPtrArray and simply test for the presence of each entry as we go. This avoids the complications of transferring ownership of arrays and keeps the final command entries as static entries in the target code.

Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Gustavo Bueno Romero <gustavo.romero@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-4-alex.bennee@linaro.org>
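A sketch of the GPtrArray scheme, with illustrative names (GdbCmdParseEntry is the existing gdbstub command type; the registration function name and shape are assumptions, not the exact patch):

    #include <glib.h>

    static GPtrArray *extended_query_table;

    /* Targets keep their command entries as static data and just
     * register pointers; no array ownership ever changes hands. */
    void gdb_extend_query_table(GdbCmdParseEntry *table, int size)
    {
        if (!extended_query_table) {
            extended_query_table = g_ptr_array_new();
        }
        for (int i = 0; i < size; i++) {
            g_ptr_array_add(extended_query_table, &table[i]);
        }
    }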
2024-07-18  hvf: arm: Do not advance PC when raising an exception  (Akihiko Odaki)

hvf did not advance PC when raising an exception for most unhandled system registers, but it mistakenly advanced PC when raising an exception for GICv3 registers.

Cc: qemu-stable@nongnu.org
Fixes: a2260983c655 ("hvf: arm: Add support for GICv3")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240716-pmu-v3-4-8c7c1858a227@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18  target/arm: Use FPST_F16 for SME FMOPA (widening)  (Richard Henderson)

This operation has float16 inputs and thus must use the FZ16 control not the FZ control.

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac75 ("target/arm: Implement FMOPA, FMOPS (widening)")
Reported-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-3-richard.henderson@linaro.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2374
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18  target/arm: Use float_status copy in sme_fmopa_s  (Daniyal Khan)

We made a copy above because the fp exception flags are not propagated back to the FPST register, but then failed to use the copy.

Cc: qemu-stable@nongnu.org
Fixes: 558e956c719 ("target/arm: Implement FMOPA, FMOPS (non-widening)")
Signed-off-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-2-richard.henderson@linaro.org
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18  target/arm: LDAPR should honour SCTLR_ELx.nAA  (Peter Maydell)

In commit c1a1f80518d360b when we added the FEAT_LSE2 relaxations to the alignment requirements for atomic and ordered loads and stores, we didn't quite get it right for LDAPR/LDAPRH/LDAPRB with no immediate offset.

These instructions were handled in the old decoder as part of disas_ldst_atomic(), but unlike all the other insns that function decoded (LDADD, LDCLR, etc) these insns are "ordered", not "atomic", so they should be using check_ordered_align() rather than check_atomic_align(). Commit c1a1f80518d360b used check_atomic_align() regardless for everything in disas_ldst_atomic(). We then carried that incorrect check over in the decodetree conversion, where LDAPR/LDAPRH/LDAPRB are now handled by trans_LDAPR().

The effect is that when FEAT_LSE2 is implemented, these instructions don't honour the SCTLR_ELx.nAA bit and will generate alignment faults when they should not.

(The LDAPR insns with an immediate offset were in disas_ldst_ldapr_stlr() and then in trans_LDAPR_i() and trans_STLR_i(), and have always used the correct check_ordered_align().)

Use check_ordered_align() in trans_LDAPR().

Cc: qemu-stable@nongnu.org
Fixes: c1a1f80518d360b ("target/arm: Relax ordered/atomic alignment checks for LSE2")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-3-peter.maydell@linaro.org
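The fix is essentially a one-line change in trans_LDAPR(), sketched here with the remaining argument lists elided:

    -    clean_addr = check_atomic_align(s, a->rn, ...);
    +    clean_addr = check_ordered_align(s, a->rn, ...);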
2024-07-18  target/arm: Fix handling of LDAPR/STLR with negative offset  (Peter Maydell)

When we converted the LDAPR/STLR instructions to decodetree we accidentally introduced a regression where the offset is negative. The 9-bit immediate field is signed, and the old hand decoder correctly used sextract32() to get it out of the insn word, but the ldapr_stlr_i pattern in the decode file used "imm:9" instead of "imm:s9", so it treated the field as unsigned.

Fix the pattern to treat the field as a signed immediate.

Cc: qemu-stable@nongnu.org
Fixes: 2521b6073b7 ("target/arm: Convert LDAPR/STLR (imm) to decodetree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2419
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-2-peter.maydell@linaro.org
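In C terms the difference is extract32() versus sextract32() (both are existing QEMU bitops helpers); decodetree's "imm:9" generates the former and "imm:s9" the latter. Field position shown for the usual imm9 placement, as an assumption:

    /* unsigned extraction: an offset of -1 decodes as 0x1ff */
    imm = extract32(insn, 12, 9);
    /* signed extraction, what "imm:s9" generates: -1 stays -1 */
    imm = sextract32(insn, 12, 9);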
2024-07-11  target/arm: Convert PMULL to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SMULL, UMULL, SMLAL, UMLAL, SMLSL, UMLSL to decodetree  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()  (Peter Maydell)

In commit a96edb687e76 we set the cpu_exec_halt field of the TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the arm_v7m_tcg_ops struct unchanged. That isn't wrong, because for M-profile FEAT_WFxT doesn't exist and the default handling for "no cpu_exec_halt method" is correct, but it's perhaps a little confusing. We would also like to make setting the cpu_exec_halt method mandatory.

Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use for A-profile. (On M-profile we never set up the wfxt timer so there is no change in behaviour here.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11  target/arm: Use cpu_env in cpu_untagged_addr  (Richard Henderson)

In a completely artificial memset benchmark, object_dynamic_cast_assert dominates the profile, even above guest address resolution and the underlying host memset.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702154911.1667418-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
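The change amounts to fetching the env pointer directly instead of going through the QOM cast; an illustrative diff of the shape, with the untagging logic elided:

    -    ARMCPU *cpu = ARM_CPU(cs);       /* object_dynamic_cast_assert */
    -    ... cpu->env ...
    +    CPUARMState *env = cpu_env(cs);  /* plain pointer arithmetic */
    +    ... env ...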
2024-07-11  target/arm: Allow FPCR bits that aren't in FPSCR  (Peter Maydell)

In order to allow FPCR bits that aren't in the FPSCR (like the new bits that are defined for FEAT_AFP), we need to make sure that writes to the FPSCR only write to the bits of FPCR that are architecturally mapped, and not the others.

Implement this with a new function vfp_set_fpcr_masked() which takes a mask of which bits to update.

(We could do the same for FPSR, but we leave that until we actually are likely to need it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-10-peter.maydell@linaro.org
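A sketch of the masked-update idea; the body and call site are assumed shapes, not the verbatim patch (FPSCR_FPCR_MASK is the macro from the renaming commit below):

    static void vfp_set_fpcr_masked(CPUARMState *env, uint32_t val,
                                    uint32_t mask)
    {
        uint32_t fpcr = vfp_get_fpcr(env);
        fpcr = (fpcr & ~mask) | (val & mask);
        /* ... store fpcr back and update any cached state ... */
    }

    /* a write to FPSCR then only touches the architecturally
     * mapped FPCR bits, something like: */
    vfp_set_fpcr_masked(env, val, FPSCR_FPCR_MASK);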
2024-07-11  target/arm: Rename FPSR_MASK and FPCR_MASK and define them symbolically  (Peter Maydell)

Now that we store FPSR and FPCR separately, the FPSR_MASK and FPCR_MASK macros are slightly confusingly named and the comment describing them is out of date. Rename them to FPSCR_FPSR_MASK and FPSCR_FPCR_MASK, document that they are the mask of which FPSCR bits are architecturally mapped to which AArch64 register, and define them symbolically rather than as hex values. (This latter requires defining some extra macros for bits which we haven't previously defined.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-9-peter.maydell@linaro.org
2024-07-11  target/arm: Rename FPCR_ QC, NZCV macros to FPSR_  (Peter Maydell)

The QC, N, Z, C, V bits live in the FPSR, not the FPCR. Rename the macros that define these bits accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-8-peter.maydell@linaro.org
2024-07-11  target/arm: Store FPSR and FPCR in separate CPU state fields  (Peter Maydell)

Now that we have refactored the set/get functions so that the FPSCR format is no longer the authoritative one, we can keep FPSR and FPCR in separate CPU state fields.

As well as the get and set functions, we also have a scattering of places in the code which directly access vfp.xregs[ARM_VFP_FPSCR] to extract single fields which are stored there. These all change to directly access either vfp.fpsr or vfp.fpcr, depending on the location of the field. (Most commonly, this is the NZCV flags.)

We make the field in the CPU state struct 64 bits, because architecturally FPSR and FPCR are 64 bits. However we leave the types of the arguments and return values of the get/set functions as 32 bits, since we don't need to make that change with the current architecture and various callsites would be unable to handle set bits in the high half (for instance the gdbstub protocol assumes they're only 32 bit registers).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-7-peter.maydell@linaro.org
2024-07-11  target/arm: Implement store_cpu_field_low32() macro  (Peter Maydell)

We already have a load_cpu_field_low32() to load the low half of a 64-bit CPU struct field to a TCGv_i32; however we haven't yet needed the store equivalent. We'll want that in the next patch, so implement it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-6-peter.maydell@linaro.org
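A plausible definition, mirroring the existing load_cpu_field_low32() (offsetoflow32() selects the low 32 bits of a 64-bit field independent of host endianness); this is an assumed shape, not necessarily the exact macro:

    #define store_cpu_field_low32(val, name)                              \
        tcg_gen_st_i32(val, tcg_env, offsetoflow32(CPUARMState, name))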
2024-07-11  target/arm: Support migration when FPSR/FPCR won't fit in the FPSCR  (Peter Maydell)

To support FPSR and FPCR bits that don't exist in the AArch32 FPSCR view of floating point control and status (such as the FEAT_AFP ones), we need to make sure those bits can be migrated. This commit allows that, whilst maintaining backwards and forwards migration compatibility for CPUs where there are no such bits:

On sending:
 * If either the FPCR or the FPSR include set bits that are not visible in the AArch32 FPSCR view of floating point control/status then we send the FPCR and FPSR as two separate fields in a new cpu/vfp/fpcr_fpsr subsection, and we send a 0 for the old FPSCR field in cpu/vfp
 * Otherwise, we don't send the fpcr_fpsr subsection, and we send an FPSCR-format value in cpu/vfp as we did previously

On receiving:
 * if we see a non-zero FPSCR field, that is the right information
 * if we see a fpcr_fpsr subsection then that has the information
 * if we see neither, then FPSCR/FPCR/FPSR are all zero on the source; cpu_pre_load() ensures the CPU state defaults to that
 * if we see both, then the migration source is buggy or malicious; either the fpcr_fpsr or the FPSCR will "win" depending which is first in the migration stream; we don't care which that is

We make the new FPCR and FPSR on-the-wire data be 64 bits, because architecturally these registers are that wide, and this avoids the need to engage in further migration-compatibility contortions in future if some new architecture revision defines bits in the high half of either register.

(We won't ever send the new migration subsection until we add support for a CPU feature which enables setting overlapping FPCR bits, like FEAT_AFP.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-5-peter.maydell@linaro.org
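A sketch of the subsection's "needed" test described above; the function name is assumed, and it uses the FPSCR_*_MASK macros from the renaming commit below:

    static bool fpcr_fpsr_needed(void *opaque)
    {
        ARMCPU *cpu = opaque;
        CPUARMState *env = &cpu->env;

        /* Send the new subsection only if some set bit would be
         * invisible in the AArch32 FPSCR view. */
        return (vfp_get_fpcr(env) & ~FPSCR_FPCR_MASK)
            || (vfp_get_fpsr(env) & ~FPSCR_FPSR_MASK);
    }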
2024-07-11  target/arm: Make vfp_set_fpscr() call vfp_set_{fpcr, fpsr}  (Peter Maydell)

Make vfp_set_fpscr() call vfp_set_fpsr() and vfp_set_fpcr() instead of the other way around.

The masking we do when getting and setting vfp.xregs[ARM_VFP_FPSCR] is a little awkward, but we are going to change where we store the underlying FPSR and FPCR information in a later commit, so it will go away then.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-4-peter.maydell@linaro.org
2024-07-11  target/arm: Make vfp_get_fpscr() call vfp_get_{fpcr, fpsr}  (Peter Maydell)

In AArch32, the floating point control and status bits are all in a single register, FPSCR. In AArch64, these were split into separate FPCR and FPSR registers, but the bit layouts remained the same, with no overlaps, so that you could construct an FPSCR value by ORing FPCR and FPSR, or equivalently could produce FPSR and FPCR by masking an FPSCR value. For QEMU's implementation, we opted to use masking to produce FPSR and FPCR, because we started with an AArch32 implementation of FPSCR.

The addition of the (AArch64-only) FEAT_AFP adds new bits to the FPCR which overlap with some bits in the FPSR. This means we'll no longer be able to consider the FPSCR-encoded value as the primary one, but instead need to treat FPSR/FPCR as the primary encoding and construct the FPSCR from those. (This remains possible because the FEAT_AFP bits in FPCR don't appear in the FPSCR.)

As the first step in this refactoring, make vfp_get_fpscr() call vfp_get_fpcr() and vfp_get_fpsr(), instead of the other way around.

Note that vfp_get_fpcsr_from_host() returns only bits in the FPSR (for the cumulative fp exception bits), so we can simply rename it without needing to add a new function for getting FPCR bits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-3-peter.maydell@linaro.org
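After this step the composition runs in the direction the message describes; a minimal sketch of the new shape (not the verbatim patch):

    uint32_t vfp_get_fpscr(CPUARMState *env)
    {
        /* Pre-FEAT_AFP, FPCR and FPSR occupy disjoint bits of the
         * FPSCR view, so the composition is a simple OR. */
        return vfp_get_fpcr(env) | vfp_get_fpsr(env);
    }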
2024-07-11  target/arm: Correct comments about M-profile FPSCR  (Peter Maydell)

The M-profile FPSCR LTPSIZE is bits [18:16]; this is the same field as A-profile FPSCR Len, not Stride. Correct the comment in vfp_get_fpscr().

We also implemented M-profile FPSCR.QC, but forgot to delete a TODO comment from vfp_set_fpscr(); remove it now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-2-peter.maydell@linaro.org
2024-07-05  gdbstub: Add support for MTE in user mode  (Gustavo Romero)

This commit implements the stubs to handle the qIsAddressTagged, qMemTag, and QMemTag GDB packets, allowing all GDB 'memory-tag' subcommands to work with QEMU gdbstub on aarch64 user mode. It also implements the get/set functions for the special GDB MTE register 'tag_ctl', used to control the MTE fault type at runtime.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-Id: <20240628050850.536447-11-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-40-alex.bennee@linaro.org>

2024-07-05  target/arm: Make some MTE helpers widely available  (Gustavo Romero)

Make the MTE helpers allocation_tag_mem_probe, load_tag1, and store_tag1 available to other subsystems.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240628050850.536447-6-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-35-alex.bennee@linaro.org>
2024-07-05  target/arm: Fix exception case in allocation_tag_mem_probe  (Gustavo Romero)

If the page in 'ptr_access' is inaccessible and 'probe' is true, allocation_tag_mem_probe should not throw an exception, but currently it does; fix it.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240628050850.536447-5-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-34-alex.bennee@linaro.org>
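The shape of the fix, with assumed local names: when probing, report failure instead of raising.

    if (!page_ok) {
        if (probe) {
            return NULL;    /* caller asked to probe: no exception */
        }
        /* otherwise raise the MMU/tag fault as before */
    }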
2024-07-01  target/arm: Enable FEAT_Debugv8p8 for -cpu max  (Gustavo Romero)

Enable FEAT_Debugv8p8 for max CPU. This feature is out of scope for QEMU since it concerns the external debug interface for JTAG, but is mandatory in Armv8.8 implementations, hence it is reported as supported in the ID registers.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-4-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-01  target/arm: Move initialization of debug ID registers  (Gustavo Romero)

Move the initialization of the debug ID registers to aa32_max_features, which is used to set the 32-bit ID registers. This ensures that the debug ID registers are consistently set for the max CPU in a single place.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-3-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01  target/arm: Fix indentation  (Gustavo Romero)

Fix comment indentation by adding a missing space.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240624180915.4528-2-gustavo.romero@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-01  target/arm: Delete dead code from disas_simd_indexed  (Richard Henderson)

MLA, MLS, SQDMULH, SQRDMULH were converted with 8db93dcd3def and f80701cb44d, and this code should have been removed then.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240625183536.1672454-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>