path: root/target/arm/tcg
Age | Commit message | Author
2024-09-19 | target/arm: Convert handle_scalar_simd_shri to decodetree | Richard Henderson

This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert SHRN, RSHRN to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Split out subroutines of handle_shri_with_rndacc | Richard Henderson

There isn't a lot of commonality among the different paths of
handle_shri_with_rndacc. Split them out into separate functions,
which will be usable during the decodetree conversion.

Simplify 64-bit rounding operations to not require double-word
arithmetic.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

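The 64-bit trick is small enough to sketch here (illustrative, not the
literal patch): the rounding increment is just the last bit shifted
out, added after the shift, so no 128-bit intermediate is needed.

    /* Rounding unsigned shift right by 1..64 without double-word
     * arithmetic: add the last bit shifted out after the shift. */
    static uint64_t urshr_d(uint64_t x, unsigned sh)
    {
        if (sh == 64) {
            return x >> 63;    /* only the rounding bit survives */
        }
        return (x >> sh) + ((x >> (sh - 1)) & 1);
    }
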
2024-09-19 | target/arm: Push tcg_rnd into handle_shri_with_rndacc | Richard Henderson

We always pass the same value for round; compute it within common
code.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert SSHLL, USHLL to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Use {,s}extract in handle_vec_simd_wshli | Richard Henderson

Combine the right shift with the extension via the tcg extract
operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

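For reference, the tcg extract ops fold "shift down, then zero- or
sign-extend" into one operation; a hypothetical use with illustrative
operand names:

    /* Instead of shri followed by ext{8,16,32}{u,s}: one op pulls
     * the element at bit offset 'ofs' out of src and extends it. */
    tcg_gen_sextract_i64(dst, src, ofs, esize);  /* signed (SSHLL) */
    tcg_gen_extract_i64(dst, src, ofs, esize);   /* unsigned (USHLL) */
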
2024-09-19 | target/arm: Convert handle_vec_simd_shli to decodetree | Richard Henderson

This includes SHL and SLI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert handle_vec_simd_shri to decodetree | Richard Henderson

This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Fix whitespace near gen_srshr64_i64 | Richard Henderson

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Introduce gen_gvec_sshr, gen_gvec_ushr | Richard Henderson

Handle the two special cases within these new functions instead of
higher in the call stack.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert MOVI, FMOV, ORR, BIC (vector immediate) to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert FMOVI (scalar, immediate) to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert FMAXNMV, FMINNMV, FMAXV, FMINV to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert ADDV, *ADDLV, *MAXV, *MINV to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Simplify do_reduction_op | Richard Henderson

Use simple shift and add instead of ctpop, ctz, shift and mask.
Unlike SVE, there is no predicate to disable elements.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

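The shape of the simplification is a pairwise tree reduction; a
sketch with assumed names (the patch itself may structure this
differently):

    /* Halve the element count each pass, combining element i with
     * element i + n/2; all indexing is simple shifts and adds.
     * Assumes a power-of-two element count, as AdvSIMD guarantees. */
    for (int n = elements; n > 1; n >>= 1) {
        for (int i = 0; i < (n >> 1); i++) {
            elt[i] = fn(elt[i], elt[i + (n >> 1)]);
        }
    }
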
2024-09-19 | target/arm: Convert UZP, TRN, ZIP to decodetree | Richard Henderson

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert TBL, TBX to decodetree | Richard Henderson

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Convert EXT to decodetree | Richard Henderson

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Use tcg_gen_extract2_i64 for EXT | Richard Henderson

The extract2 tcg op performs the same operation as the do_ext64
function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

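An illustrative call (operand names assumed): tcg_gen_extract2_i64
reads 64 bits at a bit offset from the 128-bit concatenation hi:lo,
which is exactly the shift-and-combine that do_ext64 computed by
hand.

    /* ret = (lo >> ofs) | (hi << (64 - ofs)), for 0 < ofs < 64;
     * for EXT the offset is the byte position scaled to bits. */
    tcg_gen_extract2_i64(dest, lo, hi, pos * 8);
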
2024-09-19 | target/arm: Use cmpsel in gen_sshl_vec | Richard Henderson

Instead of cmp+and or cmp+andc, use cmpsel. This will be better for
hosts that use predicate registers for cmp.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Use cmpsel in gen_ushl_vec | Richard Henderson

Instead of cmp+and or cmp+andc, use cmpsel. This will be better for
hosts that use predicate registers for cmp.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

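The same pattern applies to this commit and the gen_sshl_vec one
above; schematically, with illustrative operand names:

    /* Before: mask = (shift < bits) ? -1 : 0; result = shifted & mask.
     * After: one cmpsel picks the shifted value or zero directly. */
    tcg_gen_cmpsel_vec(TCG_COND_LTU, vece, dst,
                       shift, max_shift,    /* condition operands */
                       shifted, zero);      /* taken / not-taken values */
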
2024-09-19 | target/arm: Replace tcg_gen_dupi_vec with constants in translate-sve.c | Richard Henderson

Instead of copying a constant into a temporary with dupi, use a
vector constant directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-09-19 | target/arm: Replace tcg_gen_dupi_vec with constants in gengvec.c | Richard Henderson

Instead of copying a constant into a temporary with dupi, use a
vector constant directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

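The before/after shape of these two commits, roughly:

    /* Before: allocate a temp and dup the immediate into it. */
    TCGv_vec t = tcg_temp_new_vec_matching(d);
    tcg_gen_dupi_vec(vece, t, imm);

    /* After: reference an immutable vector constant directly. */
    TCGv_vec c = tcg_constant_vec_matching(d, vece, imm);
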
2024-09-13 | target/arm/tcg: refine cache descriptions with a wrapper | Alireza Sanaee

This patch allows for easier manipulation of the cache description
register, CCSIDR, which is also helpful for testing. Currently the
numbers are hard-coded and prone to error, so this patch adds a
wrapper to describe the caches of the different CPU types available
in tcg. A single function, make_ccsidr, supports both cases via a
FORMAT parameter that can be LEGACY or CCIDX, which determines the
layout of the register: the 32-bit version follows specification [1],
while the 64-bit version follows specification [2].

[1] B4.1.19, ARM Architecture Reference Manual ARMv7-A and ARMv7-R
    edition, https://developer.arm.com/documentation/ddi0406
[2] D23.2.29, ARM Architecture Reference Manual for A-profile
    Architecture, https://developer.arm.com/documentation/ddi0487/latest/

Signed-off-by: Alireza Sanaee <alireza.sanaee@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240903144550.280-1-alireza.sanaee@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

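A hypothetical sketch of such a wrapper, with the field layout taken
from the cited specifications (the signature in the actual patch may
differ; ctz32() is QEMU's count-trailing-zeros helper):

    typedef enum { CCSIDR_FORMAT_LEGACY, CCSIDR_FORMAT_CCIDX } CCSIDRFormat;

    static uint64_t make_ccsidr(CCSIDRFormat fmt, unsigned assoc,
                                unsigned linesize, unsigned cachesize)
    {
        unsigned lg_line = ctz32(linesize);   /* log2(line bytes) */
        unsigned sets = cachesize / (assoc * linesize);

        if (fmt == CCSIDR_FORMAT_LEGACY) {
            /* 32-bit: NumSets[27:13] Assoc[12:3] LineSize[2:0] */
            return ((sets - 1) << 13) | ((assoc - 1) << 3)
                   | (lg_line - 4);
        }
        /* CCIDX 64-bit: NumSets[55:32] Assoc[23:3] LineSize[2:0] */
        return ((uint64_t)(sets - 1) << 32) | ((assoc - 1) << 3)
               | (lg_line - 4);
    }
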
2024-09-05 | target/arm: Correct names of VFP VFNMA and VFNMS insns | Peter Maydell

In vfp.decode we have the names of the VFNMA and VFNMS instructions
the wrong way around. The architecture says that bit 6 is the 'op'
bit, which is 1 for VFNMA and 0 for VFNMS, but we label these two
lines of decode the other way around. This doesn't cause any
user-visible problem because in the handling of these insns in
translate-vfp.c we give VFNMA the behaviour specified for VFNMS and
vice-versa, but it's confusing when reading the code.

Switch the names of the VFP VFNMA and VFNMS instructions in the
decode file and flip the behaviour also.

NB: the instructions VFMA and VFMS *are* decoded with op=0 for VFMA
and op=1 for VFMS; the confusion probably arose because we assumed
VFNMA and VFNMS to be the same way around.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2536
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240830152156.2046590-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Enable FEAT_EBF16 in the "max" CPU | Peter Maydell

Now that we've implemented the required behaviour for FEAT_EBF16, we
can enable it for the "max" CPU type, list it in our documentation,
and delete a TODO comment about it being missing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Implement FPCR.EBF=1 semantics for bfdotadd() | Peter Maydell

Implement the FPCR.EBF=1 semantics for bfdotadd() operations:
 * is_ebf() sets up fpst and fpst_odd
 * bfdotadd_ebf() implements the fused paired-multiply-and-add
   operation that we need

The paired-multiply-and-add is similar to f16_dotadd() and we use
the same trick here as in that function, but the inputs here are
bfloat16 rather than float16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

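An assumed shape for the fused path, where fpst_odd is preconfigured
for round-to-odd so the intermediates show no double rounding
(details may differ from the patch):

    static float32 bfdotadd_ebf(float32 sum, uint32_t e1, uint32_t e2,
                                float_status *fpst, float_status *fpst_odd)
    {
        /* bfloat16 -> float32 widening is exact: a bf16 value is the
         * top half of an f32, so shifting into place suffices. */
        float32 n0 = (e1 & 0xffff) << 16, n1 = e1 & 0xffff0000u;
        float32 m0 = (e2 & 0xffff) << 16, m1 = e2 & 0xffff0000u;

        float32 p0 = float32_mul(n0, m0, fpst_odd);
        float32 p1 = float32_mul(n1, m1, fpst_odd);
        float32 p = float32_add(p0, p1, fpst_odd); /* round-to-odd */
        return float32_add(sum, p, fpst);          /* final rounding */
    }
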
2024-09-05 | target/arm: Prepare bfdotadd() callers for FEAT_EBF support | Peter Maydell

We use bfdotadd() in four callsites for various helper functions.
Currently this all assumes that we have the FPCR.EBF=0 semantics.
For FPCR.EBF=1 we will need to:
 * call a different routine to bfdotadd() because we need to do a
   fused multiply-add rather than separate multiply and add steps
 * use a different float_status that honours the FPCR rounding mode
   and denormal-flushing fields
 * pass in an extra float_status that has been set up to perform
   round-to-odd rounding

To prepare for this, refactor all the callsites so that instead of

    for (...) {
        x = bfdotadd(...);
    }

they are:

    float_status fpst, fpst_odd;
    if (is_ebf(env, &fpst, &fpst_odd)) {
        for (...) {
            x = bfdotadd_ebf(..., &fpst, &fpst_odd);
        }
    } else {
        for (...) {
            x = bfdotadd(..., &fpst);
        }
    }

For the moment the is_ebf() function always returns false, sets up
fpst for EBF=0 semantics and never sets up fpst_odd; bfdotadd_ebf()
will assert if called. We'll fill in the handling for EBF=1 in the
next commit.

This change should be a zero-behaviour-change refactor.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Pass env pointer through to gvec_bfmmla helper | Peter Maydell

Pass the env pointer through to the gvec_bfmmla helper, so we can use
it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Pass env pointer through to gvec_bfdot_idx helper | Peter Maydell

Pass the env pointer through to the gvec_bfdot_idx helper, so we can
use it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Pass env pointer through to gvec_bfdot helper | Peter Maydell

Pass the env pointer through to the gvec_bfdot helper, so we can use
it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-09-05 | target/arm: Pass env pointer through to sme_bfmopa helper | Peter Maydell

To implement the FEAT_EBF16 semantics, we are going to need the
CPUARMState env pointer in every helper function which calls
bfdotadd(). Pass the env pointer through from generated code to the
sme_bfmopa helper. (We'll add the code that uses it when we've
adjusted all the helpers to have access to the env pointer.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2024-08-13 | target/arm: Fix usage of MMU indexes when EL3 is AArch32 | Peter Maydell

Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other
   privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at
the wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug
that we wouldn't honour the PAN bit for Secure PL1, because there's
no equivalent _PAN mmu index for it.

We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0,
   EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
   "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This
   matches how we use indexes for the AArch64 regimes, and preserves
   properties like being able to determine the privilege level from
   an MMU index without any other information. However it would add
   two MMU indexes (we can share one with ARMMMUIdx_EL3), and we are
   already using 14 of the 16 that the core TLB code permits.
 * The more complicated approach is the one we take here. We use the
   same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0 as we
   do for NonSecure PL1&0. This saves on MMU indexes, but means we
   need to check in some places whether we're in the Secure PL1&0
   regime or not before we interpret an MMU index.

The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.

Note for potential stable backports: taking also the previous
(comment-change-only) commit might make the backport easier.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org

2024-08-13 | target/arm: Clear high SVE elements in handle_vec_simd_wshli | Richard Henderson

AdvSIMD instructions are supposed to zero bits beyond 128. Affects
SSHLL, USHLL, SSHLL2, USHLL2.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240717060903.205098-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-08-09 | target/arm: Fix BTI versus CF_PCREL | Richard Henderson

With pcrel, we cannot check the guarded page bit at translation
time, as different mappings of the same physical page may or may not
have the GP bit set. Instead, add a couple of helpers to check the
page at runtime, after all other filters that might obviate the need
for the check.

The set_btype_for_br call must be moved after the gen_a64_set_pc
call to ensure the current pc can still be computed.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240802003028.795476-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-08-01 | target/arm: Handle denormals correctly for FMOPA (widening) | Peter Maydell

The FMOPA (widening) SME instruction takes pairs of half-precision
floating point values, widens them to single-precision, does a
two-way dot product and accumulates the results into a
single-precision destination. We don't quite correctly handle the
FPCR bits FZ and FZ16 which control flushing of denormal inputs and
outputs.

This is because at the moment we pass a single float_status value to
the helper function, which then uses that configuration for all the
fp operations it does. However, because the inputs to this operation
are float16 and the outputs are float32 we need to use fp_status_f16
for the float16 input widening but the normal fp_status for
everything else. Otherwise we will apply the flushing control
FPCR.FZ16 to the 32-bit output rather than the FPCR.FZ control, and
incorrectly flush a denormal output to zero when we should not (or
vice-versa).

(In commit 207d30b5fdb5b we tried to fix the FZ handling but didn't
get it right, switching from "use FPCR.FZ for everything" to "use
FPCR.FZ16 for everything".)

Pass the CPU env to the sme_fmopa_h helper instead of an fp_status
pointer, and have the helper pass an extra fp_status into the
f16_dotadd() function so that we can use the right status for the
right parts of this operation.

Cc: qemu-stable@nongnu.org
Fixes: 207d30b5fdb5 ("target/arm: Use FPST_F16 for SME FMOPA (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2373
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

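The resulting status split, roughly (variable names illustrative):

    /* Widening honours FPCR.FZ16 via the f16 status; the 32-bit
     * multiply-add and accumulate honour FPCR.FZ via the normal one. */
    float32 a = float16_to_float32(ha, true, fpst_f16);
    float32 b = float16_to_float32(hb, true, fpst_f16);
    sum = float32_muladd(a, b, sum, 0, fpst);
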
2024-07-29 | target/arm: Avoid shifts by -1 in tszimm_shr() and tszimm_shl() | Peter Maydell

The function tszimm_esz() returns a shift amount, or possibly -1 in
certain cases that correspond to unallocated encodings in the
instruction set. We catch these later in the trans_ functions
(generally with an "a->esz < 0" check), but before we get there the
decodetree-generated code will also call tszimm_shr() or
tszimm_shl(), which will use the tszimm_esz() return value as a
shift count without checking that it is non-negative; a negative
shift count is undefined behaviour.

Avoid the UB by checking the return value in tszimm_shr() and
tszimm_shl().

Cc: qemu-stable@nongnu.org
Resolves: Coverity CID 1547617, 1547694
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-4-peter.maydell@linaro.org

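The shape of the fix, approximately:

    static int tszimm_shr(DisasContext *s, int x)
    {
        int esz = tszimm_esz(s, x);
        if (esz < 0) {
            /* Unallocated encoding: return any in-range value; the
             * trans_ function rejects it via its own esz < 0 check. */
            return 0;
        }
        return (16 << esz) - x;
    }
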
2024-07-29 | target/arm: Fix UMOPA/UMOPS of 16-bit values | Peter Maydell

The UMOPA/UMOPS instructions are supposed to multiply unsigned 8 or
16 bit elements and accumulate the products into a 64-bit element.
In the Arm ARM pseudocode, this is done with the usual
infinite-precision signed arithmetic. However our implementation
doesn't quite get it right, because in the DEF_IMOP_64() macro we do:

    sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0);

where NTYPE and MTYPE are uint16_t or int16_t. In the uint16_t case,
the C usual arithmetic conversions mean the values are converted to
"int" type and the multiply is done as a 32-bit multiply. This means
that if the inputs are, for example, 0xffff and 0xffff then the
result is 0xFFFE0001 as an int, which is then promoted to uint64_t
for the accumulation into sum; this promotion incorrectly sign
extends the multiply.

Avoid the incorrect sign extension by casting to int64_t before the
multiply, so we do the multiply as 64-bit signed arithmetic, which
is a type large enough that the multiply can never overflow into the
sign bit.

(The equivalent 8-bit operations in DEF_IMOP_32() are fine, because
the 8-bit multiplies can never overflow into the sign bit of a
32-bit integer.)

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2372
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-3-peter.maydell@linaro.org

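A standalone illustration of the promotion pitfall (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t n = 0xffff, m = 0xffff;
        uint64_t sum1 = 0, sum2 = 0;

        /* n and m promote to (signed) int; strictly this multiply
         * overflows int (UB), and in practice wraps to 0xFFFE0001, a
         * negative int that sign-extends when added to a uint64_t. */
        sum1 += n * m;

        /* Casting an operand to int64_t forces a 64-bit multiply
         * that cannot overflow into the sign bit. */
        sum2 += (int64_t)n * m;

        printf("%#llx %#llx\n", (unsigned long long)sum1,
               (unsigned long long)sum2);
        /* typically prints 0xfffffffffffe0001 0xfffe0001 */
        return 0;
    }
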
2024-07-29 | target/arm: Don't assert for 128-bit tile accesses when SVL is 128 | Peter Maydell

For an instruction which accesses a 128-bit element tile when the
SVL is also 128 (for example MOV z0.Q, p0/M, ZA0H.Q[w0,0]), we will
assert in get_tile_rowcol():

    qemu-system-aarch64: ../../tcg/tcg-op.c:926: tcg_gen_deposit_z_i32:
    Assertion `len > 0' failed.

This happens because we calculate

    len = ctz32(streaming_vec_reg_size(s)) - esz;

but if the SVL and the element size are the same, len is 0, and the
deposit operation asserts.

In this case the ZA storage contains exactly one 128 bit element ZA
tile, and the horizontal or vertical slice is just that tile. This
means that regardless of the index value in the Ws register, we
always access that tile. (In pseudocode terms, we calculate
(index + offset) MOD 1, which is 0.)

Special-case len == 0 to avoid hitting the assertion in
tcg_gen_deposit_z_i32().

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-2-peter.maydell@linaro.org

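The special case, schematically (variable names assumed from the
commit text, not the literal patch):

    len = ctz32(streaming_vec_reg_size(s)) - esz;
    if (len == 0) {
        /* SVL == element size: the ZA storage is exactly one tile
         * element, so any index selects offset 0. */
        tcg_gen_movi_i32(tmp, 0);
    } else {
        tcg_gen_deposit_z_i32(tmp, index, pos, len);
    }
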
2024-07-23 | target/arm: Use set/clear_helper_retaddr in SVE and SME helpers | Richard Henderson

Avoid a race condition with munmap in another thread. Use them
around blocks that exclusively use "host_fn". Keep the blocks as
small as possible, but without setting and clearing for every
operation on one page.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

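The usage pattern, schematically (the block contents are
illustrative):

    /* Bracket the host-pointer accesses so a fault from a concurrent
     * munmap can be unwound correctly; one bracket per page's worth
     * of work, not per element. */
    set_helper_retaddr(ra);
    /* ... block of host_fn() accesses to one page ... */
    clear_helper_retaddr();
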
2024-07-23 | target/arm: Use set/clear_helper_retaddr in helper-a64.c | Richard Henderson

Use these in helper_dc_dva and the FEAT_MOPS routines to avoid a
race condition with munmap in another thread.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2024-07-18 | target/arm: Use FPST_F16 for SME FMOPA (widening) | Richard Henderson

This operation has float16 inputs and thus must use the FZ16 control
not the FZ control.

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac75 ("target/arm: Implement FMOPA, FMOPS (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2374
Reported-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-18 | target/arm: Use float_status copy in sme_fmopa_s | Daniyal Khan

We made a copy above because the fp exception flags are not
propagated back to the FPST register, but then failed to use the
copy.

Cc: qemu-stable@nongnu.org
Fixes: 558e956c719 ("target/arm: Implement FMOPA, FMOPS (non-widening)")
Signed-off-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-2-richard.henderson@linaro.org
[rth: Split from a larger patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

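The pattern being fixed, schematically (operand names illustrative):

    /* Take a local copy so accumulated exception flags are
     * discarded rather than written back... */
    float_status fpst = *(float_status *)vst;
    /* ... */
    /* ...and, crucially, actually use the copy: */
    d = float32_muladd(n, m, d, 0, &fpst);   /* &fpst, not vst */
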
2024-07-18 | target/arm: LDAPR should honour SCTLR_ELx.nAA | Peter Maydell

In commit c1a1f80518d360b when we added the FEAT_LSE2 relaxations to
the alignment requirements for atomic and ordered loads and stores,
we didn't quite get it right for LDAPR/LDAPRH/LDAPRB with no
immediate offset.

These instructions were handled in the old decoder as part of
disas_ldst_atomic(), but unlike all the other insns that function
decoded (LDADD, LDCLR, etc) these insns are "ordered", not "atomic",
so they should be using check_ordered_align() rather than
check_atomic_align(). Commit c1a1f80518d360b used
check_atomic_align() regardless for everything in
disas_ldst_atomic(). We then carried that incorrect check over in
the decodetree conversion, where LDAPR/LDAPRH/LDAPRB are now handled
by trans_LDAPR().

The effect is that when FEAT_LSE2 is implemented, these instructions
don't honour the SCTLR_ELx.nAA bit and will generate alignment
faults when they should not.

(The LDAPR insns with an immediate offset were in
disas_ldst_ldapr_stlr() and then in trans_LDAPR_i() and
trans_STLR_i(), and have always used the correct
check_ordered_align().)

Use check_ordered_align() in trans_LDAPR().

Cc: qemu-stable@nongnu.org
Fixes: c1a1f80518d360b ("target/arm: Relax ordered/atomic alignment checks for LSE2")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-3-peter.maydell@linaro.org

2024-07-18 | target/arm: Fix handling of LDAPR/STLR with negative offset | Peter Maydell

When we converted the LDAPR/STLR instructions to decodetree we
accidentally introduced a regression where the offset is negative.
The 9-bit immediate field is signed, and the old hand decoder
correctly used sextract32() to get it out of the insn word, but the
ldapr_stlr_i pattern in the decode file used "imm:9" instead of
"imm:s9", so it treated the field as unsigned.

Fix the pattern to treat the field as a signed immediate.

Cc: qemu-stable@nongnu.org
Fixes: 2521b6073b7 ("target/arm: Convert LDAPR/STLR (imm) to decodetree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2419
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-2-peter.maydell@linaro.org

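In decodetree syntax, "imm:9" declares an unsigned 9-bit field while
"imm:s9" sign-extends it; schematically (not the exact a64.decode
pattern):

    # unsigned field (wrong for a signed offset):  ... imm:9  ...
    # sign-extended field (correct):               ... imm:s9 ...
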
2024-07-11 | target/arm: Convert PMULL to decodetree | Richard Henderson

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11 | target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree | Richard Henderson

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11 | target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree | Richard Henderson

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11 | target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree | Richard Henderson

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11 | target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree | Richard Henderson

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>