path: root/target/arm
2021-09-01  target/arm: Implement MVE VCVT with specified rounding mode  (Peter Maydell)
Implement the MVE VCVT which converts from floating-point to integer using a rounding mode specified by the instruction. We implement this similarly to the Neon equivalents, by passing the required rounding mode as an extra integer parameter to the helper functions. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
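A minimal sketch of the approach described above, in the style of QEMU's softfloat API: the helper saves the current rounding mode from the float_status, installs the one passed in, converts, and restores. The name do_vcvt_rm is illustrative only, and rmode is assumed to already be a softfloat rounding mode (the real code converts from the ARM encoding first):

    static uint32_t do_vcvt_rm(float32 f, int rmode, float_status *fpst)
    {
        int prev_rmode = get_float_rounding_mode(fpst);
        uint32_t ret;

        set_float_rounding_mode(rmode, fpst);
        ret = float32_to_uint32(f, fpst);
        set_float_rounding_mode(prev_rmode, fpst);
        return ret;
    }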
2021-09-01  target/arm: Implement MVE VCVT between fp and integer  (Peter Maydell)
Implement the MVE "VCVT (between floating-point and integer)" insn. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VCVT between floating and fixed point  (Peter Maydell)
Implement the MVE VCVT insns which convert between floating and fixed point. As with the Neon equivalents, these use essentially the same constant encoding as right-shift-by-immediate. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE fp scalar comparisons  (Peter Maydell)
Implement the MVE fp scalar comparisons VCMP and VPT. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE fp vector comparisons  (Peter Maydell)
Implement the MVE fp vector comparisons VCMP and VPT. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE FP max/min across vector  (Peter Maydell)
Implement the MVE VMAXNMV, VMINNMV, VMAXNMAV, VMINNMAV insns. These calculate the maximum or minimum of floating point elements across a vector, starting with a value in a general purpose register and returning the result there. The pseudocode silences a possible SNaN in the accumulating result on every iteration (by calling FPConvertNaN), but we do it only on the input ra, because if none of the inputs to float*_maxnum or float*_minnum are SNaNs then the result can't be an SNaN. Note that we can't use the float*_maxnuma() etc functions we defined earlier for VMAXNMA and VMINNMA, because we mustn't take the absolute value of the starting general-purpose register value, which could be negative. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
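A sketch of the SNaN handling described above, assuming 32-bit elements; float32_is_signaling_nan(), float32_silence_nan() and float32_maxnum() are QEMU softfloat calls, while the loop shape and variable names are illustrative:

    /* Silence a possible SNaN in the incoming GP-register value once,
     * rather than on every iteration of the accumulation loop. */
    if (float32_is_signaling_nan(ra, fpst)) {
        ra = float32_silence_nan(ra, fpst);
    }
    for (e = 0; e < 4; e++) {
        /* maxnum of non-SNaN inputs can never produce an SNaN */
        ra = float32_maxnum(ra, m[e], fpst);
    }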
2021-09-01  target/arm: Implement MVE fp-with-scalar VFMA, VFMAS  (Peter Maydell)
Implement the MVE fp-with-scalar VFMA and VFMAS insns. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE scalar fp insns  (Peter Maydell)
Implement the MVE scalar floating point insns VADD, VSUB and VMUL. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01  target/arm: Implement MVE VMAXNMA and VMINNMA  (Peter Maydell)
Implement the MVE VMAXNMA and VMINNMA insns; these are 2-operand, but the destination register must be the same as one of the source registers. We defer the decode of the size in bit 28 to the individual insn patterns rather than doing it in the format, because otherwise we would have a single insn pattern that overlapped with two groups (e.g. VMAXNMA with the VMULH_S and VMULH_U groups). Having two insn patterns per insn seems clearer than a complex multilevel nesting of overlapping and non-overlapping groups. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Implement MVE VCMUL and VCMLA  (Peter Maydell)
Implement the MVE VCMUL and VCMLA insns. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Implement MVE VFMA and VFMS  (Peter Maydell)
Implement the MVE VFMA and VFMS insns. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Implement MVE VCADD  (Peter Maydell)
Implement the MVE VCADD insn. Note that here the size bit is the opposite sense to the other 2-operand fp insns. We don't check for the sz == 1 && Qd == Qm UNPREDICTABLE case, because that would mean we can't use the DO_2OP_FP macro in translate-mve.c. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Implement MVE VSUB, VMUL, VABD, VMAXNM, VMINNM  (Peter Maydell)
Implement more simple 2-operand floating point MVE insns. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target/arm: Implement MVE VADD (floating-point)  (Peter Maydell)
Implement the MVE VADD (floating-point) insn. Handling of this is similar to the 2-operand integer insns, except that we must take care to only update the floating point exception status if the least significant bit of the predicate mask for each element is active. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
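A sketch of the exception-status handling described above; the scratch-status trick mirrors what QEMU does elsewhere for predicated fp ops, but the names here are illustrative:

    /* Elements whose predicate low bit is clear compute into a
     * throwaway float_status, so their exception flags are discarded. */
    float_status scratch = *base_fpst;
    float_status *fpst = (mask & 1) ? base_fpst : &scratch;
    d[e] = float32_add(n[e], m[e], fpst);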
2021-08-26  target/arm: Do hflags rebuild in cpsr_write()  (Peter Maydell)
Currently we rely on all the callsites of cpsr_write() to rebuild the cached hflags if they change one of the CPSR bits which we use as a TB flag and cache in hflags. This is a bit awkward when we want to change the set of CPSR bits that we cache, because it means we need to re-audit all the cpsr_write() callsites to see which flags they are writing and whether they now need to rebuild the hflags. Switch instead to making cpsr_write() call arm_rebuild_hflags() itself if one of the bits being changed is a cached bit. We don't do the rebuild for the CPSRWriteRaw write type, because that kind of write is generally doing something special anyway. For the CPSRWriteRaw callsites in the KVM code and inbound migration we definitely don't want to recalculate the hflags; the callsites in boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves anyway because of other CPU state changes they make. This allows us to drop explicit arm_rebuild_hflags() calls in a couple of places where the only reason we needed to call it was the CPSR write. This fixes a bug where we were incorrectly failing to rebuild hflags in the code path for a gdbstub write to CPSR, which meant that you could make QEMU assert by breaking into a running guest, altering the CPSR to change the value of, for example, CPSR.E, and then continuing. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
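A sketch of the resulting logic in cpsr_write(); CACHED_CPSR_BITS stands in for whatever mask of TB-flag-cached bits the implementation defines:

    void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                    CPSRWriteType write_type)
    {
        /* ... update the CPSR fields selected by 'mask' ... */

        /* Rebuild cached hflags if a cached bit changed, but never for
         * raw writes (KVM sync, inbound migration). */
        if (write_type != CPSRWriteRaw &&
            ((env->uncached_cpsr ^ val) & mask & CACHED_CPSR_BITS)) {
            arm_rebuild_hflags(env);
        }
    }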
2021-08-26  target/arm: Implement HSTR.TJDBX  (Peter Maydell)
In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1 access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so don't trap for v8A CPUs. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
2021-08-26  target/arm: Implement HSTR.TTEE  (Peter Maydell)
In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses to the Thumb2EE TEECR and TEEHBR registers to be trapped to the hypervisor. Implement these traps. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
2021-08-26  target/arm: Avoid assertion trying to use KVM and multiple ASes  (Peter Maydell)
KVM cannot support multiple address spaces per CPU; if you try to create more than one then cpu_address_space_init() will assert. In the Arm CPU realize function, detect the configurations which would cause us to need more than one AS, and cleanly fail the realize rather than blundering on into the assertion. This turns this:

    $ qemu-system-aarch64 -enable-kvm -display none -cpu max -machine raspi3b
    qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
    Aborted

into:

    $ qemu-system-aarch64 -enable-kvm -display none -machine raspi3b
    qemu-system-aarch64: Cannot enable KVM when guest CPU has EL3 enabled

and this:

    $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
    qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
    Aborted

into:

    $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
    qemu-system-aarch64: Cannot enable KVM when using an M-profile guest CPU

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/528 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210816135842.25302-3-peter.maydell@linaro.org
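A sketch of the realize-time checks this describes; the exact conditions and field names are approximate:

    if (kvm_enabled()) {
        if (cpu->has_el3) {
            error_setg(errp,
                       "Cannot enable KVM when guest CPU has EL3 enabled");
            return;
        }
        if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
            error_setg(errp,
                       "Cannot enable KVM when using an M-profile guest CPU");
            return;
        }
    }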
2021-08-26  target/arm/cpu64: Validate sve vector lengths are supported  (Andrew Jones)
Future CPU types may specify which vector lengths are supported. We can apply nearly the same logic to validate those lengths as we do for KVM's supported vector lengths. We merge the code where we can, but unfortunately can't completely merge it because KVM requires all vector lengths, power-of-two or not, smaller than the maximum enabled length to also be enabled. The architecture only requires all the power-of-two lengths, though, so TCG will only enforce that. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-5-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
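A sketch of the TCG-side check described above, assuming a per-CPU sve_vq_map bitmap of enabled lengths; pow2floor() and test_bit() are existing QEMU utilities, the rest is illustrative:

    /* TCG: every power-of-two length up to the maximum must be enabled
     * (KVM additionally requires the non-power-of-two ones). */
    for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
        if (!test_bit(vq - 1, cpu->sve_vq_map)) {
            error_setg(errp, "cannot disable sve%d", vq * 128);
            return;
        }
    }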
2021-08-26  target/arm/cpu64: Replace kvm_supported with sve_vq_supported  (Andrew Jones)
Now that we have an ARMCPU member sve_vq_supported we no longer need the local kvm_supported bitmap for KVM's supported vector lengths. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-4-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  target/arm/kvm64: Ensure sve vls map is completely clear  (Andrew Jones)
bitmap_clear() only clears the given range. While the given range should be sufficient in this case, we might as well be 100% sure all bits are zeroed by using bitmap_zero(). Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-3-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
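The change, in essence (kvm_supported is the local bitmap named in the previous commit; the size macro is approximate):

    /* before: clears only the bits in [0, ARM_MAX_VQ) */
    bitmap_clear(kvm_supported, 0, ARM_MAX_VQ);
    /* after: zeroes every word of the map's backing storage */
    bitmap_zero(kvm_supported, ARM_MAX_VQ);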
2021-08-26  target/arm/cpu: Introduce sve_vq_supported bitmap  (Andrew Jones)
Allow CPUs that support SVE to specify which SVE vector lengths they support by setting them in this bitmap. Currently only the 'max' and 'host' CPU types support SVE, and 'host' requires KVM, which obtains its supported bitmap from the host. So we only need to initialize the bitmap for 'max' with TCG. And since 'max' should support all SVE vector lengths, we simply fill the bitmap. Future CPU types may have less trivial maps, though. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-2-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
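The TCG initialization this describes amounts to one call at CPU init; the field is the sve_vq_supported member introduced here, and ARM_MAX_VQ is QEMU's maximum vector-quadword count:

    /* 'max' with TCG supports every architected SVE vector length */
    bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);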
2021-08-25  target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()  (Hamza Mahfooz)
As per commit 5626f8c6d468 ("rcu: Add automatically released rcu_read_lock variants"), RCU_READ_LOCK_GUARD() should be used instead of rcu_read_{un}lock(). Signed-off-by: Hamza Mahfooz <someguy@effective-light.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 20210727235201.11491-1-someguy@effective-light.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
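The shape of the transformation, for reference; the body shown is schematic:

    /* before: explicit lock/unlock, easy to leak on an early return */
    rcu_read_lock();
    /* ... look up the route ... */
    rcu_read_unlock();

    /* after: released automatically when the enclosing scope ends */
    RCU_READ_LOCK_GUARD();
    /* ... look up the route ... */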
2021-08-25  target/arm: Implement M-profile trapping on division by zero  (Peter Maydell)
Unlike A-profile, for M-profile the UDIV and SDIV insns can be configured to raise an exception on division by zero, using the CCR DIV_0_TRP bit. Implement support for setting this bit by making the helper functions raise the appropriate exception. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
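A sketch of the helper-side check; the exception number and plumbing here are approximations of what the commit adds:

    int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
    {
        if (den == 0) {
            if (arm_feature(env, ARM_FEATURE_M) &&
                (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
                raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, GETPC());
            }
            return 0;           /* untrapped division by zero yields 0 */
        }
        if (num == INT32_MIN && den == -1) {
            return INT32_MIN;   /* overflow case defined by the architecture */
        }
        return num / den;
    }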
2021-08-25  target/arm: Re-indent sdiv and udiv helpers  (Peter Maydell)
We're about to make a code change to the sdiv and udiv helper functions, so first fix their indentation and coding style. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
2021-08-25  target/arm: Implement MVE interleaving loads/stores  (Peter Maydell)
Implement the MVE interleaving load/store functions VLD2, VLD4, VST2 and VST4. VLD2 loads 16 bytes of data from memory and writes to 2 consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes to 4 consecutive Qregs. The 'pattern' field in the encoding determines the offset into memory which is accessed and also which elements in the Qregs are written to. (The intention is that a sequence of four consecutive VLD4 with different pattern values performs a complete de-interleaving load of 64 bytes into all elements of the 4 Qregs.) VST2 and VST4 do the same, but for stores. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
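A conceptual model of the complete de-interleave for 32-bit elements (what four VLD4s with pattern values 0..3 achieve together); load32() and the array layout are purely illustrative:

    for (i = 0; i < 4; i++) {          /* element index within each Qreg */
        for (r = 0; r < 4; r++) {      /* destination register Qd+r */
            q[r][i] = load32(base + (i * 4 + r) * 4);
        }
    }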
2021-08-25  target/arm: Implement MVE scatter-gather immediate forms  (Peter Maydell)
Implement the MVE VLDR/VSTR insns which do scatter-gather using base addresses from Qm plus or minus an immediate offset (possibly with writeback). Note that writeback is not predicated but it does have to honour ECI state, so we have to add an eci_mask check to the VSTR_SG macros (the VLDR_SG macros already needed this to be able to distinguish "skip beat" from "set predicated element to 0"). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE scatter-gather insns  (Peter Maydell)
Implement the MVE gather-loads and scatter-stores which form the address by adding a base value from a scalar register to an offset in each element of a vector. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VCTP  (Peter Maydell)
Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so that any element at index Rn or greater is predicated. As with VPNOT, this insn itself is predicable and subject to beatwise execution. The calculation of the mask is the same as is used to determine ltpmask in mve_element_mask(), but we precalculate masklen in generated code to avoid having to have 4 helpers specialized by size. We put the decode line in with the low-overhead-loop insns in t32.decode because it's logically part of that collection of insn patterns, even though it is an MVE-only insn. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VPNOT  (Peter Maydell)
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0 (subject to both predication and to beatwise execution). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMOV to/from 2 general-purpose registers  (Peter Maydell)
Implement the MVE VMOV forms that move data between 2 general-purpose registers and 2 32-bit lanes in a vector register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMAXA, VMINA  (Peter Maydell)
Implement the MVE VMAXA and VMINA insns, which take the absolute value of the signed elements in the input vector and then accumulate the unsigned max or min into the destination vector. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VQABS, VQNEG  (Peter Maydell)
Implement the MVE 1-operand saturating operations VQABS and VQNEG. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE saturating doubling multiply accumulates  (Peter Maydell)
Implement the MVE saturating doubling multiply accumulate insns VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply, double, add the accumulator shifted by the element size, possibly round, saturate to twice the element size, then take the high half of the result. The *MLAH insns do vector * scalar + vector, and the *MLASH insns do vector * vector + scalar. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
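A sketch of the arithmetic for 16-bit elements; the function name and saturation bookkeeping are illustrative:

    static int16_t do_vqdmlah_h(int16_t a, int16_t b, int16_t c,
                                bool round, bool *sat)
    {
        /* multiply, double, add the accumulator shifted by esize */
        int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 16);
        if (round) {
            r += 1 << 15;
        }
        /* saturate to twice the element size... */
        if (r > INT32_MAX) {
            r = INT32_MAX;
            *sat = true;
        } else if (r < INT32_MIN) {
            r = INT32_MIN;
            *sat = true;
        }
        /* ...then take the high half */
        return r >> 16;
    }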
2021-08-25  target/arm: Implement MVE VMLA  (Peter Maydell)
Implement the MVE VMLA insn, which multiplies a vector by a scalar and accumulates into another vector. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMLADAV and VMLSDAV  (Peter Maydell)
Implement the MVE VMLADAV and VMLSDAV insns. Like the VMLALDAV and VMLSLDAV insns already implemented, these accumulate multiplied vector elements, but they accumulate a 32-bit result rather than a 64-bit one. Note that these encodings overlap with what would be RdaHi=0b111 for VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn  (Peter Maydell)
The MVEGenDualAccOpFn is a bit misnamed, since it is used for the "long dual accumulate" operations that use a 64-bit accumulator. Rename it to MVEGenLongDualAccOpFn so we can use the former name for the 32-bit accumulator insns. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE narrowing moves  (Peter Maydell)
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN. These take a double-width input, narrow it (possibly saturating) and store the result to either the top or bottom half of the output element. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VABAV  (Peter Maydell)
Implement the MVE VABAV insn, which computes absolute differences between elements of two vectors and accumulates the result into a general purpose register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE integer min/max across vector  (Peter Maydell)
Implement the MVE integer min/max across vector insns VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum or minimum across the vector elements and a general purpose register, and store the result back into the general purpose register. These insns overlap with VRMLALDAVH (they use what would be RdaHi=0b110). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats  (Peter Maydell)
All the users of the vmlaldav formats have an 'x' bit in bit 12 and an 'a' bit in bit 5; move these to the format rather than specifying them in each insn pattern. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE shift-by-scalar  (Peter Maydell)
Implement the MVE instructions which perform shifts by a scalar. These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the shift amount in a general purpose register and shift every element in the vector by that amount. Mostly we can reuse the helper functions for shift-by-immediate; we do need two new helpers for VQRSHL. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMLAS  (Peter Maydell)
Implement the MVE VMLAS insn, which multiplies a vector by a vector and adds a scalar. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VPSEL  (Peter Maydell)
Implement the MVE VPSEL insn, which sets each byte of the destination vector Qd to the byte from either Qn or Qm depending on the value of the corresponding bit in VPR.P0. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE integer vector-vs-scalar comparisons  (Peter Maydell)
Implement the MVE integer vector comparison instructions that compare each element against a scalar from a general purpose register. These are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)" encodings T4, T5 and T6. We have to move the decodetree pattern for VPST, because it overlaps with VCMP T4 with size = 0b11. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE integer vector comparisons  (Peter Maydell)
Implement the MVE integer vector comparison instructions. These are "VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings T1, T2 and T3. These insns compare corresponding elements in each vector, and update the VPR.P0 predicate bits with the results of the comparison. VPT also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively "VCMP then VPST". Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
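A sketch of the P0 update for byte elements in a VCMPEQ-style helper; predication/ECI masking is elided and the VPR field name is approximate:

    uint16_t beatpred = 0;
    for (e = 0; e < 16; e++) {
        if (n[e] == m[e]) {
            beatpred |= 1 << e;
        }
    }
    /* VPR.P0 lives in bits [15:0] of VPR */
    env->v7m.vpr = deposit32(env->v7m.vpr, 0, 16, beatpred);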
2021-08-25  target/arm: Factor out gen_vpst()  (Peter Maydell)
Factor out the "generate code to update VPR.MASK01/MASK23" part of trans_VPST(); we are going to want to reuse it for the VPT insns. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE incrementing/decrementing dup insns  (Peter Maydell)
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP, VIWDUP and VDWDUP. These fill the elements of a vector with successively incrementing values, starting at the offset specified in a general purpose register. The final value of the offset is written back to this register. The wrapping variants take a second general purpose register which specifies the point where the count should wrap back to 0. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
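The core of VIDUP, conceptually, for 32-bit elements (wrapping variants elided; names illustrative):

    static uint32_t do_vidup(uint32_t *d, uint32_t offset, uint32_t imm)
    {
        for (int e = 0; e < 4; e++) {
            d[e] = offset;
            offset += imm;
        }
        return offset;      /* written back to the GP register */
    }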
2021-08-25  target/arm: Implement MVE VMULL (polynomial)  (Peter Maydell)
Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes in two flavours: an 8x8->16 and a 16x16->32. Also unlike Neon, the inputs are in either the low or the high half of each double-width element. The assembler for this insn indicates the size with "P8" or "P16", encoded into bit 28 as size = 0 or 1. We choose to follow the same encoding as VQDMULL and decode this into a->size as MO_16 or MO_32, indicating the size of the result elements. This then carries through to the helper function names, where it matches up with the existing pmull_h(), which does an 8x8->16 operation, and a new pmull_w(), which does the 16x16->32. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix VLDRB/H/W for predicated elements  (Peter Maydell)
For vector loads, predicated elements are zeroed, instead of retaining their previous values (as happens for most data processing operations). This means we need to distinguish "beat not executed due to ECI" (don't touch destination element) from "beat executed but predicated out" (zero destination element). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
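A sketch of the resulting three-way distinction per byte lane; cpu_ldub_data_ra() is the real QEMU load primitive, while the mask handling shown is schematic:

    if (!(eci_mask & 1)) {
        /* beat not executed due to ECI: leave the element untouched */
    } else if (!(mask & 1)) {
        d[e] = 0;           /* executed but predicated out: zero it */
    } else {
        d[e] = cpu_ldub_data_ra(env, addr, GETPC());
    }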