path: root/target/arm
2021-08-25  target/arm: Implement MVE shift-by-scalar  (Peter Maydell)
Implement the MVE instructions which perform shifts by a scalar. These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the shift amount in a general purpose register and shift every element in the vector by that amount. Mostly we can reuse the helper functions for shift-by-immediate; we do need two new helpers for VQRSHL. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMLAS  (Peter Maydell)
Implement the MVE VMLAS insn, which multiplies a vector by a vector and adds a scalar. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VPSEL  (Peter Maydell)
Implement the MVE VPSEL insn, which sets each byte of the destination vector Qd to the byte from either Qn or Qm depending on the value of the corresponding bit in VPR.P0. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
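A minimal sketch of the selection logic (illustrative C, not the actual QEMU helper, which also has to respect ECI and the element-size loop macros):

    #include <stdint.h>

    /* Bytewise select in the spirit of VPSEL: each of the 16 bytes of
     * the 128-bit destination comes from n[] where the corresponding
     * VPR.P0 bit is set, otherwise from m[].
     */
    static void vpsel_sketch(uint8_t *d, const uint8_t *n,
                             const uint8_t *m, uint16_t p0)
    {
        for (int i = 0; i < 16; i++) {
            d[i] = (p0 & (1u << i)) ? n[i] : m[i];
        }
    }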
2021-08-25  target/arm: Implement MVE integer vector-vs-scalar comparisons  (Peter Maydell)
Implement the MVE integer vector comparison instructions that compare each element against a scalar from a general purpose register. These are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)" encodings T4, T5 and T6. We have to move the decodetree pattern for VPST, because it overlaps with VCMP T4 with size = 0b11. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE integer vector comparisons  (Peter Maydell)
Implement the MVE integer vector comparison instructions. These are "VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings T1, T2 and T3. These insns compare corresponding elements in each vector, and update the VPR.P0 predicate bits with the results of the comparison. VPT also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively "VCMP then VPST". Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
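As a sketch of the effect for 4 x 32-bit elements (illustrative only; the real helpers also honour existing predication, ECI, and the other element sizes):

    #include <stdint.h>

    /* VCMP.EQ-style predicate update: each element's compare result
     * drives the 4 VPR.P0 bits covering that element's bytes.
     */
    static uint16_t vcmpeq_w_sketch(const uint32_t *n, const uint32_t *m)
    {
        uint16_t p0 = 0;
        for (int i = 0; i < 4; i++) {
            if (n[i] == m[i]) {
                p0 |= 0xfu << (i * 4);   /* 4 bytes per 32-bit element */
            }
        }
        return p0;
    }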
2021-08-25  target/arm: Factor out gen_vpst()  (Peter Maydell)
Factor out the "generate code to update VPR.MASK01/MASK23" part of trans_VPST(); we are going to want to reuse it for the VPT insns. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE incrementing/decrementing dup insns  (Peter Maydell)
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP, VIWDUP and VDWDUP. These fill the elements of a vector with successively incrementing values, starting at the offset specified in a general purpose register. The final value of the offset is written back to this register. The wrapping variants take a second general purpose register which specifies the point where the count should wrap back to 0. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
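The wrapping variant boils down to something like this sketch (hypothetical signature; the QEMU helpers are generated per element size and also apply predication):

    #include <stdint.h>

    /* VIWDUP-style fill: step 'offset' by 'imm' per element, wrapping
     * to 0 on reaching 'wrap'; return the final offset for write-back.
     */
    static uint32_t viwdup_sketch(uint32_t *d, int n, uint32_t offset,
                                  uint32_t wrap, uint32_t imm)
    {
        for (int i = 0; i < n; i++) {
            d[i] = offset;
            offset += imm;
            if (offset == wrap) {
                offset = 0;
            }
        }
        return offset;
    }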
2021-08-25  target/arm: Implement MVE VMULL (polynomial)  (Peter Maydell)
Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes in two flavours: an 8x8->16 and a 16x16->32. Also unlike Neon, the inputs are in either the low or the high half of each double-width element. The assembler for this insn indicates the size with "P8" or "P16", encoded into bit 28 as size = 0 or 1. We choose to follow the same encoding as VQDMULL and decode this into a->size as MO_16 or MO_32, indicating the size of the result elements. This then carries through to the helper function names, where it matches up with the existing pmull_h(), which does an 8x8->16 operation, and a new pmull_w(), which does the 16x16->32. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
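For reference, the polynomial (carry-less) multiply that pmull_h() and pmull_w() build on works like this; a minimal scalar 8x8->16 sketch, not QEMU's vectorized code:

    #include <stdint.h>

    /* Carry-less multiply: each set bit of b XORs in a shifted copy
     * of a, i.e. multiplication of polynomials over GF(2).
     */
    static uint16_t pmull8_sketch(uint8_t a, uint8_t b)
    {
        uint16_t r = 0;
        for (int i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                r ^= (uint16_t)a << i;
            }
        }
        return r;
    }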
2021-08-25  target/arm: Fix VLDRB/H/W for predicated elements  (Peter Maydell)
For vector loads, predicated elements are zeroed, instead of retaining their previous values (as happens for most data processing operations). This means we need to distinguish "beat not executed due to ECI" (don't touch destination element) from "beat executed but predicated out" (zero destination element). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
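A byte-load sketch of the two-mask distinction (illustrative; the real code loads through QEMU's guest-memory accessors and per-size macros):

    #include <stdint.h>

    /* eci_mask: bytes whose beat actually executes this time round;
     * pred_mask: bytes whose predicate bit is true.  Beats skipped by
     * ECI leave the destination alone; executed-but-predicated-out
     * bytes are zeroed.
     */
    static void vldrb_sketch(uint8_t *d, const uint8_t *mem,
                             uint16_t eci_mask, uint16_t pred_mask)
    {
        for (int i = 0; i < 16; i++) {
            if (!(eci_mask & (1u << i))) {
                continue;               /* beat not executed: keep old */
            }
            d[i] = (pred_mask & (1u << i)) ? mem[i] : 0;
        }
    }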
2021-08-25  target/arm: Fix VPT advance when ECI is non-zero  (Peter Maydell)
We were not paying attention to the ECI state when advancing the VPT state. Architecturally, VPT state advance happens for every beat (see the pseudocode VPTAdvance()), so on every beat the 4 bits of VPR.P0 corresponding to the current beat are inverted if required, and at the end of beats 1 and 3 the VPR MASK fields are updated. This means that if the ECI state says we should not be executing all 4 beats then we need to skip some of the updating of the VPR that we currently do in mve_advance_vpt(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Factor out mve_eci_mask()  (Peter Maydell)
In some situations we need a mask telling us which parts of the vector correspond to beats that are not being executed because of ECI, separately from the combined "which bytes are predicated away" mask. Factor this mask calculation out of mve_element_mask() into its own function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
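In outline, the factored-out function maps the ECI state to a byte mask, roughly like this simplified sketch (the real mve_eci_mask() covers the full set of ECI encodings):

    #include <stdint.h>

    enum { ECI_NONE, ECI_A0, ECI_A0A1, ECI_A0A1A2 };  /* simplified */

    /* One bit per byte of the 16-byte vector; each beat covers 4
     * bytes.  Beats already completed before an exception are masked
     * off so they are not re-executed.
     */
    static uint16_t eci_mask_sketch(int eci)
    {
        switch (eci) {
        case ECI_A0:     return 0xfff0;  /* beat 0 already done */
        case ECI_A0A1:   return 0xff00;  /* beats 0-1 done */
        case ECI_A0A1A2: return 0xf000;  /* beats 0-2 done */
        default:         return 0xffff;  /* execute all 4 beats */
        }
    }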
2021-08-25  target/arm: Fix calculation of LTP mask when LR is 0  (Peter Maydell)
In mve_element_mask(), we calculate a mask for tail predication which should have a number of 1 bits based on the value of LR. However, our MAKE_64BIT_MASK() macro has undefined behaviour when passed a zero length. Special case this to give the all-zeroes mask we require. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
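The shape of the fix, sketched (QEMU's MAKE_64BIT_MASK builds the mask with a shift of 64 - length, which is undefined for length == 0):

    #include <stdint.h>

    /* Low-N-bits mask with the zero-length case handled explicitly:
     * "(~0ULL) >> (64 - 0)" would shift by the full type width, which
     * is undefined behaviour in C.
     */
    static uint64_t ltp_mask_sketch(unsigned length)
    {
        if (length == 0) {
            return 0;                   /* LR == 0: nothing enabled */
        }
        return (~0ULL) >> (64 - length);
    }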
2021-08-25  target/arm: Fix MVE 48-bit SQRSHRL for small right shifts  (Peter Maydell)
We got an edge case wrong in the 48-bit SQRSHRL implementation: if the shift is to the right, although it always makes the result smaller than the input value, it might not be within the 48-bit range the result is supposed to be in, if the input had some bits in [63..48] set and the shift didn't bring all of those within the [47..0] range. Handle this similarly to the way we already do for this case in do_uqrshl48_d(): extend the calculated result from 48 bits, and return that if not saturating or if it doesn't change the result; otherwise fall through to return a saturated value. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix 48-bit saturating shifts  (Peter Maydell)
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge cases wrong and failed to saturate correctly: (1) In do_sqrshl48_d() we used the same code that do_sqrshl_bhs() does to obtain the saturated most-negative and most-positive 48-bit signed values for the large-shift-left case. This gives (1 << 47) for saturate-to-most-negative, but we weren't sign-extending this value to the 64-bit output as the pseudocode requires. (2) For left shifts by less than 48, we copied the "8/16 bit" code from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right thing because it assumes the C type we're working with is at least twice the number of bits we're saturating to (so that a shift left by bits-1 can't shift anything off the top of the value). This isn't true for bits == 48, so we would incorrectly return 0 rather than the most-positive value for situations like "shift (1 << 44) left by 20". Instead check for saturation by doing the shift and sign-extend and then testing whether shifting back left again gives the original value. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
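A simplified sketch of the shift-then-check approach, for the plain (non-rounding) signed case with 0 <= shift < 48 (illustrative; the real do_sqrshl48_d() also handles rounding, negative counts, and the saturation flag plumbing):

    #include <stdint.h>
    #include <stdbool.h>

    static int64_t sqshl48_sketch(int64_t src, int shift, bool *sat)
    {
        /* Shift in unsigned arithmetic to avoid signed-overflow UB,
         * then sign-extend the low 48 bits of the result (arithmetic
         * right shift of signed values assumed, as QEMU does).
         */
        int64_t ext = (int64_t)((uint64_t)src << (shift + 16)) >> 16;
        if ((ext >> shift) == src) {
            return ext;                   /* representable in 48 bits */
        }
        *sat = true;
        return src < 0 ? -(1LL << 47) : (1LL << 47) - 1;
    }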
2021-08-25  target/arm: Fix mask handling for MVE narrowing operations  (Peter Maydell)
In the MVE helpers for the narrowing operations (DO_VSHRN and DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for the 'top' versions of the insn. This is because the loop works over the double-sized input elements and shifts the predicate mask by that many bits each time, but when we write out the half-sized output we must look at the mask bits for whichever half of the element we are writing to. Correct this by shifting the whole mask right by ESIZE bits for the 'top' insns. This allows us also to simplify the saturation bit checking (where we had noticed that we needed to look at a different mask bit for the 'top' insn.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
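The core of the fix is a one-line mask adjustment, roughly like this (hypothetical helper shape):

    #include <stdint.h>
    #include <stdbool.h>

    /* The loop advances the predicate mask by 2*ESIZE bits per
     * double-width input element; the 'top' variants write the upper
     * half-width output, whose predicate bits sit ESIZE further on.
     */
    static inline uint16_t narrow_mask_sketch(uint16_t mask,
                                              unsigned esize, bool top)
    {
        return top ? (mask >> esize) : mask;
    }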
2021-08-25  target/arm: Fix signed VADDV  (Peter Maydell)
A cut-and-paste error meant we handled signed VADDV like unsigned VADDV; fix the type used. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix MVE VSLI by 0 and VSRI by <dt>  (Peter Maydell)
In the MVE shift-and-insert insns, we special case VSLI by 0 and VSRI by <dt>. VSRI by <dt> means "don't update the destination", which is what we've implemented. However VSLI by 0 is "set destination to the input", so we don't want to use the same special-casing that we do for VSRI by <dt>. Since the generic logic gives the right answer for a shift by 0, just use that. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Print MVE VPR in CPU dumps  (Peter Maydell)
Include the MVE VPR register value in the CPU dumps produced by arm_cpu_dump_state() if we are printing FPU information. This makes it easier to interpret debug logs when predication is active. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Note that we handle VMOVL as a special case of VSHLL  (Peter Maydell)
Although the architecture doesn't define it as an alias, VMOVL (vector move long) is encoded as a VSHLL with a zero shift. Add a comment in the decode file noting that we handle VMOVL as part of VSHLL. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-27  target/arm: Add sve-default-vector-length cpu property  (Richard Henderson)
Mirror the behaviour of /proc/sys/abi/sve_default_vector_length under the real Linux kernel. We have no way of passing along a real default across exec like the kernel can, but this is a decent way of adjusting the startup vector length of a process. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20210723203344.968563-4-richard.henderson@linaro.org [PMM: tweaked docs formatting, document -1 special-case, added fixup patch from RTH mentioning QEMU's maximum veclen.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Export aarch64_sve_zcr_get_valid_len  (Richard Henderson)
Rename from sve_zcr_get_valid_len and make accessible from outside of helper.c. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20210723203344.968563-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Correctly bound length in sve_zcr_get_valid_len  (Richard Henderson)
Currently, our only caller is sve_zcr_len_for_el, which has already masked the length extracted from ZCR_ELx, so the masking done here is a nop. But we will shortly have uses from other locations, where the length will be unmasked. Saturate the length to ARM_MAX_VQ instead of truncating to the low 4 bits. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20210723203344.968563-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
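The essence of the change, sketched (the real function also consults the set of vector lengths the CPU actually supports):

    #include <stdint.h>

    #define ARM_MAX_VQ 16   /* QEMU's maximum: 16 quadwords = 2048 bits */

    /* Clamp instead of masking: an unmasked length such as 0x1f must
     * saturate to the maximum, not wrap around to 0xf.
     */
    static uint32_t bound_len_sketch(uint32_t start_len)
    {
        return start_len < ARM_MAX_VQ - 1 ? start_len : ARM_MAX_VQ - 1;
    }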
2021-07-27  target/arm: Report M-profile alignment faults correctly to the guest  (Peter Maydell)
For M-profile, we weren't reporting alignment faults triggered by the generic TCG code correctly to the guest. These get passed into arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile style exception.fsr value of 1. We didn't check for this, and so they fell through into the default of "assume this is an MPU fault" and were reported to the guest as a data access violation MPU fault. Report these alignment faults as UsageFaults which set the UNALIGNED bit in the UFSR. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
2021-07-27  target/arm: Add missing 'return's after calling v7m_exception_taken()  (Peter Maydell)
In do_v7m_exception_exit(), we perform various checks as part of performing the exception return. If one of these checks fails, the architecture requires that we take an appropriate exception on the existing stackframe. We implement this by calling v7m_exception_taken() to set up to take the new exception, and then immediately returning from do_v7m_exception_exit() without proceeding any further with the unstack-and-exception-return process. In a couple of checks that are new in v8.1M, we forgot the "return" statement, with the effect that if bad code in the guest tripped over these checks we would set up to take a UsageFault exception but then blunder on trying to also unstack and return from the original exception, with the probable result that the guest would crash. Add the missing return statements. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
2021-07-27  target/arm: Enforce that M-profile SP low 2 bits are always zero  (Peter Maydell)
For M-profile, unlike A-profile, the low 2 bits of SP are defined to be RES0H, which is to say that they must be hardwired to zero so that guest attempts to write non-zero values to them are ignored. Implement this behaviour by masking out the low bits: * for writes to r13 by the gdbstub * for writes to any of the various flavours of SP via MSR * for writes to r13 via store_reg() in generated code Note that all the direct uses of cpu_R[] in translate.c are in places where the register is definitely not r13 (usually because that has been checked for as an UNDEFINED or UNPREDICTABLE case and handled as UNDEF). All the other writes to regs[13] in C code are either: * A-profile only code * writes of values we can guarantee to be aligned, such as - writes of previous-SP-value plus or minus a 4-aligned constant - writes of the value in an SP limit register (which we already enforce to be aligned) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
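All of those write paths reduce to the same masking step, sketched here:

    #include <stdint.h>

    /* M-profile SP[1:0] are hardwired to zero, so a write simply
     * drops the low bits rather than faulting.
     */
    static inline uint32_t v7m_sp_write_sketch(uint32_t value)
    {
        return value & ~3u;
    }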
2021-07-21  accel/tcg: Remove TranslatorOps.breakpoint_check  (Richard Henderson)
The hook is now unused, with breakpoints checked outside translation. Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21  target/arm: Implement debug_check_breakpoint  (Richard Henderson)
Reuse the code at the bottom of helper_check_breakpoints, which is what we currently call from *_tr_breakpoint_check. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21  tcg: Rename helper_atomic_*_mmu and provide for user-only  (Richard Henderson)
Always provide the atomic interface using TCGMemOpIdx oi and uintptr_t retaddr. Rename from helper_* to cpu_* so as to (mostly) match the exec/cpu_ldst.h functions, and to emphasize that they are not callable from TCG directly. Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-18  target/arm: Remove duplicate 'plus1' function from Neon and SVE decode  (Peter Maydell)
The Neon and SVE decoders use private 'plus1' functions to implement "add one" for the !function decoder syntax. We have a generic "plus_1" function in translate.h, so use that instead. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210715095341.701-1-peter.maydell@linaro.org
2021-07-18  target/arm: Fix offsets for TTBCR  (Richard Henderson)
The functions vmsa_ttbcr_write and vmsa_ttbcr_raw_write expect the offset to be for the complete TCR structure, not the offset to the low 32-bits of a uint64_t. Using offsetoflow32 in this case breaks big-endian hosts. For TTBCR2, we do want the high 32-bits of a uint64_t. Use cp15.tcr_el[*].raw_tcr as the offsetofhigh32 argument to clarify this. Buglink: https://gitlab.com/qemu-project/qemu/-/issues/187 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210709230621.938821-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-12  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210710' into staging  (Peter Maydell)
Add translator_use_goto_tb. Cleanups in prep of breakpoint fixes. Misc fixes.

# gpg: Signature made Sat 10 Jul 2021 16:29:14 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210710: (41 commits)
  cpu: Add breakpoint tracepoints
  tcg: Remove TCG_TARGET_HAS_goto_ptr
  accel/tcg: Log tb->cflags with -d exec
  accel/tcg: Split out log_cpu_exec
  accel/tcg: Move tb_lookup to cpu-exec.c
  accel/tcg: Move helper_lookup_tb_ptr to cpu-exec.c
  target/i386: Use cpu_breakpoint_test in breakpoint_handler
  tcg: Fix prologue disassembly
  target/xtensa: Use translator_use_goto_tb
  target/tricore: Use tcg_gen_lookup_and_goto_ptr
  target/tricore: Use translator_use_goto_tb
  target/sparc: Use translator_use_goto_tb
  target/sh4: Use translator_use_goto_tb
  target/s390x: Remove use_exit_tb
  target/s390x: Use translator_use_goto_tb
  target/rx: Use translator_use_goto_tb
  target/riscv: Use translator_use_goto_tb
  target/ppc: Use translator_use_goto_tb
  target/openrisc: Use translator_use_goto_tb
  target/nios2: Use translator_use_goto_tb
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-11  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging  (Peter Maydell)
* More SVM fixes (Lara)
* Module annotation database (Gerd)
* Memory leak fixes (myself)
* Build fixes (myself)
* --with-devices-* support (Alex)

# gpg: Signature made Fri 09 Jul 2021 17:23:52 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream: (48 commits)
  meson: Use input/output for entitlements target
  configure: allow the selection of alternate config in the build
  configs: rename default-configs to configs and reorganise
  hw/arm: move CONFIG_V7M out of default-devices
  hw/arm: add dependency on OR_IRQ for XLNX_VERSAL
  meson: Introduce target-specific Kconfig
  meson: switch function tests from compilation to linking
  vl: fix leak of qdict_crumple return value
  target/i386: fix exceptions for MOV to DR
  target/i386: Added DR6 and DR7 consistency checks
  target/i386: Added MSRPM and IOPM size check
  monitor/tcg: move tcg hmp commands to accel/tcg, register them dynamically
  usb: build usb-host as module
  monitor/usb: register 'info usbhost' dynamically
  usb: drop usb_host_dev_is_scsi_storage hook
  monitor: allow register hmp commands
  accel: build tcg modular
  accel: add tcg module annotations
  accel: build qtest modular
  accel: add qtest module annotations
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-09  target/arm: Use translator_use_goto_tb for aarch32  (Richard Henderson)
Just use translator_use_goto_tb directly at the one call site, rather than maintaining a local wrapper. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09  target/arm: Use translator_use_goto_tb for aarch64  (Richard Henderson)
We have not needed to end a TB for I/O since ba3e7926691 ("icount: clean up cpu_can_io at the entry to the block"), and gdbstub singlestep is handled by the generic function. Drop the unused 'n' argument to use_goto_tb. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09  target/arm: Use DISAS_TOO_MANY for ISB and SB  (Richard Henderson)
Using gen_goto_tb directly misses the single-step check. Let the branch or debug exception be emitted by arm_tr_tb_stop. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09  tcg: Avoid including 'trace-tcg.h' in target translate.c  (Philippe Mathieu-Daudé)
The root trace-events only declares a single TCG event:

  $ git grep -w tcg trace-events
  trace-events:115:# tcg/tcg-op.c
  trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"

and only tcg/tcg-op.c uses it:

  $ git grep -l trace_guest_mem_before_tcg
  tcg/tcg-op.c

therefore it is pointless to include "trace-tcg.h" in each target (because it is not used). Remove it. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210629050935.2570721-1-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09  meson: Introduce target-specific Kconfig  (Philippe Mathieu-Daudé)
Add a target-specific Kconfig. We need the definitions in Kconfig so the minikconf tool can verify they exist. However CONFIG_FOO is only enabled for target foo via the meson.build rules. Two architectures have a particularity, ARM and MIPS: as their translators have been split, you can potentially build a plain 32-bit version alongside a 64-bit version that includes the 32-bit subset. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210131111316.232778-6-f4bug@amsat.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Message-Id: <20210707131744.26027-2-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-09  target/arm: Correct the encoding of MDCCSR_EL0 and DBGDSCRint  (Nick Hudson)
Signed-off-by: Nick Hudson <hnick@vmware.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-02  target/arm: Implement MVE shifts by register  (Peter Maydell)
Implement the MVE shifts by register, which perform shifts on a single general-purpose register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE shifts by immediate  (Peter Maydell)
Implement the MVE shifts by immediate, which perform shifts on a single general-purpose register. These patterns overlap with the long-shift-by-immediates, so we have to rearrange the grouping a little here. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE long shifts by register  (Peter Maydell)
Implement the MVE long shifts by register, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with the shift count in another general-purpose register, which might be either positive or negative. Like the long-shifts-by-immediate, these encodings sit in the space that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15. Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases), we have to move the CSEL pattern into the same decodetree group. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE long shifts by immediate  (Peter Maydell)
The MVE extension to v8.1M includes some new shift instructions which sit entirely within the non-coprocessor part of the encoding space and which operate only on general-purpose registers. They take up the space which was previously UNPREDICTABLE MOVS and ORRS encodings with Rm == 13 or 15. Implement the long shifts by immediate, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with an immediate shift count between 1 and 32. Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for the Rm==13,15 case, we need to explicitly emit code to UNDEF for the cases where v8.1M now requires that. (Trying to change MOVS and ORRS is too difficult, because the functions that generate the code are shared between a dozen different kinds of arithmetic or logical instruction for all A32, T16 and T32 encodings, and for some insns and some encodings Rm==13,15 are valid.) We make the helper functions we need for UQSHLL and SQSHLL take a 32-bit value which the helper casts to int8_t because we'll need these helpers also for the shift-by-register insns, where the shift count might be < 0 or > 32. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
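A hypothetical simplified helper in this style, showing why the count is narrowed to int8_t inside (unsigned, non-rounding; the real UQSHLL/SQSHLL helpers also plumb the saturation flag through to FPSCR):

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t uqshll_sketch(uint64_t val, uint32_t shift_in, bool *sat)
    {
        int8_t shift = (int8_t)shift_in;   /* may be < 0 or > 32 later */
        if (shift <= 0) {
            /* negative count = shift right, for the by-register forms */
            return (shift <= -64) ? 0 : val >> -shift;
        }
        if (shift < 64 && (val >> (64 - shift)) == 0) {
            return val << shift;           /* no bits lost */
        }
        if (val == 0) {
            return 0;                      /* shifting zero never saturates */
        }
        *sat = true;
        return UINT64_MAX;
    }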
2021-07-02  target/arm: Implement MVE VADDLV  (Peter Maydell)
Implement the MVE VADDLV insn; this is similar to VADDV, except that it accumulates 32-bit elements into a 64-bit accumulator stored in a pair of general-purpose registers. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLC  (Peter Maydell)
Implement the MVE VSHLC insn, which performs a shift left of the entire vector with carry in bits provided from a general purpose register and carry out bits written back to that register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
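Sketched for 4 x 32-bit elements (illustrative; the real helper also applies predication and returns the carry-out for write-back):

    #include <stdint.h>

    /* Shift the whole vector left by 'shift' (1..32): carry-in bits
     * come from the low bits of *rdm, and the bits shifted out of the
     * top end up back in *rdm.
     */
    static void vshlc_sketch(uint32_t *d, unsigned shift, uint32_t *rdm)
    {
        uint32_t carry = *rdm & (shift == 32 ? ~0u : (1u << shift) - 1);
        for (int i = 0; i < 4; i++) {
            uint64_t wide = ((uint64_t)d[i] << shift) | carry;
            d[i] = (uint32_t)wide;
            carry = (uint32_t)(wide >> 32);
        }
        *rdm = carry;
    }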
2021-07-02  target/arm: Implement MVE saturating narrowing shifts  (Peter Maydell)
Implement the MVE saturating shift-right-and-narrow insns VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN. do_srshr() is borrowed from sve_helper.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHRN, VRSHRN  (Peter Maydell)
Implement the MVE shift-right-and-narrow insn VSHRN and VRSHRN. do_urshr() is borrowed from sve_helper.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSRI, VSLI  (Peter Maydell)
Implement the MVE VSRI and VSLI insns, which perform a shift-and-insert operation. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLL  (Peter Maydell)
Implement the MVE VSHLL (vector shift left long) insn. This has two encodings: the T1 encoding is the usual shift-by-immediate format, and the T2 encoding is a special case where the shift count is always equal to the element size. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE vector shift right by immediate insns  (Peter Maydell)
Implement the MVE vector shift right by immediate insns VSHRI and VRSHRI. As with Neon, we implement these by using helper functions which perform left shifts but allow negative shift counts to indicate right shifts. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
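The sign convention, sketched for a plain signed 32-bit shift (illustrative; without the rounding and saturating variants):

    #include <stdint.h>

    /* Negative counts mean shift right, so one helper serves both
     * directions; arithmetic right shift of signed values is assumed,
     * as QEMU does throughout.
     */
    static int32_t do_shl32_sketch(int32_t val, int shift)
    {
        if (shift >= 0) {
            return shift >= 32 ? 0 : (int32_t)((uint32_t)val << shift);
        }
        shift = -shift;
        return shift >= 32 ? (val < 0 ? -1 : 0) : (val >> shift);
    }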
2021-07-02  target/arm: Implement MVE vector shift left by immediate insns  (Peter Maydell)
Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL and VQSHLU. The size-and-immediate encoding here is the same as Neon, and we handle it the same way neon-dp.decode does. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210628135835.6690-8-peter.maydell@linaro.org