path: root/target
2021-07-09  target/ppc: Introduce ppc_xlate  (Richard Henderson)

Create one common dispatch for all of the ppc_*_xlate functions. Use ppc64_v3_radix to directly dispatch between ppc_radix64_xlate and ppc_hash64_xlate. Remove the separate *_handle_mmu_fault and *_get_phys_page_debug functions, using common code for ppc_cpu_tlb_fill and ppc_cpu_get_phys_page_debug.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-9-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
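As an illustration, a minimal sketch of what the common dispatcher can look like, assuming the per-MMU xlate functions share the signature used in this series (the exact set of case labels in the real patch may differ):

    static bool ppc_xlate(PowerPCCPU *cpu, vaddr eaddr,
                          MMUAccessType access_type, hwaddr *raddrp,
                          int *psizep, int *protp, int mmu_idx,
                          bool guest_visible)
    {
        switch (cpu->env.mmu_model) {
        case POWERPC_MMU_3_00:
            if (ppc64_v3_radix(cpu)) {
                return ppc_radix64_xlate(cpu, eaddr, access_type, raddrp,
                                         psizep, protp, mmu_idx,
                                         guest_visible);
            }
            /* fall through: a POWER9 in hash mode uses the hash64 path */
        case POWERPC_MMU_64B:
            return ppc_hash64_xlate(cpu, eaddr, access_type, raddrp,
                                    psizep, protp, mmu_idx, guest_visible);
        case POWERPC_MMU_32B:
            return ppc_hash32_xlate(cpu, eaddr, access_type, raddrp,
                                    psizep, protp, mmu_idx, guest_visible);
        default:
            return ppc_jumbo_xlate(cpu, eaddr, access_type, raddrp,
                                   psizep, protp, mmu_idx, guest_visible);
        }
    }

Both ppc_cpu_tlb_fill and ppc_cpu_get_phys_page_debug can then funnel through this single entry point, the latter with guest_visible = false.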
2021-07-09  target/ppc: Split out ppc_jumbo_xlate  (Richard Henderson)

Mirror the interface of ppc_radix64_xlate (mostly), putting all of the logic for older mmu translation into a single entry point. For booke, we need to add mmu_idx to the xlate-style interface.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-8-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Split out ppc_hash32_xlate  (Richard Henderson)

Mirror the interface of ppc_radix64_xlate, putting all of the logic for hash32 translation into a single entry point.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-7-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Split out ppc_hash64_xlate  (Richard Henderson)

Mirror the interface of ppc_radix64_xlate, putting all of the logic for hash64 translation into a single function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-6-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Use bool success for ppc_radix64_xlate  (Richard Henderson)

Instead of returning non-zero for failure, return true for success.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-5-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Push real-mode handling into ppc_radix64_xlate  (Richard Henderson)

This removes some incomplete duplication between ppc_radix64_handle_mmu_fault and ppc_radix64_get_phys_page_debug. The former was correct wrt SPR_HRMOR and the latter was not.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Use MMUAccessType with *_handle_mmu_fault  (Richard Henderson)

These changes were waiting until we didn't need to match the function type of PowerPCCPUClass.handle_mmu_fault.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-3-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Remove PowerPCCPUClass.handle_mmu_fault  (Richard Henderson)

Instead, use a switch on env->mmu_model. This avoids some replicated information in cpu setup.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-2-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Drop PowerPCCPUClass::interrupts_big_endian()  (Greg Kurz)

This isn't used anymore.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-3-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09  target/ppc: Introduce ppc_interrupts_little_endian()  (Greg Kurz)

PowerPC CPUs use big endian by default but, starting with POWER7, server grade CPUs use the ILE bit of the LPCR special purpose register to decide on the endianness to use when handling interrupts. This gives QEMU a clue about the endianness the guest kernel runs in, which is needed when generating an ELF dump of the guest or when delivering an FWNMI machine check interrupt.

Commit 382d2db62bcb ("target-ppc: Introduce callback for interrupt endianness") added a class method to PowerPCCPUClass to model this: the default implementation returns a fixed "big endian" value, while POWER7 and newer do the LPCR_ILE check. This is suboptimal as it forces us to implement the method for every new CPU family, and it is very unlikely that this will ever be different from what we have today.

We basically only have three cases to consider:
a) CPU doesn't have an LPCR => big endian
b) CPU has an LPCR but doesn't support the ILE bit => big endian
c) CPU has an LPCR and supports the ILE bit => little or big endian

Instead of class methods, introduce an inline helper that checks the ILE bit in the LPCR_MASK to decide on the outcome. The new helper is worded in terms of little endian instead of big endian, which allows us to drop a ! operator in ppc_cpu_do_fwnmi_machine_check().

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-2-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
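A sketch of such a helper, assuming the existing LPCR_ILE and SPR_LPCR definitions in target/ppc (the actual patch may differ in detail):

    static inline bool ppc_interrupts_little_endian(PowerPCCPU *cpu)
    {
        PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);

        /*
         * Cases a) and b) above fall through to "big endian" because
         * ILE is not part of the CPU's LPCR mask; only case c) can
         * return true.
         */
        return (pcc->lpcr_mask & LPCR_ILE) &&
               (cpu->env.spr[SPR_LPCR] & LPCR_ILE);
    }

ppc_cpu_do_fwnmi_machine_check() can then test this predicate directly instead of negating a big-endian one.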
2021-07-07  target/s390x: split sysemu part of cpu models  (Cho, Yu-Chen)

Split the sysemu part of the cpu models; also create a tiny _user.c with just the (at least for now) empty implementation of apply_cpu_model.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-15-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: move kvm files into kvm/  (Cho, Yu-Chen)

move kvm files into kvm/

After the reshuffling, update MAINTAINERS accordingly. Make use of the new directory: target/s390x/kvm/

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-14-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: remove kvm-stub.c  (Cho, Yu-Chen)

All function calls are protected by kvm_enabled(), so we do not need the stubs.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-13-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: use kvm_enabled() to wrap call to kvm_s390_get_hpage_1m  (Cho, Yu-Chen)

This will allow us to remove the KVM stubs.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-12-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
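The guard pattern in miniature (the wrapper name below is hypothetical): when CONFIG_KVM is compiled out, kvm_enabled() is a constant false, so the compiler eliminates the call and no link-time stub is required.

    /* Hypothetical wrapper illustrating the kvm_enabled() guard. */
    static bool s390_hpage_1m_active(void)
    {
        /*
         * kvm_enabled() folds to 0 without CONFIG_KVM, so the
         * reference to kvm_s390_get_hpage_1m() is dropped entirely.
         */
        return kvm_enabled() && kvm_s390_get_hpage_1m();
    }

The same technique is what lets tcg-stub.c go away later in this series.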
2021-07-07  target/s390x: make helper.c sysemu-only  (Cho, Yu-Chen)

Now that we have moved cpu-dump functionality out of helper.c, we can make the module sysemu-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-11-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: split cpu-dump from helper.c  (Cho, Yu-Chen)

Splitting this functionality also allows us to make helper.c sysemu-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-10-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: move sysemu-only code out to cpu-sysemu.c  (Cho, Yu-Chen)

move sysemu-only code out to cpu-sysemu.c

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-9-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: start moving TCG-only code to tcg/  (Cho, Yu-Chen)

Move everything related to translate, as well as HELPER code, into tcg/. mmu_helper.c stays put for now, as it contains both TCG and KVM code.

After the reshuffling, update MAINTAINERS accordingly. Make use of the new directory: target/s390x/tcg/

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-8-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: rename internal.h to s390x-internal.h  (Cho, Yu-Chen)

The internal.h file is renamed to s390x-internal.h, because of the risk of collision with other files with the same name.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-7-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: remove tcg-stub.c  (Cho, Yu-Chen)

Now that we protect all calls to the TCG-specific functions with if (tcg_enabled()), we do not need the TCG stub anymore.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-6-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: meson: add target_user_arch  (Cho, Yu-Chen)

The lack of target_user_arch makes it hard to fully leverage the build system in order to separate user code from sysemu code. Provide it, so that we can avoid the proliferation of #ifdef in target code.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-2-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  s390x/tcg: Fix m5 vs. m4 field for VECTOR MULTIPLY SUM LOGICAL  (David Hildenbrand)

The element size is located in m5, not in m4. As there is no m4, QEMU currently crashes with an assertion failure, trying to look up that field.

Reproduced and tested via Go, which ends up using VMSL once the Vector enhancements facility is around for verifying certificates with elliptic curves.

Reported-by: Jonathan Albrecht <jonathan.albrecht@linux.vnet.ibm.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/449
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Fixes: 8c18fa5b3eba ("s390x/tcg: Implement VECTOR MULTIPLY SUM LOGICAL")
Message-Id: <20210705090341.58289-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-07  target/s390x: Fix CC set by CONVERT TO FIXED/LOGICAL  (Ulrich Weigand)

The FP-to-integer conversion instructions need to set CC 3 whenever a "special case" occurs; this is the case whenever the instruction also signals the IEEE invalid exception. (See e.g. figure 19-18 in the Principles of Operation.)

However, QEMU currently will set CC 3 only in the case where the input was a NaN. This is indeed one of the special cases, but there are others, most notably the case where the input is out of range of the target data type.

This patch fixes the problem by switching these instructions to the "static" CC method and computing the correct result directly in the helper. (It cannot be re-computed later as the information about the invalid exception is no longer available.)

This fixes a bug observed when running the wasmtime test suite under the s390x-linux-user target.

Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210630105058.GA29130@oc3748833570.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
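Illustrative only, with the helper plumbing elided: deriving the condition code inside the helper, while the softfloat exception flags are still at hand, looks roughly like this.

    /* CC for an FP-to-integer conversion, per the rules above. */
    static uint32_t cc_for_fp_to_int(int64_t result, float_status *st)
    {
        if (get_float_exception_flags(st) & float_flag_invalid) {
            return 3;   /* special case: NaN, out of range, ... */
        }
        return result < 0 ? 1 : result > 0 ? 2 : 0;
    }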
2021-07-07  s390x/cpumodel: add 3931 and 3932  (Christian Borntraeger)

This defines 5 new facilities and the new 3931 and 3932 machines. As before, the name is not yet known, so we use gen16a and gen16b. The new features are part of the full model. The default model is still empty (same as z15) and will be added in a separate patch at a later point in time.

Also add the dependencies of the new facilities and, as a fix for z15, add a dependency from S390_FEAT_VECTOR_PACKED_DECIMAL_ENH to S390_VECTOR_PACKED_DECIMAL.

[merged <20210701084348.26556-1-borntraeger@de.ibm.com>]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20210622201923.150205-2-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-07-06  target/i386: Move X86XSaveArea into TCG  (David Edmondson)

Given that TCG is now the only consumer of X86XSaveArea, move the structure definition and associated offset declarations and checks to a TCG specific header.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Populate x86_ext_save_areas offsets using cpuid where possible  (David Edmondson)

Rather than relying on the X86XSaveArea structure definition, determine the offset of XSAVE state areas using CPUID leaf 0xd where possible (KVM and HVF).

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
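A sketch of the CPUID-based approach, assuming host_cpuid() as used elsewhere in target/i386 and an offset field on the save-area descriptors: for subleaf i of leaf 0xd, EAX reports the component's size and EBX its offset within the standard-format XSAVE area.

    static void init_xsave_offsets_from_cpuid(void)
    {
        uint32_t eax, ebx, ecx, edx;
        int i;

        /* Components 0 (x87) and 1 (SSE) have fixed legacy offsets. */
        for (i = 2; i < ARRAY_SIZE(x86_ext_save_areas); i++) {
            host_cpuid(0xd, i, &eax, &ebx, &ecx, &edx);
            x86_ext_save_areas[i].size = eax;
            x86_ext_save_areas[i].offset = ebx;
        }
    }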
2021-07-06  target/i386: Observe XSAVE state area offsets  (David Edmondson)

Rather than relying on the X86XSaveArea structure definition directly, the routines that manipulate the XSAVE state area should observe the offsets declared in the x86_ext_save_areas array.

Currently the offsets declared in the array are derived from the structure definition, resulting in no functional change.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-7-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Make x86_ext_save_areas visible outside cpu.c  (David Edmondson)

Provide visibility of the x86_ext_save_areas array and associated type outside of cpu.c.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-6-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Pass buffer and length to XSAVE helper  (David Edmondson)

In preparation for removing assumptions about XSAVE area offsets, pass a buffer pointer and buffer length to the XSAVE helper functions.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-5-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Clarify the padding requirements of X86XSaveArea  (David Edmondson)

Replace the hard-coded size of offsets or structure elements with defined constants or sizeof().

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-4-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Consolidate the X86XSaveArea offset checks  (David Edmondson)

Rather than having similar but different checks in cpu.h and kvm.c, move them all to cpu.h.

Message-Id: <20210705104632.2902400-3-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06  target/i386: Declare constants for XSAVE offsets  (David Edmondson)

Declare and use manifest constants for the XSAVE state component offsets.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-2-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-04  Merge remote-tracking branch 'remotes/philmd/tags/mips-20210702' into staging  (Peter Maydell)

MIPS patches queue

- Extract nanoMIPS, microMIPS, Code Compaction from translate.c
- Allow PCI config accesses smaller than 32-bit on Bonito64 device
- Fix migration of g364fb device on Jazz Magnum
- Fix dp8393x PROM checksum on Jazz Magnum and Quadra 800
- Map the UART devices unconditionally on Jazz Magnum
- Add functional test booting Linux on the Fuloong 2E

# gpg: Signature made Fri 02 Jul 2021 16:36:19 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE

* remotes/philmd/tags/mips-20210702:
  hw/mips/jazz: Map the UART devices unconditionally
  hw/mips/jazz: specify correct endian for dp8393x device
  hw/m68k/q800: fix PROM checksum and MAC address storage
  qemu/bitops.h: add bitrev8 implementation
  dp8393x: remove onboard PROM containing MAC address and checksum
  hw/m68k/q800: move PROM and checksum calculation from dp8393x device to board
  hw/mips/jazz: move PROM and checksum calculation from dp8393x device to board
  dp8393x: convert to trace-events
  dp8393x: checkpatch fixes
  g364fb: add VMStateDescription for G364SysBusState
  g364fb: use RAM memory region for framebuffer
  tests/acceptance: Test Linux on the Fuloong 2E machine
  hw/pci-host/bonito: Allow PCI config accesses smaller than 32-bit
  hw/pci-host/bonito: Trace PCI config accesses smaller than 32-bit
  target/mips: Extract nanoMIPS ISA translation routines
  target/mips: Extract the microMIPS ISA translation routines
  target/mips: Extract Code Compaction ASE translation routines
  target/mips: Add declarations for generic TCG helpers

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-02  target/arm: Implement MVE shifts by register  (Peter Maydell)

Implement the MVE shifts by register, which perform shifts on a single general-purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE shifts by immediate  (Peter Maydell)

Implement the MVE shifts by immediate, which perform shifts on a single general-purpose register. These patterns overlap with the long-shift-by-immediates, so we have to rearrange the grouping a little here.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE long shifts by register  (Peter Maydell)

Implement the MVE long shifts by register, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with the shift count in another general-purpose register, which might be either positive or negative.

Like the long-shifts-by-immediate, these encodings sit in the space that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15. Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases), we have to move the CSEL pattern into the same decodetree group.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE long shifts by immediate  (Peter Maydell)

The MVE extension to v8.1M includes some new shift instructions which sit entirely within the non-coprocessor part of the encoding space and which operate only on general-purpose registers. They take up the space which was previously UNPREDICTABLE MOVS and ORRS encodings with Rm == 13 or 15.

Implement the long shifts by immediate, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with an immediate shift count between 1 and 32.

Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for the Rm==13,15 case, we need to explicitly emit code to UNDEF for the cases where v8.1M now requires that. (Trying to change MOVS and ORRS is too difficult, because the functions that generate the code are shared between a dozen different kinds of arithmetic or logical instruction for all A32, T16 and T32 encodings, and for some insns and some encodings Rm==13,15 are valid.)

We make the helper functions we need for UQSHLL and SQSHLL take a 32-bit value which the helper casts to int8_t because we'll need these helpers also for the shift-by-register insns, where the shift count might be < 0 or > 32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
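A sketch of the helper shape described in the last paragraph, assuming a do_uqrshl_d()-style saturating shift primitive: the count arrives as 32 bits and is narrowed to int8_t, so the same helper can later serve the register-shift forms where the count may be negative or greater than 32.

    uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
    {
        /* Narrow the count; a negative value means a right shift. */
        return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
    }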
2021-07-02  target/arm: Implement MVE VADDLV  (Peter Maydell)

Implement the MVE VADDLV insn; this is similar to VADDV, except that it accumulates 32-bit elements into a 64-bit accumulator stored in a pair of general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLC  (Peter Maydell)

Implement the MVE VSHLC insn, which performs a shift left of the entire vector with carry-in bits provided from a general-purpose register and carry-out bits written back to that register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE saturating narrowing shifts  (Peter Maydell)

Implement the MVE saturating shift-right-and-narrow insns VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN. do_srshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHRN, VRSHRN  (Peter Maydell)

Implement the MVE shift-right-and-narrow insns VSHRN and VRSHRN. do_urshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
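For context, the rounding right-shift primitive borrowed from sve_helper.c looks like this (reproduced from memory, so treat it as a sketch):

    static inline uint64_t do_urshr(uint64_t x, unsigned sh)
    {
        if (likely(sh < 64)) {
            /* Add back the last bit shifted out to round to nearest. */
            return (x >> sh) + ((x >> (sh - 1)) & 1);
        } else if (sh == 64) {
            return x >> 63;
        } else {
            return 0;
        }
    }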
2021-07-02  target/arm: Implement MVE VSRI, VSLI  (Peter Maydell)

Implement the MVE VSRI and VSLI insns, which perform a shift-and-insert operation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE VSHLL  (Peter Maydell)

Implement the MVE VSHLL (vector shift left long) insn. This has two encodings: the T1 encoding is the usual shift-by-immediate format, and the T2 encoding is a special case where the shift count is always equal to the element size.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE vector shift right by immediate insns  (Peter Maydell)

Implement the MVE vector shift right by immediate insns VSHRI and VRSHRI. As with Neon, we implement these by using helper functions which perform left shifts but allow negative shift counts to indicate right shifts.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
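The idea in miniature, as a toy illustration rather than the QEMU helper: encoding the direction in the sign of the count lets one helper serve both shifts.

    /* Left shift for sh >= 0, right shift for sh < 0 (count in range). */
    static int64_t do_sshl(int64_t x, int8_t sh)
    {
        return sh >= 0 ? x << sh : x >> -sh;
    }

The immediate decode then simply negates the encoded shift amount before calling the helper for the right-shift insns.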
2021-07-02  target/arm: Implement MVE vector shift left by immediate insns  (Peter Maydell)

Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL and VQSHLU. The size-and-immediate encoding here is the same as Neon, and we handle it the same way neon-dp.decode does.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-8-peter.maydell@linaro.org
2021-07-02  target/arm: Implement MVE logical immediate insns  (Peter Maydell)

Implement the MVE logical-immediate insns (VMOV, VMVN, VORR and VBIC). These have essentially the same encoding as their Neon equivalents, and we implement the decode in the same way.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
2021-07-02  target/arm: Use dup_const() instead of bitfield_replicate()  (Peter Maydell)

Use dup_const() instead of bitfield_replicate() in disas_simd_mod_imm(). (We can't replace the other use of bitfield_replicate() in this file, in logic_imm_decode_wmask(), because that location needs to handle 2 and 4 bit elements, which dup_const() cannot.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-6-peter.maydell@linaro.org
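For reference, dup_const() replicates an element across a 64-bit word, with the MemOp element size selecting the width:

    uint64_t w8  = dup_const(MO_8,  0xab);    /* 0xabababababababab */
    uint64_t w16 = dup_const(MO_16, 0x12ff);  /* 0x12ff12ff12ff12ff */

bitfield_replicate() remains necessary only where 2- and 4-bit elements are involved, which dup_const() cannot express.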
2021-07-02  target/arm: Use asimd_imm_const for A64 decode  (Peter Maydell)

The A64 AdvSIMD modified-immediate grouping uses almost the same constant encoding that A32 Neon does; reuse asimd_imm_const() (to which we add the AArch64-specific case for cmode 15 op 1) instead of reimplementing it all.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
2021-07-02  target/arm: Make asimd_imm_const() public  (Peter Maydell)

The function asimd_imm_const() in translate-neon.c is an implementation of the pseudocode AdvSIMDExpandImm(), which we will also want for MVE. Move the implementation to translate.c, with a prototype in translate.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
2021-07-02  target/arm: Fix bugs in MVE VRMLALDAVH, VRMLSLDAVH  (Peter Maydell)

The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH insns had some bugs:
* the 32x32 multiply of elements was being done as 32x32->32, not 32x32->64
* we were incorrectly maintaining the accumulator in its full 72-bit form across all 4 beats of the insn; in the pseudocode it is squashed back into the 64 bits of the RdaHi:RdaLo registers after each beat

In particular, fixing the second of these allows us to recast the implementation to avoid 128-bit arithmetic entirely. Since the element size here is always 4, we can also drop the parameterization of ESIZE to make the code a little more readable.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
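An illustration of the two fixes, simplified from the real per-beat helper (which also honors the predication mask): widen each product to 64 bits before multiplying, and keep the running sum in a plain 64-bit accumulator between beats.

    static int64_t rmlaldavh_beat(int64_t rda, const int32_t *n,
                                  const int32_t *m, int elems)
    {
        for (int e = 0; e < elems; e++) {
            rda += (int64_t)n[e] * m[e];   /* 32x32->64, not 32x32->32 */
        }
        /* rda maps back onto RdaHi:RdaLo after every beat. */
        return rda;
    }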