path: root/target/arm/cpu.h
2020-12-10  target/arm: Refactor M-profile VMSR/VMRS handling  (Peter Maydell)
Currently M-profile borrows the A-profile code for VMSR and VMRS (access to the FP system registers), because all it needs to support is the FPSCR. In v8.1M things become significantly more complicated in two ways:
 * there are several new FP system registers; some have side effects on read, and one (FPCXT_NS) needs to avoid the usual vfp_access_check() and the "only if FPU implemented" check
 * all sysregs are now accessible both by VMRS/VMSR (which reads/writes a general purpose register) and also by VLDR/VSTR (which reads/writes them directly to memory)
Refactor the structure of how we handle VMSR/VMRS to cope with this:
 * keep the M-profile code entirely separate from the A-profile code
 * abstract out the "read or write the general purpose register" part of the code into a loadfn or storefn function pointer, so we can reuse it for VLDR/VSTR
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
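A standalone toy model of the load/store callback split described above (plain C, not QEMU's actual types or function names; it only illustrates how one sysreg-access routine can serve both register-based and memory-based accessors):

    /* Toy model: the sysreg access code only asks "give me the value to
     * write" / "here is the value that was read"; VMSR/VMRS would plug in
     * general-purpose-register callbacks, VLDR/VSTR memory-based ones. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t loadfn_t(void *opaque);             /* produce the value to write */
    typedef void storefn_t(void *opaque, uint32_t val);  /* consume the value read */

    static uint32_t fpscr;  /* stand-in for an FP system register */

    static void sysreg_write(loadfn_t *loadfn, void *opaque) { fpscr = loadfn(opaque); }
    static void sysreg_read(storefn_t *storefn, void *opaque) { storefn(opaque, fpscr); }

    /* VMSR/VMRS-style callbacks: the value comes from / goes to a GP register */
    static uint32_t load_from_gpreg(void *opaque) { return *(uint32_t *)opaque; }
    static void store_to_gpreg(void *opaque, uint32_t val) { *(uint32_t *)opaque = val; }

    int main(void)
    {
        uint32_t reg = 0x03000000;
        sysreg_write(load_from_gpreg, &reg);   /* like VMSR FPSCR, reg */
        sysreg_read(store_to_gpreg, &reg);     /* like VMRS reg, FPSCR */
        printf("FPSCR = 0x%08x\n", (unsigned)fpscr);
        return 0;
    }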
2020-12-10  target/arm: Implement VSCCLRM insn  (Peter Maydell)
Implement the v8.1M VSCCLRM insn, which zeros floating point registers if there is an active floating point context. This requires support in write_neon_element32() for the MO_32 element size, so add it. Because we want to use arm_gen_condlabel(), we need to move the definition of that function up in translate.c so it is before the #include of translate-vfp.c.inc. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
2020-11-15  arm tcg cpus: Fix Lesser GPL version number  (Chetan Pant)
There is no "version 2" of the "Lesser" General Public License. It is either "GPL version 2.0" or "Lesser GPL version 2.1". This patch replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL version 2.1" in the comment sections. Signed-off-by: Chetan Pant <chetan4windows@gmail.com> Message-Id: <20201023122913.19561-1-chetan4windows@gmail.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-10-27  linux-user: Set PAGE_TARGET_1 for TARGET_PROT_BTI  (Richard Henderson)
Transform the prot bit to a qemu internal page bit, and save it in the page tables. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201021173749.111103-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Implement FPSCR.LTPSIZE for M-profile LOB extension  (Peter Maydell)
If the M-profile low-overhead-branch extension is implemented, FPSCR bits [18:16] are a new field LTPSIZE. If MVE is not implemented (currently always true for us) then this field always reads as 4 and ignores writes. These bits used to be the vector-length field for the old short-vector extension, so we need to take care that they are not misinterpreted as setting vec_len. We do this with a rearrangement of the vfp_set_fpscr() code that deals with vec_len, vec_stride and also the QC bit; this obviates the need for the M-profile only masking step that we used to have at the start of the function. We provide a new field in CPUState for LTPSIZE, even though this will always be 4, in preparation for MVE, so we don't have to come back later and split it out of the vfp.xregs[FPSCR] value. (This state struct field will be saved and restored as part of the FPSCR value via the vmstate_fpscr in machine.c.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-11-peter.maydell@linaro.org
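A standalone sketch of the read-as-4 behaviour described above (bit positions as stated in the commit message; this is an illustration, not the actual vfp_get_fpscr() code):

    #include <stdint.h>

    /* With no MVE, FPSCR[18:16] (LTPSIZE) ignores writes and always reads
     * back as 4, so the read path can simply force the field. */
    static uint32_t fpscr_read_with_fixed_ltpsize(uint32_t fpscr)
    {
        return (fpscr & ~(7u << 16)) | (4u << 16);
    }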
2020-10-20  target/arm: Implement v8.1M branch-future insns (as NOPs)  (Peter Maydell)
v8.1M implements a new 'branch future' feature, which is a set of instructions that request the CPU to perform a branch "in the future", when it reaches a particular execution address. In hardware, the expected implementation is that the information about the branch location and destination is cached and then acted upon when execution reaches the specified address. However the architecture permits an implementation to discard this cached information at any point, and so guest code must always include a normal branch insn at the branch point as a fallback. In particular, an implementation is specifically permitted to treat all BF insns as NOPs (which is equivalent to discarding the cached information immediately). For QEMU, implementing this caching of branch information would be complicated and would not improve the speed of execution at all, so we make the IMPDEF choice to implement all BF insns as NOPs. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-7-peter.maydell@linaro.org
2020-10-20  target/arm: Implement v8.1M NOCP handling  (Peter Maydell)
From v8.1M, disabled-coprocessor handling changes slightly:
 * coprocessors 8, 9, 14 and 15 are also governed by the cp10 enable bit, like cp11
 * an extra range of instruction patterns is considered to be inside the coprocessor space
We previously marked these up with TODO comments; implement the correct behaviour. Unfortunately there is no ID register field which indicates this behaviour. We could in theory test an unrelated ID register which indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch >= 3 (low-overhead-loops), but it seems better to simply define a new ARM_FEATURE_V8_1M feature flag and use it for this and other new-in-v8.1M behaviour that isn't identifiable from the ID registers. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-3-peter.maydell@linaro.org
2020-10-08  hw/arm/virt: Implement kvm-steal-time  (Andrew Jones)
We add the kvm-steal-time CPU property and implement it for machvirt. A tiny bit of refactoring was also done to allow pmu and pvtime to use the same vcpu device helper functions. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Message-id: 20201001061718.101915-7-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-01  target/arm: Make isar_feature_aa32_fp16_arith() handle M-profile  (Peter Maydell)
The M-profile definition of the MVFR1 ID register differs slightly from the A-profile one, and in particular the check for "does the CPU support fp16 arithmetic" is not the same. We don't currently implement any M-profile CPUs with fp16 arithmetic, so this is not yet a visible bug, but correcting the logic now disarms this beartrap for when we eventually do. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-6-peter.maydell@linaro.org
2020-10-01  target/arm: Move id_pfr0, id_pfr1 into ARMISARegisters  (Peter Maydell)
Move the id_pfr0 and id_pfr1 fields into the ARMISARegisters sub-struct. We're going to want id_pfr1 for an isar_features check, and moving both at the same time avoids an odd inconsistency. Changes other than the ones to cpu.h and kvm64.c made automatically with: perl -p -i -e 's/cpu->id_pfr/cpu->isar.id_pfr/' target/arm/*.c hw/intc/armv7m_nvic.c Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-3-peter.maydell@linaro.org
2020-10-01  target/arm: Replace ARM_FEATURE_PXN with ID_MMFR0.VMSA check  (Peter Maydell)
The ARM_FEATURE_PXN bit indicates whether the CPU supports the PXN bit in short-descriptor translation table format descriptors. This is indicated by ID_MMFR0.VMSA being at least 0b0100. Replace the feature bit with an ID register check, in line with our preference for ID register checks over feature bits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-2-peter.maydell@linaro.org
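A standalone illustration of the kind of ID-register check this commit switches to (ID_MMFR0.VMSA is bits [3:0]; in QEMU the real test lives in an isar_feature_* helper built on the ID register values, so treat this as a sketch rather than the actual helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* PXN support in the short-descriptor format is implied by
     * ID_MMFR0.VMSA >= 0b0100, replacing the old ARM_FEATURE_PXN bit. */
    static bool short_descriptor_pxn_supported(uint32_t id_mmfr0)
    {
        return (id_mmfr0 & 0xf) >= 4;   /* VMSA field, bits [3:0] */
    }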
2020-09-08  target/arm: Move start-powered-off property to generic CPUState  (Thiago Jung Bauermann)
There are other platforms which also have CPUs that start powered off, so generalize the start-powered-off property so that it can be used by them. Note that ARMv7MState also has a property of the same name but this patch doesn't change it because that class isn't a subclass of CPUState so it wouldn't be a trivial change. This change should not cause any change in behavior. Suggested-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Message-Id: <20200826055535.951207-2-bauerman@linux.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-09-01  target/arm: Use correct ID register check for aa32_fp16_arith  (Peter Maydell)
The aa32_fp16_arith feature check function currently looks at the AArch64 ID_AA64PFR0 register. This is (as the comment notes) not correct. The bogus check was put in mostly to allow testing of the fp16 variants of the VCMLA instructions and it was something of a mistake that we allowed them to exist in master. Switch the feature check function to testing MVFR1.FPHP, which is what it ought to be. This will remove emulation of the VCMLA and VCADD insns from AArch32 code running on an AArch64 '-cpu max' using system emulation. (They were never enabled for aarch32 linux-user and system-emulation.) Since we weren't advertising their existence via the AArch32 ID register, well-behaved guests wouldn't have been using them anyway. Once we have implemented all the AArch32 support for the FP16 extension we will advertise it in the MVFR1 ID register field, which will reenable these insns along with all the others. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200828183354.27913-3-peter.maydell@linaro.org
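A standalone sketch of the corrected A-profile check (field position per the Arm ARM: MVFR1.FPHP is bits [27:24], and 0b0011 or greater indicates fp16 arithmetic; this is an illustration, not the QEMU helper itself):

    #include <stdbool.h>
    #include <stdint.h>

    static bool aa32_fp16_arith_present(uint32_t mvfr1)
    {
        /* FPHP = 0b0001/0b0010 only cover the half-precision conversion
         * insns; 0b0011 adds the fp16 arithmetic instructions. */
        return ((mvfr1 >> 24) & 0xf) >= 3;
    }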
2020-08-24  target/arm: Implement FPST_STD_F16 fpstatus  (Peter Maydell)
Architecturally, Neon FP16 operations use the "standard FPSCR" like all other Neon operations. However, this is defined in the Arm ARM pseudocode as "a fixed value, except that FZ16 (and AHP) follow the FPSCR bits". In QEMU, the softfloat float_status doesn't include separate flush-to-zero for FP16 operations, so we must keep separate fp_status for "Neon non-FP16" and "Neon fp16" operations, in the same way we do already for the non-Neon "fp_status" vs "fp_status_f16". Add the extra float_status field to the CPU state structure, ensure it is correctly initialized and updated on FPSCR writes, and make fpstatus_ptr(FPST_STD_F16) return a pointer to it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20200806104453.30393-4-peter.maydell@linaro.org
2020-08-24  target/arm: Delete unused ARM_FEATURE_CRC  (Peter Maydell)
In commit 962fcbf2efe57231a9f5df we converted the uses of the ARM_FEATURE_CRC bit to use the aa32_crc32 isar_feature test instead. However we forgot to remove the now-unused definition of the feature name in the enum. Delete it now. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20200805210848.6688-1-peter.maydell@linaro.org
2020-07-03  target/arm: kvm: Handle misconfigured dabt injection  (Beata Michalska)
Injecting external data abort through KVM might trigger an issue on kernels that do not get updated to include the KVM fix. For those and aarch32 guests, the injected abort gets misconfigured to be an implementation defined exception. This leads to the guest repeatedly re-running the faulting instruction. Add support for handling that case. [ Fixed-by: 018f22f95e8a ('KVM: arm: Fix DFSR setting for non-LPAE aarch32 guests') Fixed-by: 21aecdbd7f3a ('KVM: arm: Make inject_abt32() inject an external abort instead') ] Signed-off-by: Beata Michalska <beata.michalska@linaro.org> Acked-by: Andrew Jones <drjones@redhat.com> Message-id: 20200629114110.30723-3-beata.michalska@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Create tagged ram when MTE is enabled  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200626033144.790098-44-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement data cache set allocation tags  (Richard Henderson)
This is DC GVA and DC GZVA, and the tag check for DC ZVA. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-40-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add mte helpers for sve scalar + int loads  (Richard Henderson)
Because the elements are sequential, we can eliminate many tests all at once when the tag hits TCMA, or if the page(s) are not Tagged. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-34-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add arm_tlb_bti_gp  (Richard Henderson)
Introduce an lvalue macro to wrap target_tlb_bit0. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-33-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE bits to tb_flags  (Richard Henderson)
Cache the composite ATA setting. Cache when MTE is fully enabled, i.e. access to tags is enabled and tag checks affect the PE. Do this for both the normal context and the UNPRIV context. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE system registers  (Richard Henderson)
This is TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3, RGSR_EL1, GCR_EL1, GMID_EL1, and PSTATE.TCO. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add isar tests for mte  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-23  target/arm: Remove unnecessary gen_io_end() calls  (Peter Maydell)
Since commit ba3e7926691ed3 it has been unnecessary for target code to call gen_io_end() after an IO instruction in icount mode; it is sufficient to call gen_io_start() before it and to force the end of the TB. Many now-unnecessary calls to gen_io_end() were removed in commit 9e9b10c6491153b, but some were missed or accidentally added later. Remove unneeded calls from the arm target:
 * the call in the handling of exception-return-via-LDM is unnecessary, and the code is already forcing end-of-TB
 * the call in the VFP access check code is more complicated: we weren't ending the TB, so we need to add the code to force that by setting DISAS_UPDATE
 * the doc comment for ARM_CP_IO doesn't need to mention gen_io_end() any more
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Message-id: 20200619170324.12093-1-peter.maydell@linaro.org
2020-05-21  target/arm: Allow user-mode code to write CPSR.E via MSR  (Peter Maydell)
Using the MSR instruction to write to CPSR.E is deprecated, but it is required to work from any mode including unprivileged code. We were incorrectly forbidding usermode code from writing it because CPSR_USER did not include the CPSR_E bit. We use CPSR_USER in only three places:
 * as the mask of what to allow userspace MSR to write to CPSR
 * when deciding what bits a linux-user signal-return should be able to write from the sigcontext structure
 * in target_user_copy_regs() when we set up the initial registers for the linux-user process
In the first two cases not being able to update CPSR.E is a bug, and in the third case it doesn't matter because CPSR.E is always 0 there. So we can fix both bugs by adding CPSR_E to CPSR_USER. Because the cpsr_write() in restore_sigcontext() is now changing a CPSR bit which is cached in hflags, we need to add an arm_rebuild_hflags() call there; the callsite in target_user_copy_regs() was already rebuilding hflags for other reasons. (The recommended way to change CPSR.E is to use the 'SETEND' instruction, which we do correctly allow from usermode code.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200518142801.20503-1-peter.maydell@linaro.org
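A sketch of the mask change described above (the CPSR bit positions are architectural; the exact macro grouping in cpu.h may differ, so take this as an illustration of the fix rather than the literal diff):

    #define CPSR_E     (1u << 9)             /* endianness bit */
    #define CPSR_GE    (0xfu << 16)
    #define CPSR_Q     (1u << 27)
    #define CPSR_NZCV  (0xfu << 28)

    /* Previously CPSR_USER omitted CPSR_E, so user-mode MSR writes to
     * CPSR.E were rejected; folding it in fixes both the MSR path and
     * the linux-user signal-return path. */
    #define CPSR_USER  (CPSR_NZCV | CPSR_Q | CPSR_GE | CPSR_E)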
2020-05-14  target-arm: kvm64: handle SIGBUS signal from kernel or KVM  (Dongjiu Geng)
Add a SIGBUS signal handler. In this handler, it checks the SIGBUS type, translates the host VA delivered by host to guest PA, then fills this PA to guest APEI GHES memory, then notifies guest according to the SIGBUS type. When guest accesses the poisoned memory, it will generate a Synchronous External Abort(SEA). Then host kernel gets an APEI notification and calls memory_failure() to unmap the affected page in stage 2, finally returns to guest. Guest continues to access the PG_hwpoison page, it will trap to KVM as stage2 fault, then a SIGBUS_MCEERR_AR synchronous signal is delivered to Qemu, Qemu records this error address into guest APEI GHES memory and notifies guest using Synchronous-External-Abort(SEA). In order to inject a vSEA, we introduce the kvm_inject_arm_sea() function in which we can setup the type of exception and the syndrome information. When switching to guest, the target vcpu will jump to the synchronous external abort vector table entry. The ESR_ELx.DFSC is set to synchronous external abort(0x10), and the ESR_ELx.FnV is set to not valid(0x1), which will tell guest that FAR is not valid and holds an UNKNOWN value. These values will be set to KVM register structures through KVM_SET_ONE_REG IOCTL. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Xiang Zheng <zhengxiang9@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-id: 20200512030609.19593-10-gengdongjiu@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Make set_feature() available for other files  (Thomas Huth)
Move the common set_feature() and unset_feature() functions from cpu.c and cpu64.c to cpu.h. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200504172448.9402-3-philmd@redhat.com Message-ID: <20190921150420.30743-2-thuth@redhat.com> [PMD: Split Thomas's patch in two: set_feature, cpu_register] Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-04  target/arm: Use uint64_t for midr field in CPU state struct  (Philippe Mathieu-Daudé)
MIDR_EL1 is a 64-bit system register with the top 32 bits being RES0. Represent it in QEMU's ARMCPU struct with a uint64_t, not a uint32_t. This fixes an error when compiling with -Werror=conversion because we were manipulating the register value using a local uint64_t variable:

  target/arm/cpu64.c: In function ‘aarch64_max_initfn’:
  target/arm/cpu64.c:628:21: error: conversion from ‘uint64_t’ {aka ‘long unsigned int’} to ‘uint32_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
    628 |     cpu->midr = t;
        |                 ^

and future-proofs us against a possible future architecture change using some of the top 32 bits. Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20200428172634.29707-1-f4bug@amsat.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-04  target/arm: Implement ARMv8.2-TTS2UXN  (Peter Maydell)
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2 translation table descriptors from just bit [54] to bits [54:53], allowing stage 2 to control execution permissions separately for EL0 and EL1. Implement the new semantics of the XN field and enable the feature for our 'max' CPU. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-5-peter.maydell@linaro.org
2020-05-04  target/arm: Don't use a TLB for ARMMMUIdx_Stage2  (Peter Maydell)
We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU TLB. However we never actually use the TLB -- all stage 2 lookups are done by direct calls to get_phys_addr_lpae() followed by a physical address load via address_space_ld*(). Remove Stage2 from the list of ARM MMU indexes which correspond to real core MMU indexes, and instead put it in the set of "NOTLB" ARM MMU indexes. This allows us to drop NB_MMU_MODES to 11. It also means we can safely add support for the ARMv8.2-TTS2UXN extension, which adds permission bits to the stage 2 descriptors which define execute permission separately for EL0 and EL1; supporting that while keeping Stage2 in a QEMU TLB would require us to use separate TLBs for "Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a lot of extra complication given we aren't even using the QEMU TLB. In the process of updating the comment on our MMU index use, fix a couple of other minor errors:
 * NS EL2 EL2&0 was missing from the list in the comment
 * some text hadn't been updated from when we bumped NB_MMU_MODES above 8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-2-peter.maydell@linaro.org
2020-03-17  target/arm: generate xml description of our SVE registers  (Alex Bennée)
We also expose the helpers to read/write the registers. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-19-alex.bennee@linaro.org>
2020-03-17  target/arm: explicitly encode regnum in our XML  (Alex Bennée)
This is described as optional but I'm not convinced of the numbering when multiple target fragments are sent. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-17-alex.bennee@linaro.org>
2020-03-17  target/arm: prepare for multiple dynamic XMLs  (Alex Bennée)
We will want to generate similar dynamic XML for gdbstub support of SVE registers (the upstream doesn't use XML). To that end lightly rename a few things to make the distinction. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200316172155.971-16-alex.bennee@linaro.org>
2020-03-17  gdbstub: extend GByteArray to read register helpers  (Alex Bennée)
Instead of passing a pointer to memory, now just extend the GByteArray to all the read register helpers. They can then safely append their data in the normal way. We don't bother with this abstraction for write registers as we have already ensured the buffer being copied from is the correct size. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Damien Hedde <damien.hedde@greensocs.com> Message-Id: <20200316172155.971-15-alex.bennee@linaro.org>
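A toy GLib-only sketch of the pattern described above (toy_get_reg32 is a made-up name standing in for the gdbstub register-read helpers): each helper appends its bytes to a growable GByteArray instead of writing through a raw pointer, so the caller no longer has to pre-size a buffer.

    #include <glib.h>
    #include <stdint.h>

    static int toy_get_reg32(GByteArray *buf, uint32_t val)
    {
        /* append 4 bytes and report how many bytes this register contributed */
        g_byte_array_append(buf, (const guint8 *)&val, sizeof(val));
        return sizeof(val);
    }

    int main(void)
    {
        GByteArray *buf = g_byte_array_new();
        toy_get_reg32(buf, 0xdeadbeef);      /* e.g. one general purpose register */
        g_byte_array_free(buf, TRUE);
        return 0;
    }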
2020-03-05  target/arm: Optimize cpu_mmu_index  (Richard Henderson)
We now cache the core mmu_idx in env->hflags. Rather than recompute from scratch, extract the field. All of the uses of cpu_mmu_index within target/arm are within helpers, and env->hflags is always stable within a translation block from whence helpers are called. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200302175829.2183-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
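A standalone sketch of the caching idea described above (the field layout is illustrative; QEMU's real code extracts the MMU index field from env->hflags with its register-field macros):

    #include <stdint.h>

    /* The MMU index is computed once, when hflags are rebuilt, and stored in a
     * small bitfield; later lookups just extract the cached field instead of
     * recomputing it from scratch. */
    #define MMUIDX_SHIFT  0
    #define MMUIDX_MASK   0xfu

    static uint32_t hflags_set_mmu_index(uint32_t hflags, uint32_t mmu_idx)
    {
        return (hflags & ~(MMUIDX_MASK << MMUIDX_SHIFT)) | (mmu_idx << MMUIDX_SHIFT);
    }

    static uint32_t hflags_get_mmu_index(uint32_t hflags)
    {
        return (hflags >> MMUIDX_SHIFT) & MMUIDX_MASK;
    }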
2020-03-05  target/arm: Add HCR_EL2 bit definitions from ARMv8.6  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200229012811.24129-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Implement ARMv8.3-CCIDX  (Peter Maydell)
The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers have a format that uses the full 64 bit width of the register, and adds a new CCSIDR2 register so AArch32 can get at the high 32 bits. QEMU doesn't implement caches, so we just treat these ID registers as opaque values that are set to the correct constant values for each CPU. The only thing we need to do is allow 64-bit values in our ccsidr[] array and provide the CCSIDR2 accessors. We don't set the CCIDX field in our 'max' CPU because the CCSIDR constant values we use are the same as the ones used by the Cortex-A57 and they are in the old 32-bit format. This means that the extra regdef added here is unused currently, but it means that whenever in the future we add a CPU that does need the new 64-bit format it will just work when we set the ccsidr values and the ID registers for it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224182626.29252-1-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.4-RCPC  (Peter Maydell)
The v8.4-RCPC extension implements some new instructions: * LDAPUR, LDAPURB, LDAPURH, LDAPRSB, LDAPRSH, LDAPRSW * STLUR, STLURB, STLURH These are all in a new subgroup of encodings that sits below the top-level "Loads and Stores" group in the Arm ARM. The STLUR* instructions have standard store-release semantics; the LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose to implement them as the slightly stronger Load-Acquire. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.3-RCPC  (Peter Maydell)
The v8.3-RCPC extension implements three new load instructions which provide slightly weaker consistency guarantees than the existing load-acquire operations. For QEMU we choose to simply implement them with a full LDAQ barrier. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
2020-02-28  target/arm: Fix wrong use of FIELD_EX32 on ID_AA64DFR0  (Peter Maydell)
We missed an instance of using FIELD_EX32 on a 64-bit ID register, in isar_feature_aa64_pmu_8_4(). Fix it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224172846.13053-2-peter.maydell@linaro.org
2020-02-28  target/arm: Remove ARM_FEATURE_VFP*  (Richard Henderson)
We have converted all tests against these features to ISAR tests. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224222232.13807-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac  (Richard Henderson)
All remaining tests for VFP4 are for fused multiply-add insns. Since the MVFR1 field is used for both VFP and NEON, move its adjustment from the !has_neon block to the (!has_vfp && !has_neon) block. Test for vfp of the appropriate width alongside the test for simdfmac within translate-vfp.inc.c. Within disas_neon_data_insn, we have already tested for ARM_FEATURE_NEON. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200224222232.13807-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Add isar_feature_aa64_fp_simd, isar_feature_aa32_vfp  (Richard Henderson)
We cannot easily create "any" functions for these, because the ID_AA64PFR0 fields for FP and SIMD signal "enabled" with zero. Which means that an aarch32-only cpu will return incorrect results when testing the aarch64 registers. To use these, we must either have context or additionally test vs ARM_FEATURE_AARCH64. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200224222232.13807-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
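A standalone illustration of the encoding quirk described above (field positions per the Arm ARM: ID_AA64PFR0.FP is bits [19:16] and AdvSIMD is bits [23:20]; 0b0000 means implemented and 0b1111 means not implemented, so the presence test must compare against 0xf rather than against 0):

    #include <stdbool.h>
    #include <stdint.h>

    static bool aa64_fp_simd_present(uint64_t id_aa64pfr0)
    {
        /* testing only the FP field here; AdvSIMD ([23:20]) is set identically */
        return ((id_aa64pfr0 >> 16) & 0xf) != 0xf;
    }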
2020-02-28  target/arm: Add isar_feature_aa32_{fpsp_v2, fpsp_v3, fpdp_v3}  (Richard Henderson)
We will shortly use these to test for VFPv2 and VFPv3 in different situations. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224222232.13807-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Rename isar_feature_aa32_fpdp_v2  (Richard Henderson)
The old name, isar_feature_aa32_fpdp, does not reflect that the test includes VFPv2. We will introduce other feature tests for VFPv3. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200224222232.13807-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Add isar_feature_aa32_vfp_simd  (Richard Henderson)
Use this in the places that were checking ARM_FEATURE_VFP, and are obviously testing for the existence of the register set as opposed to testing for some particular instruction extension. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200224222232.13807-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Rename isar_feature_aa32_simd_r32  (Richard Henderson)
The old name, isar_feature_aa32_fp_d32, does not reflect the MVFR0 field name, SIMDReg. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200214181547.21408-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: wrapped one long line] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Correctly implement ACTLR2, HACTLR2  (Peter Maydell)
The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7 or the original ARMv8. They were later added as optional registers, whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2 they are mandatory (ie ID_MMFR4.AC2 must be non-zero). We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we incorrectly made it exist for all v8 CPUs, and we didn't implement ACTLR2 at all. Sort this out by implementing both registers only when they are supposed to exist, and setting the ID_MMFR4 bit for -cpu max. Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72 CPU models; this is correct, because those CPUs do not implement this register. Fixes: 0e0456ab8895a5e85 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-22-peter.maydell@linaro.org
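A standalone sketch of the presence check the commit describes (ID_MMFR4.AC2 is bits [7:4] per the Arm ARM; this is an illustration, not the QEMU helper itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* ACTLR2/HACTLR2 should only be registered when ID_MMFR4.AC2 is non-zero. */
    static bool actlr2_hactlr2_present(uint32_t id_mmfr4)
    {
        return ((id_mmfr4 >> 4) & 0xf) != 0;
    }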
2020-02-21  target/arm: Use FIELD_EX32 for testing 32-bit fields  (Peter Maydell)
Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from some 32-bit ID register fields. Use FIELD_EX32() instead. (This makes no difference in behaviour, it's just more consistent.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-21-peter.maydell@linaro.org
2020-02-21  target/arm: Use isar_feature function for testing AA32HPD feature  (Peter Maydell)
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we can define and use an isar_feature for the presence of the ARMv8.2-AA32HPD feature, rather than open-coding the test. While we're here, correct a comment typo which missed an 'A' from the feature name. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-20-peter.maydell@linaro.org