path: root/target
Age  Commit message  Author
2020-02-27  target/riscv: Flush the TLB on virtualisation mode changes  Alistair Francis
To ensure our TLB isn't out-of-date we flush it on all virt mode changes. Unlike priv mode this isn't saved in the mmu_idx as all guests share V=1. The easiest option is just to flush on all changes. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add support for virtual interrupt setting  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Extend the SIP CSR to support virtualisation  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Extend the MIE CSR to support virtualisation  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Set VS bits in mideleg for Hyp extension  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add virtual register swapping function  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add Hypervisor machine CSR accesses  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add Hypervisor virtual CSR accesses  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add Hypervisor CSR access functions  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Dump Hypervisor registers if enabled  Alistair Francis
Dump the Hypervisor registers and the current Hypervisor state. While we are editing this code let's also dump stvec and scause. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Print priv and virt in disas log  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Fix CSR perm checking for HS mode  Alistair Francis
Update the CSR permission checking to work correctly when we are in HS-mode. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add the force HS exception mode  Alistair Francis
Add a FORCE_HS_EXCEP mode to the RISC-V virtualisation status. This bit specifies whether an exception should be taken to HS mode regardless of the current delegation status. This is used when an exception must be taken to HS mode, such as when handling interrupts. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add the virtualisation mode  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Rename the H irqs to VS irqs  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add support for the new exception numbers  Alistair Francis
The v0.5 Hypervisor spec adds new exception numbers; let's add support for those. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add the Hypervisor CSRs to CPUState  Alistair Francis
Add the Hypervisor CSRs to CPUState and at the same time (to avoid bisect issues) update the CSR macros for the v0.5 Hyp spec. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Add the Hypervisor extension  Alistair Francis
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Chih-Min Chao <chihmin.chao@sifive.com> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  target/riscv: Convert MIP CSR to target_ulong  Alistair Francis
The MIP CSR is an xlen-wide CSR; it was only 32 bits to allow atomic access. Now that we don't use atomics for MIP we can change it back to an xlen-wide CSR. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-02-27  Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200227' into staging  Peter Maydell
Includes a headers update against 5.6-current.

- add missing vcpu reset functionality
- rstfy some s390 documentation
- fixes and enhancements

# gpg: Signature made Thu 27 Feb 2020 11:50:08 GMT
# gpg:                using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg:                issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [marginal]
# gpg:                 aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cohuck@kernel.org>" [marginal]
# gpg:                 aka "Cornelia Huck <cohuck@redhat.com>" [marginal]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF

* remotes/cohuck/tags/s390x-20200227:
  s390x: Rename and use constants for short PSW address and mask
  docs: rstfy vfio-ap documentation
  docs: rstfy s390 dasd ipl documentation
  s390/sclp: improve special wait psw logic
  s390x: Add missing vcpu reset functions
  linux-headers: update
  target/s390x/translate: Fix RNSBG instruction

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-27  s390x: Rename and use constants for short PSW address and mask  Janosch Frank
Let's rename PSW_MASK_ESA_ADDR to PSW_MASK_SHORT_ADDR because we're not working with an ESA PSW, which would not support the extended addressing bit. Also let's actually use it. Additionally we introduce PSW_MASK_SHORT_CTRL and use it throughout the codebase. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200227092341.38558-1-frankja@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2020-02-26  s390/sclp: improve special wait psw logic  Christian Borntraeger
There is a special quiesce PSW that we check for "shutdown". Otherwise disabled wait is detected as "crashed". Architecturally we must only check PSW bits 116-127. Fix this. Cc: qemu-stable@nongnu.org Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1582204582-22995-1-git-send-email-borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2020-02-26  s390x: Add missing vcpu reset functions  Janosch Frank
Up to now we only had an ioctl to reset vcpu data that QEMU couldn't reach; it was used for the initial reset and was also called for the clear reset. To be architecture compliant, we also need to clear local interrupts on a normal reset. Because of this and the upcoming protvirt support we need to add ioctls for the missing clear and normal resets. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Acked-by: David Hildenbrand <david@redhat.com> Message-Id: <20200214151636.8764-3-frankja@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2020-02-26  target/s390x/translate: Fix RNSBG instruction  Thomas Huth
RNSBG is handled via the op_rosbg() helper function. But RNSBG has the opcode 0xEC54, i.e. 0x54 as second byte, while op_rosbg() currently checks for 0x55. This seems to be a typo; fix it to use 0x54 instead, so that op_rosbg() does not abort() anymore if a program uses RNSBG.

I've checked with a simple test function that I now get the same results with KVM and with TCG:

    static void test_rnsbg(void)
    {
        uint64_t r1, r2;

        r2 = 0xffff000000000000UL;
        r1 = 0x123456789bdfaaaaUL;
        asm volatile (" rnsbg %0,%1,12,61,16 " : "+r"(r1) : "r"(r2));
        printf("r1 afterwards: 0x%lx\n", r1);
    }

Buglink: https://bugs.launchpad.net/qemu/+bug/1860920
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200130133417.10531-1-thuth@redhat.com>
Fixes: d6c6372e186e ("target-s390: Implement R[NOX]SBG")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2020-02-25  target/riscv: progressively load the instruction during decode  Alex Bennée
The plugin system would throw up a harmless warning when it detected that a disassembly of an instruction didn't use all its bytes. Fix the riscv decoder to only load the instruction bytes it needs as it needs them. This drops opcode from the ctx in favour of passing the appropriately sized opcode down a few levels of the decode. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Robert Foley <robert.foley@linaro.org> Message-Id: <20200225124710.14152-15-alex.bennee@linaro.org>
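For reference, a minimal standalone sketch of the progressive-fetch idea described in the commit above (illustrative C only, not the riscv decoder itself; the 'addi' byte values are just an example): RISC-V encodes the instruction length in the low two bits, so the second halfword only needs to be loaded when those bits are 0b11.

    #include <stdint.h>
    #include <stdio.h>

    /* Fetch the first halfword; only load the second halfword when the
     * low two bits say this is a 32-bit (non-compressed) encoding. */
    static uint32_t fetch_insn(const uint8_t *mem)
    {
        uint32_t insn = mem[0] | (uint32_t)mem[1] << 8;   /* first 16 bits */

        if ((insn & 0x3) == 0x3) {                        /* 32-bit encoding */
            insn |= ((uint32_t)mem[2] | (uint32_t)mem[3] << 8) << 16;
        }
        return insn;
    }

    int main(void)
    {
        const uint8_t addi[] = { 0x13, 0x05, 0x10, 0x00 }; /* addi a0,zero,1 */
        printf("0x%08x\n", fetch_insn(addi));              /* 0x00100513 */
        return 0;
    }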
2020-02-25  Merge branch 'exec_rw_const_v4' of https://github.com/philmd/qemu into HEAD  Paolo Bonzini
2020-02-25  target/i386: check for empty register in FXAM  Paolo Bonzini
The fxam instruction returns the wrong result after fdecstp or after an underflow. Check fptags to handle this. Reported-by: <chengang@emindsoft.com.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
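As an illustration of the behaviour being fixed, here is a small hedged probe (my own sketch, x86-only GCC inline asm, not the patch or its reproducer): after fdecstp, ST(0) names an empty register, so FXAM should classify it as "empty" (C3:C2:C0 = 1,0,1) on real hardware and on a fixed QEMU.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t sw;

        asm volatile("fninit\n\t"   /* tag all x87 registers as empty */
                     "fdecstp\n\t"  /* ST(0) now names an empty slot */
                     "fxam\n\t"     /* classify ST(0) */
                     "fnstsw %0\n\t"
                     "fincstp"      /* restore the stack top */
                     : "=m"(sw));

        /* C0 is status-word bit 8, C2 is bit 10, C3 is bit 14 */
        printf("C3=%u C2=%u C0=%u (1,0,1 means \"empty\")\n",
               (sw >> 14) & 1u, (sw >> 10) & 1u, (sw >> 8) & 1u);
        return 0;
    }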
2020-02-21  target/arm: Set MVFR0.FPSP for ARMv5 cpus  Richard Henderson
We are going to convert FEATURE tests to ISAR tests, so FPSP needs to be set for these cpus, like we have already for FPDP. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214181547.21408-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Use isar_feature_aa32_simd_r32 in more places  Richard Henderson
Many uses of ARM_FEATURE_VFP3 are testing for the number of simd registers implemented. Use the proper test vs MVFR0.SIMDReg. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214181547.21408-4-richard.henderson@linaro.org [PMM: fix typo in commit message] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Rename isar_feature_aa32_simd_r32  Richard Henderson
The old name, isar_feature_aa32_fp_d32, does not reflect the MVFR0 field name, SIMDReg. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200214181547.21408-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: wrapped one long line] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Convert PMULL.8 to gvec  Richard Henderson
We still need two different helpers, since NEON and SVE2 get the inputs from different locations within the source vector. However, we can convert both to the same internal form for computation. The sve2 helper is not used yet, but adding it with this patch helps illustrate why the neon changes are helpful. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200216214232.4230-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
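For readers unfamiliar with the operation, here is a hedged, standalone reference for the per-element computation (the textbook 8x8 -> 16-bit carry-less multiply, not the QEMU helper itself):

    #include <stdio.h>
    #include <stdint.h>

    /* Carry-less (polynomial) multiply of two 8-bit values into a 16-bit
     * result: the per-element operation behind PMULL.8. */
    static uint16_t clmul_8x8(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;

        for (int i = 0; i < 8; i++) {
            if (a & (1u << i)) {
                result ^= (uint16_t)b << i;   /* XOR instead of add */
            }
        }
        return result;
    }

    int main(void)
    {
        printf("0x%04x\n", clmul_8x8(0x87, 0x5a)); /* arbitrary example inputs */
        return 0;
    }

PMUL.8, a couple of entries further down the log, is the same per-element operation truncated to the low 8 bits of each product.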
2020-02-21  target/arm: Convert PMULL.64 to gvec  Richard Henderson
The gvec form will be needed for implementing SVE2. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200216214232.4230-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Convert PMUL.8 to gvec  Richard Henderson
The gvec form will be needed for implementing SVE2. Extend the implementation to operate on uint64_t instead of uint32_t. Use a counted inner loop instead of terminating when op1 goes to zero, looking toward the required implementation for ARMv8.4-DIT. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200216214232.4230-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Vectorize USHL and SSHL  Richard Henderson
These instructions shift left or right depending on the sign of the input, and 7 bits are significant to the shift. This requires several masks and selects in addition to the actual shifts to form the complete answer. That said, the operation is still a small improvement even for two 64-bit elements -- 13 vector operations instead of 2 * 7 integer operations. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200216214232.4230-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
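To make the "shift left or right depending on the sign" behaviour concrete, here is a hedged scalar reference for one 8-bit SSHL lane (my own reading of the architecture, not the QEMU vector code): the shift count is the signed low byte of the second operand, negative counts shift right arithmetically, and counts at or beyond the element size leave only 0 or the sign fill.

    #include <stdio.h>
    #include <stdint.h>

    static int8_t sshl_lane8(int8_t val, int8_t shift)
    {
        if (shift <= -8) {
            return val < 0 ? -1 : 0;        /* fully shifted out, keep sign */
        } else if (shift < 0) {
            return (int8_t)(val >> -shift); /* arithmetic right shift */
        } else if (shift >= 8) {
            return 0;                       /* fully shifted out */
        }
        return (int8_t)((uint8_t)val << shift);
    }

    int main(void)
    {
        printf("%d %d %d\n",
               sshl_lane8(0x12, 3),    /* left by 3  */
               sshl_lane8(-64, -2),    /* right by 2 */
               sshl_lane8(-1, -100));  /* shifted fully out */
        return 0;
    }

The masks and selects mentioned above are what it takes to express this lane-wise sign test and clamping with vector operations.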
2020-02-21  target/arm: Correctly implement ACTLR2, HACTLR2  Peter Maydell
The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7 or the original ARMv8. They were later added as optional registers, whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2 they are mandatory (i.e. ID_MMFR4.AC2 must be non-zero). We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we incorrectly made it exist for all v8 CPUs, and we didn't implement ACTLR2 at all. Sort this out by implementing both registers only when they are supposed to exist, and setting the ID_MMFR4 bit for -cpu max. Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72 CPU models; this is correct, because those CPUs do not implement this register. Fixes: 0e0456ab8895a5e85 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-22-peter.maydell@linaro.org
2020-02-21  target/arm: Use FIELD_EX32 for testing 32-bit fields  Peter Maydell
Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from some 32-bit ID register fields. Use FIELD_EX32() instead. (This makes no difference in behaviour, it's just more consistent.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-21-peter.maydell@linaro.org
2020-02-21  target/arm: Use isar_feature function for testing AA32HPD feature  Peter Maydell
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we can define and use an isar_feature for the presence of the ARMv8.2-AA32HPD feature, rather than open-coding the test. While we're here, correct a comment typo which missed an 'A' from the feature name. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-20-peter.maydell@linaro.org
2020-02-21  target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks  Peter Maydell
The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions are supposed to be testing fields in ID_MMFR3; but a cut-and-paste error meant we were looking at MVFR0 instead. Fix the functions to look at the right register; this requires us to move at least id_mmfr3 to the ARMISARegisters struct; we choose to move all the ID_MMFRn registers for consistency. Fixes: 3d6ad6bb466f Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-19-peter.maydell@linaro.org
2020-02-21  target/arm: Correct handling of PMCR_EL0.LC bit  Peter Maydell
The LC bit in the PMCR_EL0 register is supposed to be:
 * read/write
 * RES1 on an AArch64-only implementation
 * an architecturally UNKNOWN value on reset
(and use of LC==0 by software is deprecated). We were implementing it incorrectly as read-only always zero, though we do have all the code needed to test it and behave accordingly. Instead make it a read-write bit which resets to 1 always, which satisfies all the architectural requirements above. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-18-peter.maydell@linaro.org
2020-02-21  target/arm: Correct definition of PMCRDP  Peter Maydell
The PMCR_EL0.DP bit is bit 5, which is 0x20, not 0x10. 0x10 is 'X'. Correct our #define of PMCRDP and add the missing PMCRX. We do have the correct behaviour for handling the DP bit being set, so this fixes a guest-visible bug. Fixes: 033614c47de Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-17-peter.maydell@linaro.org
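A tiny hedged sanity check of the bit positions involved (architecturally, PMCR_EL0.X is bit 4 and PMCR_EL0.DP is bit 5); the macro names simply mirror those used in the commit message:

    #include <assert.h>

    #define PMCRX  (1 << 4)  /* 0x10: PMCR_EL0.X, export enable */
    #define PMCRDP (1 << 5)  /* 0x20: PMCR_EL0.DP, disable cycle counter
                              *       when event counting is prohibited */

    int main(void)
    {
        assert(PMCRX == 0x10);
        assert(PMCRDP == 0x20);
        return 0;
    }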
2020-02-21  target/arm: Provide ARMv8.4-PMU in '-cpu max'  Peter Maydell
Set the ID register bits to provide ARMv8.4-PMU (and implicitly also ARMv8.1-PMU) in the 'max' CPU. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-16-peter.maydell@linaro.org
2020-02-21  target/arm: Implement ARMv8.4-PMU extension  Peter Maydell
The ARMv8.4-PMU extension adds:
 * one new required event, STALL
 * one new system register PMMIR_EL1
(There are also some more L1-cache related events, but since we don't implement any cache we don't provide these, in the same way we don't provide the base-PMUv3 cache events.) The STALL event "counts every attributable cycle on which no attributable instruction or operation was sent for execution on this PE". QEMU doesn't stall in this sense, so this is another always-reads-zero event. The PMMIR_EL1 register is a read-only register providing implementation-specific information about the PMU; currently it has only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU event. Since QEMU doesn't implement the STALL_SLOT event, we can validly make the register read zero. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-15-peter.maydell@linaro.org
2020-02-21  target/arm: Implement ARMv8.1-PMU extension  Peter Maydell
The ARMv8.1-PMU extension requires:
 * the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
 * MDCR_EL2.HPMD allows event counting to be disabled at EL2
 * two new required events, STALL_FRONTEND and STALL_BACKEND
 * ID register bits in ID_AA64DFR0_EL1 and ID_DFR0
We already implement the 16-bit evtCount field and the HPMD bit, so all that is missing is the two new events: STALL_FRONTEND "counts every cycle counted by the CPU_CYCLES event on which no operation was issued because there are no operations available to issue to this PE from the frontend" STALL_BACKEND "counts every cycle counted by the CPU_CYCLES event on which no operation was issued because the backend is unable to accept any available operations from the frontend" QEMU never stalls in this sense, so our implementation is trivial: always return a zero count. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-14-peter.maydell@linaro.org
2020-02-21  target/arm: Read debug-related ID registers from KVM  Peter Maydell
Now we have isar_feature test functions that look at fields in the ID_AA64DFR0_EL1 and ID_DFR0 ID registers, add the code that reads these register values from KVM so that the checks behave correctly when we're using KVM. No isar_feature function tests ID_AA64DFR1_EL1 or DBGDIDR yet, but we add it to maintain the invariant that every field in the ARMISARegisters struct is populated for a KVM CPU and can be relied on. This requirement isn't actually written down yet, so add a note to the relevant comment. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-13-peter.maydell@linaro.org
2020-02-21  target/arm: Move DBGDIDR into ARMISARegisters  Peter Maydell
We're going to want to read the DBGDIDR register from KVM in a subsequent commit, which means it needs to be in the ARMISARegisters sub-struct. Move it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-12-peter.maydell@linaro.org
2020-02-21  target/arm: Stop assuming DBGDIDR always exists  Peter Maydell
The AArch32 DBGDIDR defines properties like the number of breakpoints, watchpoints and context-matching comparators. On an AArch64 CPU, the register may not even exist if AArch32 is not supported at EL1. Currently we hard-code use of DBGDIDR to identify the number of breakpoints etc; this works for all our TCG CPUs, but will break if we ever add an AArch64-only CPU. We also have an assert() that the AArch32 and AArch64 registers match, which currently works only by luck for KVM because we don't populate either of these ID registers from the KVM vCPU and so they are both zero. Clean this up so we have functions for finding the number of breakpoints, watchpoints and context comparators which look in the appropriate ID register. This allows us to drop the "check that AArch64 and AArch32 agree on the number of breakpoints etc" asserts:
 * we no longer look at the AArch32 versions unless that's the right place to be looking
 * it's valid to have a CPU (eg AArch64-only) where they don't match
 * we shouldn't have been asserting the validity of ID registers in a codepath used with KVM anyway
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-11-peter.maydell@linaro.org
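A hedged sketch of the idea (not QEMU's actual helpers): pick the breakpoint count from the AArch64 ID register when the CPU implements AArch64, otherwise from the AArch32 DBGDIDR. The field positions follow my reading of the ARM ARM (ID_AA64DFR0_EL1.BRPs in bits [15:12], DBGDIDR.BRPs in bits [27:24], both holding "count minus one").

    #include <stdio.h>
    #include <stdint.h>

    static int num_breakpoints(int aarch64, uint64_t id_aa64dfr0, uint32_t dbgdidr)
    {
        if (aarch64) {
            return (int)((id_aa64dfr0 >> 12) & 0xf) + 1;  /* ID_AA64DFR0_EL1.BRPs */
        }
        return (int)((dbgdidr >> 24) & 0xf) + 1;          /* DBGDIDR.BRPs */
    }

    int main(void)
    {
        /* Example: a CPU reporting six breakpoints in ID_AA64DFR0_EL1 */
        printf("%d\n", num_breakpoints(1, 5ull << 12, 0));
        return 0;
    }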
2020-02-21  target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks  Peter Maydell
Add the 64-bit version of the "is this a v8.1 PMUv3?" ID register check function, and the _any_ version that checks for either AArch32 or AArch64 support. We'll use this in a later commit. We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1, but we move id_aa64dfr1 into the ARMISARegisters struct with id_aa64dfr0, for consistency. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-10-peter.maydell@linaro.org
2020-02-21  target/arm: Define an aa32_pmu_8_1 isar feature test function  Peter Maydell
Instead of open-coding a check on the ID_DFR0 PerfMon ID register field, create a standardly-named isar_feature for "does AArch32 have a v8.1 PMUv3" and use it. This entails moving the id_dfr0 field into the ARMISARegisters struct. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-9-peter.maydell@linaro.org
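A hedged sketch of the check being named here (not the QEMU isar_feature function itself): ID_DFR0.PerfMon lives in bits [27:24]; values of 4 and up indicate a PMUv3 with at least the Armv8.1 extensions, while 0xf means a non-architected IMPLEMENTATION DEFINED PMU and must be excluded.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool aa32_pmu_8_1(uint32_t id_dfr0)
    {
        uint32_t perfmon = (id_dfr0 >> 24) & 0xf;   /* ID_DFR0.PerfMon */

        return perfmon >= 4 && perfmon != 0xf;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               aa32_pmu_8_1(4u << 24),    /* v8.1 PMUv3  -> 1 */
               aa32_pmu_8_1(3u << 24),    /* v8.0 PMUv3  -> 0 */
               aa32_pmu_8_1(0xfu << 24)); /* IMPDEF PMU  -> 0 */
        return 0;
    }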
2020-02-21  target/arm: Use FIELD macros for clearing ID_DFR0 PERFMON field  Peter Maydell
We already define FIELD macros for ID_DFR0, so use them in the one place where we're doing direct bit value manipulation. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-8-peter.maydell@linaro.org
2020-02-21  target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1  Peter Maydell
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them where we currently have hard-coded bit values. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-7-peter.maydell@linaro.org