path: root/target/arm/cpu.h
Age  Commit message  Author
2019-11-19  target/arm: Support EL0 v7m msr/mrs for CONFIG_USER_ONLY  (Richard Henderson)
Simply moving the non-stub helper_v7m_mrs/msr outside of !CONFIG_USER_ONLY is not an option, because of all of the other system-mode helpers that are called. But we can split out a few subroutines to handle the few EL0 accessible registers without duplicating code. Reported-by: Christophe Lyon <christophe.lyon@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191118194916.3670-1-richard.henderson@linaro.org [PMM: deleted now-redundant comment; added a default case to switch in v7m_msr helper] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-11-19  target/arm: Merge arm_cpu_vq_map_next_smaller into sole caller  (Richard Henderson)
Coverity reports, in sve_zcr_get_valid_len, "Subtract operation overflows on operands arm_cpu_vq_map_next_smaller(cpu, start_vq + 1U) and 1U".

First, the aarch32 stub version of arm_cpu_vq_map_next_smaller, returning 0, does exactly what Coverity reports. Remove it.

Second, the aarch64 version of arm_cpu_vq_map_next_smaller has a set of asserts, but they don't cover the case in question. Further, there is a fair amount of extra arithmetic needed to convert from the 0-based zcr register, to the 1-based vq form, to the 0-based bitmap, and back again. This can be simplified by leaving the value in the 0-based form.

Finally, use test_bit to simplify the common case, where the length in the zcr registers is in fact a supported length.

Reported-by: Coverity (CID 1407217) Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-id: 20191118091414.19440-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
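A minimal standalone sketch of the simplified lookup described above, assuming a plain boolean array stands in for QEMU's vq bitmap and test_bit(); the names and types here are illustrative only, not the actual code:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_VQ 16   /* 2048-bit maximum, in 128-bit quadword units */

    /* Return the largest supported 0-based length <= start_len.  The common
     * case (the requested length is itself supported) is a single lookup,
     * mirroring the test_bit() fast path described above. */
    static uint32_t valid_len(const bool vq_map[MAX_VQ], uint32_t start_len)
    {
        if (vq_map[start_len]) {
            return start_len;
        }
        while (start_len-- > 0) {
            if (vq_map[start_len]) {
                return start_len;
            }
        }
        return 0;
    }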
2019-11-01  target/arm/kvm: host cpu: Add support for sve<N> properties  (Andrew Jones)
Allow cpu 'host' to enable SVE when it's available, unless the user chooses to disable it with the added 'sve=off' cpu property. Also give the user the ability to select vector lengths with the sve<N> properties. We don't adopt 'max' cpu's other sve property, sve-max-vq, because that property is difficult to use with KVM. That property assumes all vector lengths in the range from 1 up to and including the specified maximum length are supported, but there may be optional lengths not supported by the host in that range. With KVM one must be more specific when enabling vector lengths. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com> Message-id: 20191031142734.8590-10-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-11-01  target/arm/cpu64: max cpu: Introduce sve<N> properties  (Andrew Jones)
Introduce cpu properties to give fine control over SVE vector lengths. We introduce a property for each valid length up to the current maximum supported, which is 2048 bits. The properties are named, e.g. sve128, sve256, sve384, sve512, ..., where the number is the number of bits. See the updates to docs/arm-cpu-features.rst for a description of the semantics and for example uses.

Note: since sve-max-vq is still present and we'd like to be able to support qmp_query_cpu_model_expansion with guests launched with e.g. -cpu max,sve-max-vq=8 on their command lines, we do allow sve-max-vq and sve<N> properties to be provided at the same time, but this is not recommended, which is why sve-max-vq is not mentioned in the document. If sve-max-vq is provided then it enables all lengths smaller than and including the max and disables all lengths larger. It also has the side effect that no larger lengths may be enabled and that the max itself cannot be disabled. Smaller non-power-of-two lengths may, however, be disabled, e.g. -cpu max,sve-max-vq=4,sve384=off provides a guest the vector lengths 128, 256, and 512 bits.

This patch has been co-authored with Richard Henderson, who reworked the target/arm/cpu64.c changes in order to push all the validation and auto-enabling/disabling steps into the finalizer, resulting in a nice LOC reduction.

Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com> Reviewed-by: Beata Michalska <beata.michalska@linaro.org> Message-id: 20191031142734.8590-5-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
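A small worked illustration of the semantics described above (this is not QEMU code, just a toy bitmap): applying sve-max-vq=4 and then sve384=off leaves 128-, 256- and 512-bit vectors enabled, matching the -cpu max,sve-max-vq=4,sve384=off example.

    #include <stdio.h>

    int main(void)
    {
        unsigned vq_map = 0;
        unsigned sve_max_vq = 4;

        /* sve-max-vq=4: enable every length up to and including the max. */
        for (unsigned vq = 1; vq <= sve_max_vq; vq++) {
            vq_map |= 1u << vq;
        }
        /* sve384=off: 384 bits is vq 3, a smaller non-power-of-two length,
         * so it may be disabled. */
        vq_map &= ~(1u << 3);

        for (unsigned vq = 1; vq <= sve_max_vq; vq++) {
            if (vq_map & (1u << vq)) {
                printf("%u-bit vectors enabled\n", vq * 128);
            }
        }
        return 0;   /* prints 128, 256 and 512 */
    }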
2019-10-24  target/arm: Add arm_rebuild_hflags  (Richard Henderson)
This function assumes nothing about the current state of the cpu, and writes the computed value to env->hflags. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191023150057.25731-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24  target/arm: Hoist computation of TBFLAG_A32.VFPEN  (Richard Henderson)
There are 3 conditions that each enable this flag. M-profile always enables; A-profile with EL1 as AA64 always enables. Both of these conditions can easily be cached. The final condition relies on the FPEXC register which we are not prepared to cache. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191023150057.25731-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24  target/arm: Split arm_cpu_data_is_big_endian  (Richard Henderson)
Set TBFLAG_ANY.BE_DATA in rebuild_hflags_common_32 and rebuild_hflags_a64 instead of rebuild_hflags_common, where we do not need to re-test is_a64() nor re-compute the various inputs. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191023150057.25731-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24  target/arm: Split out rebuild_hflags_common  (Richard Henderson)
Create a function to compute the values of the TBFLAG_ANY bits that will be cached. For now, the env->hflags variable is not used, and the results are fed back to cpu_get_tb_cpu_state. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191023150057.25731-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions  (Peter Maydell)
Currently the only part of an ARMCPRegInfo which is allowed to cause a CPU exception is the access function, which returns a value indicating that some flavour of UNDEF should be generated.

For the ATS system instructions, we would like to conditionally generate exceptions as part of the writefn, because some faults during the page table walk (like external aborts) should cause an exception to be raised rather than returning a value.

There are several ways we could do this:
 * plumb the GETPC() value from the top level set_cp_reg/get_cp_reg helper functions through into the readfn and writefn hooks
 * add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC() value
 * require the ATS instructions to provide a dummy accessfn, which serves no purpose except to cause the code generation to emit TCG ops to sync the CPU state
 * add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly throwing an exception in its read/write hooks, and make the codegen sync the CPU state before calling the hooks if the flag is set

This patch opts for the last of these, as it is fairly simple to implement and doesn't require invasive changes like updating the readfn/writefn hook function prototype signature.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20190816125802.25877-2-peter.maydell@linaro.org
2019-08-16  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190816' into staging  (Peter Maydell)
target-arm queue:
 * target/arm: generate a custom MIDR for -cpu max
 * hw/misc/zynq_slcr: refactor to use standard register definition
 * Set ENET_BD_BDU in I.MX FEC controller
 * target/arm: Fix routing of singlestep exceptions
 * refactor a32/t32 decoder handling of PC
 * minor optimisations/cleanups of some a32/t32 codegen
 * target/arm/cpu64: Ensure kvm really supports aarch64=off
 * target/arm/cpu: Ensure we can use the pmu with kvm
 * target/arm: Minor cleanups preparatory to KVM SVE support

# gpg: Signature made Fri 16 Aug 2019 14:15:55 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190816: (29 commits)
  target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
  target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
  target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
  target/arm: Use ror32 instead of open-coding the operation
  target/arm: Remove redundant shift tests
  target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
  target/arm: Use tcg_gen_extract_i32 for shifter_out_im
  target/arm/kvm64: Move the get/put of fpsimd registers out
  target/arm/kvm64: Fix error returns
  target/arm/cpu: Use div-round-up to determine predicate register array size
  target/arm/helper: zcr: Add build bug next to value range assumption
  target/arm/cpu: Ensure we can use the pmu with kvm
  target/arm/cpu64: Ensure kvm really supports aarch64=off
  target/arm: Remove helper_double_saturate
  target/arm: Use unallocated_encoding for aarch32
  target/arm: Remove offset argument to gen_exception_bkpt_insn
  target/arm: Replace offset with pc in gen_exception_internal_insn
  target/arm: Replace offset with pc in gen_exception_insn
  target/arm: Replace s->pc with s->base.pc_next
  target/arm: Remove redundant s->pc & ~1
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm/cpu: Use div-round-up to determine predicate register array size  (Andrew Jones)
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of four, then we should use DIV_ROUND_UP to ensure we get an appropriate array size. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
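A self-contained illustration of the sizing rule (the struct and field names below are hypothetical, not QEMU's): a predicate register holds one bit per vector byte, i.e. ARM_MAX_VQ * 16 bits, packed into 64-bit words, and DIV_ROUND_UP keeps the array large enough for any ARM_MAX_VQ, not just multiples of four.

    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    #define ARM_MAX_VQ 16

    typedef struct PredicateRegSketch {
        /* ARM_MAX_VQ * 16 predicate bits, rounded up to whole 64-bit words:
         * plain division would drop the partial word for e.g. ARM_MAX_VQ=3
         * (48 bits -> 0 words), whereas DIV_ROUND_UP yields 1. */
        uint64_t p[DIV_ROUND_UP(ARM_MAX_VQ * 16, 64)];
    } PredicateRegSketch;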
2019-08-16  target/arm: Fix routing of singlestep exceptions  (Peter Maydell)
When generating an architectural single-step exception we were routing it to the "default exception level", which is to say the same exception level we execute at except that EL0 exceptions go to EL1. This is incorrect because the debug exception level can be configured by the guest for situations such as single stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB flags, because it is dependent on CPU state like HCR_EL2.TGE and MDCR_EL2.TDE. (That we were previously calling the arm_debug_target_el() function to determine dc->ss_same_el is itself a bug, though one that would only have manifested as incorrect syndrome information.)

Since we are out of TB flag bits unless we want to expand into the cs_base field, we share some bits with the M-profile only HANDLER and STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
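A hedged sketch of the routing rule described above, with the relevant controls reduced to booleans; this is not QEMU's arm_debug_target_el(), only the decision it encodes.

    #include <stdbool.h>

    /* Debug exceptions route to EL2 when EL2 exists, the CPU is in
     * non-secure state, and either HCR_EL2.TGE or MDCR_EL2.TDE is set;
     * otherwise they go to EL1. */
    static int debug_target_el(bool have_el2, bool secure, bool tge, bool tde)
    {
        return (have_el2 && !secure && (tge || tde)) ? 2 : 1;
    }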
2019-08-16  target/arm: generate a custom MIDR for -cpu max  (Alex Bennée)
While most features are now detected by probing the ID_* registers, kernels can (and do) use MIDR_EL1 for working out if they have to apply errata. This can trip up warnings in the kernel as it tries to work out if it should apply workarounds to features that don't actually exist in the reported CPU type. Avoid this problem by synthesising our own MIDR value. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190726113950.7499-1-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  migration: Move the VMStateDescription typedef to typedefs.h  (Markus Armbruster)
We declare incomplete struct VMStateDescription in a couple of places so we don't have to include migration/vmstate.h for the typedef. That's fine with me. However, the next commit will drop migration/vmstate.h from a massive number of compiles. Move the typedef to qemu/typedefs.h now, so I don't have to insert struct in front of VMStateDescription all over the place then. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-15-armbru@redhat.com>
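The mechanism is just C's incomplete-type rule; roughly, as sketched below (the device struct is hypothetical, shown only to illustrate the point):

    /* qemu/typedefs.h: a forward typedef is enough for pointer use ... */
    typedef struct VMStateDescription VMStateDescription;

    /* ... so a header can reference it without including migration/vmstate.h
     * and without writing "struct VMStateDescription" everywhere: */
    typedef struct SomeDeviceClass {
        const VMStateDescription *vmsd;
    } SomeDeviceClass;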
2019-07-04  target/arm: Restrict semi-hosting to TCG  (Philippe Mathieu-Daudé)
Per Peter Maydell: Semihosting hooks either SVC or HLT instructions, and inside KVM both of those go to EL1, ie to the guest, and can't be trapped to KVM. Let check_for_semihosting() return False when not running on TCG. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701194942.10092-3-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Move CPU state dumping routines to cpu.c  (Philippe Mathieu-Daudé)
Suggested-by: Samuel Ortiz <sameo@linux.intel.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701132516.26392-11-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-06-17  target/arm: Only implement doubles if the FPU supports them  (Peter Maydell)
The architecture permits FPUs which have only single-precision support, not double-precision; Cortex-M4 and Cortex-M33 are both like that. Add the necessary checks on the MVFR0 FPDP field so that we UNDEF any double-precision instructions on CPUs like this. Note that even if FPDP==0 the insns like VMOV-to/from-gpreg, VLDM/VSTM, VLDR/VSTR which take double precision registers still exist. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190614104457.24703-3-peter.maydell@linaro.org
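A standalone sketch of the check, assuming the architectural MVFR0 layout (FPDP is bits [11:8]; 0 means no double-precision support); QEMU's real ID-register helpers are structured differently.

    #include <stdbool.h>
    #include <stdint.h>

    /* Double-precision instructions should UNDEF when this returns false. */
    static bool fpu_has_doubles(uint32_t mvfr0)
    {
        uint32_t fpdp = (mvfr0 >> 8) & 0xf;   /* MVFR0.FPDP, bits [11:8] */
        return fpdp != 0;                     /* 0 = single-precision only */
    }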
2019-06-17  target/arm: Allow M-profile CPUs to disable the DSP extension via CPU property  (Peter Maydell)
Allow the DSP extension to be disabled via a CPU property for M-profile CPUs. (A and R-profile CPUs don't have this extension as a defined separate optional architecture extension, so they don't need the property.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190517174046.11146-3-peter.maydell@linaro.org
2019-06-17  target/arm: Allow VFP and Neon to be disabled via a CPU property  (Peter Maydell)
Allow VFP and Neon to be disabled via a CPU property. As with the "pmu" property, we only allow these features to be removed from CPUs which have them by default, not added to CPUs which don't have them.

The primary motivation here is to be able to optionally create Cortex-M33 CPUs with no FPU, but we provide switches for both VFP and Neon because the two interact:
 * AArch64 can't have one without the other
 * Some ID register fields only change if both are disabled

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190517174046.11146-2-peter.maydell@linaro.org
2019-06-13  target/arm: Convert VFP VMLA to decodetree  (Peter Maydell)
Convert the VFP VMLA instruction to decodetree. This is the first of the VFP 3-operand data processing instructions, so we include in this patch the code which loops over the elements for an old-style VFP vector operation. The existing code to do this looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since we are going to be converting instructions one at a time anyway we can take the opportunity to make the new loop use TCG temporaries, which means we can do that conversion one operation at a time rather than needing to do it all in one go.

We include an UNDEF check which was missing in the old code: short-vector operations (with stride or length non-zero) were deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec field does not indicate that support for short vectors is present we UNDEF the operations that would use them. (This is a change of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which previously were all incorrectly allowing short-vector operations.)

Note that the conversion fixes a bug in the old code for the case of VFP short-vector "mixed scalar/vector operations". These happen where the destination register is in a vector bank but the second operand is in a scalar bank. For example
  vmla.f64 d10, d1, d16
with length 2 stride 2 is equivalent to the pair of scalar operations
  vmla.f64 d10, d1, d16
  vmla.f64 d8, d3, d16
where the destination and first input register cycle through their vector but the second input is scalar (d16). In the old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d} as a temporary output for the multiply, which trashes the second input operand. For the fully-scalar case (where we never do a second iteration) and the fully-vector case (where the loop loads the new second input operand) this doesn't matter, but for the mixed scalar/vector case we will end up using the wrong value for later loop iterations. In the new code we use TCG temporaries and so avoid the bug. This bug is present for all the multiply-accumulate insns that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.

Note 2: the expression used to calculate the next register number in the vector bank is not in fact correct; we leave this behaviour unchanged from the old decoder and will fix this bug later in the series.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13  target/arm: Convert the VSEL instructions to decodetree  (Peter Maydell)
Convert the VSEL instructions to decodetree. We leave trans_VSEL() in translate.c for now as this allows the patch to show just the changes from the old handle_vsel(). In the old code the check for "do D16-D31 exist" was hidden in the VFP_DREG macro, and assumed that VFPv3 always implied that D16-D31 exist. In the new code we do the correct ID register test. This gives identical behaviour for most of our CPUs, and fixes previously incorrect handling for Cortex-R5F, Cortex-M4 and Cortex-M33, which all implement VFPv3 or better with only 16 double-precision registers. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster)
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
  include/hw/arm/xlnx-zynqmp.h
  hw/arm/nrf51_soc.c
  hw/arm/msf2-soc.c
  block/qcow2-refcount.c
  block/qcow2-cluster.c
  block/qcow2-cache.c
  target/arm/cpu.h
  target/lm32/cpu.h
  target/m68k/cpu.h
  target/mips/cpu.h
  target/moxie/cpu.h
  target/nios2/cpu.h
  target/openrisc/cpu.h
  target/riscv/cpu.h
  target/tilegx/cpu.h
  target/tricore/cpu.h
  target/unicore32/cpu.h
  target/xtensa/cpu.h;
 bsd-user/main.c and net/tap-bsd.c fixed up]
2019-06-10  cpu: Remove CPU_COMMON  (Richard Henderson)
This macro is now always empty, so remove it. This leaves the entire contents of CPUArchState under the control of the guest architecture. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Introduce CPUNegativeOffsetState  (Richard Henderson)
Nothing in there so far, but all of the plumbing done within the target ArchCPU state. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Move ENV_OFFSET to exec/gen-icount.h  (Richard Henderson)
Now that we have ArchCPU, we can define this generically, in the one place that needs it. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  target/arm: Use env_cpu, env_archcpu  (Richard Henderson)
Cleanup in the boilerplate that each target must define. Replace arm_env_get_cpu with env_archcpu. The combination CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with; use env_cpu now. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
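A self-contained sketch of how such generic accessors can work, using stand-in types (the real definitions live in QEMU's headers and may differ in detail): because the CPUArchState is embedded in the ArchCPU struct, container_of() recovers the CPU object from an env pointer.

    #include <stddef.h>

    #define container_of(ptr, type, member) \
        ((type *)(void *)((char *)(ptr) - offsetof(type, member)))

    typedef struct CPUState { int dummy; } CPUState;        /* stand-in */
    typedef struct CPUARMState { int dummy; } CPUARMState;  /* stand-in */

    typedef struct ARMCPU {
        CPUState parent_obj;     /* generic CPU state comes first */
        CPUARMState env;         /* architecture-specific state embedded here */
    } ARMCPU;

    /* env_archcpu(): recover the containing ARMCPU from a CPUARMState. */
    static inline ARMCPU *env_archcpu(CPUARMState *env)
    {
        return container_of(env, ARMCPU, env);
    }

    /* env_cpu(): and from there, the generic CPUState. */
    static inline CPUState *env_cpu(CPUARMState *env)
    {
        return &env_archcpu(env)->parent_obj;
    }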
2019-06-10  cpu: Replace ENV_GET_CPU with env_cpu  (Richard Henderson)
Now that we have both ArchCPU and CPUArchState, we can define this generically instead of via macro in each target's cpu.h. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Define ArchCPU  (Richard Henderson)
For all targets, do this just before including exec/cpu-all.h. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Define CPUArchState with typedef  (Richard Henderson)
For all targets, do this just before including exec/cpu-all.h. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  tcg: Split out target/arch/cpu-param.h  (Richard Henderson)
For all targets, move TARGET_LONG_BITS, TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS, TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES into this new file, and include the new file from exec/cpu-defs.h. This removes the somewhat odd requirement that target/arch/cpu.h defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the bulk of the includes within target/arch/cpu.h to the top. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22  target/arm: Implement ARMv8.5-RNG  (Richard Henderson)
Use the newly introduced infrastructure for guest random numbers. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22  target/arm: Put all PAC keys into a structure  (Richard Henderson)
This allows us to use a single syscall to initialize them all. Reviewed-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-07  target/arm: Implement XPSR GE bits  (Peter Maydell)
In the M-profile architecture, if the CPU implements the DSP extension then the XPSR has GE bits, in the same way as the A-profile CPSR. When we added DSP extension support we forgot to add support for reading and writing the GE bits, which are stored in env->GE. We did put in the code to add XPSR_GE to the mask of bits to update in the v7m_msr helper, but forgot it in v7m_mrs. We also must not allow the XPSR we pull off the stack on exception return to set the nonexistent GE bits.

Correct these errors:
 * read and write env->GE in xpsr_read() and xpsr_write()
 * only set GE bits on exception return if DSP present
 * read GE bits for MRS if DSP present

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190430131439.25251-5-peter.maydell@linaro.org
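A hedged sketch of the last two fixes, with the XPSR GE field at its architectural position (bits [19:16]) and everything else elided; the function names are illustrative, not QEMU's.

    #include <stdbool.h>
    #include <stdint.h>

    #define XPSR_GE (0xfu << 16)

    /* MRS/xpsr_read() direction: only expose GE when DSP is implemented. */
    static uint32_t xpsr_ge_bits(uint32_t ge, bool have_dsp)
    {
        return have_dsp ? (ge << 16) & XPSR_GE : 0;
    }

    /* Exception return: a stacked XPSR must not set GE bits that the CPU
     * does not implement. */
    static uint32_t sanitize_stacked_xpsr(uint32_t xpsr, bool have_dsp)
    {
        return have_dsp ? xpsr : (xpsr & ~XPSR_GE);
    }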
2019-05-07  arm: Allow system registers for KVM guests to be changed by QEMU code  (Peter Maydell)
At the moment the Arm implementations of kvm_arch_{get,put}_registers() don't support having QEMU change the values of system registers (aka coprocessor registers for AArch32). This is because although kvm_arch_get_registers() calls write_list_to_cpustate() to update the CPU state struct fields (so QEMU code can read the values in the usual way), kvm_arch_put_registers() does not call write_cpustate_to_list(), meaning that any changes to the CPU state struct fields will not be passed back to KVM.

The rationale for this design is documented in a comment in the AArch32 kvm_arch_put_registers() -- writing the values in the cpregs list into the CPU state struct is "lossy" because the write of a register might not succeed, and so if we blindly copy the CPU state values back again we will incorrectly change register values for the guest. The assumption was that no QEMU code would need to write to the registers.

However, when we implemented debug support for KVM guests, we broke that assumption: the code to handle "set the guest up to take a breakpoint exception" does so by updating various guest registers including ESR_EL1.

Support this by making kvm_arch_put_registers() synchronize CPU state back into the list. We sync only those registers where the initial write succeeds, which should be sufficient.

This commit is the same as commit 823e1b3818f9b10b824ddc which we had to revert in commit 942f99c825fc94c8b1a4, except that the bug which was preventing EDK2 guest firmware running has been fixed: kvm_arm_reset_vcpu() now calls write_list_to_cpustate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Eric Auger <eric.auger@redhat.com>
2019-04-29  target/arm: Implement VLSTM for v7M CPUs with an FPU  (Peter Maydell)
Implement the VLSTM instruction for v7M for the FPU present case. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
2019-04-29  target/arm: Implement M-profile lazy FP state preservation  (Peter Maydell)
The M-profile architecture floating point system supports lazy FP state preservation, where FP registers are not pushed to the stack when an exception occurs but are instead only saved if and when the first FP instruction in the exception handler is executed. Implement this in QEMU, corresponding to the check of LSPACT in the pseudocode ExecuteFPCheck(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
2019-04-29  target/arm: New function armv7m_nvic_set_pending_lazyfp()  (Peter Maydell)
In the v7M architecture, if an exception is generated in the process of doing the lazy stacking of FP registers, the handling of possible escalation to HardFault is treated differently to the normal approach: it works based on the saved information about exception readiness that was stored in the FPCCR when the stack frame was created. Provide a new function armv7m_nvic_set_pending_lazyfp() which pends exceptions during lazy stacking, and implements this logic. This corresponds to the pseudocode TakePreserveFPException(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
2019-04-29  target/arm: New helper function arm_v7m_mmu_idx_all()  (Peter Maydell)
Add a new helper function which returns the MMU index to use for v7M, where the caller specifies all of the security state, privilege level and whether the execution priority is negative, and reimplement the existing arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it. We are going to need this for the lazy-FP-stacking code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
2019-04-29  target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set  (Peter Maydell)
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point context preservation is enabled. Before executing any floating-point instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits indicate that there is no active floating point context then we must create a new context (by initializing FPSCR and setting FPCA/SFPA to indicate that the context is now active). In the pseudocode this is handled by ExecuteFPCheck(). Implement this with a new TB flag which tracks whether we need to create a new FP context. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
2019-04-29  target/arm: Set FPCCR.S when executing M-profile floating point insns  (Peter Maydell)
The M-profile FPCCR.S bit indicates the security status of the floating point context. In the pseudocode ExecuteFPCheck() function it is unconditionally set to match the current security state whenever a floating point instruction is executed. Implement this by adding a new TB flag which tracks whether FPCCR.S is different from the current security state, so that we only need to emit the code to update it in the less-common case when it is not already set correctly. Note that we will add the handling for the other work done by ExecuteFPCheck() in later commits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
2019-04-29  target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags  (Peter Maydell)
We are close to running out of TB flags for AArch32; we could start using the cs_base word, but before we do that we can economise on our usage by sharing the same bits for the VFP VECSTRIDE field and the XScale XSCALE_CPAR field. This works because no XScale CPU ever had VFP. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
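The trick in miniature (bit positions here are made up for illustration; see the actual cpu.h for the real field layout): since no XScale CPU implements VFP, one two-bit field can carry either value, chosen per CPU.

    #include <stdbool.h>
    #include <stdint.h>

    #define TBFLAG_SHARED_SHIFT 4
    #define TBFLAG_SHARED_MASK  (0x3u << TBFLAG_SHARED_SHIFT)

    static uint32_t set_shared_field(uint32_t flags, bool is_xscale,
                                     uint32_t vecstride, uint32_t cpar)
    {
        /* Both values fit in two bits and are never needed at once. */
        uint32_t val = is_xscale ? cpar : vecstride;
        return (flags & ~TBFLAG_SHARED_MASK) | (val << TBFLAG_SHARED_SHIFT);
    }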
2019-04-29  target/arm: Move NS TBFLAG from bit 19 to bit 6  (Peter Maydell)
Move the NS TBFLAG down from bit 19 to bit 6, which has not been used since commit c1e3781090b9d36c60 in 2015, when we started passing the entire MMU index in the TB flags rather than just a 'privilege level' bit. This rearrangement is not strictly necessary, but means that we can put M-profile-only bits next to each other rather than scattered across the flag word. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
2019-04-29  target/arm: Implement v7m_update_fpccr()  (Peter Maydell)
Implement the code which updates the FPCCR register on an exception entry where we are going to use lazy FP stacking. We have to defer to the NVIC to determine whether the various exceptions are currently ready or not. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
2019-04-29  target/arm: Implement dummy versions of M-profile FP-related registers  (Peter Maydell)
The M-profile floating point support has three associated config registers: FPCAR, FPCCR and FPDSCR. It also makes the registers CPACR and NSACR have behaviour other than reads-as-zero. Add support for all of these as simple reads-as-written registers. We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which has a mix of banked and unbanked bits.

Note that we don't share storage with the A-profile cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour is quite similar, for two reasons:
 * the M profile CPACR is banked between security states
 * it preserves the invariant that M profile uses no state inside the cp15 substruct

Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
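A simplified sketch of the banked/unbanked mix (the bit masks are placeholders, not the architectural FPCCR layout): a read composes the shared bits with the bank selected by the current security state.

    #include <stdbool.h>
    #include <stdint.h>

    #define FPCCR_BANKED_MASK   0x0000ffffu   /* placeholder split */
    #define FPCCR_UNBANKED_MASK 0xffff0000u   /* placeholder split */

    static uint32_t fpccr_read(const uint32_t banked[2], uint32_t unbanked,
                               bool secure)
    {
        return (unbanked & FPCCR_UNBANKED_MASK) |
               (banked[secure ? 1 : 0] & FPCCR_BANKED_MASK);
    }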
2019-04-18  qom/cpu: Simplify how CPUClass:cpu_dump_state() prints  (Markus Armbruster)
CPUClass method dump_state() takes an fprintf()-like callback and a FILE * to pass to it. Most callers pass fprintf() and stderr. log_cpu_state() passes fprintf() and qemu_log_file. hmp_info_registers() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The callback gets passed around a lot, which is tiresome. The type-punning around monitor_fprintf() is ugly.

Drop the callback, and call qemu_fprintf() instead. This also gets rid of the type-punning, since qemu_fprintf() takes NULL instead of the current monitor cast to FILE *.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190417191805.28198-15-armbru@redhat.com>
2019-04-18  target: Simplify how the TARGET_cpu_list() print  (Markus Armbruster)
The various TARGET_cpu_list() take an fprintf()-like callback and a FILE * to pass to it. Their callers (vl.c's main() via list_cpus(), bsd-user/main.c's main(), linux-user/main.c's main()) all pass fprintf() and stdout. Thus, the flexibility provided by the (rather tiresome) indirection isn't actually used. Drop the callback, and call qemu_printf() instead. Calling printf() would also work, but would make the code unsuitable for monitor context without making it simpler. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190417191805.28198-10-armbru@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-03-25  target/arm: make pmccntr_op_start/finish static  (Andrew Jones)
These functions are not used outside helper.c Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190322162333.17159-4-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Implement ARMv8.5-FRINT  (Richard Henderson)
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-11-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Implement ARMv8.5-CondM  (Richard Henderson)
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-9-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Implement ARMv8.4-CondM  (Richard Henderson)
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-8-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: fixed up block comment style] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>