path: root/target/arm/cpu.h

2022-06-08  target/arm: Rename sve_zcr_len_for_el to sve_vqm1_for_el  (Richard Henderson)

This will be used for both Normal and Streaming SVE, and the value does not necessarily come from ZCR_ELx. While we're at it, emphasize the units in which the value is returned.

Patch produced by
    git grep -l sve_zcr_len_for_el | \
        xargs -n1 sed -i 's/sve_zcr_len_for_el/sve_vqm1_for_el/g'
and then adding a function comment.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-06-08  target/arm: Use uint32_t instead of bitmap for sve vq's  (Richard Henderson)

The bitmap need only hold 15 bits; bitmap is over-complicated. We can simplify operations quite a bit with plain logical ops.

The introduction of SVE_VQ_POW2_MAP eliminates the need for looping in order to search for powers of two. Simply perform the logical ops and use count leading or trailing zeros as required to find the result.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
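
As a purely illustrative sketch of this approach (the helper name and exact logic below are invented for illustration, not taken from QEMU): with a 16-bit map in which bit (vq - 1) set means "vector quantum vq is supported", finding the largest supported length below a limit needs only one mask and one count-leading-zeros:

    /* Hypothetical example of the uint32_t-map style described above. */
    #include <stdint.h>

    static int largest_supported_vq(uint32_t vq_map, int max_vq)
    {
        uint32_t mask = vq_map & ((1u << max_vq) - 1);  /* keep vq <= max_vq */
        if (mask == 0) {
            return 0;                      /* no supported length in range */
        }
        return 32 - __builtin_clz(mask);   /* highest set bit is (vq - 1) */
    }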

2022-06-08  linux-user/aarch64: Introduce sve_vq  (Richard Henderson)

Add an interface function to extract the digested vector length rather than the raw zcr_el[1] value. This fixes an incorrect return from do_prctl_set_vl where we didn't take into account the set of vector lengths supported by the cpu.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-06-08  target/arm: Rename TBFLAG_A64 ZCR_LEN to VL  (Richard Henderson)

With SME, the vector length does not only come from ZCR_ELx. Comment that this is either NVL or SVL, like the pseudocode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-06-08  target/arm: Implement FEAT_DoubleFault  (Peter Maydell)

The FEAT_DoubleFault extension adds the following:
* All external aborts on instruction fetches and translation table walks for instruction fetches must be synchronous. For QEMU this is already true.
* SCR_EL3 has a new bit NMEA which disables the masking of SError interrupts by PSTATE.A when the SError interrupt is taken to EL3. For QEMU we only need to make the bit writable, because we have no sources of SError interrupts.
* SCR_EL3 has a new bit EASE which causes synchronous external aborts taken to EL3 to be taken at the same entry point as SError. (Note that this does not mean that they are SErrors for purposes of PSTATE.A masking or that the syndrome register reports them as SErrors: it just means that the vector offset is different.)
* The existing SCTLR_EL3.IESB has an effective value of 1 when SCR_EL3.NMEA is 1. For QEMU this is a no-op because we don't need different behaviour based on IESB (we don't need to do anything to ensure that error exceptions are synchronized).

So for QEMU the things we need to change are:
* Make SCR_EL3.{NMEA,EASE} writable
* When taking a synchronous external abort at EL3, adjust the vector entry point if SCR_EL3.EASE is set
* Advertise the feature in the ID registers

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531151431.949322-1-peter.maydell@linaro.org
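
For the vector-entry-point adjustment in the list above, a minimal sketch of the idea (the helper is hypothetical, and the offsets assume the usual AArch64 vector layout with the synchronous slot at +0x0 and the SError slot at +0x180 within the selected 0x200-byte block):

    /* Sketch only: offset within the vector block used for a synchronous
     * external abort taken to EL3, depending on SCR_EL3.EASE.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t el3_sync_ext_abort_offset(bool scr_ease)
    {
        return scr_ease ? 0x180 : 0x0;  /* SError slot vs. synchronous slot */
    }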

2022-05-19  target/arm: Use FIELD definitions for CPACR, CPTR_ELx  (Richard Henderson)

We had a few CPTR_* bits defined, but missed quite a few. Complete all of the fields up to ARMv9.2. Use FIELD_EX64 instead of manual extract32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
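
The pattern being adopted is QEMU's registerfields.h style; roughly as below (the particular field shown is just an example and the helper is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>
    #include "hw/registerfields.h"   /* FIELD(), FIELD_EX64() */

    /* Declares R_CPTR_EL2_TFP_SHIFT/_LENGTH/_MASK for bit [10]. */
    FIELD(CPTR_EL2, TFP, 10, 1)

    static bool cptr_el2_traps_fp(uint64_t cptr_el2)
    {
        /* replaces an open-coded extract32(cptr_el2, 10, 1) */
        return FIELD_EX64(cptr_el2, CPTR_EL2, TFP);
    }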

2022-05-19  target/arm: Enable FEAT_HCX for -cpu max  (Richard Henderson)

This feature adds a new register, HCRX_EL2, which controls many of the newer AArch64 features. So far the register is effectively RES0, because none of the new features are done.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-05-19  target/arm: Make number of counters in PMCR follow the CPU  (Peter Maydell)

Currently we give all the v7-and-up CPUs a PMU with 4 counters. This means that we don't provide the 6 counters that are required by the Arm BSA (Base System Architecture) specification if the CPU supports the Virtualization extensions.

Instead of having a single PMCR_NUM_COUNTERS, make each CPU type specify the PMCR reset value (obtained from the appropriate TRM), and use the 'N' field of that value to define the number of counters provided.

This means that we now supply 6 counters instead of 4 for: Cortex-A9, Cortex-A15, Cortex-A53, Cortex-A57, Cortex-A72, Cortex-A76, Neoverse-N1, '-cpu max'

This CPU goes from 4 to 8 counters: A64FX

These CPUs remain with 4 counters: Cortex-A7, Cortex-A8

This CPU goes down from 4 to 3 counters: Cortex-R5

Note that because we now use the PMCR reset value of the specific implementation, we no longer set the LC bit out of reset. This has an UNKNOWN value out of reset for all cores with any AArch32 support, so guest software should be setting it anyway if it wants it.

This change was originally landed in commit f7fb73b8cdd3f7 (during the 6.0 release cycle) but was then reverted by commit 21c2dd77a6aa517 before that release because it did not work with KVM. This version fixes that by creating the scratch vCPU in kvm_arm_get_host_cpu_features() with the KVM_ARM_VCPU_PMU_V3 feature if KVM supports it, and then only asking KVM for the PMCR_EL0 value if the vCPU has a PMU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Added the correct value for a64fx]
Message-id: 20220513122852.4063586-1-peter.maydell@linaro.org
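
A hedged sketch of the 'N'-field extraction described above (the function is hypothetical; PMCR.N occupies bits [15:11]):

    #include <stdint.h>

    /* Number of event counters advertised by a given PMCR reset value. */
    static unsigned pmcr_num_counters(uint64_t pmcr_reset_value)
    {
        return (pmcr_reset_value >> 11) & 0x1f;   /* PMCR.N, bits [15:11] */
    }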

2022-05-19  hw/intc/arm_gicv3: Use correct number of priority bits for the CPU  (Peter Maydell)

Make the GICv3 set its number of bits of physical priority from the implementation-specific value provided in the CPU state struct, in the same way we already do for virtual priority bits. Because this would be a migration compatibility break, we provide a property force-8-bit-prio which is enabled for 7.0 and earlier versioned board models to retain the legacy "always use 8 bits" behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-6-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-5-peter.maydell@linaro.org

2022-05-19  target/arm: Implement FEAT_IDST  (Peter Maydell)

The Armv8.4 feature FEAT_IDST specifies that exceptions generated by read accesses to the feature ID space should report a syndrome code of 0x18 (EC_SYSTEMREGISTERTRAP) rather than 0x00 (EC_UNCATEGORIZED). The feature ID space is defined to be:
    op0 == 3, op1 == {0,1,3}, CRn == 0, CRm == {0-7}, op2 == {0-7}

In our implementation we might return the EC_UNCATEGORIZED syndrome value for a system register access in four cases:
* no reginfo struct in the hashtable
* cp_access_ok() fails (ie ri->access doesn't permit the access)
* ri->accessfn returns CP_ACCESS_TRAP_UNCATEGORIZED at runtime
* ri->type includes ARM_CP_RAISES_EXC, and the readfn raises an UNDEF exception at runtime

We have very few regdefs that set ARM_CP_RAISES_EXC, and none of them are in the feature ID space. (In the unlikely event that any are added in future they would need to take care of setting the correct syndrome themselves.) This patch deals with the other three cases, and enables FEAT_IDST for AArch64 -cpu max.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509155457.3560724-1-peter.maydell@linaro.org
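
The feature ID space definition quoted above maps onto a simple predicate; a sketch with hypothetical names:

    #include <stdbool.h>

    /* True if an AArch64 system-register encoding falls in the feature ID
     * space: op0 == 3, op1 in {0,1,3}, CRn == 0, CRm in {0..7}, op2 in {0..7}.
     */
    static bool in_feature_id_space(int op0, int op1, int crn, int crm, int op2)
    {
        return op0 == 3 &&
               (op1 == 0 || op1 == 1 || op1 == 3) &&
               crn == 0 &&
               crm >= 0 && crm <= 7 &&
               op2 >= 0 && op2 <= 7;
    }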

2022-05-19  target/arm: Implement FEAT_S2FWB  (Peter Maydell)

Implement the handling of FEAT_S2FWB; the meat of this is in the new combined_attrs_fwb() function which combines S1 and S2 attributes when HCR_EL2.FWB is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-4-peter.maydell@linaro.org

2022-05-09  target/arm: Enable FEAT_CSV2_2 for -cpu max  (Richard Henderson)

There is no branch prediction in TCG, therefore there is no need to actually include the context number into the predictor. Therefore all we need to do is add the state for SCXTNUM_ELx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-05-09  target/arm: Implement virtual SError exceptions  (Richard Henderson)

Virtual SError exceptions are raised by setting HCR_EL2.VSE, and are routed to EL1 just like other virtual exceptions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-05-09  target/arm: Add minimal RAS registers  (Richard Henderson)

Add only the system registers required to implement zero error records. This means that all values for ERRSELR are out of range, which means that it and all of the indexed error record registers need not be implemented.

Add the EL2 registers required for injecting virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-05-05  target/arm: Add isar_feature_{aa64,any}_ras  (Richard Henderson)

Add the aa64 predicate for detecting RAS support from id registers. We already have the aa32 version from the M-profile work. Add the 'any' predicate for testing both aa64 and aa32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
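
These predicates follow the existing isar_feature_* pattern in cpu.h, which reads a field out of a cached ID register; approximately as below (a sketch, not necessarily the exact QEMU definition):

    static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
    {
        /* ID_AA64PFR0_EL1.RAS != 0 means some level of RAS is implemented. */
        return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
    }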

2022-05-05  target/arm: Add isar predicates for FEAT_Debugv8p2  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-05-05  target/arm: Split out cpregs.h  (Richard Henderson)

Move ARMCPRegInfo and all related declarations to a new internal header, out of the public cpu.h.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Remove fpexc32_access  (Richard Henderson)

This function is incorrect in that it does not properly consider CPTR_EL2.FPEN. We've already got another mechanism for raising an FPU access trap: ARM_CP_FPU, so use that instead. Remove CP_ACCESS_TRAP_FP_EL{2,3}, which becomes unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Change CPUArchState.thumb to bool  (Richard Henderson)

Bool is a more appropriate type for this value. Adjust the assignments to use true/false.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Change CPUArchState.aarch64 to bool  (Richard Henderson)

Bool is a more appropriate type for this value. Adjust the assignments to use true/false.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Update SCTLR bits to ARMv9.2  (Richard Henderson)

Update SCTLR_ELx fields per ARM DDI0487 H.a.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Update SCR_EL3 bits to ARMv8.8  (Richard Henderson)

Update SCR_EL3 fields per ARM DDI0487 H.a.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-22  target/arm: Update ISAR fields for ARMv8.8  (Richard Henderson)

Update isar fields per ARM DDI0487 H.a.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-04-06  Move CPU softfloat unions to cpu-float.h  (Marc-André Lureau)

The types are no longer used in bswap.h since commit f930224fffe ("bswap.h: Remove unused float-access functions"), so there isn't much sense in keeping them there and having a dependency on fpu/.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220323155743.1585078-29-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2022-04-06  Replace TARGET_WORDS_BIGENDIAN  (Marc-André Lureau)

Convert the TARGET_WORDS_BIGENDIAN macro, similarly to what was done with HOST_BIG_ENDIAN. The new TARGET_BIG_ENDIAN macro is either 0 or 1, and thus should always be defined to prevent misuse.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Suggested-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220323155743.1585078-8-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
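
The practical difference is the guard style; a sketch of before and after (the guarded code is hypothetical):

    /* Old style: defined-or-not, so a misspelled name silently behaves
     * as "not big-endian".
     */
    #ifdef TARGET_WORDS_BIGENDIAN
        /* big-endian-target code */
    #endif

    /* New style: always defined to 0 or 1, usable in plain C expressions
     * as well as #if, and a missing or misspelled macro can be flagged
     * (for example by -Wundef).
     */
    #if TARGET_BIG_ENDIAN
        /* big-endian-target code */
    #endif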

2022-04-06  Replace config-time define HOST_WORDS_BIGENDIAN  (Marc-André Lureau)

Replace a config-time define with a compile-time conditional define (compatible with clang and gcc) that must be declared prior to its usage. This avoids having a global configure-time define, and also prevents bad usage if the config header wasn't included before. This can help to make some code independent from qemu too.

gcc supports __BYTE_ORDER__ from about 4.6 and clang from 3.2.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[ For the s390x parts I'm involved in ]
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220323155743.1585078-7-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
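
A minimal sketch of such a compile-time definition (the real QEMU definition lives in its common headers and may differ in detail):

    /* Derive host endianness from the compiler rather than from configure. */
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    #define HOST_BIG_ENDIAN 1
    #else
    #define HOST_BIG_ENDIAN 0
    #endif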

2022-03-18  target/arm: Make rvbar settable after realize  (Edgar E. Iglesias)

Make the rvbar property settable after realize. This is done in preparation for modelling the ZynqMP's runtime-configurable rvbar.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20220316164645.2303510-3-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-03-08  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20220307' into staging  (Peter Maydell)

target-arm queue:
* cleanups of qemu_oom_check() and qemu_memalign()
* target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
* target/arm/translate-neon: Simplify align field check for VLD3
* GICv3 ITS: add more trace events
* GICv3 ITS: implement 8-byte accesses properly
* GICv3: fix minor issues with some trace/log messages
* ui/cocoa: Use the standard about panel
* target/arm: Provide cpu property for controling FEAT_LPA2
* hw/arm/virt: Disable LPA2 for -machine virt-6.2

# gpg: Signature made Mon 07 Mar 2022 16:46:06 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20220307:
  hw/arm/virt: Disable LPA2 for -machine virt-6.2
  target/arm: Provide cpu property for controling FEAT_LPA2
  ui/cocoa: Use the standard about panel
  hw/intc/arm_gicv3_cpuif: Fix register names in ICV_HPPIR read trace event
  hw/intc/arm_gicv3: Fix missing spaces in error log messages
  hw/intc/arm_gicv3: Specify valid and impl in MemoryRegionOps
  hw/intc/arm_gicv3_its: Add trace events for table reads and writes
  hw/intc/arm_gicv3_its: Add trace events for commands
  target/arm/translate-neon: Simplify align field check for VLD3
  target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
  osdep: Move memalign-related functions to their own header
  util: Put qemu_vfree() in memalign.c
  util: Use meson checks for valloc() and memalign() presence
  util: Share qemu_try_memalign() implementation between POSIX and Windows
  meson.build: Don't misdetect posix_memalign() on Windows
  util: Return valid allocation for qemu_try_memalign() with zero size
  util: Unify implementations of qemu_memalign()
  util: Make qemu_oom_check() a static function

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-03-07  target/arm: Provide cpu property for controling FEAT_LPA2  (Richard Henderson)

There is a Linux kernel bug present until v5.12 that prevents booting with FEAT_LPA2 enabled. As a workaround for TCG, allow the feature to be disabled from -cpu max. Since this kernel bug is present in the Fedora 31 image that we test in avocado, disable lpa2 on the command-line.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
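
A hedged usage example, assuming the property is spelled 'lpa2' as the message suggests (the rest of the command line is illustrative):

    qemu-system-aarch64 -M virt -cpu max,lpa2=off ...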

2022-03-06  target: Use ArchCPU as interface to target CPU  (Philippe Mathieu-Daudé)

ArchCPU is our interface with target-specific code. Use it as a forward-declared opaque pointer (abstract type), having its structure defined by each target.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-15-f4bug@amsat.org>

2022-03-06  target: Introduce and use OBJECT_DECLARE_CPU_TYPE() macro  (Philippe Mathieu-Daudé)

Replace the boilerplate code to declare CPU QOM types and macros, and forward-declare the CPU instance type.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-14-f4bug@amsat.org>

2022-03-06  target: Use CPUArchState as interface to target-specific CPU state  (Philippe Mathieu-Daudé)

While CPUState is our interface with generic code, CPUArchState is our interface with target-specific code. Use CPUArchState as an abstract type, defined by each target.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-13-f4bug@amsat.org>

2022-03-02  target/arm: Implement FEAT_LPA2  (Richard Henderson)

This feature widens physical addresses (and intermediate physical addresses for 2-stage translation) from 48 to 52 bits, when using 4k or 16k pages.

This introduces the DS bit to TCR_ELx, which is RES0 unless the page size is enabled and supports LPA2, resulting in the effective value of DS for a given table walk. The DS bit changes the format of the page table descriptor slightly, moving the PS field out to TCR so that all pages have the same sharability and repurposing those bits of the page table descriptor for the highest bits of the output address.

Do not yet enable FEAT_LPA2; we need extra plumbing to avoid tickling an old kernel bug.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-03-02  target/arm: Implement FEAT_LVA  (Richard Henderson)

This feature is relatively small, as it applies only to 64k pages and thus requires no additional changes to the table descriptor walking algorithm, only a change to the minimum TSZ (which is the inverse of the maximum virtual address space size).

Note that this feature widens VBAR_ELx, but we already treat the register as being 64 bits wide.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-01-20  hw/arm/virt: KVM: Enable PAuth when supported by the host  (Marc Zyngier)

Add basic support for Pointer Authentication when running a KVM guest on a host that supports it, loosely based on the SVE support. Although the feature is enabled by default when the host advertises it, it is possible to disable it by setting the 'pauth=off' CPU property.

The 'pauth' comment is removed from cpu-features.rst, as it is now common to both TCG and KVM.

Tested on an Apple M1 running 5.16-rc6.

Cc: Eric Auger <eric.auger@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220107150154.2490308-1-maz@kernel.org
[PMM: fixed indentation]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
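
A usage example based on the property named in the message (the rest of the command line is illustrative):

    qemu-system-aarch64 -M virt -accel kvm -cpu host,pauth=off ...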

2021-09-21  include/exec: Move cpu_signal_handler declaration  (Richard Henderson)

There is nothing target specific about this. The implementation is host specific, but the declaration is 100% common.

Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-09-21  target/arm: Add TB flag for "MVE insns not predicated"  (Peter Maydell)

Our current codegen for MVE always calls out to helper functions, because some byte lanes might be predicated. The common case is that in fact there is no predication active and all lanes should be updated together, so we can produce better code by detecting that and using the TCG generic vector infrastructure.

Add a TB flag that is set when we can guarantee that there is no active MVE predication, and a bool in the DisasContext. Subsequent patches will use this flag to generate improved code for some instructions.

In most cases when the predication state changes we simply end the TB after that instruction. For the code called from vfp_access_check() that handles lazy state preservation and creating a new FP context, we can usually avoid having to try to end the TB because luckily the new value of the flag following the register changes in those sequences doesn't depend on any runtime decisions. We do have to end the TB if the guest has enabled lazy FP state preservation but not automatic state preservation, but this is an odd corner case that is not going to be common in real-world code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org

2021-09-21  hvf: arm: Implement -cpu host  (Peter Maydell)

Now that we have working system register sync, we push more target CPU properties into the virtual machine. That might be useful in some situations, but is not the typical case that users want.

So let's add a -cpu host option that allows them to explicitly pass all CPU capabilities of their host CPU into the guest.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-7-agraf@csgraf.de
[PMM: drop unnecessary #include line from .h file]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-09-14  target/arm: Restrict cpu_exec_interrupt() handler to sysemu  (Philippe Mathieu-Daudé)

Restrict cpu_exec_interrupt() and its callees to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-8-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-09-13  target/arm: Take an exception if PSTATE.IL is set  (Peter Maydell)

In v8A, the PSTATE.IL bit is set for various kinds of illegal exception return or mode-change attempts. We already set PSTATE.IL (or its AArch32 equivalent CPSR.IL) in all those cases, but we weren't implementing the part of the behaviour where attempting to execute an instruction with PSTATE.IL takes an immediate exception with an appropriate syndrome value.

Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code to take an exception instead of whatever the instruction would have been.

PSTATE.IL and CPSR.IL change only on exception entry, attempted exception exit, and various AArch32 mode changes via cpsr_write(). These places generally already rebuild the hflags, so the only place we need an extra rebuild_hflags call is in the illegal-return codepath of the AArch64 exception_return helper.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[rth: Added missing returns; set IL bit in syndrome]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-08-26  target/arm: Do hflags rebuild in cpsr_write()  (Peter Maydell)

Currently we rely on all the callsites of cpsr_write() to rebuild the cached hflags if they change one of the CPSR bits which we use as a TB flag and cache in hflags. This is a bit awkward when we want to change the set of CPSR bits that we cache, because it means we need to re-audit all the cpsr_write() callsites to see which flags they are writing and whether they now need to rebuild the hflags.

Switch instead to making cpsr_write() call arm_rebuild_hflags() itself if one of the bits being changed is a cached bit.

We don't do the rebuild for the CPSRWriteRaw write type, because that kind of write is generally doing something special anyway. For the CPSRWriteRaw callsites in the KVM code and inbound migration we definitely don't want to recalculate the hflags; the callsites in boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves anyway because of other CPU state changes they make.

This allows us to drop explicit arm_rebuild_hflags() calls in a couple of places where the only reason we needed to call it was the CPSR write.

This fixes a bug where we were incorrectly failing to rebuild hflags in the code path for a gdbstub write to CPSR, which meant that you could make QEMU assert by breaking into a running guest, altering the CPSR to change the value of, for example, CPSR.E, and then continuing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
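
A minimal sketch of the rule described above (the helper name and the exact set of cached bits are invented for illustration, and the snippet assumes the declarations from target/arm/cpu.h; the real logic lives inside cpsr_write() itself):

    /* Rebuild hflags only for non-raw writes that touch a cached CPSR bit. */
    static void cpsr_write_maybe_rebuild(CPUARMState *env, uint32_t mask,
                                         CPSRWriteType write_type)
    {
        const uint32_t cached_bits = CPSR_T | CPSR_E;   /* example subset only */

        if (write_type != CPSRWriteRaw && (mask & cached_bits)) {
            arm_rebuild_hflags(env);
        }
    }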

2021-08-26  target/arm: Implement HSTR.TJDBX  (Peter Maydell)

In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1 access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so don't trap for v8A CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-3-peter.maydell@linaro.org

2021-08-26  target/arm: Implement HSTR.TTEE  (Peter Maydell)

In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses to the Thumb2EE TEECR and TEEHBR registers to be trapped to the hypervisor. Implement these traps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-2-peter.maydell@linaro.org

2021-08-26  target/arm/cpu: Introduce sve_vq_supported bitmap  (Andrew Jones)

Allow CPUs that support SVE to specify which SVE vector lengths they support by setting them in this bitmap. Currently only the 'max' and 'host' CPU types support SVE, and 'host' requires KVM, which obtains its supported bitmap from the host. So, we only need to initialize the bitmap for 'max' with TCG. And, since 'max' should support all SVE vector lengths, we simply fill the bitmap. Future CPU types may have less trivial maps though.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-08-25  target/arm: Implement M-profile trapping on division by zero  (Peter Maydell)

Unlike A-profile, for M-profile the UDIV and SDIV insns can be configured to raise an exception on division by zero, using the CCR DIV_0_TRP bit. Implement support for setting this bit by making the helper functions raise the appropriate exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org

2021-07-27  target/arm: Add sve-default-vector-length cpu property  (Richard Henderson)

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length under the real linux kernel. We have no way of passing along a real default across exec like the kernel can, but this is a decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case, added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
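
A hedged usage example, assuming the value is given in bytes as with the kernel interface it mirrors (so 64 would request a 512-bit startup vector length); the binary name is illustrative:

    qemu-aarch64 -cpu max,sve-default-vector-length=64 ./my-sve-binary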

2021-06-03  target/arm: Add isar_feature_{aa32, aa64, aa64_sve}_bf16  (Richard Henderson)

Note that the SVE BFLOAT16 support does not require SVE2; it is an independent extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  target/arm: Allow board models to specify initial NS VTOR  (Peter Maydell)

Currently we allow board models to specify the initial value of the Secure VTOR register, using an init-svtor property on the TYPE_ARMV7M object which is plumbed through to the CPU. Allow board models to also specify the initial value of the Non-secure VTOR via a similar init-nsvtor property.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-10-peter.maydell@linaro.org

2021-06-03  target/arm: Make FPSCR.LTPSIZE writable for MVE  (Peter Maydell)

The M-profile FPSCR has an LTPSIZE field, but if MVE is not implemented it is read-only and always reads as 4; this is how QEMU currently handles it. Make the field writable when MVE is implemented.

We can safely add the field to the MVE migration struct because currently no CPUs enable MVE and so the migration struct is never used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-8-peter.maydell@linaro.org

2021-06-03  target/arm: Implement M-profile VPR register  (Peter Maydell)

If MVE is implemented for an M-profile CPU then it has a VPR register, which tracks predication information. Implement the read and write handling of this register, and the migration of its state.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-7-peter.maydell@linaro.org