path: root/target/arm/helper.c
Age  Commit message  Author
2019-04-29  target/arm: Set FPCCR.S when executing M-profile floating point insns  (Peter Maydell)
The M-profile FPCCR.S bit indicates the security status of the floating point context. In the pseudocode ExecuteFPCheck() function it is unconditionally set to match the current security state whenever a floating point instruction is executed. Implement this by adding a new TB flag which tracks whether FPCCR.S is different from the current security state, so that we only need to emit the code to update it in the less-common case when it is not already set correctly. Note that we will add the handling for the other work done by ExecuteFPCheck() in later commits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
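As a rough standalone illustration of the scheme described above (not the actual QEMU translator code; the flag name and the fpccr_s/secure variables are invented for the sketch), the TB flag records only whether FPCCR.S disagrees with the current security state, so fix-up code is needed only in that uncommon case:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical TB flag: set when FPCCR.S != current security state. */
    #define TBFLAG_FPCCR_S_WRONG (1u << 0)

    /* Computed once per translation block (sketch). */
    static uint32_t fp_tb_flags(bool fpccr_s, bool secure)
    {
        return (fpccr_s != secure) ? TBFLAG_FPCCR_S_WRONG : 0;
    }

    /* On executing an FP insn, only the uncommon case needs code emitted
     * to update FPCCR.S, mirroring pseudocode ExecuteFPCheck(). */
    static void execute_fp_check(uint32_t tb_flags, bool secure, bool *fpccr_s)
    {
        if (tb_flags & TBFLAG_FPCCR_S_WRONG) {
            *fpccr_s = secure;
        }
    }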
2019-04-29  target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags  (Peter Maydell)
We are close to running out of TB flags for AArch32; we could start using the cs_base word, but before we do that we can economise on our usage by sharing the same bits for the VFP VECSTRIDE field and the XScale XSCALE_CPAR field. This works because no XScale CPU ever had VFP. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
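A minimal sketch of the bit-sharing idea (the bit positions here are invented; the real field layout lives in QEMU's TB-flag definitions): both accessors decode the same bits, which is safe only because no CPU implements both XScale and VFP:

    #include <stdint.h>

    /* Two logically distinct fields occupying the same two TB-flag bits
     * (positions invented for the sketch). */
    #define SHARED_SHIFT 4
    #define SHARED_MASK  (3u << SHARED_SHIFT)

    static inline uint32_t tbflag_vecstride(uint32_t flags)    /* VFP CPUs */
    {
        return (flags & SHARED_MASK) >> SHARED_SHIFT;
    }

    static inline uint32_t tbflag_xscale_cpar(uint32_t flags)  /* XScale CPUs */
    {
        return (flags & SHARED_MASK) >> SHARED_SHIFT;
    }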
2019-04-29  target/arm: Handle floating point registers in exception return  (Peter Maydell)
Handle floating point registers in exception return. This corresponds to pseudocode functions ValidateExceptionReturn(), ExceptionReturn(), PopStack() and ConsumeExcStackFrame(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
2019-04-29  target/arm: Allow for floating point in callee stack integrity check  (Peter Maydell)
The magic value pushed onto the callee stack as an integrity check is different if floating point is present. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
2019-04-29  target/arm: Clean excReturn bits when tail chaining  (Peter Maydell)
The TailChain() pseudocode specifies that a tail chaining exception should sanitize the excReturn all-ones bits and (if there is no FPU) the excReturn FType bits; we weren't doing this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
2019-04-29  target/arm: Clear CONTROL.SFPA in BXNS and BLXNS  (Peter Maydell)
For v8M floating point support, transitions from Secure to Non-secure state via BXNS and BLXNS must clear the CONTROL.SFPA bit. (This corresponds to the pseudocode BranchToNS() function.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
2019-04-29  target/arm: Implement v7m_update_fpccr()  (Peter Maydell)
Implement the code which updates the FPCCR register on an exception entry where we are going to use lazy FP stacking. We have to defer to the NVIC to determine whether the various exceptions are currently ready or not. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
2019-04-29  target/arm: Handle floating point registers in exception entry  (Peter Maydell)
Handle floating point registers in exception entry. This corresponds to the FP-specific parts of the pseudocode functions ActivateException() and PushStack(). We defer the code corresponding to UpdateFPCCR() to a later patch. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
2019-04-29  target/arm/helper: don't return early for STKOF faults during stacking  (Peter Maydell)
Currently the code in v7m_push_stack() which detects a violation of the v8M stack limit simply returns early if it does so. This is OK for the current integer-only code, but won't work for the floating point handling we're about to add. We need to continue executing the rest of the function so that we check for other exceptions like not having permission to use the FPU and so that we correctly set the FPCCR state if we are doing lazy stacking. Refactor to avoid the early return. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
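A sketch of the refactoring pattern (trivial stand-in functions, not the real v7m_push_stack()): the violation is recorded and execution continues, so the later FP permission and lazy-stacking handling still runs:

    #include <stdbool.h>
    #include <stdio.h>

    /* Trivial stand-ins for the real stacking/fault logic (hypothetical). */
    static bool stack_limit_violated(void) { return true; }
    static bool push_integer_frame(void)   { return true; }
    static bool push_fp_frame(void)        { return true; }
    static void raise_stkof_fault(void)    { puts("STKOF UsageFault"); }

    /*
     * Sketch of the refactor: record the violation and keep going, so the
     * FP permission checks and lazy-stacking FPCCR updates still run,
     * instead of returning as soon as the limit violation is detected.
     */
    static bool push_stack(void)
    {
        bool limitviol = false;

        if (stack_limit_violated()) {
            limitviol = true;
            raise_stkof_fault();
            /* note: no early return here */
        }

        bool ok = push_integer_frame();
        ok = push_fp_frame() && ok;
        return ok && !limitviol;
    }

    int main(void)
    {
        printf("stacking %s\n", push_stack() ? "succeeded" : "faulted");
        return 0;
    }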
2019-04-29  target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL  (Peter Maydell)
The M-profile CONTROL register has two bits -- SFPA and FPCA -- which relate to floating-point support, and should be RES0 otherwise. Handle them correctly in the MSR/MRS register access code. Neither is banked between security states, so they are stored in v7m.control[M_REG_S] regardless of current security state. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
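A simplified sketch of the read side (the structure and function are invented; CONTROL.FPCA is bit 2 and CONTROL.SFPA is bit 3): the two bits come from the Secure-bank copy regardless of the current security state, and read as zero when there is no FPU:

    #include <stdbool.h>
    #include <stdint.h>

    /* M-profile CONTROL bits: FPCA is bit 2, SFPA is bit 3. */
    #define CONTROL_FPCA (1u << 2)
    #define CONTROL_SFPA (1u << 3)

    struct m_state {
        uint32_t control[2];   /* [0] = Non-secure bank, [1] = Secure bank */
        bool have_fp;
    };

    /* Sketch of the MRS side: the banked bits come from the current bank,
     * but SFPA/FPCA are unbanked and always live in the Secure copy; with
     * no FPU both bits read as zero (RES0). */
    static uint32_t control_mrs(const struct m_state *s, bool secure)
    {
        uint32_t val = s->control[secure] & ~(CONTROL_FPCA | CONTROL_SFPA);

        if (s->have_fp) {
            val |= s->control[1] & (CONTROL_FPCA | CONTROL_SFPA);
        }
        return val;
    }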
2019-04-29  target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present  (Peter Maydell)
If the floating point extension is present, then the SG instruction must clear the CONTROL_S.SFPA bit. Implement this. (On a no-FPU system the bit will always be zero, so we don't need to make the clearing of the bit conditional on ARM_FEATURE_VFP.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
2019-04-29  target/arm: Honour M-profile FP enable bits  (Peter Maydell)
Like AArch64, M-profile floating point has no FPEXC enable bit to gate floating point; so always set the VFPEN TB flag. M-profile also has CPACR and NSACR similar to A-profile; they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to the Secure state, not the Non-Secure state
Honour the CPACR and NSACR settings. The NSACR handling requires us to borrow the exception.target_el field (usually meaningless for M profile) to distinguish the NOCP UsageFault taken to Secure state from the more usual fault taken to the current security state. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
2019-04-18  target: Simplify how the TARGET_cpu_list() print  (Markus Armbruster)
The various TARGET_cpu_list() take an fprintf()-like callback and a FILE * to pass to it. Their callers (vl.c's main() via list_cpus(), bsd-user/main.c's main(), linux-user/main.c's main()) all pass fprintf() and stdout. Thus, the flexibility provided by the (rather tiresome) indirection isn't actually used. Drop the callback, and call qemu_printf() instead. Calling printf() would also work, but would make the code unsuitable for monitor context without making it simpler. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190417191805.28198-10-armbru@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
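The shape of the change, sketched standalone (plain printf stands in for QEMU's qemu_printf(); the function names and the sample CPU name are made up):

    #include <stdio.h>

    /* Before: every caller had to pass fprintf and stdout. */
    typedef int (*cpu_fprintf_t)(FILE *f, const char *fmt, ...);

    void cpu_list_old(FILE *f, cpu_fprintf_t cpu_fprintf)
    {
        cpu_fprintf(f, "  cortex-a53\n");
    }

    /* After: print directly (QEMU uses qemu_printf(), which also works in
     * monitor context; plain printf stands in for it here). */
    void cpu_list_new(void)
    {
        printf("  cortex-a53\n");
    }

    int main(void)
    {
        cpu_list_old(stdout, fprintf);
        cpu_list_new();
        return 0;
    }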
2019-03-25  target/arm: make pmccntr_op_start/finish static  (Andrew Jones)
These functions are not used outside helper.c. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190322162333.17159-4-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-25  target/arm: fix crash on pmu register access  (Andrew Jones)
Fix a QEMU NULL dereference that occurs when the guest attempts to enable PMU counters with a non-v8 cpu model or a v8 cpu model which has not configured a PMU. Fixes: 4e7beb0cc0f3 ("target/arm: Add a timer to predict PMU counter overflow") Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190322162333.17159-2-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-15  target/arm: change arch timer registers access permission  (Dongjiu Geng)
Some generic arch timer registers are Config-RW in EL0, which means the EL0 exception level can have write permission if it is appropriately configured. When the VM accesses a register, QEMU first checks whether it has RW permission and only then whether it is appropriately configured. If a register is defined as read-only in EL0, it therefore never gets write permission, even when appropriately configured. So add the write permission when defining these registers, as the ARMv8 spec requires. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Message-id: 1552395177-12608-1-git-send-email-gengdongjiu@huawei.com Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Implement ARMv8.0-PredInv  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Split out arm_sctlr  (Richard Henderson)
Minimize the number of places that will need updating when the virtual host extensions are added. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28  Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"  (Peter Maydell)
This reverts commit 823e1b3818f9b10b824ddcd756983b6e2fa68730, which introduces a regression running EDK2 guest firmware under KVM: error: kvm run failed Function not implemented PC=000000013f5a6208 X00=00000000404003c4 X01=000000000000003a X02=0000000000000000 X03=00000000404003c4 X04=0000000000000000 X05=0000000096000046 X06=000000013d2ef270 X07=000000013e3d1710 X08=09010755ffaf8ba8 X09=ffaf8b9cfeeb5468 X10=feeb546409010756 X11=09010757ffaf8b90 X12=feeb50680903068b X13=090306a1ffaf8bc0 X14=0000000000000000 X15=0000000000000000 X16=000000013f872da0 X17=00000000ffffa6ab X18=0000000000000000 X19=000000013f5a92d0 X20=000000013f5a7a78 X21=000000000000003a X22=000000013f5a7ab2 X23=000000013f5a92e8 X24=000000013f631090 X25=0000000000000010 X26=0000000000000100 X27=000000013f89501b X28=000000013e3d14e0 X29=000000013e3d12a0 X30=000000013f5a2518 SP=000000013b7be0b0 PSTATE=404003c4 -Z-- EL1t with [ 3507.926571] kvm [35042]: load/store instruction decoding not implemented in the host dmesg. Revert the change for the moment until we can investigate the cause of the regression. Reported-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21  target/arm: Split out vfp_helper.c  (Richard Henderson)
Move all of the fp helpers out of helper.c into a new file. This is code movement only. Since helper.c has no copyright header, take the one from cpu.h for the new file. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190215192302.27855-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21  target/arm: Stop unintentional sign extension in pmu_init  (Aaron Lindsay OS)
This was introduced by commit bf8d09694ccc07487cd73d7562081fdaec3370c8 ("target/arm: Don't clear supported PMU events when initializing PMCEID1") and identified by Coverity (CID 1398645). Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reported-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190219144621.450-1-aaron@os.amperecomputing.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21  target/arm: v8M MPU should use background region as default, not always  (Peter Maydell)
The "background region" for a v8M MPU is a default which will be used (if enabled, and if the access is privileged) if the access does not match any specific MPU region. We were incorrectly using it always (by putting the condition at the wrong nesting level). This meant that we would always return the default background permissions rather than the correct permissions for a specific region, and also that we would not return the right information in response to a TT instruction. Move the check for the background region to the same place in the logic as the equivalent v8M MPUCheck() pseudocode puts it. This in turn means we must adjust the condition we use to detect matches in multiple regions to avoid false-positives. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190214113408.10214-1-peter.maydell@linaro.org
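A simplified sketch of the corrected ordering (types and the single-attribute return are invented, and the multiple-match case the commit also adjusts is ignored here): specific regions are searched first, and the background region is used only as a fallback:

    #include <stdbool.h>
    #include <stdint.h>

    struct mpu_region { uint32_t base, limit; bool enabled; int attrs; };

    /* Sketch of the corrected lookup order: specific regions are checked
     * first, and only if no region matches do we fall back to the
     * background (default) mapping, which applies only to privileged
     * accesses when the background region is enabled. */
    static int mpu_lookup(const struct mpu_region *r, int nregions,
                          uint32_t addr, bool priv, bool bg_enabled,
                          int bg_attrs, int fault_attrs)
    {
        for (int i = nregions - 1; i >= 0; i--) {
            if (r[i].enabled && addr >= r[i].base && addr <= r[i].limit) {
                return r[i].attrs;          /* specific region wins */
            }
        }
        if (bg_enabled && priv) {
            return bg_attrs;                /* background only as a default */
        }
        return fault_attrs;                 /* no match: MemManage fault */
    }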
2019-02-18  qapi: make query-cpu-definitions depend on specific targets  (Marc-André Lureau)
It depends on TARGET_PPC || TARGET_ARM || TARGET_I386 || TARGET_S390X. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190214152251.2073-15-armbru@redhat.com>
2019-02-15  target/arm: Split out FPSCR.QC to a vector field  (Richard Henderson)
Change the representation of this field such that it is easy to set from vector code. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190209033847.9014-11-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15  target/arm: Fix set of bits kept in xregs[ARM_VFP_FPSCR]  (Richard Henderson)
Given that we mask bits properly on set, there is no reason to mask them again on get. We failed to clear the exception status bits, 0x9f, which means that the wrong value would be returned on get. Except in the (probably normal) case in which the set clears all of the bits. Simplify the code in set to also clear the RES0 bits. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190209033847.9014-10-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
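The principle, sketched standalone (the mask value is a placeholder, not the real FPSCR layout): mask once on write so the read can return the stored value unmodified:

    #include <stdint.h>

    /* Placeholder for "the FPSCR bits actually backed by this storage";
     * everything else is RES0 or kept elsewhere. */
    #define FPSCR_STORED_MASK 0xffc8f7ffu

    static uint32_t fpscr_reg;

    static void fpscr_set(uint32_t val)
    {
        fpscr_reg = val & FPSCR_STORED_MASK;   /* mask once, on write */
    }

    static uint32_t fpscr_get(void)
    {
        return fpscr_reg;                      /* no second masking needed */
    }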
2019-02-15  target/arm: Split out flags setting from vfp compares  (Richard Henderson)
Minimize the code within a macro by splitting out a helper function. Use deposit32 instead of manual bit manipulation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190209033847.9014-9-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15  target/arm: Fix vfp_gdb_get/set_reg vs FPSCR  (Richard Henderson)
The components of this register are stored in several different locations. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190209033847.9014-7-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15  arm: Allow system registers for KVM guests to be changed by QEMU code  (Peter Maydell)
At the moment the Arm implementations of kvm_arch_{get,put}_registers() don't support having QEMU change the values of system registers (aka coprocessor registers for AArch32). This is because although kvm_arch_get_registers() calls write_list_to_cpustate() to update the CPU state struct fields (so QEMU code can read the values in the usual way), kvm_arch_put_registers() does not call write_cpustate_to_list(), meaning that any changes to the CPU state struct fields will not be passed back to KVM. The rationale for this design is documented in a comment in the AArch32 kvm_arch_put_registers() -- writing the values in the cpregs list into the CPU state struct is "lossy" because the write of a register might not succeed, and so if we blindly copy the CPU state values back again we will incorrectly change register values for the guest. The assumption was that no QEMU code would need to write to the registers. However, when we implemented debug support for KVM guests, we broke that assumption: the code to handle "set the guest up to take a breakpoint exception" does so by updating various guest registers including ESR_EL1. Support this by making kvm_arch_put_registers() synchronize CPU state back into the list. We sync only those registers where the initial write succeeds, which should be sufficient. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Dongjiu Geng <gengdongjiu@huawei.com>
2019-02-15  target/arm: expose remaining CPUID registers as RAZ  (Alex Bennée)
There are a whole bunch more registers in the CPUID space which are currently not used but are exposed as RAZ. To avoid too much duplication we expand ARMCPRegUserSpaceInfo to understand glob patterns so we only need one entry to tweak whole ranges of registers. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190205190224.2198-5-alex.bennee@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
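A toy version of the idea (a minimal trailing-'*' matcher stands in for whatever pattern matching the real code uses; the table contents are illustrative only): one glob entry can cover a whole range of ID registers:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* One table entry can cover many registers if its name ends in '*'. */
    struct user_reg_rule {
        const char *name;   /* exact name or trailing-'*' glob */
        bool raz;           /* expose as read-as-zero to userspace */
    };

    static const struct user_reg_rule rules[] = {
        { "ID_AA64PFR0_EL1", false },   /* selectively exported */
        { "ID_AA64*",        true  },   /* everything else in the block: RAZ */
    };

    /* Minimal glob: exact match, or prefix match before a trailing '*'. */
    static bool rule_matches(const char *pattern, const char *regname)
    {
        size_t n = strlen(pattern);
        if (n && pattern[n - 1] == '*') {
            return strncmp(pattern, regname, n - 1) == 0;
        }
        return strcmp(pattern, regname) == 0;
    }

    int main(void)
    {
        const char *reg = "ID_AA64ISAR1_EL1";
        for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
            if (rule_matches(rules[i].name, reg)) {
                printf("%s matched '%s' (raz=%d)\n", reg, rules[i].name, rules[i].raz);
                break;
            }
        }
        return 0;
    }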
2019-02-15  target/arm: expose MPIDR_EL1 to userspace  (Alex Bennée)
As this is a single register we could expose it with a simple ifdef but we use the existing modify_arm_cp_regs mechanism for consistency. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190205190224.2198-4-alex.bennee@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15  target/arm: expose CPUID registers to userspace  (Alex Bennée)
A number of CPUID registers are exposed to userspace by modern Linux kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's user-mode emulation we don't need to emulate the kernel's trap but just return the value the trap would have done. To avoid too much #ifdef hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs) before defining the registers. The modify routine is driven by a simple data structure which describes which bits are exported and which are fixed. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190205190224.2198-3-alex.bennee@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
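The data-driven part, in miniature (field and function names are invented for the sketch): a per-register descriptor of exported and fixed bits is applied to the reset value before the register is defined:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical descriptor: which bits userspace may see as-is, and
     * which are forced to a fixed value. */
    struct user_space_info {
        uint64_t exported_bits;
        uint64_t fixed_bits;
    };

    /* Applied to a register's reset value before it is defined:
     * unexported bits are cleared, fixed bits are forced on. */
    static uint64_t apply_user_space_info(uint64_t resetvalue,
                                          const struct user_space_info *m)
    {
        return (resetvalue & m->exported_bits) | m->fixed_bits;
    }

    int main(void)
    {
        struct user_space_info m = { .exported_bits = 0x00000000000000ffull,
                                     .fixed_bits    = 0x0000000000001100ull };
        printf("0x%016llx\n",
               (unsigned long long)apply_user_space_info(0x123456789abcdef0ull, &m));
        return 0;
    }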
2019-02-15  target/arm: relax permission checks for HWCAP_CPUID registers  (Alex Bennée)
Although these registers are technically not visible to userspace, the kernel does make them visible via a trap-and-emulate ABI. We provide a new permission mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds and adjust the minimum permission check accordingly. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190205190224.2198-2-alex.bennee@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15  target/arm: Implement HACR_EL2  (Peter Maydell)
HACR_EL2 is a register with IMPDEF behaviour, which allows implementation specific trapping to EL2. Implement it as RAZ/WI, since QEMU's implementation has no extra traps. This also matches what h/w implementations like Cortex-A53 and A57 do. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190205181218.8995-1-peter.maydell@linaro.org
2019-02-15  target/arm: Fix CRn to be 14 for PMEVTYPER/PMEVCNTR  (Aaron Lindsay OS)
This bug was introduced in commit 5ecdd3e47cadae83a62dc92b472f1fe163b56f59 ("target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER"). Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190205135129.19338-1-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI  (Peter Maydell)
The {IOE, DZE, OFE, UFE, IXE, IDE} bits in the FPSCR/FPCR are for enabling trapped IEEE floating point exceptions (where IEEE exception conditions cause a CPU exception rather than updating the FPSR status bits). QEMU doesn't implement this (and nor does the hardware we're modelling), but for implementations which don't implement trapped exception handling these control bits are supposed to be RAZ/WI. This allows guest code to test for whether the feature is present by trying to write to the bit and checking whether it sticks. QEMU is incorrectly making these bits read as written. Make them RAZ/WI as the architecture requires. In particular this was causing problems for the NetBSD automatic test suite. Reported-by: Martin Husemann <martin@netbsd.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190131130700.28392-1-peter.maydell@linaro.org
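A sketch of the RAZ/WI behaviour (standalone; the bit positions follow the architectural FPSCR layout, with IOE..IXE at bits 8-12 and IDE at bit 15): the trap-enable bits are simply dropped on write, so a guest's probe write reads back as zero:

    #include <stdint.h>

    /* FPSCR/FPCR trapped-exception enables: IOE..IXE (bits 8-12) and
     * IDE (bit 15). */
    #define FPSCR_TRAP_ENABLE_BITS 0x9f00u

    static uint32_t fpscr_state;

    static void fpscr_write(uint32_t val)
    {
        /* RAZ/WI: never let the trap-enable bits stick. */
        fpscr_state = val & ~FPSCR_TRAP_ENABLE_BITS;
    }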
2019-02-05  target/arm: Compute TB_FLAGS for TBI for user-only  (Peter Maydell)
Enables, but does not turn on, TBI for CONFIG_USER_ONLY. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-4-richard.henderson@linaro.org [PMM: adjusted #ifdeffery to placate clang, which otherwise complains about static functions that are unused in the CONFIG_USER_ONLY build] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore  (Richard Henderson)
Split out gen_top_byte_ignore in preparation of handling these data accesses; the new tbflags field is not yet honored. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190204132126.3255-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Cache the GP bit for a page in MemTxAttrs  (Richard Henderson)
Caching the bit means that we will not have to re-walk the page tables to look up the bit during translation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190128223118.5255-6-richard.henderson@linaro.org [PMM: no need to OR in guarded bit status] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Add BT and BTYPE to tb->flags  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190128223118.5255-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: Enable API, APK bits in SCR, HCR  (Richard Henderson)
These bits become writable with the ARMv8.3-PAuth extension. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190129143511.12311-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01  target/arm: Add a timer to predict PMU counter overflow  (Aaron Lindsay OS)
Make PMU overflow interrupts more accurate by using a timer to predict when they will overflow rather than waiting for an event whose handling would otherwise give us a chance to check them. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190124162401.5111-3-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
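The arithmetic of the prediction, sketched (names and the fixed count rate are assumptions, not the real PMU code): given the current counter value and the rate at which it increments, compute when the 32-bit counter will wrap and arm a one-shot timer for that instant:

    #include <stdint.h>

    /* Nanoseconds until a 32-bit counter overflows, assuming it counts
     * at a constant rate_hz events per second. The result is used to
     * arm a one-shot timer that raises the overflow interrupt at the
     * predicted time instead of waiting for the next incidental check. */
    static int64_t ns_until_overflow(uint32_t counter, uint64_t rate_hz)
    {
        uint64_t remaining = (uint64_t)UINT32_MAX - counter + 1;

        return (int64_t)(remaining * 1000000000ull / rate_hz);
    }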
2019-02-01  target/arm: Send interrupts on PMU counter overflow  (Aaron Lindsay OS)
Whenever we notice that a counter overflow has occurred, send an interrupt. This is made more reliable with the addition of a timer in a follow-on commit. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190124162401.5111-2-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-29  target/arm: Don't clear supported PMU events when initializing PMCEID1  (Aaron Lindsay OS)
A bug was introduced during a respin of commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a ("target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0"). This patch introduced two calls to get_pmceid() during CPU initialization - one each for PMCEID0 and PMCEID1. In addition to building the register values, get_pmceid() clears an internal array mapping event numbers to their implementations (supported_event_map) before rebuilding it. This is an optimization since much of the logic is shared. However, since it was called twice, the contents of supported_event_map reflect only the events in PMCEID1 (the second call to get_pmceid()). Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back into a single function call, and name it more appropriately since it is doing more than simply generating the contents of the PMCEID[01] registers. Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190123195814.29253-1-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-29  target/arm: v8m: Ensure IDAU is respected if SAU is disabled  (Thomas Roth)
The current behavior of v8m_security_lookup in helper.c only checks whether the IDAU specifies a higher security if the SAU is enabled. If SAU.ALLNS is set to 1, this will lead to addresses being treated as non-secure, even though the IDAU indicates that they must be secure. This patch changes the behavior to also check the IDAU if the SAU is currently disabled. (This brings the behaviour here into line with the v8M Arm ARM SecurityCheck() pseudocode.) Signed-off-by: Thomas Roth <code@stacksmashing.net> Message-id: CAGGekkuc+-tvp5RJP7CM+Jy_hJF7eiRHZ96132sb=hPPCappKg@mail.gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: added pseudocode ref to the commit message, fixed comment style] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
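The combination rule, sketched standalone (the boolean inputs abstract away the real SAU region walk and IDAU interface): the IDAU is consulted unconditionally and can only ever make an address more secure:

    #include <stdbool.h>

    /* Sketch of the combination rule: the IDAU may force an address to be
     * Secure even when the SAU is disabled (or when SAU.ALLNS would mark
     * everything Non-secure); it is consulted unconditionally. */
    static bool addr_is_secure(bool sau_enabled, bool sau_allns,
                               bool sau_region_secure,
                               bool idau_valid, bool idau_secure)
    {
        bool secure;

        if (sau_enabled) {
            secure = sau_region_secure;
        } else {
            secure = !sau_allns;    /* SAU off: ALLNS decides the default */
        }
        /* The fix: check the IDAU even when the SAU is disabled; it can
         * only ever make an address *more* secure. */
        if (idau_valid && idau_secure) {
            secure = true;
        }
        return secure;
    }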
2019-01-29  target/arm: Fix validation of 32-bit address spaces for aa32  (Richard Henderson)
When tsz == 0, aarch32 selects the address space via exclusion, and there are no "top_bits" remaining that require validation. Fixes: ba97be9f4a4 Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190125184913.5970-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Implement PMSWINC  (Aaron Lindsay)
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-14-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: PMU: Set PMCR.N to 4  (Aaron Lindsay)
This both advertises that we support four counters and enables them, because pmu_num_counters() reads this value from PMCR. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-13-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: PMU: Add instruction and cycle events  (Aaron Lindsay)
The instruction event is only enabled when icount is used; cycles are always supported. Always defining get_cycle_count (but altering its behavior depending on CONFIG_USER_ONLY) allows us to remove some CONFIG_USER_ONLY #defines throughout the rest of the code. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-12-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER  (Aaron Lindsay)
Add arrays to hold the registers, the definitions themselves, access functions, and logic to reset counters when PMCR.P is set. Update filtering code to support counters other than PMCCNTR. Support migration with raw read/write functions. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181211151945.29137-11-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0  (Aaron Lindsay)
This commit doesn't add any supported events, but provides the framework for adding them. We store the pm_event structs in a simple array, and provide the mapping from the event numbers to array indexes in the supported_event_map array. Because the value of PMCEID[01] depends upon which events are supported at runtime, generate it dynamically. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20181211151945.29137-10-aaron@os.amperecomputing.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
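The scheme in miniature (names, sizes and the two sample events are illustrative only, not the real tables): an array of event descriptors with a supported() predicate is walked once at init, building both the PMCEID bit pattern and the event-number-to-index map used later:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: each supported event has a number and a predicate; PMCEID0
     * advertises common events 0..31, one bit per event, and
     * supported_event_map lets later code find the handler by number. */
    struct pm_event {
        uint16_t number;
        bool (*supported)(void);
    };

    static bool always(void) { return true; }

    static const struct pm_event pm_events[] = {
        { 0x000, always },   /* SW_INCR */
        { 0x011, always },   /* CPU_CYCLES */
    };

    #define NUM_PM_EVENTS (sizeof(pm_events) / sizeof(pm_events[0]))
    #define UNSUPPORTED 0xffff

    static uint16_t supported_event_map[0x40];

    /* Walk the table once: clear the map, then record each supported
     * event's array index and set its PMCEID0 bit. Returns PMCEID0. */
    static uint32_t init_pm_events(void)
    {
        uint32_t pmceid = 0;

        for (unsigned i = 0; i < 0x40; i++) {
            supported_event_map[i] = UNSUPPORTED;
        }
        for (unsigned i = 0; i < NUM_PM_EVENTS; i++) {
            const struct pm_event *e = &pm_events[i];
            if (e->number < 32 && e->supported()) {
                pmceid |= 1u << e->number;
                supported_event_map[e->number] = i;
            }
        }
        return pmceid;
    }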