path: root/target/arm
2018-10-24  target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-9-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
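For illustration, a minimal standalone sketch of the ID-register test this series moves to: ID_AA64PFR0_EL1.FP sits at bits [19:16] (the value 1 indicates half-precision support) and ID_AA64PFR0_EL1.SVE at bits [35:32]. The helper names and the example register value below are invented for the sketch, not taken from QEMU.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Extract a 4-bit ID register field starting at 'shift'. */
    static unsigned id_field(uint64_t reg, unsigned shift)
    {
        return (reg >> shift) & 0xf;
    }

    /* ID_AA64PFR0_EL1.FP, bits [19:16]: 0 = FP without FP16,
     * 1 = FP including FP16, 0xf = no FP implemented. */
    static bool cpu_has_fp16(uint64_t id_aa64pfr0)
    {
        return id_field(id_aa64pfr0, 16) == 1;
    }

    /* ID_AA64PFR0_EL1.SVE, bits [35:32]: nonzero = SVE implemented. */
    static bool cpu_has_sve(uint64_t id_aa64pfr0)
    {
        return id_field(id_aa64pfr0, 32) != 0;
    }

    int main(void)
    {
        uint64_t pfr0 = 0x0000000100010011ull;  /* made-up example value */
        printf("fp16: %d  sve: %d\n", cpu_has_fp16(pfr0), cpu_has_sve(pfr0));
        return 0;
    }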
2018-10-24  target/arm: Convert sve from feature bit to aa64pfr0 test  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-8-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24  target/arm: Convert jazelle from feature bit to isar1 test  (Richard Henderson)
Having V6 alone imply jazelle was wrong for cortex-m0. Change to an assertion for V6 & !M. This was harmless, because the only place we tested ARM_FEATURE_JAZELLE was for 'bxj' in disas_arm(), which is unreachable for M-profile cores. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24  target/arm: Convert division from feature bits to isar0 tests  (Richard Henderson)
Both arm and thumb2 division are controlled by the same ISAR field, which takes care of the arm implies thumb case. Having M imply thumb2 division was wrong for cortex-m0, which is v6m and does not have thumb2 at all, much less thumb2 division. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
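A sketch of the corresponding AArch32 test: ID_ISAR0.Divide is bits [27:24], where 0 means no divide, 1 means SDIV/UDIV in Thumb only, and 2 means divide in both Arm and Thumb. The function names are illustrative, not QEMU's.

    #include <stdbool.h>
    #include <stdint.h>

    /* ID_ISAR0.Divide, bits [27:24]:
     *   0 = no divide instructions
     *   1 = SDIV/UDIV in the Thumb instruction set only
     *   2 = SDIV/UDIV in both the Arm and Thumb instruction sets
     */
    static unsigned isar0_divide(uint32_t id_isar0)
    {
        return (id_isar0 >> 24) & 0xf;
    }

    static bool cpu_has_thumb_div(uint32_t id_isar0)
    {
        return isar0_divide(id_isar0) != 0;
    }

    static bool cpu_has_arm_div(uint32_t id_isar0)
    {
        /* Arm-state divide implies Thumb divide, so one field covers both. */
        return isar0_divide(id_isar0) > 1;
    }

Because a single field covers both instruction sets, the "arm implies thumb" observation in the commit falls out naturally from the encoding.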
2018-10-24  target/arm: Convert v8 extensions from feature bits to isar tests  (Richard Henderson)
Most of the v8 extensions are self-contained within the ISAR registers and are not implied by other feature bits, which makes them the easiest to convert. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24  target/arm: V8M should not imply V7VE  (Richard Henderson)
Instantiating mps2-an505 (cortex-m33) will fail make check when V7VE asserts that ID_ISAR0.Divide includes ARM division. It is also wrong to include ARM_FEATURE_LPAE. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24  target/arm: Move some system registers into a substructure  (Richard Henderson)
Create struct ARMISARegisters, to be accessed during translation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181016223115.24100-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
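Roughly what such a substructure looks like; the field list below is abbreviated and illustrative, not a copy of the struct added to target/arm/cpu.h.

    #include <stdint.h>

    /* Group the read-only ID registers the translator consults into one
     * struct, so translation code can take a pointer to just this block
     * instead of reaching into the whole CPU state.
     */
    typedef struct ARMISARegisters {
        uint32_t id_isar0;
        uint32_t id_isar1;
        uint32_t id_isar2;
        uint32_t id_isar3;
        uint32_t id_isar4;
        uint32_t id_isar5;
        uint32_t mvfr0;
        uint32_t mvfr1;
        uint32_t mvfr2;
        uint64_t id_aa64isar0;
        uint64_t id_aa64isar1;
        uint64_t id_aa64pfr0;
        uint64_t id_aa64pfr1;
    } ARMISARegisters;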
2018-10-24  target/arm: Add support for VCPU event states  (Dongjiu Geng)
This patch extends the qemu-kvm state sync logic with support for KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError exception, and adds support for migrating that exception state. The SError exception state consists of a pending flag and an ESR value; kvm_put/get_vcpu_events() are called when system registers are set or read. On migration, if the source machine has an SError pending, QEMU migrates it regardless of whether the target machine supports specifying a guest ESR value, because a target that does not support that can still inject the SError with a zero ESR value. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
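A hedged sketch of the get/put shape around KVM_GET/SET_VCPU_EVENTS. The serror_pending/serror_has_esr/serror_esr field names are quoted from memory of the arm64 KVM UAPI, so treat them as assumptions and check linux/kvm.h; QEMU's real code goes through kvm_vcpu_ioctl() with proper error handling, which is omitted here.

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Read the current SError state out of the kernel. */
    static int get_serror(int vcpu_fd, struct kvm_vcpu_events *ev)
    {
        memset(ev, 0, sizeof(*ev));
        return ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, ev);
    }

    /* Write an SError back; has_esr == 0 asks the kernel to inject the
     * SError without a caller-specified ESR (i.e. a zero ESR). */
    static int put_serror(int vcpu_fd, int pending, int has_esr, uint64_t esr)
    {
        struct kvm_vcpu_events ev;

        memset(&ev, 0, sizeof(ev));
        ev.exception.serror_pending = pending;
        ev.exception.serror_has_esr = has_esr;
        ev.exception.serror_esr = esr;
        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &ev);
    }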
2018-10-18  target/arm: Check HAVE_CMPXCHG128 at translate time  (Richard Henderson)
Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-18  target/arm: Convert to HAVE_CMPXCHG128  (Richard Henderson)
Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-16  target/arm: Initialize ARMMMUFaultInfo in v7m_stack_read/write  (Peter Maydell)
The get_phys_addr() functions take a pointer to an ARMMMUFaultInfo struct, which they fill in only if a fault occurs. This means that the caller must always zero-initialize the struct before passing it in. We forgot to do this in v7m_stack_read() and v7m_stack_write(). Correct the error. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181011172057.9466-1-peter.maydell@linaro.org
2018-10-16  target/arm: Mask PMOVSR writes based on supported counters  (Aaron Lindsay)
This is an amendment to my earlier patch, commit 7ece99b17e832065236c07a158dfac62619ef99b ("target/arm: Mask PMU register writes based on PMCR_EL0.N", Aaron Lindsay, 2018-04-26). Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181010203735.27918-3-aclindsa@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
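To illustrate the masking, a small sketch that derives the writable PMOVSR bits from the number of implemented event counters; PMCR.N is bits [15:11] and PMOVSR bit 31 is the cycle counter overflow flag. The helper names are invented here.

    #include <stdint.h>

    /* PMCR.N, bits [15:11]: number of implemented event counters. */
    static unsigned pmcr_num_counters(uint32_t pmcr)
    {
        return (pmcr >> 11) & 0x1f;
    }

    /* Writable PMOVSR bits: one per implemented event counter plus bit 31
     * for the cycle counter; writes to the remaining bits should be
     * ignored, e.g.  value &= pmovsr_valid_mask(pmcr);
     */
    static uint32_t pmovsr_valid_mask(uint32_t pmcr)
    {
        unsigned n = pmcr_num_counters(pmcr);
        return ((UINT32_C(1) << n) - 1) | (UINT32_C(1) << 31);
    }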
2018-10-16  target/arm: Mark PMINTENCLR and PMINTENCLR_EL1 accesses as possibly doing IO  (Aaron Lindsay)
I previously fixed this for PMINTENSET_EL1, but missed these. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Signed-off-by: Aaron Lindsay <aclindsa@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181010203735.27918-2-aclindsa@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16  target/arm: Add the Cortex-A72  (Edgar E. Iglesias)
Add the ARM Cortex-A72. Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20181011021931.4249-11-edgar.iglesias@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16  target-arm: powerctl: Enable HVC when starting CPUs to EL2  (Edgar E. Iglesias)
When QEMU provides the equivalent of the EL3 firmware, we need to enable HVCs in scr_el3 when turning on CPUs that target EL2. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20181011021931.4249-10-edgar.iglesias@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16  target/arm: Fix cortex-a7 id_isar0  (Richard Henderson)
The incorrect value advertised only thumb2 div without arm div. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181008212205.17752-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16  target/arm: Align cortex-r5 id_isar0  (Richard Henderson)
The missing nibble made it more difficult to read. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181008212205.17752-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16  target/arm: Define fields of ISAR registers  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181008212205.17752-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
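QEMU's hw/registerfields.h FIELD() macro is the natural way to declare these; below is a sketch of the flavour of definitions and how a caller tests one. The exact set of fields added by the patch is not reproduced here.

    #include "hw/registerfields.h"

    /* Each FIELD(REG, NAME, shift, width) declaration lets callers use
     * FIELD_EX32()/FIELD_EX64() instead of open-coded shifts and masks. */
    FIELD(ID_ISAR0, DIVIDE, 24, 4)
    FIELD(ID_ISAR1, JAZELLE, 28, 4)
    FIELD(ID_AA64PFR0, FP, 16, 4)
    FIELD(ID_AA64PFR0, SVE, 32, 4)

    /* e.g. FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1 means Arm-state
     * SDIV/UDIV are present as well as the Thumb encodings. */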
2018-10-16  target/arm: Fix aarch64_sve_change_el wrt EL0  (Richard Henderson)
At present we assert: arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed. The comment in arm_el_is_aa64 explains why asking about EL0 without extra information is impossible. Add an extra argument to provide it from the surrounding context. Fixes: 0ab5953b00b3 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181008212205.17752-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Add v8M stack checks for MSR to SP_NS  (Peter Maydell)
Updating the NS stack pointer via MSR to SP_NS should include a check whether the new SP value is below the stack limit. No other kinds of update to the various stack pointer and limit registers via MSR should perform a check. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-14-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks for VLDM/VSTM  (Peter Maydell)
Add the v8M stack checks for the VLDM/VSTM (aka VPUSH/VPOP) instructions. This code is currently unreachable because we haven't yet implemented M profile floating point support, but since the change is simple, we add it now because otherwise we're likely to forget to do it later. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-13-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks for Thumb push/pop  (Peter Maydell)
Add v8M stack checks for the 16-bit Thumb push/pop encodings: STMDB, STMFD, LDM, LDMIA, LDMFD. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-12-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks for T32 load/store single  (Peter Maydell)
Add v8M stack checks for the instructions in the T32 "load/store single" encoding class: these are the "immediate pre-indexed" and "immediate, post-indexed" LDR and STR instructions. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-11-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks for Thumb2 LDM/STM  (Peter Maydell)
Add the v8M stack checks for:
 * LDM (T2 encoding)
 * STM (T2 encoding)
This includes the 32-bit encodings of the instructions listed in v8M ARM ARM rule R_YVWT as
 * LDM, LDMIA, LDMFD
 * LDMDB, LDMEA
 * POP (multiple registers)
 * PUSH (multiple registers)
 * STM, STMIA, STMEA
 * STMDB, STMFD
We perform the stack limit check before doing any other part of the load or store. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-10-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks for LDRD/STRD (imm)  (Peter Maydell)
Add the v8M stack checks for:
 * LDRD (immediate)
 * STRD (immediate)
Loads and stores are more complicated than ADD/SUB/MOV, because we must ensure that memory accesses below the stack limit are not performed, so we can't simply do the check when we actually update SP. For these instructions, if the stack limit check triggers we must not:
 * perform any memory access below the SP limit
 * update PC, SP or the load/store base register
but it is IMPDEF whether we:
 * perform any accesses above or equal to the SP limit
 * update destination registers for loads
For QEMU we choose to always check the limit before doing any other part of the load or store, so we won't update any registers or perform any memory accesses. It is UNKNOWN whether the limit check triggers for a load or store where the initial SP value is below the limit and one of the stores would be below the limit, but the writeback moves SP to above the limit. For QEMU we choose to trigger the check in this situation. Note that limit checks happen only for loads and stores which update SP via writeback; they do not happen for loads and stores which simply use SP as a base register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-9-peter.maydell@linaro.org
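The chosen ordering can be condensed into a small sketch (names hypothetical): the writeback value of SP is checked against the limit first, and only if the check passes does the instruction perform its accesses and update the base register.

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns true if the instruction may proceed. On failure the caller
     * raises the STKOF UsageFault instead; at that point no memory has
     * been touched and no register (PC, SP, base) has been updated.
     */
    static bool sp_writeback_allowed(uint32_t sp, int32_t offset,
                                     uint32_t stack_limit, bool check_enabled)
    {
        uint32_t new_sp = sp + (uint32_t)offset;

        return !check_enabled || new_sp >= stack_limit;
    }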
2018-10-08  target/arm: Add v8M stack limit checks on NS function calls  (Peter Maydell)
Check the v8M stack limits when pushing the frame for a non-secure function call via BLXNS. In order to be able to generate the exception we need to promote raise_exception() from being local to op_helper.c so we can call it from helper.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-8-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks on exception entry  (Peter Maydell)
Add checks for breaches of the v8M stack limit when the stack pointer is decremented to push the exception frame for exception entry. Note that the exception-entry case is unique in that the stack pointer is updated to be the limit value if the limit is hit (per rule R_ZLZG). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-7-peter.maydell@linaro.org
2018-10-08  target/arm: Add some comments in Thumb decode  (Peter Maydell)
Add some comments to the Thumb decoder indicating what bits of the instruction have been decoded at various points in the code. This is not an exhaustive set of comments; we're gradually adding comments as we work with particular bits of the code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-6-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks on ADD/SUB/MOV of SP  (Peter Maydell)
Add code to insert calls to a helper function to do the stack limit checking when we handle these forms of instruction that write to SP:
 * ADD (SP plus immediate)
 * ADD (SP plus register)
 * SUB (SP minus immediate)
 * SUB (SP minus register)
 * MOV (register)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-5-peter.maydell@linaro.org
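At runtime the inserted call boils down to a compare of the candidate SP value against the active limit. Below is a standalone sketch of that helper's logic, with a callback standing in for raising the UsageFault; the real helper name and translator plumbing from this series are not reproduced.

    #include <stdint.h>

    /* Called with the value about to be written to SP. If it is below the
     * active stack limit, the fault callback is invoked and the write must
     * not be committed; otherwise the instruction proceeds normally.
     */
    static void stackcheck_sketch(uint32_t new_sp, uint32_t stack_limit,
                                  void (*raise_stkof_fault)(void))
    {
        if (new_sp < stack_limit) {
            raise_stkof_fault();
        }
    }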
2018-10-08  target/arm: Move v7m_using_psp() to internals.h  (Peter Maydell)
We're going to want v7m_using_psp() in op_helper.c in the next patch, so move it from helper.c to internals.h. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-4-peter.maydell@linaro.org
2018-10-08  target/arm: Define new EXCP type for v8M stack overflows  (Peter Maydell)
Define EXCP_STKOF, and arrange for it to cause us to take a UsageFault with CFSR.STKOF set. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-3-peter.maydell@linaro.org
2018-10-08  target/arm: Define new TBFLAG for v8M stack checking  (Peter Maydell)
The Arm v8M architecture includes hardware stack limit checking. When certain instructions update the stack pointer, if the new value of SP is below the limit set in the associated limit register then an exception is taken. Add a TB flag that tracks whether the limit-checking code needs to be emitted. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20181002163556.10279-2-peter.maydell@linaro.org
2018-10-08  target/arm: Pass TCGMemOpIdx to sve memory helpers  (Richard Henderson)
There is quite a lot of code required to compute cpu_mem_index, or even put together the full TCGMemOpIdx. This can easily be done at translation time. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
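The packing itself is simple; QEMU's real primitives are make_memop_idx()/get_memop()/get_mmuidx(), and the miniature version below just shows the idea of combining the memory-op description and the mmu index into one immediate computed at translation time.

    #include <stdint.h>

    /* Pack at translate time, unpack in the helper at run time. */
    static inline uint32_t pack_oi(uint32_t memop, uint32_t mmu_idx)
    {
        return (memop << 4) | (mmu_idx & 0xf);
    }

    static inline uint32_t oi_memop(uint32_t oi)  { return oi >> 4; }
    static inline uint32_t oi_mmuidx(uint32_t oi) { return oi & 0xf; }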
2018-10-08  target/arm: Rewrite vector gather first-fault loads  (Richard Henderson)
This implements the feature for softmmu, and moves the main loop out of a macro and into a function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Rewrite vector gather stores  (Richard Henderson)
This fixes the endianness problem for softmmu, and moves the main loop out of a macro and into an inlined function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Rewrite vector gather loads  (Richard Henderson)
This fixes the endianness problem for softmmu, and moves the main loop out of a macro and into an inlined function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Split contiguous stores for endianness  (Richard Henderson)
We can choose the endianness at translation time, rather than re-computing it at execution time. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Split contiguous loads for endianness  (Richard Henderson)
We can choose the endianness at translation time, rather than re-computing it at execution time. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
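The pattern here is to compile little- and big-endian variants of a helper and have the translator select between them once, instead of testing the CPU's current data endianness on every executed load or store. A generic sketch with invented names:

    #include <stdint.h>

    typedef void contiguous_ld_fn(void *vd, const void *host, intptr_t bytes);

    static void ld_elements_le(void *vd, const void *host, intptr_t bytes)
    {
        (void)vd; (void)host; (void)bytes;  /* little-endian element copy */
    }

    static void ld_elements_be(void *vd, const void *host, intptr_t bytes)
    {
        (void)vd; (void)host; (void)bytes;  /* big-endian element copy */
    }

    /* Decided once, when the TB is translated, not per executed access. */
    static contiguous_ld_fn *select_ld_fn(int big_endian)
    {
        return big_endian ? ld_elements_be : ld_elements_le;
    }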
2018-10-08  target/arm: Rewrite helper_sve_st[1234]*_r  (Richard Henderson)
This fixes the endianness problem for softmmu, and moves the main loop out of a macro and into an inlined function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Rewrite helper_sve_ld[234]*_r  (Richard Henderson)
Use the same *_tlb primitives as we use for ld1. For linux-user, this hoists the set of helper_retaddr. For softmmu, hoists the computation of the current mmu_idx outside the loop, fixes the endianness problem, and moves the main loop out of a macro and into an inlined function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Rewrite helper_sve_ld1*_r using pages  (Richard Henderson)
Uses tlb_vaddr_to_host for correct operation with softmmu. Optimize for accesses within a single page or pair of pages. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Clear unused predicate bits for LD1RQ  (Richard Henderson)
The 16-byte load only uses 16 predicate bits. But while reusing the other load infrastructure, we find other bits that are set and trigger an assert. To avoid this and retain the assert, zero-extend the predicate that we pass to the LD1 helper. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
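The fix amounts to masking the predicate down to the bits the 16-byte load can actually use before reusing the general LD1 path. A sketch for byte elements, where there is one predicate bit per byte:

    #include <stdint.h>

    /* LD1RQ accesses exactly 16 bytes, so at most the low 16 predicate
     * bits are meaningful; clearing the rest keeps the shared LD1 code's
     * "no stray predicate bits" assertion valid.
     */
    static uint64_t ld1rq_predicate(uint64_t pred)
    {
        return pred & 0xffff;
    }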
2018-10-08  target/arm: Adjust aarch64_cpu_dump_state for system mode SVE  (Richard Henderson)
Use the existing helpers to determine (1) whether the FPU is enabled, (2) whether SVE state is enabled, and (3) the current SVE vector length. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Handle SVE vector length changes in system mode  (Richard Henderson)
SVE vector length can change when changing EL, or when writing to one of the ZCR_ELn registers. For correctness, our implementation requires that predicate bits that are inaccessible are never set. Which means noticing length changes and zeroing the appropriate register bits. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
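A standalone sketch of the trimming step when the effective vector length shrinks, with plain arrays standing in for the real register file (lengths in 128-bit quadwords; one predicate bit per vector byte):

    #include <stdint.h>
    #include <string.h>

    /* Zero the Z-register tail and the matching predicate bits that become
     * inaccessible when the effective VL drops from old_vq to new_vq, so a
     * later widening never exposes stale state.
     */
    static void sve_narrow(uint64_t *zreg, uint64_t *preg,
                           unsigned old_vq, unsigned new_vq)
    {
        unsigned q;

        if (new_vq >= old_vq) {
            return;
        }
        /* Each quadword is two 64-bit lanes of the Z register... */
        memset(&zreg[2 * new_vq], 0, (old_vq - new_vq) * 2 * sizeof(uint64_t));
        /* ...and 16 predicate bits (one per byte). */
        for (q = new_vq; q < old_vq; q++) {
            preg[q / 4] &= ~((uint64_t)0xffff << ((q % 4) * 16));
        }
    }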
2018-10-08  target/arm: Pass in current_el to fp and sve_exception_el  (Richard Henderson)
We are going to want to determine whether SVE is enabled for an EL other than the current one. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Adjust sve_exception_el  (Richard Henderson)
Check for EL3 before testing CPTR_EL3.EZ. Return 0 when the exception should be routed via AdvSIMDFPAccessTrap. Mirror the structure of CheckSVEEnabled more closely. Fixes: 5be5e8eda78 Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Define ID_AA64ZFR0_EL1  (Richard Henderson)
Given that the only field defined for this new register may only be 0, we don't actually need to change anything except the name. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Don't read r4 from v8M exception stackframe twice  (Peter Maydell)
A cut-and-paste error meant we were reading r4 from the v8M callee-saves exception stack frame twice. This is harmless since it just meant we did two memory accesses to the same location, but it's unnecessary. Delete it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002150304.2287-1-peter.maydell@linaro.org
2018-10-08  target/arm: Correct condition for v8M callee stack push  (Peter Maydell)
In v7m_exception_taken() we were incorrectly using a "LR bit EXCRET.ES is 1" check when it should be 0 (compare the pseudocode ExceptionTaken() function). This meant we didn't stack the callee-saved registers when tailchaining from a NonSecure to a Secure exception. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002145940.30931-1-peter.maydell@linaro.org
2018-10-08  target/arm: fix code comments error  (Dongjiu Geng)
The parameter of kvm_arm_init_cpreg_list() is ARMCPU instead of CPUState, so correct the note to make it match the code. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Message-id: 1538069046-5757-1-git-send-email-gengdongjiu@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>