path: root/target/arm/internals.h
2020-10-20  target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != {1,1}  (Richard Henderson)

Unlike many other bits in HCR_EL2, the description for this bit does not contain the phrase "if ... this field behaves as 0 for all purposes other than", so do not squash the bit in arm_hcr_el2_eff. Instead, replicate the E2H+TGE test in the two places that require it.

Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
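A minimal sketch of the E2H+TGE test mentioned above, with illustrative HCR bit constants (the real definitions live in QEMU's headers); it only shows the shape of the check that the two call sites replicate:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative values; QEMU defines the real HCR_* constants. */
    #define HCR_TGE (1ULL << 27)
    #define HCR_E2H (1ULL << 34)

    /* True when HCR_EL2.{E2H,TGE} == {1,1}; callers use this to decide
     * whether HCR_EL2.ATA participates in the tag-access decision. */
    static bool hcr_e2h_tge_both_set(uint64_t hcr)
    {
        return (hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);
    }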
2020-06-26  target/arm: Always pass cacheattr to get_phys_addr  (Richard Henderson)

We need to check the memattr of a page in order to determine whether it is Tagged for MTE. Between Stage1 and Stage2, this becomes simpler if we always collect this data, instead of occasionally being presented with NULL. Use the nonnull attribute to allow the compiler to check that all pointer arguments are non-null.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-42-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add mte helpers for sve scalar + int loads  (Richard Henderson)

Because the elements are sequential, we can eliminate many tests all at once when the tag hits TCMA, or if the page(s) are not Tagged.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement helper_mte_checkN  (Richard Henderson)

Fill out the stub that was added earlier.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement helper_mte_check1  (Richard Henderson)

Fill out the stub that was added earlier.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add gen_mte_check1  (Richard Henderson)

Replace existing uses of check_data_tbi in translate-a64.c that perform a single logical memory access. Leave the helper blank for now to reduce the patch size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Move regime_tcr to internals.h  (Richard Henderson)

We will shortly need this in mte_helper.c as well.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Move regime_el to internals.h  (Richard Henderson)

We will shortly need this in mte_helper.c as well.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement the ADDG, SUBG instructions  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement the IRG instruction  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE bits to tb_flags  (Richard Henderson)

Cache the composite ATA setting. Cache when MTE is fully enabled, i.e. access to tags is enabled and tag checks affect the PE. Do this for both the normal context and the UNPRIV context.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE system registers  (Richard Henderson)

This is TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3, RGSR_EL1, GCR_EL1, GMID_EL1, and PSTATE.TCO.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-14  target-arm: kvm64: handle SIGBUS signal from kernel or KVM  (Dongjiu Geng)

Add a SIGBUS signal handler. In this handler, it checks the SIGBUS type, translates the host VA delivered by the host to a guest PA, fills this PA into guest APEI GHES memory, then notifies the guest according to the SIGBUS type. When the guest accesses the poisoned memory, it generates a Synchronous External Abort (SEA). The host kernel then gets an APEI notification and calls memory_failure() to unmap the affected page in stage 2, finally returning to the guest. The guest continues to access the PG_hwpoison page; it traps to KVM as a stage-2 fault, and a SIGBUS_MCEERR_AR synchronous signal is delivered to QEMU. QEMU records this error address into guest APEI GHES memory and notifies the guest using a Synchronous External Abort (SEA). In order to inject a vSEA, we introduce the kvm_inject_arm_sea() function, in which we set up the type of exception and the syndrome information. When switching to the guest, the target vcpu will jump to the synchronous external abort vector table entry. ESR_ELx.DFSC is set to synchronous external abort (0x10), and ESR_ELx.FnV is set to not valid (0x1), which tells the guest that FAR is not valid and holds an UNKNOWN value. These values are set in the KVM register structures through the KVM_SET_ONE_REG ioctl.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Xiang Zheng <zhengxiang9@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20200512030609.19593-10-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
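A rough sketch of the user-space half of this flow, using only standard Linux signal APIs: a SIGBUS handler installed with SA_SIGINFO can distinguish the two hwpoison flavours via si_code. The QEMU-specific steps (host VA to guest PA translation, filling guest APEI GHES memory, injecting the SEA) are indicated only as comments, and all names here are illustrative.

    #define _GNU_SOURCE       /* BUS_MCEERR_* are Linux-specific si_code values */
    #include <signal.h>
    #include <stddef.h>

    static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig;
        (void)ctx;
        switch (si->si_code) {
        case BUS_MCEERR_AR:
            /* Action required: the faulting host VA is si->si_addr.
             * Record it and raise a synchronous external abort in the guest. */
            break;
        case BUS_MCEERR_AO:
            /* Action optional: the error was reported asynchronously. */
            break;
        default:
            break;
        }
    }

    static void install_sigbus_handler(void)
    {
        struct sigaction act = { 0 };

        act.sa_sigaction = sigbus_handler;
        act.sa_flags = SA_SIGINFO;
        sigemptyset(&act.sa_mask);
        sigaction(SIGBUS, &act, NULL);
    }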
2020-05-11  target/arm: Remove sve_memopidx  (Richard Henderson)

None of the sve helpers use TCGMemOpIdx any longer, so we can stop passing it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Introduce core_to_aa64_mmu_idx  (Richard Henderson)

If by context we know that we're in AArch64 mode, we need not test for M-profile when reconstructing the full ARMMMUIdx.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200302175829.2183-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Move DBGDIDR into ARMISARegisters  (Peter Maydell)

We're going to want to read the DBGDIDR register from KVM in a subsequent commit, which means it needs to be in the ARMISARegisters sub-struct. Move it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-12-peter.maydell@linaro.org
2020-02-21  target/arm: Stop assuming DBGDIDR always exists  (Peter Maydell)

The AArch32 DBGDIDR defines properties like the number of breakpoints, watchpoints and context-matching comparators. On an AArch64 CPU, the register may not even exist if AArch32 is not supported at EL1. Currently we hard-code use of DBGDIDR to identify the number of breakpoints etc; this works for all our TCG CPUs, but will break if we ever add an AArch64-only CPU. We also have an assert() that the AArch32 and AArch64 registers match, which currently works only by luck for KVM because we don't populate either of these ID registers from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number of breakpoints, watchpoints and context comparators which look in the appropriate ID register. This allows us to drop the "check that AArch64 and AArch32 agree on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers in a codepath used with KVM anyway

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-11-peter.maydell@linaro.org
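A sketch of the "look in the appropriate ID register" idea; the field offsets used here (DBGDIDR.BRPs at [27:24], ID_AA64DFR0_EL1.BRPs at [15:12]) and the helper name are assumptions for illustration, not QEMU's actual code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Return the number of implemented breakpoints, reading the AArch64 ID
     * register when the CPU has AArch64, and DBGDIDR otherwise. Both fields
     * encode "count minus one". */
    static int num_breakpoints_sketch(bool aa64, uint32_t dbgdidr,
                                      uint64_t id_aa64dfr0)
    {
        if (aa64) {
            return (int)((id_aa64dfr0 >> 12) & 0xf) + 1;
        }
        return (int)((dbgdidr >> 24) & 0xf) + 1;
    }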
2020-02-21  target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers  (Peter Maydell)

Enforce a convention that an isar_feature function that tests a 32-bit ID register always has _aa32_ in its name, and one that tests a 64-bit ID register always has _aa64_ in its name. We already follow this except for three cases: thumb_div, arm_div and jazelle, which all need _aa32_ adding. (As noted in the comment, isar_feature_aa32_fp16_arith() is an exception in that it currently tests ID_AA64PFR0_EL1, but will switch to MVFR1 once we've properly implemented FP16 for AArch32.)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-2-peter.maydell@linaro.org
2020-02-21  target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid  (Richard Henderson)

For the purpose of rebuild_hflags_a64, we do not need to compute all of the va parameters, only tbi. Moreover, we can compute them in a form that is more useful for storing in hflags. This eliminates the need for aa64_va_parameter_both, so fold that into aa64_va_parameter. The remaining calls to aa64_va_parameter are in get_phys_addr_lpae and in pauth_helper.c. This reduces the total cpu consumption of aa64_va_parameter in a kernel boot plus a kvm guest kernel boot from 3% to 0.5%.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216194343.21331-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Update MSR access to UAO  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Enforce PAN semantics in get_S1prot  (Richard Henderson)

If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
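The rule stated above, written out as a minimal sketch; the parameter names are illustrative, not the actual get_S1prot() interface:

    #include <stdbool.h>

    /* In a PAN-enforcing translation regime, any page that grants EL0 access
     * grants no privileged access at all; otherwise keep the privileged
     * permissions as computed. */
    static int s1_prot_with_pan(bool pan_enforced, int user_rw, int priv_rw)
    {
        if (pan_enforced && user_rw != 0) {
            return 0;
        }
        return priv_rw;
    }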
2020-02-13  target/arm: Update MSR access for PAN  (Richard Henderson)

For aarch64, there's a dedicated msr (imm, reg) insn. For aarch32, this is done via msr to cpsr. Writes from el0 are ignored, which is already handled by the CPSR_USER mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Introduce aarch64_pstate_valid_mask  (Richard Henderson)

Use this along the exception return path, where we previously accepted any values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Mask CPSR_J when Jazelle is not enabled  (Richard Henderson)

The J bit signals Jazelle mode, and so of course is RES0 when the feature is not enabled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Split out aarch32_cpsr_valid_mask  (Richard Henderson)

Split this helper out of msr_mask in translate.c. At the same time, transform the negative reductive logic to positive accumulative logic. It will be usable along the exception paths. While touching msr_mask, fix up formatting.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
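A sketch of the "positive accumulative" style the message describes: start from bits that are always writable and OR in bits gated by features, rather than starting from ~0 and knocking bits out. The CPSR bit constants and feature hooks below are illustrative, not QEMU's definitions.

    #include <stdbool.h>
    #include <stdint.h>

    #define CPSR_NZCV   0xf0000000u          /* N, Z, C, V */
    #define CPSR_Q      (1u << 27)
    #define CPSR_J      (1u << 24)
    #define CPSR_GE     (0xfu << 16)
    #define CPSR_E      (1u << 9)
    #define CPSR_AIF    (7u << 6)
    #define CPSR_T      (1u << 5)
    #define CPSR_M      0x1fu

    static uint32_t cpsr_valid_mask_sketch(bool has_v6, bool has_jazelle,
                                           bool has_thumb2)
    {
        uint32_t mask = CPSR_NZCV | CPSR_Q | CPSR_AIF | CPSR_T | CPSR_M;

        if (has_v6) {
            mask |= CPSR_GE | CPSR_E;
        }
        if (has_jazelle) {
            mask |= CPSR_J;
        }
        if (has_thumb2) {
            /* IT bits live in CPSR[15:10] and CPSR[26:25]. */
            mask |= (0x3fu << 10) | (3u << 25);
        }
        return mask;
    }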
2020-02-13  target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled  (Richard Henderson)

To implement PAN, we will want to swap, for short periods of time, to a different privileged mmu_idx. In addition, we cannot do this with flushing alone, because the AT* instructions have both PAN and PAN-less versions. Add the ARMMMUIdx*_PAN constants where necessary next to the corresponding ARMMMUIdx* constant.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Add arm_mmu_idx_is_stage1_of_2  (Richard Henderson)

Use a common predicate for querying stage1-ness.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Add regime_has_2_ranges  (Richard Henderson)

Create a predicate to indicate whether the regime has both positive and negative addresses.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Reorganize ARMMMUIdx  (Richard Henderson)

Prepare for, but do not yet implement, the EL2&0 regime. This involves adding the new MMUIdx enumerators and adjusting some of the MMUIdx related predicates to match.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2  (Richard Henderson)

This is part of a reorganization to the set of mmu_idx. The non-secure EL2 regime only has a single stage translation; there is no point in pointing out that the idx is for stage1.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3  (Richard Henderson)

This is part of a reorganization to the set of mmu_idx. The EL3 regime only has a single stage translation, and is always secure.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]  (Richard Henderson)

This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the Secure EL1&0 regime.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*  (Richard Henderson)

This is part of a reorganization to the set of mmu_idx. The EL1&0 regime is the only one that uses 2-stage translation. Spelling out Stage avoids confusion with Secure.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2  (Richard Henderson)

The EL1&0 regime is the only one that uses 2-stage translation.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*  (Richard Henderson)

This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the EL1&0 regime. The ultimate goal is:

-- Non-secure regimes:
ARMMMUIdx_E10_0,
ARMMMUIdx_E20_0,
ARMMMUIdx_E10_1,
ARMMMUIdx_E2,
ARMMMUIdx_E20_2,

-- Secure regimes:
ARMMMUIdx_SE10_0,
ARMMMUIdx_SE10_1,
ARMMMUIdx_SE3,

-- Helper mmu_idx for non-secure EL1&0 stage1 and stage2:
ARMMMUIdx_Stage2,
ARMMMUIdx_Stage1_E0,
ARMMMUIdx_Stage1_E1,

The 'S' prefix is reserved for "Secure". Unless otherwise specified, each mmu_idx represents all stages of translation.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
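For orientation, the target set of enumerators listed above reads as a plain C enum like the sketch below; the names are taken from the commit message, but QEMU's real definitions carry additional encoding bits, so this is only an illustration of the naming scheme:

    /* Illustrative only: names from the commit message, no encoding bits. */
    typedef enum IllustrativeARMMMUIdx {
        /* Non-secure regimes */
        ARMMMUIdx_E10_0,
        ARMMMUIdx_E20_0,
        ARMMMUIdx_E10_1,
        ARMMMUIdx_E2,
        ARMMMUIdx_E20_2,
        /* Secure regimes */
        ARMMMUIdx_SE10_0,
        ARMMMUIdx_SE10_1,
        ARMMMUIdx_SE3,
        /* Helper mmu_idx for non-secure EL1&0 stage1 and stage2 */
        ARMMMUIdx_Stage2,
        ARMMMUIdx_Stage1_E0,
        ARMMMUIdx_Stage1_E1,
    } IllustrativeARMMMUIdx;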
2019-10-24  target/arm: Split out arm_mmu_idx_el  (Richard Henderson)

Avoid calling arm_current_el() twice.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Declare some M-profile functions publicly  (Philippe Mathieu-Daudé)

In the next commit we will split the M-profile functions from this file. Some of those functions will be called from outside helper.c. Declare them in the "internals.h" header.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-22-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Declare arm_log_exception() function publicly  (Philippe Mathieu-Daudé)

In a few commits we will split the M-profile functions from this file, and this function will also be called in the new file. Declare it in the "internals.h" header. Since it is in the middle of a block of M-profile functions, move it before this block to ease the later refactoring.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-21-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Restrict PSCI to TCG  (Philippe Mathieu-Daudé)

Under KVM, the kernel receives the HVC call and handles the PSCI requests.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-20-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Move TLB related routines to tlb_helper.c  (Philippe Mathieu-Daudé)

These routines are TCG specific. The arm_deliver_fault() function is only used within the new helper. Make it static.

Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-13-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Declare get_phys_addr() function publicly  (Philippe Mathieu-Daudé)

In the next commit we will split the TLB-related routines out of this file, and this function will also be called in the new file. Declare it in the "internals.h" header.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-12-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-05-10  target/arm: Convert to CPUClass::tlb_fill  (Richard Henderson)

Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-05  target/arm: Split helper_msr_i_pstate into 3  (Richard Henderson)

The EL0+UMA check is unique to DAIF. While SPSel had avoided the check by nature of already checking EL >= 1, the other post-v8.0 extensions to MSR (imm) allow EL0 and do not require UMA. Avoid the unconditional write to pc and use raise_exception_ra to unwind.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Compute TB_FLAGS for TBI for user-only  (Peter Maydell)

Enables, but does not turn on, TBI for CONFIG_USER_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190204132126.3255-4-richard.henderson@linaro.org
[PMM: adjusted #ifdeffery to placate clang, which otherwise complains about static functions that are unused in the CONFIG_USER_ONLY build]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-05  target/arm: Default handling of BTYPE during translation  (Richard Henderson)

The branch target exception for guarded pages has high priority, and only 8 instructions are valid for that case. Perform this check before doing any other decode. Clear BTYPE after all insns that neither set BTYPE nor exit via exception (DISAS_NORETURN). Not yet handled are insns that exit via DISAS_NORETURN for some other reason, like direct branches.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Decode TBID from TCR  (Richard Henderson)

Use TBID in aa64_va_parameters depending on the data parameter. This automatically updates all existing users of the function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
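The TBI/TBID interaction this relies on can be sketched as a small illustrative helper (not QEMU's code): when the relevant TBID bit is set, top-byte-ignore applies to data accesses only, so it is suppressed for instruction fetches.

    #include <stdbool.h>

    static bool tbi_applies(bool tbi, bool tbid, bool is_data_access)
    {
        /* TBI off: never applies. TBID set: applies to data accesses only. */
        return tbi && (is_data_access || !tbid);
    }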
2019-01-21  target/arm: Add aa64_va_parameters_both  (Richard Henderson)

We will want to check TBI for I and D simultaneously.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Export aa64_va_parameters to internals.h  (Richard Henderson)

We need to reuse this from helper-a64.c. Provide a stub definition for CONFIG_USER_ONLY. This matches the stub definitions that we removed for arm_regime_tbi{0,1} before.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
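The "stub for CONFIG_USER_ONLY" pattern mentioned here looks roughly like the sketch below; the struct, fields, and function name are placeholders standing in for QEMU's ARMVAParameters and aa64_va_parameters, not the actual definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder result type. */
    typedef struct {
        unsigned tsz;
        bool tbi;
    } VAParamsSketch;

    #ifdef CONFIG_USER_ONLY
    /* User-only builds have no system registers to consult, so the helper
     * collapses to a fixed answer that keeps callers compiling unchanged. */
    static inline VAParamsSketch va_params_sketch(uint64_t va, bool data)
    {
        (void)va;
        (void)data;
        return (VAParamsSketch){ .tsz = 0, .tbi = false };
    }
    #else
    /* System emulation: the real implementation reads TCR_ELx and friends. */
    VAParamsSketch va_params_sketch(uint64_t va, bool data);
    #endif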
2019-01-21  target/arm: Create ARMVAParameters and helpers  (Richard Henderson)

Split out functions to extract the virtual address parameters. Let the functions choose T0 or T1 address space half, if present. Extract (most of) the control bits that vary between EL or Tx.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-19-richard.henderson@linaro.org
[PMM: fixed minor checkpatch comment nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Introduce arm_stage1_mmu_idx  (Richard Henderson)

While we could expose stage_1_mmu_idx, the combination is probably going to be more useful.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>