path: root/target/arm/internals.h
2021-04-30  target/arm: Rename mte_probe1 to mte_probe  (Richard Henderson)
For consistency with the mte_check1 + mte_checkN merge to mte_check, rename the probe function as well. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210416183106.1516563-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-30  target/arm: Merge mte_check1, mte_checkN  (Richard Henderson)
The mte_check1 and mte_checkN functions are now identical. Drop mte_check1 and rename mte_checkN to mte_check. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210416183106.1516563-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
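Taken together with the rename in the entry above, the MTE entry points exported from internals.h end up looking roughly like this sketch (shapes from memory, not the verbatim declarations):

    /* Sketch: the merged MTE check plus the non-faulting probe variant. */
    uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
    bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);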
2021-04-30  target/arm: Replace MTEDESC ESIZE+TSIZE with SIZEM1  (Richard Henderson)
After recent changes, mte_checkN does not use ESIZE, and mte_check1 never used TSIZE. We can combine the two into a single field: SIZEM1. Choose to pass size - 1 because size == 0 is never used, our immediate need in mte_probe_int is for the address of the last byte (ptr + size - 1), and since almost all operations are powers of 2, this makes the immediate constant one bit smaller. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210416183106.1516563-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
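A small sketch of the arithmetic described above; FIELD_DP32/FIELD_EX32 are QEMU's register-field macros, and the exact bit position of SIZEM1 within MTEDESC is an assumption here:

    /* Encode size - 1 at translate time (field position illustrative). */
    desc = FIELD_DP32(desc, MTEDESC, SIZEM1, size - 1);

    /* In mte_probe_int, the last byte of the access falls out directly. */
    uint64_t ptr_last = ptr + FIELD_EX32(desc, MTEDESC, SIZEM1);

    /* For a power-of-two size such as 16, storing 15 (0xf) needs one bit
     * fewer than storing 16 (0x10). */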
2021-03-05  target/arm: Add support for FEAT_SSBS, Speculative Store Bypass Safe  (Rebecca Cran)
Add support for FEAT_SSBS. SSBS (Speculative Store Bypass Safe) is an optional feature in ARMv8.0, and mandatory in ARMv8.5. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210216224543.16142-2-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-18  exec: Move TranslationBlock typedef to qemu/typedefs.h  (Richard Henderson)
This also means we don't need an extra declaration of the structure in hw/core/cpu.h. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210208233906.479571-2-richard.henderson@linaro.org> Message-Id: <20210213130325.14781-11-alex.bennee@linaro.org>
2021-02-16  target/arm: Split out syndrome.h from internals.h  (Richard Henderson)
Move everything related to syndromes to a new file, which can be shared with linux-user. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210212184902.1251044-26-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-16  target/arm: Use the proper TBI settings for linux-user  (Richard Henderson)
We were fudging TBI1 enabled to speed up the generated code. Now that we've improved the code generation, remove this. Also, tidy the comment to reflect the current code. The pauth test was testing a kernel address (-1) and making incorrect assumptions about TBI1; stick to userland addresses. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210212184902.1251044-23-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-11  target/arm: Add support for FEAT_DIT, Data Independent Timing  (Rebecca Cran)
Add support for FEAT_DIT. DIT (Data Independent Timing) is a required feature for ARMv8.4. Since virtual machine execution is largely nondeterministic and TCG is outside of the security domain, it's implemented as a NOP. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210208065700.19454-2-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-05  cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass  (Claudio Fontana)
We cannot, in principle, make the TCG operations field definitions conditional on CONFIG_TCG in code that is included by both common_ss and specific_ss modules. Therefore, what we can safely do to restrict the TCG fields to TCG-only builds is to move all TCG CPU operations into a separate header file, which is only included by TCG, target-specific code. This leaves just a NULL pointer in cpu.h for the non-TCG builds. This also tidies up the code in all targets a bit, having all TCG CPU operations neatly contained in a dedicated data struct. Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-16-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-19  target/arm: Introduce PREDDESC field definitions  (Richard Henderson)
SVE predicate operations cannot use the "usual" simd_desc encoding, because the lengths are not a multiple of 8. But we were abusing the SIMD_* fields to store values anyway. This abuse broke when SIMD_OPRSZ_BITS was modified in e2e7168a214. Introduce a new set of field definitions for exclusive use of predicates, so that it is obvious what kind of predicate we are manipulating. To be used in future patches. Cc: qemu-stable@nongnu.org Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210113062650.593824-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
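The shape of the new definitions is roughly the following (the bit widths are an assumption for illustration; the point is that predicate operations get their own FIELD() namespace instead of reusing the SIMD_* fields):

    /* Sketch: dedicated descriptor fields for SVE predicate operations. */
    FIELD(PREDDESC, OPRSZ, 0, 6)   /* operation size in bytes */
    FIELD(PREDDESC, ESZ,   6, 2)   /* element size, log2 */
    FIELD(PREDDESC, DATA,  8, 24)  /* operation-specific payload */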
2021-01-19  target/arm: set HPFAR_EL2.NS on secure stage 2 faults  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: secure stage 2 translation regime  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: add MMU stage 1 for Secure EL2  (Rémi Denis-Courmont)
This adds the MMU indices for EL2 stage 1 in secure state. To keep the code contained (it is largely identical between secure and non-secure modes), the MMU indices are reassigned. The new assignments provide a systematic pattern with a non-secure bit. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
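One way to read "a systematic pattern with a non-secure bit": the secure and non-secure index for the same regime differ only in a single NS bit, so code can translate between them with a mask. A hedged sketch (the bit value and the exact enumerator relationships are assumptions):

    /* Sketch: an NS bit inside the A-profile mmu_idx encoding. */
    #define ARM_MMU_IDX_A_NS  0x8

    /* The non-secure index is the secure one with the NS bit set, e.g.
     * ARMMMUIdx_E10_0 == (ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS). */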
2020-10-20  target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11  (Richard Henderson)
Unlike many other bits in HCR_EL2, the description for this bit does not contain the phrase "if ... this field behaves as 0 for all purposes other than", so do not squash the bit in arm_hcr_el2_eff. Instead, replicate the E2H+TGE test in the two places that require it. Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Message-id: 20201008162155.161886-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
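A hedged sketch of what replicating the E2H+TGE test looks like at a call site (the surrounding function is approximated; HCR_ATA, HCR_E2H and HCR_TGE are the existing HCR_EL2 bit definitions):

    /* Sketch: honour HCR_EL2.ATA only when {E2H,TGE} != {1,1}, since
     * arm_hcr_el2_eff() deliberately does not squash the bit itself. */
    uint64_t hcr = arm_hcr_el2_eff(env);
    if (el < 2
        && !(hcr & HCR_ATA)
        && (hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
        return false;   /* allocation tag access disabled by EL2 */
    }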
2020-06-26  target/arm: Always pass cacheattr to get_phys_addr  (Richard Henderson)
We need to check the memattr of a page in order to determine whether it is Tagged for MTE. Between Stage1 and Stage2, this becomes simpler if we always collect this data, instead of occasionally being presented with NULL. Use the nonnull attribute to allow the compiler to check that all pointer arguments are non-null. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-42-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add mte helpers for sve scalar + int loads  (Richard Henderson)
Because the elements are sequential, we can eliminate many tests all at once when the tag hits TCMA, or if the page(s) are not Tagged. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-34-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement helper_mte_checkN  (Richard Henderson)
Fill out the stub that was added earlier. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-27-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement helper_mte_check1  (Richard Henderson)
Fill out the stub that was added earlier. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-26-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add gen_mte_check1  (Richard Henderson)
Replace existing uses of check_data_tbi in translate-a64.c that perform a single logical memory access. Leave the helper blank for now to reduce the patch size. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-24-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Move regime_tcr to internals.h  (Richard Henderson)
We will shortly need this in mte_helper.c as well. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-23-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Move regime_el to internals.h  (Richard Henderson)
We will shortly need this in mte_helper.c as well. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-22-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement the ADDG, SUBG instructions  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Implement the IRG instruction  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE bits to tb_flags  (Richard Henderson)
Cache the composite ATA setting. Cache when MTE is fully enabled, i.e. access to tags is enabled and tag checks affect the PE. Do this for both the normal context and the UNPRIV context. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-06-26  target/arm: Add MTE system registers  (Richard Henderson)
This is TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3, RGSR_EL1, GCR_EL1, GMID_EL1, and PSTATE.TCO. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200626033144.790098-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-14  target-arm: kvm64: handle SIGBUS signal from kernel or KVM  (Dongjiu Geng)
Add a SIGBUS signal handler. In this handler, it checks the SIGBUS type, translates the host VA delivered by the host to a guest PA, fills this PA into the guest APEI GHES memory, and then notifies the guest according to the SIGBUS type. When the guest accesses the poisoned memory, it will generate a Synchronous External Abort (SEA). The host kernel then gets an APEI notification and calls memory_failure() to unmap the affected page in stage 2, finally returning to the guest. When the guest continues to access the PG_hwpoison page, it will trap to KVM as a stage 2 fault, and a SIGBUS_MCEERR_AR synchronous signal is delivered to QEMU; QEMU records this error address into the guest APEI GHES memory and notifies the guest using a Synchronous External Abort (SEA). In order to inject a vSEA, we introduce the kvm_inject_arm_sea() function in which we can set up the type of exception and the syndrome information. When switching to the guest, the target vcpu will jump to the synchronous external abort vector table entry. ESR_ELx.DFSC is set to synchronous external abort (0x10), and ESR_ELx.FnV is set to not valid (0x1), which tells the guest that FAR is not valid and holds an UNKNOWN value. These values will be set in the KVM register structures through the KVM_SET_ONE_REG ioctl. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Xiang Zheng <zhengxiang9@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-id: 20200512030609.19593-10-gengdongjiu@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
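The syndrome values mentioned above come straight from the ARMv8 ESR_ELx layout. A sketch of the relevant ISS bits (only the DFSC and FnV encodings are from the text; the rest is illustrative):

    /* Sketch: ISS bits for a synchronous external abort with FAR invalid. */
    uint32_t iss = 0;
    iss |= 0x10;       /* DFSC = 0b010000: synchronous external abort */
    iss |= 1u << 10;   /* FnV = 1: FAR is not valid, holds an UNKNOWN value */
    /* The EC field of ESR_ELx must still indicate a data abort before the
     * value is written back with KVM_SET_ONE_REG. */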
2020-05-11  target/arm: Remove sve_memopidx  (Richard Henderson)
None of the sve helpers use TCGMemOpIdx any longer, so we can stop passing it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Introduce core_to_aa64_mmu_idx  (Richard Henderson)
If by context we know that we're in AArch64 mode, we need not test for M-profile when reconstructing the full ARMMMUIdx. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200302175829.2183-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-21  target/arm: Move DBGDIDR into ARMISARegisters  (Peter Maydell)
We're going to want to read the DBGDIDR register from KVM in a subsequent commit, which means it needs to be in the ARMISARegisters sub-struct. Move it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-12-peter.maydell@linaro.org
2020-02-21  target/arm: Stop assuming DBGDIDR always exists  (Peter Maydell)
The AArch32 DBGDIDR defines properties like the number of breakpoints, watchpoints and context-matching comparators. On an AArch64 CPU, the register may not even exist if AArch32 is not supported at EL1. Currently we hard-code use of DBGDIDR to identify the number of breakpoints etc; this works for all our TCG CPUs, but will break if we ever add an AArch64-only CPU. We also have an assert() that the AArch32 and AArch64 registers match, which currently works only by luck for KVM because we don't populate either of these ID registers from the KVM vCPU and so they are both zero. Clean this up so we have functions for finding the number of breakpoints, watchpoints and context comparators which look in the appropriate ID register. This allows us to drop the "check that AArch64 and AArch32 agree on the number of breakpoints etc" asserts:
 * we no longer look at the AArch32 versions unless that's the right place to be looking
 * it's valid to have a CPU (eg AArch64-only) where they don't match
 * we shouldn't have been asserting the validity of ID registers in a codepath used with KVM anyway
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200214175116.9164-11-peter.maydell@linaro.org
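The resulting accessors take roughly this shape (one of the three shown, reconstructed from memory; watchpoints and context comparators follow the same pattern):

    /* Sketch: number of breakpoints, read from the right ID register. */
    static inline int arm_num_brps(ARMCPU *cpu)
    {
        if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
        } else {
            return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
        }
    }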
2020-02-21  target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers  (Peter Maydell)
Enforce a convention that an isar_feature function that tests a 32-bit ID register always has _aa32_ in its name, and one that tests a 64-bit ID register always has _aa64_ in its name. We already follow this except for three cases: thumb_div, arm_div and jazelle, which all need _aa32_ adding. (As noted in the comment, isar_feature_aa32_fp16_arith() is an exception in that it currently tests ID_AA64PFR0_EL1, but will switch to MVFR1 once we've properly implemented FP16 for AArch32.) Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200214175116.9164-2-peter.maydell@linaro.org
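An example of the convention, using one of the three renamed cases (the body is a sketch of the existing test, not new behaviour):

    /* jazelle tests a 32-bit ID register, so the name carries _aa32_. */
    static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
    }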
2020-02-21  target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid  (Richard Henderson)
For the purpose of rebuild_hflags_a64, we do not need to compute all of the va parameters, only tbi. Moreover, we can compute them in a form that is more useful to storing in hflags. This eliminates the need for aa64_va_parameter_both, so fold that in to aa64_va_parameter. The remaining calls to aa64_va_parameter are in get_phys_addr_lpae and in pauth_helper.c. This reduces the total cpu consumption of aa64_va_parameter in a kernel boot plus a kvm guest kernel boot from 3% to 0.5%. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200216194343.21331-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
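A sketch of the hflags-friendly form: TBI comes back as a two-bit value, one bit per address range, so it can be stored directly. The stage-2 case and the exact bit numbers below are reconstructed from the TCR layout and may not match the source exactly:

    static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
    {
        if (regime_has_2_ranges(mmu_idx)) {
            return extract64(tcr, 37, 2);        /* TBI0 and TBI1 */
        } else if (mmu_idx == ARMMMUIdx_Stage2) {
            return 0;                            /* VTCR_EL2 has no TBI */
        } else {
            /* Replicate the single TBI bit so there are always 2 bits. */
            return extract32(tcr, 20, 1) * 3;
        }
    }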
2020-02-13  target/arm: Update MSR access to UAO  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-19-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Enforce PAN semantics in get_S1prot  (Richard Henderson)
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
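A minimal sketch of that rule as it reads inside get_S1prot (regime_is_pan is the companion predicate from the same series; variable names are approximate):

    /* Sketch: PAN forbids privileged data access to any EL0-accessible page,
     * but does not affect instruction fetch. */
    if (!is_user && user_rw && regime_is_pan(env, mmu_idx)) {
        prot_rw = 0;
    }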
2020-02-13  target/arm: Update MSR access for PAN  (Richard Henderson)
For aarch64, there's a dedicated msr (imm, reg) insn. For aarch32, this is done via msr to cpsr. Writes from el0 are ignored, which is already handled by the CPSR_USER mask. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Introduce aarch64_pstate_valid_mask  (Richard Henderson)
Use this along the exception return path, where we previously accepted any values. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Mask CPSR_J when Jazelle is not enabled  (Richard Henderson)
The J bit signals Jazelle mode, and so of course is RES0 when the feature is not enabled. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200208125816.14954-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Split out aarch32_cpsr_valid_mask  (Richard Henderson)
Split this helper out of msr_mask in translate.c. At the same time, transform the negative reductive logic to positive accumulative logic. It will be usable along the exception paths. While touching msr_mask, fix up formatting. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200208125816.14954-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
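A sketch of the positive accumulative form, including the Jazelle-gated J bit from the entry above (the feature tests shown are an abridged selection, not the full list):

    static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
                                                   const ARMISARegisters *id)
    {
        uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;

        if ((features >> ARM_FEATURE_THUMB2) & 1) {
            valid |= CPSR_IT;
        }
        if (isar_feature_jazelle(id)) {   /* later renamed isar_feature_aa32_jazelle */
            valid |= CPSR_J;
        }
        return valid;
    }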
2020-02-13  target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled  (Richard Henderson)
To implement PAN, we will want to swap, for short periods of time, to a different privileged mmu_idx. In addition, we cannot do this with flushing alone, because the AT* instructions have both PAN and PAN-less versions. Add the ARMMMUIdx*_PAN constants where necessary next to the corresponding ARMMMUIdx* constant. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-13  target/arm: Add arm_mmu_idx_is_stage1_of_2  (Richard Henderson)
Use a common predicate for querying stage1-ness. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200208125816.14954-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Add regime_has_2_ranges  (Richard Henderson)
Create a predicate to indicate whether the regime has both positive and negative addresses. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-21-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
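The predicate is essentially a switch over the translation regimes that use both TTBR0 and TTBR1, i.e. both the low (positive) and high (negative) halves of the address space. A rough sketch using the post-rename index names (the case list is abridged):

    static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
    {
        switch (mmu_idx) {
        case ARMMMUIdx_Stage1_E0:
        case ARMMMUIdx_Stage1_E1:
        case ARMMMUIdx_E10_0:
        case ARMMMUIdx_E10_1:
        case ARMMMUIdx_E20_0:
        case ARMMMUIdx_E20_2:
        case ARMMMUIdx_SE10_0:
        case ARMMMUIdx_SE10_1:
            return true;
        default:
            return false;
        }
    }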
2020-02-07  target/arm: Reorganize ARMMMUIdx  (Richard Henderson)
Prepare for, but do not yet implement, the EL2&0 regime. This involves adding the new MMUIdx enumerators and adjusting some of the MMUIdx related predicates to match. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2  (Richard Henderson)
This is part of a reorganization to the set of mmu_idx. The non-secure EL2 regime only has a single stage translation; there is no point in pointing out that the idx is for stage1. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3  (Richard Henderson)
This is part of a reorganization to the set of mmu_idx. The EL3 regime only has a single stage translation, and is always secure. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]  (Richard Henderson)
This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the Secure EL1&0 regime. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*  (Richard Henderson)
This is part of a reorganization to the set of mmu_idx. The EL1&0 regime is the only one that uses 2-stage translation. Spelling out Stage avoids confusion with Secure. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2  (Richard Henderson)
The EL1&0 regime is the only one that uses 2-stage translation. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-07  target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*  (Richard Henderson)
This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the EL1&0 regime. The ultimate goal is:
 -- Non-secure regimes:
    ARMMMUIdx_E10_0, ARMMMUIdx_E20_0, ARMMMUIdx_E10_1, ARMMMUIdx_E2, ARMMMUIdx_E20_2,
 -- Secure regimes:
    ARMMMUIdx_SE10_0, ARMMMUIdx_SE10_1, ARMMMUIdx_SE3,
 -- Helper mmu_idx for non-secure EL1&0 stage1 and stage2:
    ARMMMUIdx_Stage2, ARMMMUIdx_Stage1_E0, ARMMMUIdx_Stage1_E1
The 'S' prefix is reserved for "Secure". Unless otherwise specified, each mmu_idx represents all stages of translation. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200206105448.4726-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24  target/arm: Split out arm_mmu_idx_el  (Richard Henderson)
Avoid calling arm_current_el() twice. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20191023150057.25731-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Declare some M-profile functions publicly  (Philippe Mathieu-Daudé)
In the next commit we will split the M-profile functions from this file. Some functions will be called from outside helper.c. Declare them in the "internals.h" header. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701132516.26392-22-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>