path: root/target/arm/ptw.c
2023-05-14  target/arm: Correct AArch64.S2MinTxSZ 32-bit EL1 input size check  (Peter Maydell)

In check_s2_mmu_setup() we have a check that is attempting to implement the part of AArch64.S2MinTxSZ that is specific to when EL1 is AArch32:

    if !s1aarch64 then
        // EL1 is AArch32
        min_txsz = Min(min_txsz, 24);

Unfortunately we got this wrong in two ways:

(1) The minimum txsz corresponds to a maximum inputsize, but we got the sense of the comparison wrong and were faulting for all inputsizes less than 40 bits.

(2) We try to implement this as an extra check that happens after we've done the same txsz checks we would do for an AArch64 EL1, but in fact the pseudocode is *loosening* the requirements, so that txsz values that would fault for an AArch64 EL1 do not fault for AArch32 EL1, because it does Min(old_min, 24), not Max(old_min, 24).

You can see this also in the text of the Arm ARM in table D8-8, which shows that where the implemented PA size is less than 40 bits an AArch32 EL1 is still OK with a configured stage2 T0SZ for a 40 bit IPA, whereas if EL1 is AArch64 then the T0SZ must be big enough to constrain the IPA to the implemented PA size.

Because of part (2), we can't do this as a separate check, but have to integrate it into aa64_va_parameters(). Add a new argument to that function to indicate that EL1 is 32-bit. All the existing callsites except the one in get_phys_addr_lpae() can pass 'false', because they are either doing a lookup for a stage 1 regime or else they don't care about the tsz/tsz_oob fields.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1627
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230509092059.3176487-1-peter.maydell@linaro.org
(cherry picked from commit 478dccbb99db0bf8f00537dd0b4d0de88d5cb537)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
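A rough sketch of the shape this loosening takes once integrated into aa64_va_parameters(); the argument name 'el1_is_aa32' and the 'min_tsz' local are assumptions for illustration, not necessarily the committed identifiers:

    if (regime_is_stage2(mmu_idx) && el1_is_aa32) {
        /*
         * Stage 2 walk on behalf of an AArch32 EL1: the architecture
         * relaxes the minimum T0SZ (Min(old_min, 24), not Max), so a
         * 40-bit IPA is acceptable even when PAMax is smaller.
         */
        min_tsz = MIN(min_tsz, 24);
    }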
2023-05-14  target/arm: Fix handling of SW and NSW bits for stage 2 walks  (Peter Maydell)

We currently don't correctly handle the VSTCR_EL2.SW and VTCR_EL2.NSW configuration bits. These allow configuration of whether the stage 2 page table walks for Secure IPA and NonSecure IPA should do their descriptor reads from Secure or NonSecure physical addresses. (This is separate from how the translation table base address and other parameters are set: an NS IPA always uses VTTBR_EL2 and VTCR_EL2 for its base address and walk parameters, regardless of the NSW bit, and similarly for Secure.)

Provide a new function ptw_idx_for_stage_2() which returns the MMU index to use for descriptor reads, and use it to set up the .in_ptw_idx wherever we call get_phys_addr_lpae().

For a stage 2 walk, wherever we call get_phys_addr_lpae():
 * .in_ptw_idx should be ptw_idx_for_stage_2() of the .in_mmu_idx
 * .in_secure should be true if .in_mmu_idx is Stage2_S

This allows us to correct S1_ptw_translate() so that it consistently always sets its (out_secure, out_phys) to the result it gets from the S2 walk (either by calling get_phys_addr_lpae() or by TLB lookup). This makes better conceptual sense because the S2 walk should return us an (address space, address) tuple, not an address that we then randomly assign to S or NS.

Our previous handling of SW and NSW was broken, so guest code trying to use these bits to put the s2 page tables in the "other" address space wouldn't work correctly.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230504135425.2748672-3-peter.maydell@linaro.org
(cherry picked from commit fcc0b0418fff655f20fd0cf86a1bbdc41fd2e7c6)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
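An illustrative sketch of what a helper like ptw_idx_for_stage_2() can look like. The mmu index names follow those mentioned elsewhere in this log; the cp15 field and bit-mask spellings (vstcr_el2/VSTCR_SW, vtcr_el2/VTCR_NSW) are plausible QEMU-style identifiers and should be read as assumptions, not the committed code:

    static ARMMMUIdx ptw_idx_for_stage_2(CPUARMState *env, ARMMMUIdx stage2idx)
    {
        bool s2walk_secure;

        /*
         * SW / NSW select the address space used for the stage 2
         * descriptor reads themselves, independently of which
         * VTTBR/VTCR pair supplies the base address and walk parameters.
         */
        if (stage2idx == ARMMMUIdx_Stage2_S) {
            s2walk_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
        } else {
            s2walk_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
        }
        return s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
    }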
2023-04-10  target/arm: Copy guarded bit in combine_cacheattrs  (Richard Henderson)
The guarded bit comes from the stage1 walk. Fixes: Coverity CID 1507929 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230407185149.3253946-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-04-10  target/arm: PTE bit GP only applies to stage1  (Richard Henderson)
Only perform the extract of GP during the stage1 walk. Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230407185149.3253946-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-06  target/arm: Rewrite check_s2_mmu_setup  (Richard Henderson)
Integrate neighboring code from get_phys_addr_lpae which computed starting level, as it is easier to validate when doing both at the same time. Mirror the checks at the start of AArch{64,32}.S2Walk, especially S2InvalidSL and S2InconsistentSL. This reverts 49ba115bb74, which was incorrect -- there is nothing in the ARM pseudocode that depends on TxSZ, i.e. outputsize; the pseudocode is consistent in referencing PAMax. Fixes: 49ba115bb74 ("target/arm: Pass outputsize down to check_s2_mmu_setup") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230227225832.816605-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-28  accel/tcg: Add 'size' param to probe_access_full  (Richard Henderson)
Change to match the recent change to probe_access_flags. All existing callers updated to supply 0, so no change in behaviour. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-28  accel/tcg: Add 'size' param to probe_access_flags()  (Daniel Henrique Barboza)

probe_access_flags() as it is today uses probe_access_full(), which in turn uses probe_access_internal() with size = 0. probe_access_internal() then uses the size to call the tlb_fill() callback for the given CPU. This size param ('fault_size' as probe_access_internal() calls it) is ignored by most existing .tlb_fill callback implementations, e.g. arm_cpu_tlb_fill(), ppc_cpu_tlb_fill(), x86_cpu_tlb_fill() and mips_cpu_tlb_fill() to name a few.

But RISC-V riscv_cpu_tlb_fill() actually uses it. The 'size' parameter is used to check for PMP (Physical Memory Protection) access. This is necessary because PMP does not make any guarantees about all the bytes of the same page having the same permissions, i.e. the same page can have different PMP properties, so we're forced to make sub-page range checks.

To allow RISC-V emulation to do a probe_access_flags() that covers PMP, we need to either add a 'size' param to the existing probe_access_flags() or create a new interface (e.g. probe_access_range_flags). There are quite a few probe_* APIs already, so let's add a 'size' param to probe_access_flags() and re-use this API. This is done by open coding what probe_access_full() does inside probe_access_flags() and passing the 'size' param to probe_access_internal(). Existing probe_access_flags() callers use size = 0 to not change their current API usage.

'size' is asserted to enforce single page access like probe_access() already does. No behavioral changes intended.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230223234427.521114-2-dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
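For reference, a hedged sketch of the interface this describes and of a caller that keeps the old behaviour by passing size = 0; the exact parameter ordering is an assumption based on the description above, not a quotation of the header:

    /*
     * 'size' lets targets such as RISC-V perform sub-page (e.g. PMP)
     * permission checks; passing 0 preserves the previous behaviour.
     */
    int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
                           MMUAccessType access_type, int mmu_idx,
                           bool nonfault, void **phost, uintptr_t retaddr);

    /* Existing callers simply pass 0 for size: */
    flags = probe_access_flags(env, addr, 0, MMU_DATA_LOAD,
                               mmu_idx, true, &host, retaddr);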
2023-02-27  target/arm: Don't access TCG code when debugging with KVM  (Fabiano Rosas)
When TCG is disabled this part of the code should not be reachable, so wrap it with an ifdef for now. Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-03  target/arm: Fix physical address resolution for Stage2  (Richard Henderson)
Conversion to probe_access_full missed applying the page offset. Cc: qemu-stable@nongnu.org Reported-by: Sid Manning <sidneym@quicinc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230126233134.103193-1-richard.henderson@linaro.org Fixes: f3639a64f602 ("target/arm: Use softmmu tlbs for page table walking") Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23  target/arm: Fix in_debug path in S1_ptw_translate  (Richard Henderson)
During the conversion, the test against get_phys_addr_lpae got inverted, meaning that successful translations went to the 'failed' label. Cc: qemu-stable@nongnu.org Fixes: f3639a64f60 ("target/arm: Use softmmu tlbs for page table walking") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1417 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230114054605.2977022-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Add PMSAv8r functionality  (Tobias Röhmel)
Add PMSAv8r translation. Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Make stage_2_format for cache attributes optional  (Tobias Röhmel)

The v8R PMSAv8 has a two-stage MPU translation process, but, unlike VMSAv8, the stage 2 attributes are in the same format as the stage 1 attributes (8-bit MAIR format). Rather than converting the MAIR format to the format used for VMSA stage 2 (bits [5:2] of a VMSA stage 2 descriptor) and then converting back to do the attribute combination, allow combined_attrs_nofwb() to accept s2 attributes that are already in the MAIR format.

We move the assert() to combined_attrs_fwb(), because that function really does require a VMSA stage 2 attribute format. (We will never get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.)

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05  target/arm: Set lg_page_size to 0 if either S1 or S2 asks for it  (Peter Maydell)

In get_phys_addr_twostage() we set the lg_page_size of the result to the maximum of the stage 1 and stage 2 page sizes. This works for the case where we do want to create a TLB entry, because we know the common TLB code only creates entries of the TARGET_PAGE_SIZE and asking for a size larger than that only means that invalidations invalidate the whole larger area.

However, if lg_page_size is smaller than TARGET_PAGE_SIZE this effectively means "don't create a TLB entry"; in this case if either S1 or S2 said "this covers less than a page and can't go in a TLB" then the final result also should be marked that way. Set the resulting page size to 0 if either stage asked for a less-than-a-page entry, and expand the comment to explain what's going on.

This has no effect for VMSA because currently the VMSA lookup always returns results that cover at least TARGET_PAGE_SIZE; however when we add v8R support it will reuse this code path, and for v8R the S1 and S2 results can be smaller than TARGET_PAGE_SIZE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
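A sketch of the combined page-size logic in get_phys_addr_twostage() after this change; 's1_lgpgsz' stands for the stage 1 size saved before the stage 2 walk and is an illustrative name:

    /*
     * If either stage covers less than a page (lg_page_size <
     * TARGET_PAGE_BITS), the combined result must not be cached in the
     * TLB either; otherwise use the larger of the two page sizes so an
     * invalidation covers the whole region.
     */
    if (result->f.lg_page_size < TARGET_PAGE_BITS ||
        s1_lgpgsz < TARGET_PAGE_BITS) {
        result->f.lg_page_size = 0;
    } else if (result->f.lg_page_size < s1_lgpgsz) {
        result->f.lg_page_size = s1_lgpgsz;
    }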
2022-11-22  target/arm: Use signed quantity to represent VMSAv8-64 translation level  (Ard Biesheuvel)

The LPA2 extension implements 52-bit virtual addressing for 4k and 16k translation granules, and for the former, this means an additional level of translation is needed. This means we start counting at -1 instead of 0 when doing a walk, and so 'level' is now a signed quantity, and should be typed as such. So turn it from uint32_t into int32_t.

This avoids a level of -1 getting misinterpreted as being >= 3, and terminating a page table walk prematurely with a bogus output address.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
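A small self-contained illustration of why the type matters (not QEMU code): with an unsigned 'level', the LPA2 starting level of -1 wraps around and compares as >= 3, which is exactly the misinterpretation described above.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t ulevel = -1;   /* wraps to 0xffffffff */
        int32_t  slevel = -1;   /* stays -1 */

        /* Prints "1 0": the unsigned level looks like it is already past
         * the final level, the signed one does not. */
        printf("%d %d\n", ulevel >= 3, slevel >= 3);
        return 0;
    }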
2022-11-22  target/arm: Don't do two-stage lookup if stage 2 is disabled  (Peter Maydell)

In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if the CPU supports EL2. However, we don't check here that stage 2 is actually enabled. Instead we only check that inside get_phys_addr_twostage() to skip stage 2 translation. This means that even if stage 2 is disabled we still tell the stage 1 lookup to do its page table walks via stage 2.

This works by luck for normal CPU accesses, but it breaks for debug accesses, which are used by the disassembler and also by semihosting file reads and writes, because the debug case takes a different code path inside S1_ptw_translate(). This means that setups that use semihosting for file loads are broken (a regression since 7.1, introduced in recent ptw refactoring), and that sometimes disassembly in debug logs reports "unable to read memory" rather than showing the guest insns.

Fix the bug by hoisting the "is stage 2 enabled?" check up to get_phys_addr_with_struct(), so that we handle S2 disabled the same way we do the "no EL2" case, with a simple single stage lookup.

Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org
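Schematically, the hoisted check looks something like the following inside get_phys_addr_with_struct(); the argument lists are abbreviated guesses from the surrounding description, not the committed hunk:

    if (arm_feature(env, ARM_FEATURE_EL2) &&
        !regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
        /* Real two-stage lookup: stage 1 walks go via stage 2. */
        return get_phys_addr_twostage(env, ptw, address, access_type,
                                      result, fi);
    }
    /* No EL2, or stage 2 disabled: fall through to a single-stage lookup. */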
2022-11-21  target/arm: Limit LPA2 effective output address when TCR.DS == 0  (Ard Biesheuvel)

With LPA2, the effective output address size is at most 48 bits when TCR.DS == 0. This case is currently unhandled in the page table walker, where we happily assume LVA/64k granule when outputsize > 48 and param.ds == 0, resulting in the wrong conversion to be used from a page table descriptor to a physical address.

    if (outputsize > 48) {
        if (param.ds) {
            descaddr |= extract64(descriptor, 8, 2) << 50;
        } else {
            descaddr |= extract64(descriptor, 12, 4) << 48;
        }
    }

So cap the outputsize to 48 when TCR.DS is cleared, as per the architecture.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221116170316.259695-1-ardb@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
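The resulting clamp is small; a sketch of it is below. The granule condition is an assumption here: LVA with the 64k granule legitimately allows output addresses above 48 bits without TCR.DS, so the cap only applies to the 4k/16k (LPA2) granules.

    /*
     * With LPA2 (4k/16k granules), the effective output address size
     * is at most 48 bits unless TCR.DS == 1.
     */
    if (outputsize > 48 && !param.ds && param.gran != Gran64K) {
        outputsize = 48;
    }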
2022-11-04  target/arm: Two fixes for secure ptw  (Richard Henderson)
Reversed the sense of non-secure in get_phys_addr_lpae, and failed to initialize attrs.secure for ARMMMUIdx_Phys_S. Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04  target/arm: Fix Privileged Access Never (PAN) for aarch32  (Timofey Kutergin)

When we implemented the PAN support we theoretically wanted to support it for both AArch32 and AArch64, but in practice several bugs made it essentially unusable with an AArch32 guest. Fix all those problems:
- Use CPSR.PAN to check for PAN state in aarch32 mode
- throw a permission fault during address translation when PAN is enabled and the kernel tries to access a user-accessible page
- ignore the SCTLR_XP bit for armv7 and armv8 (conflicts with SCTLR_SPAN).

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
[PMM: tweak commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
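A sketch of the kind of state-aware PAN test the first item implies; the helper name 'arm_pan_enabled' is an assumption for illustration, while the pstate/CPSR field names follow QEMU's existing conventions:

    static bool arm_pan_enabled(CPUARMState *env)
    {
        if (is_a64(env)) {
            return env->pstate & PSTATE_PAN;       /* AArch64: PSTATE.PAN */
        } else {
            return env->uncached_cpsr & CPSR_PAN;  /* AArch32: CPSR.PAN */
        }
    }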
2022-10-27  target/arm: Use the max page size in a 2-stage ptw  (Richard Henderson)
We had only been reporting the stage2 page size. This causes problems if stage1 is using a larger page size (16k, 2M, etc), but stage2 is using a smaller page size, because cputlb does not set large_page_{addr,mask} properly. Fix by using the max of the two page sizes. Reported-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Implement FEAT_HAFDBS, dirty bit portion  (Richard Henderson)
Perform the atomic update for hardware management of the dirty bit. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Implement FEAT_HAFDBS, access flag portion  (Richard Henderson)
Perform the atomic update for hardware management of the access flag. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org [PMM: Fix accidental PROT_WRITE to PAGE_WRITE; add missing main-loop.h include] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Tidy merging of attributes from descriptor and table  (Richard Henderson)
Replace some gotos with some nested if statements. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Consider GP an attribute in get_phys_addr_lpae  (Richard Henderson)
Both GP and DBM are in the upper attribute block. Extend the computation of attrs to include them, then simplify the setting of guarded. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
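For orientation, both bits live in the upper attribute block of a VMSAv8-64 block/page descriptor: GP at bit 50 and DBM at bit 51. A minimal sketch of pulling them out with QEMU's extract64 helper (variable names are illustrative):

    /* Upper attribute block of the final descriptor. */
    bool guarded = extract64(descriptor, 50, 1);   /* GP, FEAT_BTI */
    bool dbm     = extract64(descriptor, 51, 1);   /* DBM, FEAT_HAFDBS */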
2022-10-27  target/arm: Don't shift attrs in get_phys_addr_lpae  (Richard Henderson)
Leave the upper and lower attributes in the place they originate from in the descriptor. Shifting them around is confusing, since one cannot read the bit numbers out of the manual. Also, new attributes have been added which would alter the shifts. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Fix fault reporting in get_phys_addr_lpae  (Richard Henderson)
Always overriding fi->type was incorrect, as we would not properly propagate the fault type from S1_ptw_translate, or arm_ldq_ptw. Simplify things by providing a new label for a translation fault. For other faults, store into fi directly. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Remove loop from get_phys_addr_lpae  (Richard Henderson)
The unconditional loop was used both to iterate over levels and to control parsing of attributes. Use an explicit goto in both cases. While this appears less clean for iterating over levels, we will need to jump back into the middle of this loop for atomic updates, which is even uglier. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw  (Richard Henderson)
Separate S1 translation from the actual lookup. Will enable lpae hardware updates. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Add ptw_idx to S1Translate  (Richard Henderson)
Hoist the computation of the mmu_idx for the ptw up to get_phys_addr_with_struct and get_phys_addr_twostage. This removes the duplicate check for stage2 disabled from the middle of the walk, performing it only once. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Introduce regime_is_stage2  (Richard Henderson)
Reduce the amount of typing required for this check. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27  target/arm: Implement FEAT_E0PD  (Peter Maydell)
FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the OS to forbid EL0 access to half of the address space. Since this is an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can implement it entirely in aa64_va_parameters(). This requires moving the existing regime_is_user() to internals.h so that the code in helper.c can get at it. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
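A sketch of how the E0PD check can sit alongside the existing EPD handling in aa64_va_parameters(). The bit positions (E0PD0 at TCR_EL1 bit 55, E0PD1 at bit 56) are architectural; the feature-test name 'aa64_e0pd' and the 'select'/'epd' locals are assumptions made for the sketch:

    if (regime_is_user(env, mmu_idx) &&
        cpu_isar_feature(aa64_e0pd, env_archcpu(env)) &&
        extract64(tcr, select ? 56 : 55, 1)) {
        /* E0PD0/E0PD1: forbid EL0 access to this half of the VA space. */
        epd = true;
    }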
2022-10-20  target/arm: Use bool consistently for get_phys_addr subroutines  (Richard Henderson)
The return type of the functions is already bool, but in a few instances we used an integer type with the return statement. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Split out get_phys_addr_twostage  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Use softmmu tlbs for page table walking  (Richard Henderson)
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and arm_ldq_ptw. Use probe_access_full to find the host address, and if so use a host load. If the probe fails, we've got our fault info already. On the off chance that page tables are not in RAM, continue to use the address_space_ld* functions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Move be test for regime into S1TranslateResult  (Richard Henderson)
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Plumb debug into S1Translate  (Richard Henderson)
Before using softmmu page tables for the ptw, plumb down a debug parameter so that we can query page table entries from gdbstub without modifying cpu state. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Split out S1Translate type  (Richard Henderson)
Consolidate most of the inputs and outputs of S1_ptw_translate into a single structure. Plumb this through arm_ld*_ptw from the controlling get_phys_addr_* routine. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Add ARMMMUIdx_Phys_{S,NS}  (Richard Henderson)
Not yet used, but add mmu indexes for 1-1 mapping to physical addresses. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Use probe_access_full for BTI  (Richard Henderson)
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit. In is_guarded_page, use probe_access_full instead of just guessing that the tlb entry is still present. Also handles the FIXME about executing from device memory. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Use ARMGranuleSize in ARMVAParameters  (Peter Maydell)
Now we have an enum for the granule size, use it in the ARMVAParameters struct instead of the using16k/using64k bools. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
2022-10-10  target/arm: Use tlb_set_page_full  (Richard Henderson)
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull, so that it may be passed directly to tlb_set_page_full. The change is large, but mostly mechanical. The major non-mechanical change is page_size -> lg_page_size. Most of the time this is obvious, and is related to TARGET_PAGE_BITS. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221001162318.153420-21-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
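The payoff is in the TLB fill path: once the walker fills a CPUTLBEntryFull, it can be handed straight to the core TLB code. A simplified sketch of that shape (error handling, alignment checks and the exact get_phys_addr() argument list are elided or assumed):

    GetPhysAddrResult res = {};
    ARMMMUFaultInfo fi = {};

    if (!get_phys_addr(env, address, access_type, mmu_idx, &res, &fi)) {
        /* Success: pass the filled-in CPUTLBEntryFull to cputlb. */
        res.f.phys_addr &= TARGET_PAGE_MASK;
        address &= TARGET_PAGE_MASK;
        tlb_set_page_full(cs, mmu_idx, address, &res.f);
        return true;
    }
    /* Otherwise raise the MMU fault described by 'fi'. */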
2022-10-10  target/arm: Fix cacheattr in get_phys_addr_disabled  (Richard Henderson)
Do not apply memattr or shareability for Stage2 translations. Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the pseudocode in AArch64.S1DisabledOutput. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221001162318.153420-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Split out get_phys_addr_disabled  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-19-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Fix ATS12NSO* from S PL1  (Richard Henderson)
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so that we use is_secure instead of the current security state. These AT* operations have been broken since arm_hcr_el2_eff gained a check for "el2 enabled" for Secure EL2. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-18-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Pass HCR to attribute subroutines  (Richard Henderson)
These subroutines did not need ENV for anything except retrieving the effective value of HCR anyway. We have computed the effective value of HCR in the callers, and this will be especially important for interpreting HCR in a non-current security state. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-17-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Remove env argument from combined_attrs_fwb  (Richard Henderson)
This value is unused. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221001162318.153420-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Hoist read of *is_secure in S1_ptw_translate  (Richard Henderson)
Rename the argument to is_secure_ptr, and introduce a local variable is_secure with the value. We only write back to the pointer toward the end of the function. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M  (Richard Henderson)
The effect of TGE does not only apply to non-secure state, now that Secure EL2 exists. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Reorg regime_translation_disabled  (Richard Henderson)
Use a switch on mmu_idx for the a-profile indexes, instead of three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Fold secure and non-secure a-profile mmu indexes  (Richard Henderson)
For a-profile aarch64, which does not bank system registers, it takes quite a lot of code to switch between security states. In the process, registers such as TCR_EL{1,2} must be swapped, which in itself requires the flushing of softmmu tlbs. Therefore it doesn't buy us anything to separate tlbs by security state. Retain the distinction between Stage2 and Stage2_S. This will be important as we implement FEAT_RME, and do not wish to add a third set of mmu indexes for Realm state. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10  target/arm: Merge regime_is_secure into get_phys_addr  (Richard Henderson)
This is the last use of regime_is_secure; remove it entirely before changing the layout of ARMMMUIdx. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221001162318.153420-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>