path: root/target/arm/ptw.c

2024-10-02  target/arm: Avoid target_ulong for physical address lookups  (Ard Biesheuvel)

target_ulong is typedef'ed as a 32-bit integer when building the
qemu-system-arm target, and this is smaller than the size of an
intermediate physical address when LPAE is being used. Given that
Linux may place leaf level user page tables in high memory when built
for LPAE, the kernel will crash with an external abort as soon as it
enters user space when running with more than ~3 GiB of system RAM.

So replace target_ulong with vaddr in places where it may carry an
address value that is not representable in 32 bits.

Fixes: f3639a64f602ea ("target/arm: Use softmmu tlbs for page table walking")
Cc: qemu-stable@nongnu.org
Reported-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Message-id: 20240927071051.1444768-1-ardb+git@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 67d762e716a7127ecc114e9708254316dd521911)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

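Editor's sketch (not part of the patch; standalone C with an invented
PTE address) of the truncation the commit describes:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A leaf PTE placed above 4 GiB, as LPAE permits. */
        uint64_t ipa = 0x1fecdd000ULL;
        /* A 32-bit target_ulong silently drops the high bits. */
        uint32_t truncated = (uint32_t)ipa;
        printf("%" PRIx64 " -> %x\n", ipa, truncated);
        return 0;   /* prints: 1fecdd000 -> fecdd000 (wrong page) */
    }
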
2024-08-13  target/arm: Fix usage of MMU indexes when EL3 is AArch32  (Peter Maydell)

Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other
   privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.

We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0, EL30_3,
   EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0
   at PL1", and "Secure PL1&0 at PL1 with PAN". This matches how we
   use indexes for the AArch64 regimes, and preserves properties like
   being able to determine the privilege level from an MMU index
   without any other information. However it would add two MMU
   indexes (we can share one with ARMMMUIdx_EL3), and we are already
   using 14 of the 16 that the core TLB code permits.
 * The more complicated approach is the one we take here. We use the
   same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0 as we
   do for NonSecure PL1&0. This saves on MMU indexes, but means we
   need to check in some places whether we're in the Secure PL1&0
   regime or not before we interpret an MMU index.

The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.

Note for potential stable backports: taking also the previous
(comment-change-only) commit might make the backport easier.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org

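Editor's sketch (not part of the patch; the helper name is invented,
the real patch audits each call site individually) of the kind of
check the shared-index approach requires:

    /* With EL3 AArch32, the E10 indexes are shared between Secure
     * and NonSecure PL1&0, so an E10 index alone no longer
     * identifies the translation regime -- the security state must
     * be consulted too. */
    static bool regime_is_secure_pl1_0(CPUARMState *env, ARMMMUIdx idx)
    {
        return (idx == ARMMMUIdx_E10_0 ||
                idx == ARMMMUIdx_E10_1 ||
                idx == ARMMMUIdx_E10_1_PAN) &&
               !arm_el_is_aa64(env, 3) &&
               arm_is_secure_below_el3(env);
    }
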
2024-05-06  exec/cpu: Extract page-protection definitions to page-protection.h  (Philippe Mathieu-Daudé)

Extract page-protection definitions from "exec/cpu-all.h" to
"exec/page-protection.h".

The list of files requiring the new header was generated using:

    $ git grep -wE \
      'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240427155714.53669-3-philmd@linaro.org>

2024-03-05  target/arm: Do memory type alignment check when translation enabled  (Richard Henderson)

If translation is enabled, and the PTE memory type is Device, enable
checking alignment via TLB_CHECK_ALIGNMENT. While the check is done
later than it should be per the ARM, it's better than not performing
the check at all.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-7-richard.henderson@linaro.org
[PMM: tweaks to comment text]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

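Editor's sketch (not part of the patch; the flag and field names here
are assumptions) of the shape of the change:

    /* Sketch: Device memory has the high nibble of the MAIR-style
     * attributes equal to 0; flag such pages so that unaligned
     * accesses leave the TLB fast path and fault. */
    if ((result->cacheattrs.attrs & 0xf0) == 0) {
        result->f.tlb_fill_flags |= TLB_CHECK_ALIGNED;  /* name assumed */
    }
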
2024-02-27  arm/ptw: Handle atomic updates of page table entries in MMIO during PTW  (Jonathan Cameron)

I'm far from confident this handling here is correct. Hence RFC. In
particular not sure on what locks I should hold for this to be even
moderately safe.

The function already appears to be inconsistent in what it returns,
as the CONFIG_ATOMIC64 block returns the endian-converted 'eventual'
value of the cmpxchg whereas the TCG_OVERSIZED_GUEST case returns the
previous value.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-id: 20240219161229.11776-1-Jonathan.Cameron@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

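Editor's sketch (not part of the patch; surrounding variables assumed
from context) of the BQL-serialized read/compare/write this commit
introduces for descriptors that live in MMIO, where no atomic
cmpxchg path exists:

    /* Sketch: CAS on an MMIO-backed page-table descriptor. */
    uint64_t cur_val;

    bql_lock();                       /* serialize against other vCPUs */
    address_space_read(as, descaddr, attrs, &cur_val, 8);
    if (cur_val == old_val) {
        address_space_write(as, descaddr, attrs, &new_val, 8);
    }
    bql_unlock();
    return cur_val;                   /* previous value, cmpxchg-style */
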
2024-01-15  target/arm: arm_pamax() no longer needs to do feature propagation  (Peter Maydell)

In arm_pamax(), we need to cope with the virt board calling this
function on a CPU object which has been inited but not yet realized.
We used to do propagation of feature-flag implications (such as "V7VE
implies LPAE") at realize, so we have some code in arm_pamax() which
manually checks for both V7VE and LPAE feature flags.

In commit b8f7959f28c4f36 we moved the feature propagation for almost
all features from realize to post-init. That means that now when the
virt board calls arm_pamax(), the feature propagation has been done.
So we can drop the manual propagation handling and check only for the
feature we actually care about, which is ARM_FEATURE_LPAE. Retain the
comment that the virt board is calling this function with a not
completely realized CPU object, because that is a potential beartrap
for later changes which is worth calling out.

(Note that b8f7959f28c4f36 actually fixed a bug in the arm_pamax()
handling: arm_pamax() was missing a check for ARM_FEATURE_V8, so it
incorrectly thought that the qemu-system-arm 'max' CPU did not have
LPAE and turned off 'highmem' support in the virt board. Following
b8f7959f28c4f36 qemu-system-arm 'max' is treated the same as
'cortex-a15' and other v7 LPAE CPUs, because the generic feature
propagation code does correctly propagate V8 -> V7VE -> LPAE.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240109143804.1118307-1-peter.maydell@linaro.org

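Editor's sketch (not part of the patch; the AArch64 branch is elided
and the result is approximate) of the simplified shape arm_pamax()
can take once propagation happens at post-init:

    /* Sketch: with feature propagation done at post-init, the 32-bit
     * path only needs to test the flag it actually cares about. */
    unsigned int arm_pamax(ARMCPU *cpu)
    {
        /* ... AArch64 CPUs derive PA size from ID_AA64MMFR0.PARANGE ... */

        if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
            return 40;   /* v7 with LPAE: 40-bit physical addresses */
        }
        return 32;
    }
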
2024-01-09  target/arm: Handle FEAT_NV page table attribute changes  (Peter Maydell)

FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,1} the handling of
some of the page table attribute bits changes for the EL1&0
translation regime:

 * for block and page descriptors:
   - bit [54] holds PXN, not UXN
   - bit [53] is RES0, and the effective value of UXN is 0
   - bit [6], AP[1], is treated as 0
 * for table descriptors, when hierarchical permissions are enabled:
   - bit [60] holds PXNTable, not UXNTable
   - bit [59] is RES0
   - bit [61], APTable[0] is treated as 0

Implement these changes to the page table attribute handling.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>

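Editor's sketch (not part of the patch; variable names invented,
extract64() is the real QEMU helper) of the block/page-descriptor
half of this reinterpretation:

    /* Sketch: with HCR_EL2.{NV,NV1} == {1,1} in the EL1&0 regime,
     * bit [54] supplies PXN instead of UXN, UXN is effectively 0,
     * and AP[1] (bit [6]) is treated as 0. */
    if (nv1) {
        pxn = extract64(descriptor, 54, 1);
        uxn = 0;
        ap1 = 0;                             /* bit [6] treated as 0 */
    } else {
        uxn = extract64(descriptor, 54, 1);
        pxn = extract64(descriptor, 53, 1);
        ap1 = extract64(descriptor, 6, 1);
    }
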
2024-01-08  system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()  (Stefan Hajnoczi)

The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's
commonly referred to as the BQL in discussions and some code
comments. The locking APIs, however, are called
qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread
was split into KVM vcpu threads and the "iothread" (now called the
main loop thread). I have contributed to the confusion myself by
introducing a separate --object iothread, a separate concept
unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
 - void bql_lock(void)
 - void bql_unlock(void)
 - bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent
patches will rename them. There are also comments and documentation
that will be updated in later patches.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2023-11-02  target/arm: Correctly propagate stage 1 BTI guarded bit in a two-stage walk  (Peter Maydell)

In a two-stage translation, the result of the BTI guarded bit should
be the guarded bit from the first stage of translation, as there is
no BTI guard information in stage two. Our code tried to do this, but
got it wrong, because we currently have two fields where the GP bit
information might live (ARMCacheAttrs::guarded and
CPUTLBEntryFull::extra::arm::guarded), and we were storing the GP bit
in the latter during the stage 1 walk but trying to copy the former
in combine_cacheattrs().

Remove the duplicated storage, and always use the field in
CPUTLBEntryFull; correctly propagate the stage 1 value to the output
in get_phys_addr_twostage().

Note for stable backports: in v8.0 and earlier the field is named
result->f.guarded, not result->f.extra.arm.guarded.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1950
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231031173723.26582-1-peter.maydell@linaro.org

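Editor's sketch (not part of the patch; the stage 2 call shape is
approximate) of the propagation in get_phys_addr_twostage():

    /* Sketch: save the stage 1 GP bit before the stage 2 walk
     * overwrites *result, then restore it afterwards, since stage 2
     * carries no BTI guard information. */
    bool s1_guarded = result->f.extra.arm.guarded;

    ret = get_phys_addr_nogpc(env, ptw, ipa, access_type, result, fi);
    /* ... error handling, attribute combining ... */
    result->f.extra.arm.guarded = s1_guarded;
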
2023-10-27  target/arm: Move feature test functions to their own header  (Peter Maydell)

The feature test functions isar_feature_*() now take up nearly a
thousand lines in target/arm/cpu.h. This header file is included by a
lot of source files, most of which don't need these functions. Move
the feature test functions to their own header file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org

2023-10-03  target/arm: Replace TARGET_PAGE_ENTRY_EXTRA  (Anton Johansson)

TARGET_PAGE_ENTRY_EXTRA is a macro that allows guests to specify
additional fields for caching with the full TLB entry. This macro is
replaced with a union in CPUTLBEntryFull, thus making CPUTLB
target-agnostic at the cost of slightly inflated CPUTLBEntryFull for
non-arm guests.

Note, this is needed to ensure that fields in CPUTLB don't vary in
offset between various targets. (arm is the only guest actually
making use of this feature.)

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-2-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

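Editor's sketch (not part of the patch; members inferred from the Arm
fields mentioned elsewhere in this log, so treat as approximate) of
the union added to CPUTLBEntryFull:

    /* Sketch: target-specific TLB-entry extras become a union member
     * instead of a per-target macro expansion. */
    union {
        struct {
            uint8_t pte_attrs;      /* stage 1 PTE attributes */
            uint8_t shareability;   /* cacheattrs shareability */
            bool guarded;           /* BTI GP bit */
        } arm;
    } extra;
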
2023-08-22  target/arm: Pass security space rather than flag for AT instructions  (Jean-Philippe Brucker)

At the moment we only handle Secure and Nonsecure security spaces for
the AT instructions. Add support for Realm and Root.

For AArch64, arm_security_space() gives the desired space. ARM
DDI0487J says (R_NYXTL):

    If EL3 is implemented, then when an address translation
    instruction that applies to an Exception level lower than EL3 is
    executed, the Effective value of SCR_EL3.{NSE, NS} determines the
    target Security state that the instruction applies to.

For AArch32, some instructions can access NonSecure space from
Secure, so we still need to pass the state explicitly to
do_ats_write().

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-5-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-08-22  target/arm: Skip granule protection checks for AT instructions  (Jean-Philippe Brucker)

GPC checks are not performed on the output address for AT
instructions, as stated by ARM DDI 0487J in D8.12.2:

    When populating PAR_EL1 with the result of an address translation
    instruction, granule protection checks are not performed on the
    final output address of a successful translation.

Rename get_phys_addr_with_secure(), since it's only used to handle AT
instructions.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-4-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-08-22  target/arm/ptw: Load stage-2 tables from realm physical space  (Jean-Philippe Brucker)

In realm state, stage-2 translation tables are fetched from the realm
physical address space (R_PGRQD).

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-08-22  target/arm/ptw: Report stage 2 fault level for stage 2 faults on stage 1 ptw  (Peter Maydell)

When we report faults due to stage 2 faults during a stage 1 page
table walk, the 'level' parameter should be the level of the walk in
stage 2 that faulted, not the level of the walk in stage 1. Correct
the reporting of these faults.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-15-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Check for block descriptors at invalid levels  (Peter Maydell)

The architecture doesn't permit block descriptors at any arbitrary
level of the page table walk; it depends on the granule size which
levels are permitted. We implemented only a partial version of this
check which assumes that block descriptors are valid at all levels
except level 3, which meant that we wouldn't deliver the Translation
fault for all cases of this sort of guest page table error.

Implement the logic corresponding to the pseudocode
AArch64.DecodeDescriptorType() and AArch64.BlockDescSupported().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-14-peter.maydell@linaro.org

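Editor's sketch (not part of the patch; the DS and 52-bit-PA guards
are this editor's reading of the pseudocode, so treat them as
assumptions) of the granule/level mapping behind
AArch64.BlockDescSupported():

    /* Sketch: which walk levels may legally hold a block descriptor. */
    static bool block_desc_supported(ARMGranuleSize gran, int level,
                                     bool ds, bool pa52)
    {
        switch (gran) {
        case Gran4K:
            /* level 0 blocks (512 GiB) only with FEAT_LPA2 (DS) */
            return (level == 0 && ds) || level == 1 || level == 2;
        case Gran16K:
            /* level 1 blocks (64 GiB) only with FEAT_LPA2 (DS) */
            return (level == 1 && ds) || level == 2;
        case Gran64K:
            /* level 1 blocks (4 TiB) only with 52-bit PA support */
            return (level == 1 && pa52) || level == 2;
        default:
            return false;
        }
    }
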
2023-08-22  target/arm/ptw: Set attributes correctly for MMU disabled data accesses  (Peter Maydell)

When the MMU is disabled, data accesses should be Device nGnRnE,
Outer Shareable, Untagged. We handle the other cases from
AArch64.S1DisabledOutput() correctly but missed this one. Device
nGnRnE is memattr == 0, so the only part we were missing was that
shareability should be set to 2 for both insn fetches and data
accesses.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-13-peter.maydell@linaro.org

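Editor's sketch (not part of the patch; surrounding structure
approximate) of the MMU-off attribute assignment in
get_phys_addr_disabled():

    /* Sketch: AArch64.S1DisabledOutput() -- with the stage 1 MMU off,
     * data accesses are Device-nGnRnE (memattr 0) and everything is
     * Outer Shareable (shareability encoding 2). */
    result->cacheattrs.attrs = memattr;      /* 0 for data accesses */
    result->cacheattrs.shareability = 2;     /* Outer Shareable */
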
2023-08-22  target/arm/ptw: Drop S1Translate::out_secure  (Peter Maydell)

We only use S1Translate::out_secure in two places, where we are
setting up MemTxAttrs for a page table load. We can use
arm_space_is_secure(ptw->out_space) instead, which guarantees that
we're setting the MemTxAttrs secure and space fields consistently,
and allows us to drop the out_secure field in S1Translate entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-12-peter.maydell@linaro.org

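Editor's sketch (not part of the patch; context approximate) of the
consistent MemTxAttrs setup described above:

    /* Sketch: derive the secure flag from the space rather than
     * carrying a second, possibly-inconsistent boolean. */
    MemTxAttrs attrs = {
        .space  = ptw->out_space,
        .secure = arm_space_is_secure(ptw->out_space),
    };
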
2023-08-22  target/arm/ptw: Remove S1Translate::in_secure  (Peter Maydell)

We no longer look at the in_secure field of the S1Translate struct
anyway, so we can remove it and all the code which sets it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-11-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Remove last uses of ptw->in_secure  (Peter Maydell)

Replace the last uses of ptw->in_secure with appropriate checks on
ptw->in_space.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-10-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Only fold in NSTable bit effects in Secure state  (Peter Maydell)

When we do a translation in Secure state, the NSTable bits in table
descriptors may downgrade us to NonSecure; we update ptw->in_secure
and ptw->in_space accordingly. We guard that check correctly with a
conditional that means it's only applied for Secure stage 1
translations.

However, later on in get_phys_addr_lpae() we fold the effects of the
NSTable bits into the final descriptor attributes bits, and there we
do it unconditionally regardless of the CPU state. That means that in
Realm state (where in_secure is false) we will set bit 5 in attrs,
and later use it to decide to output to non-secure space.

We don't in fact need to do this folding in at all any more (since
commit 2f1ff4e7b9f30c): if an NSTable bit was set then we have
already set ptw->in_space to ARMSS_NonSecure, and in that situation
we don't look at attrs bit 5. The only thing we still need to deal
with is the real NS bit in the final descriptor word, so we can just
drop the code that ORed in the NSTable bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-9-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Pass an ARMSecuritySpace to arm_hcr_el2_eff_secstate()  (Peter Maydell)

arm_hcr_el2_eff_secstate() takes a bool secure, which it uses to
determine whether EL2 is enabled in the current security state. With
the advent of FEAT_RME this is no longer sufficient, because EL2 can
be enabled for Secure state but not for Root, and both of those will
pass 'secure == true' in the callsites in ptw.c.

As it happens in all of our callsites in ptw.c we either avoid making
the call or else avoid using the returned value if we're doing a
translation for Root, so this is not a behaviour change even if the
experimental FEAT_RME is enabled. But it is less confusing in the
ptw.c code if we avoid the use of a bool secure that duplicates some
of the information in the ARMSecuritySpace argument.

Make arm_hcr_el2_eff_secstate() take an ARMSecuritySpace argument
instead. Because we always want to know the HCR_EL2 for the security
state defined by the current effective value of SCR_EL3.{NSE,NS}, it
makes no sense to pass ARMSS_Root here, and we assert that callers
don't do that.

To avoid the assert(), we thus push the call to
arm_hcr_el2_eff_secstate() down into the cases in
regime_translation_disabled() that need it, rather than calling the
function and ignoring the result for the Root space translations. All
other calls to this function in ptw.c are already in places where we
have confirmed that the mmu_idx is a stage 2 translation or that the
regime EL is not 3.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-7-peter.maydell@linaro.org

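Editor's sketch (not part of the patch; the body is heavily
simplified and the helper call is an assumption) of the new signature
and its guard:

    /* Sketch: the effective HCR_EL2 is only meaningful for a space
     * selected by SCR_EL3.{NSE,NS}, so Root is rejected outright. */
    uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env,
                                      ARMSecuritySpace space)
    {
        g_assert(space != ARMSS_Root);

        if (!arm_is_el2_enabled_secstate(env, space)) {
            return 0;   /* EL2 not enabled for this space */
        }
        /* ... compute the effective HCR_EL2 bits ... */
        return env->cp15.hcr_el2;   /* simplified: real code masks bits */
    }
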
2023-08-22  target/arm/ptw: Pass ARMSecuritySpace to regime_translation_disabled()  (Peter Maydell)

Plumb the ARMSecuritySpace through to regime_translation_disabled()
rather than just a bool is_secure.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-6-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Pass ptw into get_phys_addr_pmsa*() and get_phys_addr_disabled()  (Peter Maydell)

In commit 6d2654ffacea813916176 we created the S1Translate struct and
used it to plumb through various arguments that we were previously
passing one-at-a-time to get_phys_addr_v5(), get_phys_addr_v6(), and
get_phys_addr_lpae(). Extend that pattern to get_phys_addr_pmsav5(),
get_phys_addr_pmsav7(), get_phys_addr_pmsav8() and
get_phys_addr_disabled(), so that all the get_phys_addr_* functions
we call from get_phys_addr_nogpc() take the S1Translate struct rather
than the mmu_idx and is_secure bool.

(This refactoring is a prelude to having the called functions look at
ptw->in_space rather than using an is_secure boolean.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-5-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Set s1ns bit in fault info more consistently  (Peter Maydell)

The s1ns bit in ARMMMUFaultInfo is documented as "true if we faulted
on a non-secure IPA while in secure state". Both the places which
look at this bit only do so after having confirmed that this is a
stage 2 fault and we're dealing with Secure EL2, which leaves the
ptw.c code free to set the bit to any random value in the other
cases.

Instead of taking advantage of that freedom, consistently make the
bit be set to false for the "not a stage 2 fault for Secure EL2"
cases. This removes some cases where we were using an 'is_secure'
boolean and leaving the reader guessing about whether that was the
right thing for Realm and Root cases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-4-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Don't report GPC faults on stage 1 ptw as stage2 faults  (Peter Maydell)

In S1_ptw_translate() we set up the ARMMMUFaultInfo if the attempt to
translate the page descriptor address into a physical address fails.
This used to only be possible if we are doing a stage 2 ptw for that
descriptor address, and so the code always sets fi->stage2 and
fi->s1ptw to true.

However, with FEAT_RME it is also possible for the lookup of the page
descriptor address to fail because of a Granule Protection Check
fault. These should not be reported as stage 2, otherwise
arm_deliver_fault() will incorrectly set HPFAR_EL2. Similarly the
s1ptw bit should only be set for stage 2 faults on stage 1
translation table walks, i.e. not for GPC faults.

Add a comment to the other place where we might detect a
stage2-fault-on-stage-1-ptw, in arm_casq_ptw(), noting why we know in
that case that it must really be a stage 2 fault and not a GPC fault.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-3-peter.maydell@linaro.org

2023-08-22  target/arm/ptw: Don't set fi->s1ptw for UnsuppAtomicUpdate fault  (Peter Maydell)

For an Unsupported Atomic Update fault where the stage 1 translation
table descriptor update can't be done because it's to an unsupported
memory type, this is a stage 1 abort (per the Arm ARM R_VSXXT). This
means we should not set fi->s1ptw, because this will cause the code
in the get_phys_addr_lpae() error-exit path to mark it as stage 2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-2-peter.maydell@linaro.org

2023-07-17  target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits  (Peter Maydell)

In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now we also have f.attrs.space for FEAT_RME, we need to keep the two
in sync.

These bits only have an effect for Secure space translations, not for
Root, so use the input in_space field to determine whether to apply
them rather than the input is_secure. This doesn't actually make a
difference because Root translations are never two-stage, but it's a
little clearer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org

2023-07-17  target/arm: Fix S1_ptw_translate() debug path  (Peter Maydell)

In commit fe4a5472ccd6 we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory. However, we didn't update the calculation
of s2ptw->in_space and s2ptw->in_secure to account for the "ptw reads
from physical memory" case. This meant that debug accesses when in
Secure state broke.

Create a new function S2_security_space() which returns the correct
security space to use for the ptw load, and use it to determine the
correct .in_secure and .in_space fields for the stage 2 lookup for
the ptw load.

Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472ccd6 ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-07-17  target/arm/ptw.c: Add comments to S1Translate struct fields  (Peter Maydell)

Add comments to the in_* fields in the S1Translate struct that
explain what they're doing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org

2023-07-03  plugins: force slow path when plugins instrument memory ops  (Alex Bennée)

The lack of SVE memory instrumentation has been an omission in plugin
handling since it was introduced. Fortunately we can utilise the
probe_* functions to force all memory access to follow the slow path.
We do this by checking the access type and presence of plugin memory
callbacks and if set return the TLB_MMIO flag.

We have to jump through a few hoops in user mode to re-use the flag
but it has the desired effect:

    ./qemu-system-aarch64 -display none -serial mon:stdio \
      -M virt -cpu max -semihosting-config enable=on \
      -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
      -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 \
      -d plugin

gives (disas doesn't currently understand st1w):

    0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5",
    store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM

And for user-mode:

    ./qemu-aarch64 \
      -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
      -d plugin \
      ./tests/tcg/aarch64-linux-user/sha512-sve

gives:

    1..10
    ok 1 - do_test(&tests[i])
    0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4",
    load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373,
    load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377,
    load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b,
    load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f,
    load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383,
    load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, load, 0x5500800387,
    load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b,
    load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f,
    load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393,
    load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397,
    load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b,
    load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f,
    load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3,
    load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7,
    load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab,
    load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af

(4007c0 is the ld1b in the sha512-sve)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Robert Henry <robhenry@microsoft.com>
Cc: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>

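Editor's sketch (not part of the patch; placement in the TLB fill
path is approximate) of the check described above:

    /* Sketch: report TLB_MMIO for pages touched while any plugin has
     * memory callbacks registered, so every access is forced down
     * the instrumented slow path. */
    if (cpu_plugin_mem_cbs_enabled(cpu)) {
        flags |= TLB_MMIO;
    }
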
2023-07-03  target/arm: make arm_casq_ptw CONFIG_TCG only  (Alex Bennée)

The ptw code is accessed by non-TCG code (specifically arm_pamax and
arm_cpu_get_phys_page_attrs_debug) but most of it is really only for
TCG emulation. Seeing as we already assert for a non-TARGET_AARCH64
build, let's extend the test rather than further messing with the
ifdef ladder.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-19-alex.bennee@linaro.org>

2023-06-23  target/arm: Implement the granule protection check  (Richard Henderson)

Place the check at the end of get_phys_addr_with_struct, so that we
check all physical results.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Use get_phys_addr_with_struct for stage2  (Richard Henderson)

This fixes a bug in which we failed to initialize the result
attributes properly after the memset.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Move s1_is_el0 into S1Translate  (Richard Henderson)

Instead of passing this to get_phys_addr_lpae, stash it in the
S1Translate structure.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Use get_phys_addr_with_struct in S1_ptw_translate  (Richard Henderson)

Do not provide a fast-path for physical addresses, as those will need
to be validated for GPC.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Handle no-execute for Realm and Root regimes  (Richard Henderson)

While Root and Realm may read and write data from other spaces,
neither may execute from other pa spaces. This happens for Stage1
EL3, EL2, EL2&0, and Stage2 EL1&0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Handle Block and Page bits for security space  (Richard Henderson)

With Realm security state, bit 55 of a block or page descriptor
during the stage2 walk becomes the NS bit; during the stage1 walk the
bit 5 NS bit is RES0. With Root security state, bit 11 of the block
or page descriptor during the stage1 walk becomes the NSE bit.

Rather than collecting an NS bit and applying it later, compute the
output pa space from the input pa space and unconditionally assign.
This means that we no longer need to adjust the output space earlier
for the NSTable bit.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: NSTable is RES0 for the RME EL3 regime  (Richard Henderson)

Test in_space instead of in_secure so that we don't switch out of
Root space.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Pipe ARMSecuritySpace through ptw.c  (Richard Henderson)

Add input and output space members to S1Translate. Set and adjust
them in S1_ptw_translate, and the various points at which we drop
secure state. Initialize the space in get_phys_addr; for now leave
get_phys_addr_with_secure considering only secure vs non-secure
spaces.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Remove __attribute__((nonnull)) from ptw.c  (Richard Henderson)

This was added in 7e98e21c098 as part of a reorg in which one of the
arguments had legally been NULL, and this caught actual instances.
Now that the reorg is complete, this serves little purpose.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Introduce ARMMMUIdx_Phys_{Realm,Root}  (Richard Henderson)

With FEAT_RME, there are four physical address spaces. For now, just
define the symbols, and mention them in the same spots as the other
Phys indexes in ptw.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-23  target/arm: Adjust the order of Phys and Stage2 ARMMMUIdx  (Richard Henderson)

It will be helpful to have ARMMMUIdx_Phys_* to be in the same
relative order as ARMSecuritySpace enumerators. This requires the
adjustment to the nstable check. While there, check for being in
secure state rather than rely on clearing the low bit making no
change to non-secure state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-06-07  target/arm: Only include tcg/oversized-guest.h if CONFIG_TCG  (Richard Henderson)

Fixes the build for --disable-tcg. This header is only needed for
cross-hosting. Without CONFIG_TCG, we know this is an AArch64 host,
CONFIG_ATOMIC64 will be set, and the TCG_OVERSIZED_GUEST block will
never be compiled.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-06-05  tcg: Split out tcg/oversized-guest.h  (Richard Henderson)

Move a use of TARGET_LONG_BITS out of tcg/tcg.h. Include the new file
only where required.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-06-05  target/arm: Fix test of TCG_OVERSIZED_GUEST  (Richard Henderson)

The symbol is always defined, even if to 0. We wanted to test for
TCG_OVERSIZED_GUEST == 0.

With this fixed, the #error is reached while building arm-softmmu,
because TCG_OVERSIZED_GUEST is not true (nor supposed to be true) for
arm32 guest on a 32-bit host. But that's ok, because this feature
doesn't apply to arm32. Add an #ifdef for TARGET_AARCH64.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2023-05-12  target/arm: Correct AArch64.S2MinTxSZ 32-bit EL1 input size check  (Peter Maydell)

In check_s2_mmu_setup() we have a check that is attempting to
implement the part of AArch64.S2MinTxSZ that is specific to when EL1
is AArch32:

    if !s1aarch64 then
        // EL1 is AArch32
        min_txsz = Min(min_txsz, 24);

Unfortunately we got this wrong in two ways:

(1) The minimum txsz corresponds to a maximum inputsize, but we got
the sense of the comparison wrong and were faulting for all
inputsizes less than 40 bits.

(2) We try to implement this as an extra check that happens after
we've done the same txsz checks we would do for an AArch64 EL1, but
in fact the pseudocode is *loosening* the requirements, so that txsz
values that would fault for an AArch64 EL1 do not fault for AArch32
EL1, because it does Min(old_min, 24), not Max(old_min, 24).

You can see this also in the text of the Arm ARM in table D8-8, which
shows that where the implemented PA size is less than 40 bits an
AArch32 EL1 is still OK with a configured stage2 T0SZ for a 40 bit
IPA, whereas if EL1 is AArch64 then the T0SZ must be big enough to
constrain the IPA to the implemented PA size.

Because of part (2), we can't do this as a separate check, but have
to integrate it into aa64_va_parameters(). Add a new argument to that
function to indicate that EL1 is 32-bit. All the existing callsites
except the one in get_phys_addr_lpae() can pass 'false', because they
are either doing a lookup for a stage 1 regime or else they don't
care about the tsz/tsz_oob fields.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1627
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230509092059.3176487-1-peter.maydell@linaro.org

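Editor's sketch (not part of the patch; the helper name and
surrounding variables are assumptions, MIN is the standard QEMU
macro) of the corrected, loosened check:

    /* Sketch: an AArch32 EL1 loosens the stage 2 minimum T0SZ --
     * Min(min_txsz, 24) permits input sizes up to 40 bits even when
     * the implemented PA size is smaller. */
    int min_txsz = 64 - parange_to_pamax(parange);   /* name assumed */
    if (el1_is_aa32) {
        min_txsz = MIN(min_txsz, 24);
    }
    if (t0sz < min_txsz) {
        /* fault: the configured input size is too large */
    }
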
2023-05-12  target/arm: Fix handling of SW and NSW bits for stage 2 walks  (Peter Maydell)

We currently don't correctly handle the VSTCR_EL2.SW and VTCR_EL2.NSW
configuration bits. These allow configuration of whether the stage 2
page table walks for Secure IPA and NonSecure IPA should do their
descriptor reads from Secure or NonSecure physical addresses. (This
is separate from how the translation table base address and other
parameters are set: an NS IPA always uses VTTBR_EL2 and VTCR_EL2 for
its base address and walk parameters, regardless of the NSW bit, and
similarly for Secure.)

Provide a new function ptw_idx_for_stage_2() which returns the MMU
index to use for descriptor reads, and use it to set up the
.in_ptw_idx wherever we call get_phys_addr_lpae(). For a stage 2
walk, wherever we call get_phys_addr_lpae():
 * .in_ptw_idx should be ptw_idx_for_stage_2() of the .in_mmu_idx
 * .in_secure should be true if .in_mmu_idx is Stage2_S

This allows us to correct S1_ptw_translate() so that it consistently
always sets its (out_secure, out_phys) to the result it gets from the
S2 walk (either by calling get_phys_addr_lpae() or by TLB lookup).
This makes better conceptual sense because the S2 walk should return
us an (address space, address) tuple, not an address that we then
randomly assign to S or NS.

Our previous handling of SW and NSW was broken, so guest code trying
to use these bits to put the s2 page tables in the "other" address
space wouldn't work correctly.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230504135425.2748672-3-peter.maydell@linaro.org

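Editor's sketch (not part of the patch; close to, but not quoted
from, the actual change) of ptw_idx_for_stage_2() as described above:

    /* Sketch: pick the PA space for stage 2 descriptor reads.
     * VSTCR_EL2.SW moves Secure-IPA walks to NonSecure PA space;
     * VTCR_EL2.NSW moves NonSecure-IPA walks to Secure PA space. */
    static ARMMMUIdx ptw_idx_for_stage_2(CPUARMState *env,
                                         ARMMMUIdx stage2idx)
    {
        bool s2walk_secure;

        if (!arm_is_secure_below_el3(env)) {
            s2walk_secure = false;
        } else if (stage2idx == ARMMMUIdx_Stage2_S) {
            s2walk_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
        } else {
            s2walk_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
        }
        return s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
    }
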
2023-05-12  target/arm: Don't allow stage 2 page table walks to downgrade to NS  (Peter Maydell)

Bit 63 in a Table descriptor is only the NSTable bit for stage 1
translations; in stage 2 it is RES0. We were incorrectly looking at
it all the time.

This causes problems if:
 * the stage 2 table descriptor was incorrectly setting the RES0 bit
 * we are doing a stage 2 translation in Secure address space for a
   NonSecure stage 1 regime -- in this case we would incorrectly do
   an immediate downgrade to NonSecure

A bug elsewhere in the code currently prevents us from getting to the
second situation, but when we fix that it will be possible.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230504135425.2748672-2-peter.maydell@linaro.org

2023-04-20  target/arm: Implement FEAT_PAN3  (Peter Maydell)

FEAT_PAN3 adds an EPAN bit to SCTLR_EL1 and SCTLR_EL2, which allows
the PAN bit to make memory non-privileged-read/write if it is
user-executable as well as if it is user-read/write.

Implement this feature and enable it in the AArch64 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230331145045.2584941-4-peter.maydell@linaro.org
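Editor's sketch (not part of the patch; variable names invented, the
real change lives in the stage 1 permission computation in ptw.c) of
the permission check FEAT_PAN3 extends:

    /* Sketch: PAN denies privileged data access to user-accessible
     * pages; with SCTLR_ELx.EPAN set, "user-accessible" additionally
     * includes pages that are merely user-executable. */
    bool pan_blocks = user_rw;              /* baseline PAN condition */
    if (sctlr & SCTLR_EPAN) {
        pan_blocks = pan_blocks || user_exec;
    }
    if (is_privileged && pan_enabled && pan_blocks) {
        prot &= ~(PAGE_READ | PAGE_WRITE);  /* privileged R/W removed */
    }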