Commit log: target/arm (newest first)

2023-01-23  target/arm: Look up ARMCPRegInfo at runtime  (Richard Henderson)

Do not encode the pointer as a constant in the opcode stream. This pointer is
specific to the cpu that first generated the translation, which runs into
problems with both hot-pluggable cpus and user-only threads, as cpus are
removed. It's also a potential correctness issue in the theoretical case of a
slightly heterogeneous system, because if CPU 0 generates a TB and then CPU 1
executes it, CPU 1 will end up using CPU 0's hash table, which might have a
wrong set of registers in it. (All our current systems are either completely
homogeneous, M-profile, or have CPUs sufficiently different that they wouldn't
be sharing TBs anyway because the differences would show up in the TB flags,
so the correctness issue is only theoretical, not practical.)

Perform the lookup in either helper_access_check_cp_reg, or a new
helper_lookup_cp_reg.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230106194451.1213153-3-richard.henderson@linaro.org
[PMM: added note in commit message about correctness issue]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

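As a rough sketch (not the literal patch; the wrapper name and assertion are
illustrative), the runtime lookup uses the per-CPU cp_regs hash table that
get_arm_cp_reginfo() already provides, keyed by the encoded register number
the translator computes:

    /* Resolve the ARMCPRegInfo for the CPU that is executing *now*,
     * instead of the CPU that happened to translate the TB. */
    static const ARMCPRegInfo *lookup_cp_reg(CPUARMState *env, uint32_t key)
    {
        ARMCPU *cpu = env_archcpu(env);
        const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, key);

        /* The translator only emits this path for registers it has already
         * validated, so a missing entry would be a QEMU bug. */
        assert(ri != NULL);
        return ri;
    }
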
2023-01-23  target/arm: Reorg do_coproc_insn  (Richard Henderson)

Move the ri == NULL case to the top of the function and return. This allows
the else to be removed and the code unindented.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230106194451.1213153-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm: provide stubs for more external debug registers  (Evgeny Iakovlev)

QEMU implements neither the Debug Communication Channel nor the rest of the
external debug interface. However, Microsoft Hyper-V tries to access some of
those registers during an EL2 context switch. Since there is no architectural
way to not advertise support for external debug, provide RAZ/WI stubs for the
OSDTRRX_EL1, OSDTRTX_EL1 and OSECCR_EL1 registers, in the same way the rest of
the DCC is currently handled. Do account for access traps, though, with
access_tda.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230120155929.32384-3-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

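The usual RAZ/WI stub pattern for such registers is sketched below (one
register shown; treat the encoding fields as illustrative rather than a copy
of the patch — ARM_CP_CONST with resetvalue 0 is QEMU's standard "reads as
zero, writes ignored" idiom, and access_tda still applies the debug-access
trap checks):

    static const ARMCPRegInfo external_debug_stub_reginfo[] = {
        /* OSDTRRX_EL1 / DBGDTRRXext: RAZ/WI, but still trappable. */
        { .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
          .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 2,
          .access = PL1_RW, .accessfn = access_tda,
          .type = ARM_CP_CONST, .resetvalue = 0 },
    };
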
2023-01-23  target/arm: implement DBGCLAIM registers  (Evgeny Iakovlev)

The architecture does not define any functionality for the CLAIM tag bits.
So we will just keep the raw bits, as per spec.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230120155929.32384-2-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

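"Keep the raw bits" means DBGCLAIMSET_EL1 writes set tag bits and
DBGCLAIMCLR_EL1 writes clear them, with no other semantics attached. A sketch
(assuming a cp15.dbgclaim backing field added for this purpose; function and
field names are illustrative):

    static void dbgclaimset_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
    {
        env->cp15.dbgclaim |= (value & 0xFF);   /* set written tag bits */
    }

    static void dbgclaimclr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
    {
        env->cp15.dbgclaim &= ~(value & 0xFF);  /* clear written tag bits */
    }
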
2023-01-23  target/arm: Don't set EXC_RETURN.ES if Security Extension not present  (Peter Maydell)

In v7m_exception_taken(), for v8M we set the EXC_RETURN.ES bit if either the
exception targets Secure or if the CPU doesn't implement the Security
Extension. This is incorrect: the v8M Arm ARM specifies that the ES bit should
be RES0 if the Security Extension is not implemented, and the pseudocode
agrees. Remove the incorrect condition, so that we leave the ES bit 0 if the
Security Extension isn't implemented.

This doesn't have any guest-visible effects for our current set of emulated
CPUs, because all our v8M CPUs implement the Security Extension; but it's
worth fixing in case we add a v8M CPU without the extension in future.

Reported-by: Igor Kotrasinski <i.kotrasinsk@samsung.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

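In other words, the condition loses its "no Security Extension" arm; a sketch
of the before/after shape (local names such as targets_secure and lr are
illustrative, the feature/mask names are the existing QEMU ones):

    /* Before: ES was also set when the Security Extension was absent. */
    if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
        lr |= R_V7M_EXCRET_ES_MASK;
    }

    /* After: ES stays 0 (RES0) unless the exception targets Secure state. */
    if (targets_secure) {
        lr |= R_V7M_EXCRET_ES_MASK;
    }
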
2023-01-23  target/arm: Fix in_debug path in S1_ptw_translate  (Richard Henderson)

During the conversion, the test against get_phys_addr_lpae got inverted,
meaning that successful translations went to the 'failed' label.

Cc: qemu-stable@nongnu.org
Fixes: f3639a64f60 ("target/arm: Use softmmu tlbs for page table walking")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1417
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230114054605.2977022-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

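The shape of the bug: get_phys_addr_lpae() returns true when the walk faults,
so the in_debug path must branch to 'failed' only in that case. A sketch with
the argument list abridged (variable names illustrative):

    bool fault = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
                                    false, &s2, fi);
    if (fault) {
        goto fail;      /* only genuine walk faults take the 'failed' path */
    }
    /* success: continue using the stage-2 result */
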
2023-01-23  target/arm: Fix physical address resolution for MTE  (Richard Henderson)

Conversion to probe_access_full missed applying the page offset.

Fixes: b8967ddf ("target/arm: Use probe_access_full for MTE")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1416
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230114031213.2970349-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

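The fix amounts to remembering that the CPUTLBEntryFull returned by
probe_access_full() holds the physical address of the page, so the in-page
offset of the access has to be added back (a sketch, not the literal patch;
the local variable names are illustrative):

    /* full->phys_addr is page-aligned; the low bits of the virtual address
     * supply the offset within that page. */
    hwaddr ptr_paddr = full->phys_addr | (ptr & ~TARGET_PAGE_MASK);
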
2023-01-23  target/arm/sme: Unify set_pstate() SM/ZA helpers as set_svcr()  (Richard Henderson)

Unify the two helper_set_pstate_{sm,za} helpers into a single set_svcr()
helper. Do not call helper_* functions from svcr_write.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-8-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Rebuild hflags in aarch64_set_svcr()  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-7-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Reset ZA state in aarch64_set_svcr()  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-6-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Reset SVE state in aarch64_set_svcr()  (Richard Henderson)

Move arm_reset_sve_state() calls to aarch64_set_svcr().

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-5-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Introduce aarch64_set_svcr()  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-4-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Rebuild hflags in set_pstate() helpers  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-3-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm/sme: Reorg SME access handling in handle_msr_i()  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-2-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-23  target/arm: Unify checking for M Main Extension in MRS/MSR  (David Reiss)

BASEPRI, FAULTMASK, and their _NS equivalents only exist on devices with the
Main Extension. However, the MRS instruction did not check this, and the MSR
instruction handled it inconsistently (warning for BASEPRI, but silently
ignoring writes to BASEPRI_NS). Unify this behavior and always warn when
reading or writing any of these registers if the extension is not present.

Signed-off-by: David Reiss <dreiss@meta.com>
Message-id: 167330628518.10497.13100425787268927786-0@git.sr.ht
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

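The unified check is the existing ARM_FEATURE_M_MAIN feature test plus the
guest-error logging the v7M MRS/MSR helpers already use; a sketch (message
text and surrounding control flow illustrative, not the literal patch):

    if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
        /* BASEPRI/FAULTMASK and their _NS aliases need the Main Extension */
        qemu_log_mask(LOG_GUEST_ERROR,
                      "Attempt to access BASEPRI/FAULTMASK without "
                      "the Main Extension\n");
        return 0;   /* RAZ for MRS; the MSR direction ignores the write */
    }
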
2023-01-23  target/arm: Widen cnthctl_el2 to uint64_t  (Richard Henderson)

This is a 64-bit register on AArch64, even if the high 44 bits are RES0.
Because this is defined as ARM_CP_STATE_BOTH, we are asserting that the
cpreg field is 64-bits.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1400
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230115171633.3171890-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-13  target/arm: allow writes to SCR_EL3.HXEn bit when FEAT_HCX is enabled  (Evgeny Iakovlev)

ARM trusted firmware, when built with FEAT_HCX support, sets the SCR_EL3.HXEn
bit to allow EL2 to modify the HCRX_EL2 register without trapping it in EL3.
QEMU uses a valid mask to clear unsupported SCR_EL3 bits when emulating a
SCR_EL3 write, and that mask doesn't include the SCR_EL3.HXEn bit even if
FEAT_HCX is enabled and exposed to the guest. As a result, EL3 writes to that
bit are ignored.

Cc: qemu-stable@nongnu.org
Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Message-id: 20230105221251.17896-4-eiakovlev@linux.microsoft.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

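The fix is a small extension of the valid mask computed in scr_write(); a
sketch assuming the existing cpu_isar_feature(aa64_hcx, ...) predicate and an
SCR_HXEN bit definition:

    if (cpu_isar_feature(aa64_hcx, cpu)) {
        valid_mask |= SCR_HXEN;   /* let EL3 set/clear SCR_EL3.HXEn */
    }
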
2023-01-12  target/arm: Fix sve_probe_page  (Richard Henderson)

Don't dereference CPUTLBEntryFull until we verify that the page is valid.
Move the other user-only info field updates after the valid check to match.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1412
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230104190056.305143-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: align exposed ID registers with Linux  (Zhuojia Shen)

In CPUID registers exposed to userspace, some registers were missing and some
fields were not exposed. This patch aligns exposed ID registers and their
fields with what the upstream kernel currently exposes.

Specifically, the following new ID registers/fields are exposed to userspace:

ID_AA64PFR1_EL1.BT: bits 3-0
ID_AA64PFR1_EL1.MTE: bits 11-8
ID_AA64PFR1_EL1.SME: bits 27-24
ID_AA64ZFR0_EL1.SVEver: bits 3-0
ID_AA64ZFR0_EL1.AES: bits 7-4
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
ID_AA64ZFR0_EL1.BF16: bits 23-20
ID_AA64ZFR0_EL1.SHA3: bits 35-32
ID_AA64ZFR0_EL1.SM4: bits 43-40
ID_AA64ZFR0_EL1.I8MM: bits 47-44
ID_AA64ZFR0_EL1.F32MM: bits 55-52
ID_AA64ZFR0_EL1.F64MM: bits 59-56
ID_AA64SMFR0_EL1.F32F32: bit 32
ID_AA64SMFR0_EL1.B16F32: bit 34
ID_AA64SMFR0_EL1.F16F32: bit 35
ID_AA64SMFR0_EL1.I8I32: bits 39-36
ID_AA64SMFR0_EL1.F64F64: bit 48
ID_AA64SMFR0_EL1.I16I64: bits 55-52
ID_AA64SMFR0_EL1.FA64: bit 63
ID_AA64MMFR0_EL1.ECV: bits 63-60
ID_AA64MMFR1_EL1.AFP: bits 47-44
ID_AA64MMFR2_EL1.AT: bits 35-32
ID_AA64ISAR0_EL1.RNDR: bits 63-60
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
ID_AA64ISAR1_EL1.BF16: bits 47-44
ID_AA64ISAR1_EL1.DGH: bits 51-48
ID_AA64ISAR1_EL1.I8MM: bits 55-52
ID_AA64ISAR2_EL1.WFxT: bits 3-0
ID_AA64ISAR2_EL1.RPRES: bits 7-4
ID_AA64ISAR2_EL1.GPA3: bits 11-8
ID_AA64ISAR2_EL1.APA3: bits 15-12

The code is also refactored to use symbolic names for ID register fields for
better readability and maintainability. The test case in
tests/tcg/aarch64/sysregs.c is also updated to match the intended behavior.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Message-id: DS7PR12MB6309FB585E10772928F14271ACE79@DS7PR12MB6309.namprd12.prod.outlook.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: use Sn_n_Cn_Cn_n syntax to work with older assemblers that don't recognize id_aa64isar2_el1 and id_aa64mmfr2_el1]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: cleanup cpu includes  (Claudio Fontana)

Remove some unused headers.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20221213190537.511-7-farosas@suse.de
[added back some includes that are still needed at this point]
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Remove unused includes from helper.c  (Fabiano Rosas)

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-6-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Remove unused includes from m_helper.c  (Fabiano Rosas)

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-5-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Fix checkpatch brace errors in helper.c  (Fabiano Rosas)

Fix this:

  ERROR: braces {} are necessary for all arms of this statement

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-4-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Fix checkpatch space errors in helper.c  (Fabiano Rosas)

Fix the following:

  ERROR: spaces required around that '|' (ctx:VxV)
  ERROR: space required before the open parenthesis '('
  ERROR: spaces required around that '+' (ctx:VxB)
  ERROR: space prohibited between function name and open parenthesis '('

(the last two still have some occurrences in macros which I left behind
because it might impact readability)

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-3-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Fix checkpatch comment style warnings in helper.c  (Fabiano Rosas)

Fix these:

  WARNING: Block comments use a leading /* on a separate line
  WARNING: Block comments use * on subsequent lines
  WARNING: Block comments use a trailing */ on a separate line

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-2-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: fix handling of HLT semihosting in system mode  (Alex Bennée)

The semihosting_enabled() check wants to know whether the guest is currently
in user mode. Unlike the other cases, the test was inverted, causing us to
block semihosting calls in non-EL0 modes.

Cc: qemu-stable@nongnu.org
Fixes: 19b26317e9 (target/arm: Honour -semihosting-config userspace=on)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

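Since commit 19b26317e9, semihosting_enabled() takes an is_user flag, so the
translator must pass "currently at EL0", not its negation. A minimal sketch of
the corrected check (surrounding HLT decode elided):

    /* is_user must be true exactly when the guest is executing at EL0. */
    if (semihosting_enabled(s->current_el == 0)) {
        /* treat this HLT immediate as a semihosting call */
    }
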
2023-01-05  target/arm: Add ARM Cortex-R52 CPU  (Tobias Röhmel)

All constants are taken from the ARM Cortex-R52 Processor TRM, Revision r1p3.

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-8-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Add PMSAv8r functionality  (Tobias Röhmel)

Add PMSAv8r translation.

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Add PMSAv8r registers  (Tobias Röhmel)

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Message-id: 20221206102504.165775-6-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Enable TTBCR_EAE for ARMv8-R AArch32  (Tobias Röhmel)

ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even though they don't
have the TTBCR register. See "Arm Architecture Reference Manual Supplement -
ARMv8, for the ARMv8-R AArch32 architecture profile", Version A.c,
section C1.2.

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-5-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Make stage_2_format for cache attributes optional  (Tobias Röhmel)

The v8R PMSAv8 has a two-stage MPU translation process, but, unlike VMSAv8,
the stage 2 attributes are in the same format as the stage 1 attributes
(8-bit MAIR format). Rather than converting the MAIR format to the format
used for VMSA stage 2 (bits [5:2] of a VMSA stage 2 descriptor) and then
converting back to do the attribute combination, allow combined_attrs_nofwb()
to accept s2 attributes that are already in the MAIR format.

We move the assert() to combined_attrs_fwb(), because that function really
does require a VMSA stage 2 attribute format. (We will never get there for
v8R, because PMSAv8 does not implement FEAT_S2FWB.)

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Make RVBAR available for all ARMv8 CPUs  (Tobias Röhmel)

RVBAR shadows RVBAR_ELx where x is the highest exception level if the highest
EL is not EL3. This patch also allows ARMv8 CPUs to change the reset address
with the rvbar property.

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-3-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Don't add all MIDR aliases for cores that implement PMSA  (Tobias Röhmel)

Cores with PMSA have the MPUIR register which has the same encoding as the
MIDR alias with opc2=4. So we only add that alias if we are not realizing a
core that implements PMSA.

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221206102504.165775-2-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2023-01-05  target/arm: Set lg_page_size to 0 if either S1 or S2 asks for it  (Peter Maydell)

In get_phys_addr_twostage() we set the lg_page_size of the result to the
maximum of the stage 1 and stage 2 page sizes. This works for the case where
we do want to create a TLB entry, because we know the common TLB code only
creates entries of the TARGET_PAGE_SIZE and asking for a size larger than
that only means that invalidations invalidate the whole larger area. However,
if lg_page_size is smaller than TARGET_PAGE_SIZE this effectively means
"don't create a TLB entry"; in this case if either S1 or S2 said "this covers
less than a page and can't go in a TLB" then the final result also should be
marked that way.

Set the resulting page size to 0 if either stage asked for a less-than-a-page
entry, and expand the comment to explain what's going on.

This has no effect for VMSA because currently the VMSA lookup always returns
results that cover at least TARGET_PAGE_SIZE; however when we add v8R support
it will reuse this code path, and for v8R the S1 and S2 results can be
smaller than TARGET_PAGE_SIZE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org

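A sketch of the combination rule in get_phys_addr_twostage() (the local
variable holding the stage-1 size is illustrative): anything below
TARGET_PAGE_BITS from either stage forces 0, otherwise take the maximum.

    /* s1_lgpgsz: stage-1 lg_page_size saved before the stage-2 walk */
    if (result->f.lg_page_size < TARGET_PAGE_BITS ||
        s1_lgpgsz < TARGET_PAGE_BITS) {
        /* One of the stages cannot be cached in the TLB at all. */
        result->f.lg_page_size = 0;
    } else if (result->f.lg_page_size < s1_lgpgsz) {
        result->f.lg_page_size = s1_lgpgsz;
    }
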
2022-12-16  target/arm: Convert to 3-phase reset  (Peter Maydell)

Convert the Arm CPU class to use 3-phase reset, so it doesn't need to use
device_class_set_parent_reset() any more.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20221124115023.2437291-3-peter.maydell@linaro.org

2022-12-15  target/arm: Restrict arm_cpu_exec_interrupt() to TCG accelerator  (Philippe Mathieu-Daudé)

When building with --disable-tcg on Darwin we get:

  target/arm/cpu.c:725:16: error: incomplete definition of type 'struct TCGCPUOps'
      cc->tcg_ops->do_interrupt(cs);
      ~~~~~~~~~~~^

Commit 083afd18a9 ("target/arm: Restrict cpu_exec_interrupt() handler to
sysemu") limited this block to system emulation, but neglected to also limit
it to TCG.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20221209110823.59495-1-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-12-15  hw/misc: Move some arm-related files from specific_ss into softmmu_ss  (Thomas Huth)

The header target/arm/kvm-consts.h checks CONFIG_KVM, which is marked as
poisoned in common code, so the files that include this header have to be
added to specific_ss and recompiled for each of qemu-system-arm and
qemu-system-aarch64.

However, since the kvm headers are only optionally used in kvm-consts.h for
some sanity checks, we can additionally check the NEED_CPU_H macro first to
avoid the poisoned CONFIG_KVM macro, so kvm-consts.h can also be used from
"common" files (without the sanity checks - which should be OK since they are
still done from other target-specific files instead).

This way, and by adjusting some other include statements in the related files
here and there, we can move some files from specific_ss into softmmu_ss, so
that they only need to be compiled once during the build process.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221202154023.293614-1-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-12-15  target/arm: Report FEAT_EVT for TCG '-cpu max'  (Peter Maydell)

Update the ID registers for TCG's '-cpu max' to report the FEAT_EVT Enhanced
Virtualization Traps support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2022-12-15  target/arm: Implement HCR_EL2.TID4 traps  (Peter Maydell)

For FEAT_EVT, the HCR_EL2.TID4 trap allows trapping of the cache ID registers
CCSIDR_EL1, CCSIDR2_EL1, CLIDR_EL1 and CSSELR_EL1 (and their AArch32
equivalents). This is a subset of the registers trapped by HCR_EL2.TID2,
which includes all of these and also the CTR_EL0 register.

Our implementation already uses a separate access function for CTR_EL0
(ctr_el0_access()), so all of the registers currently using
access_aa64_tid2() should also be checking TID4. Make that function check
both TID2 and TID4, and rename it appropriately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

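The renamed access function then only needs to test both trap bits; a sketch
of the shape (function name and the HCR_TID4 bit definition are assumptions
introduced by this series, the rest are existing QEMU helpers):

    static CPAccessResult access_tid4(CPUARMState *env,
                                      const ARMCPRegInfo *ri, bool isread)
    {
        if (arm_current_el(env) == 1 &&
            (arm_hcr_el2_eff(env) & (HCR_TID2 | HCR_TID4))) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }
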
2022-12-15  target/arm: Implement HCR_EL2.TICAB,TOCU traps  (Peter Maydell)

For FEAT_EVT, the HCR_EL2.TICAB bit allows trapping of the AArch32 ICIALLUIS
and AArch64 IC IALLUIS cache maintenance instructions. The HCR_EL2.TOCU bit
traps all the other cache maintenance instructions that operate to the point
of unification:

  AArch64: IC IVAU, IC IALLU, DC CVAU
  AArch32: ICIMVAU, ICIALLU, DCCMVAU

The two trap bits between them cover all of the cache maintenance
instructions which must also check the HCR_TPU flag. Turn the old
aa64_cacheop_pou_access() function into a helper function which takes the set
of HCR_EL2 flags to check as an argument, and call it from new access_ticab()
and access_tocu() functions as appropriate for each cache op.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2022-12-15  target/arm: Implement HCR_EL2.TTLBOS traps  (Peter Maydell)

For FEAT_EVT, the HCR_EL2.TTLBOS bit allows trapping on EL1 use of TLB
maintenance instructions that operate on the outer shareable domain:

  TLBI VMALLE1OS, TLBI VAE1OS, TLBI ASIDE1OS, TLBI VAAE1OS, TLBI VALE1OS,
  TLBI VAALE1OS, TLBI RVAE1OS, TLBI RVAAE1OS, TLBI RVALE1OS, and
  TLBI RVAALE1OS.

(There are no AArch32 outer-shareable TLB maintenance ops.)

Implement the trapping.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2022-12-15  target/arm: Implement HCR_EL2.TTLBIS traps  (Peter Maydell)

For FEAT_EVT, the HCR_EL2.TTLBIS bit allows trapping on EL1 use of TLB
maintenance instructions that operate on the inner shareable domain:

  AArch64: TLBI VMALLE1IS, TLBI VAE1IS, TLBI ASIDE1IS, TLBI VAAE1IS,
           TLBI VALE1IS, TLBI VAALE1IS, TLBI RVAE1IS, TLBI RVAAE1IS,
           TLBI RVALE1IS, and TLBI RVAALE1IS.
  AArch32: TLBIALLIS, TLBIMVAIS, TLBIASIDIS, TLBIMVAAIS, TLBIMVALIS, and
           TLBIMVAALIS.

Add the trapping support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

2022-12-15  target/arm: Allow relevant HCR bits to be written for FEAT_EVT  (Peter Maydell)

FEAT_EVT adds five new bits to the HCR_EL2 register: TTLBIS, TTLBOS, TICAB,
TOCU and TID4. These allow the guest to enable trapping of various EL1
instructions to EL2. In this commit, add the necessary code to allow the
guest to set these bits if the feature is present; because the bit is always
zero when the feature isn't present we won't need to use explicit feature
checks in the "trap on condition" tests in the following commits.

Note that although full implementation of the feature (mandatory from Armv8.5
onward) requires all five trap bits, the ID registers permit a value
indicating that only TICAB, TOCU and TID4 are implemented, which might be the
case for CPUs between Armv8.2 and Armv8.5.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

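A sketch of the corresponding hcr_write() valid-mask change (the two
cpu_isar_feature() predicate names and the HCR_* bit macros for the new bits
are assumptions; the two-level structure mirrors the "full EVT" versus
"TICAB/TOCU/TID4 only" ID register encodings described above):

    if (cpu_isar_feature(any_evt, cpu)) {
        valid_mask |= HCR_TTLBIS | HCR_TTLBOS | HCR_TICAB | HCR_TOCU | HCR_TID4;
    } else if (cpu_isar_feature(any_half_evt, cpu)) {
        valid_mask |= HCR_TICAB | HCR_TOCU | HCR_TID4;
    }
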
2022-12-15  target/arm: Add Cortex-A55 CPU  (Timofey Kutergin)

The Cortex-A55 is one of the newer armv8.2+ CPUs; in particular it supports
the Privileged Access Never (PAN) feature. Add a model of this CPU, so you
can use a CPU type on the virt board that models a specific real hardware
CPU, rather than having to use the QEMU-specific "max" CPU type.

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Message-id: 20221121150819.2782817-1-tkutergin@gmail.com
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2022-12-14  qapi machine: Elide redundant has_FOO in generated C  (Markus Armbruster)

The has_FOO for pointer-valued FOO are redundant, except for arrays. They are
also a nuisance to work with. Recent commit "qapi: Start to elide redundant
has_FOO in generated C" provided the means to elide them step by step. This
is the step for qapi/machine*.json.

Said commit explains the transformation in more detail. The invariant
violations mentioned there do not occur here.

Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221104160712.3005652-16-armbru@redhat.com>

2022-11-29  target/arm: Set TCGCPUOps.restore_state_to_opc for v7m  (Evgeny Ermakov)

This setting got missed, breaking v7m.

Fixes: 56c6c98df85c ("target/arm: Convert to tcg_ops restore_state_to_opc")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1347
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20221129204146.550394-1-richard.henderson@linaro.org>

2022-11-22  target/arm: Use signed quantity to represent VMSAv8-64 translation level  (Ard Biesheuvel)

The LPA2 extension implements 52-bit virtual addressing for 4k and 16k
translation granules, and for the former, this means an additional level of
translation is needed. This means we start counting at -1 instead of 0 when
doing a walk, and so 'level' is now a signed quantity, and should be typed as
such. So turn it from uint32_t into int32_t.

This avoids a level of -1 getting misinterpreted as being >= 3, and
terminating a page table walk prematurely with a bogus output address.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

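The failure mode is worth spelling out: with an unsigned type, a starting
level of -1 wraps to a huge value, so range checks against the final level
fire and the walk aborts. A minimal illustration (the check shown is
representative, not the literal code in the walker):

    int32_t level = start_level;   /* may legitimately be -1 with LPA2 4k/16k */

    /* With uint32_t, level == -1 would wrap to 0xffffffff and this kind of
     * bounds check would wrongly treat a valid walk as out of range. */
    if (level > 3) {
        /* fault: walk ran past the last translation level */
    }
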
2022-11-22  target/arm: Don't do two-stage lookup if stage 2 is disabled  (Peter Maydell)

In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if the CPU
supports EL2. However, we don't check here that stage 2 is actually enabled.
Instead we only check that inside get_phys_addr_twostage() to skip stage 2
translation. This means that even if stage 2 is disabled we still tell the
stage 1 lookup to do its page table walks via stage 2.

This works by luck for normal CPU accesses, but it breaks for debug accesses,
which are used by the disassembler and also by semihosting file reads and
writes, because the debug case takes a different code path inside
S1_ptw_translate(). This means that setups that use semihosting for file
loads are broken (a regression since 7.1, introduced in recent ptw
refactoring), and that sometimes disassembly in debug logs reports "unable to
read memory" rather than showing the guest insns.

Fix the bug by hoisting the "is stage 2 enabled?" check up to
get_phys_addr_with_struct(), so that we handle S2 disabled the same way we do
the "no EL2" case, with a simple single stage lookup.

Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org

2022-11-21  target/arm: Limit LPA2 effective output address when TCR.DS == 0  (Ard Biesheuvel)

With LPA2, the effective output address size is at most 48 bits when
TCR.DS == 0. This case is currently unhandled in the page table walker, where
we happily assume LVA/64k granule when outputsize > 48 and param.ds == 0,
resulting in the wrong conversion to be used from a page table descriptor to
a physical address:

    if (outputsize > 48) {
        if (param.ds) {
            descaddr |= extract64(descriptor, 8, 2) << 50;
        } else {
            descaddr |= extract64(descriptor, 12, 4) << 48;
        }
    }

So cap the outputsize to 48 when TCR.DS is cleared, as per the architecture.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221116170316.259695-1-ardb@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

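The cap itself is a small clamp applied before the conversion quoted above; a
minimal sketch matching the description (exact placement and any granule
qualification left to the actual patch):

    /* With LPA2, the effective output address size is at most 48 bits
     * whenever TCR.DS == 0, so clamp before picking the conversion above. */
    if (!param.ds && outputsize > 48) {
        outputsize = 48;
    }
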
2022-11-04  target/arm: Two fixes for secure ptw  (Richard Henderson)

Reversed the sense of non-secure in get_phys_addr_lpae, and failed to
initialize attrs.secure for ARMMMUIdx_Phys_S.

Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>