path: root/target
2024-01-10  target/riscv/kvm: add RISCV_CONFIG_REG()  (Daniel Henrique Barboza)
Create a RISCV_CONFIG_REG() macro, similar to what other regs use, to hide away some of the boilerplate. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-5-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
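As an illustration of the kind of macro this adds (an editor's sketch following the pattern of the other KVM reg macros in target/riscv/kvm, not the exact hunk from the patch; the "isa" config reg is used as an example caller):

    #define RISCV_CONFIG_REG(env, name) \
        kvm_riscv_reg_id(env, KVM_REG_RISCV_CONFIG, \
                         KVM_REG_RISCV_CONFIG_REG(name))

    /* callers can then say, for instance: */
    uint64_t id = RISCV_CONFIG_REG(env, isa);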
2024-01-10  target/riscv/kvm: change timer regs size to u64  (Daniel Henrique Barboza)
KVM_REG_RISCV_TIMER regs are always u64 according to the KVM API, but at this moment we'll return u32 regs if we're running a RISCV32 target. Use the kvm_riscv_reg_id_u64() helper in RISCV_TIMER_REG() to fix it. Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-4-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
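A sketch of what the u64-sized ID helper and the fixed macro look like (names follow the surrounding KVM code and the kernel uapi constants; treat the details as illustrative rather than the verbatim patch):

    static uint64_t kvm_riscv_reg_id_u64(uint64_t type, uint64_t idx)
    {
        /* always encode a 64-bit register size, regardless of XLEN */
        return KVM_REG_RISCV | KVM_REG_SIZE_U64 | type | idx;
    }

    #define RISCV_TIMER_REG(name) \
        kvm_riscv_reg_id_u64(KVM_REG_RISCV_TIMER, \
                             KVM_REG_RISCV_TIMER_REG(name))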
2024-01-10  target/riscv/kvm: change KVM_REG_RISCV_FP_D to u64  (Daniel Henrique Barboza)
KVM_REG_RISCV_FP_D regs are always u64 size. Using kvm_riscv_reg_id() in RISCV_FP_D_REG() ends up encoding the wrong size if we're running with TARGET_RISCV32. Create a new helper that returns a KVM ID with u64 size and use it with RISCV_FP_D_REG(). Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-3-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10  target/riscv/kvm: change KVM_REG_RISCV_FP_F to u32  (Daniel Henrique Barboza)
KVM_REG_RISCV_FP_F regs have u32 size according to the API, but by using kvm_riscv_reg_id() in RISCV_FP_F_REG() we return u64 sizes when running with TARGET_RISCV64. The most likely reason no one noticed this is that we do not implement kvm_cpu_synchronize_state() in RISC-V yet. Create a new helper that returns a KVM ID with u32 size and use it in RISCV_FP_F_REG(). Reported-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Message-ID: <20231208183835.2411523-2-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
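The u32 counterpart, sketched the same way as the u64 helper above (illustrative, not the verbatim patch):

    static uint64_t kvm_riscv_reg_id_u32(uint64_t type, uint64_t idx)
    {
        /* always encode a 32-bit register size, regardless of XLEN */
        return KVM_REG_RISCV | KVM_REG_SIZE_U32 | type | idx;
    }

    #define RISCV_FP_F_REG(idx) \
        kvm_riscv_reg_id_u32(KVM_REG_RISCV_FP_F, idx)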
2024-01-10  target/riscv/cpu.c: fix machine IDs getters  (Daniel Henrique Barboza)
mvendorid is a uint32 property; mimpid and marchid are uint64 properties. But their getters return bools. The reason this went under the radar for so long is that no code uses the getters. The problem can be seen via the 'qom-get' API, though. Launching QEMU with the 'veyron-v1' CPU, a model with:

    VEYRON_V1_MVENDORID: 0x61f (1567)
    VEYRON_V1_MIMPID:    0x111 (273)
    VEYRON_V1_MARCHID:   0x8000000000010000 (9223372036854841344)

This is what the API returns when retrieving these properties:

    (qemu) qom-get /machine/soc0/harts[0] mvendorid
    true
    (qemu) qom-get /machine/soc0/harts[0] mimpid
    true
    (qemu) qom-get /machine/soc0/harts[0] marchid
    true

After this patch:

    (qemu) qom-get /machine/soc0/harts[0] mvendorid
    1567
    (qemu) qom-get /machine/soc0/harts[0] mimpid
    273
    (qemu) qom-get /machine/soc0/harts[0] marchid
    9223372036854841344

Fixes: 1e34150045 ("target/riscv/cpu.c: restrict 'mvendorid' value") Fixes: a1863ad368 ("target/riscv/cpu.c: restrict 'mimpid' value") Fixes: d6a427e2c0 ("target/riscv/cpu.c: restrict 'marchid' value") Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20231211170732.2541368-1-dbarboza@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
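A sketch of what a corrected getter looks like, using QOM's visitor interface to hand back the full-width value instead of returning a bool (function and field names here are illustrative, not necessarily the exact ones in the patch):

    static void cpu_get_marchid(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
    {
        RISCVCPU *cpu = RISCV_CPU(obj);
        uint64_t value = cpu->cfg.marchid;   /* 64-bit value, not a bool */

        visit_type_uint64(v, name, &value, errp);
    }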
2024-01-10  target/riscv/pmp: Use hwaddr instead of target_ulong for RV32  (Ivan Klokov)
The Sv32 page-based virtual-memory scheme described in the RISC-V privileged spec, Section 5.3, supports 34-bit physical addresses for RV32, so the PMP scheme must support addresses wider than XLEN for RV32. However, the PMP address register format is still 32 bits wide. Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20231123091214.20312-1-ivan.klokov@syntacore.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
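A minimal illustration of why the wider type matters (variable names here are hypothetical): the PMP address CSR holds bits [33:2] of the physical address on RV32, so the shift must not be performed in a 32-bit type.

    uint32_t pmpaddr = 0xffffffff;            /* 32-bit PMP address CSR value  */
    hwaddr   paddr   = (hwaddr)pmpaddr << 2;  /* 0x3fffffffc: a 34-bit address */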
2024-01-10  target/riscv: Do not allow writing mstatus_vs without RVV  (LIU Zhiwei)
If the CPU does not implement the Vector extension, it usually means that mstatus.VS is hardwired to zero, so we should not allow a non-zero value to be written to this field. Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20231215023313.1708-1-zhiwei_liu@linux.alibaba.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
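Conceptually, the write mask for mstatus only includes the VS field when the V extension is present; a sketch of the idea (not the literal patch):

    /* in write_mstatus(): only let the VS field be written if RVV exists */
    if (riscv_has_ext(env, RVV)) {
        mask |= MSTATUS_VS;
    }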
2024-01-10  target/riscv: Fix th.dcache.cval1 privilege check  (LIU Zhiwei)
According to the specification, th.dcache.cval1 can be executed at all privilege levels. The specification for xtheadcmo is located at https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadcmo/dcache_cval1.adoc Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Christoph Muellner <christoph.muellner@vrull.eu> Message-ID: <20231208094315.177-1-zhiwei_liu@linux.alibaba.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10  target/riscv: The whole vector register move instructions depend on vsew  (Max Chou)
Section 16.6 of the RISC-V V spec says that the whole vector register move instructions operate as if EEW=SEW, so they should depend on the vsew field of the vtype register. Signed-off-by: Max Chou <max.chou@sifive.com> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20231129170400.21251-3-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10  target/riscv: Add vill check for whole vector register move instructions  (Max Chou)
The ratified version of the RISC-V V spec, section 16.6, says that `The instructions operate as if EEW=SEW`. The whole vector register move instructions therefore depend on the vtype register, which means they should raise an illegal-instruction exception when vtype.vill=1. Signed-off-by: Max Chou <max.chou@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-ID: <20231129170400.21251-2-max.chou@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
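A sketch of the kind of check involved in the vmv<nr>r.v translation functions (the helper names are the ones used in trans_rvv.c.inc, but treat the snippet as illustrative rather than the exact patch):

    /* whole-register moves act as if EEW=SEW, so vtype must be valid:
     * refuse to translate (raising illegal instruction) if vill is set */
    if (!require_rvv(s) || !vext_check_isa_ill(s)) {
        return false;
    }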
2024-01-09  target/arm: Add FEAT_NV2 to max, neoverse-n2, neoverse-v1 CPUs  (Peter Maydell)
Enable FEAT_NV2 on the 'max' CPU, and stop filtering it out for the Neoverse N2 and Neoverse V1 CPUs. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Enhance CPU_LOG_INT to show SPSR on AArch64 exception-entry  (Peter Maydell)
We already print various lines of information when we take an exception, including the ELR and (if relevant) the FAR. Now that FEAT_NV means that we might report something other than the old PSTATE to the guest as the SPSR, it's worth logging this as well. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Report HCR_EL2.{NV,NV1,NV2} in cpu dumps  (Peter Maydell)
When interpreting CPU dumps where FEAT_NV and FEAT_NV2 are in use, it's helpful to include the values of HCR_EL2.{NV,NV1,NV2} in the CPU dump format, as a way of distinguishing when we are in EL1 as part of executing guest-EL2 and when we are just in normal EL1. Add the bits to the end of the log line that shows PSTATE and similar information: PSTATE=000003c9 ---- EL2h BTYPE=0 NV NV2 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Mark up VNCR offsets (offsets >= 0x200, except GIC)  (Peter Maydell)
Mark up the cpreginfo structs to indicate offsets for system registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in the Arm ARM. This covers all the remaining offsets at 0x200 and above, except for the GIC ICH_* registers. (Note that because we don't implement FEAT_SPE, FEAT_TRF, FEAT_MPAM, FEAT_BRBE or FEAT_AMUv1p1 we don't implement any of the registers that use offsets at 0x800 and above.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Mark up VNCR offsets (offsets 0x168..0x1f8)  (Peter Maydell)
Mark up the cpreginfo structs to indicate offsets for system registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in the Arm ARM. This commit covers offsets 0x168 to 0x1f8. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Mark up VNCR offsets (offsets 0x100..0x160)  (Peter Maydell)
Mark up the cpreginfo structs to indicate offsets for system registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in the Arm ARM. This commit covers offsets 0x100 to 0x160. Many (but not all) of the registers in this range have _EL12 aliases, and the slot in memory is shared between the _EL12 version of the register and the _EL1 version. Where we programmatically generate the regdef for the _EL12 register, arrange that its nv2_redirect_offset is set up correctly to do this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
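For illustration, a marked-up regdef looks roughly like the sketch below. The nv2_redirect_offset field is the one introduced by this series; the 0x110 offset shown for SCTLR_EL1 is read from table D8-66 and is given purely as an example, and the real entry has more fields (and, where relevant, an extra flag saying which HCR_EL2.NV1 value the redirect applies for).

    { .name = "SCTLR_EL1", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
      .access = PL1_RW,
      .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[1]),
      .nv2_redirect_offset = 0x110 },   /* VNCR_EL2 offset per D8-66 */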
2024-01-09  target/arm: Mark up VNCR offsets (offsets 0x0..0xff)  (Peter Maydell)
Mark up the cpreginfo structs to indicate offsets for system registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in the Arm ARM. This commit covers offsets below 0x100; all of these registers are redirected to memory regardless of the value of HCR_EL2.NV1. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Report VNCR_EL2 based faults correctly  (Peter Maydell)
If FEAT_NV2 redirects a system register access to a memory offset from VNCR_EL2, that access might fault. In this case we need to report the correct syndrome information:
 * Data Abort, from same-EL
 * no ISS information
 * the VNCR bit (bit 13) is set
and the exception must be taken to EL2. Save an appropriate syndrome template when generating code; we can then use that to:
 * select the right target EL
 * reconstitute a correct final syndrome for the data abort
 * report the right syndrome if we take a FEAT_RME granule protection fault on the VNCR-based write
Note that because VNCR is bit 13, we must start keeping bit 13 in template syndromes, by adjusting ARM_INSN_START_WORD2_SHIFT. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Implement FEAT_NV2 redirection of sysregs to RAM  (Peter Maydell)
FEAT_NV2 requires that when HCR_EL2.{NV,NV2} == 0b11, accesses by EL1 to certain system registers are redirected to RAM. The full list of affected registers is in the table in rule R_CSRPQ in the Arm ARM. The registers may be normally accessible at EL1 (like ACTLR_EL1), or normally UNDEF at EL1 (like HCR_EL2). Some registers redirect to RAM only when HCR_EL2.NV1 is 0, and some only when HCR_EL2.NV1 is 1; others are redirected in both cases. Add the infrastructure for identifying which registers should be redirected and turning them into memory accesses. This code does not set the correct syndrome or arrange for the exception to be taken to the correct target EL if the access via VNCR_EL2 faults; we will do that in the next commit. Subsequent commits will mark up the relevant regdefs to set their nv2_redirect_offset, and if relevant one of the two flags which indicates that the redirect happens only for a particular value of HCR_EL2.NV1. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-09  target/arm: Handle FEAT_NV2 redirection of SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2  (Peter Maydell)
Under FEAT_NV2, when HCR_EL2.{NV,NV2} == 0b11 at EL1, accesses to the registers SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2 and TFSR_EL2 (which would UNDEF without FEAT_NV or FEAT_NV2) should instead access the equivalent EL1 registers SPSR_EL1, ELR_EL1, ESR_EL1, FAR_EL1 and TFSR_EL1. Because there are only five registers involved and the encoding for the EL1 register is identical to that of the EL2 register except that opc1 is 0, we handle this by finding the EL1 register in the hash table and using it instead. Note that traps that apply to direct accesses to the EL1 register, such as active fine-grained traps or other trap bits, do not trigger when it is accessed via the EL2 encoding in this way. However, some traps that are defined by the EL2 register may apply. We therefore call the EL2 register's accessfn first. The only one of the five which has such traps is TFSR_EL2: make sure its accessfn correctly handles both FEAT_NV (where we trap to EL2 without checking ATA bits) and FEAT_NV2 (where we check ATA bits and then redirect to TFSR_EL1). (We don't need the NV1 tbflag bit until the next patch, but we introduce it here to avoid putting the NV, NV1, NV2 bits in an odd order.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
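A sketch of the lookup trick described above (the macro and lookup helper are QEMU's existing cpreg utilities; the surrounding context is simplified and the local names are illustrative):

    /* The EL1 counterpart has the same encoding except that opc1 == 0,
     * so rebuild the hash key with opc1 forced to 0 and look that up. */
    uint32_t key_el1 = ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP,
                                          crn, crm, op0, 0, op2);
    const ARMCPRegInfo *ri_el1 = get_arm_cp_reginfo(s->cp_regs, key_el1);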
2024-01-09  target/arm: Handle FEAT_NV2 changes to when SPSR_EL1.M reports EL2  (Peter Maydell)
With FEAT_NV2, the condition for when SPSR_EL1.M should report that an exception was taken from EL2 changes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Implement VNCR_EL2 register  (Peter Maydell)
For FEAT_NV2, a new system register VNCR_EL2 holds the base address of the memory which nested-guest system register accesses are redirected to. Implement this register. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Handle HCR_EL2 accesses for FEAT_NV2 bits  (Peter Maydell)
FEAT_NV2 defines another new bit in HCR_EL2: NV2. When the feature is enabled, allow this bit to be written in HCR_EL2. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Add FEAT_NV to max, neoverse-n2, neoverse-v1 CPUs  (Peter Maydell)
Enable FEAT_NV on the 'max' CPU, and stop filtering it out for the Neoverse N2 and Neoverse V1 CPUs. We continue to downgrade FEAT_NV2 support to FEAT_NV for the latter two CPU types. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Handle FEAT_NV page table attribute changes  (Peter Maydell)
FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,1} the handling of some of the page table attribute bits changes for the EL1&0 translation regime:
 * for block and page descriptors:
   - bit [54] holds PXN, not UXN
   - bit [53] is RES0, and the effective value of UXN is 0
   - bit [6], AP[1], is treated as 0
 * for table descriptors, when hierarchical permissions are enabled:
   - bit [60] holds PXNTable, not UXNTable
   - bit [59] is RES0
   - bit [61], APTable[0] is treated as 0
Implement these changes to the page table attribute handling. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
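An editor's sketch of the block/page-descriptor part of this (the variable names are hypothetical; the real change lives in the long-descriptor attribute decoding, presumably around get_phys_addr_lpae()):

    if (nv1_in_el10_regime) {                 /* HCR_EL2.{NV,NV1} == {1,1} */
        pxn = extract64(descriptor, 54, 1);   /* bit [54] is PXN, not UXN  */
        uxn = 0;                              /* bit [53] treated as RES0  */
        ap1 = 0;                              /* AP[1] treated as 0        */
    }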
2024-01-09  target/arm: Treat LDTR* and STTR* as LDR/STR when NV, NV1 is 1, 1  (Peter Maydell)
FEAT_NV requires (per I_JKLJK) that when HCR_EL2.{NV,NV1} is {1,1} the unprivileged-access instructions LDTR, STTR etc behave as normal loads and stores. Implement the check that handles this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Don't honour PSTATE.PAN when HCR_EL2.{NV, NV1} == {1, 1}  (Peter Maydell)
For FEAT_NV, when HCR_EL2.{NV,NV1} is {1,1} PAN is always disabled even when the PSTATE.PAN bit is set. Implement this by having arm_pan_enabled() return false in this situation. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
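A sketch of arm_pan_enabled() with the FEAT_NV condition folded in (close to, but not necessarily identical to, the final helper):

    static bool arm_pan_enabled(CPUARMState *env)
    {
        if (is_a64(env)) {
            if ((arm_hcr_el2_eff(env) & (HCR_NV | HCR_NV1))
                == (HCR_NV | HCR_NV1)) {
                return false;              /* FEAT_NV: PAN forced off */
            }
            return env->pstate & PSTATE_PAN;
        } else {
            return env->uncached_cpsr & CPSR_PAN;
        }
    }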
2024-01-09  target/arm: Always use arm_pan_enabled() when checking if PAN is enabled  (Peter Maydell)
Currently the code in target/arm/helper.c mostly checks the PAN bits in env->pstate or env->uncached_cpsr directly when it wants to know if PAN is enabled, because in most callsites we know whether we are in AArch64 or AArch32. We do have an arm_pan_enabled() function, but we only use it in a few places where the code might run in either an AArch32 or AArch64 context. For FEAT_NV, where PAN is always disabled when HCR_EL2.{NV,NV1} is {1,1} even if the PSTATE.PAN bit is set, the "is PAN enabled" test becomes more complicated. Make all places that check for PAN use arm_pan_enabled(), so we have a single place to put the FEAT_NV test. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Trap registers when HCR_EL2.{NV, NV1} == {1, 1}  (Peter Maydell)
When HCR_EL2.{NV,NV1} is {1,1} we must trap five extra registers to EL2: VBAR_EL1, ELR_EL1, SPSR_EL1, SCXTNUM_EL1 and TFSR_EL1. Implement these traps. This trap does not apply when FEAT_NV2 is implemented and enabled; include the check that HCR_EL2.NV2 is 0 here, to save us having to come back and add it later. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Set SPSR_EL1.M correctly when nested virt is enabled  (Peter Maydell)
FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,0} and an exception is taken from EL1 to EL1 then the reported EL in SPSR_EL1.M should be EL2, not EL1. Implement this behaviour. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Make NV reads of CurrentEL return EL2  (Peter Maydell)
FEAT_NV requires that when HCR_EL2.NV is set reads of the CurrentEL register from EL1 always report EL2 rather than the real EL. Implement this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
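The required behaviour can be summarised as below (a behavioural sketch only; in QEMU, CurrentEL reads are actually special-cased at translate time rather than going through a runtime readfn):

    /* CurrentEL holds the exception level in bits [3:2] */
    unsigned el = arm_current_el(env);
    if (el == 1 && (arm_hcr_el2_eff(env) & HCR_NV)) {
        el = 2;       /* FEAT_NV: EL1 reads of CurrentEL report EL2 */
    }
    return el << 2;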
2024-01-09  target/arm: Trap sysreg accesses for FEAT_NV  (Peter Maydell)
For FEAT_NV, accesses to system registers and instructions from EL1 which would normally UNDEF there but which work in EL2 need to instead be trapped to EL2. Detect this both for "we know this will UNDEF at translate time" and "we found out at runtime that this UNDEFs", and make the affected registers trap to EL2 instead. The Arm ARM defines the set of registers that should trap in terms of their names; for our implementation this would be both awkward and inefficient as a test, so we instead trap based on the opc1 field of the sysreg. The regularity of the architectural choice of encodings for sysregs means that in practice this captures exactly the correct set of registers. Regardless of how we try to define the registers this trapping applies to, there's going to be a certain possibility of breakage if new architectural features introduce new registers that don't follow the current rules (FEAT_MEC is one example already visible in the released sysreg XML, though not yet in the Arm ARM). This approach seems to me to be straightforward and likely to require a minimum of manual overrides. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
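A sketch of the translate-time side of this (the syndrome and exception helpers are the existing QEMU ones; the surrounding logic is simplified, and the s->nv flag is the DisasContext bit this series introduces):

    /* In handle_sys(): an EL1 access that would UNDEF is routed to EL2
     * instead when HCR_EL2.NV is set and the encoding is an EL2 one
     * (opc1 == 4 or 5). */
    if (s->nv && (op1 == 4 || op1 == 5)) {
        gen_exception_insn_el(s, 0, EXCP_UDEF,
                              syn_aa64_sysregtrap(op0, op1, op2,
                                                  crn, crm, rt, isread),
                              2);
        return;
    }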
2024-01-09  target/arm: Move FPU/SVE/SME access checks up above ARM_CP_SPECIAL_MASK check  (Peter Maydell)
In handle_sys() we don't do the check for whether the register is marked as needing an FPU/SVE/SME access check until after we've handled the special cases covered by ARM_CP_SPECIAL_MASK. This is conceptually the wrong way around, because if for example we happen to implement an FPU-access-checked register as ARM_CP_NOP, we should do the access check first. Move the access checks up so they are with all the other access checks, not sandwiched between the special-case read/write handling and the normal-case read/write handling. This doesn't change behaviour at the moment, because we happen not to define any cpregs with both ARM_CP_{FPU,SVE,SME} and one of the cases dealt with by ARM_CP_SPECIAL_MASK. Moving this code also means we have the correct place to put the FEAT_NV/FEAT_NV2 access handling, which should come after the access checks and before we try to do any read/write action. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Make EL2 cpreg accessfns safe for FEAT_NV EL1 accesses  (Peter Maydell)
FEAT_NV and FEAT_NV2 will allow EL1 to attempt to access cpregs that only exist at EL2. This means we're going to want to run their accessfns when the CPU is at EL1. In almost all cases, the behaviour we want is "the accessfn returns OK if at EL1". Mostly the accessfn already does the right thing; in a few cases we need to explicitly check that the EL is not 1 before applying various trap controls, or split out an accessfn used both for an _EL1 and an _EL2 register into two so we can handle the FEAT_NV case correctly for the _EL2 register. There are two registers where we want the accessfn to trap for a FEAT_NV EL1 access: VSTTBR_EL2 and VSTCR_EL2 should UNDEF an access from NonSecure EL1, not trap to EL2 under FEAT_NV. The way we have written sel2_access() already results in this behaviour. We can identify the registers we care about here because they all have opc1 == 4 or 5. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: *_EL12 registers should UNDEF when HCR_EL2.E2H is 0  (Peter Maydell)
The alias registers like SCTLR_EL12 only exist when HCR_EL2.E2H is 1; they should UNDEF otherwise. We weren't implementing this. Add an intercept of the accessfn for these aliases, and implement the UNDEF check. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Record correct opcode fields in cpreg for E2H aliases  (Peter Maydell)
For FEAT_VHE, we define a set of register aliases, so that for instance:
 * the SCTLR_EL1 encoding either accesses the real SCTLR_EL1, or (if E2H is 1) SCTLR_EL2
 * a new SCTLR_EL12 register accesses SCTLR_EL1 if E2H is 1
However, when we create the 'new_reg' cpreg struct for the SCTLR_EL12 register, we duplicate the information in the SCTLR_EL1 cpreg, which means the opcode fields are those of SCTLR_EL1, not SCTLR_EL12. This is a problem for code which looks at the cpreg opcode fields to determine behaviour (e.g. in access_check_cp_reg()). In practice the current checks we do there don't intersect with the *_EL12 registers, but for FEAT_NV this will become a problem. Write the correct values from the encoding into the new_reg struct. This restores the invariant that the cpreg that you get back from the hashtable has opcode fields that match the key you used to retrieve it. When we call the readfn or writefn for the target register, we pass it the cpreg struct for that target register, not the one for the alias, in case the readfn/writefn want to look at the opcode fields to determine behaviour. This means we need to interpose custom read/writefns for the e12 aliases. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Allow use of upper 32 bits of TBFLAG_A64  (Peter Maydell)
The TBFLAG_A64 TB flag bits go in flags2, which for AArch64 guests we know is 64 bits. However at the moment we use FIELD_EX32() and FIELD_DP32() to read and write these bits, which only works for bits 0 to 31. Since we're about to add a flag that uses bit 32, switch to FIELD_EX64() and FIELD_DP64() so that this will work. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
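The mechanical change in the accessor macros looks like the before/after sketch below (based on the TBFLAG helpers in cpu.h; shown as two variants of the same macro for illustration):

    /* before: 32-bit accessor, so bits >= 32 of flags2 are unreachable */
    #define EX_TBFLAG_A64(IN, WHICH)  FIELD_EX32(IN.flags2, TBFLAG_A64, WHICH)

    /* after: 64-bit accessor, so a new flag can live at bit 32 and above */
    #define EX_TBFLAG_A64(IN, WHICH)  FIELD_EX64(IN.flags2, TBFLAG_A64, WHICH)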
2024-01-09  target/arm: Always honour HCR_EL2.TSC when HCR_EL2.NV is set  (Peter Maydell)
The HCR_EL2.TSC trap for trapping EL1 execution of SMC instructions has a behaviour change for FEAT_NV when EL3 is not implemented:
 * in older architecture versions TSC was required to have no effect (i.e. the SMC insn UNDEFs)
 * with FEAT_NV, when HCR_EL2.NV == 1 the trap must apply (i.e. SMC traps to EL2, as it already does in all cases when EL3 is implemented)
 * in newer architecture versions, the behaviour either without FEAT_NV or with FEAT_NV and HCR_EL2.NV == 0 is relaxed to an IMPDEF choice between UNDEF and trap-to-EL2 (i.e. it is permitted to always honour HCR_EL2.TSC) for AArch64 only
Add the condition to honour the trap bit when HCR_EL2.NV == 1. We leave the HCR_EL2.NV == 0 case with the existing (UNDEF) behaviour, as our IMPDEF choice (both because it avoids a behaviour change for older CPU models and because we'd have to distinguish AArch32 from AArch64 if we opted to trap to EL2). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
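The resulting condition, roughly (a simplification of the check made when an SMC is executed at EL1; the constants and helpers are QEMU's existing ones, the local variables are hypothetical):

    /* Honour HCR_EL2.TSC if EL3 exists (as before), or, new with
     * FEAT_NV, if HCR_EL2.NV is also set even without EL3. */
    if (cur_el == 1 && (hcr & HCR_TSC) &&
        (arm_feature(env, ARM_FEATURE_EL3) || (hcr & HCR_NV))) {
        /* SMC traps to EL2 */
    }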
2024-01-09  target/arm: Enable trapping of ERET for FEAT_NV  (Peter Maydell)
When FEAT_NV is turned on via the HCR_EL2.NV bit, ERET instructions are trapped, with the same syndrome information as for the existing FEAT_FGT fine-grained trap (in the pseudocode this is handled in AArch64.CheckForEretTrap()). Rename the DisasContext and tbflag bits to reflect that they are no longer exclusively for FGT traps, and set the tbflag bit when FEAT_NV is enabled as well as when the FGT is enabled. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Implement HCR_EL2.AT handling  (Peter Maydell)
The FEAT_NV HCR_EL2.AT bit enables trapping of some address translation instructions from EL1 to EL2. Implement this behaviour. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NV  (Peter Maydell)
FEAT_NV defines three new bits in HCR_EL2: NV, NV1 and AT. When the feature is enabled, allow these bits to be written, and flush the TLBs for the bits which affect page table interpretation. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09  target/arm: Set CTR_EL0.{IDC,DIC} for the 'max' CPU  (Peter Maydell)
The CTR_EL0 register has some bits which allow the implementation to tell the guest that it does not need to do cache maintenance for data-to-instruction coherence and instruction-to-data coherence. QEMU doesn't emulate caches and so our cache maintenance insns are all NOPs. We already have some models of specific CPUs where we set these bits (e.g. the Neoverse V1), but the 'max' CPU still uses the settings it inherits from Cortex-A57. Set the bits for 'max' as well, so the guest doesn't need to do unnecessary work. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
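The effect on the 'max' CPU model is simply to set two bits in its CTR_EL0 reset value (an editor's sketch; the real patch adjusts the value in the max CPU init function):

    /* CTR_EL0.DIC is bit 29, CTR_EL0.IDC is bit 28: no cache maintenance
     * is needed for instruction-to-data or data-to-instruction coherence */
    cpu->ctr |= (1ULL << 29) | (1ULL << 28);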
2024-01-08Replace "iothread lock" with "BQL" in commentsStefan Hajnoczi
The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL) in their names. Update the code comments to use "BQL" instead of "iothread lock". Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Message-id: 20240102153529.486531-5-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  qemu/main-loop: rename qemu_cond_wait_iothread() to qemu_cond_wait_bql()  (Stefan Hajnoczi)
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL) instead, it is already widely used and unambiguous. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-4-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  qemu/main-loop: rename QEMU_IOTHREAD_LOCK_GUARD to BQL_LOCK_GUARD  (Stefan Hajnoczi)
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL) instead, it is already widely used and unambiguous. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Cédric Le Goater <clg@kaod.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-3-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08  system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()  (Stefan Hajnoczi)
The Big QEMU Lock (BQL) has many names and they are confusing. The actual QemuMutex variable is called qemu_global_mutex but it's commonly referred to as the BQL in discussions and some code comments. The locking APIs, however, are called qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread(). The "iothread" name is historic and comes from when the main thread was split into KVM vcpu threads and the "iothread" (now called the main loop thread). I have contributed to the confusion myself by introducing --object iothread, a separate concept unrelated to the BQL. The "iothread" name is no longer appropriate for the BQL. Rename the locking APIs to:
 - void bql_lock(void)
 - void bql_unlock(void)
 - bool bql_locked(void)
There are more APIs with "iothread" in their names. Subsequent patches will rename them. There are also comments and documentation that will be updated in later patches. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Fabiano Rosas <farosas@suse.de> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Cédric Le Goater <clg@kaod.org> Acked-by: Peter Xu <peterx@redhat.com> Acked-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Acked-by: Hyman Huang <yong.huang@smartx.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-id: 20240102153529.486531-2-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
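Call sites change mechanically; a typical before/after looks like this (do_something_with_global_state() is a hypothetical placeholder for whatever the critical section protects):

    /* before */
    qemu_mutex_lock_iothread();
    do_something_with_global_state();
    qemu_mutex_unlock_iothread();

    /* after */
    bql_lock();
    do_something_with_global_state();
    bql_unlock();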
2024-01-08  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging  (Peter Maydell)
trivial patches for 2024-01-05 (merged from a PGP-signed tag by Michael Tokarev <mjt@tls.msk.ru>)

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  docs: use "buses" rather than "busses"
  edu: fix DMA range upper bound check
  hw/net: cadence_gem: Fix MDIO_OP_xxx values
  audio/audio.c: remove trailing newline in error_setg
  chardev/char.c: fix "abstract device type" error message
  target/riscv: Fix mcycle/minstret increment behavior

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-06  target/loongarch: move translate modules to tcg/  (Song Gao)
Introduce the target/loongarch/tcg directory. Its purpose is to hold the TCG code that is selected by CONFIG_TCG. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Song Gao <gaosong@loongson.cn> Message-Id: <20240102020200.3462097-2-gaosong@loongson.cn>
2024-01-06  target/loongarch/meson: move gdbstub.c to loongarch.ss  (Song Gao)
gdbstub.c is not specific to TCG and can be used by other accelerators, such as the KVM accelerator. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Song Gao <gaosong@loongson.cn> Message-Id: <20240102020200.3462097-1-gaosong@loongson.cn>
2024-01-05  target/riscv: Fix mcycle/minstret increment behavior  (Xu Lu)
The mcycle/minstret counter's stop flag is mistakenly updated on a copy on stack. Thus the counter increments even when the CY/IR bit in the mcountinhibit register is set. This commit corrects its behavior. Fixes: 3780e33732f88 (target/riscv: Support mcycle/minstret write operation) Signed-off-by: Xu Lu <luxu.kernel@bytedance.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
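The shape of the fix, as an editor's sketch (struct and field names follow target/riscv conventions but treat them as illustrative): the counter state must be updated through a pointer into env, not via a stack copy.

    /* before: a copy on the stack — the updated stop flag is thrown away */
    PMUCTRState counter = env->pmu_ctrs[ctr_idx];

    /* after: operate on the counter state held in the CPU environment */
    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];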