path: root/target/arm/tcg
Age | Commit message | Author
2023-06-06target/arm: Sink gen_mte_check1 into load/store_exclusiveRichard Henderson
No need to duplicate this check across multiple call sites. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}rRichard Henderson
Round len_align to 16 instead of 8, handling an odd 8-byte as part of the tail. Use MO_ATOM_NONE to indicate that all of these memory ops have only byte atomicity. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
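A minimal sketch (not the QEMU source; names are illustrative) of the split described above — the aligned body is rounded down to 16-byte units and any odd trailing 8 bytes fall into the byte-wise tail:

    /* Split a transfer of 'len' bytes into a 16-byte-aligned body
     * (done with 128-bit ops) plus a 0..15 byte tail. */
    static inline void split_sve_len(int len, int *len_align, int *len_remain)
    {
        *len_align = len & ~15;    /* whole 16-byte quantities */
        *len_remain = len & 15;    /* handled bytewise, MO_ATOM_NONE atomicity */
    }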
2023-06-06target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2GRichard Henderson
This fixes a bug in that these two insns should have been using atomic 16-byte stores, since MTE is ARMv8.5 and LSE2 is mandatory from ARMv8.4. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
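As a rough illustration (not the literal QEMU code; addr, mem_idx and the exact MemOp flags in scope are assumptions), a 16-byte zeroing store issued as one 128-bit TCG op looks like:

    /* Build a 128-bit zero and store it with a single memory operation,
     * so the access can be 16-byte atomic when the memop requires it. */
    TCGv_i128 zero = tcg_temp_new_i128();
    tcg_gen_concat_i64_i128(zero, tcg_constant_i64(0), tcg_constant_i64(0));
    tcg_gen_qemu_st_i128(zero, addr, mem_idx, MO_128 | MO_ALIGN);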
2023-06-06target/arm: Use tcg_gen_qemu_{st, ld}_i128 for do_fp_{st, ld}Richard Henderson
While we don't require 16-byte atomicity here, using a single larger operation simplifies the code. Introduce finalize_memop_asimd for this. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06target/arm: Use tcg_gen_qemu_ld_i128 for LDXPRichard Henderson
While we don't require 16-byte atomicity here, using a single larger load simplifies the code, and makes it a closer match to STXP. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
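In outline (simplified; addr, mem_idx, memop and the destination temps are assumed to be in scope), the pair load becomes one 128-bit load followed by an extract of the two halves:

    /* One 128-bit load, then split into the two destination registers. */
    TCGv_i128 t16 = tcg_temp_new_i128();
    tcg_gen_qemu_ld_i128(t16, addr, mem_idx, memop);
    tcg_gen_extr_i128_i64(lo, hi, t16);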
2023-06-06target/arm: Introduce finalize_memop_{atom,pair}Richard Henderson
Let finalize_memop_atom be the new basic function, with finalize_memop and finalize_memop_pair testing FEAT_LSE2 to apply the appropriate atomicity. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
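A hedged sketch of the layering described above (simplified: the real helpers take a DisasContext and also fold in endianness and alignment; here a plain 'lse2' flag stands in for the FEAT_LSE2 test):

    static MemOp finalize_memop_atom(MemOp opc, MemOp atom)
    {
        return opc | atom;                 /* caller supplies the atomicity */
    }

    static MemOp finalize_memop(bool lse2, MemOp opc)
    {
        return finalize_memop_atom(opc, lse2 ? MO_ATOM_WITHIN16
                                             : MO_ATOM_IFALIGN);
    }

    static MemOp finalize_memop_pair(bool lse2, MemOp opc)
    {
        return finalize_memop_atom(opc, lse2 ? MO_ATOM_WITHIN16_PAIR
                                             : MO_ATOM_IFALIGN_PAIR);
    }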
2023-06-05target/arm: Add missing include of exec/exec-all.hRichard Henderson
This had been pulled in via exec/translator.h, but the include of exec-all.h will be removed. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05target/arm: Tidy helpers for translationRichard Henderson
Move most includes from *translate*.c to translate.h, ensuring that we get the ordering correct. Ensure cpu.h is first. Use disas/disas.h instead of exec/log.h. Drop otherwise unused includes. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05accel/tcg: Introduce translator_io_startRichard Henderson
New wrapper around gen_io_start which takes care of the USE_ICOUNT check, as well as marking the DisasContext to end the TB. Remove exec/gen-icount.h. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
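Roughly, the wrapper has this shape (an illustrative sketch, not the exact implementation):

    bool translator_io_start(DisasContextBase *db)
    {
        if (!(tb_cflags(db->tb) & CF_USE_ICOUNT)) {
            return false;                /* icount not in use: nothing to do */
        }
        gen_io_start();
        db->is_jmp = DISAS_TOO_MANY;     /* make the TB end after this insn */
        return true;
    }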
2023-06-05tcg: Split helper-proto.hRichard Henderson
Create helper-proto-common.h without the target specific portion. Use that in tcg-op-common.h. Include helper-proto.h in target/arm and target/hexagon before helper-info.c.inc; all other targets are already correct in this regard. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05tcg: Split helper-gen.hRichard Henderson
Create helper-gen-common.h without the target specific portion. Use that in tcg-op-common.h. Reorg headers in target/arm to ensure that helper-gen.h is included before helper-info.c.inc. All other targets are already correct in this regard. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05tcg: Pass TCGHelperInfo to tcg_gen_callNRichard Henderson
In preparation for compiling tcg/ only once, eliminate the all_helpers array. Instantiate the info structs for the generic helpers in accel/tcg/, and the structs for the target-specific helpers in each translate.c. Since we don't see all of the info structs at startup, initialize at first use, using g_once_init_* to make sure we don't race while doing so. Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
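The g_once_init_* pattern referred to above looks roughly like this (the 'init' guard field and init_call_layout() step are shown as hypothetical names):

    static void ensure_helper_info_initialized(TCGHelperInfo *info)
    {
        /* 'init' is a hypothetical gsize guard field in TCGHelperInfo. */
        if (g_once_init_enter(&info->init)) {
            init_call_layout(info);      /* first use: compute call layout */
            g_once_init_leave(&info->init, 1);
        }
    }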
2023-06-05target/arm: Include helper-gen.h in translator.hRichard Henderson
This had been included via tcg-op-common.h via tcg-op.h, but that is going away. It is needed for inlines within translator.h, so we might as well do it there and not individually in each translator c file. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30target/arm: Explicitly select short-format FSR for M-profilePeter Maydell
For M-profile, there is no guest-facing A-profile format FSR, but we still use the env->exception.fsr field to pass fault information from the point where a fault is raised to the code in arm_v7m_cpu_do_interrupt() which interprets it and sets the M-profile specific fault status registers. So it doesn't matter whether we fill in env->exception.fsr in the short format or the LPAE format, as long as both sides agree. As it happens arm_v7m_cpu_do_interrupt() assumes short-form. In compute_fsr_fsc() we weren't explicitly choosing short-form for M-profile, but instead relied on it falling out in the wash because arm_s1_regime_using_lpae_format() would be false. This was broken in commit 452c67a4 when we added v8R support, because we said "PMSAv8 is always LPAE format" (as it is for v8R), forgetting that we were implicitly using this code path on M-profile. At that point we would hit a g_assert_not_reached():

ERROR:../../target/arm/internals.h:549:arm_fi_to_lfsc: code should not be reached
#7  0x0000555555e055f7 in arm_fi_to_lfsc (fi=0x7fffecff9a90) at ../../target/arm/internals.h:549
#8  0x0000555555e05a27 in compute_fsr_fsc (env=0x555557356670, fi=0x7fffecff9a90, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff9a1c) at ../../target/arm/tlb_helper.c:95
#9  0x0000555555e05b62 in arm_deliver_fault (cpu=0x555557354800, addr=268961344, access_type=MMU_INST_FETCH, mmu_idx=1, fi=0x7fffecff9a90) at ../../target/arm/tlb_helper.c:132
#10 0x0000555555e06095 in arm_cpu_tlb_fill (cs=0x555557354800, address=268961344, size=1, access_type=MMU_INST_FETCH, mmu_idx=1, probe=false, retaddr=0) at ../../target/arm/tlb_helper.c:260

The specific assertion changed when commit fcc7404eff24b4c added "assert not M-profile" to arm_is_secure_below_el3(), because the conditions being checked in compute_fsr_fsc() include arm_el_is_aa64(), which will end up calling arm_is_secure_below_el3() and asserting before we try to call arm_fi_to_lfsc():

#7  0x0000555555efaf43 in arm_is_secure_below_el3 (env=0x5555574665a0) at ../../target/arm/cpu.h:2396
#8  0x0000555555efb103 in arm_is_el2_enabled (env=0x5555574665a0) at ../../target/arm/cpu.h:2448
#9  0x0000555555efb204 in arm_el_is_aa64 (env=0x5555574665a0, el=1) at ../../target/arm/cpu.h:2509
#10 0x0000555555efbdfd in compute_fsr_fsc (env=0x5555574665a0, fi=0x7fffecff99e0, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff996c)

Avoid the assertion and the incorrect FSR format selection by explicitly making M-profile use the short-format in this function. Fixes: 452c67a42704 ("target/arm: Enable TTBCR_EAE for ARMv8-R AArch32") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1658 Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230523131726.866635-1-peter.maydell@linaro.org
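In outline, the fix makes the format choice explicit (a simplified sketch, not the exact diff):

    /* Only A-profile regimes may use the LPAE format. */
    if (!arm_feature(env, ARM_FEATURE_M) &&
        (target_el == 2 || arm_el_is_aa64(env, target_el) ||
         arm_s1_regime_using_lpae_format(env, mmu_idx))) {
        fsr = arm_fi_to_lfsc(fi);        /* LPAE format */
    } else {
        fsr = arm_fi_to_sfsc(fi);        /* short format, incl. M-profile */
    }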
2023-05-23accel/tcg: Unify cpu_{ld,st}*_{be,le}_mmuRichard Henderson
With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions. The only use of the functions with explicit endianness was in target/sparc64, and that was only to satisfy the assert: the correct endianness is already built into memop. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-18target/arm: Convert ERET, ERETAA, ERETAB to decodetreePeter Maydell
Convert the exception-return insns ERET, ERETAA and ERETAB to decodetree. These were the last insns left in the legacy decoder function disas_uncond_b_reg(), which allows us to remove it. The old decoder explicitly decoded the DRPS instruction, only in order to call unallocated_encoding() on it, exactly as would have happened if it hadn't decoded it. This is because this insn always UNDEFs unless the CPU is in halting-debug state, which we don't emulate. So we list the pattern in a comment in a64.decode, but don't actively decode it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-21-peter.maydell@linaro.org
2023-05-18target/arm: Convert BRAA, BRAB, BLRAA, BLRAB to decodetreePeter Maydell
Convert the last four BR-with-pointer-auth insns to decodetree. The remaining cases in the outer switch in disas_uncond_b_reg() all return early rather than leaving the case statement, so we can delete the now-unused code at the end of that function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-20-peter.maydell@linaro.org
2023-05-18target/arm: Convert BRA[AB]Z, BLR[AB]Z, RETA[AB] to decodetreePeter Maydell
Convert the single-register pointer-authentication variants of BR, BLR, RET to decodetree. (BRAA/BLRAA are in a different branch of the legacy decoder and will be dealt with in the next commit.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-19-peter.maydell@linaro.org
2023-05-18target/arm: Convert BR, BLR, RET to decodetreePeter Maydell
Convert the simple (non-pointer-auth) BR, BLR and RET insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-18-peter.maydell@linaro.org
2023-05-18target/arm: Convert conditional branch insns to decodetreePeter Maydell
Convert the immediate conditional branch insn B.cond to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-17-peter.maydell@linaro.org
2023-05-18target/arm: Convert TBZ, TBNZ to decodetreePeter Maydell
Convert the test-and-branch-immediate insns TBZ and TBNZ to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-16-peter.maydell@linaro.org
2023-05-18target/arm: Convert CBZ, CBNZ to decodetreePeter Maydell
Convert the compare-and-branch-immediate insns CBZ and CBNZ to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-15-peter.maydell@linaro.org
2023-05-18target/arm: Convert unconditional branch immediate to decodetreePeter Maydell
Convert the unconditional branch immediate insns B and BL to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-14-peter.maydell@linaro.org
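For flavour, a decodetree handler for one of these insns has roughly this shape (a sketch; the argument-struct name and field are assumptions):

    static bool trans_B(DisasContext *s, arg_i *a)
    {
        reset_btype(s);
        gen_goto_tb(s, 0, a->imm);   /* PC-relative branch target */
        return true;
    }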
2023-05-18target/arm: Convert Extract instructions to decodetreePeter Maydell
Convert the EXTR instruction to decodetree (this is the only one in the 'Extract' class). This is the last of the dp-immediate insns in the legacy decoder, so we can now remove disas_data_proc_imm(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-13-peter.maydell@linaro.org
2023-05-18target/arm: Convert Bitfield to decodetreeRichard Henderson
Convert the BFM, SBFM, UBFM instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-12-peter.maydell@linaro.org [PMM: Rebased] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Convert Move wide (immediate) to decodetreeRichard Henderson
Convert the MOVN, MOVZ, MOVK instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-11-peter.maydell@linaro.org [PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Convert Logical (immediate) to decodetreeRichard Henderson
Convert the AND, ORR, EOR, ANDS (immediate) instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-10-peter.maydell@linaro.org [PMM: rebased] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Replace bitmask64 with MAKE_64BIT_MASKRichard Henderson
Use the bitops.h macro rather than rolling our own here. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-9-peter.maydell@linaro.org
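For reference, MAKE_64BIT_MASK(shift, length) from include/qemu/bitops.h builds a mask of 'length' set bits starting at bit 'shift', for example:

    uint64_t m1 = MAKE_64BIT_MASK(0, 16);   /* 0x000000000000ffff */
    uint64_t m2 = MAKE_64BIT_MASK(8, 8);    /* 0x000000000000ff00 */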
2023-05-18target/arm: Convert Add/subtract (immediate with tags) to decodetreeRichard Henderson
Convert the ADDG and SUBG (immediate) instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-8-peter.maydell@linaro.org [PMM: Rebased; use TRANS_FEAT()] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Convert Add/subtract (immediate) to decodetreeRichard Henderson
Convert the ADD and SUB (immediate) instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-7-peter.maydell@linaro.org [PMM: Rebased; adjusted to use translate.h's TRANS macro] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Split gen_add_CC and gen_sub_CCRichard Henderson
Split out specific 32-bit and 64-bit functions. These carry the same signature as tcg_gen_add_i64, and so will be easier to pass as callbacks. Retain gen_add_CC and gen_sub_CC during conversion. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-6-peter.maydell@linaro.org [PMM: rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Convert PC-rel addressing to decodetreeRichard Henderson
Convert the ADR and ADRP instructions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-5-peter.maydell@linaro.org [PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Pull calls to disas_sve() and disas_sme() out of legacy decoderPeter Maydell
The SVE and SME decode is already done by decodetree. Pull the calls to these decoders out of the legacy decoder. This doesn't change behaviour because all the patterns in sve.decode and sme.decode already require the bits that the legacy decoder is decoding to have the correct values. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-4-peter.maydell@linaro.org
2023-05-18target/arm: Create decodetree skeleton for A64Peter Maydell
The A64 translator uses a hand-written decoder for everything except SVE or SME. It's fairly well structured, but it's becoming obvious that it's still more painful to add instructions to than the A32 translator, because putting a new instruction into the right place in a hand-written decoder is much harder than adding new instruction patterns to a decodetree file. As the first step in conversion to decodetree, create the skeleton of the decodetree decoder; where it does not handle instructions we will fall back to the legacy decoder (which will be for everything at the moment, since there are no patterns in a64.decode). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230512144106.3608981-3-peter.maydell@linaro.org
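The resulting dispatch has roughly this shape (a sketch; disas_a64 as the name of the generated decoder is an assumption):

    /* Try the decodetree decoder first; fall back to the legacy decoder. */
    if (!disas_a64(s, insn)) {
        disas_a64_legacy(s, insn);
    }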
2023-05-18target/arm: Split out disas_a64_legacyRichard Henderson
Split out all of the decode stuff from aarch64_tr_translate_insn. Call it disas_a64_legacy to indicate it will be replaced. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20230512144106.3608981-2-peter.maydell@linaro.org [PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-18target/arm: Fix vd == vm overlap in sve_ldff1_zRichard Henderson
If vd == vm, copy vm to scratch, so that we can pre-zero the output and still access the gather indices. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1612 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230504104232.1877774-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
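In outline (hypothetical names, not the literal helper code):

    /* If the destination overlaps the index vector, preserve the indices
     * in a scratch buffer before pre-zeroing the destination. */
    if (vd == vm) {
        memcpy(scratch, vm, reg_size);
        vm = scratch;
    }
    memset(vd, 0, reg_size);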
2023-05-12target/arm: Correct AArch64.S2MinTxSZ 32-bit EL1 input size checkPeter Maydell
In check_s2_mmu_setup() we have a check that is attempting to implement the part of AArch64.S2MinTxSZ that is specific to when EL1 is AArch32:

    if !s1aarch64 then
        // EL1 is AArch32
        min_txsz = Min(min_txsz, 24);

Unfortunately we got this wrong in two ways:
(1) The minimum txsz corresponds to a maximum inputsize, but we got the sense of the comparison wrong and were faulting for all inputsizes less than 40 bits
(2) We try to implement this as an extra check that happens after we've done the same txsz checks we would do for an AArch64 EL1, but in fact the pseudocode is *loosening* the requirements, so that txsz values that would fault for an AArch64 EL1 do not fault for AArch32 EL1, because it does Min(old_min, 24), not Max(old_min, 24).
You can see this also in the text of the Arm ARM in table D8-8, which shows that where the implemented PA size is less than 40 bits an AArch32 EL1 is still OK with a configured stage2 T0SZ for a 40 bit IPA, whereas if EL1 is AArch64 then the T0SZ must be big enough to constrain the IPA to the implemented PA size. Because of part (2), we can't do this as a separate check, but have to integrate it into aa64_va_parameters(). Add a new argument to that function to indicate that EL1 is 32-bit. All the existing callsites except the one in get_phys_addr_lpae() can pass 'false', because they are either doing a lookup for a stage 1 regime or else they don't care about the tsz/tsz_oob fields. Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1627 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230509092059.3176487-1-peter.maydell@linaro.org
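The corrected direction of the check, in sketch form (variable names simplified):

    if (!s1_is_aa64) {
        /* AArch32 EL1: loosen the limit -- allow inputsize up to
         * 40 bits (64 - 24), even if the implemented PA size is smaller. */
        min_txsz = MIN(min_txsz, 24);
    }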
2023-05-12target/arm: Move helper-{a64,mve,sme,sve}.h to tcg/Richard Henderson
While we cannot move the main "helper.h" out of target/arm/, due to usage by generic code, we can move the sub-includes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Fabiano Rosas <farosas@suse.de> Message-id: 20230504110412.1892411-3-richard.henderson@linaro.org Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-12target/arm: Move translate-a32.h, arm_ldst.h, sve_ldst_internal.h to tcg/Richard Henderson
These files got missed when populating tcg/. Because they are included with "", no change to the users required. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230504110412.1892411-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-02target/arm: Define and use new load_cpu_field_low32()Peter Maydell
In several places in the 32-bit Arm translate.c, we try to use load_cpu_field() to load from a CPUARMState field into a TCGv_i32 where the field is actually 64-bit. This works on little-endian hosts, but gives the wrong half of the register on big-endian. Add a new load_cpu_field_low32() which loads the low 32 bits of a 64-bit field into a TCGv_i32. The new macro includes a compile-time check against accidentally using it on a field of the wrong size. Use it to fix the two places in the code where we were using load_cpu_field() on a 64-bit field. This fixes a bug where on big-endian hosts the guest would crash after executing an ERET instruction, and a more corner case one where some UNDEFs for attempted accesses to MSR banked registers from Secure EL1 might go to the wrong EL. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230424153909.1419369-2-peter.maydell@linaro.org
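A hedged sketch of such a macro (not the literal QEMU definition; the build-time check rejects fields that are not 64 bits wide):

    #define load_cpu_field_low32(name)                                   \
        ({                                                               \
            TCGv_i32 v = tcg_temp_new_i32();                             \
            QEMU_BUILD_BUG_ON(sizeof_field(CPUARMState, name) != 8);     \
            /* On big-endian hosts the low half sits 4 bytes further in. */ \
            tcg_gen_ld_i32(v, cpu_env,                                   \
                           offsetof(CPUARMState, name)                   \
                           + (HOST_BIG_ENDIAN ? 4 : 0));                 \
            v;                                                           \
        })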
2023-05-02target/arm: move cpu_tcg to tcg/cpu32.cClaudio Fontana
Move the module containing CPU model definitions for 32-bit TCG-only CPUs to tcg/ and rename it for clarity. Signed-off-by: Claudio Fontana <cfontana@suse.de> Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230426180013.14814-8-farosas@suse.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-02target/arm: Move 64-bit TCG CPUs into tcg/Fabiano Rosas
Move the 64-bit CPUs that are TCG-only:
- cortex-a35
- cortex-a55
- cortex-a72
- cortex-a76
- a64fx
- neoverse-n1
Keep the CPUs that can be used with KVM:
- cortex-a57
- cortex-a53
- max
- host
Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230426180013.14814-6-farosas@suse.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-04-20target/arm: Don't set ISV when reporting stage 1 faults in ESR_EL2Peter Maydell
The syndrome value reported to ESR_EL2 should only contain the detailed instruction syndrome information when the fault has been caused by a stage 2 abort, not when the fault was a stage 1 abort (i.e. caused by execution at EL2). We were getting this wrong and reporting the detailed ISV information all the time. Fix the bug by checking fi->stage2. Add a TODO comment noting the cases where we'll have to come back and revisit this when we implement FEAT_LS64 and friends. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230331145045.2584941-3-peter.maydell@linaro.org
2023-04-20target/arm: Pass ARMMMUFaultInfo to merge_syn_data_abort()Peter Maydell
We already pass merge_syn_data_abort() two fields from the ARMMMUFaultInfo struct, and we're about to want to use a third field. Refactor to just pass a pointer to the fault info. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230331145045.2584941-2-peter.maydell@linaro.org
2023-04-03target/arm: Fix generated code for cpreg reads when HSTR is activePeter Maydell
In commit 049edada we added some code to handle HSTR_EL2 traps, which we did as an inline "conditionally branch over a gen_exception_insn()". Unfortunately this fails to take account of the fact that gen_exception_insn() will set s->base.is_jmp to DISAS_NORETURN. That means that at the end of the TB we won't generate the necessary code to handle the "branched over the trap and continued normal execution" codepath. The result is that the TCG main loop thinks that we stopped execution of the TB due to a situation that only happens when icount is enabled, and hits an assertion. Explicitly set is_jmp back to DISAS_NEXT so we generate the correct code for when execution continues past this insn. Note that this only happens for cpreg reads; writes will call gen_lookup_tb() which generates a valid end-of-TB. Fixes: 049edada ("target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1551 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230330101900.2320380-1-peter.maydell@linaro.org
2023-04-03target/arm: Fix non-TCG build failure by inlining pauth_ptr_mask()Philippe Mathieu-Daudé
aarch64_gdb_get_pauth_reg() -- although disabled since commit 5787d17a42 ("target/arm: Don't advertise aarch64-pauth.xml to gdb") -- is still compiled in. It calls pauth_ptr_mask() which is located in target/arm/tcg/pauth_helper.c, a TCG specific helper. To avoid a linking error when TCG is not enabled:

    Undefined symbols for architecture arm64:
      "_pauth_ptr_mask", referenced from:
          _aarch64_gdb_get_pauth_reg in target_arm_gdbstub64.c.o
    ld: symbol(s) not found for architecture arm64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

- Inline pauth_ptr_mask() in aarch64_gdb_get_pauth_reg() (this is the single user),
- Rename pauth_ptr_mask_internal() as pauth_ptr_mask() and inline it in "internals.h".

Fixes: e995d5cce4 ("target/arm: Implement gdbstub pauth extension") Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20230328212516.29592-1-philmd@linaro.org [PMM: reinstated doc comment] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-03-28softmmu: Restrict cpu_check_watchpoint / address_matches to TCG accelPhilippe Mathieu-Daudé
Both cpu_check_watchpoint() and cpu_watchpoint_address_matches() are specific to TCG system emulation. Declare them in "tcg-cpu-ops.h" to be sure accessing them from non-TCG code is a compilation error. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230328173117.15226-2-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-13target/arm: Avoid tcg_const_ptr in handle_revRichard Henderson
Here it is not trivial to notice first initialization, so explicitly zero the temps. Use an array for the output, rather than separate tcg_rd/tcg_rd_hi variables. Fixes a bug by adding a missing clear_vec_high. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-13target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrnRichard Henderson
It is easy enough to use mov instead of or-with-zero and relying on the optimizer to fold away the or. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-03-13target/arm: Avoid tcg_const_ptr in disas_simd_zip_trnRichard Henderson
It is easy enough to use mov instead of or-with-zero and relying on the optimizer to fold away the or. Use an array for the output, rather than separate tcg_res{l,h} variables. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>