path: root/target/arm
2020-05-11  target/arm: Fix tcg_gen_gvec_dup_imm vs DUP (indexed)  (Richard Henderson)
DUP (indexed) can duplicate 128-bit elements, so using esz unconditionally can assert in tcg_gen_gvec_dup_imm. Fixes: 8711e71f9cbb Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20200507172352.15418-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Use tcg_gen_gvec_5_ptr for sve FMLA/FCMLA  (Richard Henderson)
Now that we can pass 7 parameters, do not encode register operands within simd_data. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Taylor Simpson <tsimpson@quicinc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200507172352.15418-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Restrict TCG cpus to TCG accel  (Philippe Mathieu-Daudé)
A KVM-only build won't be able to run TCG cpus. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200504172448.9402-6-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm/cpu: Restrict v8M IDAU interface to AArch32 CPUs  (Philippe Mathieu-Daudé)
As IDAU is a v8M feature, restrict it to the AArch32 CPUs. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200504172448.9402-5-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm/cpu: Use ARRAY_SIZE() to iterate over ARMCPUInfo[]  (Philippe Mathieu-Daudé)
Use ARRAY_SIZE() to iterate over ARMCPUInfo[]. Since on the aarch64-linux-user build, arm_cpus[] is empty, add the cpu_count variable and only iterate when it is non-zero. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200504172448.9402-4-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
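A minimal, self-contained sketch of the ARRAY_SIZE() pattern described above. The CPUInfo type, the entries, and the cpu_count guard are illustrative stand-ins for QEMU's actual tables, not copies of them:

    #include <stddef.h>
    #include <stdio.h>

    /* QEMU's ARRAY_SIZE() boils down to this classic macro. */
    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    typedef struct {
        const char *name;
    } CPUInfo;

    /* On some build configurations (the aarch64-linux-user case above)
     * the equivalent table is empty. */
    static const CPUInfo cpus[] = {
        { "cortex-a7" },
        { "cortex-a15" },
    };

    int main(void)
    {
        size_t cpu_count = ARRAY_SIZE(cpus);

        /* Bound the loop by ARRAY_SIZE() instead of walking to a NULL
         * sentinel; only iterate when the table is non-empty. */
        if (cpu_count) {
            for (size_t i = 0; i < cpu_count; i++) {
                printf("register %s\n", cpus[i].name);
            }
        }
        return 0;
    }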
2020-05-11  target/arm: Make set_feature() available for other files  (Thomas Huth)
Move the common set_feature() and unset_feature() functions from cpu.c and cpu64.c to cpu.h. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200504172448.9402-3-philmd@redhat.com Message-ID: <20190921150420.30743-2-thuth@redhat.com> [PMD: Split Thomas's patch in two: set_feature, cpu_register] Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
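A minimal sketch of what such bit-flag helpers typically look like, assuming a features bitmask in the CPU state; the ToyARMState type and feature values here are simplified stand-ins, not QEMU's actual definitions:

    #include <stdint.h>

    /* Simplified stand-in for the CPU state; QEMU's real struct is far larger. */
    typedef struct {
        uint64_t features;
    } ToyARMState;

    /* Feature enumerators index bits in the features word. */
    enum { FEATURE_V8 = 0, FEATURE_NEON = 1, FEATURE_PMU = 2 };

    /* Inline helpers of the kind the commit moves into a shared header,
     * so cpu.c, cpu64.c and others share one definition. */
    static inline void set_feature(ToyARMState *env, int feature)
    {
        env->features |= 1ULL << feature;
    }

    static inline void unset_feature(ToyARMState *env, int feature)
    {
        env->features &= ~(1ULL << feature);
    }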
2020-05-11  target/arm/kvm: Inline set_feature() calls  (Philippe Mathieu-Daudé)
We want to move the inlined declarations of set_feature() from cpu*.c to cpu.h. To avoid clashing with the KVM declarations, inline the few KVM calls. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200504172448.9402-2-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Remove sve_memopidx  (Richard Henderson)
None of the sve helpers use TCGMemOpIdx any longer, so we can stop passing it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Reuse sve_probe_page for gather loads  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-19-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Reuse sve_probe_page for scatter stores  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-18-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Reuse sve_probe_page for gather first-fault loads  (Richard Henderson)
This avoids the need for a separate set of helpers to implement no-fault semantics, and will enable MTE in the future. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-17-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Use SVEContLdSt for contiguous stores  (Richard Henderson)
Follow the model set up for contiguous loads. This handles watchpoints correctly for contiguous stores, recognizing the exception before any changes to memory. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Update contiguous first-fault and no-fault loads  (Richard Henderson)
With sve_cont_ldst_pages, the differences between first-fault and no-fault are minimal, so unify the routines. With cpu_probe_watchpoint, we are able to make progress through pages with TLB_WATCHPOINT set when the watchpoint does not actually fire. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Use SVEContLdSt for multi-register contiguous loads  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Handle watchpoints in sve_ld1_r  (Richard Henderson)
Handle all of the watchpoints for active elements all at once, before we've modified the vector register. This removes the TLB_WATCHPOINT bit from page[].flags, which means that we can use the normal fast path via RAM. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Use SVEContLdSt in sve_ld1_r  (Richard Henderson)
First use of the new helper functions, so we can remove the unused markup. No longer need a scratch for user-only, as we completely probe the page set before reading; system mode still requires a scratch for MMIO. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Adjust interface of sve_ld1_host_fn  (Richard Henderson)
The current interface includes a loop; change it to load a single element. We will then be able to use the function for ld{2,3,4} where individual vector elements are not adjacent. Replace each call with the simplest possible loop over active elements. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
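An illustrative sketch of the "load a single element, loop in the caller" shape this change describes. The callback signature, the byte-per-element predicate layout, and the function names are simplified for the example and are not QEMU's actual declarations:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical single-element host-load callback: copy one element
     * from host memory into the vector register file at reg_off. */
    typedef void ld1_host_fn(void *vd, intptr_t reg_off, const void *host);

    static void ld1b_host(void *vd, intptr_t reg_off, const void *host)
    {
        memcpy((uint8_t *)vd + reg_off, host, 1);
    }

    /* The "simplest possible loop over active elements": one callback call
     * per element whose (simplified) predicate flag is set.  Because the
     * callback handles exactly one element, the same loop also works for
     * ld{2,3,4}, where elements of one register are not adjacent in memory. */
    static void ld1_all_active(void *vd, const uint8_t *pred, const void *host,
                               intptr_t nr_elems, intptr_t esize, ld1_host_fn *fn)
    {
        for (intptr_t i = 0; i < nr_elems; i++) {
            if (pred[i] & 1) {
                fn(vd, i * esize, (const uint8_t *)host + i * esize);
            }
        }
    }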
2020-05-11  target/arm: Add sve infrastructure for page lookup  (Richard Henderson)
For contiguous predicated memory operations, we want to minimize the number of tlb lookups performed. We have open-coded this for sve_ld1_r, but for correctness with MTE we will need this for all of the memory operations. Create a structure that holds the bounds of active elements, and metadata for two pages. Add routines to find those active elements, lookup the pages, and run watchpoints for those pages. Temporarily mark the functions unused to avoid Werror. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
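A rough approximation of the kind of bookkeeping structure the commit describes: the bounds of the active elements plus lookup metadata for up to two pages. The field names and types below are guesses made for the sketch, not QEMU's actual SVEContLdSt layout:

    #include <stdint.h>

    /* Per-page metadata: the resolved host pointer (NULL for MMIO or
     * otherwise slow-path pages) plus whatever flags the lookup produced. */
    typedef struct {
        void *host;
        int flags;
    } HostPage;

    /* Bookkeeping for one contiguous, predicated load/store that may span
     * a page boundary: first/last active element per page, where the page
     * split falls within the register, and the two page lookups. */
    typedef struct {
        intptr_t reg_off_first[2];   /* first active element, per page */
        intptr_t reg_off_last[2];    /* last active element, per page */
        intptr_t page_split;         /* register offset where page 1 begins */
        HostPage page[2];
    } ContLdSt;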
2020-05-11  target/arm: Drop manual handling of set/clear_helper_retaddr  (Richard Henderson)
Since we converted back to cpu_*_data_ra, we do not need to do this ourselves. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Use cpu_*_data_ra for sve_ldst_tlb_fn  (Richard Henderson)
Use the "normal" memory access functions, rather than the softmmu internal helper functions directly. Since fb901c905dc3, cpu_mem_index is now a simple extract from env->hflags and not a large computation. Which means that it's now more work to pass around this value than it is to recompute it. This only adjusts the primitives, and does not clean up all of the uses within sve_helper.c. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  target/arm: Drop access_el3_aa32ns_aa64any()  (Edgar E. Iglesias)
Calling access_el3_aa32ns() works for AArch32-only cores, but it does not handle 32-bit EL2 on top of 64-bit EL3 for mixed 32/64-bit cores. Merge access_el3_aa32ns_aa64any() into access_el3_aa32ns() and only use the latter. Fixes: 68e9c2fe65 ("target-arm: Add VTCR_EL2") Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20200505141729.31930-2-edgar.iglesias@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-06  target/arm: Use tcg_gen_gvec_dup_imm  (Richard Henderson)
In a few cases, we're able to remove some manual replication. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-04  target/arm: Move gen_ function typedefs to translate.h  (Peter Maydell)
We're going to want at least some of the NeonGen* typedefs for the refactored 32-bit Neon decoder, so move them all to translate.h since it makes more sense to keep them in one group. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-23-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same VMUL, VMLA, VMLS, VSHL to decodetree  (Peter Maydell)
Convert the Neon VMUL, VMLA, VMLS and VSHL insns in the 3-reg-same grouping to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-20-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same VQADD/VQSUB to decodetree  (Peter Maydell)
Convert the Neon VQADD/VQSUB insns in the 3-reg-same grouping to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-19-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same comparisons to decodetree  (Peter Maydell)
Convert the Neon comparison ops in the 3-reg-same grouping to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-18-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same VMAX/VMIN to decodetree  (Peter Maydell)
Convert the Neon 3-reg-same VMAX and VMIN insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-17-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same logic ops to decodetree  (Peter Maydell)
Convert the Neon logic ops in the 3-reg-same grouping to decodetree. Note that for the logic ops the 'size' field forms part of their decode and the actual operations are always bitwise. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-16-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 3-reg-same VADD/VSUB to decodetree  (Peter Maydell)
Convert the Neon 3-reg-same VADD and VSUB insns to decodetree. Note that we don't need the neon_3r_sizes[op] check here because all size values are OK for VADD and VSUB; we'll add this when we convert the first insn that has size restrictions. For this we need one of the GVecGen*Fn typedefs currently in translate-a64.h; move them all to translate.h as a block so they are visible to the 32-bit decoder. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-15-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 'load/store single structure' to decodetree  (Peter Maydell)
Convert the Neon "load/store single structure to one lane" insns to decodetree. As this is the last set of insns in the neon load/store group, we can remove the whole disas_neon_ls_insn() function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-14-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon 'load single structure to all lanes' to decodetree  (Peter Maydell)
Convert the Neon "load single structure to all lanes" insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-13-peter.maydell@linaro.org
2020-05-04  target/arm: Convert Neon load/store multiple structures to decodetree  (Peter Maydell)
Convert the Neon "load/store multiple structures" insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-12-peter.maydell@linaro.org
2020-05-04  target/arm: Convert VFM[AS]L (scalar) to decodetree  (Peter Maydell)
Convert the VFM[AS]L (scalar) insns in the 2reg-scalar-ext group to decodetree. These are the last ones in the group so we can remove all the legacy decode for the group. Note that in disas_thumb2_insn() the parts of this encoding space where the decodetree decoder returns false will correctly be directed to illegal_op by the "(insn & (1 << 28))" check so they won't fall into disas_coproc_insn() by mistake. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-11-peter.maydell@linaro.org
2020-05-04  target/arm: Convert V[US]DOT (scalar) to decodetree  (Peter Maydell)
Convert the V[US]DOT (scalar) insns in the 2reg-scalar-ext group to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-10-peter.maydell@linaro.org
2020-05-04  target/arm: Convert VCMLA (scalar) to decodetree  (Peter Maydell)
Convert VCMLA (scalar) in the 2reg-scalar-ext group to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-9-peter.maydell@linaro.org
2020-05-04  target/arm: Convert VFM[AS]L (vector) to decodetree  (Peter Maydell)
Convert the VFM[AS]L (vector) insns to decodetree. This is the last insn in the legacy decoder for the 3same_ext group, so we can delete the legacy decoder function for the group entirely. Note that in disas_thumb2_insn() the parts of this encoding space where the decodetree decoder returns false will correctly be directed to illegal_op by the "(insn & (1 << 28))" check so they won't fall into disas_coproc_insn() by mistake. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-8-peter.maydell@linaro.org
2020-05-04  target/arm: Convert V[US]DOT (vector) to decodetree  (Peter Maydell)
Convert the V[US]DOT (vector) insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-7-peter.maydell@linaro.org
2020-05-04  target/arm: Convert VCADD (vector) to decodetree  (Peter Maydell)
Convert the VCADD (vector) insns to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-6-peter.maydell@linaro.org
2020-05-04  target/arm: Convert VCMLA (vector) to decodetree  (Peter Maydell)
Convert the VCMLA (vector) insns in the 3same extension group to decodetree. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-5-peter.maydell@linaro.org
2020-05-04  target/arm: Add stubs for AArch32 Neon decodetree  (Peter Maydell)
Add the infrastructure for building and invoking a decodetree decoder for the AArch32 Neon encodings. At the moment the new decoder covers nothing, so we always fall back to the existing hand-written decode. We follow the same pattern we did for the VFP decodetree conversion (commit 78e138bc1f672c145ef6ace74617d and following): code that deals with Neon will be moving gradually out to translate-neon.vfp.inc, which we #include into translate.c. In order to share the decode files between A32 and T32, we split Neon into 3 parts:
 * data-processing
 * load-store
 * 'shared' encodings
The first two groups of instructions have similar but not identical A32 and T32 encodings, so we need to manually transform the T32 encoding into the A32 one before calling the decoder; the third group covers the Neon instructions which are identical in A32 and T32. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200430181003.21682-4-peter.maydell@linaro.org
2020-05-04  target/arm: Don't allow Thumb Neon insns without FEATURE_NEON  (Peter Maydell)
We were accidentally permitting decode of Thumb Neon insns even if the CPU didn't have the FEATURE_NEON bit set, because the feature check was being done before the call to disas_neon_data_insn() and disas_neon_ls_insn() in the Arm decoder but was omitted from the Thumb decoder. Push the feature bit check down into the called functions so it is done for both Arm and Thumb encodings. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20200430181003.21682-3-peter.maydell@linaro.org
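A small sketch of the refactoring pattern, under assumptions: the Ctx type, dc_feature() helper, and disas_neon_data() name below are simplified stand-ins for QEMU's DisasContext, arm_dc_feature() and decode entry points, used only to show the check moving into the shared callee:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint64_t features; } Ctx;
    enum { FEATURE_NEON = 1 };

    static bool dc_feature(const Ctx *s, int f)
    {
        return (s->features >> f) & 1;
    }

    /* With the guard inside the shared decode routine, both the Arm and
     * the Thumb callers reject Neon encodings on CPUs without the feature,
     * instead of relying on each caller to remember the check. */
    static int disas_neon_data(Ctx *s, uint32_t insn)
    {
        if (!dc_feature(s, FEATURE_NEON)) {
            return 1;           /* signal UNDEF to the caller */
        }
        (void)insn;             /* ... real decode would go here ... */
        return 0;
    }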
2020-05-04  target/arm/translate-vfp.inc.c: Remove duplicate simd_r32 check  (Peter Maydell)
Somewhere along the line we accidentally added a duplicate "using D16-D31 when they don't exist" check to do_vfm_dp() (probably an artifact of a patch series rebase). Remove it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20200430181003.21682-2-peter.maydell@linaro.org
2020-05-04  target/arm: Use uint64_t for midr field in CPU state struct  (Philippe Mathieu-Daudé)
MIDR_EL1 is a 64-bit system register with the top 32 bits being RES0. Represent it in QEMU's ARMCPU struct with a uint64_t, not a uint32_t. This fixes an error when compiling with -Werror=conversion because we were manipulating the register value using a local uint64_t variable:

  target/arm/cpu64.c: In function ‘aarch64_max_initfn’:
  target/arm/cpu64.c:628:21: error: conversion from ‘uint64_t’ {aka ‘long unsigned int’} to ‘uint32_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
    628 |     cpu->midr = t;
        |                 ^

It also future-proofs us against a possible future architecture change using some of the top 32 bits. Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20200428172634.29707-1-f4bug@amsat.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-04  target/arm: Use correct variable for setting 'max' cpu's ID_AA64DFR0  (Peter Maydell)
In aarch64_max_initfn() we update both 32-bit and 64-bit ID registers. The intended pattern is that for 64-bit ID registers we use FIELD_DP64 and the uint64_t 't' register, while 32-bit ID registers use FIELD_DP32 and the uint32_t 'u' register. For ID_AA64DFR0 we accidentally used 'u', meaning that the top 32 bits of this 64-bit ID register would end up always zero. Luckily at the moment that's what they should be anyway, so this bug has no visible effects. Use the right-sized variable. Fixes: 3bec78447a958d481991 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20200423110915.10527-1-peter.maydell@linaro.org
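A small self-contained demonstration of why the 64-bit 't' variable matters here (and in the preceding midr change): a field deposited above bit 31 of a 64-bit ID register is silently lost if the value round-trips through a uint32_t. The dp64() helper is hand-rolled for the example; QEMU's FIELD_DP64/FIELD_DP32 macros are not reproduced:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hand-rolled 64-bit field deposit: place 'val' into reg[lsb+len-1:lsb]. */
    static uint64_t dp64(uint64_t reg, unsigned lsb, unsigned len, uint64_t val)
    {
        uint64_t mask = (~0ULL >> (64 - len)) << lsb;
        return (reg & ~mask) | ((val << lsb) & mask);
    }

    int main(void)
    {
        uint64_t t = 0;
        uint32_t u = 0;

        /* Deposit a field that lives above bit 31 of the 64-bit register. */
        t = dp64(t, 36, 4, 0x5);            /* bits [39:36] survive */
        u = (uint32_t)dp64(u, 36, 4, 0x5);  /* silently truncated to zero */

        printf("64-bit temp: 0x%016" PRIx64 "\n", t);
        printf("32-bit temp: 0x%08" PRIx32 "\n", u);
        return 0;
    }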
2020-05-04  target/arm: Implement ARMv8.2-TTS2UXN  (Peter Maydell)
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2 translation table descriptors from just bit [54] to bits [54:53], allowing stage 2 to control execution permissions separately for EL0 and EL1. Implement the new semantics of the XN field and enable the feature for our 'max' CPU. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-5-peter.maydell@linaro.org
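A hedged sketch of how the two-bit stage 2 XN field (descriptor bits [54:53]) could be mapped to an execute-permission decision, depending on whether the stage 1 access is for EL0. The mapping shown is my reading of the feature description and should be checked against the Arm ARM and QEMU's actual permission code rather than taken as authoritative:

    #include <stdbool.h>

    /* Illustrative decode of the stage 2 XN[1:0] field. */
    static bool s2_exec_allowed(unsigned xn, bool s1_is_el0)
    {
        switch (xn & 3) {
        case 0:                 /* executable at EL0 and EL1 */
            return true;
        case 1:                 /* executable at EL0 only */
            return s1_is_el0;
        case 2:                 /* not executable */
            return false;
        case 3:                 /* executable at EL1 only */
            return !s1_is_el0;
        }
        return false;
    }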
2020-05-04  target/arm: Add new 's1_is_el0' argument to get_phys_addr_lpae()  (Peter Maydell)
For ARMv8.2-TTS2UXN, the stage 2 page table walk wants to know whether the stage 1 access is for EL0 or not, because whether exec permission is given can depend on whether this is an EL0 or EL1 access. Add a new argument to get_phys_addr_lpae() so the call sites can pass this information in. Since get_phys_addr_lpae() doesn't already have a doc comment, add one so we have a place to put the documentation of the semantics of the new s1_is_el0 argument. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-4-peter.maydell@linaro.org
2020-05-04  target/arm: Use enum constant in get_phys_addr_lpae() call  (Peter Maydell)
The access_type argument to get_phys_addr_lpae() is an MMUAccessType; use the enum constant MMU_DATA_LOAD rather than a literal 0 when we call it in S1_ptw_translate(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-3-peter.maydell@linaro.org
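A tiny sketch of the point being made: the literal 0 only worked because it happens to coincide with the load member of the access-type enum, and the named constant documents intent. The enum shape below reflects my understanding of QEMU's MMUAccessType and the call site is hypothetical:

    /* Assumed shape of the access-type enum; MMU_DATA_LOAD happens to be 0. */
    typedef enum {
        MMU_DATA_LOAD  = 0,
        MMU_DATA_STORE = 1,
        MMU_INST_FETCH = 2,
    } MMUAccessType;

    /* Placeholder stand-in for a table-walk helper taking an access type. */
    static int walk_for_ptw(MMUAccessType access_type)
    {
        return (int)access_type;
    }

    static int s1_ptw_translate_example(void)
    {
        /* Clearer than walk_for_ptw(0). */
        return walk_for_ptw(MMU_DATA_LOAD);
    }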
2020-05-04  target/arm: Don't use a TLB for ARMMMUIdx_Stage2  (Peter Maydell)
We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU TLB. However we never actually use the TLB -- all stage 2 lookups are done by direct calls to get_phys_addr_lpae() followed by a physical address load via address_space_ld*(). Remove Stage2 from the list of ARM MMU indexes which correspond to real core MMU indexes, and instead put it in the set of "NOTLB" ARM MMU indexes. This allows us to drop NB_MMU_MODES to 11. It also means we can safely add support for the ARMv8.2-TTS2UXN extension, which adds permission bits to the stage 2 descriptors which define execute permission separately for EL0 and EL1; supporting that while keeping Stage2 in a QEMU TLB would require us to use separate TLBs for "Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a lot of extra complication given we aren't even using the QEMU TLB. In the process of updating the comment on our MMU index use, fix a couple of other minor errors:
 * NS EL2 EL2&0 was missing from the list in the comment
 * some text hadn't been updated from when we bumped NB_MMU_MODES above 8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200330210400.11724-2-peter.maydell@linaro.org
2020-05-04  target/arm: Make VQDMULL undefined when U=1  (Fredrik Strupe)
According to the Arm ARM, VQDMULL is only valid when U=0, while having U=1 is unallocated. Signed-off-by: Fredrik Strupe <fredrik@strupe.net> Fixes: 695272dcb976 ("target-arm: Handle UNDEF cases for Neon 3-regs-different-widths") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-04-30  target/arm/cpu: Update coding style to make checkpatch.pl happy  (Philippe Mathieu-Daudé)
We will move this code in the next commit. Clean it up first to avoid checkpatch.pl errors. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200423073358.27155-5-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>