path: root/target/arm
Age         Commit message                                                    Author
2020-11-10  target/arm: add space before the open parenthesis '('  (Xinhao Zhang)
Fix code style. Space required before the open parenthesis '('. Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com> Signed-off-by: Kai Deng <dengkai1@huawei.com> Message-id: 20201103114529.638233-3-zhangxinhao1@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-10  target/arm: Don't use '#' flag of printf format  (Xinhao Zhang)
Fix code style. Don't use the '#' flag of printf format ('%#') in format strings; use the '0x' prefix instead. Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com> Signed-off-by: Kai Deng <dengkai1@huawei.com> Message-id: 20201103114529.638233-2-zhangxinhao1@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
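A minimal illustration of the rule; the value and variable here are made up, not code from the patch:

    #include <stdio.h>

    int main(void)
    {
        unsigned int val = 0xbeef;
        printf("%#x\n", val);    /* discouraged: relies on the '#' flag          */
        printf("0x%x\n", val);   /* preferred: explicit "0x" prefix, same output */
        return 0;
    }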
2020-11-10  target/arm: add spaces around operator  (Xinhao Zhang)
Fix code style. Operators need spaces on both sides. Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com> Signed-off-by: Kai Deng <dengkai1@huawei.com> Message-id: 20201103114529.638233-1-zhangxinhao1@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Get correct MMU index for other-security-state  (Peter Maydell)
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el(). This is incorrect when the security state being queried is not the current one, because arm_current_el() uses the current security state to determine which of the banked CONTROL.nPRIV bits to look at. The effect was that if (for instance) Secure state was in privileged mode but Non-Secure was not then we would return the wrong MMU index. The only places where we are using this function in a way that could trigger this bug are for the stack loads during a v8M function-return and for the instruction fetch of a v8M SG insn. Fix the bug by expanding out the M-profile version of the arm_current_el() logic inline so it can use the passed in secstate rather than env->v7m.secure. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
2020-11-02  target/arm: fix LORID_EL1 access check  (Rémi Denis-Courmont)
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: fix handling of HCR.FB  (Rémi Denis-Courmont)
HCR should be applied when NS is set, not when it is cleared. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts  (Peter Maydell)
The helper functions for performing the udot/sdot operations against a scalar were not using an address-swizzling macro when converting the index of the scalar element into a pointer into the vm array. This had no effect on little-endian hosts but meant we generated incorrect results on big-endian hosts. For these insns, the index is indexing over a group of four 8-bit values, so 32 bits per indexed entity, and H4() is therefore what we want. (For Neon the only possible input indexes are 0 and 1.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
2020-11-02  target/arm: Fix float16 pairwise Neon ops on big-endian hosts  (Peter Maydell)
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error meant we were using the H4() address swizzler macro rather than the H2() which is required for 2-byte data. This had no effect on little-endian hosts but meant we put the result data into the destination Dreg in the wrong order on big-endian hosts. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
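For context on these two fixes, here is a sketch of how this family of address-swizzling macros is typically defined; the exact definitions are an assumption modelled on QEMU's usual pattern rather than a quote from target/arm, but they show why choosing H4() versus H2() matters on big-endian hosts:

    /* Vector registers are stored as arrays of host-order uint64_t, so on a
     * big-endian host an element index must be XOR-swizzled to reach the
     * right bytes; the swizzle depends on the element size. */
    #ifdef HOST_WORDS_BIGENDIAN
    #define H1(x)  ((x) ^ 7)   /* 1-byte elements */
    #define H2(x)  ((x) ^ 3)   /* 2-byte elements, e.g. float16 pairwise ops  */
    #define H4(x)  ((x) ^ 1)   /* 4-byte elements, e.g. the dot-product index */
    #else
    #define H1(x)  (x)         /* little-endian hosts: identity */
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif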
2020-11-02  target/arm: Improve do_prewiden_3d  (Richard Henderson)
We can use proper widening loads to extend 32-bit inputs, and skip the "widenfn" step. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-12-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Simplify do_long_3d and do_2scalar_long  (Richard Henderson)
In both cases, we can sink the write-back and perform the accumulate into the normal destination temps. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-11-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Rename neon_load_reg64 to vfp_load_reg64  (Richard Henderson)
The only uses of this function are for loading VFP double-precision values, and nothing to do with NEON. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-10-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Add read/write_neon_element64  (Richard Henderson)
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-9-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Rename neon_load_reg32 to vfp_load_reg32  (Richard Henderson)
The only uses of this function are for loading VFP single-precision values, and nothing to do with NEON. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-8-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Expand read/write_neon_element32 to all MemOp  (Richard Henderson)
We can then use this to improve VMOV (scalar to gp) and VMOV (gp to scalar) so that we simply perform the memory operation that we wanted, rather than inserting or extracting from a 32-bit quantity. These were the last uses of neon_load/store_reg, so remove them. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-7-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Add read/write_neon_element32  (Richard Henderson)
Model these off the aa64 read/write_vec_element functions. Use it within translate-neon.c.inc. The new functions do not allocate or free temps, so this rearranges the calling code a bit. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Use neon_element_offset in vfp_reg_offset  (Richard Henderson)
This seems a bit more readable than using offsetof CPU_DoubleU. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Use neon_element_offset in neon_load/store_reg  (Richard Henderson)
These are the only users of neon_reg_offset, so remove that. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Move neon_element_offset to translate.c  (Richard Henderson)
This will shortly have users outside of translate-neon.c.inc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: Introduce neon_full_reg_offset  (Richard Henderson)
This function makes it clear that we're talking about the whole register, and not the 32-bit piece at index 0. This fixes a bug when running on a big-endian host. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-27  linux-user: Set PAGE_TARGET_1 for TARGET_PROT_BTI  (Richard Henderson)
Transform the prot bit to a qemu internal page bit, and save it in the page tables. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201021173749.111103-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Implement FPSCR.LTPSIZE for M-profile LOB extension  (Peter Maydell)
If the M-profile low-overhead-branch extension is implemented, FPSCR bits [18:16] are a new field LTPSIZE. If MVE is not implemented (currently always true for us) then this field always reads as 4 and ignores writes. These bits used to be the vector-length field for the old short-vector extension, so we need to take care that they are not misinterpreted as setting vec_len. We do this with a rearrangement of the vfp_set_fpscr() code that deals with vec_len, vec_stride and also the QC bit; this obviates the need for the M-profile only masking step that we used to have at the start of the function. We provide a new field in CPUState for LTPSIZE, even though this will always be 4, in preparation for MVE, so we don't have to come back later and split it out of the vfp.xregs[FPSCR] value. (This state struct field will be saved and restored as part of the FPSCR value via the vmstate_fpscr in machine.c.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-11-peter.maydell@linaro.org
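A rough sketch of the read-as-4 behaviour described above; the field position (FPSCR bits [18:16]) comes from the text, but the macro and function names are made up and this is not QEMU's FPSCR code:

    #include <stdint.h>

    #define FPSCR_LTPSIZE_SHIFT 16
    #define FPSCR_LTPSIZE_MASK  (7u << FPSCR_LTPSIZE_SHIFT)

    static inline uint32_t fpscr_read_with_fixed_ltpsize(uint32_t stored_fpscr)
    {
        /* Without MVE, whatever the guest wrote, LTPSIZE reads back as 4. */
        return (stored_fpscr & ~FPSCR_LTPSIZE_MASK) | (4u << FPSCR_LTPSIZE_SHIFT);
    }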
2020-10-20  target/arm: Allow M-profile CPUs with FP16 to set FPSCR.FP16  (Peter Maydell)
M-profile CPUs with half-precision floating point support should be able to write to FPSCR.FZ16, but an M-profile specific masking of the value at the top of vfp_set_fpscr() currently prevents that. This is not yet an active bug because we have no M-profile FP16 CPUs, but needs to be fixed before we can add any. The bits that the masking is effectively preventing from being set are the A-profile only short-vector Len and Stride fields, plus the Neon QC bit. Rearrange the order of the function so that those fields are handled earlier and only under a suitable guard; this allows us to drop the M-profile specific masking, making FZ16 writeable. This change also makes the QC bit correctly RAZ/WI for older no-Neon A-profile cores. This refactoring also paves the way for the low-overhead-branch LTPSIZE field, which uses some of the bits that are used for A-profile Stride and Len. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-10-peter.maydell@linaro.org
2020-10-20  target/arm: Fix has_vfp/has_neon ID reg squashing for M-profile  (Peter Maydell)
In arm_cpu_realizefn(), if the CPU has VFP or Neon disabled then we squash the ID register fields so that we don't advertise it to the guest. This code was written for A-profile and needs some tweaks to work correctly on M-profile:
 * A-profile only fields should not be zeroed on M-profile:
   - MVFR0.FPSHVEC, FPTRAP
   - MVFR1.SIMDLS, SIMDINT, SIMDSP, SIMDHP
   - MVFR2.SIMDMISC
 * M-profile only fields should be zeroed on M-profile:
   - MVFR1.FP16
In particular, because MVFR1.SIMDHP on A-profile is the same field as MVFR1.FP16 on M-profile this code was incorrectly disabling FP16 support on an M-profile CPU (where has_neon is always false). This isn't a visible bug yet because we don't have any M-profile CPUs with FP16 support, but the change is necessary before we introduce any. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-9-peter.maydell@linaro.org
2020-10-20  target/arm: Implement v8.1M low-overhead-loop instructions  (Peter Maydell)
v8.1M's "low-overhead-loop" extension has three instructions for looping: * DLS (start of a do-loop) * WLS (start of a while-loop) * LE (end of a loop) The loop-start instructions are both simple operations to start a loop whose iteration count (if any) is in LR. The loop-end instruction handles "decrement iteration count and jump back to loop start"; it also caches the information about the branch back to the start of the loop to improve performance of the branch on subsequent iterations. As with the branch-future instructions, the architecture permits an implementation to discard the LO_BRANCH_INFO cache at any time, and QEMU takes the IMPDEF option to never set it in the first place (equivalent to discarding it immediately), because for us a "real" implementation would be unnecessary complexity. (This implementation only provides the simple looping constructs; the vector extension MVE (Helium) adds some extra variants to handle looping across vectors. We'll add those later when we implement MVE.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-8-peter.maydell@linaro.org
2020-10-20  target/arm: Implement v8.1M branch-future insns (as NOPs)  (Peter Maydell)
v8.1M implements a new 'branch future' feature, which is a set of instructions that request the CPU to perform a branch "in the future", when it reaches a particular execution address. In hardware, the expected implementation is that the information about the branch location and destination is cached and then acted upon when execution reaches the specified address. However the architecture permits an implementation to discard this cached information at any point, and so guest code must always include a normal branch insn at the branch point as a fallback. In particular, an implementation is specifically permitted to treat all BF insns as NOPs (which is equivalent to discarding the cached information immediately). For QEMU, implementing this caching of branch information would be complicated and would not improve the speed of execution at all, so we make the IMPDEF choice to implement all BF insns as NOPs. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-7-peter.maydell@linaro.org
2020-10-20  target/arm: Don't allow BLX imm for M-profile  (Peter Maydell)
The BLX immediate insn in the Thumb encoding always performs a switch from Thumb to Arm state. This would be totally useless in M-profile which has no Arm decoder, and so the instruction does not exist at all there. Make the encoding UNDEF for M-profile. (This part of the encoding space is used for the branch-future and low-overhead-loop insns in v8.1M.) Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-6-peter.maydell@linaro.org
2020-10-20  target/arm: Make the t32 insn[25:23]=111 group non-overlapping  (Peter Maydell)
The t32 decode has a group which represents a set of insns which overlap with B_cond_thumb because they have [25:23]=111 (which is an invalid condition code field for the branch insn). This group is currently defined using the {} overlap-OK syntax, but it is almost entirely non-overlapping patterns. Switch it over to use a non-overlapping group. For this to be valid syntactically, CPS must move into the same overlapping-group as the hint insns (CPS vs hints was the only actual use of the overlap facility for the group). The non-overlapping subgroup for CLREX/DSB/DMB/ISB/SB is no longer necessary and so we can remove it (promoting those insns to be members of the parent group). Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-5-peter.maydell@linaro.org
2020-10-20  target/arm: Implement v8.1M conditional-select insns  (Peter Maydell)
v8.1M brings four new insns to M-profile:
 * CSEL  : Rd = cond ? Rn : Rm
 * CSINC : Rd = cond ? Rn : Rm+1
 * CSINV : Rd = cond ? Rn : ~Rm
 * CSNEG : Rd = cond ? Rn : -Rm
Implement these. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201019151301.2046-4-peter.maydell@linaro.org
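The list above translates directly into C; a tiny model of the semantics, not the QEMU decoder itself:

    #include <stdint.h>
    #include <stdbool.h>

    uint32_t csel (bool cond, uint32_t rn, uint32_t rm) { return cond ? rn : rm; }
    uint32_t csinc(bool cond, uint32_t rn, uint32_t rm) { return cond ? rn : rm + 1; }
    uint32_t csinv(bool cond, uint32_t rn, uint32_t rm) { return cond ? rn : ~rm; }
    uint32_t csneg(bool cond, uint32_t rn, uint32_t rm) { return cond ? rn : -rm; }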
2020-10-20  target/arm: Implement v8.1M NOCP handling  (Peter Maydell)
From v8.1M, disabled-coprocessor handling changes slightly:
 * coprocessors 8, 9, 14 and 15 are also governed by the cp10 enable bit, like cp11
 * an extra range of instruction patterns is considered to be inside the coprocessor space
We previously marked these up with TODO comments; implement the correct behaviour. Unfortunately there is no ID register field which indicates this behaviour. We could in theory test an unrelated ID register which indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch >= 3 (low-overhead-loops), but it seems better to simply define a new ARM_FEATURE_V8_1M feature flag and use it for this and other new-in-v8.1M behaviour that isn't identifiable from the ID registers. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201019151301.2046-3-peter.maydell@linaro.org
2020-10-20  target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11  (Richard Henderson)
Unlike many other bits in HCR_EL2, the description for this bit does not contain the phrase "if ... this field behaves as 0 for all purposes other than", so do not squash the bit in arm_hcr_el2_eff. Instead, replicate the E2H+TGE test in the two places that require it. Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Message-id: 20201008162155.161886-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Fix reported EL for mte_check_fail  (Richard Henderson)
The reporting in AArch64.TagCheckFail only depends on PSTATE.EL, and not the AccType of the operation. There are two guest visible problems that affect LDTR and STTR because of this: (1) Selecting TCF0 vs TCF1 to decide on reporting, (2) Report "data abort same el" not "data abort lower el". Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Message-id: 20201008162155.161886-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Remove redundant mmu_idx lookup  (Richard Henderson)
We already have the full ARMMMUIdx as computed from the function parameter. For the purpose of regime_has_2_ranges, we can ignore any difference between AccType_Normal and AccType_Unpriv, which would be the only difference between the passed mmu_idx and arm_mmu_idx_el. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Message-id: 20201008162155.161886-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Use tlb_flush_page_bits_by_mmuidx*  (Richard Henderson)
When TBI is enabled in a given regime, 56 bits of the address are significant and we need to clear out any other matching virtual addresses with differing tags. The other uses of tlb_flush_page (without mmuidx) in this file are only used by aarch32 mode. Fixes: 38d931687fa1 Reported-by: Jordan Frank <jordanfrank@fb.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201016210754.818257-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: AArch32 VCVT fixed-point to float is always round-to-nearest  (Peter Maydell)
For AArch32, unlike the VCVT of integer to float, which honours the rounding mode specified by the FPSCR, VCVT of fixed-point to float is always round-to-nearest. (AArch64 fixed-point-to-float conversions always honour the FPCR rounding mode.) Implement this by providing _round_to_nearest versions of the relevant helpers which set the rounding mode temporarily when making the call to the underlying softfloat function. We only need to change the VFP VCVT instructions, because the standard- FPSCR value used by the Neon VCVT is always set to round-to-nearest, so we don't need to do the extra work of saving and restoring the rounding mode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201013103532.13391-1-peter.maydell@linaro.org
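A sketch of the "_round_to_nearest" wrapper pattern the text describes, assuming QEMU's softfloat get/set_float_rounding_mode() accessors and QEMU-tree includes; do_vcvt_fixed_to_f32() is a stand-in for the underlying conversion, not the real VFP helper:

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static float32 do_vcvt_fixed_to_f32(int32_t val, int fbits, float_status *fpst)
    {
        /* stand-in conversion: value / 2^fbits, as float32 */
        return float32_scalbn(int32_to_float32(val, fpst), -fbits, fpst);
    }

    static float32 vcvt_fixed_to_f32_round_to_nearest(int32_t val, int fbits,
                                                      float_status *fpst)
    {
        int old_mode = get_float_rounding_mode(fpst);
        float32 ret;

        /* temporarily force round-to-nearest for the conversion */
        set_float_rounding_mode(float_round_nearest_even, fpst);
        ret = do_vcvt_fixed_to_f32(val, fbits, fpst);
        set_float_rounding_mode(old_mode, fpst);
        return ret;
    }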
2020-10-20  target/arm: Fix SMLAD incorrect setting of Q bit  (Peter Maydell)
The SMLAD instruction is supposed to:
 * signed multiply Rn[15:0] * Rm[15:0]
 * signed multiply Rn[31:16] * Rm[31:16]
 * perform a signed addition of the products and Ra
 * set Rd to the low 32 bits of the theoretical infinite-precision result
 * set the Q flag if the sign-extension of Rd would differ from the infinite-precision result (ie on overflow)
Our current implementation doesn't quite do this, though: it performs an addition of the products setting Q on overflow, and then it adds Ra, again possibly setting Q. This sometimes incorrectly sets Q when the architecturally mandated only-check-for-overflow-once algorithm does not. For instance:
    r1 = 0x80008000; r2 = 0x80008000; r3 = 0xffffffff
    smlad r0, r1, r2, r3
    This is (-32768 * -32768) + (-32768 * -32768) - 1
The products are both 0x4000_0000, so when added together as 32-bit signed numbers they overflow (and QEMU sets Q), but because the addition of Ra == -1 brings the total back down to 0x7fff_ffff there is no overflow for the complete operation and setting Q is incorrect. Fix this edge case by resorting to 64-bit arithmetic for the case where we need to add three values together. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201009144712.11187-1-peter.maydell@linaro.org
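A standalone C model of the corrected algorithm (not QEMU's helper code): sum the two products and Ra at 64-bit precision, then set Q only if the truncated 32-bit result differs from the infinite-precision sum:

    #include <stdint.h>
    #include <stdbool.h>

    int32_t smlad_model(uint32_t rn, uint32_t rm, uint32_t ra, bool *q)
    {
        int64_t p1 = (int64_t)(int16_t)(rn & 0xffff) * (int16_t)(rm & 0xffff);
        int64_t p2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(rm >> 16);
        int64_t sum = p1 + p2 + (int32_t)ra;
        int32_t result = (int32_t)sum;

        if (sum != result) {
            *q = true;   /* overflow of the complete operation, checked once */
        }
        return result;   /* for the worked example above: 0x7fffffff, Q unset */
    }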
2020-10-08  target/arm: Make '-cpu max' have a 48-bit PA  (Peter Maydell)
QEMU supports a 48-bit physical address range, but we don't currently expose it in the '-cpu max' ID registers (you get the same range as Cortex-A57, which is 44 bits). Set the ID_AA64MMFR0.PARange field to indicate 48 bits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201001160116.18095-1-peter.maydell@linaro.org
2020-10-08  hw/arm/virt: Implement kvm-steal-time  (Andrew Jones)
We add the kvm-steal-time CPU property and implement it for machvirt. A tiny bit of refactoring was also done to allow pmu and pvtime to use the same vcpu device helper functions. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Message-id: 20201001061718.101915-7-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-08  target/arm/kvm: Make uncalled stubs explicitly unreachable  (Andrew Jones)
When we compile without KVM support !defined(CONFIG_KVM) we generate stubs for functions that the linker will still encounter. Sometimes these stubs can be executed safely and are placed in paths where they get executed with or without KVM. Other functions should never be called without KVM. Those functions should be guarded by kvm_enabled(), but should also be robust to refactoring mistakes. Putting a g_assert_not_reached() in the function should help. Additionally, the g_assert_not_reached() calls may actually help the linker remove some code. We remove the stubs for kvm_arm_get/put_virtual_time(), as they aren't necessary at all - the only caller is in kvm.c Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Message-id: 20201001061718.101915-3-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
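A hedged illustration of the stub pattern (the function name is hypothetical, not an actual QEMU stub): without CONFIG_KVM the linker still needs a symbol, but actually reaching it means a kvm_enabled() guard is missing somewhere.

    #include <glib.h>

    void kvm_arm_example_vcpu_setup(void *cpu)
    {
        /* Must never run in a !CONFIG_KVM build. */
        g_assert_not_reached();
    }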
2020-10-05  icount: rename functions to be consistent with the module name  (Claudio Fontana)
Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-10-05  cpu-timers, icount: new modules  (Claudio Fontana)
refactoring of cpus.c continues with cpu timer state extraction. cpu-timers: responsible for the softmmu cpu timers state, including cpu clocks and ticks. icount: counts the TCG instructions executed. As such it is specific to the TCG accelerator. Therefore, it is built only under CONFIG_TCG. One complication is due to qtest, which uses an icount field to warp time as part of qtest (qtest_clock_warp). In order to solve this problem, provide a separate counter for qtest. This requires fixing assumptions scattered in the code that qtest_enabled() implies icount_enabled(), checking each specific case. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> [remove redundant initialization with qemu_spice_init] Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [fix lingering calls to icount_get] Signed-off-by: Claudio Fontana <cfontana@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-10-01  target/arm: Fix SVE splice  (Richard Henderson)
While converting to gen_gvec_ool_zzzp, we lost passing a->esz as the data argument to the function. Fixes: 36cbb7a8e71 Cc: qemu-stable@nongnu.org Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200918000500.2690937-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-01  target/arm: Fix sve ldr/str  (Richard Henderson)
The mte update missed a bit when producing clean addresses. Fixes: b2aa8879b88 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200916014102.2446323-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-01  target/arm: Make isar_feature_aa32_fp16_arith() handle M-profile  (Peter Maydell)
The M-profile definition of the MVFR1 ID register differs slightly from the A-profile one, and in particular the check for "does the CPU support fp16 arithmetic" is not the same. We don't currently implement any M-profile CPUs with fp16 arithmetic, so this is not yet a visible bug, but correcting the logic now disarms this beartrap for when we eventually do. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-6-peter.maydell@linaro.org
2020-10-01  target/arm: Add ID register values for Cortex-M0  (Peter Maydell)
Give the Cortex-M0 ID register values corresponding to its implemented behaviour. These will not be guest-visible but will be used to govern the behaviour of QEMU's emulation. We use the same values that the Cortex-M3 does. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-5-peter.maydell@linaro.org
2020-10-01  target/arm: Move id_pfr0, id_pfr1 into ARMISARegisters  (Peter Maydell)
Move the id_pfr0 and id_pfr1 fields into the ARMISARegisters sub-struct. We're going to want id_pfr1 for an isar_features check, and moving both at the same time avoids an odd inconsistency. Changes other than the ones to cpu.h and kvm64.c made automatically with: perl -p -i -e 's/cpu->id_pfr/cpu->isar.id_pfr/' target/arm/*.c hw/intc/armv7m_nvic.c Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-3-peter.maydell@linaro.org
2020-10-01  target/arm: Replace ARM_FEATURE_PXN with ID_MMFR0.VMSA check  (Peter Maydell)
The ARM_FEATURE_PXN bit indicates whether the CPU supports the PXN bit in short-descriptor translation table format descriptors. This is indicated by ID_MMFR0.VMSA being at least 0b0100. Replace the feature bit with an ID register check, in line with our preference for ID register checks over feature bits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200910173855.4068-2-peter.maydell@linaro.org
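A sketch of what such an ID-register check looks like; the helper name is made up, and the only assumption beyond the text above is that ID_MMFR0.VMSA lives in bits [3:0] of the register:

    #include <stdint.h>
    #include <stdbool.h>

    static inline bool example_feature_pxn(uint32_t id_mmfr0)
    {
        return (id_mmfr0 & 0xf) >= 4;   /* VMSA >= 0b0100 implies PXN support */
    }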
2020-09-24  Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging  (Peter Maydell)
Pull request
This includes the atomic_ -> qatomic_ rename that touches many files and is prone to conflicts.
# gpg: Signature made Wed 23 Sep 2020 17:08:43 BST
# gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
  qemu/atomic.h: rename atomic_ to qatomic_
  tests: add test-fdmon-epoll
  fdmon-poll: reset npfd when upgrading to fdmon-epoll
  gitmodules: add qemu.org vbootrom submodule
  gitmodules: switch to qemu.org meson mirror
  gitmodules: switch to qemu.org qboot mirror
  docs/system: clarify deprecation schedule
  virtio-crypto: don't modify elem->in/out_sg
  virtio-blk: undo destructive iov_discard_*() operations
  util/iov: add iov_discard_undo()
  virtio: add vhost-user-fs-ccw device
  libvhost-user: handle endianness as mandated by the spec
  MAINTAINERS: add Stefan Hajnoczi as block/nvme.c maintainer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-09-23  qemu/atomic.h: rename atomic_ to qatomic_  (Stefan Hajnoczi)
clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when QEMU code calls these functions in a source file that also included <stdatomic.h> via a system header file:
    $ CC=clang CXX=clang++ ./configure ... && make
    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none. This patch was generated using:
    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
      sort -u >/tmp/changed_identifiers
    $ for identifier in $(</tmp/changed_identifiers); do
          sed -i "s%\<$identifier\>%q$identifier%g" \
              $(git grep -I -l "\<$identifier\>")
      done
I manually fixed line-wrap issues and misaligned rST tables. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
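A minimal before/after usage sketch, assuming QEMU's include/qemu/atomic.h after this rename; the counter variable and functions are made up:

    #include "qemu/osdep.h"
    #include "qemu/atomic.h"

    static unsigned int counter;

    unsigned int read_counter(void)
    {
        /* before this patch: atomic_read(&counter) */
        return qatomic_read(&counter);
    }

    void bump_counter(void)
    {
        /* before this patch: atomic_inc(&counter) */
        qatomic_inc(&counter);
    }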
2020-09-22  qom: simplify object_find_property / object_class_find_property  (Daniel P. Berrangé)
When debugging QEMU it is often useful to put a breakpoint on the error_setg_internal method impl. Unfortunately the object_property_add / object_class_property_add methods call object_property_find / object_class_property_find methods to check if a property exists already before adding the new property. As a result there are a huge number of calls to error_setg_internal on startup of most QEMU commands, making it very painful to set a breakpoint on this method. Most callers of object_find_property and object_class_find_property, however, pass in a NULL for the Error parameter. This simplifies the methods to remove the Error parameter entirely, and then adds some new wrapper methods that are able to raise an Error when needed. Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200914135617.1493072-1-berrange@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2020-09-18  qom: Remove module_obj_name parameter from OBJECT_DECLARE* macros  (Eduardo Habkost)
One of the goals of having less boilerplate on QOM declarations is to avoid human error. Requiring an extra argument that is never used is an opportunity for mistakes. Remove the unused argument from OBJECT_DECLARE_TYPE and OBJECT_DECLARE_SIMPLE_TYPE. Coccinelle patch used to convert all users of the macros:
    @@
    declarer name OBJECT_DECLARE_TYPE;
    identifier InstanceType, ClassType, lowercase, UPPERCASE;
    @@
     OBJECT_DECLARE_TYPE(InstanceType, ClassType,
    -                    lowercase,
                         UPPERCASE);

    @@
    declarer name OBJECT_DECLARE_SIMPLE_TYPE;
    identifier InstanceType, lowercase, UPPERCASE;
    @@
     OBJECT_DECLARE_SIMPLE_TYPE(InstanceType,
    -                           lowercase,
                                UPPERCASE);
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Acked-by: Paul Durrant <paul@xen.org> Acked-by: Thomas Huth <thuth@redhat.com> Message-Id: <20200916182519.415636-4-ehabkost@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
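A hedged usage sketch with made-up type names (MyDevice / MY_DEVICE); the lowercase "my_device" argument is exactly what this patch removes, so before the change the declaration would have read OBJECT_DECLARE_SIMPLE_TYPE(MyDevice, my_device, MY_DEVICE):

    #include "qemu/osdep.h"
    #include "qom/object.h"

    #define TYPE_MY_DEVICE "my-device"
    OBJECT_DECLARE_SIMPLE_TYPE(MyDevice, MY_DEVICE)

    /* The instance struct is still defined by the user as before. */
    struct MyDevice {
        Object parent_obj;
        int counter;
    };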