path: root/target
2021-08-25  target/arm: Implement MVE saturating doubling multiply accumulates (Peter Maydell)

Implement the MVE saturating doubling multiply accumulate insns VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply, double, add the accumulator shifted by the element size, possibly round, saturate to twice the element size, then take the high half of the result. The *MLAH insns do vector * scalar + vector, and the *MLASH insns do vector * vector + scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
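As an illustration of the datapath described above, here is a minimal sketch of one 8-bit lane in plain C (the function name and signature are invented for this sketch; they are not QEMU's generated helpers):

    #include <stdint.h>
    #include <stdbool.h>

    /* One 8-bit lane of a VQ(R)DMLAH-style op: multiply, double, add the
     * accumulator shifted by the element size, optionally round, saturate
     * to 16 bits, then return the high half. */
    static int8_t qrdmlah_lane_b(int8_t a, int8_t b, int8_t acc,
                                 bool round, bool *sat)
    {
        int32_t m = 2 * (int32_t)a * (int32_t)b + ((int32_t)acc << 8);
        if (round) {
            m += 1 << 7;
        }
        if (m > INT16_MAX) {
            m = INT16_MAX;
            *sat = true;
        } else if (m < INT16_MIN) {
            m = INT16_MIN;
            *sat = true;
        }
        return (int8_t)(m >> 8);
    }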
2021-08-25  target/arm: Implement MVE VMLA (Peter Maydell)

Implement the MVE VMLA insn, which multiplies a vector by a scalar and accumulates into another vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMLADAV and VMLSDAV (Peter Maydell)

Implement the MVE VMLADAV and VMLSDAV insns. Like the VMLALDAV and VMLSLDAV insns already implemented, these accumulate multiplied vector elements; but they accumulate a 32-bit result rather than a 64-bit one. Note that these encodings overlap with what would be RdaHi=0b111 for VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn (Peter Maydell)

The MVEGenDualAccOpFn is a bit misnamed, since it is used for the "long dual accumulate" operations that use a 64-bit accumulator. Rename it to MVEGenLongDualAccOpFn so we can use the former name for the 32-bit accumulator insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE narrowing moves (Peter Maydell)

Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN. These take a double-width input, narrow it (possibly saturating) and store the result to either the top or bottom half of the output element.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
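A rough sketch of the saturating-narrow step for one signed 16-to-8-bit element (illustrative names, not the QEMU helpers; the 'top' and 'bottom' variants differ only in which byte of the wide destination element receives the result):

    #include <stdint.h>
    #include <stdbool.h>

    /* Narrow one int16 value to int8 with signed saturation. */
    static int8_t qmovn_hb(int16_t x, bool *sat)
    {
        if (x > INT8_MAX) { *sat = true; return INT8_MAX; }
        if (x < INT8_MIN) { *sat = true; return INT8_MIN; }
        return (int8_t)x;
    }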
2021-08-25  target/arm: Implement MVE VABAV (Peter Maydell)

Implement the MVE VABAV insn, which computes absolute differences between elements of two vectors and accumulates the result into a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
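Ignoring predication, the semantics are a simple absolute-difference accumulation; a sketch for the unsigned byte case (names invented for illustration):

    #include <stdint.h>

    /* VABAV.U8-style accumulation over a 16-byte vector pair. */
    static uint32_t vabav_u8(uint32_t rda, const uint8_t *n, const uint8_t *m)
    {
        for (int i = 0; i < 16; i++) {
            rda += n[i] > m[i] ? (uint32_t)(n[i] - m[i])
                               : (uint32_t)(m[i] - n[i]);
        }
        return rda;
    }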
2021-08-25  target/arm: Implement MVE integer min/max across vector (Peter Maydell)

Implement the MVE integer min/max across vector insns VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum (or minimum) of the vector elements together with the value in a general purpose register, and store the result back into that general purpose register. These insns overlap with VRMLALDAVH (they use what would be RdaHi=0b110).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats (Peter Maydell)

All the users of the vmlaldav formats have an 'x' bit in bit 12 and an 'a' bit in bit 5; move these to the format rather than specifying them in each insn pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE shift-by-scalar (Peter Maydell)

Implement the MVE instructions which perform shifts by a scalar. These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the shift amount in a general purpose register and shift every element in the vector by that amount. Mostly we can reuse the helper functions for shift-by-immediate; we do need two new helpers for VQRSHL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VMLAS (Peter Maydell)

Implement the MVE VMLAS insn, which multiplies a vector by a vector and adds a scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE VPSEL (Peter Maydell)

Implement the MVE VPSEL insn, which sets each byte of the destination vector Qd to the byte from either Qn or Qm depending on the value of the corresponding bit in VPR.P0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
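Ignoring beat-wise execution, the core of the operation is a per-byte select; a sketch (illustrative, not QEMU's helper):

    #include <stdint.h>

    /* Each destination byte comes from Qn or Qm depending on the
     * corresponding bit of the 16-bit VPR.P0 field. */
    static void vpsel(uint8_t *d, const uint8_t *n, const uint8_t *m,
                      uint16_t p0)
    {
        for (int i = 0; i < 16; i++) {
            d[i] = (p0 & (1u << i)) ? n[i] : m[i];
        }
    }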
2021-08-25  target/arm: Implement MVE integer vector-vs-scalar comparisons (Peter Maydell)

Implement the MVE integer vector comparison instructions that compare each element against a scalar from a general purpose register. These are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)" encodings T4, T5 and T6. We have to move the decodetree pattern for VPST, because it overlaps with VCMP T4 with size = 0b11.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE integer vector comparisons (Peter Maydell)

Implement the MVE integer vector comparison instructions. These are "VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings T1, T2 and T3. These insns compare corresponding elements in each vector, and update the VPR.P0 predicate bits with the results of the comparison. VPT also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively "VCMP then VPST".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Factor out gen_vpst() (Peter Maydell)

Factor out the "generate code to update VPR.MASK01/MASK23" part of trans_VPST(); we are going to want to reuse it for the VPT insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Implement MVE incrementing/decrementing dup insns (Peter Maydell)

Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP, VIWDUP and VDWDUP. These fill the elements of a vector with successively incrementing values, starting at the offset specified in a general purpose register. The final value of the offset is written back to this register. The wrapping variants take a second general purpose register which specifies the point where the count should wrap back to 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
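A sketch of the wrapping byte variant's semantics, ignoring predication (names and types invented for the sketch):

    #include <stdint.h>

    /* VIWDUP.8-style fill: write successive offsets incremented by imm,
     * wrapping to 0 at the wrap value; return the final offset. */
    static uint32_t viwdup_b(uint8_t *d, uint32_t offset, uint32_t wrap,
                             uint32_t imm)
    {
        for (int i = 0; i < 16; i++) {
            d[i] = (uint8_t)offset;
            offset += imm;
            if (offset == wrap) {
                offset = 0;
            }
        }
        return offset;
    }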
2021-08-25  target/arm: Implement MVE VMULL (polynomial) (Peter Maydell)

Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes in two flavours: an 8x8->16 and a 16x16->32 operation. Also unlike Neon, the inputs are in either the low or the high half of each double-width element. The assembler for this insn indicates the size with "P8" or "P16", encoded into bit 28 as size = 0 or 1. We choose to follow the same encoding as VQDMULL and decode this into a->size as MO_16 or MO_32, indicating the size of the result elements. This then carries through to the helper function names, where it matches up with the existing pmull_h(), which does an 8x8->16 operation, and a new pmull_w(), which does the 16x16->32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
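The per-element operation pmull_w() must perform is a carry-less multiply; a scalar reference sketch (illustrative, not QEMU's vectorized implementation):

    #include <stdint.h>

    /* Polynomial (carry-less) multiply: 16x16 -> 32 over GF(2). */
    static uint32_t pmull16(uint32_t a, uint32_t b)
    {
        uint32_t r = 0;
        for (int i = 0; i < 16; i++) {
            if (b & (1u << i)) {
                r ^= a << i;
            }
        }
        return r;
    }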
2021-08-25  target/arm: Fix VLDRB/H/W for predicated elements (Peter Maydell)

For vector loads, predicated elements are zeroed, instead of retaining their previous values (as happens for most data processing operations). This means we need to distinguish "beat not executed due to ECI" (don't touch destination element) from "beat executed but predicated out" (zero destination element).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix VPT advance when ECI is non-zero (Peter Maydell)

We were not paying attention to the ECI state when advancing the VPT state. Architecturally, VPT state advance happens for every beat (see the pseudocode VPTAdvance()), so on every beat the 4 bits of VPR.P0 corresponding to the current beat are inverted if required, and at the end of beats 1 and 3 the VPR MASK fields are updated. This means that if the ECI state says we should not be executing all 4 beats then we need to skip some of the updating of the VPR that we currently do in mve_advance_vpt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Factor out mve_eci_mask() (Peter Maydell)

In some situations we need a mask telling us which parts of the vector correspond to beats that are not being executed because of ECI, separately from the combined "which bytes are predicated away" mask. Factor this mask calculation out of mve_element_mask() into its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
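Each of the 4 beats covers 4 bytes of the 16-byte vector, so the ECI mask can be sketched as a function of how many beats were already completed (this framing is an assumption for illustration; it is not QEMU's exact encoding of the ECI state):

    #include <stdint.h>

    /* Bytes belonging to beats that still execute, given that the first
     * beats_done beats (0..4) already ran before the interruption. */
    static uint16_t eci_byte_mask(int beats_done)
    {
        return (0xffff << (4 * beats_done)) & 0xffff;
    }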
2021-08-25  target/arm: Fix calculation of LTP mask when LR is 0 (Peter Maydell)

In mve_element_mask(), we calculate a mask for tail predication which should have a number of 1 bits based on the value of LR. However, our MAKE_64BIT_MASK() macro has undefined behaviour when passed a zero length. Special case this to give the all-zeroes mask we require.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
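QEMU's MAKE_64BIT_MASK(shift, length) expands to an expression of the shape ((~0ULL >> (64 - length)) << shift), so a zero length asks for a shift by 64, which C leaves undefined. The shape of the guard (a sketch; the variable names are invented):

    /* LR == 0 means an all-zeroes tail-predication mask, so don't
     * call MAKE_64BIT_MASK with a zero length. */
    uint16_t ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;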
2021-08-25  target/arm: Fix MVE 48-bit SQRSHRL for small right shifts (Peter Maydell)

We got an edge case wrong in the 48-bit SQRSHRL implementation: when the shift is to the right, the result is always smaller than the input value, but it may still fall outside the 48-bit range the result is supposed to be in, if the input had bits set in [63..48] and the shift did not bring all of them down into [47..0]. Handle this similarly to the way we already do for this case in do_uqrshl48_d(): extend the calculated result from 48 bits, and return that if not saturating or if it doesn't change the result; otherwise fall through to return a saturated value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix 48-bit saturating shifts (Peter Maydell)

In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge cases wrong and failed to saturate correctly:

(1) In do_sqrshl48_d() we used the same code that do_sqrshl_bhs() does to obtain the saturated most-negative and most-positive 48-bit signed values for the large-shift-left case. This gives (1 << 47) for saturate-to-most-negative, but we weren't sign-extending this value to the 64-bit output as the pseudocode requires.

(2) For left shifts by less than 48, we copied the "8/16 bit" code from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right thing because it assumes the C type we're working with is at least twice the number of bits we're saturating to (so that a shift left by bits-1 can't shift anything off the top of the value). This isn't true for bits == 48, so we would incorrectly return 0 rather than the most-positive value for situations like "shift (1 << 44) right by 20". Instead, check for saturation by doing the shift and sign-extend and then testing whether shifting back left again gives the original value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
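The shift-back saturation test can be sketched like this for the left-shift case, using QEMU's sextract64() from qemu/bitops.h (the surrounding variable names are invented):

    int64_t val = sextract64(src, 0, 48);          /* 48-bit signed input */
    int64_t r = sextract64((uint64_t)val << shift, 0, 48);
    if ((r >> shift) == val) {
        /* no bits were lost: r is the (sign-extended) 48-bit result */
    } else {
        /* overflow: return the most-positive or most-negative 48-bit
         * value, sign-extended to 64 bits, and set the saturation flag */
    }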
2021-08-25  target/arm: Fix mask handling for MVE narrowing operations (Peter Maydell)

In the MVE helpers for the narrowing operations (DO_VSHRN and DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for the 'top' versions of the insn. This is because the loop works over the double-sized input elements and shifts the predicate mask by that many bits each time, but when we write out the half-sized output we must look at the mask bits for whichever half of the element we are writing to. Correct this by shifting the whole mask right by ESIZE bits for the 'top' insns. This allows us also to simplify the saturation bit checking (where we had noticed that we needed to look at a different mask bit for the 'top' insn).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix signed VADDV (Peter Maydell)

A cut-and-paste error meant we handled signed VADDV like unsigned VADDV; fix the type used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Fix MVE VSLI by 0 and VSRI by <dt> (Peter Maydell)

In the MVE shift-and-insert insns, we special case VSLI by 0 and VSRI by <dt>. VSRI by <dt> means "don't update the destination", which is what we've implemented. However VSLI by 0 is "set destination to the input", so we don't want to use the same special-casing that we do for VSRI by <dt>. Since the generic logic gives the right answer for a shift by 0, just use that.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Print MVE VPR in CPU dumps (Peter Maydell)

Include the MVE VPR register value in the CPU dumps produced by arm_cpu_dump_state() if we are printing FPU information. This makes it easier to interpret debug logs when predication is active.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25  target/arm: Note that we handle VMOVL as a special case of VSHLL (Peter Maydell)

Although the architecture doesn't define it as an alias, VMOVL (vector move long) is encoded as a VSHLL with a zero shift. Add a comment in the decode file noting that we handle VMOVL as part of VSHLL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-13  target/i386: Fixed size of constant for Windows (Lara Lazier)

~0UL is 64 bits wide on (64-bit LP64) Linux, but only 32 bits wide on Windows, where long is a 32-bit type even in 64-bit builds.

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
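The underlying portability trap, in two lines (illustrative):

    #include <stdint.h>

    unsigned long a = ~0UL;   /* 64 bits on LP64 Linux, 32 on LLP64 Windows */
    uint64_t b = ~0ULL;       /* 64 bits of ones on every host */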
2021-07-30  target/nios2: Mark raise_exception() as noreturn (Philippe Mathieu-Daudé)

Raised exceptions don't return, so mark the helper with noreturn.

Fixes: 032c76bc6f9 ("nios2: Add architecture emulation support")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210729101315.2318714-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
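The general shape of such an annotation (QEMU wraps the attribute in its own macro; this standalone definition is illustrative, not the exact patch):

    /* Telling the compiler the helper never returns lets it drop
     * unreachable code after the call and silences missing-return
     * warnings in callers. */
    static void __attribute__((noreturn))
    raise_exception(CPUNios2State *env, uint32_t index)
    {
        CPUState *cs = env_cpu(env);
        cs->exception_index = index;
        cpu_loop_exit(cs);      /* longjmps out of the TB; never returns */
    }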
2021-07-29  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging (Peter Maydell)

Bugfixes.

# gpg: Signature made Thu 29 Jul 2021 09:15:54 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  libvhost-user: fix -Werror=format= warnings with __u64 fields
  meson: fix meson 0.58 warning with libvhost-user subproject
  target/i386: fix typo in ctl_has_irq
  target/i386: Added consistency checks for event injection
  configure: Add -Werror to avx2, avx512 tests
  Makefile: ignore long options
  i386: assert 'cs->kvm_state' is not null

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-29  target/i386: fix typo in ctl_has_irq (Paolo Bonzini)

The shift constant was incorrect, causing int_prio to always be zero.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29  target/i386: Added consistency checks for event injection (Lara Lazier)

VMRUN exits with SVM_EXIT_ERR if either:

 * the event injected has a reserved type, or
 * the event injected is of type 3 (exception), and the vector that has been specified does not correspond to an exception.

This does not fix the entire exc_inj test in kvm-unit-tests.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
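A sketch of the shape of the validity check (the SVM_EVTINJ_* field names follow QEMU's target/i386/svm.h, but the exact vector test here is an assumption, not the patch itself):

    static bool event_inj_valid(uint32_t event_inj)
    {
        uint32_t type = event_inj & SVM_EVTINJ_TYPE_MASK;
        uint32_t vector = event_inj & SVM_EVTINJ_VEC_MASK;

        switch (type) {
        case SVM_EVTINJ_TYPE_INTR:
        case SVM_EVTINJ_TYPE_NMI:
        case SVM_EVTINJ_TYPE_SOFT:
            return true;
        case SVM_EVTINJ_TYPE_EXEPT:
            /* the vector must name an exception: 2 (NMI) and >= 32 don't */
            return vector != 2 && vector < 32;
        default:
            return false;   /* reserved type: exit with SVM_EXIT_ERR */
        }
    }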
2021-07-29  i386: assert 'cs->kvm_state' is not null (Vitaly Kuznetsov)

Coverity reports a potential NULL pointer dereference in get_supported_hv_cpuid_legacy() when 'cs->kvm_state' is NULL. While 'cs->kvm_state' can indeed be NULL in hv_cpuid_get_host(), kvm_hyperv_expand_features() makes sure that this only happens when KVM_CAP_SYS_HYPERV_CPUID is supported, and KVM_CAP_SYS_HYPERV_CPUID implies KVM_CAP_HYPERV_CPUID, so get_supported_hv_cpuid_legacy() is never really called. Add asserts to strengthen the protection against broken KVM behavior.

Coverity: CID 1458243
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210716115852.418293-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29  target/ppc: Ease L=0 requirement on cmp/cmpi/cmpl/cmpli for ppc32 (Matheus Ferst)

In commit 8f0a4b6a9b, we started to require L=0 for ppc32 to match what The Programming Environments Manual says: "For 32-bit implementations, the L field must be cleared, otherwise the instruction form is invalid." The stricter behavior, however, broke AROS boot on sam460ex, which is a regression from 6.0. This patch partially reverts the change, raising the exception only for CPUs known to require L=0 (e500 and e500mc) and logging a guest error for other cases. Both behaviors are acceptable by the PowerISA, which allows "the system illegal instruction error handler to be invoked or yield boundedly undefined results."

Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Fixes: 8f0a4b6a9b ("target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree")
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210720135507.2444635-1-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-27  target/arm: Add sve-default-vector-length cpu property (Richard Henderson)

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length under the real Linux kernel. We have no way of passing along a real default across exec like the kernel can, but this is a decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case, added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Export aarch64_sve_zcr_get_valid_len (Richard Henderson)

Rename from sve_zcr_get_valid_len and make accessible from outside of helper.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Correctly bound length in sve_zcr_get_valid_len (Richard Henderson)

Currently, our only caller is sve_zcr_len_for_el, which has already masked the length extracted from ZCR_ELx, so the masking done here is a nop. But we will shortly have uses from other locations, where the length will be unmasked. Saturate the length to ARM_MAX_VQ instead of truncating to the low 4 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
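The change amounts to clamping rather than masking; a one-line sketch (ARM_MAX_VQ and MIN are existing QEMU names; the variable name is illustrative):

    /* was: start_len &= 0xf;   truncation could turn an out-of-range
     * length into a small in-range one */
    start_len = MIN(start_len, ARM_MAX_VQ - 1);   /* saturate instead */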
2021-07-27  docs: Update path that mentions deprecated.rst (Mao Zhongyi)

Missed in commit f3478392 "docs: Move deprecation, build and license info out of system/".

Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Report M-profile alignment faults correctly to the guest (Peter Maydell)

For M-profile, we weren't reporting alignment faults triggered by the generic TCG code correctly to the guest. These get passed into arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile style exception.fsr value of 1. We didn't check for this, and so they fell through into the default of "assume this is an MPU fault" and were reported to the guest as a data access violation MPU fault. Report these alignment faults as UsageFaults which set the UNALIGNED bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
2021-07-27  target/arm: Add missing 'return's after calling v7m_exception_taken() (Peter Maydell)

In do_v7m_exception_exit(), we perform various checks as part of performing the exception return. If one of these checks fails, the architecture requires that we take an appropriate exception on the existing stackframe. We implement this by calling v7m_exception_taken() to set up to take the new exception, and then immediately returning from do_v7m_exception_exit() without proceeding any further with the unstack-and-exception-return process. In a couple of checks that are new in v8.1M, we forgot the "return" statement, with the effect that if bad code in the guest tripped over these checks we would set up to take a UsageFault exception but then blunder on trying to also unstack and return from the original exception, with the probable result that the guest would crash. Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
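The shape of the fix at each affected check (the condition and the call's argument list are abbreviated and illustrative, not copied from the patch):

    if (v8_1m_check_failed) {
        /* set up to take a UsageFault on the existing stack frame */
        v7m_exception_taken(cpu, excret, true, false);
        return;     /* the statement that was missing */
    }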
2021-07-27  target/arm: Enforce that M-profile SP low 2 bits are always zero (Peter Maydell)

For M-profile, unlike A-profile, the low 2 bits of SP are defined to be RES0H, which is to say that they must be hardwired to zero so that guest attempts to write non-zero values to them are ignored. Implement this behaviour by masking out the low bits:

 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places where the register is definitely not r13 (usually because that has been checked for as an UNDEFINED or UNPREDICTABLE case and handled as UNDEF). All the other writes to regs[13] in C code are either:

 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
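All three paths reduce to the same masking operation; for generated code the store_reg() change can be sketched as below (arm_dc_feature() and tcg_gen_andi_i32() are existing QEMU functions, but treat the snippet as illustrative rather than the exact patch):

    if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
        /* SP[1:0] are hardwired to zero: discard whatever was written */
        tcg_gen_andi_i32(var, var, ~3);
    }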
2021-07-26  Merge remote-tracking branch 'remotes/quic/tags/pull-hex-20210725' into staging (Peter Maydell)

The Hexagon target was silently failing the SIGSEGV test because the signal handler was not called.
Patch 1/2 fixes the Hexagon target.
Patch 2/2 drops the include of qemu.h from target/hexagon/op_helper.c.

**** Changes in v2 ****
Drop changes to linux-test.c due to intermittent failures on riscv

# gpg: Signature made Sun 25 Jul 2021 22:39:38 BST
# gpg: using RSA key 7B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422

* remotes/quic/tags/pull-hex-20210725:
  target/hexagon: Drop include of qemu.h
  Hexagon (target/hexagon) remove put_user_*/get_user_*

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-23  i386: do not call cpudef-only models functions for max, host, base (Claudio Fontana)

Some cpu properties have to be set only for the cpu models in builtin_x86_defs, registered with x86_register_cpu_model_type, and not for the cpu models "base" and "max" and the subclass "host". These properties are the ones set by the function x86_cpu_apply_props (including kvm_default_props and tcg_default_props), and the "vendor" property for the KVM and HVF accelerators. After the recent refactoring of cpu code, which also affected these properties, they were instead set unconditionally for all x86 cpus. This was detected as a bug with nested virtualization on AMD with the "host" cpu model, as svm was not turned on by default, due to the wrongful application of kvm_default_props via x86_cpu_apply_props, which set svm to "off". Rectify the bug introduced in commit "i386: split cpu accelerators" and document the functions that are builtin_x86_defs-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/477
Message-Id: <20210723112921.12637-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-23  target/i386: Added consistency checks for CR3 (Lara Lazier)

All MBZ bits in CR3 must be zero (APM2 15.5). Added checks in both helper_vmrun and helper_write_crN. When EFER.LMA is zero, the upper 32 bits need to be zeroed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210723112740.45962-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
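A sketch of the shape of both checks (the reserved-bit mask value here is an assumption for illustration, not QEMU's constant; MSR_EFER_LMA is QEMU's existing EFER bit define):

    #define CR3_RESERVED_MASK_SKETCH 0xfff0000000000000ULL  /* illustrative */

    if (new_cr3 & CR3_RESERVED_MASK_SKETCH) {
        /* in helper_vmrun: exit with SVM_EXIT_ERR;
         * in helper_write_crN: raise a fault instead */
    }
    if (!(env->efer & MSR_EFER_LMA)) {
        new_cr3 &= 0xffffffffULL;  /* upper 32 bits zeroed outside long mode */
    }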
2021-07-22  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging (Peter Maydell)

Bugfixes.

# gpg: Signature made Thu 22 Jul 2021 14:11:27 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  configure: Let --without-default-features disable vhost-kernel and vhost-vdpa
  configure: Fix the default setting of the "xen" feature
  configure: Allow vnc to get disabled with --without-default-features
  configure: Fix --without-default-features propagation to meson
  meson: fix dependencies for modinfo
  configure: Drop obsolete check for the alloc_size attribute
  target/i386: Added consistency checks for EFER
  target/i386: Added consistency checks for CR4
  target/i386: Added V_INTR_PRIO check to virtual interrupts
  qemu-config: restore "machine" in qmp_query_command_line_options()
  usb: fix usb-host dependency check
  chardev-spice: add missing module_obj directive
  vl: Parse legacy default_machine_opts
  qemu-config: fix memory leak on ferror()
  qemu-config: never call the callback after an error, fix leak

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-22  target/i386: Added consistency checks for EFER (Lara Lazier)

EFER.SVME has to be set, and EFER reserved bits must be zero. In addition the combinations:

 * EFER.LMA or EFER.LME is non-zero and the processor does not support LM
 * non-zero EFER.LME and CR0.PG and zero CR4.PAE
 * non-zero EFER.LME and CR0.PG and zero CR0.PE
 * non-zero EFER.LME, CR0.PG, CR4.PAE, CS.L and CS.D

are all invalid. (AMD64 Architecture Programmer's Manual, V2, 15.5)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
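Those rules translate into a predicate of roughly the shape shown below (CR0_PG_MASK, CR0_PE_MASK, CR4_PAE_MASK and the MSR_EFER_* bits are existing QEMU defines; the reserved-bit mask and the LM feature test are stand-ins for this sketch):

    static bool efer_state_invalid(CPUX86State *env, uint64_t efer)
    {
        if (!(efer & MSR_EFER_SVME) || (efer & EFER_RESERVED_BITS_SKETCH)) {
            return true;
        }
        if ((efer & (MSR_EFER_LMA | MSR_EFER_LME)) && !cpu_has_lm_sketch(env)) {
            return true;
        }
        if ((efer & MSR_EFER_LME) && (env->cr[0] & CR0_PG_MASK)) {
            if (!(env->cr[4] & CR4_PAE_MASK) || !(env->cr[0] & CR0_PE_MASK)) {
                return true;
            }
            /* also invalid in this state: CS.L and CS.D both set */
        }
        return false;
    }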
2021-07-22  target/i386: Added consistency checks for CR4 (Lara Lazier)

All MBZ bits in CR4 must be zero (APM2 15.5). Added a reserved bitmask and added checks in both helper_vmrun and helper_write_crN.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22  target/i386: Added V_INTR_PRIO check to virtual interrupts (Lara Lazier)

The APM2 states that "The processor takes a virtual INTR interrupt if V_IRQ and V_INTR_PRIO indicate that there is a virtual interrupt pending whose priority is greater than the value in V_TPR."

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
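As a condition, that rule looks roughly like this (the v_* names are illustrative extractions from the VMCB's V_IRQ, V_IGN_TPR, V_INTR_PRIO and V_TPR fields, not QEMU accessors):

    /* A pending virtual interrupt is taken only if its priority beats
     * the virtual TPR, unless the VMCB says to ignore the TPR. */
    bool take_virtual_intr = v_irq && (v_ign_tpr || v_intr_prio > v_tpr);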
2021-07-21  target/hexagon: Drop include of qemu.h (Peter Maydell)

The qemu.h file is a CONFIG_USER_ONLY header; it doesn't appear on the include path for softmmu builds. Currently we include it unconditionally in target/hexagon/op_helper.c. We used to need it for the put_user_*() and get_user_*() functions, but now that we have removed the uses of those from op_helper.c, the only reason it's still there is that we're implicitly relying on it pulling in some other headers. Explicitly include the headers we need for other functions, and drop the include of qemu.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210717103017.20491-1-peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
2021-07-21  Hexagon (target/hexagon) remove put_user_*/get_user_* (Taylor Simpson)

Replace put_user_* with cpu_st*_data_ra.
Replace get_user_* with cpu_ld*_data_ra.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <1626384156-6248-2-git-send-email-tsimpson@quicinc.com>