2022-09-01  target/i386: Dot product AVX helper prep  (Paul Brook)

Make the dpps and dppd helpers AVX-ready.

I can't see any obvious reason why dppd shouldn't work on 256-bit ymm
registers, but both AMD and Intel agree that it's xmm only.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: reimplement AVX comparison helpers  (Paul Brook)

AVX includes an additional set of comparison predicates, some of which
our softfloat implementation does not expose as separate functions.
Rewrite the helpers in terms of floatN_compare for future extensibility.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
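As a rough sketch of what "in terms of floatN_compare" means (a
standalone illustration with simplified names, not the actual QEMU
helpers): softfloat's compare returns a relation, and each predicate
maps a set of relations to an all-ones or all-zero lane mask.

    #include <stdint.h>

    /* Stand-ins for softfloat's FloatRelation values. */
    typedef enum {
        float_relation_less      = -1,
        float_relation_equal     = 0,
        float_relation_greater   = 1,
        float_relation_unordered = 2
    } FloatRelation;

    /* e.g. an ordered "less than" predicate (LT_OS-style):
     * true only when the operands compare ordered and less. */
    static uint32_t cmp_lt(FloatRelation rel)
    {
        return rel == float_relation_less ? 0xffffffffu : 0;
    }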
2022-09-01  target/i386: Floating point arithmetic helper AVX prep  (Paul Brook)

Prepare the "easy" floating point vector helpers for AVX.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Destructive vector helpers for AVX  (Paul Brook)

These helpers need to take special care to avoid overwriting source
values before the whole result has been calculated. Currently they use
a dummy Reg-typed variable to store the result and then assign the
whole register. This will cause 128-bit operations to corrupt the upper
half of the register, so replace it with explicit temporaries and
element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
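A compilable sketch of the pattern change (simplified types; the
placeholder op stands for whatever element operation a given helper
performs):

    #include <stdint.h>

    typedef union {
        uint16_t W[8];
    } Reg;

    static uint16_t op(uint16_t a, uint16_t b) { return a + b; }  /* placeholder */

    void binop(Reg *d, const Reg *s)
    {
        /* Before (sketch): Reg r; ...; *d = r; -- this stores the
         * whole register, so once Reg grows to ymm size a 128-bit
         * operation would clobber the upper half it must preserve.
         * After: element-sized temporaries and element stores only. */
        uint16_t r[8];
        int i;

        for (i = 0; i < 8; i++) {
            r[i] = op(d->W[i], s->W[i]);
        }
        for (i = 0; i < 8; i++) {
            d->W[i] = r[i];
        }
    }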
2022-09-01  target/i386: Misc integer AVX helper prep  (Paul Brook)

More preparatory work for AVX support in various integer vector helpers.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-13-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Rewrite simple integer vector helpers  (Paul Brook)

Rewrite the "simple" vector integer helpers in preparation for AVX
support. While the current code is able to use the same prototype for
unary (a = F(b)) and binary (a = F(b, c)) operations, future changes
will cause them to diverge.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-12-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Rewrite vector shift helper  (Paul Brook)

Rewrite the vector shift helpers in preparation for AVX support
(3-operand form and 256-bit vectors). For now keep the existing
two-operand interface.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-11-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
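The body of such a shift helper, as a standalone sketch (simplified
types; QEMU's version also takes the CPU state): the shift count comes
from the low quadword of the count operand, and counts past the element
width zero the result, per the SSE shift semantics.

    #include <stdint.h>

    typedef union {
        uint16_t W[8];
        uint64_t Q[2];
    } Reg;

    /* PSRLW: logical right shift of each 16-bit lane. */
    void psrlw(Reg *d, const Reg *c)
    {
        uint64_t shift = c->Q[0];
        int i;

        for (i = 0; i < 8; i++) {
            d->W[i] = shift > 15 ? 0 : (uint16_t)(d->W[i] >> shift);
        }
    }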
2022-09-01  target/i386: rewrite destructive 3DNow operations  (Paolo Bonzini)

Remove use of the MOVE macro, since it will be purged from MMX/SSE as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Add CHECK_NO_VEX  (Paul Brook)

Reject invalid VEX encodings on MMX instructions.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-7-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
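The check is plausibly a decoder-side macro along these lines (a
sketch; the PREFIX_VEX flag and the illegal_op label are assumed from
QEMU's translator conventions, so this is not the literal patch):

    /* Raise #UD if an MMX-only instruction carries a VEX prefix. */
    #define CHECK_NO_VEX(s) do {                    \
            if ((s)->prefix & PREFIX_VEX) {         \
                goto illegal_op;                    \
            }                                       \
        } while (0)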
2022-09-01  target/i386: do not cast gen_helper_* function pointers  (Paolo Bonzini)

Use a union to store the various possible kinds of function pointers,
and access the correct one based on the flags. SSEOpHelper_table6 and
SSEOpHelper_table7 right now only have one case, but this would change
with AVX's 3- and 4-argument operations. Use unions there too, to keep
the code more similar for the three tables.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
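The shape of the change, sketched with illustrative type names (the
real typedefs are QEMU's SSEFunc_* family):

    typedef struct Reg Reg;
    typedef struct CPUX86State CPUX86State;

    typedef void (*SSEFunc_0_epp)(CPUX86State *env, Reg *d, Reg *s);
    typedef void (*SSEFunc_0_eppp)(CPUX86State *env, Reg *d, Reg *s, Reg *v);

    /* The table row's flags say which member is valid, so the call
     * site picks a member instead of casting a function pointer. */
    union SSEOpFunc {
        SSEFunc_0_epp  op2;    /* legacy two-operand form */
        SSEFunc_0_eppp op3;    /* AVX three-operand form */
    };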
2022-09-01  target/i386: Add size suffix to vector FP helpers  (Paolo Bonzini)

For AVX we're going to need both 128-bit (xmm) and 256-bit (ymm)
variants of floating point helpers. Add the register type suffix to the
existing *PS and *PD helpers (SS and SD variants are only valid on
128-bit vectors).

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-15-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: isolate MMX code more  (Paolo Bonzini)

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: check SSE table flags instead of hardcoding opcodes  (Paolo Bonzini)

Put more flags to work to avoid hardcoding lists of opcodes. The op7
case for SSE_OPF_CMP is included for homogeneity and because AVX needs
it, but it is never used by SSE or MMX.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Move 3DNOW decoder  (Paul Brook)

Handle 3DNOW instructions early to avoid complicating the MMX/SSE logic.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-25-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Rework sse_op_table6/7  (Paul Brook)

Add a flags field to each row in sse_op_table6 and sse_op_table7.

Initially this is only used as a replacement for the magic
SSE41_SPECIAL pointer. The other flags are mostly relevant for the AVX
implementation but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-6-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Rework sse_op_table1  (Paul Brook)

Add a flags field to each row in sse_op_table1.

Initially this is only used as a replacement for the magic SSE_SPECIAL
and SSE_DUMMY pointers; the other flags are mostly relevant for the AVX
implementation but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-5-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: Add ZMM_OFFSET macro  (Paul Brook)

Add a convenience macro to get the address of an xmm_regs element
within CPUX86State.

This was originally going to be the basis of an implementation that
broke operations into 128-bit chunks. I scrapped that idea, so this is
now a purely cosmetic change. But I think a worthwhile one - it reduces
the number of function calls that need to be split over multiple lines.

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-9-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
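The macro itself is presumably a one-liner along these lines (a sketch;
CPUX86State and its xmm_regs array come from target/i386/cpu.h):

    #include <stddef.h>   /* offsetof */

    /* Offset of xmm_regs[reg] within CPUX86State, for building
     * env-relative addresses in the translator. */
    #define ZMM_OFFSET(reg) offsetof(CPUX86State, xmm_regs[reg])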
2022-09-01  target/i386: formatting fixes  (Paolo Bonzini)

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: do not use MOVL to move data between SSE registers  (Paolo Bonzini)

Write down explicitly the load/store sequence.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  target/i386: DPPS rounding fix  (Paolo Bonzini)

The DPPS (Dot Product) instruction is defined to first sum pairs of
intermediate results, then sum those values to get the final result,
i.e. (A+B)+(C+D). We incrementally sum the results, i.e. ((A+B)+C)+D,
which can result in incorrect rounding.

For consistency, also change the variable names to the ones used in the
Intel SDM and implement DPPD following the manual.

Based on a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
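A standalone sketch of the corrected summation order, with plain C
floats standing in for QEMU's softfloat calls (an assumption; the real
helper uses float32_mul/float32_add under the guest rounding mode):

    #include <stdint.h>

    /* DPPS on one 128-bit lane: imm8[7:4] selects which products
     * participate, imm8[3:0] selects which lanes receive the sum. */
    void dpps(float d[4], const float s[4], uint8_t imm8)
    {
        float prod[4], sum;
        int i;

        for (i = 0; i < 4; i++) {
            prod[i] = (imm8 & (0x10 << i)) ? d[i] * s[i] : 0.0f;
        }
        /* pairs first, then the pair sums: (A+B)+(C+D) */
        sum = (prod[0] + prod[1]) + (prod[2] + prod[3]);
        for (i = 0; i < 4; i++) {
            d[i] = (imm8 & (1 << i)) ? sum : 0.0f;
        }
    }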
2022-09-01  target/i386: fix PHSUB* instructions with dest=src  (Paolo Bonzini)

The computation must overwrite neither the destination nor the source
before the last element has been computed.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
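Concretely, a PHSUBW along these lines stays correct even when d == s,
because every input is read into temporaries before anything is written
back (a standalone sketch with simplified types):

    #include <stdint.h>

    typedef union {
        int16_t W[8];
    } Reg;

    void phsubw(Reg *d, const Reg *s)
    {
        int16_t r[8];
        int i;

        /* result lanes 0..3 come from d, lanes 4..7 from s */
        for (i = 0; i < 4; i++) {
            r[i]     = (int16_t)(d->W[2 * i] - d->W[2 * i + 1]);
            r[i + 4] = (int16_t)(s->W[2 * i] - s->W[2 * i + 1]);
        }
        for (i = 0; i < 8; i++) {
            d->W[i] = r[i];
        }
    }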
2022-09-01  i386: do kvm_put_msr_feature_control() first thing when vCPU is reset  (Vitaly Kuznetsov)

kvm_put_sregs2() fails to reset 'locked' CR4/CR0 bits upon vCPU reset
when it is in VMX root operation. Do kvm_put_msr_feature_control()
before kvm_put_sregs2() to (possibly) kick vCPU out of VMX root
operation. It also seems logical to do kvm_put_msr_feature_control()
before kvm_put_nested_state() and not after it, especially when 'real'
nested state is set.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01  i386: reset KVM nested state upon CPU reset  (Vitaly Kuznetsov)

Make sure env->nested_state is cleaned up when a vCPU is reset; it may
be stale after an incoming migration, and kvm_arch_put_registers() may
end up failing or putting the vCPU in a weird state.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-08-05  target/i386: display deprecation status in '-cpu help'  (Daniel P. Berrangé)

When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2022-08-01  misc: fix commonly doubled up words  (Daniel P. Berrangé)

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220707163720.1421716-5-berrange@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-07-13  hvf: Enable RDTSCP support  (Cameron Esfahani)

Pass through RDPID and RDTSCP support in CPUID if the host supports it.
Correctly detect if CPU_BASED_TSC_OFFSET and CPU_BASED2_RDTSCP would be
supported in primary and secondary processor-based VM-execution
controls. Enable RDTSCP in secondary processor controls if RDTSCP
support is indicated in CPUID.

Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <20220214185605.28087-7-f4bug@amsat.org>
Tested-by: Silvio Moioli <moio@suse.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1011
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2022-06-08  Fix 'writeable' typos  (Peter Maydell)

We have about 30 instances of the typo/variant spelling 'writeable',
and over 500 of the more common 'writable'. Standardize on the latter.

Change produced with:

    sed -i -e 's/\([Ww][Rr][Ii][Tt]\)[Ee]\([Aa][Bb][Ll][Ee]\)/\1\2/g' $(git grep -il writeable)

and then hand-undoing the instance in linux-headers/linux/kvm.h.

Most of these changes are in comments or documentation; the exceptions are:
* a local variable in accel/hvf/hvf-accel-ops.c
* a local variable in accel/kvm/kvm-all.c
* the PMCR_WRITABLE_MASK macro in target/arm/internals.h
* the EPT_VIOLATION_GPA_WRITABLE macro in target/i386/hvf/vmcs.h
  (which is never used anywhere)
* the AR_TYPE_WRITABLE_MASK macro in target/i386/hvf/vmx.h
  (which is never used anywhere)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20220505095015.2714666-1-peter.maydell@linaro.org
2022-06-06  x86: cpu: fixup number of addressable IDs for logical processors sharing cache  (Igor Mammedov)

When QEMU is started with '-cpu host,host-cache-info=on', it will pass
through the host's number of logical processors sharing cache and
number of processor cores in the physical package. QEMU already fixes
up the latter to correctly reflect the number of configured cores for
the VM; however, the number of logical processors sharing cache still
comes from the host CPU, which confuses a guest started with:

    -machine q35,accel=kvm \
    -cpu host,host-cache-info=on,l3-cache=off \
    -smp 20,sockets=2,dies=1,cores=10,threads=1 \
    -numa node,nodeid=0,memdev=ram-node0 \
    -numa node,nodeid=1,memdev=ram-node1 \
    -numa cpu,socket-id=0,node-id=0 \
    -numa cpu,socket-id=1,node-id=1

on a 2-socket Xeon 4210R host with 10 cores per socket, with CPUID[04H]:

    ...
    --- cache 3 ---
    cache type                         = unified cache (3)
    cache level                        = 0x3 (3)
    self-initializing cache level      = true
    fully associative cache            = false
    maximum IDs for CPUs sharing cache = 0x1f (31)
    maximum IDs for cores in pkg       = 0xf (15)
    ...

That doesn't match the number of logical processors the VM was
configured with, and as a result a RHEL 9.0 guest complains:

    sched: CPU #10's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
    WARNING: CPU: 10 PID: 0 at arch/x86/kernel/smpboot.c:421 topology_sane.isra.0+0x67/0x80
    ...
    Call Trace:
     set_cpu_sibling_map+0x176/0x590
     start_secondary+0x5b/0x150
     secondary_startup_64_no_verify+0xc2/0xcb

Fix it by capping the max number of logical processors to vcpus/socket
as it was configured, which fixes the issue.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2088311
Message-Id: <20220524151020.2541698-3-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
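A sketch of the cap (assuming CPUID leaf 4 encodes "logical processors
sharing this cache" minus one in EAX[25:14]; the helper name is
illustrative, not the actual QEMU function):

    #include <stdint.h>

    uint32_t cap_sharing_ids(uint32_t eax, uint32_t vcpus_per_socket)
    {
        uint32_t sharing = ((eax >> 14) & 0xfff) + 1;

        if (sharing > vcpus_per_socket) {
            eax &= ~(0xfffu << 14);
            eax |= (vcpus_per_socket - 1) << 14;
        }
        return eax;
    }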
2022-06-06  x86: cpu: make sure number of addressable IDs for processor cores meets the spec  (Igor Mammedov)

According to Intel's CPUID[EAX=04H], the resulting bits 31..26 in EAX
should be:

> The nearest power-of-2 integer that is not smaller than
> (1 + EAX[31:26]) is the number of unique Core_IDs reserved for
> addressing different processor cores in a physical package. Core ID
> is a subset of bits of the initial APIC ID.

Ensure that the values stored in EAX[31:26] always meet this condition.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220524151020.2541698-2-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
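A compilable sketch of one encoding that satisfies the quoted rule
(pow2ceil is defined locally here for self-containment; QEMU has its
own in qemu/host-utils.h, and the function name is illustrative):

    #include <stdint.h>

    static uint32_t pow2ceil(uint32_t v)
    {
        uint32_t p = 1;

        while (p < v) {
            p <<= 1;
        }
        return p;
    }

    /* Store pow2ceil(cores) - 1, so the nearest power of two not
     * smaller than (1 + EAX[31:26]) equals the Core_ID space. */
    uint32_t encode_core_ids(uint32_t eax, uint32_t cores_per_package)
    {
        eax &= ~(0x3fu << 26);
        eax |= (pow2ceil(cores_per_package) - 1) << 26;
        return eax;
    }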
2022-06-06  target/i386: Fix wrong count setting  (Yang Zhong)

The previous patch used the wrong count setting as the index value, and
so read the wrong value from CPUID(EAX=12H,ECX=0):EAX. As a result the
SGX1 instruction set can't be exposed to the VM and the SGX device
can't work in the VM.

Fixes: d19d6ffa0710 ("target/i386: introduce helper to access supported CPUID")
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220530131834.1222801-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-06-06  target/i386/tcg: Fix masking of real-mode addresses with A20 bit  (Stephen Michael Jothen)

The correct A20 masking is done if paging is enabled (protected mode)
but it seems to have been forgotten in real mode. For example, from the
AMD64 APM Vol. 2, section 1.2.4:

> If the sum of the segment base and effective address carries over
> into bit 20, that bit can be optionally truncated to mimic the 20-bit
> address wrapping of the 8086 processor by using the A20M# input
> signal to mask the A20 address bit.

Most BIOSes will enable the A20 line on boot, but I found that after
disabling the A20 line, the correct wrapping wasn't taking place.

`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be
the culprit. In real mode, it fills the TLB with the raw unmasked
address. However, for protected mode, the `mmu_translate' function does
the correct A20 masking. The fix then should be to just apply the A20
mask in the first branch of the if statement.

Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
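A standalone illustration of the wrapping behaviour being restored (the
mask value follows from the A20M# description above; QEMU keeps an
equivalent a20_mask in its CPU state):

    #include <stdint.h>
    #include <assert.h>

    /* With A20 disabled, bit 20 is forced to zero, so 0xFFFF:0x0010
     * (linear 0x100000) wraps to 0 as on an 8086. */
    static uint32_t a20_apply(uint32_t linear, int a20_enabled)
    {
        uint32_t mask = a20_enabled ? 0xffffffffu : ~(1u << 20);
        return linear & mask;
    }

    int main(void)
    {
        assert(a20_apply(0x100000, 0) == 0x000000);
        assert(a20_apply(0x100000, 1) == 0x100000);
        return 0;
    }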
2022-05-25  i386: Hyper-V Direct TLB flush hypercall  (Vitaly Kuznetsov)

Hyper-V TLFS allows for L0 and L1 hypervisors to collaborate on L2's
TLB flush hypercalls handling. With the correct setup, L2's TLB flush
hypercalls can be handled by L0 directly, without the need to exit to L1.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V Support extended GVA ranges for TLB flush hypercalls  (Vitaly Kuznetsov)

KVM has in effect supported "extended GVA ranges" (up to 4095
additional GFNs per hypercall) since the implementation of the Hyper-V
PV TLB flush feature (Linux 4.18), because regardless of the request, a
full TLB flush was always performed. The "Extended GVA ranges for TLB
flush hypercalls" feature bit wasn't exposed then. Now, as KVM gains
support for fine-grained TLB flush handling, exposing this feature
starts making sense.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V XMM fast hypercall input feature  (Vitaly Kuznetsov)

The Hyper-V specification allows passing parameters for certain
hypercalls using XMM registers ("XMM Fast Hypercall Input"). When the
feature is in use, it allows for faster hypercall processing as KVM can
avoid reading the guest's memory. KVM has supported the feature since
v5.14.

Rename HV_HYPERCALL_{PARAMS_XMM_AVAILABLE -> XMM_INPUT_AVAILABLE} to
comply with KVM.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V Enlightened MSR bitmap feature  (Vitaly Kuznetsov)

The newly introduced enlightenment allows L0 (KVM) and L1 (Hyper-V)
hypervisors to collaborate to avoid unnecessary updates to the L2
MSR-Bitmap upon vmexits.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Use hv_build_cpuid_leaf() for HV_CPUID_NESTED_FEATURES  (Vitaly Kuznetsov)

Previously, the HV_CPUID_NESTED_FEATURES.EAX CPUID leaf was handled
differently as it was only used to encode the supported eVMCS version
range. In fact, there are also feature (e.g. Enlightened MSR-Bitmap)
bits there. In preparation to adding these features, move
HV_CPUID_NESTED_FEATURES leaf handling to hv_build_cpuid_leaf() and
drop the now-unneeded 'hyperv_nested'.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  target/i386/kvm: Fix disabling MPX on "-cpu host" with MPX-capable host  (Maciej S. Szmigiero)

Since KVM commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX
controls when guest MPX disabled") it is not possible to disable MPX on
a "-cpu host" just by adding "-mpx" there if the host CPU does indeed
support MPX. QEMU will fail to set MSR_IA32_VMX_TRUE_{EXIT,ENTRY}_CTLS
MSRs in this case and so trigger an assertion failure.

Instead, besides "-mpx" one has to explicitly add also
"-vmx-exit-clear-bndcfgs" and "-vmx-entry-load-bndcfgs" to the QEMU
command line to make it work, which is a bit convoluted.

Make the MPX-related bits in FEAT_VMX_{EXIT,ENTRY}_CTLS dependent on
MPX being actually enabled so such workarounds are no longer necessary.

Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <51aa2125c76363204cc23c27165e778097c33f0b.1653323077.git.maciej.szmigiero@oracle.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
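QEMU's CPU model code expresses this kind of constraint as a feature
dependency; a sketch of what such entries plausibly look like (the
FeatureDep structure and constant names are assumed from target/i386
conventions, not quoted from the patch):

    /* If MPX is cleared, clear the dependent VMX control bits too,
     * instead of tripping the MSR-setting assertion later. */
    static FeatureDep feature_dependencies[] = {
        { .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
          .to   = { FEAT_VMX_EXIT_CTLS,  VMX_VM_EXIT_CLEAR_BNDCFGS } },
        { .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
          .to   = { FEAT_VMX_ENTRY_CTLS, VMX_VM_ENTRY_LOAD_BNDCFGS } },
    };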
2022-05-23  target/i386: Remove LBREn bit check when access Arch LBR MSRs  (Yang Weijiang)

Live migration can happen when the Arch LBR LBREn bit is cleared, e.g.,
when migration happens after the guest entered SMM mode. In this case,
we still need to migrate the Arch LBR MSRs.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517155024.33270-1-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-16  Merge tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging  (Richard Henderson)

virtio,pc,pci: fixes, cleanups, features

most of CXL support
fixes, cleanups all over the place

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

* tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (86 commits)
  vhost-user-scsi: avoid unlink(NULL) with fd passing
  virtio-net: don't handle mq request in userspace handler for vhost-vdpa
  vhost-vdpa: change name and polarity for vhost_vdpa_one_time_request()
  vhost-vdpa: backend feature should set only once
  vhost-net: fix improper cleanup in vhost_net_start
  vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
  virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
  hw/i386/amd_iommu: Fix IOMMU event log encoding errors
  hw/i386: Make pic a property of common x86 base machine type
  hw/i386: Make pit a property of common x86 base machine type
  include/hw/pci/pcie_host: Correct PCIE_MMCFG_SIZE_MAX
  include/hw/pci/pcie_host: Correct PCIE_MMCFG_BUS_MASK
  docs/vhost-user: Clarifications for VHOST_USER_ADD/REM_MEM_REG
  vhost-user: more master/slave things
  virtio: add vhost support for virtio devices
  virtio: drop name parameter for virtio_init()
  virtio/vhost-user: dynamically assign VhostUserHostNotifiers
  hw/virtio/vhost-user: don't suppress F_CONFIG when supported
  include/hw: start documenting the vhost API
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-05-16  target/i386: Fix sanity check on max APIC ID / X2APIC enablement  (David Woodhouse)

The check on x86ms->apic_id_limit in pc_machine_done() had two problems.

Firstly, we need KVM to support the X2APIC API in order to allow IRQ
delivery to APICs >= 255. So we need to call/check kvm_enable_x2apic(),
which was done elsewhere in *some* cases but not all.

Secondly, microvm needs the same check. So move it from
pc_machine_done() to x86_cpus_init() where it will work for both.

The check in kvm_cpu_instance_init() is now redundant and can be dropped.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20220314142544.150555-1-dwmw2@infradead.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-05-14  target/i386: Support Arch LBR in CPUID enumeration  (Yang Weijiang)

If CPUID.(EAX=07H, ECX=0):EDX[19] is set to 1, the processor supports
Architectural LBRs. In this case, CPUID leaf 01CH indicates details of
the Architectural LBRs capabilities. XSAVE support for Architectural
LBRs is enumerated in CPUID.(EAX=0DH, ECX=0FH).

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-9-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: introduce helper to access supported CPUID  (Paolo Bonzini)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Enable Arch LBR migration states in vmstate  (Yang Weijiang)

The Arch LBR record MSRs and control MSRs will be migrated to the
destination guest if the vCPUs were running with Arch LBR active.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-8-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Add MSR access interface for Arch LBR  (Yang Weijiang)

In the first generation of Arch LBR, the maximum supported Arch LBR
depth is 32; both host and guest use this value to set the depth MSR.
This simplifies the implementation of the patch, given the side effect
of a host/guest depth MSR mismatch: XRSTORS will reset all recording
MSRs to 0s if the saved depth mismatches MSR_ARCH_LBR_DEPTH.

In most cases Arch LBR is not in active status, so check the control
bit before saving/restoring the big chunk of Arch LBR MSRs.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-7-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Add XSAVES support for Arch LBR  (Yang Weijiang)

Define the Arch LBR bit in XSS and the save/restore structure for XSAVE
area size calculation.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-6-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Enable support for XSAVES based features  (Yang Weijiang)

There are some new features, including Arch LBR, that depend on
XSAVES/XRSTORS support; the new instructions will save/restore data
based on the feature bits enabled in XCR0 | XSS. This patch adds the
basic support for the related CPUID enumeration and meanwhile changes
the name from FEAT_XSAVE_COMP_{LO|HI} to FEAT_XSAVE_XCR0_{LO|HI} to
clearly differentiate the feature bits in XCR0 from those in XSS.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-5-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Add kvm_get_one_msr helper  (Yang Weijiang)

When trying to get one MSR from KVM, I found there's no such existing
interface, while kvm_put_one_msr() is there. So here comes the patch.
It removes redundant preparation code before finally calling the
KVM_GET_MSRS ioctl.

No functional change intended.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-4-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
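A sketch of what such a helper plausibly looks like (kvm_vcpu_ioctl(),
X86CPU and CPU() are QEMU internals assumed here; the one-entry buffer
layout follows the KVM_GET_MSRS ABI from linux/kvm.h):

    static int kvm_get_one_msr(X86CPU *cpu, int index, uint64_t *value)
    {
        struct {
            struct kvm_msrs info;
            struct kvm_msr_entry entries[1];
        } msr_data = {
            .info.nmsrs = 1,
            .entries[0].index = index,
        };
        int ret;

        ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data);
        if (ret < 0) {
            return ret;
        }
        /* on success KVM reports the number of MSRs read */
        *value = msr_data.entries[0].data;
        return ret;
    }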
2022-05-14  target/i386: Add lbr-fmt vPMU option to support guest LBR  (Yang Weijiang)

The Last Branch Recording (LBR) feature is a performance monitoring
unit (PMU) feature on Intel processors which records a running trace of
the most recent branches taken by the processor in the LBR stack. This
option indicates the LBR format to enable for guest perf.

The LBR feature is enabled if the conditions below are met:
1) KVM is enabled and the PMU is enabled.
2) The msr-based feature IA32_PERF_CAPABILITIES is supported on KVM.
3) The lbr_fmt value returned from the above MSR is non-zero.
4) The guest vcpu model does support FEAT_1_ECX.CPUID_EXT_PDCM.
5) The user-provided lbr-fmt value doesn't violate its bitmask (0x3f).
6) The target guest LBR format matches that of the host.

Co-developed-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-3-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
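For instance, a hypothetical invocation enabling the option (lbr-fmt is
the property this patch adds; the value shown is arbitrary and must
satisfy conditions 5 and 6 above, i.e. fit the 0x3f mask and match the
host's format):

    qemu-system-x86_64 -accel kvm -cpu host,pmu=on,lbr-fmt=0x5 ...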
2022-05-14  i386/cpu: Remove the deprecated cpu model 'Icelake-Client'  (Robert Hoo)

Icelake is the codename for Intel's 3rd generation Xeon Scalable server
processors. There were never client variants, so this "Icelake-Client"
CPU model was added in error. It has been deprecated since v5.2; now
it's time to remove it completely from the code.

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1647247859-4947-1-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  WHPX: fixed TPR/CR8 translation issues affecting VM debugging  (Ivan Shcherbakov)

This patch fixes the following error that would occur when trying to
resume a WHPX-accelerated VM from a breakpoint:

    qemu: WHPX: Failed to set interrupt state registers, hr=c0350005

The error arises from an incorrect CR8 value being passed to
WHvSetVirtualProcessorRegisters() that doesn't match the value set via
WHvSetVirtualProcessorInterruptControllerState2().

Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>