path: root/target/i386/cpu.h
Age  Commit message  Author
2022-10-20  target/i386: implement F16C instructions  (Paolo Bonzini)
F16C only consists of two instructions, which are nevertheless a bit peculiar. First, they access only the low half of a YMM or XMM register for the packed-half operand; the exact size still depends on the VEX.L flag. This is similar to the existing avx_movx flag, but not identical, because avx_movx is hardcoded to affect operand 2. To this end I added a "ph" format name; it's possible to reuse this approach for the VPMOVSX and VPMOVZX instructions, though that would also require adding two more formats for the low-quarter and low-eighth of an operand. Second, VCVTPS2PH is somewhat weird because it *stores* the result of the instruction into memory rather than loading it. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
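For illustration only, a standalone sketch of the per-element half-to-single conversion that VCVTPH2PS performs; QEMU's actual implementation goes through softfloat and honours MXCSR rounding and exception behaviour, which this sketch ignores.

    #include <stdint.h>
    #include <string.h>

    /* Convert one IEEE binary16 value to binary32 (the widening is exact). */
    static float half_to_float(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h >> 15) << 31;
        uint32_t exp  = (h >> 10) & 0x1f;
        uint32_t frac = h & 0x3ff;
        uint32_t bits;

        if (exp == 0x1f) {                    /* infinity or NaN */
            bits = sign | 0x7f800000u | (frac << 13);
        } else if (exp == 0) {
            if (frac == 0) {                  /* signed zero */
                bits = sign;
            } else {                          /* subnormal: renormalize */
                exp = 127 - 15 + 1;
                while (!(frac & 0x400)) {
                    frac <<= 1;
                    exp--;
                }
                frac &= 0x3ff;
                bits = sign | (exp << 23) | (frac << 13);
            }
        } else {                              /* normal number */
            bits = sign | ((exp - 15 + 127) << 23) | (frac << 13);
        }

        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }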
2022-10-18  target/i386: add AVX_EN hflag  (Paul Brook)
Add a new hflag bit to determine whether AVX instructions are allowed. Signed-off-by: Paul Brook <paul@nowt.org> Message-Id: <20220424220204.2493824-4-paul@nowt.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Define XMMReg and access macros, align ZMM registers  (Richard Henderson)
This will be used for emission and endian adjustments of gvec operations. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220822223722.1697758-2-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use MMU_NESTED_IDX for vmload/vmsave  (Richard Henderson)
Use MMU_NESTED_IDX for each memory access, rather than just a single translation to physical. Adjust svm_save_seg and svm_load_seg to pass in mmu_idx. This removes the last use of get_hphys so remove it. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-7-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX  (Richard Henderson)
These new mmu indexes will be helpful for improving paging and code throughout the target. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-6-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  hyperv: fix SynIC SINT assertion failure on guest reset  (Maciej S. Szmigiero)
Resetting a guest that has Hyper-V VMBus support enabled triggers a QEMU assertion failure: hw/hyperv/hyperv.c:131: synic_reset: Assertion `QLIST_EMPTY(&synic->sint_routes)' failed. This happens both on a normal guest reboot and when using the "system_reset" HMP command. The failing assertion was introduced by commit 64ddecc88bcf ("hyperv: SControl is optional to enable SynIc") to catch dangling SINT routes on SynIC reset. The root cause of this problem is that the SynIC itself is reset before devices using SINT routes have a chance to clean up these routes. Since there seems to be no existing mechanism to force reset callbacks (or methods) to be executed in a specific order, let's use a method similar to the one already used to reset another interrupt controller (APIC) after devices have been reset - by invoking the SynIC reset from the machine reset handler via a new x86_cpu_after_reset() function co-located with the existing x86_cpu_reset() in target/i386/cpu.c. Opportunistically move the APIC reset handler there, too. Fixes: 64ddecc88bcf ("hyperv: SControl is optional to enable SynIc") # exposed the bug Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <cb57cee2e29b20d06f81dce054cbcea8b5d497e8.1664552976.git.maciej.szmigiero@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-13  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)
* scsi-disk: support setting CD-ROM block size via device options
* target/i386: Implement MSR_CORE_THREAD_COUNT MSR
* target/i386: notify VM exit support
* target/i386: PC-relative translation block support
* target/i386: support for XSAVE state in signal frames (linux-user)

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (37 commits)
  linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate
  linux-user: i386/signal: support FXSAVE fpstate on 32-bit emulation
  linux-user: i386/signal: move fpstate at the end of the 32-bit frames
  KVM: x86: Implement MSR_CORE_THREAD_COUNT MSR
  i386: kvm: Add support for MSR filtering
  x86: Implement MSR_CORE_THREAD_COUNT MSR
  target/i386: Enable TARGET_TB_PCREL
  target/i386: Inline gen_jmp_im
  target/i386: Add cpu_eip
  target/i386: Create eip_cur_tl
  target/i386: Merge gen_jmp_tb and gen_goto_tb into gen_jmp_rel
  target/i386: Remove MemOp argument to gen_op_j*_ecx
  target/i386: Use gen_jmp_rel for DISAS_TOO_MANY
  target/i386: Use gen_jmp_rel for gen_jcc
  target/i386: Use gen_jmp_rel for loop, repz, jecxz insns
  target/i386: Create gen_jmp_rel
  target/i386: Use DISAS_TOO_MANY to exit after gen_io_start
  target/i386: Create eip_next_*
  target/i386: Truncate values for lcall_real to i32
  target/i386: Introduce DISAS_JUMP
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-11  linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate  (Paolo Bonzini)
Add support for saving/restoring extended save states when signals are delivered. This allows using AVX, MPX or PKRU registers in signal handlers. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-10  i386: kvm: extend kvm_{get, put}_vcpu_events to support pending triple fault  (Chenyi Qiang)
KVM never loses direct triple faults, i.e. those detected by hardware and morphed into a VM-Exit by KVM. But for triple faults synthesized by KVM, e.g. on the RSM path, if KVM exits to userspace before the request is serviced, userspace could migrate the VM and lose the triple fault. A new flag KVM_VCPUEVENT_VALID_TRIPLE_FAULT is defined to signal that the event.triple_fault_pending field contains a valid state if the KVM_CAP_X86_TRIPLE_FAULT_EVENT capability is enabled. Acked-by: Peter Xu <peterx@redhat.com> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com> Message-Id: <20220929072014.20705-2-chenyi.qiang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-06  dump: Replace opaque DumpState pointer with a typed one  (Janosch Frank)
It's always better to convey the type of a pointer if at all possible. So let's add the DumpState typedef to typedefs.h and move the dump note functions from the opaque pointers to DumpState pointers. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> CC: Peter Maydell <peter.maydell@linaro.org> CC: Cédric Le Goater <clg@kaod.org> CC: Daniel Henrique Barboza <danielhb413@gmail.com> CC: David Gibson <david@gibson.dropbear.id.au> CC: Greg Kurz <groug@kaod.org> CC: Palmer Dabbelt <palmer@dabbelt.com> CC: Alistair Francis <alistair.francis@wdc.com> CC: Bin Meng <bin.meng@windriver.com> CC: Cornelia Huck <cohuck@redhat.com> CC: Thomas Huth <thuth@redhat.com> CC: Richard Henderson <richard.henderson@linaro.org> CC: David Hildenbrand <david@redhat.com> Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
2022-05-25  i386: Hyper-V Direct TLB flush hypercall  (Vitaly Kuznetsov)
Hyper-V TLFS allows for L0 and L1 hypervisors to collaborate on L2's TLB flush hypercalls handling. With the correct setup, L2's TLB flush hypercalls can be handled by L0 directly, without the need to exit to L1. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20220525115949.1294004-6-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V Support extended GVA ranges for TLB flush hypercalls  (Vitaly Kuznetsov)
KVM has, in a sense, supported "extended GVA ranges" (up to 4095 additional GFNs per hypercall) since the implementation of the Hyper-V PV TLB flush feature (Linux-4.18), because regardless of the request a full TLB flush was always performed. The "Extended GVA ranges for TLB flush hypercalls" feature bit wasn't exposed then. Now, as KVM gains support for fine-grained TLB flush handling, exposing this feature starts making sense. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20220525115949.1294004-5-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V XMM fast hypercall input feature  (Vitaly Kuznetsov)
The Hyper-V specification allows passing parameters for certain hypercalls using XMM registers ("XMM Fast Hypercall Input"). When the feature is in use, it allows for faster hypercall processing as KVM can avoid reading the guest's memory. KVM supports the feature since v5.14. Rename HV_HYPERCALL_{PARAMS_XMM_AVAILABLE -> XMM_INPUT_AVAILABLE} to comply with KVM. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20220525115949.1294004-4-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Hyper-V Enlightened MSR bitmap feature  (Vitaly Kuznetsov)
The newly introduced enlightenment allows the L0 (KVM) and L1 (Hyper-V) hypervisors to collaborate to avoid unnecessary updates to the L2 MSR-Bitmap upon vmexits. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20220525115949.1294004-3-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-25  i386: Use hv_build_cpuid_leaf() for HV_CPUID_NESTED_FEATURES  (Vitaly Kuznetsov)
Previously, HV_CPUID_NESTED_FEATURES.EAX CPUID leaf was handled differently as it was only used to encode the supported eVMCS version range. In fact, there are also feature (e.g. Enlightened MSR-Bitmap) bits there. In preparation to adding these features, move HV_CPUID_NESTED_FEATURES leaf handling to hv_build_cpuid_leaf() and drop now-unneeded 'hyperv_nested'. No functional change intended. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20220525115949.1294004-2-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Add MSR access interface for Arch LBR  (Yang Weijiang)
In the first generation of Arch LBR, the maximum supported Arch LBR depth is 32; both host and guest use this value to set the depth MSR. This simplifies the implementation of the patch, given the side effect of a host/guest depth MSR mismatch: XRSTORS will reset all recording MSRs to 0s if the saved depth mismatches MSR_ARCH_LBR_DEPTH. In most cases Arch LBR is not active, so check the control bit before saving/restoring the big chunk of Arch LBR MSRs. Signed-off-by: Yang Weijiang <weijiang.yang@intel.com> Message-Id: <20220215195258.29149-7-weijiang.yang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Add XSAVES support for Arch LBR  (Yang Weijiang)
Define Arch LBR bit in XSS and save/restore structure for XSAVE area size calculation. Signed-off-by: Yang Weijiang <weijiang.yang@intel.com> Message-Id: <20220215195258.29149-6-weijiang.yang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-14  target/i386: Enable support for XSAVES based features  (Yang Weijiang)
There are some new features, including Arch LBR, that depend on XSAVES/XRSTORS support; the new instructions save/restore data based on the feature bits enabled in XCR0 | XSS. This patch adds the basic support for the related CPUID enumeration and meanwhile changes the name from FEAT_XSAVE_COMP_{LO|HI} to FEAT_XSAVE_XCR0_{LO|HI} to clearly differentiate the feature bits in XCR0 from those in XSS. Signed-off-by: Yang Weijiang <weijiang.yang@intel.com> Message-Id: <20220215195258.29149-5-weijiang.yang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
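A minimal sketch of the distinction behind the rename (illustrative names, not QEMU's code): XSAVE/XRSTOR operate on the components enabled in XCR0, while XSAVES/XRSTORS operate on XCR0 | XSS.

    #include <stdbool.h>
    #include <stdint.h>

    /* Component n is managed by XSAVE/XRSTOR if enabled in XCR0 ... */
    static bool xsave_manages(uint64_t xcr0, unsigned n)
    {
        return (xcr0 >> n) & 1;
    }

    /* ... and by XSAVES/XRSTORS if enabled in either XCR0 or XSS. */
    static bool xsaves_manages(uint64_t xcr0, uint64_t xss, unsigned n)
    {
        return ((xcr0 | xss) >> n) & 1;
    }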
2022-05-14  target/i386: Add lbr-fmt vPMU option to support guest LBR  (Yang Weijiang)
The Last Branch Recording (LBR) is a performance monitor unit (PMU) feature on Intel processors which records a running trace of the most recent branches taken by the processor in the LBR stack. This option indicates the LBR format to enable for guest perf. The LBR feature is enabled if the conditions below are met: 1) KVM is enabled and the PMU is enabled. 2) The msr-based feature IA32_PERF_CAPABILITIES is supported by KVM. 3) The lbr_fmt value returned by that MSR is non-zero. 4) The guest vcpu model supports FEAT_1_ECX.CPUID_EXT_PDCM. 5) The user-provided lbr-fmt value doesn't violate its bitmask (0x3f). 6) The target guest LBR format matches that of the host. Co-developed-by: Like Xu <like.xu@linux.intel.com> Signed-off-by: Like Xu <like.xu@linux.intel.com> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com> Message-Id: <20220215195258.29149-3-weijiang.yang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
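A hedged sketch of condition 5, the bitmask check on the user-provided lbr-fmt value; the 0x3f mask corresponds to the LBR_FMT field (bits 5:0) of IA32_PERF_CAPABILITIES, and the function name is illustrative rather than QEMU's exact code.

    #include <stdbool.h>
    #include <stdint.h>

    #define PERF_CAP_LBR_FMT_MASK 0x3fu   /* IA32_PERF_CAPABILITIES bits 5:0 */

    /* Reject any lbr-fmt value with bits set outside the LBR_FMT field. */
    static bool lbr_fmt_is_valid(uint64_t lbr_fmt)
    {
        return (lbr_fmt & ~(uint64_t)PERF_CAP_LBR_FMT_MASK) == 0;
    }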
2022-04-13  target/i386: Remove unused XMMReg, YMMReg types and CPUState fields  (Peter Maydell)
In commit b7711471f5 in 2014 we refactored the handling of the x86 vector registers so that instead of separate structs XMMReg, YMMReg and ZMMReg for representing the 16-byte, 32-byte and 64-byte width vector registers and multiple fields in the CPU state, we have a single type (XMMReg, later renamed to ZMMReg) and a single struct field (xmm_regs). However, in 2017 in commit c97d6d2cdf97ed some of the old struct types and CPU state fields got added back, when we merged in the hvf support (which had developed in a separate fork that had presumably not had the refactoring of b7711471f5), as part of code handling xsave. Commit f585195ec07 then almost immediately dropped that xsave code again in favour of sharing the xsave handling with KVM, but forgot to remove the now unused CPU state fields and struct types. Delete the unused types and CPUState fields. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20220412110047.1497190-1-peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06  hyperv: Add support to process syndbg commands  (Jon Doron)
SynDbg commands can come from two different flows: 1. Hypercalls; in this mode the data being sent consists of fully encapsulated network packets. 2. SynDbg-specific MSRs; in this mode only the data that needs to be transferred is passed. Signed-off-by: Jon Doron <arilou@gmail.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Message-Id: <20220216102500.692781-4-arilou@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06  Move CPU softfloat unions to cpu-float.h  (Marc-André Lureau)
The types are no longer used in bswap.h since commit f930224fffe ("bswap.h: Remove unused float-access functions"), there isn't much sense in keeping it there and having a dependency on fpu/. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20220323155743.1585078-29-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-04-06  Replace config-time define HOST_WORDS_BIGENDIAN  (Marc-André Lureau)
Replace a config-time define with a compile-time conditional define (compatible with clang and gcc) that must be declared prior to its usage. This avoids having a global configure-time define, but also prevents bad usage if the config header wasn't included before. This can help to make some code independent from qemu too. gcc supports __BYTE_ORDER__ from about 4.6 and clang from 3.2. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> [ For the s390x parts I'm involved in ] Acked-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220323155743.1585078-7-marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
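A minimal example of the compile-time check that replaces the old config-time define (assumes GCC >= 4.6 or Clang >= 3.2, as noted above).

    #include <stdio.h>

    int main(void)
    {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        puts("big-endian host");
    #else
        puts("little-endian host");
    #endif
        return 0;
    }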
2022-03-24  target/i386: properly reset TSC on reset  (Paolo Bonzini)
Some versions of Windows hang on reboot if their TSC value is greater than 2^54. The calibration of the Hyper-V reference time overflows and fails; as a result the processors' clock sources are out of sync. The issue is that the TSC _should_ be reset to 0 on CPU reset and QEMU tries to do that. However, KVM special cases writing 0 to the TSC and thinks that QEMU is trying to hot-plug a CPU, which is correct the first time through but not later. Thwart this valiant effort and reset the TSC to 1 instead, but only if the CPU has been run once. For this to work, env->tsc has to be moved to the part of CPUArchState that is not zeroed at the beginning of x86_cpu_reset. Reported-by: Vadim Rozenfeld <vrozenfe@redhat.com> Supersedes: <20220324082346.72180-1-pbonzini@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-23  KVM: x86: workaround invalid CPUID[0xD,9] info on some AMD processors  (Paolo Bonzini)
Some AMD processors expose the PKRU extended save state even if they do not have the related PKU feature in CPUID. Worse, when they do they report a size of 64, whereas the expected size of the PKRU extended save state is 8, therefore the esa->size == eax assertion does not hold. The state is already ignored by KVM_GET_SUPPORTED_CPUID because it was not enabled in the host XCR0. However, QEMU kvm_cpu_xsave_init() runs before QEMU invokes arch_prctl() to enable dynamically-enabled save states such as XTILEDATA, and KVM_GET_SUPPORTED_CPUID hides save states that have yet to be enabled. Therefore, kvm_cpu_xsave_init() needs to consult the host CPUID instead of KVM_GET_SUPPORTED_CPUID, and dies with an assertion failure. When setting up the ExtSaveArea array to match the host, ignore features that KVM does not report as supported. This will cause QEMU to skip the incorrect CPUID leaf instead of tripping the assertion. Closes: https://gitlab.com/qemu-project/qemu/-/issues/916 Reported-by: Daniel P. Berrangé <berrange@redhat.com> Analyzed-by: Yang Zhong <yang.zhong@intel.com> Reported-by: Peter Krempa <pkrempa@redhat.com> Tested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-15  x86: Support XFD and AMX xsave data migration  (Zeng Guang)
XFD (eXtended Feature Disable) allows enabling a feature in the xsave state while preventing specific user threads from using the feature. Support saving and restoring the XFD MSRs if CPUID.D.1.EAX[4] enumerates them as valid. Likewise, migrate the MSRs and the related xsave state as necessary. Signed-off-by: Zeng Guang <guang.zeng@intel.com> Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20220217060434.52460-8-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-15  x86: add support for KVM_CAP_XSAVE2 and AMX state migration  (Jing Liu)
When dynamic xfeatures (e.g. AMX) are used by the guest, the xsave area can be larger than 4KB. KVM_GET_XSAVE2 and KVM_SET_XSAVE under KVM_CAP_XSAVE2 work with an xsave buffer larger than 4KB. Always use the new ioctls under KVM_CAP_XSAVE2 when KVM supports it. Signed-off-by: Jing Liu <jing2.liu@intel.com> Signed-off-by: Zeng Guang <guang.zeng@intel.com> Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20220217060434.52460-7-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
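A hedged sketch of the ioctl flow described above (error handling trimmed; vm_fd and vcpu_fd are assumed to be already-open KVM descriptors, and kernel headers new enough to define KVM_CAP_XSAVE2 are assumed).

    #include <linux/kvm.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    /* KVM_CHECK_EXTENSION(KVM_CAP_XSAVE2) returns the required buffer size
     * (at least 4096) when the capability is present, 0 otherwise. */
    static struct kvm_xsave *read_xsave_area(int vm_fd, int vcpu_fd, size_t *len)
    {
        int cap = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE2);
        struct kvm_xsave *buf;

        *len = cap > (int)sizeof(*buf) ? (size_t)cap : sizeof(*buf);
        buf = calloc(1, *len);
        if (!buf) {
            return NULL;
        }
        /* KVM_SET_XSAVE is reused for writing back the (possibly larger) buffer. */
        if (ioctl(vcpu_fd, cap > 0 ? KVM_GET_XSAVE2 : KVM_GET_XSAVE, buf) < 0) {
            free(buf);
            return NULL;
        }
        return buf;
    }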
2022-03-15  x86: Add XFD faulting bit for state components  (Jing Liu)
Intel introduces an XFD faulting mechanism for extended XSAVE features to dynamically enable the features at runtime. If CPUID (EAX=0Dh, ECX=n, n>1).ECX[2] is set to 1, it indicates support for XFD faulting of this state component. Signed-off-by: Jing Liu <jing2.liu@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20220217060434.52460-5-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
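A standalone host-side probe for the bit described above (GCC/Clang x86 builtins; shown for the XTILEDATA component, n = 18, purely as an example).

    #include <cpuid.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* CPUID.(EAX=0Dh, ECX=n):ECX bit 2 == "this component supports XFD". */
    static bool component_supports_xfd(unsigned n)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid_count(0x0d, n, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return (ecx & (1u << 2)) != 0;
    }

    int main(void)
    {
        /* XTILEDATA is state component 18. */
        printf("XTILEDATA supports XFD faulting: %d\n", component_supports_xfd(18));
        return 0;
    }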
2022-03-15  x86: Grant AMX permission for guest  (Yang Zhong)
The kernel allocates a 4K xstate buffer by default. For XSAVE features which require a large state component (e.g. AMX), the Linux kernel dynamically expands the xstate buffer only after the process has acquired the necessary permissions. These are called dynamically-enabled XSAVE features (or dynamic xfeatures). There are separate permissions for native tasks and guests. Qemu should request the guest permissions for dynamic xfeatures which will be exposed to the guest. This only needs to be done once before the first vcpu is created. KVM implements a new ARCH_GET_XCOMP_SUPP system attribute API to get the host-side supported_xcr0, so Qemu can decide whether it can request permission for dynamically enabled XSAVE features. https://lore.kernel.org/all/20220126152210.3044876-1-pbonzini@redhat.com/ Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Jing Liu <jing2.liu@intel.com> Message-Id: <20220217060434.52460-4-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
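A hedged sketch of the guest-permission request that accompanies ARCH_GET_XCOMP_SUPP on recent kernels; the constant value is taken from asm/prctl.h and is an assumption if your headers predate it (Linux x86-64 only, illustrative rather than QEMU's code).

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef ARCH_REQ_XCOMP_GUEST_PERM
    #define ARCH_REQ_XCOMP_GUEST_PERM 0x1025   /* assumed value; see asm/prctl.h */
    #endif
    #define XSTATE_XTILEDATA_BIT 18            /* AMX tile data component */

    int main(void)
    {
        /* Must run once, before the first vCPU is created. */
        if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_GUEST_PERM,
                    (unsigned long)XSTATE_XTILEDATA_BIT)) {
            perror("ARCH_REQ_XCOMP_GUEST_PERM");
            return 1;
        }
        puts("guest AMX permission granted");
        return 0;
    }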
2022-03-15  x86: Add AMX XTILECFG and XTILEDATA components  (Jing Liu)
The AMX TILECFG register and the TMMx tile data registers are saved/restored via XSAVE, respectively in state component 17 (64 bytes) and state component 18 (8192 bytes). Add AMX feature bits to x86_ext_save_areas array to set up AMX components. Add structs that define the layout of AMX XSAVE areas and use QEMU_BUILD_BUG_ON to validate the structs sizes. Signed-off-by: Jing Liu <jing2.liu@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20220217060434.52460-3-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
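A layout sketch of the two save areas with the sizes given above; the struct names are illustrative rather than QEMU's exact ones.

    #include <assert.h>
    #include <stdint.h>

    typedef struct XSaveXTILECFG {
        uint8_t xtilecfg[64];           /* state component 17: TILECFG */
    } XSaveXTILECFG;

    typedef struct XSaveXTILEDATA {
        uint8_t xtiledata[8][1024];     /* state component 18: TMM0..TMM7 */
    } XSaveXTILEDATA;

    static_assert(sizeof(XSaveXTILECFG) == 64, "TILECFG area must be 64 bytes");
    static_assert(sizeof(XSaveXTILEDATA) == 8192, "TILEDATA area must be 8192 bytes");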
2022-03-15  x86: Fix the 64-byte boundary enumeration for extended state  (Jing Liu)
The extended state subleaves (EAX=0Dh, ECX=n, n>1).ECX[1] indicate whether the extended state component is located on the next 64-byte boundary following the preceding state component when the compacted format of an XSAVE area is used. Right now, they are all zero because no supported component needed the bit to be set, but the upcoming AMX feature will use it. Fix the subleaves' value according to KVM's supported cpuid. Signed-off-by: Jing Liu <jing2.liu@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20220217060434.52460-2-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
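A standalone host-side enumeration showing the subleaf fields involved, including the ECX[1] alignment bit (GCC/Clang x86 builtins, for illustration only).

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx, n;

        for (n = 2; n <= 18; n++) {
            if (!__get_cpuid_count(0x0d, n, &eax, &ebx, &ecx, &edx) || !eax) {
                continue;   /* component not supported on this host */
            }
            printf("component %2u: size %5u offset %5u 64-byte aligned %u\n",
                   n, eax, ebx, (ecx >> 1) & 1);
        }
        return 0;
    }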
2022-03-06  target: Use ArchCPU as interface to target CPU  (Philippe Mathieu-Daudé)
ArchCPU is our interface with target-specific code. Use it as a forward-declared opaque pointer (abstract type), having its structure defined by each target. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20220214183144.27402-15-f4bug@amsat.org>
2022-03-06  target: Introduce and use OBJECT_DECLARE_CPU_TYPE() macro  (Philippe Mathieu-Daudé)
Replace the boilerplate code to declare CPU QOM types and macros, and forward-declare the CPU instance type. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20220214183144.27402-14-f4bug@amsat.org>
2022-03-06  target: Use CPUArchState as interface to target-specific CPU state  (Philippe Mathieu-Daudé)
While CPUState is our interface with generic code, CPUArchState is our interface with target-specific code. Use CPUArchState as an abstract type, defined by each target. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20220214183144.27402-13-f4bug@amsat.org>
2022-02-16  target/i386: add TCG support for UMIP  (Gareth Webb)
Signed-off-by: Gareth Webb <gareth.webb@umbralsoftware.co.uk> Message-Id: <164425598317.21902.4257759159329756142-1@git.sr.ht> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-01-12  KVM: use KVM_{GET|SET}_SREGS2 when supported.  (Maxim Levitsky)
This allows making the PDPTRs part of the migration stream and thus not reloading them after migration, which would be against the X86 spec. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Message-Id: <20211101132300.192584-2-mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-11-02  KVM: SVM: add migration support for nested TSC scaling  (Maxim Levitsky)
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Message-Id: <20211101132300.192584-4-mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-10-01  i386: Make Hyper-V version id configurable  (Vitaly Kuznetsov)
Currently, we hardcode the Hyper-V version id (CPUID 0x40000002) to WS2008R2, and it is known that certain tools in Windows check this. It seems useful to provide some flexibility by making it possible to change this info at will. The CPUID information is defined in TLFS as: EAX: build number; EBX: bits 31-16 major version, bits 15-0 minor version; ECX: service pack; EDX: bits 31-24 service branch, bits 23-0 service number. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210902093530.345756-8-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
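A standalone sketch of how the four registers encode the fields listed above; the example values are placeholders, not the defaults QEMU picks.

    #include <stdint.h>
    #include <stdio.h>

    struct hv_version_id {
        uint32_t eax;   /* build number */
        uint32_t ebx;   /* bits 31-16: major, bits 15-0: minor */
        uint32_t ecx;   /* service pack */
        uint32_t edx;   /* bits 31-24: service branch, bits 23-0: service number */
    };

    static struct hv_version_id hv_encode_version(uint32_t build, uint16_t major,
                                                  uint16_t minor, uint32_t sp,
                                                  uint8_t branch, uint32_t number)
    {
        struct hv_version_id v = {
            .eax = build,
            .ebx = ((uint32_t)major << 16) | minor,
            .ecx = sp,
            .edx = ((uint32_t)branch << 24) | (number & 0xffffffu),
        };
        return v;
    }

    int main(void)
    {
        /* Placeholder values for illustration only. */
        struct hv_version_id v = hv_encode_version(7601, 6, 1, 1, 0, 0);
        printf("eax=%#x ebx=%#x ecx=%#x edx=%#x\n", v.eax, v.ebx, v.ecx, v.edx);
        return 0;
    }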
2021-10-01  i386: Implement pseudo 'hv-avic' ('hv-apicv') enlightenment  (Vitaly Kuznetsov)
The enlightenment allows using Hyper-V SynIC with hardware APICv/AVIC enabled. Normally, Hyper-V SynIC disables these hardware features and suggests that the guest use the paravirtualized AutoEOI feature. Linux-4.15 gains support for conditional APICv/AVIC disablement: the feature stays on until the guest tries to use the AutoEOI feature with SynIC. With the 'HV_DEPRECATING_AEOI_RECOMMENDED' bit exposed, modern enough Windows/Hyper-V versions should follow the recommendation and not use the (unwanted) feature. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210902093530.345756-7-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-10-01  i386: Support KVM_CAP_HYPERV_ENFORCE_CPUID  (Vitaly Kuznetsov)
By default, KVM allows the guest to use all currently supported Hyper-V enlightenments when the Hyper-V CPUID interface is exposed, even if some features were not announced in guest-visible CPUIDs. The hv-enforce-cpuid feature alters this behavior and only allows the guest to use exposed Hyper-V enlightenments. The feature is supported by Linux >= 5.14 and is not enabled by default in QEMU. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210902093530.345756-5-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-10-01  i386: Support KVM_CAP_ENFORCE_PV_FEATURE_CPUID  (Vitaly Kuznetsov)
By default, KVM allows the guest to use all currently supported PV features even when they were not announced in guest visible CPUIDs. Introduce a new "kvm-pv-enforce-cpuid" flag to limit the supported feature set to the exposed features. The feature is supported by Linux >= 5.10 and is not enabled by default in QEMU. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210902093530.345756-4-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30  i386: Add get/set/migrate support for SGX_LEPUBKEYHASH MSRs  (Sean Christopherson)
On real hardware, on systems that support SGX Launch Control, those MSRs are initialized to the digest of Intel's signing key; on systems that don't support SGX Launch Control, those MSRs are not available but hardware always uses the digest of Intel's signing key in EINIT. KVM advertises SGX LC via CPUID if and only if the MSRs are writable. Unconditionally initialize those MSRs to the digest of Intel's signing key when the CPU is realized and reset, to reflect that fact. This avoids a potential bug in case kvm_arch_put_registers() is called before kvm_arch_get_registers(): the guest's virtual SGX_LEPUBKEYHASH MSRs would be set to 0, even though KVM initializes them to the digest of Intel's signing key by default, since KVM allows those MSRs to be updated by Qemu to support live migration. Save/restore the SGX Launch Enclave Public Key Hash MSRs if SGX Launch Control (LC) is exposed to the guest. Likewise, migrate the MSRs if they are writable by the guest. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210719112136.57018-11-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30  i386: Add SGX CPUID leaf FEAT_SGX_12_1_EAX  (Sean Christopherson)
CPUID leaf 12_1_EAX is an Intel-defined feature bits leaf enumerating the platform's SGX capabilities that may be utilized by an enclave, e.g. whether or not an enclave can gain access to the provision key. Currently there are six capabilities: - INIT: set when the enclave has been initialized by EINIT. Cannot be set by software, i.e. forced to zero in CPUID. - DEBUG: permits a debugger to read/write into the enclave. - MODE64BIT: the enclave runs in 64-bit mode. - PROVISIONKEY: grants access to the provision key. - EINITTOKENKEY: grants access to the EINIT token key, i.e. the enclave can generate EINIT tokens. - KSS: Key Separation and Sharing enabled for the enclave. Note that the entirety of CPUID.0x12.0x1, i.e. all registers, enumerates the allowed ATTRIBUTES (128 bits), but only bits 31:0 are directly exposed to the user (via FEAT_12_1_EAX). Bits 63:32 are currently all reserved and bits 127:64 correspond to the allowed XSAVE Feature Request Mask, which is calculated based on other CPU features, e.g. XSAVE, MPX, AVX, etc., and is not exposed to the user. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210719112136.57018-10-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
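The capability bits above correspond to the low bits of the SGX ATTRIBUTES field; below is a sketch of the defines, with positions taken from the Intel SDM's SECS.ATTRIBUTES layout (illustrative, not a copy of QEMU's headers).

    /* FEAT_SGX_12_1_EAX / SECS.ATTRIBUTES[31:0] bit positions (reserved bits omitted). */
    #define SGX_ATTR_INIT           (1u << 0)
    #define SGX_ATTR_DEBUG          (1u << 1)
    #define SGX_ATTR_MODE64BIT      (1u << 2)
    #define SGX_ATTR_PROVISIONKEY   (1u << 4)
    #define SGX_ATTR_EINITTOKENKEY  (1u << 5)
    #define SGX_ATTR_KSS            (1u << 7)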
2021-09-30  i386: Add SGX CPUID leaf FEAT_SGX_12_0_EBX  (Sean Christopherson)
CPUID leaf 12_0_EBX is an Intel-defined feature bits leaf enumerating the platform's SGX extended capabilities. Currently there is a single capability: - EXINFO: record information about #PFs and #GPs in the enclave's SSA Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210719112136.57018-9-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30  i386: Add SGX CPUID leaf FEAT_SGX_12_0_EAX  (Sean Christopherson)
CPUID leaf 12_0_EAX is an Intel-defined feature bits leaf enumerating the CPU's SGX capabilities, e.g. supported SGX instruction sets. Currently there are four enumerated capabilities: - SGX1 instruction set, i.e. "base" SGX - SGX2 instruction set for dynamic EPC management - ENCLV instruction set for VMM oversubscription of EPC - ENCLS-C instruction set for thread safe variants of ENCLS Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210719112136.57018-8-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30  i386: Add primary SGX CPUID and MSR defines  (Sean Christopherson)
Add CPUID defines for SGX and SGX Launch Control (LC), as well as defines for their associated FEATURE_CONTROL MSR bits. Define the Launch Enclave Public Key Hash MSRs (LE Hash MSRs), which exist when SGX LC is present (in CPUID), and are writable when SGX LC is enabled (in FEATURE_CONTROL). Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210719112136.57018-7-yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-21  include/exec: Move cpu_signal_handler declaration  (Richard Henderson)
There is nothing target specific about this. The implementation is host specific, but the declaration is 100% common. Reviewed-By: Warner Losh <imp@bsdimp.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14  user: Remove cpu_get_pic_interrupt() stubs  (Philippe Mathieu-Daudé)
cpu_get_pic_interrupt() is now unreachable from user-mode, delete the unnecessary stubs. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Warner Losh <imp@bsdimp.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210911165434.531552-25-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14  target/i386: Restrict sysemu-only fpu_helper helpers  (Philippe Mathieu-Daudé)
Restrict some sysemu-only fpu_helper helpers (see commit 83a3d9c7402: "i386: separate fpu_helper sysemu-only parts"). Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Warner Losh <imp@bsdimp.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210911165434.531552-3-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-13  target/i386: Added vVMLOAD and vVMSAVE feature  (Lara Lazier)
The feature allows the VMSAVE and VMLOAD instructions to execute in guest mode without causing a VMEXIT. (APM2 15.33.1) Signed-off-by: Lara Lazier <laramglazier@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>