Age | Commit message | Author |
|
KVM Hyper-V based guests can notify the hypervisor about
a guest crash by writing into the Hyper-V crash MSRs.
This patch adds handling and migration of the HV_X64_MSR_CRASH_P0-P4
and HV_X64_MSR_CRASH_CTL MSRs. Users can enable these MSRs with the
'hv-crash' option.
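On the migration side this amounts to an optional vmstate subsection along
these lines (a minimal sketch; the constant and field names are assumptions,
not necessarily what the patch uses):

    static bool hyperv_crash_enable_needed(void *opaque)
    {
        X86CPU *cpu = opaque;
        CPUX86State *env = &cpu->env;
        int i;

        /* only send the subsection if the guest ever wrote a crash MSR */
        for (i = 0; i < HV_X64_MSR_CRASH_PARAMS; i++) {
            if (env->msr_hv_crash_params[i]) {
                return true;
            }
        }
        return false;
    }

    static const VMStateDescription vmstate_msr_hyperv_crash = {
        .name = "cpu/msr_hyperv_crash",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = hyperv_crash_enable_needed,
        .fields = (VMStateField[]){
            VMSTATE_UINT64_ARRAY(env.msr_hv_crash_params,
                                 X86CPU, HV_X64_MSR_CRASH_PARAMS),
            VMSTATE_END_OF_LIST()
        }
    };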
Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Andreas Färber <afaerber@suse.de>
Message-Id: <1435924905-8926-13-git-send-email-den@openvz.org>
[Folks, stop abbreviating variable names!!! Also fix compilation on
non-Linux/x86. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
We create optional sections with this patch. But we already have
optional subsections. Instead of having two mechanisms that do the
same thing, we can just generalize it.
For subsections we just change (see the sketch after this list):
- Add a needed function to VMStateDescription
- Remove VMStateSubsection (once the needed function is moved out of it,
what is left is just a VMStateDescription)
- Adjust the whole tree, moving the needed function to the corresponding
VMStateDescription
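A minimal sketch of the resulting shape, using a made-up device
('SomeDevice' and its fields are purely illustrative):

    /* a subsection is now just another VMStateDescription with a .needed
     * callback; the parent lists it in a NULL-terminated array */
    static bool feature_needed(void *opaque)
    {
        SomeDevice *s = opaque;
        return s->feature_enabled;          /* send only when relevant */
    }

    static const VMStateDescription vmstate_somedev_feature = {
        .name = "somedev/feature",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = feature_needed,
        .fields = (VMStateField[]){
            VMSTATE_UINT32(feature_reg, SomeDevice),
            VMSTATE_END_OF_LIST()
        }
    };

    static const VMStateDescription vmstate_somedev = {
        .name = "somedev",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]){
            VMSTATE_UINT32(ctrl, SomeDevice),
            VMSTATE_END_OF_LIST()
        },
        .subsections = (const VMStateDescription*[]){
            &vmstate_somedev_feature,
            NULL
        }
    };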
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU
address space gets an extra region which is an alias of
/machine/smram. This extra region is enabled or disabled
as the CPU enters/exits SMM.
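A rough sketch of the update point, assuming a helper name and a per-CPU
alias field (not necessarily the exact code of the patch):

    static void cpu_smm_update_smram(X86CPU *cpu)
    {
        CPUX86State *env = &cpu->env;
        bool smm = (env->hflags & HF_SMM_MASK) != 0;

        /* cpu->smram is an alias of /machine/smram mapped into this CPU's
         * address space; flip it as the CPU enters or leaves SMM */
        memory_region_transaction_begin();
        memory_region_set_enabled(&cpu->smram, smm);
        memory_region_transaction_commit();
    }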
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Right now, the AVX512 registers are split in many different fields:
xmm_regs for the low 128 bits of the first 16 registers, ymmh_regs
for the next 128 bits of the same first 16 registers, zmmh_regs
for the next 256 bits of the same first 16 registers, and finally
hi16_zmm_regs for the full 512 bits of the second 16 registers.
This makes it simple to move data in and out of the xsave region,
but would be a nightmare for a hypothetical TCG implementation and
leads to a proliferation of [XYZ]MM_[BWLSQD] macros. Instead,
this patch marshals data manually from the xsave region to a single
32x512-bit array, simplifying the macro jungle and clarifying which
bits are in which vmstate subsection.
The migration format is unaffected.
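For illustration, marshalling the YMM high halves out of the xsave area
could look roughly like this (the offset macro and field names follow
QEMU's kvm code but should be treated as assumptions here):

    int i;

    /* copy the 128-bit YMM high halves into bits 128..255 of the
     * unified 512-bit register file */
    for (i = 0; i < CPU_NB_REGS; i++) {
        const uint8_t *ymmh =
            (const uint8_t *)&xsave->region[XSAVE_YMMH_SPACE] + 16 * i;
        env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
        env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh + 8);
    }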
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
After the next patch, each vmstate field will extract parts of a larger
(32x512-bit) array, so we cannot check the vmstate field against the
type of the array.
While changing this, change the macros to accept the index of the first
element (which will not be 0 for Hi16_ZMM_REGS) instead of the number
of elements (which is always CPU_NB_REGS).
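One way to express such a macro is on top of the generic
VMSTATE_STRUCT_SUB_ARRAY helper; a sketch (the real definition may differ):

    #define VMSTATE_XMM_REGS(_field, _state, _start)                     \
        VMSTATE_STRUCT_SUB_ARRAY(_field, _state, _start, CPU_NB_REGS,    \
                                 0, vmstate_xmm_reg, XMMReg)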
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Add xsaves-related definitions, and add the corresponding parts to
kvm_get/put and vmstate.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Add AVX512 feature bits, register definitions and the corresponding
xsave/vmstate support.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This patch introduces the cpu_set_fpuc() function, which changes the fpuc field
of the CPU state and calls the update_fp_status() function.
These calls update the status of the softfloat library and prevent bugs caused
by inconsistent rounding settings between the FPU and softfloat.
v2 changes:
* Added missing calls and introduced a setter function (as suggested by TeLeMan)
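The setter itself is conceptually tiny; a sketch (close to, but not
necessarily identical to, the actual helper):

    void cpu_set_fpuc(CPUX86State *env, uint16_t val)
    {
        env->fpuc = val;
        update_fp_status(env);  /* keep softfloat's rounding mode in sync */
    }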
Reviewed-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
|
|
We currently define the number of variable range MTRR registers as 8
in the CPUX86State structure and vmstate, but use MSR_MTRRcap_VCNT
(also 8) to report to guests the number available. Change this to
use MSR_MTRRcap_VCNT consistently.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Invariant TSC documentation mentions that "invariant TSC will run at a
constant rate in all ACPI P-, C-, and T-states".
This is not the case if migration to a host with different TSC frequency
is allowed, or if savevm is performed. So block migration/savevm.
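A sketch of the blocking logic (the feature-word and flag names follow QEMU's
CPUID tables; the error text is made up, and migrate_add_blocker() has grown
extra arguments in later QEMU versions):

    if (cpu->env.features[FEAT_8000_0007_EDX] & CPUID_APM_INVTSC) {
        Error *blocker = NULL;

        /* invtsc is exposed to the guest: refuse migration and savevm */
        error_setg(&blocker,
                   "State blocked by non-migratable CPU device (invtsc flag)");
        migrate_add_blocker(blocker);
    }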
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[AF+mtosatti: Updated error message]
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
After the previous patch from Peter, they are redundant. This way we don't
assign them except when needed. While at it, there were lots of cases
where the ".fields" indentation was wrong, with several differently
spaced variants of:
    .fields = (VMStateField []) {
Change all the combinations to:
    .fields = (VMStateField[]){
The biggest problem (apart from aesthetics) was that checkpatch complained
when we copied and pasted the code from one place to another.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
|
|
CS.RPL is not equal to the CPL in the few instructions between
setting CR0.PE and reloading CS. We get this right in the common
case, because writes to CR0 do not modify the CPL, but it would
not be enough if an SMI comes exactly during that brief period.
Were this to happen, the RSM instruction would erroneously set
CPL to the low two bits of the real-mode selector; and if they are
not 00, the next instruction fetch cannot access the code segment
and causes a triple fault.
However, SS.DPL *is* always equal to the CPL. In real processors
(AMD only) there is a weird case of SYSRET setting SS.DPL=SS.RPL
from the STAR register while forcing CPL=3, but we do not emulate
that.
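Put differently, the CPL can safely be derived from the SS descriptor cache;
a sketch (the helper name is made up):

    static inline int x86_get_cpl(CPUX86State *env)
    {
        /* SS.DPL mirrors the CPL even across the CR0.PE / CS reload window */
        return (env->segs[R_SS].flags >> DESC_DPL_SHIFT) & 3;
    }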
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The subsection already exists in one well-known enterprise Linux
distribution, but for some strange reason the fields were swapped
when forward-porting the patch to upstream.
Limit headaches for said enterprise Linux distributor when the
time comes to rebase their version of QEMU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1396452782-21473-1-git-send-email-pbonzini@redhat.com
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Use CPUState. This allows cleaning up CPUArchState in gdbstub.
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Use CPUState. This lets us drop a few local env usages.
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
http://msdn.microsoft.com/en-us/library/windows/hardware/ff541625%28v=vs.85%29.aspx
This code is generic and activates either the reference time counter or the
virtual reference time stamp counter.
Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Add some MPX related definitions, and hardcode the sizes and offsets
of xsave features 3 and 4. It also adds the corresponding parts to
kvm_get/put_xsave and vmstate.
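For illustration, the hardcoded layout amounts to something like this
(indices are into the uint32_t xsave region array; the exact values and
names should be treated as assumptions):

    #define XSAVE_BNDREGS   240   /* xsave feature 3: BND0-BND3 bounds    */
    #define XSAVE_BNDCSR    256   /* xsave feature 4: BNDCFGU, BNDSTATUS  */

    typedef struct BNDReg {
        uint64_t lb;              /* lower bound  */
        uint64_t ub;              /* upper bound  */
    } BNDReg;

    typedef struct BNDCSReg {
        uint64_t cfgu;            /* BNDCFGU      */
        uint64_t sts;             /* BNDSTATUS    */
    } BNDCSReg;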
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Convert steal time MSR vmsd callback pointer to proper X86CPU type.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The recent KVM patch adds IA32_FEATURE_CONTROL support. QEMU needs
to clear this MSR when resetting a vCPU and preserve its value across
migration. This patch adds that support.
Signed-off-by: Arthur Chunqi Li <yzt356@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
|
|
Older KVM versions put an invalid value (0x3) in the segment registers'
DPL field for real mode guests.
This breaks migration from those hosts to hosts with unrestricted guest support.
We detect it by checking the CS DPL value for real mode guests and fix the DPL
values of all the segment registers.
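A sketch of the fixup (the helper name is made up; the check and fixup
follow the description above):

    static void fixup_realmode_dpl(CPUX86State *env)
    {
        int i;

        /* real mode guest with a bogus CS.DPL left behind by older KVM? */
        if (!(env->cr[0] & CR0_PE_MASK) &&
            ((env->segs[R_CS].flags >> DESC_DPL_SHIFT) & 3) != 0) {
            for (i = 0; i < 6; i++) {
                env->segs[i].flags &= ~(3 << DESC_DPL_SHIFT);
            }
        }
    }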
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Older KVM versions save the CS DPL to an invalid value (0x3) for real mode
guests. This patch detects this situation when loading CPU state and sets the
DPL of all the segment registers to zero.
This allows migration from older KVM on hosts without unrestricted guest
support to hosts with unrestricted guest support.
For example, migration of a real mode guest from a Penryn host (with kernel
2.6.32) to a Westmere host will fail with "kvm: unhandled exit 80000021".
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Read and write steal time MSR, so that reporting is functional across
migration.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
|
|
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Expose vmstate_cpu as vmstate_x86_cpu and hook it up to CPUClass::vmsd.
Adapt opaques and VMState fields to X86CPU. Drop cpu_{save,load}().
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Implicit use of the dr7 bit fields is a little hard to understand,
so define constants for them and use them consistently.
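The kind of constants meant, as a sketch (values follow the architectural
DR7 layout; the names may not match the patch exactly):

    #define DR7_GLOBAL_BP_MASK   0xaa   /* G0..G3 enable bits            */
    #define DR7_LOCAL_BP_MASK    0x55   /* L0..L3 enable bits            */
    #define DR7_MAX_BP           4
    #define DR7_TYPE_SHIFT       16     /* 2-bit condition field per BP  */
    #define DR7_LEN_SHIFT        18     /* 2-bit length field per BP     */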
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
* qemu-kvm/uq/master:
qemu-kvm/pci-assign: 64 bits bar emulation
target-i386: Enabling IA32_TSC_ADJUST for QEMU KVM guest VMs
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
CPUID.7.0.EBX[1]=1 indicates that the IA32_TSC_ADJUST MSR 0x3b is supported.
The basic design is to emulate the MSR by allowing reads and writes to
hypervisor vcpu-specific locations that store the value of the emulated MSR.
In this way the IA32_TSC_ADJUST value will be included in all reads of
the TSC MSR, whether through rdmsr or rdtsc.
As this is a new MSR that the guest may access and modify, its value needs
to be migrated along with the other MSRs. The changes here are specifically
for recognizing when IA32_TSC_ADJUST is enabled in CPUID, plus the code added
for migrating its value.
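On the KVM side this boils down to one more entry in the MSR list exchanged
with the kernel; a sketch (the capability flag, helper and constant names are
assumptions here):

    if (has_msr_tsc_adjust) {
        /* only exchanged when CPUID.7.0.EBX[1] advertised the MSR */
        kvm_msr_entry_set(&msrs[n++], MSR_TSC_ADJUST, env->tsc_adjust);
    }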
Signed-off-by: Will Auld <will.auld@intel.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Support get/set of the new PV EOI MSR, for migration.
Add an optional section for the MSR value - send it
out only if the MSR was changed from the default value (0).
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Scripted conversion:
sed -i "s/CPUState/CPUX86State/g" target-i386/*.[hc]
sed -i "s/#define CPUX86State/#define CPUState/" target-i386/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
It's needed for its default value - bit 0 specifies that "rep movs" is
good enough for memcpy, and Linux may use a slower memcpy if it is not set,
depending on cpu family/model.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
KVM adds emulation of the LAPIC TSC deadline timer for guests.
This patch is the corresponding work on the QEMU side.
Use subsections to save/restore the field (mtosatti).
Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
This reverts commit bfc2455ddbb41148494a084d15777e6bed7533c3.
New patch with subsections will follow.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
KVM adds emulation of the LAPIC TSC deadline timer for guests.
This patch is the corresponding work on the QEMU side.
Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Most exec-all.h include directives are now useless, remove them.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
There is no need to specify version on the subsection fields.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
These FPU states are properly maintained by KVM but not yet by TCG. So
far we unconditionally set them to 0 in the guest which may cause
state corruptions, though not with modern guests.
To avoid breaking backward migration, use a conditional subsection that
is only written if any of the three fields is non-zero. The guest's
FNINIT clears them frequently, and a cleared IA32_MISC_ENABLE MSR[2]
further reduces the probability of non-zero values, so this
subsection is not expected to restrict migration in any common scenario.
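The guard for the conditional subsection is then a simple predicate;
a sketch (field names as in QEMU's CPUX86State):

    static bool fpop_ip_dp_needed(void *opaque)
    {
        CPUX86State *env = opaque;

        /* only send the subsection if any of the three fields is non-zero */
        return env->fpop != 0 || env->fpip != 0 || env->fpdp != 0;
    }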
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Now that target-i386 uses softfloat, floatx80 is always available and
there is no longer any need for code handling both float64 and floatx80.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
This reverts commit c995b495b9d6e60ab1e390bd398a22425d0b3c8c.
From Jan Kiszka:
Ouch, indeed. Moreover, CPU_SAVE_VERSION was not updated (likely the
reason for the breakage). Thanks for debugging this!
Anthony (or whoever), please revert this unneeded commit in qemu.git.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Add save/restore of MSR for migration and cpuid bit.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:
- cpu_synchronize_all_states in qemu_savevm_state_complete
(initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
(writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
(writeback after system reset)
These writeback points, plus the existing one of VCPU exec after
cpu_synchronize_state, map onto three levels of writeback:
- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE (on init or vmload, all VCPUs stopped as well)
This level is passed to the arch-specific VCPU state writing function
that will decide which concrete substates need to be written. That way,
no writer of load, save or reset functions that interact with in-kernel
KVM states will ever have to worry about synchronization again. That
also means that a lot of reasons for races, segfaults and deadlocks are
eliminated.
cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.
Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, just like remaining explicit register syncs.
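For reference, a sketch of the pieces described above (the level constants
exist in QEMU's KVM headers; the CPU_FOREACH iteration helper shown here is
an assumption for this sketch):

    /* the three writeback levels */
    #define KVM_PUT_RUNTIME_STATE   1   /* other VCPUs keep running          */
    #define KVM_PUT_RESET_STATE     2   /* synchronous reset, VCPUs stopped  */
    #define KVM_PUT_FULL_STATE      3   /* init or vmload, VCPUs stopped     */

    /* one of the generic hooks */
    void cpu_synchronize_all_post_reset(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            cpu_synchronize_post_reset(cpu);
        }
    }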
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
This reverts commit ebbc8a3d8e76d0402f8a08c10c0f32e24715d41d.
As suggested by Jan Kiszka,
"It was obsoleted by d1793b836f8f123b961c613de1bb1c0c185c84cc and now
saves/restores a useless field."
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Marcelo correctly remarked that there are usage conflicts between QEMU
core code and KVM with respect to exception_index. So introduce a separate
field and also save/restore it properly.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|