author     Jan Kiszka <jan.kiszka@siemens.com>        2010-03-01 19:10:30 +0100
committer  Marcelo Tosatti <mtosatti@redhat.com>      2010-03-04 00:29:28 -0300
commit     ea375f9ab8c76686dca0af8cb4f87a4eb569cad3 (patch)
tree       51e0476453c95a64bd34bc148082ac277a458203 /kvm.h
parent     b0b1d69079fcb9453f45aade9e9f6b71422147b0 (diff)
KVM: Rework VCPU state writeback API
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks (one of them is
sketched right after this list):
- cpu_synchronize_all_states in qemu_savevm_state_complete
(initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
(writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
(writeback after system reset)
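A minimal sketch of what one of these all-CPU hooks could look like on the
generic side, assuming QEMU's first_cpu/env->next_cpu list of that era and
the per-CPU cpu_synchronize_post_reset() helper introduced further down in
this header; this is an illustration, not code quoted from the series:

    /* Illustrative only: push the reset-level state of every VCPU back
     * into the kernel once the synchronous system reset has completed. */
    void cpu_synchronize_all_post_reset(void)
    {
        CPUState *env;

        for (env = first_cpu; env != NULL; env = env->next_cpu) {
            cpu_synchronize_post_reset(env);
        }
    }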
These writeback points, plus the existing one on VCPU exec after
cpu_synchronize_state, map onto three levels of writeback:
- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE (on init or vmload, all VCPUs stopped as well)
This level is passed to the arch-specific VCPU state writeback function,
which decides which concrete substates need to be written. That way, no
writer of load, save or reset functions that interact with in-kernel KVM
state ever has to worry about synchronization again, which also eliminates
a whole class of races, segfaults and deadlocks.
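To make the level argument concrete, here is a hedged sketch of an
arch-specific writer; the substate helpers (put_runtime_regs, put_mp_state,
put_full_msrs) are hypothetical placeholders for whatever concrete writers
an architecture actually provides:

    /* Hypothetical arch backend: the deeper the requested level, the more
     * substates are flushed into the kernel.  Helper names are made up. */
    int kvm_arch_put_registers(CPUState *env, int level)
    {
        int ret;

        /* always written: registers the VCPU itself touches at runtime */
        ret = put_runtime_regs(env);
        if (ret < 0) {
            return ret;
        }

        if (level >= KVM_PUT_RESET_STATE) {
            /* state that only changes across a system reset */
            ret = put_mp_state(env);
            if (ret < 0) {
                return ret;
            }
        }

        if (level >= KVM_PUT_FULL_STATE) {
            /* everything else, e.g. state restored by vmload or init */
            ret = put_full_msrs(env);
        }

        return ret;
    }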
cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing VCPU state that is also
tracked by in-kernel KVM subsystems.
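As a usage illustration, any caller that still touches VCPU state directly
keeps the familiar pattern below; the fixup function and the x86 register
access are made up for this example:

    /* Made-up consumer: pull the in-kernel state once, then modify it.
     * The modified state is written back automatically at the next
     * writeback point (runtime, reset or full). */
    static void example_fixup_vcpu(CPUState *env)
    {
        cpu_synchronize_state(env);    /* sync from kernel before touching regs */
        env->regs[R_EAX] = 0;          /* x86-specific, illustrative only */
    }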
Consequently, this patch removes many cpu_synchronize_state calls that are
now redundant, as well as the remaining explicit register syncs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Diffstat (limited to 'kvm.h')
-rw-r--r--   kvm.h   25
1 file changed, 24 insertions(+), 1 deletion(-)
@@ -82,7 +82,14 @@ int kvm_arch_pre_run(CPUState *env, struct kvm_run *run);
 
 int kvm_arch_get_registers(CPUState *env);
 
-int kvm_arch_put_registers(CPUState *env);
+/* state subset only touched by the VCPU itself during runtime */
+#define KVM_PUT_RUNTIME_STATE   1
+/* state subset modified during VCPU reset */
+#define KVM_PUT_RESET_STATE     2
+/* full state set, modified during initialization or on vmload */
+#define KVM_PUT_FULL_STATE      3
+
+int kvm_arch_put_registers(CPUState *env, int level);
 
 int kvm_arch_init(KVMState *s, int smp_cpus);
 
@@ -126,6 +133,8 @@ int kvm_check_extension(KVMState *s, unsigned int extension);
 uint32_t kvm_arch_get_supported_cpuid(CPUState *env, uint32_t function,
                                       int reg);
 void kvm_cpu_synchronize_state(CPUState *env);
+void kvm_cpu_synchronize_post_reset(CPUState *env);
+void kvm_cpu_synchronize_post_init(CPUState *env);
 
 /* generic hooks - to be moved/refactored once there are more users */
 
@@ -136,4 +145,18 @@ static inline void cpu_synchronize_state(CPUState *env)
     }
 }
 
+static inline void cpu_synchronize_post_reset(CPUState *env)
+{
+    if (kvm_enabled()) {
+        kvm_cpu_synchronize_post_reset(env);
+    }
+}
+
+static inline void cpu_synchronize_post_init(CPUState *env)
+{
+    if (kvm_enabled()) {
+        kvm_cpu_synchronize_post_init(env);
+    }
+}
+
 #endif
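For completeness, a sketch of how the two new per-CPU hooks plausibly tie
the level constants to kvm_arch_put_registers on the generic KVM side; the
kvm_vcpu_dirty flag is assumed from the surrounding QEMU code and the
bodies are not quoted from the patch itself:

    /* Sketch (kvm-all.c style): each hook writes the cached CPUState back
     * into the kernel with the matching level and marks it clean again. */
    void kvm_cpu_synchronize_post_reset(CPUState *env)
    {
        kvm_arch_put_registers(env, KVM_PUT_RESET_STATE);
        env->kvm_vcpu_dirty = 0;
    }

    void kvm_cpu_synchronize_post_init(CPUState *env)
    {
        kvm_arch_put_registers(env, KVM_PUT_FULL_STATE);
        env->kvm_vcpu_dirty = 0;
    }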