author     Juan Quintela <quintela@redhat.com>   2012-09-21 11:18:18 +0200
committer  Juan Quintela <quintela@redhat.com>   2012-12-20 23:09:25 +0100
commit     e4ed1541ac9413eac494a03532e34beaf8a7d1c5
tree       38a01a9697455a8e8f376372cbf9557513f118f5 /bsd-user
parent     f50b4986b261fc10065289d2a03deba24d824988
savevm: New save live migration method: pending
The code currently does (simplified for clarity):
    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
The problem here is that qemu_savevm_state_iterate() returns 1 when it
knows that the remaining memory to send takes less than the max
downtime. But this means that we could end up spending 2x max_downtime:
one downtime in qemu_savevm_state_iterate(), and the other in
qemu_savevm_state_complete().
The code is changed to:
    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
So what we do is: at the current network speed, we calculate the
maximum number of bytes we can send: max_size.
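
For illustration, the migration loop can derive max_size from the
measured transfer rate. A minimal sketch, assuming the loop tracks the
bytes sent and the elapsed milliseconds in variables named transferred
and time_spent (those names are assumptions, not from this patch):

    /* bytes per millisecond, measured over the last interval */
    double bandwidth = (double)transferred / time_spent;
    /* migrate_max_downtime() is in nanoseconds; convert to ms */
    max_size = bandwidth * migrate_max_downtime() / 1000000;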
Then we ask every save_live section how much it has pending. If the
total is less than max_size, we move to the completion phase;
otherwise we do another iteration.
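
A sketch of how qemu_savevm_state_pending() can total those
per-section estimates, following the SaveStateEntry/savevm_handlers
conventions of savevm.c (an illustration of the idea, not necessarily
the exact patch):

    uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size)
    {
        SaveStateEntry *se;
        uint64_t ret = 0;

        QTAILQ_FOREACH(se, &savevm_handlers, entry) {
            if (!se->ops || !se->ops->save_live_pending) {
                continue;   /* section has no live-pending hook */
            }
            if (se->ops->is_active && !se->ops->is_active(se->opaque)) {
                continue;   /* section is currently inactive */
            }
            /* each section reports how many bytes it still has to send */
            ret += se->ops->save_live_pending(f, se->opaque, max_size);
        }
        return ret;
    }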
This makes things much simpler, because now individual sections don't
have to calculate the bandwidth (it was impossible to get right from
there).
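
For example, the RAM section can answer with a plain size estimate
instead of guessing the bandwidth. A hedged sketch of its
save_live_pending hook, using the existing ram_save_remaining() and
TARGET_PAGE_SIZE helpers (the body is illustrative):

    static uint64_t ram_save_pending(QEMUFile *f, void *opaque,
                                     uint64_t max_size)
    {
        /* dirty pages still to send, in bytes: no bandwidth math here */
        return ram_save_remaining() * TARGET_PAGE_SIZE;
    }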
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>