author     Juan Quintela <quintela@redhat.com>  2012-09-21 11:18:18 +0200
committer  Juan Quintela <quintela@redhat.com>  2012-12-20 23:09:25 +0100
commit     e4ed1541ac9413eac494a03532e34beaf8a7d1c5
tree       38a01a9697455a8e8f376372cbf9557513f118f5  /migration.c
parent     f50b4986b261fc10065289d2a03deba24d824988
savevm: New save live migration method: pending
The code currently does (simplified for clarity):
    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
The problem here is that qemu_savevm_state_iterate() returns 1 when it
knows that the remaining memory to send takes less than the max downtime.
But this means that we could end up spending 2x max_downtime: one
downtime in qemu_savevm_state_iterate() and the other in
qemu_savevm_state_complete().
The code is changed to:
    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
So what we do is: at the current network speed, we calculate the maximum
number of bytes we can send within the max downtime: max_size.
Then we ask every save_live section how much it has pending. If the
total is less than max_size, we move to the completion phase; otherwise
we do another iteration.
This makes things much simpler, because now individual sections don't
have to calculate the bandwidth (it was impossible to do that correctly
from there).
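
For illustration only, here is a minimal, self-contained sketch of the
decision described above, assuming max_size is derived as measured
bandwidth times the allowed downtime. The MigrationOps struct,
migration_step() and their names are hypothetical, not QEMU's API:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical callbacks standing in for qemu_savevm_state_pending(),
 * qemu_savevm_state_iterate() and the stop-and-complete path. */
typedef struct MigrationOps {
    uint64_t (*pending)(uint64_t max_size);  /* bytes still left to send */
    void (*iterate)(void);                   /* one more round, guest running */
    void (*stop_and_complete)(void);         /* stop the guest, flush the rest */
} MigrationOps;

/* One scheduling decision; returns true when this was the last round. */
static bool migration_step(const MigrationOps *ops,
                           uint64_t bytes_per_ms, uint64_t max_downtime_ms)
{
    /* Bytes we can transfer without exceeding the allowed downtime. */
    uint64_t max_size = bytes_per_ms * max_downtime_ms;
    uint64_t pending_size = ops->pending(max_size);

    if (pending_size >= max_size) {
        ops->iterate();            /* too much left: keep the guest running */
        return false;
    }
    ops->stop_and_complete();      /* remaining data fits inside the budget */
    return true;
}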
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'migration.c')
-rw-r--r--  migration.c | 22
1 file changed, 15 insertions, 7 deletions
diff --git a/migration.c b/migration.c
index 11123bcea0..b6374ae072 100644
--- a/migration.c
+++ b/migration.c
@@ -316,15 +316,17 @@ ssize_t migrate_fd_put_buffer(MigrationState *s, const void *data,
     return ret;
 }
 
-void migrate_fd_put_ready(MigrationState *s)
+bool migrate_fd_put_ready(MigrationState *s, uint64_t max_size)
 {
     int ret;
+    uint64_t pending_size;
+    bool last_round = false;
 
     qemu_mutex_lock_iothread();
     if (s->state != MIG_STATE_ACTIVE) {
         DPRINTF("put_ready returning because of non-active state\n");
         qemu_mutex_unlock_iothread();
-        return;
+        return false;
     }
     if (s->first_time) {
         s->first_time = false;
@@ -334,15 +336,19 @@ void migrate_fd_put_ready(MigrationState *s)
             DPRINTF("failed, %d\n", ret);
             migrate_fd_error(s);
             qemu_mutex_unlock_iothread();
-            return;
+            return false;
         }
     }
 
     DPRINTF("iterate\n");
-    ret = qemu_savevm_state_iterate(s->file);
-    if (ret < 0) {
-        migrate_fd_error(s);
-    } else if (ret == 1) {
+    pending_size = qemu_savevm_state_pending(s->file, max_size);
+    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
+    if (pending_size >= max_size) {
+        ret = qemu_savevm_state_iterate(s->file);
+        if (ret < 0) {
+            migrate_fd_error(s);
+        }
+    } else {
         int old_vm_running = runstate_is_running();
         int64_t start_time, end_time;
 
@@ -368,9 +374,11 @@ void migrate_fd_put_ready(MigrationState *s)
                 vm_start();
             }
         }
+        last_round = true;
     }
 
     qemu_mutex_unlock_iothread();
+    return last_round;
 }
 
 static void migrate_fd_cancel(MigrationState *s)
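
The diff shown here covers only the migration.c caller; per the commit
message, every save_live section is asked how much it has pending, so
qemu_savevm_state_pending() presumably sums per-section estimates
elsewhere in the series. A rough sketch of that kind of aggregation,
using an invented SaveSection list and callback (these names are
illustrative, not QEMU's actual handlers):

#include <stddef.h>
#include <stdint.h>

/* Invented per-section descriptor: each live-save section can report how
 * many bytes it still has queued for transfer. */
typedef struct SaveSection {
    uint64_t (*save_live_pending)(void *opaque, uint64_t max_size);
    void *opaque;
    struct SaveSection *next;
} SaveSection;

/* Sum the estimates of all registered sections; the caller compares the
 * total against max_size to pick between iterate and complete. */
static uint64_t savevm_state_pending(SaveSection *head, uint64_t max_size)
{
    uint64_t pending = 0;
    SaveSection *se;

    for (se = head; se != NULL; se = se->next) {
        if (se->save_live_pending) {
            pending += se->save_live_pending(se->opaque, max_size);
        }
    }
    return pending;
}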