path: root/migration/migration.c
Age         Commit message                                              Author
2023-10-04  migration: file URI  (Steve Sistare)
Extend the migration URI to support file:<filename>. This can be used for any migration scenario that does not require a reverse path. It can be used as an alternative to 'exec:cat > file' in minimized containers that do not contain /bin/sh, and it is easier to use than the fd:<fdname> URI. It can be used in HMP commands, and as a qemu command-line parameter. For best performance, guest ram should be shared and x-ignore-shared should be true, so guest pages are not written to the file, in which case the guest may remain running. If ram is not so configured, then the user is advised to stop the guest first. Otherwise, a busy guest may re-dirty the same page, causing it to be appended to the file multiple times, and the file may grow unboundedly. That issue is being addressed in the "fixed-ram" patch series. Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Tested-by: Michael Galaxy <mgalaxy@akamai.com> Reviewed-by: Michael Galaxy <mgalaxy@akamai.com> Reviewed-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-ID: <1694182931-61390-2-git-send-email-steven.sistare@oracle.com>
2023-09-29  migration: Refactor repeated call of yank_unregister_instance  (Tejus GK)
In the function qmp_migrate(), yank_unregister_instance() gets called twice, which isn't required. Hence, refactor it so that it gets called during the local_error cleanup. Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Acked-by: Peter Xu <peterx@redhat.com> Signed-off-by: Tejus GK <tejus.gk@nutanix.com> Message-ID: <20230621130940.178659-3-tejus.gk@nutanix.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-09-27  migration: Move return path cleanup to main migration thread  (Fabiano Rosas)
Now that the return path thread is allowed to finish during a paused migration, we can move the cleanup of the QEMUFiles to the main migration thread. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-9-farosas@suse.de>
2023-09-27  migration: Replace the return path retry logic  (Fabiano Rosas)
Replace the return path retry logic with finishing and restarting the thread. This fixes a race when resuming the migration that leads to a segfault.

Currently, when doing postcopy we consider that an IO error on the return path file could be due to a network intermittency. We then keep the thread alive but have it do cleanup of the 'from_dst_file' and wait on the 'postcopy_pause_rp' semaphore. When the user issues a migrate resume, a new return path is opened and the thread is allowed to continue.

There's a race condition in the above mechanism. It is possible for the new return path file to be set up *before* the cleanup code in the return path thread has had a chance to run, leading to the *new* file being closed and the pointer set to NULL. When the thread is released after the resume, it tries to dereference 'from_dst_file' and crashes:

    Thread 7 "return path" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fffd1dbf700 (LWP 9611)]
    0x00005555560e4893 in qemu_file_get_error_obj (f=0x0, errp=0x0) at ../migration/qemu-file.c:154
    154         return f->last_error;

    (gdb) bt
    #0  0x00005555560e4893 in qemu_file_get_error_obj (f=0x0, errp=0x0) at ../migration/qemu-file.c:154
    #1  0x00005555560e4983 in qemu_file_get_error (f=0x0) at ../migration/qemu-file.c:206
    #2  0x0000555555b9a1df in source_return_path_thread (opaque=0x555556e06000) at ../migration/migration.c:1876
    #3  0x000055555602e14f in qemu_thread_start (args=0x55555782e780) at ../util/qemu-thread-posix.c:541
    #4  0x00007ffff38d76ea in start_thread (arg=0x7fffd1dbf700) at pthread_create.c:477
    #5  0x00007ffff35efa6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Here's the race (the important bit is open_return_path happening before migration_release_dst_files):

 migration                  | qmp                          | return path
----------------------------+------------------------------+---------------------------------
                              qmp_migrate_pause()
                               shutdown(ms->to_dst_file)
                                f->last_error = -EIO
 migrate_detect_error()
  postcopy_pause()
   set_state(PAUSED)
   wait(postcopy_pause_sem)
                              qmp_migrate(resume)
                              migrate_fd_connect()
                               resume = state == PAUSED
                               open_return_path <-- TOO SOON!
                               set_state(RECOVER)
                               post(postcopy_pause_sem)
                                                             (incoming closes to_src_file)
                                                             res = qemu_file_get_error(rp)
                                                             migration_release_dst_files()
                                                             ms->rp_state.from_dst_file = NULL
 post(postcopy_pause_rp_sem)
                                                             postcopy_pause_return_path_thread()
                                                              wait(postcopy_pause_rp_sem)
                                                             rp = ms->rp_state.from_dst_file
                                                             goto retry
                                                             qemu_file_get_error(rp)
                                                             SIGSEGV
-------------------------------------------------------------------------------------------

We can keep the retry logic without having the thread alive and waiting. The only piece of data used by it is the 'from_dst_file' and it is only allowed to proceed after a migrate resume is issued and the semaphore released at migrate_fd_connect().

Move the retry logic to outside the thread by waiting for the thread to finish before pausing the migration.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230918172822.19052-8-farosas@suse.de>
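The fix, in essence, is a join-then-restart pattern: the worker thread never cleans up shared pointers while a resume may be racing with it; the main thread joins the worker first, then does the cleanup, and starts a fresh thread on resume. A minimal, self-contained pthread sketch of that idea (State, Chan, return_path_fn and the helpers are illustrative stand-ins, not QEMU's actual code):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder for the return-path channel. */
    typedef struct { bool broken; } Chan;

    typedef struct {
        pthread_t rp_thread;
        bool rp_running;
        Chan *from_dst;
    } State;

    static void *return_path_fn(void *opaque)
    {
        State *s = opaque;
        /* ... read messages until the channel errors out, then just return;
         * no in-thread retry loop and no in-thread cleanup of s->from_dst. */
        (void)s;
        return NULL;
    }

    /* Pause path: wait for the thread to finish *before* touching the channel. */
    static void pause_migration(State *s)
    {
        if (s->rp_running) {
            pthread_join(s->rp_thread, NULL);
            s->rp_running = false;
        }
        free(s->from_dst);
        s->from_dst = NULL;      /* safe: no other user of from_dst remains */
    }

    /* Resume path: open a fresh channel and start a brand-new thread. */
    static int resume_migration(State *s)
    {
        s->from_dst = calloc(1, sizeof(*s->from_dst));
        if (!s->from_dst) {
            return -1;
        }
        if (pthread_create(&s->rp_thread, NULL, return_path_fn, s) != 0) {
            free(s->from_dst);
            s->from_dst = NULL;
            return -1;
        }
        s->rp_running = true;
        return 0;
    }

    int main(void)
    {
        State s = {0};
        if (resume_migration(&s) == 0) {   /* initial start reuses the same helper */
            pause_migration(&s);
        }
        printf("done\n");
        return 0;
    }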
2023-09-27  migration: Consolidate return path closing code  (Fabiano Rosas)
We'll start calling the await_return_path_close_on_source() function from other parts of the code, so move all of the related checks and tracepoints into it. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-7-farosas@suse.de>
2023-09-27  migration: Remove redundant cleanup of postcopy_qemufile_src  (Fabiano Rosas)
This file is owned by the return path thread which is already doing cleanup. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-6-farosas@suse.de>
2023-09-27  migration: Fix possible race when shutting down to_dst_file  (Fabiano Rosas)
It's not safe to call qemu_file_shutdown() on the to_dst_file without first checking for the file's presence under the lock. The cleanup of this file happens at postcopy_pause() and migrate_fd_cleanup() which are not necessarily running in the same thread as migrate_fd_cancel(). Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-5-farosas@suse.de>
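A minimal sketch of the check-under-lock pattern this describes, with a simplified State/Chan in place of MigrationState/QEMUFile and a stub chan_shutdown() standing in for qemu_file_shutdown():

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct { int fd; } Chan;                 /* stand-in for QEMUFile */

    typedef struct {
        pthread_mutex_t file_lock;                   /* protects to_dst_file */
        Chan *to_dst_file;
    } State;

    static void chan_shutdown(Chan *c) { (void)c; }  /* stand-in for qemu_file_shutdown() */

    /* Cancel path: another thread may clear to_dst_file at any time, so the
     * presence check and the shutdown must both happen under the lock. */
    static void cancel_shutdown(State *s)
    {
        pthread_mutex_lock(&s->file_lock);
        if (s->to_dst_file) {
            chan_shutdown(s->to_dst_file);
        }
        pthread_mutex_unlock(&s->file_lock);
    }

    int main(void)
    {
        State s = { .to_dst_file = NULL };
        pthread_mutex_init(&s.file_lock, NULL);
        cancel_shutdown(&s);                         /* no file yet: safely does nothing */
        s.to_dst_file = calloc(1, sizeof(Chan));
        cancel_shutdown(&s);
        free(s.to_dst_file);
        return 0;
    }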
2023-09-27  migration: Fix possible races when shutting down the return path  (Fabiano Rosas)
We cannot call qemu_file_shutdown() on the return path file without taking the file lock. The return path thread could be running its cleanup code and have just cleared the from_dst_file pointer. Checking ms->to_dst_file for errors could also race with migrate_fd_cleanup() which clears the to_dst_file pointer. Protect both accesses by taking the file lock. This was caught by inspection; it should be rare, but the next patches will start calling this code from other places, so let's do the correct thing. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-4-farosas@suse.de>
2023-09-27  migration: Fix possible race when setting rp_state.error  (Fabiano Rosas)
We don't need to set the rp_state.error right after a shutdown because qemu_file_shutdown() always sets the QEMUFile error, so the return path thread would have seen it and set the rp error itself. Setting the error outside of the thread is also racy because the thread could clear it after we set it. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Fabiano Rosas <farosas@suse.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230918172822.19052-3-farosas@suse.de>
2023-09-27  migration: Fix race that dest preempt thread close too early  (Peter Xu)
We hit an intermittent CI failure in migration-test, in the preempt/plain unit test:

    qemu-system-x86_64: Unable to read from socket: Connection reset by peer
    Memory content inconsistency at 5b43000 first_byte = bd last_byte = bc current = 4f hit_edge = 1
    ** ERROR:../tests/qtest/migration-test.c:300:check_guests_ram: assertion failed: (bad == 0)
    (test program exited with status code -6)

Fabiano debugged it and found that the preempt thread can quit before it has received all the pages, which can leave the guest with incomplete memory and corrupt it.

To make sure the preempt thread has finished receiving all the pages, we can rely on page_requested_count being zero, because the preempt channel only receives requested page faults. Note that not all faulted pages have to be sent via the preempt channel/thread: if a requested page has already been queued on the background main channel, the source qemu will still send it via that channel.

Instead of spinning on the count, add a condvar so the main thread can wait on it if that unusual case happens, without burning CPU for no good reason; the duration is short, so spinning would probably be fine, but it's better not to. The condvar is only used when that special case is triggered. A memory ordering trick (against the preempt thread status field) is needed to make sure the main thread always gets a kick when that case triggers.

Closes: https://gitlab.com/qemu-project/qemu/-/issues/1886
Debugged-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230918172822.19052-2-farosas@suse.de>
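A self-contained sketch of the "wait on a condvar until the pending-request count drops to zero" idea; the names (PostcopyState, page_received, wait_for_requested_pages) are illustrative, not the actual QEMU helpers, and the memory-ordering detail against the thread status field is left out:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  all_received;
        uint64_t        page_requested_count;
    } PostcopyState;

    /* Preempt-thread side: called for every requested page that arrives. */
    static void page_received(PostcopyState *ps)
    {
        pthread_mutex_lock(&ps->lock);
        if (--ps->page_requested_count == 0) {
            pthread_cond_broadcast(&ps->all_received);   /* kick any waiter */
        }
        pthread_mutex_unlock(&ps->lock);
    }

    /* Main-thread side: block (instead of spinning) until the count reaches 0. */
    static void wait_for_requested_pages(PostcopyState *ps)
    {
        pthread_mutex_lock(&ps->lock);
        while (ps->page_requested_count > 0) {
            pthread_cond_wait(&ps->all_received, &ps->lock);
        }
        pthread_mutex_unlock(&ps->lock);
    }

    static void *preempt_thread(void *opaque)
    {
        PostcopyState *ps = opaque;
        for (int i = 0; i < 4; i++) {
            page_received(ps);
        }
        return NULL;
    }

    int main(void)
    {
        PostcopyState ps = { .page_requested_count = 4 };
        pthread_mutex_init(&ps.lock, NULL);
        pthread_cond_init(&ps.all_received, NULL);

        pthread_t t;
        pthread_create(&t, NULL, preempt_thread, &ps);
        wait_for_requested_pages(&ps);               /* returns once all 4 arrived */
        pthread_join(t, NULL);
        printf("all requested pages received\n");
        return 0;
    }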
2023-09-11  migration: Add .save_prepare() handler to struct SaveVMHandlers  (Avihai Horon)
Add a new .save_prepare() handler to struct SaveVMHandlers. This handler is called early, even before migration starts, and can be used by devices to perform early checks. Refactor migrate_init() to be able to return errors and call .save_prepare() from there. Suggested-by: Peter Xu <peterx@redhat.com> Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
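An illustrative sketch of an early prepare hook in a handlers struct; the struct layout and callback signature here are simplified stand-ins, not the real SaveVMHandlers definition:

    #include <stdio.h>

    typedef struct {
        const char *name;
        int (*save_prepare)(void *opaque, char *errbuf, int errbuf_len);
    } SaveHandlers;

    static int vfio_like_prepare(void *opaque, char *errbuf, int errbuf_len)
    {
        (void)opaque;
        /* A device can reject migration here, before it even starts. */
        snprintf(errbuf, errbuf_len, "device not ready for migration");
        return -1;
    }

    static const SaveHandlers handlers[] = {
        { "vfio-like-device", vfio_like_prepare },
        { "stateless-device", NULL },                /* the hook is optional */
    };

    /* Walk all registered handlers early, before any state is sent. */
    static int migrate_prepare_all(void)
    {
        char err[128];
        for (unsigned i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
            if (handlers[i].save_prepare &&
                handlers[i].save_prepare(NULL, err, sizeof(err)) < 0) {
                fprintf(stderr, "%s: %s\n", handlers[i].name, err);
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        /* Exits with 1 here, because the example device rejects migration. */
        return migrate_prepare_all() == 0 ? 0 : 1;
    }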
2023-09-11  migration: Move more initializations to migrate_init()  (Avihai Horon)
Initialization of mig_stats, compression_counters and VFIO bytes transferred is hard-coded in the migration code path and the snapshot code path. Make the code cleaner by initializing them in migrate_init(). Suggested-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
2023-09-11  migration: Add migration prefix to functions in target.c  (Avihai Horon)
The functions in target.c are not static, yet they don't have a proper migration prefix. Add such a prefix. Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
2023-07-26  migration: Update error description whenever migration fails  (Tejus GK)
There are places in migration.c where the migration is marked failed with MIGRATION_STATUS_FAILED, but the failure reason is never updated. Hence libvirt doesn't know why the migration failed when it queries for it. Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Tejus GK <tejus.gk@nutanix.com> Message-ID: <20230621130940.178659-2-tejus.gk@nutanix.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-07-26  migration: Extend query-migrate to provide dirty page limit info  (Hyman Huang(黄勇))
Extend query-migrate to provide the throttle time and estimated ring full time with the dirty-limit capability enabled, through which we can observe whether the dirty limit takes effect during live migration. Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-ID: <168733225273.5845.15871826788879741674-8@git.sr.ht> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-07-26  migration: Implement dirty-limit convergence algo  (Hyman Huang(黄勇))
Implement the dirty-limit convergence algorithm for live migration, which is similar to the auto-converge algorithm but uses the dirty limit instead of cpu throttling to make the migration converge. Enable the dirty page limit when dirty_rate_high_cnt is greater than 2 and the dirty-limit capability is enabled; disable dirty-limit if the migration is cancelled. Note that the "set_vcpu_dirty_limit" and "cancel_vcpu_dirty_limit" commands are not allowed during dirty-limit live migration. Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-ID: <168733225273.5845.15871826788879741674-7@git.sr.ht> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
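A rough sketch of the decision described above, with placeholder knobs and stub setters standing in for the real dirty-limit plumbing (the 100 MB/s value is just an example):

    #include <stdbool.h>
    #include <stdio.h>

    static bool dirty_limit_enabled;

    /* Stubs: the real code drives the per-vCPU dirty-limit feature. */
    static void set_vcpu_dirty_limit(double mbps)   { dirty_limit_enabled = true;  (void)mbps; }
    static void cancel_vcpu_dirty_limit(void)       { dirty_limit_enabled = false; }

    /* Called on each iteration when migration is not converging. */
    static void throttle_decision(bool dirty_limit_cap, int dirty_rate_high_cnt)
    {
        if (dirty_limit_cap && dirty_rate_high_cnt > 2 && !dirty_limit_enabled) {
            set_vcpu_dirty_limit(100.0);            /* example limit */
        }
    }

    /* Called when the migration is cancelled or fails. */
    static void throttle_cleanup(void)
    {
        if (dirty_limit_enabled) {
            cancel_vcpu_dirty_limit();
        }
    }

    int main(void)
    {
        throttle_decision(true, 3);
        printf("dirty limit enabled: %d\n", dirty_limit_enabled);
        throttle_cleanup();
        return 0;
    }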
2023-07-26  migration/multifd: Rename threadinfo.c functions  (Fabiano Rosas)
We're about to add more functions to this file so make it use the same coding style as the rest of the code. Signed-off-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Xu <peterx@redhat.com> Message-Id: <20230607161306.31425-2-farosas@suse.de> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-07-08  migration: unexport migrate_fd_error()  (Laszlo Ersek)
The only migrate_fd_error() call sites are in "migration/migration.c", which is also where we define migrate_fd_error(). Make the function static, and remove its declaration from "migration/migration.h". Cc: Juan Quintela <quintela@redhat.com> (maintainer:Migration) Cc: Leonardo Bras <leobras@redhat.com> (reviewer:Migration) Cc: Peter Xu <peterx@redhat.com> (reviewer:Migration) Cc: qemu-trivial@nongnu.org Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2018404 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-08  migration: factor out "resume_requested" in qmp_migrate()  (Laszlo Ersek)
It cuts back on those awkward, duplicated !(has_resume && resume) expressions. Cc: Juan Quintela <quintela@redhat.com> (maintainer:Migration) Cc: Leonardo Bras <leobras@redhat.com> (reviewer:Migration) Cc: Peter Xu <peterx@redhat.com> (reviewer:Migration) Cc: qemu-trivial@nongnu.org Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2018404 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-06-30  vfio/migration: Reset bytes_transferred properly  (Avihai Horon)
Currently, VFIO bytes_transferred is not reset properly:

1. bytes_transferred is not reset after a VM snapshot (so a migration following a snapshot will report an incorrect value).
2. bytes_transferred is a single counter for all VFIO devices; however, upon migration failure it is reset multiple times, by each VFIO device.

Fix it by introducing a new function vfio_reset_bytes_transferred() and calling it during migration and snapshot start. Remove the existing bytes_transferred reset in the VFIO migration state notifier, which is not needed anymore.

Fixes: 3710586caa5d ("qapi: Add VFIO devices migration stats in Migration stats")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2023-06-30  migration: Implement switchover ack logic  (Avihai Horon)
Implement switchover ack logic. This prevents the source from stopping the VM and completing the migration until an ACK is received from the destination that it's OK to do so. To achieve this, a new SaveVMHandlers handler switchover_ack_needed() and a new return path message MIG_RP_MSG_SWITCHOVER_ACK are added. The switchover_ack_needed() handler is called during migration setup in the destination to check if switchover ack is used by the migrated device. When switchover is approved by all migrated devices in the destination that support this capability, the MIG_RP_MSG_SWITCHOVER_ACK return path message is sent to the source to notify it that it's OK to do switchover. Signed-off-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Peter Xu <peterx@redhat.com> Tested-by: YangHang Liu <yanghliu@redhat.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Cédric Le Goater <clg@redhat.com>
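An illustrative destination-side sketch of the ack bookkeeping: count the devices that use the capability during setup and send the return-path ack only when the last one approves. The counter and helper names are made up for the example; only the MIG_RP_MSG_SWITCHOVER_ACK message name comes from the commit:

    #include <stdio.h>

    static int switchover_ack_pending;

    static void send_rp_switchover_ack(void)
    {
        printf("send MIG_RP_MSG_SWITCHOVER_ACK to source\n");
    }

    /* During setup: a device declares that it needs a switchover ack. */
    static void device_needs_ack(void)
    {
        switchover_ack_pending++;
    }

    /* Later, as each such device signals it is ready for switchover. */
    static void device_approves_switchover(void)
    {
        if (--switchover_ack_pending == 0) {
            send_rp_switchover_ack();
        }
    }

    int main(void)
    {
        device_needs_ack();
        device_needs_ack();
        device_approves_switchover();
        device_approves_switchover();   /* last approval triggers the ack */
        return 0;
    }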
2023-06-02  migration: stop tracking ram writes when cancelling background migration  (Fiona Ebner)
Currently, it is only done when the iteration finishes successfully. Not cleaning up the userfaultfd write protection can lead to symptoms/issues such as the process hanging in memmove or GDB not being able to attach. Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Message-Id: <20230526115908.196171-1-f.ebner@proxmox.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-06-02  migration: restore vmstate on migration failure  (Vladimir Sementsov-Ogievskiy)
1. Otherwise a failed migration just drops the guest-panicked state, which is not good for management software.
2. We do keep different paused states like guest-panicked during migration with the help of the global_state state.
3. We do restore the running state on the source when the migration is cancelled or fails.
4. The "postmigrate" state is documented as "guest is paused following a successful 'migrate'", so originally it was only for the successful path and we never documented the current behavior.

Let's restore paused states like guest-panicked in case of cancel or failure too. Allow the same transitions as for the inmigrate state.

This commit changes the behavior that was introduced by commit 42da5550d6 "migration: set state to post-migrate on failure" and provides a slightly different fix for the related https://bugzilla.redhat.com/show_bug.cgi?id=1355683

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-6-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-06-02  migration: switch from .vm_was_running to .vm_old_state  (Vladimir Sementsov-Ogievskiy)
No logic change here, only refactoring. That's a preparation for the next commit, where we finally restore the stopped vm state on migration failure or cancellation. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230517123752.21615-5-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-06-02  migration: never fail in global_state_store()  (Vladimir Sementsov-Ogievskiy)
Actually, global_state_store() can never fail. Let's get rid of the extra error paths. To make things clear, use the new runstate_get() and use the same approach for global_state_store() and global_state_store_running(). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230517123752.21615-3-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-18  migration: Use migration_transferred_bytes() to calculate rate_limit  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20230515195709.63843-9-quintela@redhat.com>
2023-05-18  migration: Move migration_total_bytes() to migration-stats.c  (Juan Quintela)
Once there, rename it to migration_transferred_bytes() and pass a QEMUFile instead of a migration object. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20230515195709.63843-7-quintela@redhat.com>
2023-05-18  migration: Move rate_limit_max and rate_limit_used to migration_stats  (Juan Quintela)
This way we can make them atomic and use these functions from anywhere. I also moved all functions that use rate_limit to migration-stats.

The functions got renamed; they are not qemu_file anymore:

    qemu_file_rate_limit       -> migration_rate_exceeded
    qemu_file_set_rate_limit   -> migration_rate_set
    qemu_file_get_rate_limit   -> migration_rate_get
    qemu_file_reset_rate_limit -> migration_rate_reset
    qemu_file_acct_rate_limit  -> migration_rate_account

Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-6-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
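A small C11 sketch of what "atomic rate-limit counters usable from any place" can look like; the function names loosely mirror the new migration_rate_* names, but the bodies and fields are simplified stand-ins:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint64_t rate_limit_max;    /* bytes allowed per period */
    static _Atomic uint64_t rate_limit_used;   /* bytes accounted this period */

    static void rate_set(uint64_t bytes_per_period)
    {
        atomic_store(&rate_limit_max, bytes_per_period);
    }

    static void rate_account(uint64_t len)     /* safe from any thread */
    {
        atomic_fetch_add(&rate_limit_used, len);
    }

    static bool rate_exceeded(void)
    {
        uint64_t max = atomic_load(&rate_limit_max);
        return max != 0 && atomic_load(&rate_limit_used) >= max;
    }

    static void rate_reset(void)               /* start of a new period */
    {
        atomic_store(&rate_limit_used, 0);
    }

    int main(void)
    {
        rate_set(1000);
        rate_account(600);
        rate_account(600);
        printf("exceeded: %d\n", rate_exceeded());   /* 1 */
        rate_reset();
        printf("exceeded: %d\n", rate_exceeded());   /* 0 */
        return 0;
    }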
2023-05-18  migration: Don't use INT64_MAX for unlimited rate  (Juan Quintela)
Define and use RATE_LIMIT_DISABLED instead. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Message-Id: <20230515195709.63843-2-quintela@redhat.com>
2023-05-18  migration: process_incoming_migration_co(): move colo part to colo  (Vladimir Sementsov-Ogievskiy)
Let's make a better public interface for COLO: instead of exposing colo_process_incoming_thread and the non-trivial logic around creating the thread, provide a simple colo_incoming_co() that hides the implementation from generic code. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230515130640.46035-4-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-18  migration: split migration_incoming_co  (Vladimir Sementsov-Ogievskiy)
Originally, migration_incoming_co was introduced by 25d0c16f625feb3b6 "migration: Switch to COLO process after finishing loadvm" to be able to enter, from COLO code, one specific yield point added by that same commit. Later, in 923709896b1b0 "migration: poll the cm event for destination qemu", we reused this variable to wake the migration incoming coroutine from RDMA code. That was a doubtful idea. Entering coroutines is a very fragile thing: you should be absolutely sure which yield point you are going to enter. I don't know how safe it is to enter during qemu_loadvm_state(), which I think is what RDMA wants to do, but RDMA certainly shouldn't enter the special COLO-related yield point. Likewise, COLO code doesn't want to enter during qemu_loadvm_state(); it wants to enter its own specific yield point. Also, when 8e48ac95865ac97d "COLO: Add block replication into colo process" added the bdrv_invalidate_cache_all() call (now called activate_all()), it became possible to enter the migration incoming coroutine during that call, which is wrong too. So, let's make these things separate and disjoint: loadvm_co for RDMA, non-NULL during qemu_loadvm_state(), and colo_incoming_co for COLO, non-NULL only around its specific yield. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20230515130640.46035-3-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-15  qemu-file: Remove total from qemu_file_total_transferred_*()  (Juan Quintela)
Function is already quite long. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20230508130909.65420-7-quintela@redhat.com>
2023-05-15  migration: We set the rate_limit by a second  (Juan Quintela)
That the implementation does the check every 100 milliseconds is an implementation detail that shouldn't be visible in the interface. Notice that all callers of qemu_file_set_rate_limit() either used the division or passed 0, so this change is a NOP. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20230508130909.65420-4-quintela@redhat.com>
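The interface change boils down to this arithmetic: callers hand in bytes per second, and the implementation divides by the number of 100 ms check periods per second. A tiny sketch with illustrative names (the constant names in the real code may differ):

    #include <stdint.h>
    #include <stdio.h>

    #define CHECK_PERIOD_MS   100
    #define PERIODS_PER_SEC   (1000 / CHECK_PERIOD_MS)

    /* Bytes-per-second limit from the interface, quota per internal period. */
    static uint64_t per_period_quota(uint64_t bytes_per_second)
    {
        return bytes_per_second / PERIODS_PER_SEC;
    }

    int main(void)
    {
        uint64_t bw = 125u * 1000 * 1000;          /* 125 MB/s, roughly 1 Gbit/s */
        printf("per-100ms quota: %llu bytes\n",
               (unsigned long long)per_period_quota(bw));
        return 0;
    }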
2023-05-15  migration: A rate limit value of 0 is valid  (Juan Quintela)
And it is the best way to not have a rate_limit. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20230508130909.65420-2-quintela@redhat.com>
2023-05-10  migration: block incoming colo when capability is disabled  (Vladimir Sementsov-Ogievskiy)
We generally require the same set of capabilities on the source and target. Let's require the x-colo capability to use COLO on the target. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Lukas Straub <lukasstraub2@web.de> Reviewed-by: Zhang Chen <chen.zhang@intel.com> Message-Id: <20230428194928.1426370-11-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-10  migration: disallow change capabilities in COLO state  (Vladimir Sementsov-Ogievskiy)
COLO is not listed as a running state in migrate_is_running(), so it's theoretically possible to disable the colo capability in the COLO state, making the unexpected error in migration_iteration_finish() reachable. Let's disallow that in qmp_migrate_set_capabilities. Then the error becomes absolutely unreachable: we can get into the COLO state only with the capability enabled and can't disable it while we are in the COLO state. So substitute the error with a simple assertion. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Message-Id: <20230428194928.1426370-10-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-10  migration: process_incoming_migration_co: simplify code flow around ret  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Zhang Chen <chen.zhang@intel.com> Message-Id: <20230428194928.1426370-7-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-10  migration: drop colo_incoming_thread from MigrationIncomingState  (Vladimir Sementsov-Ogievskiy)
have_colo_incoming_thread variable is unused. colo_incoming_thread can be local. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Zhang Chen <chen.zhang@intel.com> Message-Id: <20230428194928.1426370-6-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-10  build: move COLO under CONFIG_REPLICATION  (Vladimir Sementsov-Ogievskiy)
We don't allow the x-colo capability to be used when replication is not configured. So there is no reason to build COLO when replication is disabled; it's unusable in that case.

Note also that the check in migrate_caps_check() is not the only restriction: some functions in migration/colo.c will just abort if called without CONFIG_REPLICATION defined, for example:

    migration_iteration_finish()
       case MIGRATION_STATUS_COLO:
           migrate_start_colo_process()
               colo_process_checkpoint()
                   abort()

It could probably make sense to allow enabling COLO without REPLICATION, but this requires a deeper audit of the colo & replication code, which may be done later if needed.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Acked-by: Dr. David Alan Gilbert <dave@treblig.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230428194928.1426370-4-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-10  migration: Attempt disk reactivation in more failure scenarios  (Eric Blake)
Commit fe904ea824 added a fail_inactivate label, which tries to reactivate disks on the source after a failure while s->state == MIGRATION_STATUS_ACTIVE, but didn't actually use the label if qemu_savevm_state_complete_precopy() failed. This failure to reactivate is also present in commit 6039dd5b1c (also covering the new s->state == MIGRATION_STATUS_DEVICE state) and 403d18ae (ensuring s->block_inactive is set more reliably). Consolidate the two labels back into one - no matter HOW migration fails, if there is any chance we can reach vm_start() after having attempted inactivation, it is essential that we have tried to restart disks before then. This also makes the cleanup more like migrate_fd_cancel(). Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20230502205212.134680-1-eblake@redhat.com> Acked-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-05-08  migration: Initialize and cleanup decompression in migration.c  (Lukas Straub)
This fixes the use of compression together with COLO. Signed-off-by: Lukas Straub <lukasstraub2@web.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-05-05  migration: max_postcopy_bandwidth is a size parameter  (Juan Quintela)
So make everything that uses it uint64_t, not int64_t. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20230504113841.23130-2-quintela@redhat.com>
2023-05-03  migration: Rename ram_counters to mig_stats  (Juan Quintela)
migration_stats is just too long, and it is going to have more than ram counters in the near future. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Lukas Straub <lukasstraub2@web.de>
2023-05-03  migration: Move ram_stats to its own file migration-stats.[ch]  (Juan Quintela)
There is already include/qemu/stats.h, so stats.h was a bad idea. We want this file to not depend on anything else; we will move all the migration counters/stats to this struct. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Lukas Straub <lukasstraub2@web.de>
2023-04-27  migration: Make dirty_bytes_last_sync atomic  (Juan Quintela)
As we set its value, it needs to be operated on with atomics. We rename it from 'remaining' to better reflect its meaning. Statistics always return the real remaining bytes. This was used to store how many pages were dirty in the previous generation, so we can calculate the expected downtime as: dirty_bytes_last_sync / current_bandwidth. If we used the actual remaining bytes, we would see a very small value at the end of the iteration.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
I am open to using ram_bytes_remaining() in its only use and being more "optimistic" about the downtime.
Don't use __nocheck() functions.
Use stat64_get() now that it exists.
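The expected-downtime estimate mentioned above is a single division; a tiny sketch with example numbers (variable names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t dirty_bytes_last_sync = 512ull << 20;   /* example: 512 MiB still dirty */
        double bandwidth_bytes_per_sec = 1.25e9;         /* example: ~10 Gbit/s link */
        double expected_downtime = dirty_bytes_last_sync / bandwidth_bytes_per_sec;
        printf("expected downtime: %.3f s\n", expected_downtime);
        return 0;
    }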
2023-04-27  migration: Make dirty_pages_rate atomic  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
Don't use __nocheck() variants.
Use stat64_get().
2023-04-27  migration: Move migration_properties to options.c  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-27  migration: Remove MigrationState from block_cleanup_parameters()  (Juan Quintela)
This makes the function more regular with everything else. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-27  migration: Move block_cleanup_parameters() to options.c  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
2023-04-27  migration: Move migrate_set_block_incremental() to options.c  (Juan Quintela)
Once there, make it more regular and remove the need for the MigrationState parameter. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>