path: root/migration
2019-09-16  block: Remove unused masks  (Nir Soffer)
Replace the confusing usage ~BDRV_SECTOR_MASK with the clearer (BDRV_SECTOR_SIZE - 1). Remove BDRV_SECTOR_MASK and the unused BDRV_BLOCK_OFFSET_MASK, which was its last user. Signed-off-by: Nir Soffer <nsoffer@redhat.com> Message-id: 20190827185913.27427-3-nsoffer@redhat.com Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
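A minimal standalone sketch (not QEMU code) of the equivalence: assuming BDRV_SECTOR_MASK was defined as ~(BDRV_SECTOR_SIZE - 1), masking with (BDRV_SECTOR_SIZE - 1) extracts the same in-sector offset that ~BDRV_SECTOR_MASK did, just more obviously:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative values mirroring QEMU's definitions: a sector is
     * 1 << BDRV_SECTOR_BITS bytes. */
    #define BDRV_SECTOR_BITS 9
    #define BDRV_SECTOR_SIZE (1ULL << BDRV_SECTOR_BITS)

    int main(void)
    {
        uint64_t offset = 12345;

        /* For a power-of-two size, (size - 1) is a mask of the low bits,
         * so this yields the byte offset within the containing sector. */
        uint64_t in_sector = offset & (BDRV_SECTOR_SIZE - 1);

        printf("offset %llu is %llu bytes into its sector\n",
               (unsigned long long)offset, (unsigned long long)in_sector);
        return 0;
    }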
2019-09-12  migration: fix one typo in comment of function migration_total_bytes()  (Wei Yang)
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190912024957.11780-1-richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  migration/qemu-file: fix potential buf waste for extra buf_index adjustment  (Wei Yang)
In add_to_iovec(), qemu_fflush() will be called if the iovec is full. If this happens, buf_index is reset. Currently, this is not checked, and buf_index is always adjusted by the buffer size. This is not harmful, but it wastes some space in the file buffer. This patch makes add_to_iovec() return 1 when it has flushed the file, so the caller can check the return value to see whether it still needs to adjust buf_index. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190911132839.23336-3-richard.weiyang@gmail.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
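A toy illustration of the calling pattern (names and buffer handling are invented for the example, not QEMU's qemu-file code): the helper reports whether it flushed and reset the index, and the caller only advances buf_index when it did not:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy buffer, not QEMU's QEMUFile. */
    struct toy_file {
        char buf[16];
        size_t buf_index;
    };

    /* Returns 1 if the buffer was flushed, 0 otherwise. */
    static int toy_add(struct toy_file *f, const char *data, size_t len)
    {
        if (f->buf_index + len > sizeof(f->buf)) {
            /* Pretend the buffered bytes and the new chunk were written out. */
            printf("flush: wrote %zu buffered + %zu new bytes\n",
                   f->buf_index, len);
            f->buf_index = 0;
            return 1;               /* caller must not bump buf_index */
        }
        memcpy(f->buf + f->buf_index, data, len);
        return 0;                   /* data kept, caller bumps buf_index */
    }

    int main(void)
    {
        struct toy_file f = { .buf_index = 0 };
        const char *chunk = "0123456789";

        for (int i = 0; i < 3; i++) {
            if (toy_add(&f, chunk, strlen(chunk)) == 0) {
                f.buf_index += strlen(chunk);   /* only when not flushed */
            }
            printf("iteration %d: buf_index = %zu\n", i, f.buf_index);
        }
        return 0;
    }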
2019-09-12  migration/qemu-file: remove check on writev_buffer in qemu_put_compression_data  (Wei Yang)
The check on writev_buffer is done in qemu_fflush, which means a NULL writev_buffer is not harmful here. Removing the check also makes the code consistent, since all other calls to add_to_iovec() are made without it. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190911132839.23336-2-richard.weiyang@gmail.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  migration: Fix postcopy bw for recovery  (Peter Xu)
We have the max-postcopy-bandwidth parameter, but it is not applied correctly after a postcopy recovery, so the recovered migration stream will still eat the whole network bandwidth. Fix that up. Reported-by: Xiaohui Li <xiaohli@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190906130103.20961-1-peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  migration: Add validate-uuid capability  (Yury Kotov)
This capability implements simple validation of the migration source by UUID. It is useful for live migration between hosts. Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru> Message-Id: <20190903162246.18524-2-yury-kotov@yandex-team.ru> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  qemu-file: Rework old qemu_fflush comment  (Dr. David Alan Gilbert)
Commit 11808bb removed the non-iovec based write support, but the comment hung on. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190823103946.7388-1-dgilbert@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  migration: register_savevm_live doesn't need dev  (Dr. David Alan Gilbert)
Commit 78dd48df3 removed the last caller of register_savevm_live for an instantiable device (rather than a single system-wide device), so trim out the parameter. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190822115433.12070-1-dgilbert@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-12  migration: cleanup check on ops in savevm.handlers iterations  (Wei Yang)
During migration, there are several places that iterate over savevm.handlers, and on each iteration we need to check the entry's ops and related callbacks before invoking them. Generally, ops is the first element to check, and it only needs to be checked once. This patch cleans up all the related parts in savevm.c to check ops only once in those iterations. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190819032804.8579-1-richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
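A sketch of the pattern with simplified, illustrative types (not QEMU's actual SaveStateEntry/SaveVMHandlers definitions): the optional ops pointer is checked once per entry, and the rest of the iteration can dereference it freely:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct SaveVMHandlers {
        int (*is_active)(void *opaque);
        int (*save_live)(void *opaque);
    } SaveVMHandlers;

    typedef struct SaveStateEntry {
        const char *name;
        const SaveVMHandlers *ops;
    } SaveStateEntry;

    static int always_active(void *opaque) { (void)opaque; return 1; }
    static int save_live(void *opaque) { (void)opaque; return 0; }

    static const SaveVMHandlers ram_ops = {
        .is_active = always_active,
        .save_live = save_live,
    };

    int main(void)
    {
        SaveStateEntry entries[] = {
            { .name = "ram", .ops = &ram_ops },
            { .name = "legacy-entry", .ops = NULL },
        };

        for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
            SaveStateEntry *se = &entries[i];

            if (!se->ops) {                 /* checked once, up front */
                continue;
            }
            if (se->ops->is_active && !se->ops->is_active(NULL)) {
                continue;
            }
            if (se->ops->save_live) {
                printf("saving %s\n", se->name);
                se->ops->save_live(NULL);
            }
        }
        return 0;
    }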
2019-09-12  migration: multifd_send_thread always post p->sem_sync when error happen  (Ivan Ren)
When an error is encountered, multifd_send_thread should always notify whoever is waiting on it before exiting. Otherwise it may block migration_thread in multifd_send_sync_main forever. An example error follows:
-------------------------------------------------------------------------------
(gdb) bt
#0  0x00007f4d669dfa0b in do_futex_wait.constprop.1 () from /lib64/libpthread.so.0
#1  0x00007f4d669dfa9f in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2  0x00007f4d669dfb3b in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3  0x0000562ccf0a5614 in qemu_sem_wait (sem=sem@entry=0x562cd1b698e8) at util/qemu-thread-posix.c:319
#4  0x0000562ccecb4752 in multifd_send_sync_main (rs=<optimized out>) at /qemu/migration/ram.c:1099
#5  0x0000562ccecb95f4 in ram_save_iterate (f=0x562cd0ecc000, opaque=<optimized out>) at /qemu/migration/ram.c:3550
#6  0x0000562ccef43c23 in qemu_savevm_state_iterate (f=0x562cd0ecc000, postcopy=false) at migration/savevm.c:1189
#7  0x0000562ccef3dcf3 in migration_iteration_run (s=0x562cd09fabf0) at migration/migration.c:3131
#8  migration_thread (opaque=opaque@entry=0x562cd09fabf0) at migration/migration.c:3258
#9  0x0000562ccf0a4c26 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:502
#10 0x00007f4d669d9e25 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f4d6670635d in clone () from /lib64/libc.so.6
(gdb) f 4
#4  0x0000562ccecb4752 in multifd_send_sync_main (rs=<optimized out>) at /qemu/migration/ram.c:1099
1099            qemu_sem_wait(&p->sem_sync);
(gdb) list
1094        }
1095        for (i = 0; i < migrate_multifd_channels(); i++) {
1096            MultiFDSendParams *p = &multifd_send_state->params[i];
1097
1098            trace_multifd_send_sync_main_wait(p->id);
1099            qemu_sem_wait(&p->sem_sync);
1100        }
1101        trace_multifd_send_sync_main(multifd_send_state->packet_num);
1102    }
1103
(gdb) p i
$1 = 0
(gdb) p multifd_send_state->params[0].pending_job
$2 = 2    // It means the job before MULTIFD_FLAG_SYNC has already failed
(gdb) p multifd_send_state->params[0].quit
$3 = true
Signed-off-by: Ivan Ren <ivanren@tencent.com> Message-Id: <1567044996-2362-1-git-send-email-ivanren@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
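The shape of the fix, sketched as a standalone pthread program rather than QEMU's multifd code: a worker that hits an error still posts the semaphore its controller is waiting on, so the controller's sem_wait() can never hang forever:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem_sync;

    static void *worker(void *opaque)
    {
        int error = (opaque != NULL);   /* force the error path in this demo */

        if (error) {
            /* The important part: post before exiting so the waiter is never
             * stuck in sem_wait() waiting for a sync that will never come. */
            sem_post(&sem_sync);
            return NULL;
        }
        /* The normal path would also post once its sync work is done. */
        sem_post(&sem_sync);
        return NULL;
    }

    int main(void)
    {
        pthread_t th;

        sem_init(&sem_sync, 0, 0);
        pthread_create(&th, NULL, worker, &sem_sync);   /* non-NULL -> "error" */

        sem_wait(&sem_sync);        /* returns even though the worker failed */
        printf("sync point reached despite worker error\n");

        pthread_join(th, NULL);
        sem_destroy(&sem_sync);
        return 0;
    }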
2019-09-03  multifd: Use number of channels as listen backlog  (Juan Quintela)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-09-03  socket: Add num connections to qio_net_listener_open_sync()  (Juan Quintela)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-08-20  memory: fix race between TCG and accesses to dirty bitmap  (Paolo Bonzini)
There is a race between TCG and accesses to the dirty log:

      vCPU thread                  reader thread
      -----------------------      -----------------------
      TLB check -> slow path
        notdirty_mem_write
          write to RAM
          set dirty flag
                                   clear dirty flag
      TLB check -> fast path
                                   read memory
        write to RAM

Fortunately, in order to fix it, no change is required to the vCPU thread. However, the reader thread must delay the read until after the vCPU thread has finished the write. This can be approximated conservatively by run_on_cpu, which waits for the end of the current translation block. A similar technique is used by KVM, which has to do a synchronous TLB flush after doing a test-and-clear of the dirty-page flags. Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-08-16  qapi: implement block-dirty-bitmap-remove transaction action  (John Snow)
It is used to do transactional movement of the bitmap (which is possible in conjunction with the merge command). Transactional bitmap movement is needed in scenarios with an external snapshot, when we don't want to leave a copy of the bitmap in the base image. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190708220502.12977-3-jsnow@redhat.com [Edited "since" version to 4.2 --js] Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/dirty-bitmap: add bdrv_dirty_bitmap_get  (John Snow)
Add a public interface for get. While we're at it, rename "bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked". (There are more functions to rename to the bdrv_dirty_bitmap_VERB form, but they will wait until the conclusion of this series.) Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190709232550.10724-11-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  Merge remote-tracking branch 'remotes/armbru/tags/pull-include-2019-08-13-v2' into staging  (Peter Maydell)
Header cleanup patches for 2019-08-13

# gpg: Signature made Fri 16 Aug 2019 12:39:12 BST
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-include-2019-08-13-v2: (29 commits)
  sysemu: Split sysemu/runstate.h off sysemu/sysemu.h
  sysemu: Move the VMChangeStateEntry typedef to qemu/typedefs.h
  Include sysemu/sysemu.h a lot less
  Clean up inclusion of sysemu/sysemu.h
  numa: Move remaining NUMA declarations from sysemu.h to numa.h
  Include sysemu/hostmem.h less
  numa: Don't include hw/boards.h into sysemu/numa.h
  Include hw/boards.h a bit less
  Include hw/qdev-properties.h less
  Include qemu/main-loop.h less
  Include qemu/queue.h slightly less
  Include hw/hw.h exactly where needed
  Include qom/object.h slightly less
  Include exec/memory.h slightly less
  Include migration/vmstate.h less
  migration: Move the VMStateDescription typedef to typedefs.h
  Clean up inclusion of exec/cpu-common.h
  Include hw/irq.h a lot less
  typedefs: Separate incomplete types and function types
  ide: Include hw/ide/internal a bit less outside hw/ide/
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  sysemu: Split sysemu/runstate.h off sysemu/sysemu.h  (Markus Armbruster)
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related to the system-emulator. Evidence:

* It's included widely: in my "build everything" tree, changing sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h, down from 5400 due to the previous two commits).

* It pulls in more than a dozen additional headers.

Split stuff related to run state management into its own header sysemu/runstate.h. Touching sysemu/sysemu.h now recompiles some 850 objects. qemu/uuid.h also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400 to 4200. Touching new sysemu/runstate.h recompiles some 500 objects.

Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also add qemu/main-loop.h.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190812052359.30071-30-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [Unbreak OS-X build]
2019-08-16  Include sysemu/sysemu.h a lot less  (Markus Armbruster)
In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). hw/qdev-core.h includes sysemu/sysemu.h since recent commit e965ffa70a "qdev: add qdev_add_vm_change_state_handler()". This is a bad idea: hw/qdev-core.h is widely included. Move the declaration of qdev_add_vm_change_state_handler() to sysemu/sysemu.h, and drop the problematic include from hw/qdev-core.h. Touching sysemu/sysemu.h now recompiles some 1800 objects. qemu/uuid.h also drops from 5400 to 1800. A few more headers show smaller improvement: qemu/notify.h drops from 5600 to 5200, qemu/timer.h from 5600 to 4500, and qapi/qapi-types-run-state.h from 5500 to 5000. Cc: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20190812052359.30071-28-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2019-08-16  Include hw/qdev-properties.h less  (Markus Armbruster)
In my "build everything" tree, changing hw/qdev-properties.h triggers a recompile of some 2700 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). Many places including hw/qdev-properties.h (directly or via hw/qdev.h) actually need only hw/qdev-core.h. Include hw/qdev-core.h there instead. hw/qdev.h is actually pointless: all it does is include hw/qdev-core.h and hw/qdev-properties.h, which in turn includes hw/qdev-core.h. Replace the remaining uses of hw/qdev.h by hw/qdev-properties.h. While there, delete a few superfluous inclusions of hw/qdev-core.h. Touching hw/qdev-properties.h now recompiles some 1200 objects. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Daniel P. Berrangé" <berrange@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Message-Id: <20190812052359.30071-22-armbru@redhat.com>
2019-08-16  Include qemu/main-loop.h less  (Markus Armbruster)
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more. Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190812052359.30071-21-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  Include exec/memory.h slightly less  (Markus Armbruster)
Drop unnecessary inclusions from headers. Downgrade a few more to exec/hwaddr.h. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-17-armbru@redhat.com>
2019-08-16  Include migration/vmstate.h less  (Markus Armbruster)
In my "build everything" tree, changing migration/vmstate.h triggers a recompile of some 2700 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). hw/hw.h supposedly includes it for convenience. Several other headers include it just to get VMStateDescription. The previous commit made that unnecessary. Include migration/vmstate.h only where it's still needed. Touching it now recompiles only some 1600 objects. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-16-armbru@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  Clean up inclusion of exec/cpu-common.h  (Markus Armbruster)
migration/qemu-file.h neglects to include it even though it needs ram_addr_t. Fix that. Drop a few superfluous inclusions elsewhere. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-14-armbru@redhat.com>
2019-08-14  migration: add some multifd traces  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190814020218.1868-6-quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: Make global sem_sync semaphore by channel  (Juan Quintela)
This makes things easier to debug because, when you wait for all threads to arrive at that semaphore, you know which one you are waiting for. Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190814020218.1868-3-quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: Add traces for multifd terminate threads  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190814020218.1868-2-quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  qemu-file: move qemu_{get,put}_counted_string() declarations  (Marc-André Lureau)
Move migration helpers for strings under include/, so they can be used outside of migration/ Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190808150325.21939-2-marcandre.lureau@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: use mis->bh instead of allocating a QEMUBH  (Wei Yang)
The migration incoming side quits either in precopy or in postcopy. It is safe to use mis->bh for both instead of allocating a dedicated QEMUBH for postcopy. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190805053146.32326-1-richardw.yang@linux.intel.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: rename migration_bitmap_sync_range to ramblock_sync_dirty_bitmap  (Wei Yang)
Rename for better understanding of the code. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190808033155.30162-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: update ram_counters for multifd sync packet  (Ivan Ren)
A multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add these bytes to the ram_counters record. Signed-off-by: Ivan Ren <ivanren@tencent.com> Suggested-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <1564464816-21804-4-git-send-email-ivanren@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: add speed limit for multifd migration  (Ivan Ren)
Limit the speed of multifd migration through the common qemu-file speed limitation. Signed-off-by: Ivan Ren <ivanren@tencent.com> Message-Id: <1564464816-21804-3-git-send-email-ivanren@tencent.com> Reviewed-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: add qemu_file_update_transfer interface  (Ivan Ren)
Add qemu_file_update_transfer to just update bytes_xfer for speed limitation. This will be used by further migration features such as multifd migration. Signed-off-by: Ivan Ren <ivanren@tencent.com> Reviewed-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <1564464816-21804-2-git-send-email-ivanren@tencent.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: always initialise ram_counters for a new migration  (Ivan Ren)
This patch fixes a multifd migration bug in the migration speed calculation. The problem can be reproduced as follows:

1. start a vm and apply heavy memory write stress to prevent the vm from being successfully migrated to the destination
2. begin a migration with multifd
3. migrate for a long time [actually, this can be measured by transferred bytes]
4. cancel the migration
5. begin a new migration with multifd; the migration will directly run into the migration_completion phase

The reason is as follows. Migration updates bandwidth and s->threshold_size in migration_update_counters after BUFFER_DELAY time:

    current_bytes = migration_total_bytes(s);
    transferred = current_bytes - s->iteration_initial_bytes;
    time_spent = current_time - s->iteration_start_time;
    bandwidth = (double)transferred / time_spent;
    s->threshold_size = bandwidth * s->parameters.downtime_limit;

In multifd migration, migration_total_bytes returns qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes. s->iteration_initial_bytes is initialized to 0 at every new migration, but ram_counters is a global variable, so history migration data is accumulated. If ram_counters.multifd_bytes is big enough, it may make "pending_size >= s->threshold_size" become false in migration_iteration_run after the first migration_update_counters call. Signed-off-by: Ivan Ren <ivanren@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Wei Yang <richardw.yang@linux.intel.com> Suggested-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <1564741121-1840-1-git-send-email-ivanren@tencent.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
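A toy model of the bug and the fix (names are illustrative, not QEMU's): a global byte counter that survives a cancelled migration inflates the transferred-bytes calculation unless it is reinitialised at the start of every new migration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Global counter surviving across migrations, like ram_counters. */
    static struct {
        uint64_t multifd_bytes;
    } ram_counters;

    static uint64_t total_bytes(uint64_t stream_bytes)
    {
        return stream_bytes + ram_counters.multifd_bytes;
    }

    static double bandwidth(uint64_t initial_bytes, uint64_t stream_bytes,
                            double time_spent_ms)
    {
        uint64_t transferred = total_bytes(stream_bytes) - initial_bytes;
        return (double)transferred / time_spent_ms;     /* bytes per ms */
    }

    int main(void)
    {
        /* A first migration moved 4 GiB over multifd, then was cancelled. */
        ram_counters.multifd_bytes = 4ULL << 30;

        /* Second migration: iteration_initial_bytes starts at 0 again... */
        printf("without reset: %.0f bytes/ms (wildly inflated)\n",
               bandwidth(0, 1 << 20, 100.0));

        /* ...the fix: reinitialise the counters for every new migration. */
        memset(&ram_counters, 0, sizeof(ram_counters));
        printf("with reset:    %.0f bytes/ms\n",
               bandwidth(0, 1 << 20, 100.0));
        return 0;
    }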
2019-08-14  migration: remove unused field bytes_xfer  (Wei Yang)
MigrationState->bytes_xfer is only set to 0 in migrate_init(). Remove this unnecessary field. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190402003106.17614-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: use QEMU_IS_ALIGNED to replace host_offset  (Wei Yang)
Use QEMU_IS_ALIGNED for the check; it is more consistent with the other alignment calculations. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190806004648.8659-3-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
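A small standalone example; QEMU_IS_ALIGNED comes from qemu/osdep.h, and the definition shown here is a simplified stand-in:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for QEMU_IS_ALIGNED() from qemu/osdep.h. */
    #define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

    int main(void)
    {
        uint64_t host_ratio = 16;       /* e.g. target pages per host page */
        uint64_t run_start = 4096 + 5;

        /* Equivalent to computing the remainder by hand, but the intent is
         * clearer and matches the other alignment checks. */
        if (!QEMU_IS_ALIGNED(run_start, host_ratio)) {
            printf("%llu is not host-page aligned\n",
                   (unsigned long long)run_start);
        }
        return 0;
    }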
2019-08-14  migration/postcopy: simplify calculation of run_start and fixup_start_addr  (Wei Yang)
The purpose of the calculation is to find a HostPage which is partially dirty.

* fixup_start_addr points to the start of the HostPage to discard
* run_start points to the next HostPage to check

While in the middle stage, there are two cases for run_start:

* aligned with the HostPage: not partially dirty
* not aligned: partially dirty

When it is aligned, no work or calculation is necessary; run_start already points to the start of the next HostPage and is ready to continue. When it is not aligned, the calculation can be simplified to:

* fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio)
* run_start = QEMU_ALIGN_UP(run_start, host_ratio)

By doing so, run_start always points to the next HostPage to check, and fixup_start_addr always points to the HostPage to discard. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190806004648.8659-2-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
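A standalone example of the simplified calculation; the QEMU_ALIGN_DOWN/QEMU_ALIGN_UP definitions below are simplified stand-ins for the helpers in qemu/osdep.h:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins for QEMU's alignment helpers. */
    #define QEMU_ALIGN_DOWN(n, m) (((n) / (m)) * (m))
    #define QEMU_ALIGN_UP(n, m)   QEMU_ALIGN_DOWN((n) + (m) - 1, (m))

    int main(void)
    {
        uint64_t host_ratio = 16;   /* target pages per host page (example) */
        uint64_t run_start = 35;    /* not a multiple of 16: partially dirty */

        uint64_t fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio);
        uint64_t next_run_start = QEMU_ALIGN_UP(run_start, host_ratio);

        /* fixup_start_addr = 32 (start of the partially dirty host page),
         * next_run_start   = 48 (start of the next host page to check). */
        printf("fixup_start_addr=%llu run_start=%llu\n",
               (unsigned long long)fixup_start_addr,
               (unsigned long long)next_run_start);
        return 0;
    }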
2019-08-14  migration/postcopy: make PostcopyDiscardState a static variable  (Wei Yang)
In postcopy-ram.c, we provide three functions to discard a certain RAMBlock range:

* postcopy_discard_send_init()
* postcopy_discard_send_range()
* postcopy_discard_send_finish()

Currently, we allocate/deallocate a PostcopyDiscardState for each RAMBlock when sending discard information to the destination. This is not necessary, and the same data area can be reused for each RAMBlock. This patch makes PostcopyDiscardState a static variable. By doing so we:

1) avoid memory allocation and deallocation to the system
2) avoid potential failure of memory allocation
3) hide some details from its users

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190724010721.2146-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
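An illustrative sketch of the pattern with invented names (not QEMU's postcopy code): one static, reusable state object replaces a per-RAMBlock allocate/free cycle:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *ramblock_name;
        unsigned int nranges;
        unsigned int ncmds;
    } DiscardState;

    static DiscardState pds;            /* single reusable instance */

    static void discard_send_init(const char *name)
    {
        memset(&pds, 0, sizeof(pds));
        pds.ramblock_name = name;
    }

    static void discard_send_range(uint64_t start, uint64_t length)
    {
        /* Real code would queue and send the range; here we just count it. */
        (void)start;
        (void)length;
        pds.nranges++;
    }

    static void discard_send_finish(void)
    {
        pds.ncmds++;
        printf("%s: %u ranges sent in %u command(s)\n",
               pds.ramblock_name, pds.nranges, pds.ncmds);
        /* Nothing to free: the state is static and reused for the next block. */
    }

    int main(void)
    {
        discard_send_init("ram-node0");
        discard_send_range(0, 4096);
        discard_send_range(8192, 4096);
        discard_send_finish();
        return 0;
    }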
2019-08-14  migration: extract ram_load_precopy  (Wei Yang)
After this cleanup, it is clear to the audience that there are two cases for ram_load:

* precopy
* postcopy

And it is not necessary to check postcopy_running on each iteration for precopy. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190725002023.2335-3-richardw.yang@linux.intel.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: return -EINVAL directly when version_id mismatch  (Wei Yang)
It is not reasonable to continue when the version_id does not match. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190722075339.25121-2-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration: equation is more proper than and to check LOADVM_QUIT  (Wei Yang)
LOADVM_QUIT allows a command to quit all layers of nested loadvm loops, but the current return value check is not quite right even though it works now. The current check, "ret & LOADVM_QUIT", is true whenever bit[0] is 1, and in particular when ret is -1, which is used to indicate an error while handling a command. Since there is only one place that returns LOADVM_QUIT and no other combination of return values, using "ret == LOADVM_QUIT" is more proper. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190718064257.29218-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
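A standalone demonstration of why the bitwise test misfires on -1; LOADVM_QUIT is shown as 1 here purely for illustration:

    #include <stdio.h>

    /* Illustrative value for the demonstration. */
    #define LOADVM_QUIT 1

    int main(void)
    {
        int ret = -1;               /* error from handling a command */

        /* Bitwise test: -1 has every bit set, so this wrongly looks like QUIT. */
        printf("ret & LOADVM_QUIT  -> %s\n", (ret & LOADVM_QUIT) ? "quit" : "no");

        /* Equality test: only an explicit LOADVM_QUIT return triggers it. */
        printf("ret == LOADVM_QUIT -> %s\n", (ret == LOADVM_QUIT) ? "quit" : "no");
        return 0;
    }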
2019-08-14  migration: just pass RAMBlock is enough  (Wei Yang)
RAMBlock->used_length is always passed to migration_bitmap_sync_range(), but it can be retrieved from the RAMBlock itself, so passing just the RAMBlock is enough. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190718012547.16373-1-richardw.yang@linux.intel.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
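A sketch of the API change with simplified types (not QEMU's RAMBlock): the length parameter disappears because the block already carries used_length:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        const char *idstr;
        uint64_t used_length;
    } RAMBlock;

    /* before: sync_range(rb, rb->used_length) at every call site
     * after:  the helper reads the length from the block itself   */
    static uint64_t sync_dirty_bitmap(RAMBlock *rb)
    {
        uint64_t npages = rb->used_length >> 12;    /* 4 KiB pages, say */
        printf("syncing %s: %llu pages\n", rb->idstr,
               (unsigned long long)npages);
        return npages;
    }

    int main(void)
    {
        RAMBlock rb = { .idstr = "pc.ram", .used_length = 1ULL << 30 };
        sync_dirty_bitmap(&rb);
        return 0;
    }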
2019-08-14  migration: use migration_in_postcopy() to check POSTCOPY_ACTIVE  (Wei Yang)
Use the common helper function to check the state. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190719071129.11880-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: start_postcopy could be true only when migrate_postcopy() return true  (Wei Yang)
There is only one place that sets start_postcopy to true, qmp_migrate_start_postcopy(), which makes sure start_postcopy can only be set to true when migrate_postcopy() returns true. So start_postcopy being true implies that migrate_postcopy() is true. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190718083747.5859-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: PostcopyState is already set in loadvm_postcopy_handle_advise()  (Wei Yang)
PostcopyState is already set to ADVISE at the beginning of loadvm_postcopy_handle_advise(). Remove the redundant set. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190711080816.6405-1-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/savevm: move non SaveStateEntry condition check out of iteration  (Wei Yang)
in_postcopy and iterable_only are not SaveStateEntry specific; it is more proper to check them outside of the iteration. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190709140924.13291-4-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/savevm: split qemu_savevm_state_complete_precopy() into two parts  (Wei Yang)
This is a preparation patch for further cleanup. No functional change; just wrap the two major parts of qemu_savevm_state_complete_precopy() into functions. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190709140924.13291-3-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/savevm: flush file for iterable_only case  (Wei Yang)
It is proper to flush the file even in the iterable_only case. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190709140924.13291-2-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: do_fixup is true when host_offset is non-zero  (Wei Yang)
This means it is not necessary to keep an extra variable to hold this condition; using host_offset directly is fine. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190710050814.31344-3-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: reduce one operation to calculate fixup_start_addr  (Wei Yang)
Use the same approach as for run_end to calculate run_start, which saves one operation. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190710050814.31344-2-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-08-14  migration/postcopy: discard_length must not be 0  (Wei Yang)
Since we break out of the loop when there are no more pages to discard, we are sure the following code will find some page to discard. It is not necessary to check it again. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Message-Id: <20190627020822.15485-4-richardw.yang@linux.intel.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>