path: root/arch_init.c
2013-06-27  rdma: introduce qemu_update_position()  (Michael R. Hines)
RDMA writes happen asynchronously, and thus the performance accounting also needs to be able to occur asynchronously. This allows anybody to call into savevm.c to update f->pos, as well as into arch_init.c to update the acct_info structure, with up-to-date values when the RDMA transfer actually completes.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

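A minimal, self-contained sketch of the idea; the struct layouts below are hypothetical stand-ins, not QEMU's actual QEMUFile or acct_info definitions:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for QEMUFile's position field and the
     * migration accounting structure. */
    typedef struct {
        int64_t pos;
    } FileSketch;

    typedef struct {
        uint64_t norm_pages;
    } AcctSketch;

    /* Called from the completion path once the asynchronous RDMA write
     * has actually finished, so bandwidth calculations stay accurate. */
    static void update_position(FileSketch *f, AcctSketch *acct,
                                size_t bytes, uint64_t pages)
    {
        f->pos += bytes;
        acct->norm_pages += pages;
    }
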
2013-06-27  migration: do not overwrite zero pages  (Peter Lieven)
On incoming migration, do not memset pages to zero if they already read as zero: doing so would allocate a new zero page and consume memory unnecessarily. Even if we madvise a MADV_DONTNEED later, this only deallocates the memory asynchronously.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>

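A sketch of the check, assuming nothing about QEMU's internals (helper names are illustrative, and page_size is assumed to be a multiple of sizeof(unsigned long)):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Word-at-a-time test for an all-zero page. */
    static bool page_is_zero(const void *host, size_t page_size)
    {
        const unsigned long *p = host;
        for (size_t i = 0; i < page_size / sizeof(*p); i++) {
            if (p[i]) {
                return false;
            }
        }
        return true;
    }

    /* Writing zeros into a page that already reads as zero would fault
     * in a fresh private page; checking first leaves the mapping alone. */
    static void load_zero_page(void *host, size_t page_size)
    {
        if (!page_is_zero(host, page_size)) {
            memset(host, 0, page_size);
        }
    }
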
2013-06-27  Revert "migration: do not sent zero pages in bulk stage"  (Peter Lieven)
Not sending zero pages breaks migration if a page is zero at the source but not at the destination. This can e.g. happen if different BIOS versions are used at source and destination. It has also been reported that migration on pseries is completely broken with this patch. This effectively reverts commit f1c72795af573b24a7da5eb52375c9aba8a37972.
Conflicts:
	arch_init.c
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-06-27  arch_init/ram_load: add error message for block length mismatch  (Alon Levy)
Makes it easier to debug situations where the source and target have different ram blocks in a device and migration fails due to that, for instance a BAR size change on a PCI device.
Signed-off-by: Alon Levy <alevy@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

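A self-contained sketch of such a diagnostic, with a hypothetical message format (QEMU's actual wording may differ):

    #include <inttypes.h>
    #include <stdio.h>

    /* Name the RAM block and both lengths so that e.g. a PCI BAR size
     * change between source and target is obvious from the log. */
    static int check_block_length(const char *id, uint64_t incoming,
                                  uint64_t local)
    {
        if (incoming != local) {
            fprintf(stderr,
                    "Length mismatch: %s: %" PRIu64 " in != %" PRIu64 "\n",
                    id, incoming, local);
            return -1;
        }
        return 0;
    }
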
2013-06-17  Merge remote-tracking branch 'pmaydell/configury.next' into staging  (Anthony Liguori)
# By Paolo Bonzini (4) and others
# Via Peter Maydell
* pmaydell/configury.next:
  ppc: Remove CONFIG_FDT conditionals
  microblaze: Remove CONFIG_FDT conditionals
  arm: Remove CONFIG_FDT conditionals
  configure: Require libfdt for arm, ppc, microblaze softmmu targets
  configure: dtc: Probe for libfdt_env.h
  build: drop TARGET_TYPE
  main: use TARGET_ARCH only for the target-specific #define
  build: do not use TARGET_ARCH
  build: rename TARGET_ARCH2 to TARGET_NAME
  Add a stp file for usage from build directory
Message-id: 1371221594-11556-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-06-14  build: drop TARGET_TYPE  (Paolo Bonzini)
Just use the TARGET_NAME free string.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1370349928-20419-6-git-send-email-pbonzini@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2013-06-14  main: use TARGET_ARCH only for the target-specific #define  (Paolo Bonzini)
Everything else needs to match the executable name, which is TARGET_NAME.

Before:
  $ sh4eb-linux-user/qemu-sh4eb --help
  usage: qemu-sh4 [options] program [arguments...]
  Linux CPU emulator (compiled for sh4 emulation)

After:
  $ sh4eb-linux-user/qemu-sh4eb --help
  usage: qemu-sh4eb [options] program [arguments...]
  Linux CPU emulator (compiled for sh4eb emulation)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1370349928-20419-5-git-send-email-pbonzini@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2013-06-14  smbios: Clean up smbios_add_field() parameters  (Markus Armbruster)
Having size precede the associated pointer is odd. Swap them, and fix up the types.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-5-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-06-14  smbios: Convert to error_report()  (Markus Armbruster)
Improves diagnostics from ad hoc messages like

  Invalid SMBIOS UUID string

to

  qemu-system-x86_64: -smbios type=1,uuid=gaga: Invalid UUID

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-4-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

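To see where the improved prefix comes from, here is a runnable stand-in for error_report(); this is a simplification, as QEMU's real helper also prepends the offending command-line location (the "-smbios type=1,uuid=gaga:" part above):

    #include <stdarg.h>
    #include <stdio.h>

    static const char *progname = "qemu-system-x86_64";

    /* Simplified stand-in: prefix every diagnostic with the program
     * name, so ad hoc messages become uniform. */
    static void error_report(const char *fmt, ...)
    {
        va_list ap;
        fprintf(stderr, "%s: ", progname);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
    }
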
2013-05-24  memory: make memory_global_sync_dirty_bitmap take an AddressSpace  (Paolo Bonzini)
Since this is a MemoryListener operation, it only makes sense on an AddressSpace granularity.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2013-04-29  audio: look for the ISA and PCI buses  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1366303444-24620-8-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-04-29  audio: remove HAS_AUDIO  (Paolo Bonzini)
Several targets can have wavcapture/-soundhw support via PCI cards. HAS_AUDIO is a useless limitation, remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1366303444-24620-4-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-04-29  audio: remove the need for audio card CONFIG_* symbols  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1366303444-24620-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-04-15  include: avoid useless includes of exec/ headers  (Paolo Bonzini)
Headers in include/exec/ are for the deepest innards of QEMU; they should almost never be included directly.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2013-04-15  acpi: move declarations from pc.h to acpi.h  (Michael S. Tsirkin)
Functions defined in acpi/ should be declared in acpi.h.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2013-04-08  hw: move headers to include/  (Paolo Bonzini)
Many of these should be cleaned up with proper qdev-/QOM-ification. Right now there are many catch-all headers in include/hw/ARCH depending on cpu.h, and this makes it necessary to compile these files per-target. However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2013-04-04  like acpi_table_install(), acpi_table_add() should propagate Errors  (Laszlo Ersek)
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1363821803-3380-8-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-04-04  acpi_table_add(): accept QemuOpts and parse it with OptsVisitor  (Laszlo Ersek)
As one consequence, strtok() -- which modifies its argument -- is replaced with g_strsplit().
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1363821803-3380-6-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-04-04  strip some whitespace  (Laszlo Ersek)
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1363821803-3380-2-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

2013-03-26  Use qemu_put_buffer_async for guest memory pages  (Orit Wasserman)
This will remove an unneeded copy of guest memory pages. For the page header and device state we still copy the data to the static buffer; the other option is to allocate the memory on demand, which is more expensive.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-03-26  migration: use XBZRLE only after bulk stage  (Peter Lieven)
At the beginning of migration all pages are marked dirty, and in the first round a bulk migration of all pages is performed. Currently all these pages are copied to the page cache regardless of whether they are frequently updated or not. This doesn't make sense, since most of these pages are never transferred again.

This patch changes the XBZRLE transfer to only be used after the bulk stage has been completed. That means a page is added to the page cache the second time it is transferred, and XBZRLE can benefit from the third transfer on. Since the page cache is likely smaller than the number of pages, it is also likely that in the second round the page is missing from the cache due to collisions in the bulk phase. On the other hand, a lot of unnecessary mallocs, memdups and frees are saved.

The following results were taken earlier while executing the test program from docs/xbzrle.txt, (+) with the patch and (-) without. (Thanks to Eric Blake for reformatting and comments.)

  + total time: 22185 milliseconds
  - total time: 22410 milliseconds
      Shaved 0.3 seconds, better than 1%!
  + downtime: 29 milliseconds
  - downtime: 21 milliseconds
      Not sure why downtime seemed worse, but probably not the end of the world.
  + transferred ram: 706034 kbytes
  - transferred ram: 721318 kbytes
      Fewer bytes sent - good.
  + remaining ram: 0 kbytes
  - remaining ram: 0 kbytes
  + total ram: 1057216 kbytes
  - total ram: 1057216 kbytes
  + duplicate: 108556 pages
  - duplicate: 105553 pages
  + normal: 175146 pages
  - normal: 179589 pages
  + normal bytes: 700584 kbytes
  - normal bytes: 718356 kbytes
      Fewer normal bytes...
  + cache size: 67108864 bytes
  - cache size: 67108864 bytes
  + xbzrle transferred: 3127 kbytes
  - xbzrle transferred: 630 kbytes
      ...and more compressed pages sent - good.
  + xbzrle pages: 117811 pages
  - xbzrle pages: 21527 pages
  + xbzrle cache miss: 18750
  - xbzrle cache miss: 179589
      And very good improvement on the cache miss rate.
  + xbzrle overflow: 0
  - xbzrle overflow: 0

Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

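The gating itself is a one-line decision; a minimal sketch (names hypothetical, shown purely for illustration):

    #include <stdbool.h>

    /* During the bulk round every page is sent once anyway, so caching
     * it would only cost mallocs/memdups without saving traffic. A page
     * enters the cache on its second transfer; XBZRLE can pay off from
     * the third transfer on. */
    static bool should_use_xbzrle(bool bulk_stage, bool xbzrle_enabled)
    {
        return xbzrle_enabled && !bulk_stage;
    }
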
2013-03-26  migration: do not search dirty pages in bulk stage  (Peter Lieven)
Avoid searching for dirty pages; just increment the page offset. All pages are dirty anyway.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

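A self-contained sketch of the shortcut; the bitmap layout and names are illustrative, not QEMU's:

    #include <stdbool.h>
    #include <stddef.h>

    /* In the bulk round every page is dirty, so "the next dirty page"
     * is simply the current offset and the bitmap walk is skipped. */
    static size_t next_dirty_page(const unsigned char *dirty_bitmap,
                                  size_t start, size_t npages,
                                  bool bulk_stage)
    {
        if (bulk_stage) {
            return start;
        }
        for (size_t i = start; i < npages; i++) {
            if (dirty_bitmap[i / 8] & (1u << (i % 8))) {
                return i;
            }
        }
        return npages;   /* no dirty page left */
    }
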
2013-03-26  migration: do not sent zero pages in bulk stage  (Peter Lieven)
During the bulk stage of ram migration, if a page is a zero page, do not send it at all; the memory at the destination reads as zero anyway. Even if there is an madvise with QEMU_MADV_DONTNEED at the target upon receipt of a zero page, I have observed that the target starts swapping if the memory is overcommitted. It seems that the pages are dropped asynchronously. This patch also updates QMP to return the number of skipped pages in MigrationStats.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-03-26  migration: add an indicator for bulk state of ram migration  (Peter Lieven)
The first round of ram transfer is special, since all pages are dirty and thus all memory pages are transferred to the target. This patch adds a boolean variable to track this stage.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-03-26  migration: search for zero instead of dup pages  (Peter Lieven)
Virtually all dup pages are zero pages. Remove the special is_dup_page() function and use the optimized buffer_find_nonzero_offset() function instead. Here buffer_find_nonzero_offset() is used directly to avoid the unnecessary additional checks in buffer_is_zero(). The raw performance gain checking 1 GByte of zeroed memory over is_dup_page() is approx. 10-12% with SSE2 and 8-10% with unsigned long arithmetic.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

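The unsigned-long variant of the idea is easy to sketch; QEMU's buffer_find_nonzero_offset() adds vectorized (e.g. SSE2) and alignment handling on top of this:

    #include <stdbool.h>
    #include <stddef.h>

    /* Scan word by word and report the offset of the first nonzero
     * word; len is assumed to be a multiple of sizeof(unsigned long). */
    static size_t find_nonzero_offset(const void *buf, size_t len)
    {
        const unsigned long *p = buf;
        size_t i, words = len / sizeof(*p);

        for (i = 0; i < words; i++) {
            if (p[i]) {
                break;
            }
        }
        return i * sizeof(*p);   /* == len when the buffer is all zero */
    }

    static bool is_zero_page(const void *page, size_t size)
    {
        return find_nonzero_offset(page, size) == size;
    }
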
2013-03-26  move vector definitions to qemu-common.h  (Peter Lieven)
Vector optimizations will now be used at various places, not just in is_dup_page() in arch_init.c.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-03-23  Add top level changes for moxie  (Anthony Green)
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>

2013-03-11  page_cache: dup memory on insert  (Peter Lieven)
The page cache frees all data on finish, on resize, and if there is a collision on insert. So it should be the cache's responsibility to dup the data that is stored in the cache.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>

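A sketch of the ownership rule, with a cache entry reduced to a single struct (QEMU's page cache is hash-indexed; the types here are illustrative):

    #include <glib.h>
    #include <stdint.h>

    typedef struct {
        uint64_t addr;
        uint8_t *data;   /* owned by the cache */
    } CacheItem;

    /* The cache takes a private copy on insert, so it can free entries
     * on collision, resize, or fini without touching the caller's
     * buffer. */
    static void cache_insert(CacheItem *it, uint64_t addr,
                             const uint8_t *pdata, size_t page_size)
    {
        g_free(it->data);                       /* drop a colliding entry */
        it->data = g_memdup(pdata, page_size);  /* private copy */
        it->addr = addr;
    }
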
2013-03-11  migration: run setup callbacks out of big lock  (Paolo Bonzini)
Only the migration_bitmap_sync() call needs the iothread lock.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-03-11  migration: run pending/iterate callbacks out of big lock  (Paolo Bonzini)
This makes it possible to do blocking writes directly to the socket, with no buffer in the middle. For RAM, only the migration_bitmap_sync() call needs the iothread lock. For block migration, it is needed by the block layer (including bdrv_drain_all and dirty bitmap access), but because some code is shared between iterate and complete, all of mig_save_device_dirty is run with the lock taken.

In the savevm case, the iterate callback runs within the big lock. This is annoying because it complicates the rules. Luckily we do not need to do anything about it: the RAM iterate callback does not need the iothread lock, and block migration never runs during savevm.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

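The shape of the change, with a plain pthread mutex standing in for QEMU's iothread ("big") lock:

    #include <pthread.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static void migration_bitmap_sync_stub(void) { /* needs the lock */ }
    static void write_pages_to_socket_stub(void) { /* may block */ }

    /* Only the bitmap sync runs under the big lock; the (possibly
     * blocking) socket writes happen outside it, with no intermediate
     * buffer. */
    static void ram_save_iterate_sketch(void)
    {
        pthread_mutex_lock(&big_lock);
        migration_bitmap_sync_stub();
        pthread_mutex_unlock(&big_lock);

        write_pages_to_socket_stub();
    }
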
2013-02-22  migration: calculate expected_downtime  (Juan Quintela)
We removed the calculation in commit e4ed1541ac9413eac494a03532e34beaf8a7d1c5. Now we add it back. We need to create dirty_bytes_rate because we can't include cpu-all.h from migration.c, and there is no other way to include TARGET_PAGE_SIZE.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

2013-02-01  Allow XBZRLE decoding without enabling the capability  (Orit Wasserman)
Before this fix we couldn't load a guest from an XBZRLE-compressed file. For example: the user activated the XBZRLE capability, ran migrate -d "exec:gzip -c > vm.gz", and then was unable to load vm.gz, getting an error instead.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2013-01-17  Protect migration_bitmap_sync() with the ramlist lock  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

2013-01-17  Unlock ramlist lock also in error case  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

2012-12-20  ram: refactor ram_save_block() return value  (Juan Quintela)
It could only return 0 if we only found dirty xbzrle pages that hadn't changed (i.e. they were written with the same content). We don't care about that case; it is the same as nothing dirty. So now the return value of the function is how much it has written, nothing else. Adjust callers.

We also make ram_save_iterate() return the number of transferred bytes, not the number of transferred pages.

Signed-off-by: Juan Quintela <quintela@redhat.com>

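The new contract, sketched (the body is elided; the skeleton and name suffix are illustrative):

    /* Returns the number of bytes written for this block; 0 now simply
     * means "nothing sent", which folds the unchanged-XBZRLE-page case
     * into the nothing-dirty case. */
    static int ram_save_block_sketch(void)
    {
        int bytes_sent = 0;
        /* ... find the next dirty page and send it, adding the header
         * and payload sizes to bytes_sent ... */
        return bytes_sent;
    }
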
2012-12-20  ram: account the amount of transferred ram better  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  ram: optimize migration bitmap walking  (Juan Quintela)
Instead of testing each page individually, we search for the next dirty page with a bitmap operation. We have to reorganize the code to move from a "for" loop to a while(dirty) loop.
Signed-off-by: Juan Quintela <quintela@redhat.com>

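A self-contained sketch of the restructuring; QEMU uses its own word-at-a-time bitmap helpers, while the linear scan here keeps the example short:

    #include <stddef.h>

    #define BITS_PER_WORD (8 * sizeof(unsigned long))

    /* Stand-in for a find_next_bit()-style helper: return the index of
     * the next set bit at or after 'start', or nbits if none is left. */
    static size_t next_set_bit(const unsigned long *bm, size_t start,
                               size_t nbits)
    {
        for (size_t i = start; i < nbits; i++) {
            if (bm[i / BITS_PER_WORD] & (1ul << (i % BITS_PER_WORD))) {
                return i;
            }
        }
        return nbits;
    }

    /* The "for" loop over all pages becomes a while(dirty) loop that
     * jumps from set bit to set bit. */
    static void walk_dirty(unsigned long *bm, size_t nbits,
                           void (*send_page)(size_t))
    {
        size_t page = next_set_bit(bm, 0, nbits);
        while (page < nbits) {
            send_page(page);
            bm[page / BITS_PER_WORD] &= ~(1ul << (page % BITS_PER_WORD));
            page = next_set_bit(bm, page + 1, nbits);
        }
    }
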
2012-12-20  ram: Use memory_region_test_and_clear_dirty  (Juan Quintela)
This avoids having to do two walks over the dirty bitmap: once reading the dirty bits, and another cleaning them.
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  ram: Add last_sent_block  (Juan Quintela)
This is the last block from which we have sent data.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  ram: rename last_block to last_seen_block  (Juan Quintela)
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  savevm: New save live migration method: pending  (Juan Quintela)
The code just now does (simplified for clarity):

    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

The problem here is that qemu_savevm_state_iterate() returns 1 when it knows that the remaining memory to send takes less than the max downtime. But this means that we could end up spending 2x max_downtime: one downtime in qemu_savevm_state_iterate, and the other in qemu_savevm_state_complete.

The code is changed to:

    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

So what we do is: at the current network speed, we calculate the maximum number of bytes we can send: max_size. Then we ask every save_live section how much it has pending. If that is less than max_size, we move to the complete phase; otherwise we do an iterate one.

This makes things much simpler, because now individual sections don't have to calculate the bandwidth (it was impossible to do right from there).

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

2012-12-20  protect the ramlist with a separate mutex  (Umesh Deshpande)
Add the new mutex that protects shared state between ram_save_live and the iothread. If the iothread mutex has to be taken together with the ramlist mutex, the iothread shall always be _outside_.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

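The documented lock order, sketched with pthread mutexes standing in for QEMU's two locks:

    #include <pthread.h>

    static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ramlist_lock = PTHREAD_MUTEX_INITIALIZER;

    /* When both locks are needed, the iothread (big) lock is taken
     * first and the ramlist lock nests inside it, never the other way
     * around; a consistent order is what rules out deadlock. */
    static void touch_ram_list_from_iothread(void)
    {
        pthread_mutex_lock(&iothread_lock);
        pthread_mutex_lock(&ramlist_lock);
        /* ... mutate the shared RAM list ... */
        pthread_mutex_unlock(&ramlist_lock);
        pthread_mutex_unlock(&iothread_lock);
    }
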
2012-12-20  add a version number to ram_list  (Umesh Deshpande)
This will be used to detect if last_block might have become invalid across different calls to ram_save_live.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

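A sketch of how such a version check can be used (field and variable names hypothetical):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t version;    /* bumped whenever the block list changes */
        /* ... list of blocks ... */
    } RamListSketch;

    static const void *last_block;   /* may dangle if the list changed */
    static uint32_t last_version;

    /* At the start of each ram_save_live pass: if the list version
     * moved, forget the cached block pointer and restart from the head
     * instead of dereferencing a possibly stale pointer. */
    static void revalidate_last_block(const RamListSketch *rl)
    {
        if (rl->version != last_version) {
            last_block = NULL;
            last_version = rl->version;
        }
    }
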
2012-12-20  exec: sort the memory from biggest to smallest  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  exec: change RAM list to a TAILQ  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-20  exec: change ramlist from MRU order to a 1-item cache  (Paolo Bonzini)
Most of the time, only 2 items will be active (from/to for a string operation, or code/data). But TCG guests likely won't have gigabytes of memory, so this actually goes down to 1 item.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

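A 1-item lookup cache in front of a linear list search looks roughly like this (the types are hypothetical, not QEMU's RAMBlock):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct Block {
        uintptr_t offset, length;
        struct Block *next;
    } Block;

    static Block *cached_block;

    static Block *find_block(Block *head, uintptr_t addr)
    {
        /* Unsigned wraparound makes this a single range check. */
        if (cached_block &&
            addr - cached_block->offset < cached_block->length) {
            return cached_block;               /* the common case */
        }
        for (Block *b = head; b; b = b->next) {
            if (addr - b->offset < b->length) {
                cached_block = b;              /* remember the winner */
                return b;
            }
        }
        return NULL;
    }
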
2012-12-20  migration: fix migration_bitmap leak  (Paolo Bonzini)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2012-12-19  softmmu: move include files to include/sysemu/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2012-12-19  misc: move include files to include/qemu/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2012-12-19  migration: move include files to include/migration/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>