Age | Commit message | Author |
|
Update qemu_fflush and stdio_close to use writev ops if they are
available, using the buffers stored in the iovec.
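A minimal sketch of the shape of the change, with made-up type and
field names (the real QEMUFile structure and its ops table differ):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    struct example_ops {
        ssize_t (*writev_buffer)(void *opaque, struct iovec *iov, int iovcnt);
        ssize_t (*put_buffer)(void *opaque, const unsigned char *buf,
                              int64_t pos, size_t size);
    };

    struct example_file {
        const struct example_ops *ops;
        void *opaque;
        struct iovec iov[64];      /* chunks queued for output */
        int iovcnt;
        unsigned char buf[32768];  /* legacy linear buffer */
        size_t buf_index;          /* bytes used in buf */
        int64_t pos;               /* stream position */
    };

    /* Flush buffered data, preferring the vectored op when available. */
    static void example_fflush(struct example_file *f)
    {
        if (f->ops->writev_buffer) {
            f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
            f->iovcnt = 0;
        } else {
            f->ops->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
        }
        f->buf_index = 0;
    }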
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
All data is still copied into the static buffer.
Adjacent iovecs are coalesced so we send one big buffer
instead of many small buffers.
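The coalescing step itself just compares the new chunk's address with
the end of the previous iovec entry. A standalone sketch (illustrative
names, not the actual QEMU helper):

    #include <stddef.h>
    #include <sys/uio.h>

    /* Queue len bytes at buf, merging with the previous entry when the
     * new chunk is contiguous with it in memory. */
    static void add_to_iovec(struct iovec *iov, int *iovcnt, int max,
                             const void *buf, size_t len)
    {
        if (*iovcnt > 0 &&
            (const char *)iov[*iovcnt - 1].iov_base +
                          iov[*iovcnt - 1].iov_len == buf) {
            iov[*iovcnt - 1].iov_len += len;      /* adjacent: grow it */
        } else if (*iovcnt < max) {
            iov[*iovcnt].iov_base = (void *)buf;  /* start a new entry */
            iov[*iovcnt].iov_len = len;
            (*iovcnt)++;
        }
        /* A real implementation would flush when the array fills up. */
    }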
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
This will allow us to write an iovec.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
At the beginning of migration all pages are marked dirty and
in the first round a bulk migration of all pages is performed.
Currently all these pages are copied to the page cache regardless
of whether they are frequently updated or not. This doesn't make
sense since most of these pages are never transferred again.
This patch changes the XBZRLE transfer to only be used after
the bulk stage has been completed. That means a page is added
to the page cache the second time it is transferred, and XBZRLE
can benefit from the third transfer onwards.
Since the page cache is likely smaller than the number of pages,
it's also likely that in the second round a page is missing from
the cache due to collisions in the bulk phase.
On the other hand, a lot of unnecessary mallocs, memdups and frees
are saved.
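The effect on the per-page decision can be pictured with a standalone
sketch like the one below (not the actual QEMU code; the measurements
follow after it):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    enum send_method { SEND_ZERO, SEND_XBZRLE, SEND_FULL };

    /* The page cache is only consulted (and filled) once the bulk
     * stage is over, so pages that are sent exactly once never enter
     * it. */
    static enum send_method choose_send_method(const unsigned char *page,
                                               size_t page_size,
                                               bool bulk_stage,
                                               bool xbzrle_enabled)
    {
        bool all_zero = page[0] == 0 &&
                        memcmp(page, page + 1, page_size - 1) == 0;

        if (all_zero) {
            return SEND_ZERO;
        }
        if (!bulk_stage && xbzrle_enabled) {
            return SEND_XBZRLE;
        }
        return SEND_FULL;
    }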
The following results were taken earlier while executing
the test program from docs/xbzrle.txt: (+) with the patch and (-)
without. (Thanks to Eric Blake for reformatting and comments.)
+ total time: 22185 milliseconds
- total time: 22410 milliseconds
Shaved about 0.2 seconds, roughly 1% - good.
+ downtime: 29 milliseconds
- downtime: 21 milliseconds
Not sure why downtime seemed worse, but probably not the end of the world.
+ transferred ram: 706034 kbytes
- transferred ram: 721318 kbytes
Fewer bytes sent - good.
+ remaining ram: 0 kbytes
- remaining ram: 0 kbytes
+ total ram: 1057216 kbytes
- total ram: 1057216 kbytes
+ duplicate: 108556 pages
- duplicate: 105553 pages
+ normal: 175146 pages
- normal: 179589 pages
+ normal bytes: 700584 kbytes
- normal bytes: 718356 kbytes
Fewer normal bytes...
+ cache size: 67108864 bytes
- cache size: 67108864 bytes
+ xbzrle transferred: 3127 kbytes
- xbzrle transferred: 630 kbytes
...and more compressed pages sent - good.
+ xbzrle pages: 117811 pages
- xbzrle pages: 21527 pages
+ xbzrle cache miss: 18750
- xbzrle cache miss: 179589
And very good improvement on the cache miss rate.
+ xbzrle overflow : 0
- xbzrle overflow : 0
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Avoid searching for dirty pages; just increment the
page offset. All pages are dirty anyway.
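In other words the page walk degenerates to a simple increment while
the bulk stage lasts; a self-contained sketch of the idea (not the
actual migration code):

    #include <stdbool.h>

    /* Pick the next page to send. During the bulk stage every page is
     * dirty, so scanning the dirty bitmap is pointless. */
    static unsigned long next_dirty_page(const unsigned long *bitmap,
                                         unsigned long nbits,
                                         unsigned long cur,
                                         bool bulk_stage)
    {
        const unsigned long bpl = 8 * sizeof(unsigned long);

        if (bulk_stage) {
            return cur + 1;            /* everything is dirty anyway */
        }
        for (unsigned long i = cur + 1; i < nbits; i++) {
            if (bitmap[i / bpl] & (1UL << (i % bpl))) {
                return i;              /* first dirty page after cur */
            }
        }
        return nbits;                  /* no dirty page left */
    }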
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
During the bulk stage of RAM migration, if a page is a
zero page, do not send it at all.
The memory at the destination reads as zero anyway.
Even if there is an madvise with QEMU_MADV_DONTNEED
at the target upon receipt of a zero page, I have observed
that the target starts swapping if the memory is overcommitted.
It seems that the pages are dropped asynchronously.
This patch also updates QMP to return the number of
skipped pages in MigrationStats.
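A minimal illustration of the accounting this implies (hypothetical
names; the real code threads this through the migration state):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct mig_stats {
        uint64_t skipped;      /* zero pages not sent in the bulk stage */
        uint64_t transferred;
    };

    /* Returns true if the page actually has to go on the wire. */
    static bool account_page(struct mig_stats *st, const unsigned char *page,
                             size_t page_size, bool bulk_stage)
    {
        bool all_zero = page[0] == 0 &&
                        memcmp(page, page + 1, page_size - 1) == 0;

        if (bulk_stage && all_zero) {
            st->skipped++;     /* destination memory reads as zero anyway */
            return false;
        }
        st->transferred++;
        return true;
    }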
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The first round of RAM transfer is special since all pages
are dirty and thus all memory pages are transferred to
the target. This patch adds a boolean variable to track
this stage.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Virtually all duplicate pages are zero pages. Remove
the special is_dup_page() function and use the
optimized buffer_find_nonzero_offset() function
instead.
Here buffer_find_nonzero_offset() is used directly
to avoid the unnecessary additional checks in
buffer_is_zero().
The raw performance gain when checking 1 GByte of zeroed memory
over is_dup_page() is approx. 10-12% with SSE2
and 8-10% with unsigned long arithmetic.
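Assuming buffer_find_nonzero_offset() returns the offset of the first
non-zero byte, or the full length when there is none (an assumption
about its signature, see the commit introducing it), the
duplicate-page test collapses to:

    #include <stdbool.h>
    #include <stddef.h>

    /* Assumed shape of the helper added elsewhere in this series. */
    size_t buffer_find_nonzero_offset(const void *buf, size_t len);

    static inline bool is_zero_page(const void *page, size_t page_size)
    {
        /* RAM pages are suitably sized and aligned, so the extra
         * fallback checks in buffer_is_zero() are not needed here. */
        return buffer_find_nonzero_offset(page, page_size) == page_size;
    }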
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
This patch adopts the loop unrolling idea of bitmap_is_zero() to
speed up the skipping of large areas of zeros in find_next_bit().
This routine is used extensively to find dirty pages in
live migration.
Testing only the find_next_bit performance on a zeroed bitfield,
the loop unrolling decreased execution time by approx. 50% on x86_64.
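The unrolling boils down to testing several bitmap words per
iteration; a standalone sketch of that inner loop (not the actual
find_next_bit() code):

    #include <stddef.h>

    /* Skip leading zero words, four at a time, so long runs of clean
     * pages cost roughly one compare per four words. */
    static size_t skip_zero_words(const unsigned long *p, size_t words)
    {
        size_t i = 0;

        while (i + 4 <= words) {
            if (p[i] | p[i + 1] | p[i + 2] | p[i + 3]) {
                break;                 /* a set bit is in this group */
            }
            i += 4;
        }
        while (i < words && p[i] == 0) {
            i++;                       /* finish word by word */
        }
        return i;                      /* first non-zero word, or words */
    }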
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The performance gain with SSE2 is approx. 20-25%. Altivec
is not tested. Performance for unsigned long arithmetic
is unchanged.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
This adds buffer_find_nonzero_offset(), an SSE2/Altivec
optimized function that searches for non-zero content in a
buffer.
The function starts full unrolling only after the first few chunks have
been checked one by one. Analyzing real memory page data has revealed
that non-zero pages are non-zero within the first 256-512 bits in
most cases. As this function is also heavily used to check for zero
memory pages, this tweak has been made to avoid the high setup costs
of the fully unrolled check for non-zero pages.
Due to the optimizations used in the function, there are restrictions
on buffer address and search length. The function
can_use_buffer_find_nonzero_offset() can be used to check if
the function can be used safely.
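Illustrative caller-side usage, assuming the two functions have
roughly these prototypes (an assumption, not taken from the patch):

    #include <stdbool.h>
    #include <stddef.h>

    bool can_use_buffer_find_nonzero_offset(const void *buf, size_t len);
    size_t buffer_find_nonzero_offset(const void *buf, size_t len);

    static bool region_is_zero(const void *buf, size_t len)
    {
        if (can_use_buffer_find_nonzero_offset(buf, len)) {
            return buffer_find_nonzero_offset(buf, len) == len;
        }
        /* Unaligned or odd-sized input: fall back to a byte scan. */
        const unsigned char *p = buf;
        for (size_t i = 0; i < len; i++) {
            if (p[i]) {
                return false;
            }
        }
        return true;
    }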
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Vector optimizations will now be used at various places,
not just in is_dup_page() in arch_init.c.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The VMSTATE_BUFFER_MULTIPLY macro is misnamed - it actually specifies
a variably sized buffer with VMS_VBUFFER, so should be named
VMSTATE_VBUFFER_MULTIPLY. This patch fixes this (the macro had no current
users under either name).
In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not
specify VMS_POINTER. This patch fixes this bug as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32
helper (a variably sized array with the number of elements in an int32_t),
but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of
elements in a uint32_t). This patch (trivially) fixes the deficiency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The current savevm code includes VMSTATE helpers for a number of commonly
used data types, but not for the float64 type used by the internal floating
point emulation code. This patch fixes the deficiency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
This adds an _EQUAL VMSTATE helper for target_ulongs, defined in terms of
VMSTATE_UINT32_EQUAL or VMSTATE_UINT64_EQUAL as appropriate.
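Presumably the helper mirrors the existing VMSTATE_UINTTL naming and
is little more than a width switch along these lines (a sketch, not
the exact macro text from the patch):

    /* target_ulong is 32 or 64 bits wide depending on the target. */
    #if TARGET_LONG_BITS == 64
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT64_EQUAL(_f, _s)
    #else
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT32_EQUAL(_f, _s)
    #endif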
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The savevm code already includes a number of *_EQUAL helpers which act
as sanity checks, verifying that the configuration of the saved state
matches that of the machine we're loading into. Variants already exist
for 8-bit, 16-bit and 32-bit integers, but not 64-bit integers. This
patch fills that hole, adding a UINT64 version.
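Typical use in a device's VMStateDescription field list; MyDevState
and its ram_size field are made up purely for illustration:

    static const VMStateDescription vmstate_mydev = {
        .name = "mydev",
        .version_id = 1,
        .fields = (VMStateField[]) {
            /* Loading fails unless the incoming value matches ours. */
            VMSTATE_UINT64_EQUAL(ram_size, MyDevState),
            VMSTATE_END_OF_LIST()
        }
    };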
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
|
|
# By Dmitry Fleytman (5) and others
# Via Stefan Hajnoczi
* stefanha/net:
net: increase buffer size to accommodate Jumbo frame pkts
VMXNET3 device implementation
Packet abstraction for VMWARE network devices
Common definitions for VMWARE devices
net: iovec checksum calculator
Checksum-related utility functions
net: use socket_set_nodelay() for -netdev socket
|
|
# By Liu Yuan (1) and Stefan Weil (1)
# Via Stefan Hajnoczi
* stefanha/block:
block: Add options QDict to bdrv_file_open() prototypes (fix MinGW build)
rbd: fix compile error
|
|
# By Gerd Hoffmann
# Via Gerd Hoffmann
* kraxel/ipxe.3:
ipxe: update binaries
ipxe: disable two second timeout
|
|
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
This solves, e.g., sticky ALT when selecting a GTK menu, switching to a
different window or selecting a different virtual console.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 514F417A.6010908@web.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Socket buffer sizes were hard-coded to 4K for VDE and socket netdevs.
Bump this up to 68K (as the tap netdev already does) to handle the
maximum GSO packet size (64K) plus plenty of room for the ethernet and
virtio_net headers.
Originally, I ran into this limitation when using -netdev UDP sockets
to connect VM-to-VM, where the VM interface is configured with MTU=9000
(using the virtio_net NIC model). The test is simple:
ping -M do -s 8500 <target>. This test attempts to ping with an
unfragmented packet of the given size. Without the patch, the size is
limited to < 4K (minus protocol hdrs). With the patch, the ping test
works with packet sizes up to 9000 (again, minus protocol hdrs).
v2: per Stefan, increase buf size to (4096+65536) as done in tap and apply
to vde and socket netdevs.
v1: increase buf size to 12K just for -netdev UDP sockets
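In code terms the change is essentially just the size constant, e.g.
(illustrative name only):

    /* 64K maximum GSO payload plus room for ethernet/virtio_net
     * headers, matching what the tap netdev already uses. */
    #define NET_SOCKET_BUFSIZE (4096 + 65536)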
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
net_checksum_add_cont():
    checksum calculation for scattered data with odd chunk sizes
net_raw_checksum():
    checksum calculation for a buffer
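The interesting part is the odd-chunk handling: the running sum has to
remember whether the next byte lands in the high or low half of a
16-bit word. A self-contained illustration of that technique (my own
minimal version, not the QEMU helpers themselves):

    #include <stddef.h>
    #include <stdint.h>

    /* 'seq' is the byte offset of this chunk within the whole packet,
     * so odd-sized chunks keep the high/low byte pairing consistent
     * across calls. */
    static uint32_t checksum_add_cont(size_t len, const uint8_t *buf,
                                      size_t seq)
    {
        uint32_t sum = 0;

        for (size_t i = 0; i < len; i++) {
            if ((i + seq) & 1) {
                sum += buf[i];                     /* low byte */
            } else {
                sum += (uint32_t)buf[i] << 8;      /* high byte */
            }
        }
        return sum;
    }

    /* Fold the carries and take the one's complement. */
    static uint16_t checksum_finish(uint32_t sum)
    {
        while (sum >> 16) {
            sum = (sum & 0xffff) + (sum >> 16);
        }
        return (uint16_t)~sum;
    }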
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Reduce -netdev socket latency by disabling the Nagle algorithm on
SOCK_STREAM sockets in net/socket.c. Since we are tunnelling Ethernet
over TCP, we shouldn't artificially delay outgoing packets; let the
guest decide packet scheduling.
I already get sub-millisecond -netdev socket ping times on localhost,
so there was no measurable difference in my testing. This won't hurt,
though, and it may improve remote socket performance.
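socket_set_nodelay() is presumably a thin wrapper around the standard
TCP_NODELAY option; a standalone equivalent looks like this:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle so small Ethernet frames are pushed out
     * immediately instead of being batched by the kernel. */
    static void set_nodelay(int fd)
    {
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }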
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
|
|
The new parameter is not used yet.
This part was missing in commit 787e4a8500020695eb391e2f1cc4767ee071d441.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Commit 787e4a85 [block: Add options QDict to bdrv_file_open() prototypes] didn't
update rbd.c accordingly.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
Here's a fix for the build problem identified by Aurelien Jarno here:
http://lists.gnu.org/archive/html/qemu-devel/2013-03/msg04177.html
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
We can compute the value in cpu_dump_state anyway, and gratuitous
modifications to eflags create heisenbugs.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
When starting from CC_OP_DYNAMIC, and issuing adox before adcx,
a typo used the wrong value for the resulting CC_OP.
Cc: Blue Swirl <blauwirbel@gmail.com>
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
* 'for-upstream' of git://github.com/mwalle/qemu:
configure: rename OpenGL feature to GLX
configure: proper OpenGL/GLX probe
target-lm32: use HELPER() macro
target-lm32: flush tlb after clearing env
target-lm32: remove dead code
target-lm32: fix cmpgui and cmpgeui opcodes
tests: tcg: lm32: add more test cases
target-lm32: don't log cpu state in translation
lm32_uart: fix receive buffering
milkymist-uart: fix receive buffering
lm32-dis: fix NULL pointer dereference
target-lm32: fix debug memory access
|
|
* 'ppc-for-upstream' of git://github.com/agraf/qemu: (58 commits)
target-ppc: Use NARROW_MODE macro for tlbie
target-ppc: Use NARROW_MODE macro for addresses
target-ppc: Use NARROW_MODE macro for comparisons
target-ppc: Use NARROW_MODE macro for branches
target-ppc: Fix add and subf carry generation in narrow mode
target-ppc: Use QOM method dispatch for MMU fault handling
target-ppc: Move ppc tlb_fill implementation into mmu_helper.c
target-ppc: Split user only code out of mmu_helper.c
mmu-hash64: Implement Virtual Page Class Key Protection
mmu-hash*: Merge translate and fault handling functions
mmu-hash*: Don't use full ppc_hash{32, 64}_translate() path for get_phys_page_debug()
mmu-hash*: Correctly mask RPN from hash PTE
mmu-hash*: Clean up real address calculation
mmu-hash*: Clean up PTE flags update
mmu-hash64: Factor SLB N bit into permissions bits
mmu-hash*: Clean up permission checking
mmu-hash32: Remove nx from context structure
mmu-hash*: Don't update PTE flags when permission is denied
mmu-hash32: Don't look up page tables on BAT permission error
mmu-hash32: Cleanup BAT lookup
...
|
|
is_tcg_gen_code() checks the upper limit of the TCG-generated code
range incorrectly, so TCG could occasionally break, but only when
CONFIG_QEMU_LDST_OPTIMIZATION is enabled. The reason is that
code_gen_buffer_max_size does not cover the upper range up to
(TCG_MAX_OP_SIZE * OPC_BUF_SIZE); thus code_gen_buffer_max_size should
be changed to code_gen_buffer_size.
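Conceptually the fixed check is a bounds test against the full buffer
(a sketch with file-scope variables standing in for the real ones):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static uint8_t *code_gen_buffer;
    static size_t code_gen_buffer_size;   /* full size, not the soft limit */

    /* A return address belongs to TCG-generated code iff it falls
     * inside the whole code_gen buffer, including the slack above the
     * soft usage limit. */
    static bool in_tcg_gen_code(uintptr_t pc)
    {
        return pc >= (uintptr_t)code_gen_buffer &&
               pc <  (uintptr_t)code_gen_buffer + code_gen_buffer_size;
    }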
CC: qemu-stable@nongnu.org
Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
# By Kevin Wolf (12) and Peter Lieven (2)
# Via Kevin Wolf
* kwolf/for-anthony:
nbd: Check against invalid option combinations
nbd: Use default port if only host is specified
block: Allow omitting the file name when using driver-specific options
block: Make find_image_format safe with NULL filename
block: Rename variable to avoid shadowing
block: Introduce .bdrv_parse_filename callback
nbd: Accept -drive options for the network connection
nbd: Remove unused functions
nbd: Keep hostname and port separate
qemu-socket: Make socket_optslist public
block: Pass bdrv_file_open() options to block drivers
block: Add options QDict to bdrv_file_open() prototypes
block: complete all IOs before resizing a device
Revert "block: complete all IOs before .bdrv_truncate"
|
|
# By liguang (2) and others
# Via Stefan Hajnoczi
* stefanha/trivial-patches:
qdev: remove redundant abort()
gitignore: ignore more files
Use proper term in TCG README
serial: Fix debug format strings
Fix typos and misspellings
Advertise --libdir in configure --help output
memory: fix a bug of detection of memory region collision
MinGW: Replace setsockopt by qemu_setsocketopt
|
|
# By Cornelia Huck
# Via Cornelia Huck
* cohuck/virtio-ccw-upstr:
virtio-ccw, s390-virtio: Use generic virtio-blk macro.
s390-virtio, virtio-ccw: Add config_wce for virtio-blk.
virtio-ccw: Add missing blk chs properties.
|