path: root/block
2019-08-06  block/backup: disable copy_range for compressed backup  (Vladimir Sementsov-Ogievskiy)
copy_range, which is enabled by default, ignores the compress option. This is definitely unexpected for the user. It has been broken since the introduction of copy_range usage in backup in 9ded4a011496. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190730163251.755248-3-vsementsov@virtuozzo.com Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-08-06  mirror: Only mirror granularity-aligned chunks  (Max Reitz)
In write-blocking mode, all writes to the top node directly go to the target. We must only mirror chunks of data that are aligned to the job's granularity, because that is how the dirty bitmap works. Therefore, the request alignment for writes must be the job's granularity (in write-blocking mode). Unfortunately, this forces all reads and writes to have the same granularity (we only need this alignment for writes to the target, not the source), but that is something to be fixed another time. Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190805153308.2657-1-mreitz@redhat.com Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Fixes: d06107ade0ce74dc39739bac80de84b51ec18546 Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-08-06  backup: Copy only dirty areas  (Max Reitz)
The backup job must only copy areas that the copy_bitmap reports as dirty. This is always the case when using traditional non-offloading backup, because it copies each cluster separately. When offloading the copy operation, we sometimes copy more than one cluster at a time, but we only check whether the first one is dirty. Therefore, whenever copy offloading is possible, the backup job currently produces wrong output when the guest writes to an area of which an inner part has already been backed up, because that inner part will be re-copied. Fixes: 9ded4a0114968e98b41494fc035ba14f84cdf700 Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190801173900.23851-2-mreitz@redhat.com Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com>
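For illustration, a minimal sketch of the idea behind the fix, with invented placeholder types rather than the real backup job structures: trim an offloaded copy request to the run of clusters that are actually dirty instead of testing only the first one.

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder types; the real backup job uses QEMU's copy bitmap
     * (HBitmap) instead of this callback. */
    typedef struct BackupJob {
        int64_t cluster_size;
        bool (*cluster_is_dirty)(struct BackupJob *job, int64_t offset);
    } BackupJob;

    /* Trim an offloaded copy request to the run of clusters that are all
     * dirty, instead of only testing the first cluster. */
    static int64_t dirty_run_bytes(BackupJob *job, int64_t start, int64_t bytes)
    {
        int64_t end = start;

        while (end < start + bytes && job->cluster_is_dirty(job, end)) {
            end += job->cluster_size;
        }
        return end - start;   /* 0 means the first cluster is clean: skip it */
    }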
2019-07-30  nvme: Limit blkshift to 12 (for 4 kB blocks)  (Max Reitz)
Linux does not support blocks greater than 4 kB anyway, so we might as well limit blkshift to 12 and thus save us from some potential trouble. Reported-by: Peter Maydell <peter.maydell@linaro.org> Suggested-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190730114812.10493-1-mreitz@redhat.com Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Coverity: CID 1403771 Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-30  block/copy-on-read: Fix permissions for inactive node  (Kevin Wolf)
The copy-on-read drive must not request the WRITE_UNCHANGED permission for its child if the node is inactive, otherwise starting a migration destination with -incoming will fail because the child cannot provide write access yet: qemu-system-x86_64: -blockdev copy-on-read,file=img,node-name=cor: Block node is read-only Earlier QEMU versions additionally ran into an abort() on the migration source side: bdrv_inactivate_recurse() failed to update permissions. This is silently ignored today because it was only supposed to loosen restrictions. This is the symptom that was originally reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1733022 Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2019-07-22  block: Dec. drained_end_counter before bdrv_wakeup  (Max Reitz)
Decrementing drained_end_counter after bdrv_dec_in_flight() (which in turn invokes bdrv_wakeup() and thus aio_wait_kick()) is not very clever. We should decrement it beforehand, so that any waiting aio_poll() that is woken by bdrv_dec_in_flight() sees the decremented drained_end_counter. Because the time window between decrementing drained_end_counter and aio_wait_kick() is very small, I cannot supply a reliable regression test. However, running e.g. the /bdrv-drain/blockjob/iothread/drain_all test in test-bdrv-drain has a small chance of hanging without this patch (about 1/200 or so; it gets to nearly 100 % if you add e.g. an fputc(' ', stderr); after the bdrv_dec_in_flight()). Fixes: e037c09c78520cbdb6da7cfc6ad0256d5870b814 Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190722133054.21781-2-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-22  block/nvme: don't touch the completion entries  (Maxim Levitsky)
Completion entries are meant to be written only by the device and merely read by the host. The driver is supposed to scan the completions from the last point where it left off, until it sees a completion with a non-flipped phase bit. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716163020.13383-4-mlevitsk@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
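For illustration, a sketch of the scanning rule described above, following the completion queue entry layout from the NVMe specification; this is not QEMU's block/nvme.c code.

    #include <stdint.h>

    /* Simplified NVMe completion queue entry (per the spec, the phase tag is
     * bit 0 of the status halfword). */
    typedef struct NvmeCqe {
        uint32_t result;
        uint32_t rsvd;
        uint16_t sq_head;
        uint16_t sq_id;
        uint16_t cid;
        uint16_t status;      /* bit 0: phase tag */
    } NvmeCqe;

    typedef struct CompletionQueue {
        volatile NvmeCqe *entries;   /* written by the device, never by us */
        unsigned size;
        unsigned head;               /* where we left off last time */
        int phase;                   /* expected phase of "new" entries */
    } CompletionQueue;

    /* Consume completions starting from where we left off, stopping at the
     * first entry whose phase bit has not been flipped by the device. */
    static void process_completions(CompletionQueue *cq)
    {
        while ((cq->entries[cq->head].status & 1) == cq->phase) {
            /* handle cq->entries[cq->head] here; read it, never write it */
            if (++cq->head == cq->size) {
                cq->head = 0;
                cq->phase = !cq->phase;   /* expected phase flips on wrap */
            }
        }
        /* ring the CQ head doorbell with cq->head afterwards */
    }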
2019-07-22  block/nvme: support larger than 512 byte sector devices  (Maxim Levitsky)
Currently the driver hardcodes the sector size to 512 and doesn't check the underlying device. Fix that. Also fail if the underlying NVMe device is formatted with metadata, as this needs special support. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Message-id: 20190716163020.13383-3-mlevitsk@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
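For illustration, a sketch of deriving the block shift from the namespace's active LBA format, as the NVMe specification lays it out: FLBAS selects the format, LBADS is log2 of the data size, and MS is the per-block metadata size. The struct fields here approximate the spec and are not QEMU's actual NvmeIdNs definitions.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct LbaFormat {
        uint16_t ms;      /* metadata size in bytes */
        uint8_t  lbads;   /* log2 of the LBA data size */
        uint8_t  rp;
    } LbaFormat;

    typedef struct IdNs {
        uint8_t   flbas;          /* bits 3:0 select the format in use */
        LbaFormat lbaf[16];
    } IdNs;

    /* Derive the logical block shift instead of hardcoding 512-byte sectors.
     * Returns false for configurations that would need extra support. */
    static bool nvme_get_blkshift(const IdNs *id_ns, unsigned *blkshift)
    {
        const LbaFormat *fmt = &id_ns->lbaf[id_ns->flbas & 0xf];

        if (fmt->ms != 0) {
            return false;         /* metadata per block: not supported */
        }
        if (fmt->lbads < 9 || fmt->lbads > 12) {
            return false;         /* outside 512 B .. 4 KiB; cf. the
                                     "Limit blkshift to 12" commit above */
        }
        *blkshift = fmt->lbads;
        return true;
    }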
2019-07-22  block/nvme: fix doorbell stride  (Maxim Levitsky)
Fix the math involving a non-standard doorbell stride. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716163020.13383-2-mlevitsk@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
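Per the NVMe specification, the doorbell registers start at offset 0x1000 from the controller registers and are spaced (4 << CAP.DSTRD) bytes apart; DSTRD = 0 gives the usual 4-byte stride, and the bug is only visible with controllers reporting a non-zero stride. A small sketch of the offset math (not the actual patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* SQ tail doorbell for queue qid, or CQ head doorbell if is_cq. */
    static uint64_t nvme_doorbell_offset(uint32_t qid, bool is_cq, uint8_t dstrd)
    {
        uint64_t stride = 4u << dstrd;   /* bytes between adjacent doorbells */

        return 0x1000 + (2 * (uint64_t)qid + is_cq) * stride;
    }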
2019-07-22  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-07-19' into staging  (Peter Maydell)
nbd patches for 2019-07-19 - silence harmless compiler/valgrind warning # gpg: Signature made Fri 19 Jul 2019 21:17:12 BST # gpg: using RSA key A7A16B4A2527436A # gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full] # gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full] # gpg: aka "[jpeg image of size 6874]" [full] # Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A * remotes/ericb/tags/pull-nbd-2019-07-19: nbd: Initialize reply on failure Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-19  nbd: Initialize reply on failure  (Eric Blake)
We've had two separate reports of different callers running into use of uninitialized data if s->quit is set (one detected by gcc -O3, another by valgrind), due to checking 'nbd_reply_is_simple(reply) || s->quit' in the wrong order. Rather than chasing down which callers need to pre-initialize reply, and whether there are any other uninitialized uses, it's easier to guarantee that reply will always be set by nbd_co_receive_one_chunk() even on failure. The uninitialized use happens to be harmless (the only time the variable is uninitialized is if s->quit is set, so the conditional results in the same action regardless of what was read from reply), and was introduced in commit 65e01d47. In fixing the problem, it can also be seen that all (one) callers pass in a non-NULL reply, so there is a dead conditional to also be cleaned up. Reported-by: Thomas Huth <thuth@redhat.com> Reported-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20190719172001.19770-1-eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
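A minimal sketch of the pattern the fix relies on, with invented stand-in types rather than the real NBDReply and nbd_co_receive_one_chunk(): zero the caller's reply up front so it is defined even when the function bails out early.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct Reply {        /* cut-down stand-in for NBDReply */
        uint32_t magic;
        uint16_t flags;
        uint16_t type;
        uint64_t handle;
    } Reply;

    typedef struct Client {       /* stand-in for the NBD client state */
        bool quit;
    } Client;

    static int receive_one_chunk(Client *client, Reply *reply)
    {
        memset(reply, 0, sizeof(*reply));   /* defined even on failure */

        if (client->quit) {
            return -EIO;                    /* caller may still read *reply */
        }
        /* ... actually read and validate the reply here ... */
        return 0;
    }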
2019-07-19  block: Loop unsafely in bdrv*drained_end()  (Max Reitz)
The graph must not change in these loops (or a QLIST_FOREACH_SAFE would not even be enough). We now ensure this by only polling once in the root bdrv_drained_end() call, so we can drop the _SAFE suffix. Doing so makes it clear that the graph must not change. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-19  block: Do not poll in bdrv_do_drained_end()  (Max Reitz)
We should never poll anywhere in bdrv_do_drained_end() (including its recursive callees like bdrv_drain_invoke()), because it does not cope well with graph changes. In fact, it has been written based on the postulation that no graph changes will happen in it. Instead, the callers that want to poll must poll, i.e. all currently globally available wrappers: bdrv_drained_end(), bdrv_subtree_drained_end(), bdrv_unapply_subtree_drain(), and bdrv_drain_all_end(). Graph changes there do not matter. They can poll simply by passing a pointer to a drained_end_counter and wait until it reaches 0. This patch also adds a non-polling global wrapper for bdrv_do_drained_end() that takes a drained_end_counter pointer. We need such a variant because now no function called anywhere from bdrv_do_drained_end() must poll. This includes BdrvChildRole.drained_end(), which already must not poll according to its interface documentation, but bdrv_child_cb_drained_end() just violates that by invoking bdrv_drained_end() (which does poll). Therefore, BdrvChildRole.drained_end() must take a *drained_end_counter parameter, which bdrv_child_cb_drained_end() can pass on to the new bdrv_drained_end_no_poll() function. Note that we now have a pattern of all drained_end-related functions either polling or receiving a *drained_end_counter to let the caller poll based on that. A problem with a single poll loop is that when the drained section in bdrv_set_aio_context_ignore() ends, some nodes in the subgraph may be in the old contexts, while others are in the new context already. To let the collective poll in bdrv_drained_end() work correctly, we must not hold a lock to the old context, so that the old context can make progress in case it is different from the current context. (In the process, remove the comment saying that the current context is always the old context, because it is wrong.) In all other places, all nodes in a subtree must be in the same context, so we can just poll that. The exception of course is bdrv_drain_all_end(), but that always runs in the main context, so we can just poll NULL (like bdrv_drain_all_begin() does). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-19  block: Make bdrv_parent_drained_[^_]*() static  (Max Reitz)
These functions are not used outside of block/io.c, there is no reason why they should be globally available. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-19  block: Add @drained_end_counter  (Max Reitz)
Callers can now pass a pointer to an integer that bdrv_drain_invoke() (and its recursive callees) will increment for every bdrv_drain_invoke_entry() operation they schedule. bdrv_drain_invoke_entry() in turn will decrement it once it has invoked BlockDriver.bdrv_co_drain_end(). We use atomic operations to access the pointee, because the bdrv_do_drained_end() caller may wish to end drained sections for multiple nodes in different AioContexts (bdrv_drain_all_end() does, for example). This is the first step to moving the polling for BdrvCoDrainData.done to become true out of bdrv_drain_invoke() and into the root drained_end function. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
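As an illustration of the counting scheme described above (not the actual QEMU code: C11 atomics stand in for the qatomic helpers, and the cross-AioContext scheduling is collapsed into a direct call to keep the sketch self-contained):

    #include <stdatomic.h>

    typedef struct Node { int quiesce_counter; } Node;

    static void node_co_drain_end(Node *node)
    {
        node->quiesce_counter--;      /* placeholder for the driver callback */
    }

    static void drain_invoke_entry(Node *node, atomic_int *drained_end_counter)
    {
        node_co_drain_end(node);
        atomic_fetch_sub(drained_end_counter, 1);   /* this operation is done */
    }

    static void drain_invoke(Node *node, atomic_int *drained_end_counter)
    {
        atomic_fetch_add(drained_end_counter, 1);
        /* Really scheduled as a coroutine in the node's AioContext; called
         * directly here only to keep the sketch compilable. */
        drain_invoke_entry(node, drained_end_counter);
    }

    /* Only the root drained_end() polls; recursive callees never do. */
    static void drained_end(Node *node)
    {
        atomic_int counter = 0;

        drain_invoke(node, &counter);
        while (atomic_load(&counter) > 0) {
            /* aio_poll()-style waiting in the real code */
        }
    }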
2019-07-19  block: Introduce BdrvChild.parent_quiesce_counter  (Max Reitz)
Commit 5cb2737e925042e6c7cd3fb0b01313950b03cddf laid out why bdrv_do_drained_end() must decrement the quiesce_counter after bdrv_drain_invoke(). It did not give a very good reason why it has to happen after bdrv_parent_drained_end(), instead only claiming symmetry to bdrv_do_drained_begin(). It turns out that delaying it for so long is wrong.

Situation: We have an active commit job (i.e. a mirror job) from top to base for the following graph:

      filter
        |
      [file]
        |
        v
    top --[backing]--> base

Now the VM is closed, which results in the job being cancelled and a bdrv_drain_all() happening pretty much simultaneously. Beginning the drain means the job is paused once whenever one of its nodes is quiesced. This is reversed when the drain ends. With how the code currently is, after base's drain ends (which means that it will have unpaused the job once), its quiesce_counter remains at 1 while it goes to undrain its parents (bdrv_parent_drained_end()). For some reason or another, undraining filter causes the job to be kicked and enter mirror_exit_common(), where it proceeds to invoke block_job_remove_all_bdrv(). Now base will be detached from the job. Because its quiesce_counter is still 1, it will unpause the job once more. So in total, undraining base will unpause the job twice. Eventually, this will lead to the job's pause_count going negative -- well, it would, were there not an assertion against this, which crashes qemu.

The general problem is that if in bdrv_parent_drained_end() we undrain parent A, and then undrain parent B, which then leads to A detaching the child, bdrv_replace_child_noperm() will undrain A as if we had not done so yet; that is, one time too many. It follows that we cannot decrement the quiesce_counter after invoking bdrv_parent_drained_end().

Unfortunately, decrementing it before bdrv_parent_drained_end() would be wrong, too. Imagine the above situation in reverse: Undraining A leads to B detaching the child. If we had already decremented the quiesce_counter by that point, bdrv_replace_child_noperm() would undrain B one time too little, because it expects bdrv_parent_drained_end() to issue this undrain. But bdrv_parent_drained_end() won't do that, because B is no longer a parent.

Therefore, we have to do something else. This patch opts for introducing a second quiesce_counter that counts how many times a child's parent has been quiesced (through c->role->drained_*). With that, bdrv_replace_child_noperm() just has to undrain the parent exactly that many times when removing a child, and it will always be right. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
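For illustration, a minimal sketch of the bookkeeping this patch describes, with invented stand-in types rather than the real BdrvChild/BdrvChildRole: the child counts how many times its parent was quiesced through it, and detaching the child ends exactly that many drained sections.

    typedef struct Parent { int pause_count; } Parent;

    static void parent_drained_begin(Parent *p) { p->pause_count++; }
    static void parent_drained_end(Parent *p)   { p->pause_count--; }

    typedef struct Child {
        Parent *parent;
        int parent_quiesce_counter;  /* times the parent was quiesced via us */
    } Child;

    static void child_drained_begin(Child *c)
    {
        c->parent_quiesce_counter++;
        parent_drained_begin(c->parent);
    }

    static void child_drained_end(Child *c)
    {
        c->parent_quiesce_counter--;
        parent_drained_end(c->parent);
    }

    /* When a child is detached while drained, undrain the parent exactly as
     * many times as it was quiesced through this child -- no more, no less. */
    static void detach_child(Child *c)
    {
        while (c->parent_quiesce_counter > 0) {
            child_drained_end(c);
        }
        c->parent = 0;
    }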
2019-07-16  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* VFIO bugfix for AMD SEV (Alex) * Kconfig improvements (Julio, Philippe) * MemoryRegion reference counting bugfix (King Wang) * Build system cleanups (Marc-André, myself) * rdmacm-mux off-by-one (Marc-André) * ZBC passthrough fixes (Shinichiro, myself) * WHPX build fix (Stefan) * char-pty fix (Wei Yang) # gpg: Signature made Tue 16 Jul 2019 08:31:27 BST # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: vl: make sure char-pty message displayed by moving setbuf to the beginning create_config: remove $(CONFIG_SOFTMMU) hack Makefile: do not repeat $(CONFIG_SOFTMMU) in hw/Makefile.objs hw/usb/Kconfig: USB_XHCI_NEC requires USB_XHCI hw/usb/Kconfig: Add CONFIG_USB_EHCI_PCI target/i386: sev: Do not unpin ram device memory region checkpatch: detect doubly-encoded UTF-8 hw/lm32/Kconfig: Milkymist One provides a USB 1.1 Controller util: merge main-loop.c and iohandler.c Fix broken build with WHPX enabled memory: unref the memory region in simplify flatview hw/i386: turn off vmport if CONFIG_VMPORT is disabled rdmacm-mux: fix strcpy string warning build-sys: remove slirp cflags from main-loop.o iscsi: base all handling of check condition on scsi_sense_to_errno iscsi: fix busy/timeout/task set full scsi: add guest-recoverable ZBC errors scsi: explicitly list guest-recoverable sense codes scsi-disk: pass sense correctly for guest-recoverable errors Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-15  gluster: fix .bdrv_reopen_prepare when backing file is a JSON object  (Stefano Garzarella)
When the backing_file is specified as a JSON object, the qemu_gluster_reopen_prepare() fails with this message: invalid URI json:{"server.0.host": ...} In this case, we should call qemu_gluster_init() using the QDict 'state->options' that contains the JSON parameters already parsed. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1542445 Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Message-id: 20190715132844.506584-1-sgarzare@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-15  block/stream: Swap backing file change order  (Max Reitz)
bdrv_change_backing_file() can result in yields. Therefore, @base may no longer be the backing_bs() of s->bottom afterwards. Just swap the order of the two calls to fix this. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-4-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-15  block/stream: Fix error path  (Max Reitz)
As of commit c624b015bf14fe01f1e6452a36e63b3ea1ae4998, the stream job only freezes the chain until the overlay of the base node. The error path must consider this. Fixes: c624b015bf14fe01f1e6452a36e63b3ea1ae4998 Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-3-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-15  block: Add BDS.never_freeze  (Max Reitz)
The commit and the mirror block job must be able to drop their filter node at any point. However, this will not be possible if any of the BdrvChild links to them is frozen. Therefore, we need to prevent them from ever becoming frozen. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 20190703172813.6868-2-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-15  nvme: Set number of queues later in nvme_init()  (Michal Privoznik)
When creating the admin queue in nvme_init(), the variable that holds the number of queues created is modified before the actual queue creation. This is a problem because if creating the queue fails, the variable is left in an inconsistent state. This was actually observed when I tried to hotplug an NVMe disk. The control got to nvme_file_open(), which called nvme_init(), which failed, and thus nvme_close() was called, which in turn called nvme_free_queue_pair() with queue being NULL. This led to an instant crash:

    #0  0x000055d9507ec211 in nvme_free_queue_pair (bs=0x55d952ddb880, q=0x0) at block/nvme.c:164
    #1  0x000055d9507ee180 in nvme_close (bs=0x55d952ddb880) at block/nvme.c:729
    #2  0x000055d9507ee3d5 in nvme_file_open (bs=0x55d952ddb880, options=0x55d952bb1410, flags=147456, errp=0x7ffd8e19e200) at block/nvme.c:781
    #3  0x000055d9507629f3 in bdrv_open_driver (bs=0x55d952ddb880, drv=0x55d95109c1e0 <bdrv_nvme>, node_name=0x0, options=0x55d952bb1410, open_flags=147456, errp=0x7ffd8e19e310) at block.c:1291
    #4  0x000055d9507633d6 in bdrv_open_common (bs=0x55d952ddb880, file=0x0, options=0x55d952bb1410, errp=0x7ffd8e19e310) at block.c:1551
    #5  0x000055d950766881 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d952bb1410, flags=32768, parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, errp=0x7ffd8e19e510) at block.c:3063
    #6  0x000055d950765ae4 in bdrv_open_child_bs (filename=0x0, options=0x55d9541cdff0, bdref_key=0x55d950af33aa "file", parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, allow_none=true, errp=0x7ffd8e19e510) at block.c:2712
    #7  0x000055d950766633 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d9541cdff0, flags=0, parent=0x0, child_role=0x0, errp=0x7ffd8e19e908) at block.c:3011
    #8  0x000055d950766dba in bdrv_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block.c:3156
    #9  0x000055d9507cb635 in blk_new_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block/block-backend.c:389
    #10 0x000055d950465ec5 in blockdev_init (file=0x0, bs_opts=0x55d953d00390, errp=0x7ffd8e19e908) at blockdev.c:602

Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Message-id: 927aae40b617ba7d4b6c7ffe74e6d7a2595f8e86.1562770546.git.mprivozn@redhat.com Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
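A minimal, self-contained sketch of the two hardening points the report suggests: only bump the queue count after a queue was actually created, and let the teardown path tolerate a queue that never came into existence. All type and function names here are invented placeholders, not QEMU's block/nvme.c API.

    #include <stdlib.h>

    typedef struct QueuePair { int index; } QueuePair;

    typedef struct DriverState {
        QueuePair *queues[64];
        unsigned nr_queues;
    } DriverState;

    static QueuePair *create_queue_pair(DriverState *s, unsigned idx)
    {
        (void)s; (void)idx;                    /* unused in this sketch */
        return calloc(1, sizeof(QueuePair));   /* may return NULL on failure */
    }

    static void free_queue_pair(QueuePair *q)
    {
        if (!q) {
            return;            /* tolerate the "never created" case */
        }
        free(q);
    }

    static int driver_init(DriverState *s)
    {
        QueuePair *q = create_queue_pair(s, 0);

        if (!q) {
            return -1;         /* nr_queues still reflects reality: 0 */
        }
        s->queues[0] = q;
        s->nr_queues = 1;      /* only bump the count after success */
        return 0;
    }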
2019-07-15  iscsi: base all handling of check condition on scsi_sense_to_errno  (Paolo Bonzini)
Now that scsi-disk is not using scsi_sense_to_errno to separate guest-recoverable sense codes, we can modify it to simplify iscsi's own sense handling. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-15  iscsi: fix busy/timeout/task set full  (Paolo Bonzini)
In this case, do_retry was set without calling aio_co_wake, thus never waking up the coroutine. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-12  file-posix: Use max transfer length/segment count only for SCSI passthrough  (Maxim Levitsky)
Regular kernel block devices (/dev/sda*, /dev/nvme*, etc.) don't have max segment size/max segment count hardware requirements exposed to userspace; rather, the kernel block layer takes care to split incoming requests that violate these requirements. Allowing the kernel to do the splitting lets qemu avoid various overheads that would otherwise arise. This is especially visible in the NBD server when exposing a mostly empty qcow2 image over the network as a raw file. In this case most of the reads by the remote user won't even hit the underlying kernel block device, so most of the overhead is in the NBD traffic, which increases significantly with a lower max transfer size. In addition, even for local block device access the performance improves a bit due to less traffic between qemu and the kernel when large transfer sizes are used (e.g. for image conversion). More info can be found at: https://bugzilla.redhat.com/show_bug.cgi?id=1647104 Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Pankaj Gupta <pagupta@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-08  qcow2: Allow -o compat=v3 during qemu-img amend  (Eric Blake)
Commit b76b4f60 allowed '-o compat=v3' as an alias for the less-appealing '-o compat=1.1' for 'qemu-img create' since we want to use the QMP form as much as possible, but forgot to do likewise for qemu-img amend. Also, it doesn't help that '-o help' doesn't list our new preferred spellings. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-08  block/qcow: Improve error when opening qcow2 files as qcow  (John Snow)
Reported-by: radmehrsaeed7@gmail.com Fixes: https://bugs.launchpad.net/bugs/1832914 Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-07-02  block/stream: introduce a bottom node  (Andrey Shinkevich)
The bottom node is the intermediate block device that has the base as its backing image. It is used instead of the base node while a block stream job is running, to avoid a dependency on the base, which may change due to parallel jobs. Such a change may also happen because of a filter node inserted between the base and the intermediate bottom node; this occurs when the base node is the top node of another commit or stream job. After the introduction of the bottom node, its backing child (the base) is no longer frozen. Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 1559152576-281803-4-git-send-email-andrey.shinkevich@virtuozzo.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-02  block/stream: refactor stream_run: drop goto  (Andrey Shinkevich)
The goto is unnecessary in stream_run() since the common exit code was removed in commit eb23654dbe43b549ea2a9ebff9d8e ("jobs: utilize job_exit shim"). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1559152576-281803-3-git-send-email-andrey.shinkevich@virtuozzo.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-02  block: include base when checking image chain for block allocation  (Andrey Shinkevich)
This patch is used by the following 'block/stream: introduce a bottom node' patch. Instead of the base node, the caller may pass the node that has the base as its backing image to the function bdrv_is_allocated_above() with the new parameter include_base = true, and get rid of the dependency on the base that may change during commit/stream parallel jobs. Now, if the specified base is not found in the backing image chain, QEMU will abort. Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 1559152576-281803-2-git-send-email-andrey.shinkevich@virtuozzo.com [mreitz: Squashed in the following as a rebase on conflicting patches:] Message-id: e3cf99ae-62e9-8b6e-5a06-d3c8b9363b85@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-07-02  block/rbd: increase dynamically the image size  (Stefano Garzarella)
RBD APIs don't allow us to write more than the size set with rbd_create() or rbd_resize(). In order to support growing images (e.g. qcow2), we resize the image before write operations that exceed the current size. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Message-id: 20190509145927.293369-1-sgarzare@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
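For illustration, a sketch of the grow-on-demand idea using the public librbd calls rbd_get_size() and rbd_resize(); error handling and the cached-size bookkeeping of the real driver are omitted.

    #include <rbd/librbd.h>
    #include <stdint.h>

    /* Before a write that would go past the current image size, grow the
     * RBD image first; never shrink it here. */
    static int rbd_grow_for_write(rbd_image_t image, uint64_t offset,
                                  uint64_t bytes)
    {
        uint64_t size;
        int r = rbd_get_size(image, &size);

        if (r < 0) {
            return r;
        }
        if (offset + bytes > size) {
            r = rbd_resize(image, offset + bytes);
        }
        return r;
    }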
2019-06-24  ssh: switch from libssh2 to libssh  (Pino Toscano)
Rewrite the implementation of the ssh block driver to use libssh instead of libssh2. The libssh library has various advantages over libssh2: - easier API for authentication (for example for using ssh-agent) - easier API for known_hosts handling - supports newer types of keys in known_hosts Use APIs/features available in libssh 0.8 conditionally, to support older versions (which are not recommended though). Adjust the iotest 207 according to the different error message, and to find the default key type for localhost (to properly compare the fingerprint with). Contributed-by: Max Reitz <mreitz@redhat.com> Adjust the various Docker/Travis scripts to use libssh when available instead of libssh2. The mingw/mxe testing is dropped for now, as there are no packages for it. Signed-off-by: Pino Toscano <ptoscano@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190620200840.17655-1-ptoscano@redhat.com Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 5873173.t2JhDm7DL7@lindworm.usersys.redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-24  vmdk: Add read-only support for seSparse snapshots  (Sam Eiderman)
Until ESXi 6.5 VMware used the vmfsSparse format for snapshots (VMDK3 in QEMU). This format was lacking in the following: * Grain directory (L1) and grain table (L2) entries were 32-bit, allowing access to only 2TB (slightly less) of data. * The grain size (default) was 512 bytes - leading to data fragmentation and many grain tables. * For space reclamation purposes, it was necessary to find all the grains which are not pointed to by any grain table - so a reverse mapping of "offset of grain in vmdk" to "grain table" must be constructed - which takes large amounts of CPU/RAM. The format specification can be found in VMware's documentation: https://www.vmware.com/support/developer/vddk/vmdk_50_technote.pdf In ESXi 6.5, to support snapshot files larger than 2TB, a new format was introduced: SESparse (Space Efficient). This format fixes the above issues: * All entries are now 64-bit. * The grain size (default) is 4KB. * Grain directory and grain tables are now located at the beginning of the file. + seSparse format reserves space for all grain tables. + Grain tables can be addressed using an index. + Grains are located in the end of the file and can also be addressed with an index. - seSparse vmdks of large disks (64TB) have huge preallocated headers - mainly due to L2 tables, even for empty snapshots. * The header contains a reverse mapping ("backmap") of "offset of grain in vmdk" to "grain table" and a bitmap ("free bitmap") which specifies for each grain - whether it is allocated or not. Using these data structures we can implement space reclamation efficiently. * Due to the fact that the header now maintains two mappings: * The regular one (grain directory & grain tables) * A reverse one (backmap and free bitmap) These data structures can lose consistency upon crash and result in a corrupted VMDK. Therefore, a journal is also added to the VMDK and is replayed when the VMware reopens the file after a crash. Since ESXi 6.7 - SESparse is the only snapshot format available. Unfortunately, VMware does not provide documentation regarding the new seSparse format. This commit is based on black-box research of the seSparse format. Various in-guest block operations and their effect on the snapshot file were tested. The only VMware provided source of information (regarding the underlying implementation) was a log file on the ESXi: /var/log/hostd.log Whenever an seSparse snapshot is created - the log is being populated with seSparse records. Relevant log records are of the form: [...] Const Header: [...] constMagic = 0xcafebabe [...] version = 2.1 [...] capacity = 204800 [...] grainSize = 8 [...] grainTableSize = 64 [...] flags = 0 [...] Extents: [...] Header : <1 : 1> [...] JournalHdr : <2 : 2> [...] Journal : <2048 : 2048> [...] GrainDirectory : <4096 : 2048> [...] GrainTables : <6144 : 2048> [...] FreeBitmap : <8192 : 2048> [...] BackMap : <10240 : 2048> [...] Grain : <12288 : 204800> [...] Volatile Header: [...] volatileMagic = 0xcafecafe [...] FreeGTNumber = 0 [...] nextTxnSeqNumber = 0 [...] replayJournal = 0 The sizes that are seen in the log file are in sectors. 
Extents are of the following format: <offset : size>

This commit is a strict implementation which enforces:
* magics
* version number 2.1
* grain size of 8 sectors (4KB)
* grain table size of 64 sectors
* zero flags
* extent locations

Additionally, this commit provides only a subset of the functionality offered by the seSparse format:
* Read-only
* No journal replay
* No space reclamation
* No unmap support

Hence, the journal header, journal, free bitmap and backmap extents are unused; only the "classic" (L1 -> L2 -> data) grain access is implemented. However, there are several differences in the grain access itself.

Grain directory (L1):
* Grain directory entries are indexes (not offsets) to grain tables.
* Valid grain directory entries have their highest nibble set to 0x1.
* Since grain tables are always located at the beginning of the file, the index fits into 32 bits, so we can use its low part if it's valid.

Grain table (L2):
* Grain table entries are indexes (not offsets) to grains.
* If the highest nibble of the entry is:
  0x0: The grain is not allocated. The rest of the bytes are 0.
  0x1: The grain is unmapped - the guest sees a zero grain. The rest of the bits point to the previously mapped grain, see the 0x3 case.
  0x2: The grain is zero.
  0x3: The grain is allocated - to get the index calculate: ((entry & 0x0fff000000000000) >> 48) | ((entry & 0x0000ffffffffffff) << 12)
* The difference between 0x1 and 0x2 is that 0x1 is an unallocated grain which results from the guest using sg_unmap to unmap the grain - but the grain itself still exists in the grain extent - a space reclamation procedure should delete it. Unmapping a zero grain has no effect (0x2 will not change to 0x1) but unmapping an unallocated grain will (0x0 to 0x1) - naturally.

In order to implement seSparse, some fields had to be changed to support both 32-bit and 64-bit entry sizes. Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com> Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com> Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com> Message-id: 20190620091057.47441-4-shmuel.eiderman@oracle.com Signed-off-by: Max Reitz <mreitz@redhat.com>
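For illustration, the L2 entry decoding rules above can be expressed as a small helper; this is a sketch based purely on the description in this commit message, not the actual block/vmdk.c code.

    #include <stdint.h>

    typedef enum {
        GRAIN_UNALLOCATED,   /* 0x0: reads as zero, nothing on disk */
        GRAIN_UNMAPPED,      /* 0x1: unmapped by the guest, reads as zero */
        GRAIN_ZERO,          /* 0x2: explicit zero grain */
        GRAIN_ALLOCATED,     /* 0x3: data grain, *index is valid */
    } GrainType;

    static GrainType se_sparse_decode_l2(uint64_t entry, uint64_t *index)
    {
        switch (entry >> 60) {           /* highest nibble selects the type */
        case 0x0:
            return GRAIN_UNALLOCATED;
        case 0x1:
            return GRAIN_UNMAPPED;
        case 0x2:
            return GRAIN_ZERO;
        case 0x3:
            *index = ((entry & 0x0fff000000000000ULL) >> 48) |
                     ((entry & 0x0000ffffffffffffULL) << 12);
            return GRAIN_ALLOCATED;
        default:
            return GRAIN_UNALLOCATED;    /* sketch choice: treat unknown
                                            nibbles as holes */
        }
    }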
2019-06-24  vmdk: Reduce the max bound for L1 table size  (Sam Eiderman)
512M of L1 entries is a very loose bound; only 32M are required to store the maximal supported VMDK file size of 2TB. Fixed qemu-iotest #59 - the failure now occurs earlier, on an impossible L1 table size. Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com> Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com> Message-id: 20190620091057.47441-3-shmuel.eiderman@oracle.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-24  vmdk: Fix comment regarding max l1_size coverage  (Sam Eiderman)
Commit b0651b8c246d ("vmdk: Move l1_size check into vmdk_add_extent") extended the l1_size check from VMDK4 to VMDK3 but did not update the default coverage in the moved comment.

The previous vmdk4 calculation: (512 * 1024 * 1024) * 512 (l2 entries) * 65536 (grain) = 16PB
The added vmdk3 calculation: (512 * 1024 * 1024) * 4096 (l2 entries) * 512 (grain) = 1PB

Add the vmdk3 calculation to the comment. In any case, VMware does not offer virtual disks larger than 2TB for vmdk4/vmdk3, or 64TB for the new undocumented seSparse format, which is not implemented yet in qemu. Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com> Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com> Message-id: 20190620091057.47441-2-shmuel.eiderman@oracle.com Reviewed-by: yuchenlin <yuchenlin@synology.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-18  block/commit: Drop bdrv_child_try_set_perm()  (Max Reitz)
commit_top_bs never requests or unshares any permissions. There is no reason to make this so explicit here. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  block/mirror: Fix child permissions  (Max Reitz)
We cannot use bdrv_child_try_set_perm() to give up all restrictions on the child edge, and still have bdrv_mirror_top_child_perm() request BLK_PERM_WRITE. Fix this by making bdrv_mirror_top_child_perm() return 0/BLK_PERM_ALL when we want to give up all permissions, and replacing bdrv_child_try_set_perm() by bdrv_child_refresh_perms(). The bdrv_child_try_set_perm() before removing the node with bdrv_replace_node() is then unnecessary. No permissions have changed since the previous invocation of bdrv_child_try_set_perm(). Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  file-posix: Update open_flags in raw_set_perm()  (Max Reitz)
raw_check_perm() + raw_set_perm() can change the flags associated with the current FD. If so, we have to update BDRVRawState.open_flags accordingly. Otherwise, we may keep reopening the FD even though the current one already has the correct flags. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  block: drop bs->job  (Vladimir Sementsov-Ogievskiy)
Drop the remaining users of bs->job:
1. assertions that are actually duplicated by assert(!bs->refcnt)
2. a trace point does not seem to be reason enough to change stream_start to return a BlockJob pointer
3. Restricting the creation of two jobs based on the same bs is a bad idea, because:
3.1 Some jobs create filters to be their main node, so this check doesn't actually prevent creating a second job on the same real node (which will create another filter node) (but I hope it is restricted by other mechanisms)
3.2 Even without bs->job we have two systems of permissions: op-blockers and BLK_PERM
3.3 We may want to run several jobs on one node one day
And finally, drop the bs->job pointer itself. Hurrah! Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  block/block-backend: blk_iostatus_reset: drop usage of bs->job  (Vladimir Sementsov-Ogievskiy)
We are going to remove the bs->job pointer. Drop its usage in blk_iostatus_reset(). blk_iostatus_reset() has only two callers:
1. blk_attach_dev(). This doesn't have anything to do with jobs, and attaching a new guest device won't solve any problem the job encountered, so there is no reason to reset the iostatus for the job.
2. qmp_cont(). This resets the iostatus for everything. We can just call block_job_iostatus_reset() for all block jobs instead of going through BlockBackend.
Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-18  block/replication: drop usage of bs->job  (Vladimir Sementsov-Ogievskiy)
We are going to remove the bs->job pointer. Drop its usage in the replication code. Additionally, we have to return the job pointer from some mirror APIs. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-06-14  blkdebug: Inject errors on .bdrv_co_block_status()  (Max Reitz)
Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190507203508.18026-6-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-14blkdebug: Add "none" eventMax Reitz
Together with @iotypes and @sector, this can be used to trap e.g. the first read or write access to a certain sector without having to know what happens internally in the block layer, i.e. which "real" events happen right before such an access. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190507203508.18026-5-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-14  blkdebug: Add @iotype error option  (Max Reitz)
This new error option allows users of blkdebug to inject errors only on certain kinds of I/O operations. Users usually want to make a very specific operation fail, not just any; but right now they simply hope that the event that triggers the error injection is followed up with that very operation. That may not be true, however, because the block layer is changing (including blkdebug, which may increase the number of types of I/O operations on which to inject errors). The new option's default has been chosen to keep backwards compatibility. Note that similar to the internal representation, we could choose to expose this option as a list of I/O types. But there is no practical use for this, because as described above, users usually know exactly which kind of operation they want to make fail, so there is no need to specify multiple I/O types at once. In addition, exposing this option as a list would require non-trivial changes to qemu_opts_absorb_qdict(). Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190507203508.18026-4-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-06-13  block/nbd: merge NBDClientSession struct back to BDRVNBDState  (Vladimir Sementsov-Ogievskiy)
No reason to keep it separate; it differs from other block drivers' behavior and is therefore confusing. Instead of the generic 'state = (State *)bs->opaque' we have to use a special helper. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190611102720.86114-4-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-06-13  block/nbd: merge nbd-client.* to nbd.c  (Vladimir Sementsov-Ogievskiy)
No reason to keep the driver handler implementations separate from the driver structure. We can get rid of the extra header file. While at it, fix comment style, restore the forgotten comments for NBD_FOREACH_REPLY_CHUNK and nbd_reply_chunk_iter_receive, and remove extra includes. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190611102720.86114-3-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-06-13  block/nbd-client: drop stale logout  (Vladimir Sementsov-Ogievskiy)
Drop one log message on the failure path (we have errp) and turn the two others into trace points. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20190611102720.86114-2-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2019-06-12  block/gluster: update .help of BLOCK_OPT_PREALLOC option  (Stefano Garzarella)
Add missing 'falloc' among the allowed values of 'preallocation' option; show it and 'full' only when they are supported. ('falloc' is supported if defined CONFIG_GLUSTERFS_FALLOCATE, 'full' is supported if defined CONFIG_GLUSTERFS_ZEROFILL) Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190524075848.23781-4-sgarzare@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
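For illustration, one way to assemble such a conditional help string from the CONFIG_* switches named above (a sketch only, not necessarily how the patch itself edits the .help text of BLOCK_OPT_PREALLOC in block/gluster.c):

    #ifdef CONFIG_GLUSTERFS_FALLOCATE
    #define PREALLOC_HELP_FALLOC ", falloc"
    #else
    #define PREALLOC_HELP_FALLOC ""
    #endif

    #ifdef CONFIG_GLUSTERFS_ZEROFILL
    #define PREALLOC_HELP_FULL ", full"
    #else
    #define PREALLOC_HELP_FULL ""
    #endif

    /* String literal concatenation builds the final help text, listing only
     * the preallocation modes that were actually compiled in. */
    #define PREALLOC_HELP \
        "Preallocation mode (allowed values: off" \
        PREALLOC_HELP_FALLOC PREALLOC_HELP_FULL ")"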
2019-06-12  block/file-posix: update .help of BLOCK_OPT_PREALLOC option  (Stefano Garzarella)
Show 'falloc' among the allowed values of 'preallocation' option, only when it is supported (if defined CONFIG_POSIX_FALLOCATE) Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190524075848.23781-3-sgarzare@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster)
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190523143508.25387-5-armbru@redhat.com> [Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]