path: root/block
2017-06-12  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging  (Peter Maydell)

Block layer patches

# gpg: Signature made Fri 09 Jun 2017 12:47:31 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream:
  block: fix external snapshot abort permission error
  block/qcow.c: Fix memory leak in qcow_create()
  qemu-iotests: Test automatic commit job cancel on hot unplug
  commit: Fix use after free in completion
  qemu-iotests: Block migration test
  migration/block: Clean up BBs in block_save_complete()
  migration: Inactivate images after .save_live_complete_precopy()
  block: Fix anonymous BBs in blk_root_inactivate()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-06-09  block/qcow.c: Fix memory leak in qcow_create()  (Peter Maydell)
Coverity points out that the code path in qcow_create() for the magic "fat:" backing file name leaks the memory used to store the filename (CID 1307771). Free the memory before we overwrite the pointer. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
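The fix pattern, shown as a minimal standalone C sketch (the helper and the strings are invented for illustration; this is not the actual qcow_create() code, and plain malloc/free stand in for whatever allocator the real code uses):

    #include <stdlib.h>
    #include <string.h>

    /* Replace an owned string: free the old buffer *before* the pointer is
     * overwritten; overwriting first is exactly how the old buffer leaked. */
    static void replace_backing_file(char **backing_file, const char *new_name)
    {
        free(*backing_file);
        *backing_file = strdup(new_name);
    }

    int main(void)
    {
        char *backing_file = strdup("original-name");
        replace_backing_file(&backing_file, "replacement-name");
        free(backing_file);
        return 0;
    }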
2017-06-09  commit: Fix use after free in completion  (Kevin Wolf)
The final bdrv_set_backing_hd() could be working on already freed nodes because the commit job drops its references (through BlockBackends) to both overlay_bs and top already a bit earlier. One way to trigger the bug is hot unplugging a disk for which blockdev_mark_auto_del() cancels the block job. Fix this by taking BDS-level references while we're still using the nodes. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
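As a generic illustration of the fix (toy reference-counting types, not the actual commit-job code): take an extra reference of your own on anything you still need before the other owners might drop theirs, and release it only after the last use.

    #include <stdlib.h>

    /* Toy stand-ins for node references; purely illustrative. */
    typedef struct Node { int refcnt; } Node;

    static void node_ref(Node *n)   { n->refcnt++; }
    static void node_unref(Node *n) { if (--n->refcnt == 0) { free(n); } }

    static void job_complete(Node *top, Node *overlay)
    {
        /* Keep both nodes alive for the duration of the completion work,
         * even if the job's own references are dropped in the meantime. */
        node_ref(top);
        node_ref(overlay);

        /* ... the job's other references may go away here ...           */
        /* ... the final work (bdrv_set_backing_hd() in the real code)
         *     still operates on valid nodes ...                         */

        node_unref(overlay);
        node_unref(top);
    }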
2017-06-09  block: Fix anonymous BBs in blk_root_inactivate()  (Kevin Wolf)
blk->name isn't an array, but a pointer that can be NULL. Checking for an anonymous BB must involve a NULL check first, otherwise we get crashes. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
2017-06-07  nbd: make it thread-safe, fix qcow2 over nbd  (Paolo Bonzini)
NBD is not thread safe, because it accesses s->in_flight without a CoMutex. Fixing this will be required for multiqueue. CoQueue doesn't have spurious wakeups but, when another coroutine can run between qemu_co_queue_next's wakeup and qemu_co_queue_wait's re-locking of the mutex, the wait condition can become false and a loop is necessary. In fact, it turns out that the loop is necessary even without this multi-threaded scenario. A particular sequence of coroutine wakeups happens ~80% of the time when starting a guest with a qcow2 image served over NBD (i.e. qemu-nbd --format=raw, and QEMU's -drive option has format=qcow2). This patch fixes that issue too. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
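The "recheck the condition after every wakeup" rule is the same one familiar from condition variables; a self-contained pthread sketch of the pattern (an analogy only, not the NBD coroutine code):

    #include <pthread.h>

    #define MAX_IN_FLIGHT 16

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  slot_free = PTHREAD_COND_INITIALIZER;
    static int in_flight;

    static void acquire_slot(void)
    {
        pthread_mutex_lock(&lock);
        /* A plain 'if' would be wrong: another waiter can run between the
         * wakeup and re-acquiring the lock and consume the free slot, so
         * the condition must be rechecked in a loop.  The CoQueue fix
         * described above is analogous. */
        while (in_flight == MAX_IN_FLIGHT) {
            pthread_cond_wait(&slot_free, &lock);
        }
        in_flight++;
        pthread_mutex_unlock(&lock);
    }

    static void release_slot(void)
    {
        pthread_mutex_lock(&lock);
        in_flight--;
        pthread_cond_signal(&slot_free);
        pthread_mutex_unlock(&lock);
    }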
2017-06-06  nbd/client.c: use errp instead of LOG  (Vladimir Sementsov-Ogievskiy)
Move to modern errp scheme from just LOGging errors. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20170526110913.89098-1-vsementsov@virtuozzo.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-06-06  nbd: add errp parameter to nbd_wr_syncv()  (Vladimir Sementsov-Ogievskiy)
Will be used in following patch to provide actual error message in some cases. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20170516094533.6160-4-vsementsov@virtuozzo.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-06-02  gluster: add support for PREALLOC_MODE_FALLOC  (Niels de Vos)
Add missing support for "preallocation=falloc" to the Gluster block driver. This change bases its logic on that of block/file-posix.c and removes the gluster_supports_zerofill() and qemu_gluster_zerofill() functions in favour of #ifdef checks in an easy-to-read switch statement. Both glfs_zerofill() and glfs_fallocate() have been introduced with GlusterFS 3.5.0 (pkg-config glusterfs-api = 6). A #define for the availability of glfs_fallocate() has been added to ./configure. Reported-by: Satheesaran Sundaramoorthi <sasundar@redhat.com> Signed-off-by: Niels de Vos <ndevos@redhat.com> Message-id: 20170528063114.28691-1-ndevos@redhat.com URL: https://bugzilla.redhat.com/1450759 Signed-off-by: Niels de Vos <ndevos@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-30  Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi)

Block layer patches

# gpg: Signature made Mon 29 May 2017 03:34:59 PM BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* kwolf/tags/for-upstream:
  block/file-*: *_parse_filename() and colons
  block: Fix backing paths for filenames with colons
  block: Tweak error message related to qemu-img amend
  qemu-img: Fix leakage of options on error
  qemu-img: copy *key-secret opts when opening newly created files
  qemu-img: introduce --target-image-opts for 'convert' command
  qemu-img: fix --image-opts usage with dd command
  qemu-img: add support for --object with 'dd' command
  qemu-img: Fix documentation of convert
  qcow2: remove extra local_error variable
  mirror: Drop permissions on s->target on completion
  nvme: Add support for Controller Memory Buffers
  iotests: 147: Don't test inet6 if not available
  qemu-iotests: Test streaming with missing job ID
  stream: fix crash in stream_start() when block_job_create() fails

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-29  block/file-*: *_parse_filename() and colons  (Max Reitz)
The file drivers' *_parse_filename() implementations just strip the optional protocol prefix off the filename. However, for e.g. "file:foo:bar", this would lead to "foo:bar" being stored as the BDS's filename which looks like it should be managed using the "foo" protocol. This is especially troublesome if you then try to resolve a backing filename based on "foo:bar". This issue can only occur if the stripped part is a relative filename ("file:/foo:bar" will be shortened to "/foo:bar" and having a slash before the first colon means that "/foo" is not recognized as a protocol part). Therefore, we can easily fix it by prepending "./" to such filenames.

Before this patch:

  $ ./qemu-img create -f qcow2 backing.qcow2 64M
  Formatting 'backing.qcow2', fmt=qcow2 size=67108864 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
  $ ./qemu-img create -f qcow2 -b backing.qcow2 file:top:image.qcow2
  Formatting 'file:top:image.qcow2', fmt=qcow2 size=67108864 backing_file=backing.qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
  $ ./qemu-io file:top:image.qcow2
  can't open device file:top:image.qcow2: Could not open backing file: Unknown protocol 'top'

After this patch:

  $ ./qemu-io file:top:image.qcow2
  [no error]

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20170522195217.12991-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
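A standalone sketch of the transformation the patch describes (helper name invented; not the actual *_parse_filename() code): strip an optional "file:" prefix and, if the remainder is relative and has a colon before any slash, prepend "./" so the result can no longer be mistaken for a protocol-prefixed filename.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only. */
    static char *parse_file_filename(const char *filename)
    {
        const char *p = filename;

        if (strncmp(p, "file:", 5) == 0) {
            p += 5;
        }

        /* "/foo:bar" is safe: the slash comes before the colon, so "foo"
         * can never be taken as a protocol.  Only relative names with a
         * colon before the first slash need the "./" prefix. */
        size_t plain = strcspn(p, ":/");
        if (p[0] != '/' && p[plain] == ':') {
            char *res = malloc(strlen(p) + 3);
            sprintf(res, "./%s", p);
            return res;
        }
        return strdup(p);
    }

    int main(void)
    {
        char *s = parse_file_filename("file:top:image.qcow2");
        puts(s);    /* prints "./top:image.qcow2" */
        free(s);
        return 0;
    }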
2017-05-29  block: Tweak error message related to qemu-img amend  (Eric Blake)
When converting a 1.1 image down to 0.10, qemu-iotests 060 forces a contrived failure where allocating a cluster used to replace a zero cluster reads unaligned data. Since it is a zero cluster rather than a data cluster being converted, changing the error message to match our earlier change in 'qcow2: Make distinction between zero cluster types obvious' is worthwhile. Suggested-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170508171302.17805-1-eblake@redhat.com [mreitz: Commit message fixes] Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-29  qcow2: remove extra local_error variable  (Alberto Garcia)
Commit d7086422b1c1e75e320519cfe26176db6ec97a37 added a local_err variable global to the qcow2_amend_options() function, so there's no need to have this other one. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 20170511150337.21470-1-berto@igalia.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-29  mirror: Drop permissions on s->target on completion  (Kevin Wolf)
This fixes an assertion failure that was triggered by qemu-iotests 129 on some CI host, while the same test case didn't seem to fail on other hosts. Essentially the problem is that the blk_unref(s->target) in mirror_exit() doesn't necessarily mean that the BlockBackend goes away immediately. It is possible that the job completion was triggered nested in mirror_drain(), which looks like this: BlockBackend *target = s->target; blk_ref(target); blk_drain(target); blk_unref(target); In this case, the write permissions for s->target are retained until after blk_drain(), which makes removing mirror_top_bs fail for the active commit case (can't have a writable backing file in the chain without the filter driver). Explicitly dropping the permissions first means that the additional reference doesn't hurt and the job can complete successfully even if called from the nested blk_drain(). Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
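The principle, reduced to a toy model (simplified types, not the mirror code): release the permission explicitly at completion time rather than relying on the final unref, because a temporary reference taken by a nested caller can delay that unref.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Toy handle whose "write permission" would normally be released only
     * by the last unref; purely illustrative. */
    typedef struct Handle {
        int refcnt;
        bool has_write_perm;
    } Handle;

    static void handle_set_perm(Handle *h, bool write) { h->has_write_perm = write; }

    static void handle_unref(Handle *h)
    {
        if (--h->refcnt == 0) {
            h->has_write_perm = false;
            free(h);
        }
    }

    static void job_exit(Handle *target)
    {
        handle_set_perm(target, false);  /* drop write permission now ...      */
        handle_unref(target);            /* ... not whenever the last ref dies */
    }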
2017-05-26  stream: fix crash in stream_start() when block_job_create() fails  (Alberto Garcia)
The code that tries to reopen a BlockDriverState in stream_start() when the creation of a new block job fails crashes because it attempts to dereference a pointer that is known to be NULL. This is a regression introduced in a170a91fd3eab6155da39e740381867e, likely because the code was copied from stream_complete(). Cc: qemu-stable@nongnu.org Reported-by: Kashyap Chamarthy <kchamart@redhat.com> Signed-off-by: Alberto Garcia <berto@igalia.com> Tested-by: Kashyap Chamarthy <kchamart@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-05-24  block/gluster: glfs_lseek() workaround  (Jeff Cody)
On current released versions of glusterfs, glfs_lseek() will sometimes return invalid values for SEEK_DATA or SEEK_HOLE. For SEEK_DATA and SEEK_HOLE, the returned value should be >= the passed offset, or < 0 in the case of error:

  LSEEK(2):
    off_t lseek(int fd, off_t offset, int whence);
    [...]
    SEEK_HOLE
      Adjust the file offset to the next hole in the file greater than or equal to offset. If offset points into the middle of a hole, then the file offset is set to offset. If there is no hole past offset, then the file offset is adjusted to the end of the file (i.e., there is an implicit hole at the end of any file).
    [...]
    RETURN VALUE
      Upon successful completion, lseek() returns the resulting offset location as measured in bytes from the beginning of the file. On error, the value (off_t) -1 is returned and errno is set to indicate the error

However, occasionally glfs_lseek() for SEEK_HOLE/DATA will return a value less than the passed offset, yet greater than zero. For instance, here are example values observed from this call:

  offs = glfs_lseek(s->fd, start, SEEK_HOLE);
  if (offs < 0) {
      return -errno;          /* D1 and (H3 or H4) */
  }

  start == 7608336384
  offs  == 7607877632

This causes QEMU to abort on the assert test. When this value is returned, errno is also 0.

This is a reported and known bug to glusterfs: https://bugzilla.redhat.com/show_bug.cgi?id=1425293

Although this is being fixed in gluster, we still should work around it in QEMU, given that multiple released versions of gluster behave this way. This patch treats the return case of (offs < start) the same as if an error value other than ENXIO is returned; we will assume we learned nothing, and there are no holes in the file.

Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Message-id: 87c0140e9407c08f6e74b04131b610f2e27c014c.1495560397.git.jcody@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
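A self-contained sketch of the defensive pattern described above, written against plain lseek() so it compiles on its own (the real change lives in the gluster driver's allocation probing, with different names and error handling):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <unistd.h>

    /*
     * Probe for a hole at or after 'start'.  Returns 1 and stores the hole
     * offset, 0 if nothing was learned (assume the file is fully
     * allocated), or -errno on a real error.
     */
    static int probe_hole(int fd, off_t start, off_t *hole)
    {
        off_t offs = lseek(fd, start, SEEK_HOLE);

        if (offs < 0) {
            return errno == ENXIO ? 0 : -errno;   /* ENXIO: start past EOF */
        }
        if (offs < start) {
            /* Bogus result such as the gluster bug above: not an error,
             * but not a valid answer either -- treat it as "no holes
             * known" instead of tripping an assertion. */
            return 0;
        }
        *hole = offs;
        return 1;
    }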
2017-05-24  blockjob: introduce block_job_pause/resume_all  (Paolo Bonzini)
Remove use of block_job_pause/resume from outside blockjob.c, thus making them static. The new functions are used by the block layer, so place them in blockjob_int.h. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20170508141310.8674-5-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-24  blockjob: introduce block_job_early_fail  (Paolo Bonzini)
Outside blockjob.c, block_job_unref is only used when a block job fails to start, and block_job_ref is not used at all. The reference counting thus is pretty well hidden. Introduce a separate function to be used by block jobs; because block_job_ref and block_job_unref now become static, move them earlier in blockjob.c. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20170508141310.8674-4-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-18  migration: migration.h was not needed  (Juan Quintela)
These files don't use any function from migration.h, so drop the include. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
2017-05-18  Merge remote-tracking branch 'quintela/tags/migration/20170517' into staging  (Stefan Hajnoczi)

migration/next for 20170517

# gpg: Signature made Wed 17 May 2017 11:46:36 AM BST
# gpg: using RSA key 0xF487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg: aka "Juan Quintela <quintela@trasno.org>"
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* quintela/tags/migration/20170517:
  migration: Move check_migratable() into qdev.c
  migration: Move postcopy stuff to postcopy-ram.c
  migration: Move page_cache.c to migration/
  migration: Create migration/blocker.h
  ram: Rename RAM_SAVE_FLAG_COMPRESS to RAM_SAVE_FLAG_ZERO
  migration: Pass Error ** argument to {save,load}_vmstate
  migration: Fix regression with compression threads

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-17  Merge remote-tracking branch 'jtc/tags/block-pull-request' into staging  (Stefan Hajnoczi)

# gpg: Signature made Tue 16 May 2017 04:47:09 PM BST
# gpg: using RSA key 0xBDBE7B27C0DE3057
# gpg: Good signature from "Jeffrey Cody <jcody@redhat.com>"
# gpg: aka "Jeffrey Cody <jeff@codyprime.org>"
# gpg: aka "Jeffrey Cody <codyprime@gmail.com>"
# Primary key fingerprint: 9957 4B4D 3474 90E7 9D98 D624 BDBE 7B27 C0DE 3057

* jtc/tags/block-pull-request:
  curl: do not do aio_poll when waiting for a free CURLState
  curl: convert readv to coroutines
  curl: convert CURLAIOCB to byte values
  curl: split curl_find_state/curl_init_state
  curl: avoid recursive locking of BDRVCURLState mutex
  curl: never invoke callbacks with s->mutex held
  curl: strengthen assertion in curl_clean_state
  block: curl: Allow passing cookies via QCryptoSecret

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-17  migration: Create migration/blocker.h  (Juan Quintela)
This allows us to remove lots of includes of migration/migration.h Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-05-16  curl: do not do aio_poll when waiting for a free CURLState  (Paolo Bonzini)
Instead, put the CURLAIOCB on a wait list and yield; curl_clean_state will wake the corresponding coroutine. Because of CURL's callback-based structure, we cannot easily convert everything to CoMutex/CoQueue; keeping the QemuMutex is simpler. However, CoQueue is a simple wrapper around a linked list, so we can easily use QSIMPLEQ and open-code a CoQueue, protected by the BDRVCURLState QemuMutex instead of a CoMutex. Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-8-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  curl: convert readv to coroutines  (Paolo Bonzini)
This is pretty simple. The bottom half goes away because, unlike bdrv_aio_readv, coroutine-based read can return immediately without yielding. However, for simplicity I kept the former bottom half handler in a separate function. Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-7-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  curl: convert CURLAIOCB to byte values  (Paolo Bonzini)
This is in preparation for the conversion from bdrv_aio_readv to bdrv_co_preadv, and it also requires changing some of the size_t values to uint64_t. This was broken before for disks > 2TB, but now it would break at 4GB. Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-6-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  curl: split curl_find_state/curl_init_state  (Paolo Bonzini)
If curl_easy_init fails, a CURLState is left with s->in_use = 1. Split curl_init_state in two, so that we can distinguish the two failures and call curl_clean_state if needed. While at it, simplify curl_find_state, removing a dummy loop. The aio_poll loop is moved to the sole caller that needs it. Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-5-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  block/win32: fix 'ret not initialized' warning  (Gerd Hoffmann)
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Fam Zheng <famz@redhat.com> Message-id: 20170516074256.24731-1-kraxel@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-16  curl: avoid recursive locking of BDRVCURLState mutex  (Paolo Bonzini)
The curl driver has an ugly hack where, if it cannot find an empty CURLState, it just uses aio_poll to wait for one to be empty. This is probably buggy when used together with dataplane, and the simplest way to fix it is to use coroutines instead. A more immediate effect of the bug, however, is that it can cause a recursive call to curl_readv_bh_cb, recursively taking the BDRVCURLState mutex. This causes a deadlock. The fix is to unlock the mutex around aio_poll, but for cleanliness we should also take the mutex around all calls to curl_init_state, even if reaching the unlock/lock pair is impossible. The same is true for curl_clean_state. Reported-by: Kun Wei <kuwei@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 20170515100059.15795-4-pbonzini@redhat.com Cc: qemu-stable@nongnu.org Cc: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  curl: never invoke callbacks with s->mutex held  (Paolo Bonzini)
All curl callbacks go through curl_multi_do, and hence are called with s->mutex held. Note this with comments, and make curl_read_cb drop the lock before invoking the callback. Likewise for curl_find_buf, where the callback can be invoked by the caller. Cc: qemu-stable@nongnu.org Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-3-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  curl: strengthen assertion in curl_clean_state  (Paolo Bonzini)
curl_clean_state should only be called after all AIOCBs have been completed. This is not so obvious for the call from curl_detach_aio_context, so assert that. Cc: qemu-stable@nongnu.org Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170515100059.15795-2-pbonzini@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-16  block: curl: Allow passing cookies via QCryptoSecret  (Peter Krempa)
Since cookies can contain sensitive data (session ID, etc ...) it is desired to hide them from the prying eyes of users. Add a possibility to pass them via the secret infrastructure. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1447413 Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: f4a22cdebdd0bca6a13a43a2a6deead7f2ec4bb3.1493906281.git.pkrempa@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-12  Merge tag 'block-pull-request' into staging  (Stefan Hajnoczi)

# gpg: Signature made Fri 12 May 2017 10:37:12 AM EDT
# gpg: using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8

* tag 'block-pull-request':
  aio: add missing aio_notify() to aio_enable_external()
  block: Simplify BDRV_BLOCK_RAW recursion
  coroutine: remove GThread implementation

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-12  Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi)

Block layer patches

# gpg: Signature made Thu 11 May 2017 10:31:37 AM EDT
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* kwolf/tags/for-upstream: (58 commits)
  MAINTAINERS: Add qemu-progress to the block layer
  qcow2: Discard/zero clusters by byte count
  qcow2: Assert that cluster operations are aligned
  qcow2: Optimize write zero of unaligned tail cluster
  iotests: Add test 179 to cover write zeroes with unmap
  iotests: Improve _filter_qemu_img_map
  qcow2: Optimize zero_single_l2() to minimize L2 churn
  qcow2: Make distinction between zero cluster types obvious
  qcow2: Name typedef for cluster type
  qcow2: Correctly report status of preallocated zero clusters
  block: Update comments on BDRV_BLOCK_* meanings
  qcow2: Use consistent switch indentation
  qcow2: Nicer variable names in qcow2_update_snapshot_refcount()
  tests: Add coverage for recent block geometry fixes
  blkdebug: Add ability to override unmap geometries
  blkdebug: Simplify override logic
  blkdebug: Add pass-through write_zero and discard support
  blkdebug: Refactor error injection
  blkdebug: Sanity check block layer guarantees
  qemu-io: Switch 'map' output to byte-based reporting
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-12  block: Simplify BDRV_BLOCK_RAW recursion  (Eric Blake)
Since we are already in coroutine context during the body of bdrv_co_get_block_status(), we can shave off a few layers of wrappers when recursing to query the protocol when a format driver returned BDRV_BLOCK_RAW. Note that we are already using the correct recursion later on in the same function, when probing whether the protocol layer is sparse in order to find out if we can add BDRV_BLOCK_ZERO to an existing BDRV_BLOCK_DATA|BDRV_BLOCK_OFFSET_VALID. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Message-id: 20170504173745.27414-1-eblake@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-05-11  qcow2: Discard/zero clusters by byte count  (Eric Blake)
Passing a byte offset, but sector count, when we ultimately want to operate on cluster granularity, is madness. Clean up the external interfaces to take both offset and count as bytes, while still keeping the assertion added previously that the caller must align the values to a cluster. Then rename things to make sure backports don't get confused by changed units: instead of qcow2_discard_clusters() and qcow2_zero_clusters(), we now have qcow2_cluster_discard() and qcow2_cluster_zeroize(). The internal functions still operate on clusters at a time, and return an int for number of cleared clusters; but on an image with 2M clusters, a single L2 table holds 256k entries that each represent a 2M cluster, totalling well over INT_MAX bytes if we ever had a request for that many bytes at once. All our callers currently limit themselves to 32-bit bytes (and therefore fewer clusters), but by making this function 64-bit clean, we have one less place to clean up if we later improve the block layer to support 64-bit bytes through all operations (with the block layer auto-fragmenting on behalf of more-limited drivers), rather than the current state where some interfaces are artificially limited to INT_MAX at a time. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-13-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
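A rough standalone sketch of the interface shape described above (constants and helpers invented; not the qcow2 code): the byte-based entry point is 64-bit clean and asserts cluster alignment, while the inner helper still hands back a per-L2-table cluster count that comfortably fits in an int.

    #include <assert.h>
    #include <stdint.h>

    #define CLUSTER_SIZE (2 * 1024 * 1024ULL)   /* e.g. 2M clusters */

    /* Illustrative worker: handles at most one L2 table's worth of
     * clusters (256k entries for 2M clusters) and returns how many. */
    static int zeroize_in_l2_table(uint64_t cluster_index, uint64_t nb_clusters)
    {
        (void)cluster_index;
        return nb_clusters > 256 * 1024 ? 256 * 1024 : (int)nb_clusters;
    }

    /* Byte-based entry point: 64-bit clean, so the total may exceed INT_MAX. */
    static int cluster_zeroize(uint64_t offset, uint64_t bytes)
    {
        assert(offset % CLUSTER_SIZE == 0);
        assert(bytes % CLUSTER_SIZE == 0);

        uint64_t cluster = offset / CLUSTER_SIZE;
        uint64_t remaining = bytes / CLUSTER_SIZE;

        while (remaining > 0) {
            int done = zeroize_in_l2_table(cluster, remaining);
            cluster += done;
            remaining -= done;
        }
        return 0;
    }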
2017-05-11  qcow2: Assert that cluster operations are aligned  (Eric Blake)
We already audited (in commit 0c1bd469) that qcow2_discard_clusters() is only passed cluster-aligned start values; but we can further tighten the assertion that the only unaligned end value is at EOF. Recent commits have taken advantage of an unaligned tail cluster, for both discard and write zeroes. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-12-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Optimize write zero of unaligned tail cluster  (Eric Blake)
We've already improved discards to operate efficiently on the tail of an unaligned qcow2 image; it's time to make a similar improvement to write zeroes. The special case is only valid at the tail cluster of a file, where we must recognize that any sectors beyond the image end would implicitly read as zero, and therefore should not penalize our logic for widening a partial cluster into writing the whole cluster as zero. However, note that for now, the special case of end-of-file is only recognized if there is no backing file, or if the backing file has the same length; that's because when the backing file is shorter than the active layer, we don't have code in place to recognize that reads of a sector unallocated at the top and beyond the backing end-of-file are implicitly zero. It's not much of a real loss, because most people don't use images that aren't cluster-aligned, or where the active layer is a different size than the backing layer (especially where the difference falls within a single cluster). Update test 154 to cover the new scenarios, using two images of intentionally differing length. While at it, fix the test to gracefully skip when run as ./check -qcow2 -o compat=0.10 154 since the older format lacks zero clusters already required earlier in the test. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-11-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Optimize zero_single_l2() to minimize L2 churn  (Eric Blake)
Similar to discard_single_l2(), we should try to avoid dirtying the L2 cache when the cluster we are changing already has the right characteristics. Note that by the time we get to zero_single_l2(), BDRV_REQ_MAY_UNMAP is a requirement to unallocate a cluster (this is because the block layer clears that flag if discard.* flags during open requested that we never punch holes - see the conversation around commit 170f4b2e, https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg07306.html). Therefore, this patch can only reuse a zero cluster as-is if either unmapping is not requested, or if the zero cluster was not associated with an allocation. Technically, there are some cases where an unallocated cluster already reads as all zeroes (namely, when there is no backing file [easy: check bs->backing], or when the backing file also reads as zeroes [harder: we can't check bdrv_get_block_status since we are already holding the lock]), where the guest would not immediately see a difference if we left that cluster unallocated. But if the user did not request unmapping, leaving an unallocated cluster is wrong; and even if the user DID request unmapping, keeping a cluster unallocated risks a subtle semantic change of guest-visible contents if a backing file is later added, and it is not worth auditing whether all internal uses such as mirror properly avoid an unmap request. Thus, this patch is intentionally limited to just clusters that are already marked as zero. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-8-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
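The idea in miniature (entry layout invented; not the real L2 format): only rewrite an entry, and therefore dirty its table, when its current state actually differs from the requested one.

    #include <stdbool.h>
    #include <stdint.h>

    #define ENTRY_ZERO_FLAG 1ULL   /* illustrative bit, not the real layout */

    /* Returns true if the entry was modified (and the table made dirty),
     * false if it already had the desired characteristics. */
    static bool set_zero_flag(uint64_t *entry, bool *table_dirty)
    {
        if (*entry & ENTRY_ZERO_FLAG) {
            return false;          /* already marked zero: nothing to churn */
        }
        *entry |= ENTRY_ZERO_FLAG;
        *table_dirty = true;       /* cache entry actually modified */
        return true;
    }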
2017-05-11  qcow2: Make distinction between zero cluster types obvious  (Eric Blake)
Treat plain zero clusters differently from allocated ones, so that we can simplify the logic of checking whether an offset is present. Do this by splitting QCOW2_CLUSTER_ZERO into two new enums, QCOW2_CLUSTER_ZERO_PLAIN and QCOW2_CLUSTER_ZERO_ALLOC. I tried to arrange the enum so that we could use 'ret <= QCOW2_CLUSTER_ZERO_PLAIN' for all unallocated types, and 'ret >= QCOW2_CLUSTER_ZERO_ALLOC' for allocated types, although I didn't actually end up taking advantage of the layout. In many cases, this leads to simpler code, by properly combining cases (sometimes, both zero types pair together, other times, plain zero is more like unallocated while allocated zero is more like normal). Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170507000552.20847-7-eblake@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
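An illustrative enum layout showing the ordering idea mentioned above (names and helpers are a sketch, not the actual QCow2ClusterType definition):

    /* Ordered so that everything <= ..._ZERO_PLAIN is unallocated and
     * everything >= ..._ZERO_ALLOC has a host cluster behind it. */
    typedef enum {
        SKETCH_CLUSTER_UNALLOCATED = 0,
        SKETCH_CLUSTER_ZERO_PLAIN,      /* reads as zero, no host offset     */
        SKETCH_CLUSTER_ZERO_ALLOC,      /* reads as zero, host offset exists */
        SKETCH_CLUSTER_NORMAL,
        SKETCH_CLUSTER_COMPRESSED,
    } SketchClusterType;

    static inline int sketch_cluster_is_allocated(SketchClusterType t)
    {
        return t >= SKETCH_CLUSTER_ZERO_ALLOC;
    }

    static inline int sketch_cluster_reads_zero(SketchClusterType t)
    {
        return t == SKETCH_CLUSTER_ZERO_PLAIN || t == SKETCH_CLUSTER_ZERO_ALLOC;
    }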
2017-05-11  qcow2: Name typedef for cluster type  (Eric Blake)
Although it doesn't add all that much type safety (this is C, after all), it does add a bit of legibility to use the name QCow2ClusterType instead of a plain int. In particular, qcow2_get_cluster_offset() has an overloaded return type; a QCow2ClusterType on success, and -errno on failure; keeping the cluster type in a separate variable makes it slightly easier for the next patch to make further computations based on the type. Suggested-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170507000552.20847-6-eblake@redhat.com [mreitz: Use the new type in two more places (one of them pulled from the next patch)] Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Correctly report status of preallocated zero clusters  (Eric Blake)
We were throwing away the preallocation information associated with zero clusters. But we should be matching the well-defined semantics in bdrv_get_block_status(), where (BDRV_BLOCK_ZERO | BDRV_BLOCK_OFFSET_VALID) informs the user which offset is reserved, while still reminding the user that reading from that offset is likely to read garbage. count_contiguous_clusters_by_type() is now used only for unallocated cluster runs, hence it gets renamed and tightened. Making this change lets us see which portions of an image are zero but preallocated, when using qemu-img map --output=json. The --output=human side intentionally ignores all zero clusters, whether or not they are preallocated. The fact that there is no change to qemu-iotests './check -qcow2' merely means that we aren't yet testing this aspect of qemu-img; a later patch will add a test. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-5-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Use consistent switch indentation  (Eric Blake)
Fix a couple of inconsistent indentations, before an upcoming patch further tweaks the switch statements. (best viewed with 'git diff -b'). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-3-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Nicer variable names in qcow2_update_snapshot_refcount()  (Eric Blake)
In order to keep checkpatch happy when the next patch changes indentation, we first have to shorten some long lines. The easiest approach is to use a new variable in place of 'offset & L2E_OFFSET_MASK', except that 'offset' is the best name for that variable. Change '[old_]offset' to '[old_]entry' to make room. While touching things, also fix checkpatch warnings about unusual 'for' statements. Suggested by Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170507000552.20847-2-eblake@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  blkdebug: Add ability to override unmap geometries  (Eric Blake)
Make it easier to simulate various unusual hardware setups (for example, recent commits 3482b9b and b8d0a98 affect the Dell Equallogic iSCSI with its 15M preferred and maximum unmap and write zero sizing, or b2f95fe deals with the Linux loopback block device having a max_transfer of 64k), by allowing blkdebug to wrap any other device with further restrictions on various alignments. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170429191419.30051-9-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  blkdebug: Simplify override logic  (Eric Blake)
Rather than store into a local variable, then copy to the struct if the value is valid, then reporting errors otherwise, it is simpler to just store into the struct and report errors if the value is invalid. This however requires that the struct store a 64-bit number, rather than a narrower type. Likewise, setting a sane errno value in ret prior to the sequence of parsing and jumping to out: on error makes it easier for the next patch to add a chain of similar checks. Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170429191419.30051-8-eblake@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  blkdebug: Add pass-through write_zero and discard support  (Eric Blake)
In order to test the effects of artificial geometry constraints on operations like write zero or discard, we first need blkdebug to manage these actions. It also allows us to inject errors on those operations, just like we can for read/write/flush. We can also test the contract promised by the block layer; namely, if a device has specified limits on alignment or maximum size, then those limits must be obeyed (for now, the blkdebug driver merely inherits limits from whatever it is wrapping, but the next patch will further enhance it to allow specific limit overrides). This patch intentionally refuses to service requests smaller than the requested alignments; this is because an upcoming patch adds a qemu-iotest to prove that the block layer is correctly handling fragmentation, but the test only works if there is a way to tell the difference at artificial alignment boundaries when blkdebug is using a larger-than-default alignment. If we let the blkdebug layer always defer to the underlying layer, which potentially has a smaller granularity, the iotest will be thwarted.

Tested by setting up an NBD server with export 'foo', then invoking:

  $ ./qemu-io
  qemu-io> open -o driver=blkdebug blkdebug::nbd://localhost:10809/foo
  qemu-io> d 0 15M
  qemu-io> w -z 0 15M

Pre-patch, the server never sees the discard (it was silently eaten by the block layer); post-patch it is passed across the wire. Likewise, pre-patch the write is always passed with NBD_WRITE (with 15M of zeroes on the wire), while post-patch it can utilize NBD_WRITE_ZEROES (for less traffic).

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20170429191419.30051-7-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  blkdebug: Refactor error injection  (Eric Blake)
Rather than repeat the logic at each caller of checking if a Rule exists that warrants an error injection, fold that logic into inject_error(); and rename it to rule_check() for legibility. This will help the next patch, which adds two more callers that need to check rules for the potential of injecting errors. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170429191419.30051-6-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  blkdebug: Sanity check block layer guarantees  (Eric Blake)
Commits 04ed95f4 and 1a62d0ac updated the block layer to auto-fragment any I/O to fit within device boundaries. Additionally, when using a minimum alignment of 4k, we want to ensure the block layer does proper read-modify-write rather than requesting I/O on a slice of a sector. Let's enforce that the contract is obeyed when using blkdebug. For now, blkdebug only allows alignment overrides, and just inherits other limits from whatever device it is wrapping, but a future patch will further enhance things. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170429191419.30051-5-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
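A sketch of the contract being checked (limits and types invented for illustration): every request that reaches the driver must already be aligned to the advertised request alignment and cut down to the maximum transfer size by the block layer.

    #include <assert.h>
    #include <stdint.h>

    struct sketch_limits {
        uint32_t request_alignment;   /* e.g. 4096 for a 4k-only device */
        uint32_t max_transfer;        /* e.g. 64 * 1024                 */
    };

    static void check_request(const struct sketch_limits *l,
                              uint64_t offset, uint64_t bytes)
    {
        assert(offset % l->request_alignment == 0);
        assert(bytes  % l->request_alignment == 0);
        assert(bytes  <= l->max_transfer);
    }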
2017-05-11  file-posix: Remove .bdrv_inactivate/invalidate_cache  (Kevin Wolf)
Now that the block layer takes care to request a lot less permissions for inactive nodes, the special-casing in file-posix isn't necessary any more. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2017-05-11  block: Drop permissions when migration completes  (Kevin Wolf)
With image locking, permissions affect other qemu processes as well. We want to be sure that the destination can run, so let's drop permissions on the source when migration completes. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2017-05-11  block: New BdrvChildRole.activate() for blk_resume_after_migration()  (Kevin Wolf)
Instead of manually calling blk_resume_after_migration() in migration code after doing bdrv_invalidate_cache_all(), integrate the BlockBackend activation with cache invalidation into a single function. This is achieved with a new callback in BdrvChildRole that is called by bdrv_invalidate_cache_all(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>