path: root/block
Age        Commit message        Author
2020-05-15  qom: Drop parameter @errp of object_property_add() & friends  (Markus Armbruster)
The only way object_property_add() can fail is when a property with the same name already exists. Since our property names are all hardcoded, failure is a programming error, and the appropriate way to handle it is passing &error_abort. Same for its variants, except for object_property_add_child(), which additionally fails when the child already has a parent. Parentage is also under program control, so this is a programming error, too.

We have a bit over 500 callers. Almost half of them pass &error_abort, slightly fewer ignore errors, one test case handles errors, and the remaining few callers pass them to their own callers.

The previous few commits demonstrated once again that ignoring programming errors is a bad idea.

Of the few ones that pass on errors, several violate the Error API. The Error ** argument must be NULL, &error_abort, &error_fatal, or a pointer to a variable containing NULL. Passing an argument of the latter kind twice without clearing it in between is wrong: if the first call sets an error, it no longer points to NULL for the second call. ich9_pm_add_properties(), sparc32_ledma_realize(), sparc32_dma_realize(), xilinx_axidma_realize(), xilinx_enet_realize() are wrong that way.

When the one appropriate choice of argument is &error_abort, letting users pick the argument is a bad idea. Drop parameter @errp and assert the preconditions instead.

There's one exception to "duplicate property name is a programming error": the way object_property_add() implements the magic (and undocumented) "automatic arrayification". Don't drop @errp there. Instead, rename object_property_add() to object_property_try_add(), and add the obvious wrapper object_property_add().

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200505152926.18877-15-armbru@redhat.com>
[Two semantic rebase conflicts resolved]
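The try/abort split described above follows a common C pattern. Below is a small, self-contained illustration of that pattern, not the actual QOM code: the property table and the add_property()/try_add_property() helpers are simplified stand-ins.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_PROPS 16

    static const char *props[MAX_PROPS];
    static int nprops;

    /* "try" variant: reports a duplicate name to the caller; the only
     * user that needs this is the automatic-arrayification probe
     * mentioned above. */
    static int try_add_property(const char *name)
    {
        for (int i = 0; i < nprops; i++) {
            if (strcmp(props[i], name) == 0) {
                return -1;          /* duplicate: let the caller decide */
            }
        }
        if (nprops == MAX_PROPS) {
            return -1;
        }
        props[nprops++] = name;
        return 0;
    }

    /* Plain variant: a duplicate name is a programming error, so abort
     * instead of making every caller thread an Error ** through. */
    static void add_property(const char *name)
    {
        if (try_add_property(name) < 0) {
            fprintf(stderr, "duplicate property '%s'\n", name);
            abort();
        }
    }

    int main(void)
    {
        add_property("drive");
        add_property("bootindex");
        printf("%d properties registered\n", nprops);
        return 0;
    }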
2020-05-13  block/block-copy: fix use-after-free of task pointer  (Vladimir Sementsov-Ogievskiy)
Obviously, we should g_free the task after trace point and offset update. Reported-by: Coverity (CID 1428756) Fixes: 4ce5dd3e9b5ee0fac18625860eb3727399ee965e Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200507183800.22626-1-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
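The underlying bug is the classic free-before-last-use ordering. A minimal, generic illustration of the fix (plain malloc/free instead of glib; trace_point() and the task fields are placeholders, not the block-copy code):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct task {
        int64_t offset;
        int64_t bytes;
    };

    static void trace_point(const struct task *t)
    {
        printf("chunk done: offset=%" PRId64 " bytes=%" PRId64 "\n",
               t->offset, t->bytes);
    }

    /* Wrong ordering would be: free(t); then trace and progress update.
     * Right ordering: consume every field of the task first, free last. */
    static void finish_chunk(struct task *t, int64_t *progress)
    {
        trace_point(t);
        *progress += t->bytes;
        free(t);                /* nothing touches t after this point */
    }

    int main(void)
    {
        struct task *t = malloc(sizeof(*t));
        int64_t progress = 0;

        if (!t) {
            return 1;
        }
        t->offset = 0;
        t->bytes = 65536;
        finish_chunk(t, &progress);
        printf("progress=%" PRId64 "\n", progress);
        return 0;
    }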
2020-05-13  qcow2: add zstd cluster compression  (Denis Plotnikov)
zstd significantly reduces cluster compression time. It provides better compression performance while maintaining the same compression ratio as zlib, which, at the moment, is the only compression method available.

Performance test results: the test compresses and decompresses a qemu qcow2 image with a freshly installed rhel-7.6 guest. Image cluster size: 64K. Image on-disk size: 2.2G. The test was conducted with a brd disk to reduce the influence of the disk subsystem on the results. The results are given in seconds.

compress cmd:
  time ./qemu-img convert -O qcow2 -c -o compression_type=[zlib|zstd] src.img [zlib|zstd]_compressed.img
decompress cmd:
  time ./qemu-img convert -O qcow2 [zlib|zstd]_compressed.img uncompressed.img

            compression                 decompression
          zlib       zstd             zlib       zstd
------------------------------------------------------------
real      65.5       16.3 (-75 %)     1.9        1.6 (-16 %)
user      65.0       15.8             5.3        2.5
sys        3.3        0.2             2.0        2.0

Both zlib and zstd gave the same compression ratio: 1.57; the compressed image size is 1.4G in both cases.

Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
QAPI part:
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200507082521.29210-4-dplotnikov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-13  qcow2: rework the cluster compression routine  (Denis Plotnikov)
The patch enables processing of the compression type defined for the image and chooses the appropriate method for (de)compressing image clusters. Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200507082521.29210-3-dplotnikov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
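A rough sketch of what such a dispatch looks like, as a generic illustration rather than the actual qcow2 routine (the enum values and the backend prototypes below are stand-ins, and the backends are left undefined):

    #include <sys/types.h>
    #include <stddef.h>

    enum compression_type { COMPRESS_ZLIB, COMPRESS_ZSTD };

    /* Placeholder backends; real routines would wrap the zlib deflate
     * API and ZSTD_compress() respectively. */
    ssize_t compress_zlib(void *dst, size_t dst_size,
                          const void *src, size_t src_size);
    ssize_t compress_zstd(void *dst, size_t dst_size,
                          const void *src, size_t src_size);

    /* Pick the cluster compression routine from the type recorded in
     * the image instead of hard-coding zlib; decompression dispatches
     * the same way. */
    ssize_t compress_cluster(enum compression_type type,
                             void *dst, size_t dst_size,
                             const void *src, size_t src_size)
    {
        switch (type) {
        case COMPRESS_ZLIB:
            return compress_zlib(dst, dst_size, src, src_size);
        case COMPRESS_ZSTD:
            return compress_zstd(dst, dst_size, src, src_size);
        default:
            return -1;   /* unknown type: such an image must be rejected */
        }
    }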
2020-05-13  qcow2: introduce compression type feature  (Denis Plotnikov)
The patch adds the preparation parts for the incompatible compression type feature in qcow2, allowing the use of different compression methods for (de)compressing image clusters.

It is implied that the compression type is set at image creation and can only be changed later by image conversion; thus the compression type defines the single compression algorithm used for the image and, therefore, for all image clusters.

The goal of the feature is to add support for other compression methods to qcow2, for example zstd, which compresses more effectively than zlib. The default compression is zlib. Images created with the zlib compression type are backward compatible with older qemu versions.

Adding the compression type breaks a number of tests, because the compression type is now reported on image creation and there are some changes in the qcow2 header size and offsets. The tests are fixed in the following ways:

* filter out compression_type for many tests
* fix header size, feature table size and backing file offset
  affected tests: 031, 036, 061, 080
  header_size += 8: 1 byte compression type, 7 bytes padding
  feature_table += 48: incompatible feature "compression type"
  backing_file_offset += 56 (8 + 48 -> header_change + feature_table_change)
* add "compression type" to test output matching when it isn't filtered
  affected tests: 049, 060, 061, 065, 082, 085, 144, 182, 185, 198, 206, 242, 255, 274, 280

Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
QAPI part:
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200507082521.29210-2-dplotnikov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-08  block: Drop unused .bdrv_has_zero_init_truncate  (Eric Blake)
Now that there are no clients of bdrv_has_zero_init_truncate, none of the drivers need to worry about providing it. What's more, this eliminates a source of some confusion: a literal reading of the documentation as written in ceaca56f and implemented in commit 1dcaf527 claims that a driver which returns 0 for bdrv_has_zero_init_truncate() must not return 1 for bdrv_has_zero_init(); this condition was violated for parallels, qcow, and sometimes for vdi, although in practice it did not matter since those drivers also lacked .bdrv_co_truncate. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-10-eblake@redhat.com> Acked-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  vhdx: Rework truncation logic  (Eric Blake)
The vhdx driver uses truncation for image growth, with a special case for blocks that already read as zero but which are only being partially written. But with a bit of rearranging, it's just as easy to defer the decision on whether truncation resulted in zeroes to the actual allocation attempt, reducing the number of places that still use bdrv_has_zero_init_truncate. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-9-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  parallels: Rework truncation logic  (Eric Blake)
The parallels driver tries to use truncation for image growth, but can only do so when reads are guaranteed as zero. Now that we have a way to request zero contents from truncation, we can defer the decision to actual allocation attempts rather than up front, reducing the number of places that still use bdrv_has_zero_init_truncate. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-8-eblake@redhat.com> Reviewed-by: Denis V. Lunev <den@openvz.org> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  ssh: Support BDRV_REQ_ZERO_WRITE for truncate  (Eric Blake)
Our .bdrv_has_zero_init_truncate can detect when the remote side always zero fills; we can reuse that same knowledge to implement BDRV_REQ_ZERO_WRITE by ignoring it when the server gives it to us for free. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-7-eblake@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  sheepdog: Support BDRV_REQ_ZERO_WRITE for truncate  (Eric Blake)
Our .bdrv_has_zero_init_truncate always returns 1 because sheepdog always 0-fills; we can use that same knowledge to implement BDRV_REQ_ZERO_WRITE by ignoring it. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-6-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  rbd: Support BDRV_REQ_ZERO_WRITE for truncate  (Eric Blake)
Our .bdrv_has_zero_init_truncate always returns 1 because rbd always 0-fills; we can use that same knowledge to implement BDRV_REQ_ZERO_WRITE by ignoring it. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-5-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  nfs: Support BDRV_REQ_ZERO_WRITE for truncate  (Eric Blake)
Our .bdrv_has_zero_init_truncate returns 1 if we detect that the OS always 0-fills; we can use that same knowledge to implement BDRV_REQ_ZERO_WRITE by ignoring it when the OS gives it to us for free. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-4-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  file-win32: Support BDRV_REQ_ZERO_WRITE for truncate  (Eric Blake)
When using bdrv_file, .bdrv_has_zero_init_truncate always returns 1; therefore, we can behave just like file-posix, and always implement BDRV_REQ_ZERO_WRITE by ignoring it since the OS gives it to us for free (note that file-posix.c had to use an 'if' because it shared code between regular files and block devices, but in file-win32.c, bdrv_host_device uses a separate .bdrv_file_open). Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428202905.770727-3-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  gluster: Drop useless has_zero_init callback  (Eric Blake)
block.c already defaults to 0 if we don't provide a callback; there's no need to write a callback that always fails. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-Id: <20200428202905.770727-2-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  qcow2: Fix preallocation on block devices  (Max Reitz)
Calling bdrv_getlength() to get the pre-truncate file size will not really work on block devices, because they always have the same length, and trying to write beyond it will fail with a rather cryptic error message. Instead, we should use qcow2_get_last_cluster() and fall back to bdrv_getlength() only when necessary.

Before this patch:

  $ truncate -s 1G test.img
  $ sudo losetup -f --show test.img
  /dev/loop0
  $ sudo qemu-img create -f qcow2 -o preallocation=full /dev/loop0 64M
  Formatting '/dev/loop0', fmt=qcow2 size=67108864 cluster_size=65536 preallocation=full lazy_refcounts=off refcount_bits=16
  qemu-img: /dev/loop0: Could not resize image: Failed to resize refcount structures: No space left on device

With this patch:

  $ sudo qemu-img create -f qcow2 -o preallocation=full /dev/loop0 64M
  Formatting '/dev/loop0', fmt=qcow2 size=67108864 cluster_size=65536 preallocation=full lazy_refcounts=off refcount_bits=16
  qemu-img: /dev/loop0: Could not resize image: Failed to resize underlying file: Preallocation mode 'full' unsupported for this non-regular file

As you can see, it still fails, but now the problem is missing support at the block device level, so we at least get a better error message.

Note that we cannot preallocate block devices on truncate by design, because we do not know what area to preallocate. Their length is always the same; the truncate operation does not change it.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200505141801.1096763-1-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  backup: Make sure that source and target size match  (Kevin Wolf)
Since the introduction of a backup filter node in commit 00e30f05d, the backup block job crashes when the target image is smaller than the source image because it will try to write after the end of the target node without having BLK_PERM_RESIZE. (Previously, the BlockBackend layer would have caught this and errored out gracefully.)

We can fix this and even do better than the old behaviour: Check that source and target have the same image size at the start of the block job and unshare BLK_PERM_RESIZE. (This permission was already unshared before the same commit 00e30f05d, but the BlockBackend that was used to make the restriction was removed without a replacement.) This will immediately error out when starting the job instead of only when writing to a block that doesn't exist in the target.

Longer target than source would technically work because we would never write to blocks that don't exist, but semantically these are invalid, too, because a backup is supposed to create a copy, not just an image that starts with a copy.

Fixes: 00e30f05de1d19586345ec373970ef4c192c6270
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1778593
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200430142755.315494-4-kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
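The up-front check amounts to comparing the two lengths before the first write is issued; in the real job the lengths would come from bdrv_getlength() on source and target, and BLK_PERM_RESIZE is additionally unshared (not shown here). A minimal generic sketch:

    #include <errno.h>
    #include <stdint.h>

    /* Returns 0 if the backup job may start, a negative errno otherwise. */
    int check_backup_sizes(int64_t source_len, int64_t target_len)
    {
        if (source_len < 0 || target_len < 0) {
            return -EIO;        /* querying a length already failed */
        }
        if (source_len != target_len) {
            return -EINVAL;     /* a backup must be an exact-size copy */
        }
        return 0;
    }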
2020-05-08  backup: Improve error for bdrv_getlength() failure  (Kevin Wolf)
bdrv_get_device_name() will be an empty string with modern management tools that don't use -drive. Use bdrv_get_device_or_node_name() instead so that the node name is used if the BlockBackend is anonymous. While at it, start with upper case to make the message consistent with the rest of the function. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-Id: <20200430142755.315494-3-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  vmdk: Flush only once in vmdk_L2update()  (Kevin Wolf)
If we have a backup L2 table, we currently flush once after writing to the active L2 table and again after writing to the backup table. A single flush is enough and makes things a little less slow. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430133007.170335-6-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
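Conceptually the change is "write both tables, then flush once" instead of flushing after each write. A self-contained toy sketch of that ordering (write_table() and flush_image() are stand-ins, not the vmdk functions):

    #include <stdio.h>

    enum table_id { ACTIVE_L2, BACKUP_L2 };

    /* Stand-ins for the image-file write and flush operations. */
    static int write_table(enum table_id id) { printf("write table %d\n", id); return 0; }
    static int flush_image(void) { printf("flush\n"); return 0; }

    /* Before this change the pattern was write/flush/write/flush; now
     * both table writes are issued first and one flush makes them
     * durable together, halving the number of (slow) flushes. */
    static int update_l2(int have_backup_table)
    {
        int ret = write_table(ACTIVE_L2);
        if (ret < 0) {
            return ret;
        }
        if (have_backup_table) {
            ret = write_table(BACKUP_L2);
            if (ret < 0) {
                return ret;
            }
        }
        return flush_image();   /* one flush covers both writes */
    }

    int main(void) { return update_l2(1); }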
2020-05-08  vmdk: Don't update L2 table for zero write on zero cluster  (Kevin Wolf)
If a cluster is already zeroed, we don't have to call vmdk_L2update(), which is rather slow because it flushes the image file. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430133007.170335-5-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  vmdk: Fix partial overwrite of zero cluster  (Kevin Wolf)
When overwriting a zero cluster, we must not perform copy-on-write from the backing file, but from a zeroed buffer. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430133007.170335-4-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  vmdk: Fix zero cluster allocation  (Kevin Wolf)
m_data must contain valid data even for zero clusters when no cluster was allocated in the image file. Without this, zero writes segfault with images that have zeroed_grain=on. For zero writes, we don't want to allocate a cluster in the image file even in compressed files. Fixes: 524089bce43fd1cd3daaca979872451efa2cf7c6 Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430133007.170335-3-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  vmdk: Rename VmdkMetaData.valid to new_allocation  (Kevin Wolf)
m_data is used for zero clusters even though valid == 0. It really only means that a new cluster was allocated in the image file. Rename it to reflect this. While at it, change it from int to bool, too. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430133007.170335-2-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  qcow2: Avoid integer wraparound in qcow2_co_truncate()  (Alberto Garcia)
After commit f01643fb8b47e8a70c04bbf45e0f12a9e5bc54de when an image is extended and BDRV_REQ_ZERO_WRITE is set then the new clusters are zeroized. The code however does not detect correctly situations when the old and the new end of the image are within the same cluster. The problem can be reproduced with these steps:

   qemu-img create -f qcow2 backing.qcow2 1M
   qemu-img create -f qcow2 -F qcow2 -b backing.qcow2 top.qcow2
   qemu-img resize --shrink top.qcow2 520k
   qemu-img resize top.qcow2 567k

In the last step offset - zero_start causes an integer wraparound.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200504155217.10325-1-berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
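The arithmetic can be reproduced in isolation: with 64 KiB clusters, the old end (520k) rounded up to a cluster boundary is 576 KiB, which is larger than the new end (567k), so an unsigned offset - zero_start wraps around. A self-contained demonstration of the failure mode and an obvious guard (not the actual qcow2 fix):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ROUND_UP(x, a) (((x) + (a) - 1) / (a) * (a))

    int main(void)
    {
        uint64_t cluster_size = 64 * 1024;
        uint64_t old_length   = 520 * 1024;  /* qemu-img resize --shrink 520k */
        uint64_t offset       = 567 * 1024;  /* qemu-img resize 567k          */

        /* First cluster boundary at or above the old end of the image. */
        uint64_t zero_start = ROUND_UP(old_length, cluster_size);  /* 576 KiB */

        /* Buggy: both ends lie within the same cluster, so this wraps. */
        uint64_t nb_bytes = offset - zero_start;
        printf("unguarded size: %" PRIu64 " bytes (wrapped!)\n", nb_bytes);

        /* Guarded: only zeroize whole clusters past the old end, if any. */
        if (offset > zero_start) {
            printf("zeroize %" PRIu64 " bytes\n", offset - zero_start);
        } else {
            printf("old and new end share a cluster: nothing to zeroize\n");
        }
        return 0;
    }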
2020-05-07  block: luks: better error message when creating too large files  (Maxim Levitsky)
Currently, if you attempt to create a too large file with luks, you get the following error message:

  Formatting 'test.luks', fmt=luks size=17592186044416 key-secret=sec0
  qemu-img: test.luks: Could not resize file: File too large

While for the raw format the error message is:

  qemu-img: test.img: The image size is too large for file format 'raw'

The reason for this is that qemu-img checks the errno of the failure and presents the latter error when it is -EFBIG. However, the generic crypto code 'swallows' the errno and replaces it with -EIO.

As an attempt to make it better, we can make the luks driver detect -EFBIG and in this case present a better error message, which is what this patch does. The new error message is:

  qemu-img: error creating test.luks: The requested file size is too large

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1534898
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
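The shape of the fix is to detect -EFBIG from the underlying layer and substitute a clearer message instead of the generic strerror() text. A self-contained sketch; the real code uses QEMU's error reporting, and err_out() below is a stand-in:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void err_out(const char *msg)
    {
        fprintf(stderr, "qemu-img: error creating test.luks: %s\n", msg);
    }

    /* Map the low-level errno onto a user-facing message. */
    static void report_create_error(int ret)
    {
        if (ret == -EFBIG) {
            /* instead of the generic "File too large", be explicit */
            err_out("The requested file size is too large");
        } else {
            err_out(strerror(-ret));
        }
    }

    int main(void)
    {
        report_create_error(-EFBIG);
        return 0;
    }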
2020-05-05  Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2020-05-05' into staging  (Peter Maydell)
Block patches:
- Asynchronous copying for block-copy (i.e., the backup job)
- Allow resizing of qcow2 images when they have internal snapshots
- iotests: Logging improvements for Python tests
- iotest 153 fix, and block comment cleanups

# gpg: Signature made Tue 05 May 2020 13:56:58 BST
# gpg: using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
# gpg: issuer "mreitz@redhat.com"
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40

* remotes/maxreitz/tags/pull-block-2020-05-05: (24 commits)
  block/block-copy: use aio-task-pool API
  block/block-copy: refactor task creation
  block/block-copy: add state pointer to BlockCopyTask
  block/block-copy: alloc task on each iteration
  block/block-copy: rename in-flight requests to tasks
  Fix iotest 153
  block: Comment cleanups
  qcow2: Tweak comment about bitmaps vs. resize
  qcow2: Allow resize of images with internal snapshots
  block: Add blk_new_with_bs() helper
  iotests: use python logging for iotests.log()
  iotests: Mark verify functions as private
  iotest 258: use script_main
  iotests: add script_initialize
  iotests: add hmp helper with logging
  iotests: limit line length to 79 chars
  iotests: touch up log function signature
  iotests: drop pre-Python 3.4 compatibility code
  iotests: alphabetize standard imports
  iotests: add pylintrc file
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-05  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2020-05-04' into staging  (Peter Maydell)
nbd patches for 2020-05-04
- reduce client-side fragmentation of NBD trim and status requests
- fix iotest 41 when run in deep tree
- fix socket activation in qemu-nbd

# gpg: Signature made Mon 04 May 2020 22:12:21 BST
# gpg: using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A

* remotes/ericb/tags/pull-nbd-2020-05-04:
  block/nbd-client: drop max_block restriction from discard
  block/nbd-client: drop max_block restriction from block_status
  iotests/041: Fix NBD socket path
  tools: Fix use of fcntl(F_SETFD) during socket activation

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-05  Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-for-5.1-pull-request' into staging  (Peter Maydell)
trivial patches (20200504)

Silent static analyzer warning
Remove dead assignments
Support -chardev serial on macOS
Update MAINTAINERS
Some cosmetic changes

# gpg: Signature made Mon 04 May 2020 16:45:18 BST
# gpg: using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg: issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C

* remotes/vivier2/tags/trivial-branch-for-5.1-pull-request:
  hw/timer/pxa2xx_timer: Add assertion to silent static analyzer warning
  hw/timer/stm32f2xx_timer: Remove dead assignment
  hw/gpio/aspeed_gpio: Remove dead assignment
  hw/isa/i82378: Remove dead assignment
  hw/ide/sii3112: Remove dead assignment
  hw/input/adb-kbd: Remove dead assignment
  hw/i2c/pm_smbus: Remove dead assignment
  blockdev: Remove dead assignment
  block: Avoid dead assignment
  Compress lines for immediate return
  chardev: Add macOS to list of OSes that support -chardev serial
  MAINTAINERS: Update Keith Busch's email address
  elf_ops: Don't try to g_mapped_file_unref(NULL)
  hw/mem/pc-dimm: Fix line over 80 characters warning
  hw/mem/pc-dimm: Print slot number on error at pc_dimm_pre_plug()
  MAINTAINERS: Mark the LatticeMico32 target as orphan
  timer/exynos4210_mct: Remove redundant statement in exynos4210_mct_write()
  display/blizzard: use extract16() for fix clang analyzer warning in blizzard_draw_line16_32()
  scsi/esp-pci: add g_assert() for fix clang analyzer warning in esp_pci_io_write()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-05  block/block-copy: use aio-task-pool API  (Vladimir Sementsov-Ogievskiy)
Run block_copy iterations in parallel in aio tasks.

Changes:
- BlockCopyTask becomes an aio task structure. Add a zeroes field to pass it to block_copy_do_copy.
- add call state - it's a state of one call of block_copy(), shared between parallel tasks. For now it is used only to keep information about the first error: is it a read error or not.
- convert block_copy_dirty_clusters to an aio-task loop.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200429130847.28124-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block/block-copy: refactor task creation  (Vladimir Sementsov-Ogievskiy)
Instead of just relying on the comment "Called only on full-dirty region" in block_copy_task_create() let's move initial dirty area search directly to block_copy_task_create(). Let's also use effective bdrv_dirty_bitmap_next_dirty_area instead of looping through all non-dirty clusters. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200429130847.28124-5-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block/block-copy: add state pointer to BlockCopyTask  (Vladimir Sementsov-Ogievskiy)
We are going to use aio-task-pool API, so we'll need state pointer in BlockCopyTask anyway. Add it now and use where possible. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200429130847.28124-4-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block/block-copy: alloc task on each iteration  (Vladimir Sementsov-Ogievskiy)
We are going to use aio-task-pool API, so tasks will be handled in parallel. We need therefore separate allocated task on each iteration. Introduce this logic now. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200429130847.28124-3-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block/block-copy: rename in-flight requests to tasks  (Vladimir Sementsov-Ogievskiy)
We are going to use aio-task-pool API and extend in-flight request structure to be a successor of AioTask, so rename things appropriately. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200429130847.28124-2-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block: Comment cleanups  (Eric Blake)
It's been a while since we got rid of the sector-based bdrv_read and bdrv_write (commit 2e11d756); let's finish the job on a few remaining comments. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200428213807.776655-1-eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  qcow2: Tweak comment about bitmaps vs. resize  (Eric Blake)
Our comment did not actually match the code. Rewrite the comment to be less sensitive to any future changes to qcow2-bitmap.c that might implement scenarios that we currently reject. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200428192648.749066-4-eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  qcow2: Allow resize of images with internal snapshots  (Eric Blake)
We originally refused to allow resize of images with internal snapshots because the v2 image format did not require the tracking of snapshot size, making it impossible to safely revert to a snapshot with a different size than the current view of the image. But the snapshot size tracking was rectified in v3, and our recent fixes to qemu-img amend (see 0a85af35) guarantee that we always have a valid snapshot size. Thus, we no longer need to artificially limit image resizes, but it does become one more thing that would prevent a downgrade back to v2. And now that we support different-sized snapshots, it's also easy to fix reverting to a snapshot to apply the new size.

Upgrade iotest 61 to cover this (we previously had NO coverage of refusal to resize while snapshots exist).

Note that the amend process can fail but still have effects: in particular, since we break things into upgrade, resize, downgrade, a failure during resize does not roll back changes made during upgrade, nor does failure in downgrade roll back a resize. But this situation is pre-existing even without this patch; and without journaling, the best we could do is minimize the chance of partial failure by collecting all changes prior to doing any writes - which adds a lot of complexity but could still fail with EIO. On the other hand, we are careful that even if we have partial modification but then fail, the image is left viable (that is, we are careful to sequence things so that after each successful cluster write, there may be transient leaked clusters but no corrupt metadata). And complicating the code to make it more transaction-like is not worth the effort: a user can always request multiple 'qemu-img amend' changing one thing each, if they need finer-grained control over detecting the first failure than what they get by letting qemu decide how to sequence multiple changes.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200428192648.749066-3-eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-05  block: Add blk_new_with_bs() helper  (Eric Blake)
There are several callers that need to create a new block backend from an existing BDS; make the task slightly easier with a common helper routine. Suggested-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200424190903.522087-2-eblake@redhat.com> [mreitz: Set @ret only in error paths, see https://lists.nongnu.org/archive/html/qemu-block/2020-04/msg01216.html] Signed-off-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200428192648.749066-2-eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-05-04  block/nbd-client: drop max_block restriction from discard  (Vladimir Sementsov-Ogievskiy)
The NBD spec was updated (see nbd.git commit 9f30fedb) so that max_block doesn't relate to NBD_CMD_TRIM. So, drop the restriction. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200401150112.9557-3-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> [eblake: tweak commit message to call out NBD commit] Signed-off-by: Eric Blake <eblake@redhat.com>
2020-05-04  block/nbd-client: drop max_block restriction from block_status  (Vladimir Sementsov-Ogievskiy)
The NBD spec was updated (see nbd.git commit 9f30fedb) so that max_block doesn't relate to NBD_CMD_BLOCK_STATUS. So, drop the restriction. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200401150112.9557-2-vsementsov@virtuozzo.com> [eblake: tweak commit message to call out NBD commit] Signed-off-by: Eric Blake <eblake@redhat.com>
2020-05-04  lockable: replaced locks with lock guard macros where appropriate  (Daniel Brodsky)
- ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
- replaced result with QEMU_LOCK_GUARD if all unlocks at function end
- replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end

Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-id: 20200404042108.389635-3-dnbrdsky@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
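What the conversion looks like on a toy function (a sketch that assumes QEMU's "qemu/lockable.h" and an initialized QemuMutex; not taken from any particular converted file):

    #include "qemu/osdep.h"
    #include "qemu/lockable.h"

    static QemuMutex lock;
    static int counter;

    /* Before: explicit lock/unlock pairs on every exit path. */
    static int get_counter_old(void)
    {
        int val;

        qemu_mutex_lock(&lock);
        val = counter;
        qemu_mutex_unlock(&lock);
        return val;
    }

    /* After: the guard unlocks automatically when the scope is left,
     * which is what makes early returns inside "if" blocks safe. */
    static int get_counter_new(void)
    {
        QEMU_LOCK_GUARD(&lock);
        return counter;
    }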
2020-05-04  Compress lines for immediate return  (Simran Singhal)
Compress two lines into a single line when an immediate return statement is found. This also removes the variables progress, val, data, ret and sock, as they are no longer needed. Remove the space between the function "mixer_load" and '(' to fix the checkpatch.pl error:

  ERROR: space prohibited between function name and open parenthesis '('

Done using the following coccinelle script:

  @@
  local idexpression ret;
  expression e;
  @@
  -ret =
  +return
   e;
  -return ret;

Signed-off-by: Simran Singhal <singhalsimran0@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200401165314.GA3213@simran-Inspiron-5558>
[lv: in handle_aiocb_write_zeroes_unmap() move "int ret" inside the #ifdef]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
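The pattern the script rewrites, shown on a trivial example that is not from the QEMU tree:

    /* Before: a temporary that exists only to be returned. */
    int add_before(int a, int b)
    {
        int ret;

        ret = a + b;
        return ret;
    }

    /* After: the assignment and the return are compressed into one
     * line, and the now-unused variable disappears. */
    int add_after(int a, int b)
    {
        return a + b;
    }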
2020-04-30  qcow2: Forward ZERO_WRITE flag for full preallocation  (Kevin Wolf)
BDRV_REQ_ZERO_WRITE is currently implemented in a way that the image is first possibly preallocated and then the zero flag is added to all clusters. This means that a copy-on-write operation may be needed when writing to these clusters, despite having used preallocation, negating one of the major benefits of preallocation.

Instead, try to forward the BDRV_REQ_ZERO_WRITE to the protocol driver, and if the protocol driver can ensure that the new area reads as zeros, we can skip setting the zero flag in the qcow2 layer. Unfortunately, the same approach doesn't work for metadata preallocation, so we'll still set the zero flag there.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200424142701.67053-1-kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  block: truncate: Don't make backing file data visible  (Kevin Wolf)
When extending the size of an image that has a backing file larger than its old size, make sure that the backing file data doesn't become visible in the guest, but the added area is properly zeroed out.

Consider the following scenario where the overlay is shorter than its backing file:

    base.qcow2:    AAAAAAAA
    overlay.qcow2: BBBB

When resizing (extending) overlay.qcow2, the new blocks should not stay unallocated and make the additional As from base.qcow2 visible like before this patch, but zeros should be read.

A similar case happens with the various variants of a commit job when an intermediate file is short (- for unallocated):

    base.qcow2: A-A-AAAA
    mid.qcow2:  BB-B
    top.qcow2:  C--C--C-

After commit top.qcow2 to mid.qcow2, the following happens:

    mid.qcow2:  CB-C00C0 (correct result)
    mid.qcow2:  CB-C--C- (before this fix)

Without the fix, blocks that previously read as zeros on top.qcow2 suddenly turn into A.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200424125448.63318-8-kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  file-posix: Support BDRV_REQ_ZERO_WRITE for truncate  (Kevin Wolf)
For regular files, we always get BDRV_REQ_ZERO_WRITE behaviour from the OS, so we can advertise the flag and just ignore it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200424125448.63318-7-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  raw-format: Support BDRV_REQ_ZERO_WRITE for truncate  (Kevin Wolf)
The raw format driver can simply forward the flag and let its bs->file child take care of actually providing the zeros. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200424125448.63318-6-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  qcow2: Support BDRV_REQ_ZERO_WRITE for truncate  (Kevin Wolf)
If BDRV_REQ_ZERO_WRITE is set and we're extending the image, calling qcow2_cluster_zeroize() with flags=0 does the right thing: It doesn't undo any previous preallocation, but just adds the zero flag to all relevant L2 entries. If an external data file is in use, a write_zeroes request to the data file is made instead. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200424125448.63318-5-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  block-backend: Add flags to blk_truncate()  (Kevin Wolf)
Now that node level interface bdrv_truncate() supports passing request flags to the block driver, expose this on the BlockBackend level, too. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200424125448.63318-4-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  block: Add flags to bdrv(_co)_truncate()  (Kevin Wolf)
Now that block drivers can support flags for .bdrv_co_truncate, expose the parameter in the node level interfaces bdrv_co_truncate() and bdrv_truncate(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200424125448.63318-3-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-30  block: Add flags to BlockDriver.bdrv_co_truncate()  (Kevin Wolf)
This adds a new BdrvRequestFlags parameter to the .bdrv_co_truncate() driver callbacks, and a supported_truncate_flags field in BlockDriverState that allows drivers to advertise support for request flags in the context of truncate. For now, we always pass 0 and no drivers declare support for any flag. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200424125448.63318-2-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
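Taken together with the follow-up patches above, the mechanism looks roughly like this from a driver's point of view. This is a paraphrased sketch, not literal QEMU code: only identifiers named in these commit messages are used, the callback's parameter list may not match the tree exactly, and drv_open()/drv_co_truncate() are hypothetical driver functions.

    /* Paraphrased sketch of the driver-facing mechanism. */

    static int drv_open(BlockDriverState *bs)
    {
        /* Advertise which truncate flags this driver can honour.  A
         * driver whose newly added blocks always read as zero can
         * claim BDRV_REQ_ZERO_WRITE here. */
        bs->supported_truncate_flags = BDRV_REQ_ZERO_WRITE;
        return 0;
    }

    static int coroutine_fn
    drv_co_truncate(BlockDriverState *bs, int64_t offset, bool exact,
                    PreallocMode prealloc, BdrvRequestFlags flags,
                    Error **errp)
    {
        /* A driver that gets zeroing "for free" simply ignores the
         * flag; one that does not must zero the new area itself, or
         * not advertise the flag at all. */

        /* ... grow or shrink the image to @offset ... */
        return 0;
    }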
2020-04-30  qapi: Only input visitors can actually fail  (Markus Armbruster)
The previous few commits have made this more obvious, and removed the one exception. Time to clarify the documentation, and drop dead error checking. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200424084338.26803-13-armbru@redhat.com>
2020-04-29  block/file-posix: Fix check_cache_dropped() error handling  (Markus Armbruster)
The Error ** argument must be NULL, &error_abort, &error_fatal, or a pointer to a variable containing NULL. Passing an argument of the latter kind twice without clearing it in between is wrong: if the first call sets an error, it no longer points to NULL for the second call. check_cache_dropped() calls error_setg() in a loop. It fails to break the loop in one instance. If a subsequent iteration error_setg()s again, it trips error_setv()'s assertion. Fix it to break the loop. Fixes: 31be8a2a97ecba7d31a82932286489cac318e9e9 Cc: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200422130719.28225-3-armbru@redhat.com>
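The bug and the fix, reduced to their essence in a self-contained sketch (a toy Err type stands in for QEMU's Error/error_setg() machinery):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in for QEMU's Error machinery: like error_setg(),
     * setting an error on a pointer that is already non-NULL is a bug
     * (the real code trips an assertion in error_setv()). */
    typedef struct { const char *msg; } Err;

    static Err storage;

    static void set_err(Err **errp, const char *msg)
    {
        if (*errp) {
            fprintf(stderr, "BUG: second error set on the same pointer\n");
            return;
        }
        storage.msg = msg;
        *errp = &storage;
    }

    /* Sketch of the fixed loop: stop scanning at the first failure
     * instead of calling set_err() again on later iterations. */
    static bool check_all(int n, Err **errp)
    {
        for (int i = 0; i < n; i++) {
            bool page_resident = (i % 2);   /* pretend odd pages fail */
            if (page_resident) {
                set_err(errp, "page cache not dropped");
                break;                      /* the actual fix */
            }
        }
        return !*errp;
    }

    int main(void)
    {
        Err *err = NULL;

        if (!check_all(4, &err)) {
            fprintf(stderr, "check failed: %s\n", err->msg);
        }
        return 0;
    }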