path: root/block.c
2020-12-11  block: introduce BDRV_MAX_LENGTH  (Vladimir Sementsov-Ogievskiy)
We are going to modify the block layer to work with 64-bit requests. The first step is moving to the int64_t type for both the offset and bytes arguments in all block-request-related functions. That is mostly safe (when widening signed or unsigned int to int64_t), but switching from uint64_t is questionable. So, let's first establish the set of requests we want to work with.

First, signed int64_t should be enough, as off_t is signed anyway. Then, obviously, offset + bytes should not overflow. And most interesting: (offset + bytes) aligned up should not overflow either. Aligned to what alignment? The first thing that comes to mind is bs->bl.request_alignment, as we align requests up to this alignment. But there is another thing: look at bdrv_mark_request_serialising(). It aligns a request up to some given alignment, and this parameter may be bdrv_get_cluster_size(), which is often a lot greater than bs->bl.request_alignment. Note also that bdrv_mark_request_serialising() uses signed int64_t for its calculations. So we already depend on some restrictions.

Happily, bdrv_get_cluster_size() returns int, and bs->bl.request_alignment has a 32-bit unsigned type but is defined to be a power of 2 less than INT_MAX. So we may establish that INT_MAX is the absolute maximum for any kind of alignment that may occur with a request. Note that bdrv_get_cluster_size() is not documented to return a power of 2, but bdrv_mark_request_serialising() behaves as if it did. Also, backup uses bdi.cluster_size and is not prepared for it not being a power of 2. So, let's establish that QEMU supports only power-of-2 clusters and alignments. Alignment therefore can't be greater than 2^30.

Finally, to be safe with calculations, and to avoid calculating different maximums for different nodes (depending on cluster size and request_alignment), let's simply set QEMU_ALIGN_DOWN(INT64_MAX, 2^30) as the absolute maximum byte length for QEMU. It is actually not much less than INT64_MAX.

OK, then, let's apply it to block/io. Consider all block/io entry points that take offset/bytes. There are four bytes/offset interface functions: bdrv_co_preadv_part(), bdrv_co_pwritev_part(), bdrv_co_copy_range_internal() and bdrv_co_pdiscard(); we check them all with bdrv_check_request(). We also have one entry point with only an offset, bdrv_co_truncate(); check the offset. And there is one public structure, BdrvTrackedRequest. Happily, it has only three external users: file-posix.c (adapted by this patch), write-threshold.c (only reads fields) and test-write-threshold.c (sets obviously small constant values). It would be better to make the structure private and add corresponding interfaces, but it's not obvious what kind of interface file-posix.c needs, so keep it public and add the corresponding assertions.

After this patch we'll convert functions in block/io.c to int64_t bytes and offset parameters. We can assume that an offset/bytes pair always satisfies the new restrictions, and make corresponding assertions where needed. If we reach some offset/bytes point in block/io.c missing bdrv_check_request(), it is considered a bug. Likewise, if block/io.c modifies an offset/bytes request, expanding it by more than aligning up to request_alignment, that is a bug too. For all I/O requests except discard we keep, for now, the old restriction of a 32-bit request length.

The iotest 206 output error message changed, as the test disk size is now larger than the new limit. Add one more test case with the new maximum disk size to cover the too-big-L1 case.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20201203222713.13507-5-vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
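The limit arithmetic described above is easy to check in isolation. The following standalone sketch uses invented macro and function names (ALIGN_DOWN, MAX_ALIGNMENT, MAX_LENGTH, check_request), not QEMU's actual definitions, to show why a 2^30 alignment ceiling plus aligning INT64_MAX down to it keeps offset + bytes safely representable even after aligning up:

    /*
     * Standalone sketch of the limit arithmetic described above. The macro and
     * function names are invented for this demo, not QEMU's actual definitions.
     */
    #include <assert.h>
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Round x down to a multiple of the power-of-two alignment a. */
    #define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))

    #define MAX_ALIGNMENT  ((int64_t)1 << 30)   /* covers any alignment below INT_MAX */
    #define MAX_LENGTH     ALIGN_DOWN(INT64_MAX, MAX_ALIGNMENT)

    /*
     * A request is acceptable if offset and bytes are non-negative, both stay
     * within MAX_LENGTH, and their sum does so too; since MAX_LENGTH is a
     * multiple of MAX_ALIGNMENT, aligning the end up cannot overflow either.
     */
    static bool check_request(int64_t offset, int64_t bytes)
    {
        if (offset < 0 || bytes < 0 || offset > MAX_LENGTH || bytes > MAX_LENGTH) {
            return false;
        }
        return offset <= MAX_LENGTH - bytes;
    }

    int main(void)
    {
        printf("maximum byte length: 0x%" PRIx64 "\n", (uint64_t)MAX_LENGTH);
        assert(check_request(0, MAX_LENGTH));    /* largest possible request */
        assert(!check_request(1, MAX_LENGTH));   /* end would exceed the limit */
        return 0;
    }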
2020-12-11  fuse: Allow exporting BDSs via FUSE  (Max Reitz)
block-export-add type=fuse allows mounting block graph nodes via FUSE on some existing regular file. That file should then appear like a raw disk image, and accesses to it result in accesses to the exported BDS.

Right now, we only implement the necessary block export functions to set it up and shut it down. We do not implement any access functions, so accessing the mount point only results in errors. This will be addressed by a follow-up patch.

We keep a hash table of exported mount points, because we want to be able to detect when users try to use a mount point twice. This is because we invoke stat() to check whether the given mount point is a regular file, but if that file is served by ourselves (because it is already used as a mount point), then this stat() would have to be served by ourselves, too, which is impossible to do while we (as the caller) are waiting for it to settle. Therefore, keep track of mount point paths to at least catch the most obvious instances of that problem.

Signed-off-by: Max Reitz <mreitz@redhat.com> Message-Id: <20201027190600.192171-3-mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
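The mount-point bookkeeping described above boils down to a string set keyed by path. A minimal glib sketch of the idea, with invented names and no relation to the actual export code:

    /*
     * Minimal sketch of tracking mount points in a string set so that reusing
     * a path can be rejected up front. All names are illustrative.
     */
    #include <glib.h>
    #include <stdio.h>

    static GHashTable *exports;   /* used as a set of mount-point paths */

    static int export_add(const char *mountpoint)
    {
        if (g_hash_table_contains(exports, mountpoint)) {
            fprintf(stderr, "'%s' is already used as a mount point\n", mountpoint);
            return -1;
        }
        g_hash_table_add(exports, g_strdup(mountpoint));
        return 0;
    }

    int main(void)
    {
        exports = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);
        export_add("/tmp/disk.img");   /* accepted */
        export_add("/tmp/disk.img");   /* rejected: already exported */
        g_hash_table_destroy(exports);
        return 0;
    }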
2020-11-09  block: make bdrv_drop_intermediate() less wrong  (Vladimir Sementsov-Ogievskiy)
First, the permission update loop tries to do its iterations transactionally, but the whole update is not transactional: nobody rolls back the successful loop iterations when a later iteration fails.

Second, in the iteration we have a nested permission update: c->klass->update_filename may point to bdrv_child_cb_update_filename(), which calls bdrv_backing_update_filename(), which may reopen the node read-write. The permission update system is not prepared for nested updates; at the least, it has intermediate permission-update state stored in the BdrvChild structures: has_backup_perm, backup_perm and backup_shared_perm.

So, let's first do bdrv_replace_node_common() (which is more transactional than the open-coded update in bdrv_drop_intermediate()) and then call update_filename() separately. We still do not roll back changes in case of update_filename() failure, but it's not much worse than the pre-patch behavior.

Note that bdrv_replace_node_common() does check for frozen children, so the corresponding check is dropped in bdrv_drop_intermediate().

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20201106124241.16950-4-vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-11-09  block: add bdrv_replace_node_common()  (Vladimir Sementsov-Ogievskiy)
Add a new parameter to bdrv_replace_node(): auto_skip. With auto_skip=false we'll have stricter behavior: update _all_ parents or fail. The new behavior will be used in the following commit in block.c, so keep the original function name as the public interface.

Note: the new error message is a bit funny in contrast with the further "Cannot" in the frozen-child case, but we'd better keep some difference so the two failures can be distinguished. Still, we should really refactor the should_update_child() call to also distinguish the different kinds of "should not"; let's do that later.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20201106124241.16950-3-vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-11-09  block: add forgotten bdrv_abort_perm_update() to bdrv_co_invalidate_cache()  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20201106124241.16950-2-vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-11-09  block: Fix integer promotion error in bdrv_getlength()  (Eric Blake)
Back in 2015, we attempted to fix error reporting for images that claimed to have more than INT64_MAX/512 sectors, but due to the type promotions caused by BDRV_SECTOR_SIZE being unsigned, this inadvertently forces all negative ret values to be slammed into -EFBIG rather than the original error. While we're at it, we can avoid the confusing ?: by spelling the logic more directly. Fixes: 4a9c9ea0d3 Reported-by: Guoyi Tu <tu.guoyi@h3c.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20201105155122.60943-1-eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
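The promotion problem is easy to reproduce outside QEMU. In this standalone illustration the sector-size constant and function names are made up; it shows how an unsigned constant in the comparison turns every negative error code into -EFBIG, and how spelling the logic out with early returns preserves the original error:

    /*
     * Demonstrates how an unsigned sector-size constant makes every negative
     * error code collapse into -EFBIG, and how early returns preserve it.
     * The constant and function names are made up for the demo.
     */
    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512ULL    /* unsigned, like BDRV_SECTOR_SIZE */

    static int64_t buggy_getlength(int64_t nb_sectors)
    {
        /*
         * nb_sectors is converted to unsigned for the comparison, so a negative
         * error value looks huge and gets replaced by -EFBIG.
         */
        int64_t ret = nb_sectors > INT64_MAX / SECTOR_SIZE ? -EFBIG : nb_sectors;
        return ret < 0 ? ret : ret * SECTOR_SIZE;
    }

    static int64_t fixed_getlength(int64_t nb_sectors)
    {
        if (nb_sectors < 0) {
            return nb_sectors;                    /* keep the original error */
        }
        if (nb_sectors > INT64_MAX / SECTOR_SIZE) {
            return -EFBIG;
        }
        return nb_sectors * SECTOR_SIZE;
    }

    int main(void)
    {
        printf("buggy: %lld, fixed: %lld\n",
               (long long)buggy_getlength(-EIO),
               (long long)fixed_getlength(-EIO));
        return 0;
    }

With -EIO passed in, the first function reports -EFBIG (-27 on Linux) while the second preserves -EIO (-5).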
2020-10-30  qapi: Add QAPI_LIST_PREPEND() macro  (Eric Blake)
block.c has a useful macro QAPI_LIST_ADD() for inserting at the front of any QAPI-generated list; move it from block.c to qapi/util.h so more places can use it, including one earlier place in block.c, and rename it to something more obvious (since we also have a lot of places that append, rather than prepend, to a list). There are many more places in the codebase that can benefit from using the macro, but converting them will be left to later patches. In theory, all QAPI list types are child classes of GenericList; but in practice, that relationship is not explicitly spelled out in the C type declarations (rather, it is something that happens implicitly due to C compatible layouts), and the macro does not actually depend on the GenericList type. We considered moving GenericList from visitor.h into util.h to group related code; however, such a move would be awkward if we do not also move GenericAlternate. Unfortunately, moving GenericAlternate would introduce its own problems of declaration circularity (qapi-builtin-types.h needs a complete definition of QEnumLookup from util.h, but GenericAlternate needs a complete definition of QType from qapi-builtin-types.h). Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20201027050556.269064-3-eblake@redhat.com> [eblake: s/ADD/PREPEND/ per suggestion by Markus]
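The prepend idiom the macro captures can be shown with a tiny self-contained list. This is a sketch of the pattern only, not the exact macro that landed in qapi/util.h; the list type and macro name are invented:

    /*
     * Sketch of a generic "prepend to a QAPI-style singly linked list" macro.
     * The real macro lives in qapi/util.h; this standalone version only
     * demonstrates the pattern, with an invented IntList type.
     */
    #include <glib.h>
    #include <stdio.h>

    #define LIST_PREPEND(list, elem) do {                      \
        __typeof__(list) _node = g_malloc0(sizeof(*_node));    \
        _node->value = (elem);                                 \
        _node->next = (list);                                  \
        (list) = _node;                                        \
    } while (0)

    typedef struct IntList {
        struct IntList *next;
        int value;
    } IntList;

    int main(void)
    {
        IntList *list = NULL;

        for (int i = 0; i < 3; i++) {
            LIST_PREPEND(list, i);            /* list becomes 2 -> 1 -> 0 */
        }
        for (IntList *l = list; l; l = l->next) {
            printf("%d\n", l->value);
        }
        while (list) {                        /* free the demo list */
            IntList *next = list->next;
            g_free(list);
            list = next;
        }
        return 0;
    }

The allocation is written as g_malloc0(sizeof(*_node)) rather than g_new(typeof(*_node), 1), in line with the simplification described in the QAPI_LIST_ADD entry below.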
2020-10-30  block: Simplify QAPI_LIST_ADD  (Eric Blake)
There is no need to rely on the verbosity of the gcc/clang compiler extension of g_new(typeof(X), 1) when we can instead use the standard g_malloc(sizeof(X)). In general, we like g_new over g_malloc for returning type X rather than void* to let the compiler catch more potential typing mistakes, but in this particular macro, our other use of typeof on the same line already ensures we are getting correct results. Suggested-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20201027050556.269064-2-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Markus Armbruster <armbru@redhat.com>
2020-10-27  block: End quiescent sections when a BDS is deleted  (Greg Kurz)
If a BDS gets deleted during blk_drain_all(), it might miss a call to bdrv_do_drained_end(). This means missing a call to aio_enable_external(), and the AIO context remains disabled forever. This can cause a device to become unresponsive and to disrupt the guest execution, i.e. hang, loop forever or worse.

This scenario is quite easy to encounter with virtio-scsi on POWER when punching multiple blockdev-create QMP commands while the guest is booting and is still running the SLOF firmware. This happens because SLOF disables/re-enables PCI devices multiple times via the IO/MEM/MASTER bits of the PCI_COMMAND register after the initial probe/feature negotiation, as it tends to work with a single device at a time at various stages, like probing and running block/network bootloaders, without doing a full reset in-between. This naturally generates many dataplane stops and starts, and thus many drain sections that can race with blockdev_create_run(). In the end, SLOF bails out.

It is somewhat reproducible on x86 as well, but it requires generating artificial dataplane start/stop activity with stop/cont QMP commands. In this case, SeaBIOS ends up looping forever, waiting for the virtio-scsi device to send a response to a command it never received.

Add a helper that pairs all previously called bdrv_do_drained_begin() with a bdrv_do_drained_end() and call it from bdrv_close(). While at it, update the "/bdrv-drain/graph-change/drain_all" test in test-bdrv-drain so that it can catch the issue.

BugId: https://bugzilla.redhat.com/show_bug.cgi?id=1874441 Signed-off-by: Greg Kurz <groug@kaod.org> Message-Id: <160346526998.272601.9045392804399803158.stgit@bahia.lan> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-10-09  block: Add bdrv_lock()/unlock()  (Kevin Wolf)
Inside of coroutine context, we can't directly use aio_context_acquire() for the AioContext of a block node because we already own the lock of the current AioContext and we need to avoid double locking to prevent deadlocks. This provides helper functions to lock the AioContext of a node only if it's not the same as the current AioContext. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20201005155855.256490-14-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
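Outside QEMU, the shape of such helpers is simply "take a context's lock only if it is not the one we already hold". A standalone sketch of that pattern; the Context type, the thread-local current context and the function names are all invented for the demo:

    /*
     * Sketch of "acquire a context's lock only if it is not the context we are
     * already running in", the pattern described above for AioContexts.
     * All names here are illustrative. Build with: gcc -pthread demo.c
     */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct Context {
        pthread_mutex_t lock;
    } Context;

    /* The context whose lock the current thread already holds. */
    static __thread Context *current_ctx;

    static void ctx_lock(Context *ctx)
    {
        if (ctx != current_ctx) {          /* avoid double-locking our own context */
            pthread_mutex_lock(&ctx->lock);
        }
    }

    static void ctx_unlock(Context *ctx)
    {
        if (ctx != current_ctx) {
            pthread_mutex_unlock(&ctx->lock);
        }
    }

    int main(void)
    {
        Context a = { PTHREAD_MUTEX_INITIALIZER };
        Context b = { PTHREAD_MUTEX_INITIALIZER };

        current_ctx = &a;                  /* pretend we are running in context a */
        ctx_lock(&a);                      /* no-op: it is already ours */
        ctx_lock(&b);                      /* actually takes b's lock */
        ctx_unlock(&b);
        ctx_unlock(&a);
        printf("done\n");
        return 0;
    }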
2020-10-09  block: Add bdrv_co_enter()/leave()  (Kevin Wolf)
Add a pair of functions to temporarily move the current coroutine to the AioContext of a given BlockDriverState. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20201005155855.256490-13-kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2020-10-05  block: generate coroutine-wrapper code  (Vladimir Sementsov-Ogievskiy)
Use the code generation implemented in the previous commit to generate coroutine wrappers in block.c and block/io.c. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200924185414.28642-6-vsementsov@virtuozzo.com>
2020-10-05  block: declare some coroutine functions in block/coroutines.h  (Vladimir Sementsov-Ogievskiy)
We are going to keep coroutine-wrappers code (structure-packing parameters, BDRV_POLL wrapper functions) in separate auto-generated files. So, we'll need a header with declaration of original _co_ functions, for those which are static now. As well, we'll need declarations for wrapper functions. Do these declarations now, as a preparation step. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200924185414.28642-4-vsementsov@virtuozzo.com>
2020-10-05  block: return error-code from bdrv_invalidate_cache  (Vladimir Sementsov-Ogievskiy)
This is the only coroutine wrapper from block.c and block/io.c which doesn't return a value, so let's convert it to the common behavior, to simplify moving to generated coroutine wrappers in a further commit. Also, bdrv_invalidate_cache is a void function, returning error only through **errp parameter, which is considered to be bad practice, as it forces callers to define and propagate local_err variable, so conversion is good anyway. This patch leaves the conversion of .bdrv_co_invalidate_cache() driver callbacks and bdrv_invalidate_cache_all() for another day. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200924185414.28642-2-vsementsov@virtuozzo.com>
2020-10-02  block/export: Add blk_exp_close_all(_type)  (Kevin Wolf)
This adds a function to shut down all block exports, and another one to shut down the block exports of a single type. The latter is used for now when stopping the NBD server. As soon as we implement support for multiple NBD servers, we'll need a per-server list of exports and it will be replaced by a function using that. As a side effect, the BlockExport layer has a list tracking all existing exports now. closed_exports loses its only user and can go away. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200924152717.287415-18-kwolf@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-09-23  qemu/atomic.h: rename atomic_ to qatomic_  (Stefan Hajnoczi)
clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when QEMU code calls these functions in a source file that also included <stdatomic.h> via a system header file:

  $ CC=clang CXX=clang++ ./configure ... && make
  ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none.

This patch was generated using:

  $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
    sort -u >/tmp/changed_identifiers
  $ for identifier in $(</tmp/changed_identifiers); do
        sed -i "s%\<$identifier\>%q$identifier%g" \
            $(git grep -I -l "\<$identifier\>")
    done

I manually fixed line-wrap issues and misaligned rST tables.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
2020-09-17  qemu/: fix some comment spelling errors  (zhaolichang)
There are many spelling errors in QEMU's comments, so I used a spellcheck tool to look for them and fixed the ones found in this directory. Signed-off-by: zhaolichang <zhaolichang@huawei.com> Reviewed-by: Alex Bennee <alex.bennee@linaro.org> Message-Id: <20200917075029.313-2-zhaolichang@huawei.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2020-09-07  block: Leave BDS.backing_{file,format} constant  (Max Reitz)
Parts of the block layer treat BDS.backing_file as if it were whatever the image header says (i.e., if it is a relative path, it is relative to the overlay), other parts treat it like a cache for bs->backing->bs->filename (relative paths are relative to the CWD). Considering that bs->backing->bs->filename exists, let us make it mean the former.

Among other things, this now allows the user to specify a base when using qemu-img to commit an image file in a directory that is not the CWD (assuming everything uses relative filenames).

Before this patch:

  $ ./qemu-img create -f qcow2 foo/bot.qcow2 1M
  $ ./qemu-img create -f qcow2 -b bot.qcow2 foo/mid.qcow2
  $ ./qemu-img create -f qcow2 -b mid.qcow2 foo/top.qcow2
  $ ./qemu-img commit -b mid.qcow2 foo/top.qcow2
  qemu-img: Did not find 'mid.qcow2' in the backing chain of 'foo/top.qcow2'
  $ ./qemu-img commit -b foo/mid.qcow2 foo/top.qcow2
  qemu-img: Did not find 'foo/mid.qcow2' in the backing chain of 'foo/top.qcow2'
  $ ./qemu-img commit -b $PWD/foo/mid.qcow2 foo/top.qcow2
  qemu-img: Did not find '[...]/foo/mid.qcow2' in the backing chain of 'foo/top.qcow2'

After this patch:

  $ ./qemu-img commit -b mid.qcow2 foo/top.qcow2
  Image committed.
  $ ./qemu-img commit -b foo/mid.qcow2 foo/top.qcow2
  qemu-img: Did not find 'foo/mid.qcow2' in the backing chain of 'foo/top.qcow2'
  $ ./qemu-img commit -b $PWD/foo/mid.qcow2 foo/top.qcow2
  Image committed.

With this change, bdrv_find_backing_image() must look at whether the user has overridden a BDS's backing file. If so, it can no longer use bs->backing_file, but must instead compare the given filename against the backing node's filename directly.

Note that this changes the QAPI output for a node's backing_file. We had very inconsistent output there (sometimes what the image header said, sometimes the actual filename of the backing image). This inconsistent output was effectively useless, so we have to decide one way or the other. Considering that bs->backing_file usually contained the path to the image relative to qemu's CWD (or absolute) at runtime, this patch changes QAPI's backing_file to always report bs->backing->bs->filename from now on. If you want to receive the image header information, you have to refer to full-backing-filename.

This necessitates a change to iotest 228. The interesting information it really wanted is the image header, and it can get that now, but it has to use full-backing-filename instead of backing_file. Because of this patch's changes to bs->backing_file's behavior, we also need some reference output changes.

Along with the changes to bs->backing_file, stop updating BDS.backing_format in bdrv_backing_attach() as well. This way, ImageInfo's backing-filename and backing-filename-format fields will represent what the image header says and nothing else.

iotest 245 changes in behavior: With the backing node no longer overriding the parent node's backing_file string, you can now omit the @backing option when reopening a node with neither a default nor a current backing file, even if it used to have a backing node at some point.

273 also changes: The base image is opened without a format layer, so ImageInfo.backing-filename-format used to report "file" for the base image's overlay after blockdev-snapshot. However, the image header never says "file" anywhere, so it now reports $IMGFMT.

Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-09-07  block: Improve get_allocated_file_size's default  (Max Reitz)
There are two practical problems with bdrv_get_allocated_file_size()'s default right now: (1) For drivers with children, we should generally sum all their sizes instead of just passing the request through to bs->file. The latter is good for filters, but not so much for format drivers. (2) Filters need not have bs->file, so we should actually go to the filtered child instead of hard-coding bs->file. Fix this by splitting the default implementation into three branches: (1) For filter drivers: Return the size of the filtered child (2) For protocol drivers: Return -ENOTSUP, because the default implementation cannot make a guess (3) For other drivers: Sum all data-bearing children's sizes Signed-off-by: Max Reitz <mreitz@redhat.com>
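The three-branch default can be pictured with a toy node graph. This is only a sketch of the dispatch logic with invented types and names, not the real implementation:

    /*
     * Toy model of the three default branches: filters forward the question to
     * their filtered child, childless (protocol-level) nodes answer for
     * themselves, and format-like nodes sum their data-bearing children.
     */
    #include <stdint.h>
    #include <stdio.h>

    enum { MAX_CHILDREN = 4 };

    typedef struct Node {
        int is_filter;
        int64_t own_size;                       /* only meaningful for leaves */
        struct Node *children[MAX_CHILDREN];
        int n_children;
    } Node;

    static int64_t allocated_size(const Node *n)
    {
        if (n->n_children == 0) {
            return n->own_size;                 /* protocol level: report own size */
        }
        if (n->is_filter) {
            return allocated_size(n->children[0]);   /* pass through the filter */
        }
        int64_t sum = 0;
        for (int i = 0; i < n->n_children; i++) {    /* sum data-bearing children */
            int64_t s = allocated_size(n->children[i]);
            if (s < 0) {
                return s;                       /* propagate an error result */
            }
            sum += s;
        }
        return sum;
    }

    int main(void)
    {
        Node data   = { .own_size = 1000 };
        Node meta   = { .own_size = 200 };
        Node format = { .children = { &data, &meta }, .n_children = 2 };
        Node filter = { .is_filter = 1, .children = { &format }, .n_children = 1 };

        printf("%lld\n", (long long)allocated_size(&filter));   /* prints 1200 */
        return 0;
    }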
2020-09-07  block: Use CAFs for debug breakpoints  (Max Reitz)
When looking for a blkdebug node (which implements debug breakpoints), use bdrv_primary_bs() to iterate through the graph, because that is where a blkdebug node would be. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: Use CAFs in bdrv_refresh_filename()  (Max Reitz)
bdrv_refresh_filename() and the closely related bdrv_dirname() should look to the primary child when they wish to copy the underlying file's filename. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: Re-evaluate backing file handling in reopen  (Max Reitz)
Reopening a node's backing child needs a bit of special handling because the "backing" child has different defaults than all other children (among other things). Adding filter support here is a bit more difficult than just using the child access functions. In fact, we often have to directly use bs->backing because these functions are about the "backing" child (which may or may not be the COW backing file). Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-09-07  block: Use CAFs when working with backing chains  (Max Reitz)
Use child access functions when iterating through backing chains so filters do not break the chain. In addition, bdrv_find_overlay() will now always return the actual overlay; that is, it will never return a filter node but only one with a COW backing file (there may be filter nodes between that node and @bs). Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-09-07  block: Use bdrv_filter_(bs|child) where obvious  (Max Reitz)
Places that use patterns like

  if (bs->drv->is_filter && bs->file) {
      ... something about bs->file->bs ...
  }

should be

  BlockDriverState *filtered = bdrv_filter_bs(bs);
  if (filtered) {
      ... something about @filtered ...
  }

instead.

Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: Add bdrv_supports_compressed_writes()  (Max Reitz)
Filters cannot compress data themselves but they have to implement .bdrv_co_pwritev_compressed() still (or they cannot forward compressed writes). Therefore, checking whether bs->drv->bdrv_co_pwritev_compressed is non-NULL is not sufficient to know whether the node can actually handle compressed writes. This function looks down the filter chain to see whether there is a non-filter that can actually convert the compressed writes into compressed data (and thus normal writes). Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
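The check amounts to walking down the filter chain until a non-filter driver is found and asking whether it can produce compressed data. A toy sketch with invented types and field names:

    /*
     * Toy sketch of the chain walk: skip filters (which can only forward
     * compressed writes) and check whether the first non-filter driver
     * implements compressed writes at all.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Node {
        bool is_filter;
        bool has_compressed_write;   /* "implements .bdrv_co_pwritev_compressed()" */
        struct Node *child;          /* filtered child */
    } Node;

    static bool supports_compressed_writes(const Node *n)
    {
        while (n && n->is_filter) {
            if (!n->has_compressed_write) {
                return false;        /* filter cannot even forward them */
            }
            n = n->child;            /* descend to whoever does the real work */
        }
        return n && n->has_compressed_write;
    }

    int main(void)
    {
        Node qcow2    = { .has_compressed_write = true };
        Node throttle = { .is_filter = true, .has_compressed_write = true, .child = &qcow2 };
        Node raw      = { .has_compressed_write = false };
        Node copy     = { .is_filter = true, .has_compressed_write = true, .child = &raw };

        printf("%d %d\n", supports_compressed_writes(&throttle),   /* 1 */
                          supports_compressed_writes(&copy));      /* 0 */
        return 0;
    }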
2020-09-07  block: Drop bdrv_is_encrypted()  (Max Reitz)
The original purpose of bdrv_is_encrypted() was to inquire whether a BDS can be used without the user entering a password or not. It has not been used for that purpose for quite some time. Actually, it is not even fit for that purpose, because to answer that question it would have to recursively query all of the given node's children.

So now we have to decide in which direction we want to fix bdrv_is_encrypted(): recursively query all children, or drop it and just use bs->encrypted to get the current node's status?

Nowadays, its only purpose is to report through bdrv_query_image_info() whether the given image is encrypted or not. For this purpose, it is probably more interesting to see whether a given node itself is encrypted or not (otherwise, a management application cannot discern for certain which nodes are really encrypted and which just have encrypted children).

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: Include filters when freezing backing chain  (Max Reitz)
In order to make filters work in backing chains, the associated functions must be able to deal with them and freeze both COW and filter child links. While at it, add some comments that note which functions require their caller to ensure that a given child link is not frozen, and how the callers do so. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: bdrv_set_backing_hd() is about bs->backing  (Max Reitz)
bdrv_set_backing_hd() is a function that explicitly cares about the bs->backing child. Highlight that in its description and use child_bs(bs->backing) instead of backing_bs(bs) to make it more obvious. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: bdrv_cow_child() for bdrv_has_zero_init()  (Max Reitz)
bdrv_has_zero_init() should use bdrv_cow_child() if it wants to check whether the given BDS has a COW backing file. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-09-07  block: Add chain helper functions  (Max Reitz)
Add some helper functions for skipping filters in a chain of block nodes. Signed-off-by: Max Reitz <mreitz@redhat.com>
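Such helpers boil down to "follow the child link while the node is a filter". A standalone sketch with invented structures and names, only loosely modeled on the helpers this entry adds:

    /*
     * Standalone sketch of the "skip filters" idea: follow a filter's single
     * child link until a non-filter node is reached, and use that to find the
     * next node in a COW backing chain.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Node {
        const char *name;
        bool is_filter;
        struct Node *file;      /* filtered child for filters, storage otherwise */
        struct Node *backing;   /* COW backing node, if any */
    } Node;

    static Node *skip_filters(Node *n)
    {
        while (n && n->is_filter) {
            n = n->file;
        }
        return n;
    }

    /* Next non-filter node in the COW backing chain below n. */
    static Node *backing_chain_next(Node *n)
    {
        Node *nonfilter = skip_filters(n);
        return nonfilter ? skip_filters(nonfilter->backing) : NULL;
    }

    int main(void)
    {
        Node base     = { .name = "base" };
        Node throttle = { .name = "throttle", .is_filter = true, .file = &base };
        Node top      = { .name = "top", .backing = &throttle };

        /* The filter between top and base does not break the chain. */
        printf("next below 'top': %s\n", backing_chain_next(&top)->name);
        return 0;
    }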
2020-09-07  block: Add child access functions  (Max Reitz)
There are BDS children that the general block layer code can access, namely bs->file and bs->backing. Since the introduction of filters and external data files, their meaning is not quite clear. bs->backing can be a COW source, or it can be a filtered child; bs->file can be a filtered child, it can be data and metadata storage, or it can be just metadata storage. This overloading really is not helpful. This patch adds functions that retrieve the correct child for each exact purpose. Later patches in this series will make use of them. Doing so will allow us to handle filter nodes in a meaningful way. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2020-09-07  block: Raise an error when backing file parameter is an empty string  (Connor Kuehl)
Providing an empty string for the backing file parameter like so: qemu-img create -f qcow2 -b '' /tmp/foo allows the flow of control to reach and subsequently fail an assert statement because passing an empty string to bdrv_get_full_backing_filename_from_filename() simply results in NULL being returned without an error being raised. To fix this, let's check for an empty string when getting the value from the opts list. Reported-by: Attila Fazekas <afazekas@redhat.com> Fixes: https://bugzilla.redhat.com/1809553 Signed-off-by: Connor Kuehl <ckuehl@redhat.com> Message-Id: <20200813134722.802180-1-ckuehl@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-08-21  meson: replace create-config with meson configure_file  (Paolo Bonzini)
Move the create-config logic to meson.build; create a configuration_data object and let meson handle the quoting and output. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21  meson: convert block  (Marc-André Lureau)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-17  block: Require aligned image size to avoid assertion failure  (Kevin Wolf)
Unaligned requests will automatically be aligned to bl.request_alignment and we can't extend write requests to access space beyond the end of the image without resizing the image, so if we have the WRITE permission, but not the RESIZE one, it's required that the image size is aligned. Failing to meet this requirement could cause assertion failures like this if RESIZE permissions weren't requested: qemu-img: block/io.c:1910: bdrv_co_write_req_prepare: Assertion `end_sector <= bs->total_sectors || child->perm & BLK_PERM_RESIZE' failed. This was e.g. triggered by qemu-img converting to a target image with 4k request alignment when the image was only aligned to 512 bytes, but not to 4k. Turn this into a graceful error in bdrv_check_perm() so that WRITE without RESIZE can only be taken if the image size is aligned. If a user holds both permissions and drops only RESIZE, the function will return an error, but bdrv_child_try_set_perm() will ignore the failure silently if permissions are only requested to be relaxed and just keep both permissions while returning success. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200716142601.111237-2-kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
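The new requirement reduces to a simple arithmetic check during permission updates. A hedged sketch with invented flag and field names, not the actual bdrv_check_perm() code:

    /*
     * Sketch of the check described above: a node that is writable but not
     * resizable must have a size aligned to its request alignment.
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PERM_WRITE   (1u << 0)
    #define PERM_RESIZE  (1u << 1)

    static bool check_perm(uint32_t perm, int64_t image_size, uint32_t request_alignment)
    {
        if ((perm & PERM_WRITE) && !(perm & PERM_RESIZE) &&
            image_size % request_alignment != 0) {
            fprintf(stderr,
                    "Cannot take 'write' permission without 'resize': image size "
                    "%" PRId64 " is not a multiple of the request alignment %u\n",
                    image_size, request_alignment);
            return false;
        }
        return true;
    }

    int main(void)
    {
        check_perm(PERM_WRITE, 512, 4096);                /* rejected */
        check_perm(PERM_WRITE | PERM_RESIZE, 512, 4096);  /* fine: the image can grow */
        return 0;
    }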
2020-07-14  qemu-img: Deprecate use of -b without -F  (Eric Blake)
Creating an image that requires format probing of the backing image is potentially unsafe (we've had several CVEs over the years based on probes leaking information to the guest on a subsequent boot, although these days tools like libvirt are aware of the issue enough to prevent the worst effects). For example, if our probing algorithm ever changes, or if other tools like libvirt determine a different probe result than we do, then subsequent use of that backing file under a different format will present corrupted data to the guest. Fortunately, the worst effects occur only when the backing image is originally raw, and we at least prevent commit into a probed raw backing file that would change its probed type. Still, it is worth starting a deprecation clock so that future qemu-img can refuse to create backing chains that would rely on probing, to encourage clients to avoid unsafe practices. Most warnings are intentionally emitted from bdrv_img_create() in the block layer, but qemu-img convert uses bdrv_create() which cannot emit its own warning without causing spurious warnings on other code paths. In the end, all command-line image creation or backing file rewriting now performs a check. Furthermore, if we probe a backing file as non-raw, then it is safe to explicitly record that result (rather than relying on future probes); only where we probe a raw image do we care about further warnings to the user when using such an image (for example, commits into a probed-raw backing file are prevented), to help them improve their tooling. But whether or not we make the probe results explicit, we still warn the user to remind them to upgrade their workflow to supply -F always. iotest 114 specifically wants to create an unsafe image for later amendment rather than defaulting to our new default of recording a probed format, so it needs an update. While touching it, expand it to cover all of the various warnings enabled by this patch. iotest 301 also shows a change to qcow messages. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200706203954.341758-11-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-07-14  block: Add support to warn on backing file change without format  (Eric Blake)
For now, this is a mechanical addition; all callers pass false. But the next patch will use it to improve 'qemu-img rebase -u' when selecting a backing file with no format. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com> Message-Id: <20200706203954.341758-10-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-07-14  block: Error if backing file fails during creation without -u  (Eric Blake)
Back in commit 6e6e55f5 (Jul 2017, v2.10), we tweaked the code to warn if the backing file could not be opened but the user gave a size, unless the user also passes the -u option to bypass the open of the backing file. As one common reason for failure to open the backing file is a mismatch between the requested backing format and what the backing file actually contains, we actually want to open the backing file and ensure that it has the right format in as many cases as possible. iotest 301 for qcow demonstrates how detecting an explicit format mismatch is useful to prevent the creation of an image that would probe differently than the user requested. Now is the time to finally turn the warning into an error, as promised.

Note that the original warning was added prior to our documentation of an official deprecation policy (eb22aeca, also Jul 2017), and because the warning didn't mention the word "deprecated", we never actually remembered to document it as such. But the warning has been around long enough that I don't see prolonging it another two releases.

Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200706203954.341758-7-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-07-14  qemu-img: Flush stdout before potential stderr messages  (Eric Blake)
During 'qemu-img create ... 2>&1', if --quiet is not in force, we can end up with buffered I/O in stdout that was produced before failure, but which appears in output after failure. This is confusing; the fix is to flush stdout prior to attempting anything that might produce an error message. Several iotests demonstrate the resulting ordering change now that the merged outputs now reflect chronology. (An even better fix would be to avoid printf from within block.c altogether, but that's much more invasive...) Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20200706203954.341758-2-eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
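The fix pattern is simply flushing stdout before anything might be printed to stderr. A minimal demonstration, unrelated to the actual qemu-img code:

    /*
     * When stdout is redirected it is fully buffered, so without the fflush()
     * the stderr line can appear before the stdout line in the merged output
     * (try "./demo 2>&1 | cat" with the fflush removed).
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Formatting image, fmt=qcow2 ...\n");  /* buffered when not a tty */
        fflush(stdout);                               /* the fix: flush before stderr output */
        fprintf(stderr, "demo: something went wrong\n");
        return 0;
    }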
2020-07-10  error: Reduce unnecessary error propagation  (Markus Armbruster)
When all we do with an Error we receive into a local variable is propagating to somewhere else, we can just as well receive it there right away, even when we need to keep error_propagate() for other error paths. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200707160613.848843-38-armbru@redhat.com>
2020-07-10  error: Eliminate error_propagate() with Coccinelle, part 2  (Markus Armbruster)
When all we do with an Error we receive into a local variable is propagating to somewhere else, we can just as well receive it there right away. The previous commit did that with a Coccinelle script I consider fairly trustworthy. This commit uses the same script with the matching of return taken out, i.e. we convert

  if (!foo(..., &err)) {
      ...
      error_propagate(errp, err);
      ...
  }

to

  if (!foo(..., errp)) {
      ...
      ...
  }

This is unsound: @err could still be read afterwards. I don't know how to express "no read of @err without an intervening write" in Coccinelle. Instead, I manually double-checked for uses of @err.

Suboptimal line breaks tweaked manually. qdev_realize() simplified further to placate scripts/checkpatch.pl.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200707160613.848843-36-armbru@redhat.com>
2020-07-10  error: Eliminate error_propagate() with Coccinelle, part 1  (Markus Armbruster)
When all we do with an Error we receive into a local variable is propagating to somewhere else, we can just as well receive it there right away. Convert

  if (!foo(..., &err)) {
      ...
      error_propagate(errp, err);
      ...
      return ...
  }

to

  if (!foo(..., errp)) {
      ...
      ...
      return ...
  }

where nothing else needs @err. Coccinelle script:

  @rule1 forall@
  identifier fun, err, errp, lbl;
  expression list args, args2;
  binary operator op;
  constant c1, c2;
  symbol false;
  @@
  if (
  (
  -    fun(args, &err, args2)
  +    fun(args, errp, args2)
  |
  -    !fun(args, &err, args2)
  +    !fun(args, errp, args2)
  |
  -    fun(args, &err, args2) op c1
  +    fun(args, errp, args2) op c1
  )
     )
  {
      ... when != err
          when != lbl:
          when strict
  -   error_propagate(errp, err);
      ... when != err
      (
      return;
  |
      return c2;
  |
      return false;
      )
  }

  @rule2 forall@
  identifier fun, err, errp, lbl;
  expression list args, args2;
  expression var;
  binary operator op;
  constant c1, c2;
  symbol false;
  @@
  -   var = fun(args, &err, args2);
  +   var = fun(args, errp, args2);
      ... when != err
      if (
  (
      var
  |
      !var
  |
      var op c1
  )
         )
      {
          ... when != err
              when != lbl:
              when strict
  -       error_propagate(errp, err);
          ... when != err
          (
          return;
  |
          return c2;
  |
          return false;
  |
          return var;
          )
      }

  @depends on rule1 || rule2@
  identifier err;
  @@
  -   Error *err = NULL;
      ... when != err

Not exactly elegant, I'm afraid.

The "when != lbl:" is necessary to avoid transforming

  if (fun(args, &err)) {
      goto out
  }
  ...
  out:
  error_propagate(errp, err);

even though other paths to label out still need the error_propagate(). For an actual example, see sclp_realize().

Without the "when strict", Coccinelle transforms vfio_msix_setup(), incorrectly. I don't know what exactly "when strict" does, only that it helps here.

The match of return is narrower than what I want, but I can't figure out how to express "return where the operand doesn't use @err". For an example where it's too narrow, see vfio_intx_enable().

Silently fails to convert hw/arm/armsse.c, because Coccinelle gets confused by ARMSSE being used both as typedef and function-like macro there. Converted manually.

Line breaks tidied up manually. One nested declaration of @local_err deleted manually. Preexisting unwanted blank line dropped in hw/riscv/sifive_e.c.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200707160613.848843-35-armbru@redhat.com>
2020-07-10  block: Avoid error accumulation in bdrv_img_create()  (Markus Armbruster)
When creating an image fails because the format doesn't support option "backing_file" or "backing_fmt", bdrv_img_create() first has qemu_opt_set() put a generic error into @local_err, then puts the real error into @errp with error_setg(), and then propagates the former to the latter, which throws away the generic error. A bit complicated, but works. Now that qemu_opt_set() returns a useful value, we can simply ignore the generic error instead. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200707160613.848843-16-armbru@redhat.com>
2020-07-10  qemu-option: Use returned bool to check for failure  (Markus Armbruster)
The previous commit enables conversion of

  foo(..., &err);
  if (err) {
      ...
  }

to

  if (!foo(..., &err)) {
      ...
  }

for QemuOpts functions that now return true / false on success / error. Coccinelle script:

  @@
  identifier fun = {
      opts_do_parse, parse_option_bool, parse_option_number,
      parse_option_size, qemu_opt_parse, qemu_opt_rename, qemu_opt_set,
      qemu_opt_set_bool, qemu_opt_set_number, qemu_opts_absorb_qdict,
      qemu_opts_do_parse, qemu_opts_from_qdict_entry, qemu_opts_set,
      qemu_opts_validate
  };
  expression list args, args2;
  typedef Error;
  Error *err;
  @@
  -   fun(args, &err, args2);
  -   if (err)
  +   if (!fun(args, &err, args2))
      {
          ...
      }

A few line breaks tidied up manually.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200707160613.848843-15-armbru@redhat.com> [Conflict with commit 0b6786a9c1 "block/amend: refactor qcow2 amend options" resolved by rerunning Coccinelle on master's version]
2020-07-06  block: inline bdrv_unallocated_blocks_are_zero()  (Vladimir Sementsov-Ogievskiy)
The function has only one user: bdrv_co_block_status(). Inline it to simplify reviewing of the following patches, which will finally drop unallocated_blocks_are_zero field too. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200528094405.145708-3-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-07-06  block/amend: add 'force' option  (Maxim Levitsky)
'force' option will be used for some unsafe amend operations. This includes things like erasing last keyslot in luks based formats (which destroys the data, unless the master key is backed up by external means), but that _might_ be desired result. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200608094030.670121-4-mlevitsk@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-06-10  block: Call attention to truncation of long NBD exports  (Eric Blake)
Commit 93676c88 relaxed our NBD client code to request export names up to the NBD protocol maximum of 4096 bytes without NUL terminator, even though the block layer can't store anything longer than 4096 bytes including NUL terminator for display to the user. Since this means there are some export names where we have to truncate things, we can at least try to make the truncation a bit more obvious for the user. Note that in spite of the truncated display name, we can still communicate with an NBD server using such a long export name; this was deemed nicer than refusing to even connect to such a server (since the server may not be under our control, and since determining our actual length limits gets tricky when nbd://host:port/export and nbd+unix:///export?socket=/path are themselves variable-length expansions beyond the export name but count towards the block layer name length). Reported-by: Xueqiang Wei <xuwei@redhat.com> Fixes: https://bugzilla.redhat.com/1843684 Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200610163741.3745251-3-eblake@redhat.com>
2020-05-18  block: Drop @child_class from bdrv_child_perm()  (Max Reitz)
Implementations should decide the necessary permissions based on @role. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200513110544.176672-35-mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-18  block: Drop child_file  (Max Reitz)
Signed-off-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200513110544.176672-33-mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-05-18  block: Drop bdrv_format_default_perms()  (Max Reitz)
Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200513110544.176672-32-mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>