path: root/block
2014-11-09block/vdi: Limit maximum size even furtherMax Reitz
The block layer read and write functions do not like requests which are bigger than INT_MAX bytes. Since the VDI bmap is read and written in a single operation, its size is therefore limited accordingly. This reduces the maximum VDI image size supported by QEMU to half of what it currently is (down to approximately 512 TB). The VDI test 084 has to be adapted accordingly. Actually, one could clearly see that it was broken from the "Could not open 'TEST_DIR/t.IMGFMT': Invalid argument" line for an image which was supposed to work just fine. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Peter Lieven <pl@kamp.de>
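As a rough sketch of the limit this implies (illustrative, not the literal patch; header.blocks_in_image names the VDI header field as an assumption): the bmap holds one uint32_t per block and is transferred in a single request, so its byte size must not exceed INT_MAX.

    /* Illustrative check only: reject images whose block map could not be
     * read or written in a single block-layer request (INT_MAX bytes). */
    if ((uint64_t)header.blocks_in_image * sizeof(uint32_t) > INT_MAX) {
        error_setg(errp, "unsupported VDI image size (block map too large)");
        return -ENOTSUP;
    }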
2014-11-03Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into stagingPeter Maydell
# gpg: Signature made Mon 03 Nov 2014 11:50:53 GMT using RSA key ID 81AB73C8 # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" # gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" * remotes/stefanha/tags/block-pull-request: (53 commits) block: declare blockjobs and dataplane friends! block: let commit blockjob run in BDS AioContext block: let mirror blockjob run in BDS AioContext block: let stream blockjob run in BDS AioContext block: let backup blockjob run in BDS AioContext block: add bdrv_drain() blockjob: add block_job_defer_to_main_loop() blockdev: add note that block_job_cb() must be thread-safe blockdev: acquire AioContext in blockdev_mark_auto_del() blockdev: acquire AioContext in do_qmp_query_block_jobs_one() block: acquire AioContext in generic blockjob QMP commands iotests: Expand test 061 block/qcow2: Simplify shared L2 handling in amend block/qcow2: Make get_refcount() global block/qcow2: Implement status CB for amend qemu-img: Fix insignificant memleak qemu-img: Add progress output for amend block: Add status callback to bdrv_amend_options() block: qemu-iotest 107 supports NFS iotests: Add test for qcow2's bdrv_make_empty ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-11-03Merge remote-tracking branch 'remotes/mjt/tags/pull-trivial-patches-2014-11-02' into stagingPeter Maydell
trivial patches for 2014-11-02 # gpg: Signature made Sun 02 Nov 2014 11:54:43 GMT using RSA key ID A4C3D7DB # gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" # gpg: aka "Michael Tokarev <mjt@corpit.ru>" # gpg: aka "Michael Tokarev <mjt@debian.org>" * remotes/mjt/tags/pull-trivial-patches-2014-11-02: (23 commits) vdi: wrapped uuid_unparse() in #ifdef tap: fix possible fd leak in net_init_tap tap: do not close(fd) in net_init_tap_one target-i386: Remove unused model_features_t struct tap_int.h: remove repeating NETWORK_SCRIPT defines os-posix: reorder parent notification for -daemonize pidfile: stop making pidfile error a special case os-posix: replace goto again with a proper loop os-posix: use global daemon_pipe instead of cryptic fds[1] dump: Fix dump-guest-memory termination and use-after-close virtio-9p-proxy: improve error messages in connect_namedsocket() virtio-9p-proxy: fix error return in proxy_init() virtio-9p-proxy: Fix sockfd leak target-tricore: check return value before using it net/slirp: specify logbase for smbd Revert "os-posix: report error message when lock file failed" util: Improve os_mem_prealloc error message sparse: fix build target-arm: A64: remove redundant store target-xtensa: mark XtensaConfig structs as unused ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-11-03block: let commit blockjob run in BDS AioContextStefan Hajnoczi
The commit block job must run in the BlockDriverState AioContext so that it works with dataplane. Acquire the AioContext in blockdev.c so starting the block job is safe. One detail here is that the bdrv_drain_all() must be moved inside the aio_context_acquire() region so requests cannot sneak in between the drain and acquire. The completion code in block/commit.c must perform backing chain manipulation and bdrv_reopen() from the main loop. Use block_job_defer_to_main_loop() to achieve that. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1413889440-32577-11-git-send-email-stefanha@redhat.com
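The locking pattern this describes, sketched with a hypothetical helper (start_commit_job() is a placeholder standing in for the real commit_start() call and its full argument list):

    static void start_commit_job_locked(BlockDriverState *bs, Error **errp)
    {
        AioContext *aio_context = bdrv_get_aio_context(bs);

        aio_context_acquire(aio_context);
        bdrv_drain_all();               /* inside the acquired region, so no
                                         * request can sneak in before the
                                         * job is started */
        start_commit_job(bs, errp);     /* placeholder for commit_start() */
        aio_context_release(aio_context);
    }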
2014-11-03block: let mirror blockjob run in BDS AioContextStefan Hajnoczi
The mirror block job must run in the BlockDriverState AioContext so that it works with dataplane. Acquire the AioContext in blockdev.c so starting the block job is safe. Note that to_replace is treated separately from other BlockDriverStates in that it does not need to be in the same AioContext. Explicitly acquire/release to_replace's AioContext when accessing it. The completion code in block/mirror.c must perform BDS graph manipulation and bdrv_reopen() from the main loop. Use block_job_defer_to_main_loop() to achieve that. The bdrv_drain_all() call is not allowed outside the main loop since it could lead to lock ordering problems. Use bdrv_drain(bs) instead because we have acquired the AioContext so nothing else can sneak in I/O. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1413889440-32577-10-git-send-email-stefanha@redhat.com
2014-11-03block: let stream blockjob run in BDS AioContextStefan Hajnoczi
The stream block job must run in the BlockDriverState AioContext so that it works with dataplane. The basics of acquiring the AioContext are easy in blockdev.c. The tricky part is the completion code which drops part of the backing file chain. This must be done in the main loop where bdrv_unref() and bdrv_close() are safe to call. Use block_job_defer_to_main_loop() to achieve that. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1413889440-32577-9-git-send-email-stefanha@redhat.com
2014-11-03block: let backup blockjob run in BDS AioContextStefan Hajnoczi
The backup block job must run in the BlockDriverState AioContext so that it works with dataplane. The basics of acquiring the AioContext are easy in blockdev.c. The completion code in block/backup.c must call bdrv_unref() from the main loop. Use block_job_defer_to_main_loop() to achieve that. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1413889440-32577-8-git-send-email-stefanha@redhat.com
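A minimal sketch of the deferral pattern shared by these jobs, assuming a callback signature of void (*)(BlockJob *, void *) for block_job_defer_to_main_loop(), a hypothetical payload struct, and block_job_completed() as the finishing call:

    typedef struct {
        int ret;
        BlockDriverState *target;    /* whatever still needs bdrv_unref() */
    } BackupCompleteData;

    /* Runs in the main loop, where bdrv_unref() is safe. */
    static void backup_complete(BlockJob *job, void *opaque)
    {
        BackupCompleteData *data = opaque;

        bdrv_unref(data->target);
        block_job_completed(job, data->ret);
        g_free(data);
    }

    /* At the end of the job coroutine, still in the BDS AioContext: */
    static void backup_defer_completion(BlockJob *job, BlockDriverState *target,
                                        int ret)
    {
        BackupCompleteData *data = g_new(BackupCompleteData, 1);

        data->ret = ret;
        data->target = target;
        block_job_defer_to_main_loop(job, backup_complete, data);
    }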
2014-11-03block/qcow2: Simplify shared L2 handling in amendMax Reitz
Currently, we have a bitmap for keeping track of which clusters have been created during the zero cluster expansion process. This was necessary because we need to properly increase the refcount for shared L2 tables. However, now we can simply take the L2 refcount and use it for the cluster allocated for expansion. This will be the correct refcount, and therefore we no longer have to remember that this cluster has been allocated. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net> Message-id: 1414404776-4919-7-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block/qcow2: Make get_refcount() globalMax Reitz
Reading the refcount of a cluster is an operation which can be useful in all of the qcow2 code, so make that function globally available. While touching this function, amend the comment describing the "addend" parameter: It is (no longer, if it ever was) necessary to have it set to -1 or 1; any value is fine. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net> Message-id: 1414404776-4919-6-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block/qcow2: Implement status CB for amendMax Reitz
The only really time-consuming operation potentially performed by qcow2_amend_options() is zero cluster expansion when downgrading qcow2 images from compat=1.1 to compat=0.10, so report status of that operation and that operation only through the status CB. For this, approximate the progress as the number of L1 entries visited during the operation. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net> Message-id: 1414404776-4919-5-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block: Add status callback to bdrv_amend_options()Max Reitz
Depending on the changed options and the image format, bdrv_amend_options() may take a significant amount of time. In these cases, a way to be informed about the operation's status is desirable. Since the operation is rather complex and may fundamentally change the image, implementing it as AIO or a coroutine does not seem feasible. On the other hand, implementing it as a block job would be significantly more difficult than a simple callback and would not add benefits other than progress report to the amending operation, because it should not actually be run as a block job at all. A callback may not be very pretty, but it's very easy to implement and perfectly fits its purpose here. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414404776-4919-2-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
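A hedged sketch of such a callback interface and a trivial consumer (the exact prototype and the qemu-img progress call are assumptions, not quotes from the patch):

    /* Assumed shape: report how far the amend operation has progressed. */
    typedef void BlockDriverAmendStatusCB(BlockDriverState *bs, int64_t offset,
                                          int64_t total_work_size);

    int bdrv_amend_options(BlockDriverState *bs, QemuOpts *opts,
                           BlockDriverAmendStatusCB *status_cb);

    /* A caller such as qemu-img could then simply print progress: */
    static void amend_status_cb(BlockDriverState *bs, int64_t offset,
                                int64_t total_work_size)
    {
        qemu_progress_print(100.0 * offset / total_work_size, 0);
    }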
2014-11-03qemu-img: Implement commit like QMPMax Reitz
qemu-img should use QMP commands whenever possible in order to ensure feature completeness of both online and offline image operations. As qemu-img itself has no access to QMP (since this would basically require just everything being linked into qemu-img), imitate QMP's implementation of block-commit by using commit_active_start() and then waiting for the block job to finish. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-id: 1414159063-25977-9-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block/mirror: Improve progress reportMax Reitz
Instead of taking the total length of the block device as the block job's length, use the number of dirty sectors. The progress is now the number of sectors mirrored to the target block device. Note that this may result in the job's length increasing during operation, which is however in fact desirable. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414159063-25977-8-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03qcow2: Optimize bdrv_make_empty()Max Reitz
bdrv_make_empty() is currently only called if the current image represents an external snapshot that has been committed to its base image; it is therefore unlikely to have internal snapshots. In this case, bdrv_make_empty() can be greatly sped up by emptying the L1 and refcount table (while having the dirty flag set, which only works for compat=1.1) and creating a trivial refcount structure. If there are snapshots or for compat=0.10, fall back to the simple implementation (discard all clusters). [Applied s/clusters/cluster/ typo fix suggested by Eric Blake --Stefan] Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-id: 1414159063-25977-4-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03qcow2: Implement bdrv_make_empty()Max Reitz
Implement this function by making all clusters in the image file fall through to the backing file (by using the recently extended discard). Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414159063-25977-3-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03qcow2: Allow "full" discardMax Reitz
Normally, discarded sectors should read back as zero. However, there are cases in which a sector (or rather a cluster) should be discarded as if it had never been written in the first place; that is, reading it should fall through to the backing file again. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414159063-25977-2-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03raw-posix: raw_co_get_block_status() return valueMax Reitz
Instead of generating the full return value thrice in try_fiemap(), try_seek_hole() and as a fall-back in raw_co_get_block_status() itself, generate the value only in raw_co_get_block_status(). While at it, also remove the pnum parameter from try_fiemap() and try_seek_hole(). Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414148280-17949-3-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03raw-posix: Fix raw_co_get_block_status() after EOFMax Reitz
As its comment states, raw_co_get_block_status() should unconditionally return 0 and set *pnum to 0 for after EOF. An assertion after lseek(..., SEEK_HOLE) tried to catch this case by asserting that errno != -ENXIO (which would indicate a position after the EOF); but it should be errno != ENXIO instead. Regardless of that, there should be no such assertion at all. If bdrv_getlength() returned an outdated value and the image has been resized outside of qemu, lseek() will return with errno == ENXIO. Just return that value as an error then. Setting *pnum to 0 and returning 0 should not be done here, as in that case we should update the device length as well. So, from qemu's perspective, the file has not been resized; it's just that there was an error querying sectors beyond a certain point (the actual file size). Additionally, nb_sectors should be clamped against the image end. This was probably not an issue if FIEMAP or SEEK_HOLE/SEEK_DATA worked, but the fallback did not take this case into account. Reported-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1414148280-17949-2-git-send-email-mreitz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
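A simplified sketch of the corrected fallback (a fragment assuming the surrounding raw_co_get_block_status() variables such as s->fd, start, total_size, nb_sectors and pnum; not the literal patch):

    /* lseek() reports failure via errno (e.g. ENXIO when start is beyond
     * EOF because the file was truncated behind our back); it does not
     * return -ENXIO itself. */
    off_t hole = lseek(s->fd, start, SEEK_HOLE);
    if (hole == -1) {
        return -errno;   /* propagate the error instead of faking 0 sectors */
    }

    /* Also clamp the request against the actual image end so the fallback
     * path never reports sectors past EOF. */
    *pnum = MIN(nb_sectors, DIV_ROUND_UP(total_size - start, BDRV_SECTOR_SIZE));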
2014-11-03block/curl: Improve type safety of s->timeout.Richard W.M. Jones
qemu_opt_get_number returns a uint64_t, and curl_easy_setopt expects a long (not an int). There is no warning about the latter type error because curl_easy_setopt uses a varargs argument. Store the timeout (which is a positive number of seconds) as a uint64_t. Check that the number given by the user is reasonable. Zero is permissible (meaning no timeout is enforced by cURL). Cast it to long before calling curl_easy_setopt to fix the type error. Example error message after this change has been applied: $ ./qemu-img create -f qcow2 /tmp/test.qcow2 \ -b 'json: { "file.driver":"https", "file.url":"https://foo/bar", "file.timeout":-1 }' qemu-img: /tmp/test.qcow2: Could not open 'json: { "file.driver":"https", "file.url":"https://foo/bar", "file.timeout":-1 }': timeout parameter is too large or negative: Invalid argument Signed-off-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Gonglei <arei.gonglei@huawei.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
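A sketch of the described validation, as a fragment assuming the surrounding curl open code (CURL_TIMEOUT_MAX and the default value are illustrative; the point is to range-check the uint64_t and cast explicitly when handing it to cURL):

    uint64_t timeout = qemu_opt_get_number(opts, "timeout", 0 /* no timeout */);

    if (timeout > CURL_TIMEOUT_MAX) {
        error_setg(errp, "timeout parameter is too large or negative");
        return -EINVAL;
    }
    s->timeout = timeout;

    /* ... later, when configuring the easy handle: */
    curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout);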
2014-11-03snapshot: add bdrv_drain_all() to bdrv_snapshot_delete() to avoid concurrency problemZhang Haoyu
If there is still pending I/O while a snapshot is being deleted, a concurrency problem can arise: snapshot deletion runs in non-coroutine context, while the pending reads and writes (bdrv_co_do_rw) run in coroutine context. Add bdrv_drain_all() to bdrv_snapshot_delete() to avoid this problem. Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 201410211637596311287@sangfor.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
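In sketch form (the driver callback arguments are assumed), the drain sits at the top of bdrv_snapshot_delete(), before any metadata is touched:

    /* Flush all in-flight coroutine requests first; after this, no pending
     * bdrv_co_do_rw() can race with the non-coroutine deletion below. */
    bdrv_drain_all();
    ret = drv->bdrv_snapshot_delete(bs, snapshot_id, name, errp);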
2014-11-03rbd: Add support for bdrv_invalidate_cacheAdam Crume
This fixes Ceph issue 2467: http://tracker.ceph.com/issues/2467 [Dropped return r in void function as suggested by Josh Durgin <josh.durgin@inktank.com>. --Stefan] Signed-off-by: Adam Crume <adamcrume@gmail.com> Reviewed-by: Josh Durgin <josh.durgin@inktank.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1412880272-3154-1-git-send-email-adamcrume@gmail.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block/parallels: fix access to uninitialized memory in catalog_bitmapDenis V. Lunev
Found by valgrind. Command: ./qemu-img convert -f parallels -O qcow2 1.hds 1.img Invalid read of size 4 at 0x17D0EF: parallels_co_read (parallels.c:357) by 0x11FEE4: bdrv_aio_rw_vector (block.c:4640) by 0x11FFBF: bdrv_aio_readv_em (block.c:4652) by 0x11F55F: bdrv_co_readv_em (block.c:4862) by 0x123428: bdrv_aligned_preadv (block.c:3056) by 0x1239FA: bdrv_co_do_preadv (block.c:3162) by 0x125424: bdrv_rw_co_entry (block.c:2706) by 0x155DD9: coroutine_trampoline (coroutine-ucontext.c:118) by 0x6975B6F: ??? (in /lib/x86_64-linux-gnu/libc-2.19.so) The problem is that s->catalog_bitmap is allocated/filled with g_malloc(s->catalog_size), thus the index validity check must be inclusive, i.e. index >= s->catalog_size is invalid. Signed-off-by: Denis V. Lunev <den@openvz.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 1412759610-2257-4-git-send-email-den@openvz.org CC: Jeff Cody <jcody@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
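A sketch of the inclusive bounds check (the sector-to-catalog mapping shown here is simplified and illustrative):

    static int64_t seek_to_sector(BDRVParallelsState *s, int64_t sector_num)
    {
        uint32_t index  = sector_num / s->tracks;
        uint32_t offset = sector_num % s->tracks;

        /* catalog_bitmap holds exactly catalog_size entries, so the last
         * valid index is catalog_size - 1: the check must be inclusive. */
        if (index >= s->catalog_size) {
            return -1;
        }
        return (uint64_t)s->catalog_bitmap[index] * s->tracks + offset;
    }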
2014-11-03block/iscsi: check for oversized requestsPeter Lieven
Cancel oversized requests early. They would generate an iSCSI protocol error anyway, but only after possibly having transferred a lot of data over the wire. Suggested-By: Max Reitz <mreitz@redhat.com> Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
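Roughly (error handling details are illustrative):

    /* Reject requests that exceed the advertised limit before any data is
     * put on the wire; the target would fail them anyway. */
    if (bs->bl.max_transfer_length && nb_sectors > bs->bl.max_transfer_length) {
        error_report("iSCSI request of %d sectors exceeds the limit of %d "
                     "sectors", nb_sectors, bs->bl.max_transfer_length);
        return -EINVAL;
    }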
2014-11-03block/iscsi: use sector_limits_lun2qemu throughout iscsi_refresh_limitsPeter Lieven
As Max pointed out there is a hidden cast from int64_t to int for all limits. So use the newly introduced sector_limits_lun2qemu for all limits received from the target. Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-11-03block/iscsi: set max_transfer_lengthPeter Lieven
Copy the max_xfer_len from the Block Limits VPD page or use the maximum value fitting in the CDB. The helper function sector_limits_lun2qemu is introduced to convert the limits from the VPD and cap them at the maximum power of two fitting in an int, since int is the type used for nb_sectors throughout the block layer. Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
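A hedged sketch of such a helper, assuming the LUN block size is a multiple of the 512-byte qemu sector size (the real function takes the iSCSI LUN state rather than a raw block size):

    /* Convert a limit given in LUN blocks to 512-byte sectors and cap it at
     * 2^30, the largest power of two that still fits in an int. */
    static int64_t sector_limits_lun2qemu(int64_t limit, int lun_block_size)
    {
        int64_t sectors = limit * (lun_block_size / BDRV_SECTOR_SIZE);

        return MIN(sectors, INT_MAX / 2 + 1);
    }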
2014-11-02vdi: wrapped uuid_unparse() in #ifdefSeokYeon Hwang
Wrapped uuid_unparse() in #ifdef to avoid a "-Wunused-function" warning on clang 3.4 or later. Signed-off-by: SeokYeon Hwang <syeon.hwang@samsung.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
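The shape of the fix, sketched (the guarding symbol and the local fallback implementation are assumptions, not the literal patch; uuid_t is taken to be the 16-byte array type used when libuuid is absent):

    #if defined(CONFIG_VDI_DEBUG)
    /* Only compiled when the debug code that calls it is compiled, so
     * clang's -Wunused-function stays quiet. */
    static void uuid_unparse(const uuid_t uu, char *out)
    {
        snprintf(out, 37,
                 "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
                 uu[0], uu[1], uu[2], uu[3], uu[4], uu[5], uu[6], uu[7],
                 uu[8], uu[9], uu[10], uu[11], uu[12], uu[13], uu[14], uu[15]);
    }
    #endif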
2014-10-31iscsi: Refuse to open as writable if the LUN is write protectedFam Zheng
Previously, when a write-protected iSCSI target was attached as a scsi-disk with BDRV_O_RDWR, we reported it as writable, while in fact all writes would fail. One way to improve this is to report the write protect flag as true to the guest, but an even better way is to refuse to expose a write-protected LUN to the guest as writable at all. The target's write protect flag is checked with a MODE SENSE query. Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-10-23block: char devices on FreeBSD are not behind a pagerRoger Pau Monne
Introduce a new flag to mark devices that require requests to be aligned and replace the usage of BDRV_O_NOCACHE and O_DIRECT with this flag when appropriate. If a character device is used as a backend on a FreeBSD host set this flag unconditionally. Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Cc: Kevin Wolf <kwolf@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2014-10-23qcow2: Do not overflow when writing an L1 sectorMax Reitz
While writing an L1 table sector, qcow2_write_l1_entry() copies the respective range from s->l1_table to the local "buf" array. The size of s->l1_table does not have to be a multiple of L1_ENTRIES_PER_SECTOR; thus, limit the index which is used for copying all entries to the L1 size. Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Peter Lieven <pl@kamp.de> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
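A sketch of the bounded copy (buf[] is assumed to be zero-initialized so unused tail entries stay zero; variable names follow the description above):

    int l1_start_index = l1_index & ~(L1_ENTRIES_PER_SECTOR - 1);
    int i;

    /* Copy at most up to the end of the L1 table, even when its last
     * sector is only partially used. */
    for (i = 0; i < L1_ENTRIES_PER_SECTOR && l1_start_index + i < s->l1_size;
         i++) {
        buf[i] = cpu_to_be64(s->l1_table[l1_start_index + i]);
    }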
2014-10-23qcow2: Drop REFCOUNT_SHIFTMax Reitz
With BDRVQcowState.refcount_block_bits, we don't need REFCOUNT_SHIFT anymore. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Clean up after refcount rebuildMax Reitz
Because rebuilding the refcount structure leaks the old one, we need to recalculate the refcounts afterwards and run a leak-fixing operation (if leaks are to be fixed at all). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Rebuild refcount structure during checkMax Reitz
The previous commit introduced the "rebuild" variable to qcow2's implementation of the image consistency check. Now make use of this by adding a function which creates a completely new refcount structure based solely on the in-memory information gathered before. The old refcount structure will be leaked, however. This leak will be dealt with in a follow-up commit. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Do not perform potentially damaging repairsMax Reitz
If a referenced cluster has a refcount of 0, increasing its refcount may result in clusters being allocated for the refcount structures. This may overwrite the referenced cluster, therefore we cannot simply increase the refcount then. In such cases, we can either try to replicate all the refcount operations solely for the check operation, basing the allocations on the in-memory refcount table; or we can simply rebuild the whole refcount structure based on the in-memory refcount table. Since the latter will be much easier, do that. To prepare for this, introduce a "rebuild" boolean which should be set to true whenever a fix is rather dangerous or too complicated using the current refcount structures. Another example for this is refcount blocks being referenced more than once. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Fix refcount blocks beyond image endMax Reitz
If the qcow2 check function detects a refcount block located beyond the image end, grow the image appropriately. This cannot break anything and is the logical fix for such a case. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Reuse refcount table in calculate_refcounts()Max Reitz
We will later call calculate_refcounts multiple times, so reuse the refcount table if possible. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Let inc_refcounts() resize the reftableMax Reitz
Now that the refcount table can be passed around by reference, do that for inc_refcounts() (and subsequently check_refcounts_l1() and check_refcounts_l2()) and use it for resizing it when a cluster after the image end is encountered. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Let inc_refcounts() return -errnoMax Reitz
A future patch will require inc_refcounts() to report errors, which is generally done by returning -errno. Therefore, let it return an integer which is either 0 for success or -errno, and handle the -errno case in all callers. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Split fail code in L1 and L2 checksMax Reitz
Instead of printing out an error message, incrementing check_errors and returning a fixed -errno, just do cleanups and return -ret, with ret set by the code which threw the exception (jumped to the fail label). Also, increment check_errors on error in check_refcounts_l2(). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Use int64_t for in-memory reftable sizeMax Reitz
Use int64_t for the entry count of the in-memory refcount table throughout the check functions. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Pull check_refblocks() upMax Reitz
Pull check_refblocks() before calculate_refcounts() so we can drop its static declaration. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Use sizeof(**refcount_table)Max Reitz
When implementing variable refcounts, we want to be able to easily find all the places in qemu which are tied to a certain refcount order. Replace sizeof(uint16_t) in the check code by sizeof(**refcount_table) so we can later find it more easily. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Split qcow2_check_refcounts()Max Reitz
Put the code for calculating the reference counts and comparing them during qemu-img check into their own functions. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Fix leaks in dirty imagesMax Reitz
When opening dirty images, qcow2's repair function should not only repair errors but leaks as well. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-23qcow2: Calculate refcount block entry countMax Reitz
The size of a refblock entry is (in theory) variable; therefore, calculate the number of entries per refblock and the corresponding bit shift (1 << x == entry count) when opening an image. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
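With today's fixed 16-bit refcount entries this boils down to the following sketch (field names follow the BDRVQcowState.refcount_block_bits naming referenced above):

    /* A refcount block is one cluster; each entry is currently 16 bits, so
     * cluster_size / sizeof(uint16_t) entries fit into one block. */
    s->refcount_block_bits = s->cluster_bits - 1;          /* log2(entries) */
    s->refcount_block_size = 1 << s->refcount_block_bits;  /* entry count   */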
2014-10-23block/vdi: Use {DIV_,}ROUND_UPMax Reitz
There are macros for these operations, so make use of them. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
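For illustration, with hypothetical values (DIV_ROUND_UP comes from QEMU's utility headers):

    uint64_t disk_size  = (10 << 20) + 1;   /* bytes */
    uint32_t block_size = 1 << 20;          /* bytes */

    /* Open-coded:  bmap_entries = (disk_size + block_size - 1) / block_size; */
    uint32_t bmap_entries = DIV_ROUND_UP(disk_size, block_size);   /* == 11 */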
2014-10-20block: Make device model's references to BlockBackend strongMarkus Armbruster
Doesn't make a difference just yet, but it's the right thing to do. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-20block: Lift device model API into BlockBackendMarkus Armbruster
Move device model attachment / detachment and the BlockDevOps device model callbacks and their wrappers from BlockDriverState to BlockBackend. Wrapper calls in block.c change from bdrv_dev_FOO_cb(bs, ...) to if (bs->blk) { bdrv_dev_FOO_cb(bs->blk, ...); } No change, because both bdrv_dev_change_media_cb() and bdrv_dev_resize_cb() do nothing when no device model is attached, and a device model can be attached only when bs->blk. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-20block/qapi: Convert qmp_query_block() to BlockBackendMarkus Armbruster
Much more command code needs conversion. I start with this one because it's using bdrv_dev_* functions, which I'm about to lift into BlockBackend. While there, give bdrv_query_info() internal linkage. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-20blockdev: Fix blockdev-add not to create DriveInfoMarkus Armbruster
blockdev_init() always creates a DriveInfo, but only drive_new() fills it in. qmp_blockdev_add() leaves it blank. This results in a drive with type = IF_IDE, bus = 0, unit = 0. Screwed up in commit ee13ed1c. Board initialization code looking for IDE drive (0,0) can pick up one of these bogus drives. The QMP command has to execute really early to be visible. Not sure how likely that is in practice. Fix by creating DriveInfo in drive_new(). Block backends created by blockdev-add don't get one. Breaks the test for "has been created by qmp_blockdev_add()" in blockdev_mark_auto_del() and do_drive_del(), because it changes the value of dinfo && !dinfo->enable_auto_del from true to false. Simply test !dinfo instead. Leaves DriveInfo member enable_auto_del unused. Drop it. A few places assume a block backend always has a DriveInfo. Fix them up. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-10-20blockdev: Drop superfluous DriveInfo member idMarkus Armbruster
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Benoît Canet <benoit.canet@nodalink.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>