path: root/block
2016-09-08  iscsi: pass SCSI status back for SG_IO  (Vadim Rozenfeld)
Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 644c6869d335e10bc10b8399646f767763c4977f) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-08-05  nbd: Limit nbdflags to 16 bits  (Eric Blake)
Rather than asserting that nbdflags is within range, just give it the correct type to begin with :) nbdflags corresponds to the per-export portion of NBD Protocol "transmission flags", which is 16 bits in response to NBD_OPT_EXPORT_NAME and NBD_OPT_GO. Furthermore, upstream NBD has never passed the global flags to the kernel via ioctl(NBD_SET_FLAGS) (the ioctl was first introduced in NBD 2.9.22; then a latent bug in NBD 3.1 actually tried to OR the global flags with the transmission flags, with the disaster that the addition of NBD_FLAG_NO_ZEROES in 3.9 caused all earlier NBD 3.x clients to treat every export as read-only; NBD 3.10 and later intentionally clip things to 16 bits to pass only transmission flags). Qemu should follow suit, since the current two global flags (NBD_FLAG_FIXED_NEWSTYLE and NBD_FLAG_NO_ZEROES) have no impact on the kernel's behavior during transmission. CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <1469129688-22848-3-git-send-email-eblake@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 7423f417827146f956df820f172d0bf80a489495) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-08-05  block/iscsi: fix rounding in iscsi_allocationmap_set  (Peter Lieven)
When setting clusters as allocated, the boundaries have to be expanded. As Paolo pointed out, the calculation of the number of clusters is wrong: Suppose cluster_sectors is 2, sector_num = 1, nb_sectors = 6. In the "mark allocated" case, you want to set 0..8, i.e. cluster_num=0, nb_clusters=4.

    0--.--2--.--4--.--6--.--8
    <--|_________________|-->      (<--> = expanded)

Instead you are setting nb_clusters=3, so that 6..8 is not marked.

    0--.--2--.--4--.--6--.--8
    <--|______________|!!!         (! = wrong)

Cc: qemu-stable@nongnu.org Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1468831940-15556-2-git-send-email-pl@kamp.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit eb36b953e0ebf4129b188a241fbc367062ac2e06) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
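A minimal standalone C sketch of the outward rounding described above; the function and variable names are illustrative, not taken from block/iscsi.c (QEMU's DIV_ROUND_UP macro performs the same round-up):

    #include <stdint.h>
    #include <stdio.h>

    /* Round a sector range outward to whole clusters: the start rounds
     * down, the end rounds up. */
    static void mark_allocated(int64_t sector_num, int nb_sectors,
                               int cluster_sectors,
                               int64_t *cluster_num, int64_t *nb_clusters)
    {
        *cluster_num = sector_num / cluster_sectors;               /* round down */
        *nb_clusters = (sector_num + nb_sectors + cluster_sectors - 1)
                       / cluster_sectors - *cluster_num;           /* round up   */
    }

    int main(void)
    {
        int64_t cluster_num, nb_clusters;

        /* Example from the commit message: cluster_sectors = 2,
         * sector_num = 1, nb_sectors = 6 must cover sectors 0..8,
         * i.e. cluster_num = 0 and nb_clusters = 4 (not 3). */
        mark_allocated(1, 6, 2, &cluster_num, &nb_clusters);
        printf("cluster_num=%lld nb_clusters=%lld\n",
               (long long)cluster_num, (long long)nb_clusters);
        return 0;
    }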
2016-08-05  nbd: Allow larger requests  (Eric Blake)
The NBD layer was breaking up requests at a limit of 2040 sectors (just under 1M) to cater to old qemu-nbd. But the server limit was raised to 32M in commit 2d8214885 to match the kernel, more than three years ago; and the upstream NBD Protocol is proposing documentation that without any explicit communication to state otherwise, a client should be able to safely assume that a 32M transaction will work. It is time to rely on the larger sizing, and any downstream distro that cares about maximum interoperability with older qemu-nbd servers can just tweak the value of #define NBD_MAX_SECTORS. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Cc: qemu-stable@nongnu.org Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> (cherry picked from commit 476b923c32ece0e268580776aaf1fab4ab4459a8) Conflicts: include/block/nbd.h * removed context dependency on 943cec86 Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
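For reference, a small standalone sketch of the arithmetic behind the old and new limits (illustrative only; the actual define lives in the NBD headers):

    #include <stdio.h>

    int main(void)
    {
        /* How a 32 MiB request limit maps to a sector-based #define
         * such as NBD_MAX_SECTORS. */
        const long sector_size = 512;
        const long old_limit   = 2040;                          /* sectors, just under 1 MiB */
        const long new_limit   = 32 * 1024 * 1024 / sector_size; /* 65536 sectors */

        printf("old: %ld bytes, new: %ld bytes\n",
               old_limit * sector_size, new_limit * sector_size);
        return 0;
    }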
2016-08-05  qcow2: Avoid making the L1 table too big  (Max Reitz)
We refuse to open images whose L1 table we deem "too big". Consequently, we should not produce such images ourselves. Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20160615153630.2116-3-mreitz@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> [mreitz: Added QEMU_BUILD_BUG_ON()] Signed-off-by: Max Reitz <mreitz@redhat.com> (cherry picked from commit 84c26520d3c1c9ff4a10455748139463278816d5) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
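A standalone sketch of the kind of guard described above, assuming the same byte limit that the open path enforces (the constant value and names here are illustrative, not copied from block/qcow2.c):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed limit for illustration: qcow2 caps the L1 table size in
     * bytes and derives the maximum entry count from it. */
    #define QCOW_MAX_L1_SIZE 0x2000000

    static int check_l1_size(uint64_t new_l1_size)
    {
        /* Refuse to create what the open path would refuse to load. */
        if (new_l1_size > QCOW_MAX_L1_SIZE / sizeof(uint64_t)) {
            return -1;      /* the driver would return -EFBIG here */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", check_l1_size(1024),
               check_l1_size(UINT64_C(1) << 40));   /* 0 -1 */
        return 0;
    }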
2016-08-05  backup: Don't leak BackupBlockJob in error path  (Kevin Wolf)
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> (cherry picked from commit 91ab68837933232bcef99da7c968e6d41900419b) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-08-05  block: Drop bdrv_ioctl_bh_cb  (Fam Zheng)
Similar to the "!drv || !drv->bdrv_aio_ioctl" case above, here it is okay to set co.ret and return. As pointed out by Paolo, a BH will be created as necessary by the caller (bdrv_co_maybe_schedule_bh). Besides, as pointed out by Kevin, "data" was leaked before. Reported-by: Kevin Wolf <kwolf@redhat.com> Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-id: 20160601015223.19277-1-famz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> (cherry picked from commit c8a9fd80719e63615dac12e3625223fb54aa8430) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-08-05  block/iscsi: avoid potential overflow of acb->task->cdb  (Peter Lieven)
At least in the path via virtio-blk, the maximum size is not restricted. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1464080368-29584-1-git-send-email-pl@kamp.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit a6b3167fa0e825aebb5a7cd8b437b6d41584a196) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-08-04  block/nfs: refuse readahead if cache.direct is on  (Peter Lieven)
If we open an NFS export with the cache disabled, we should refuse the readahead feature, as it will cache data inside libnfs. If an export was opened with readahead enabled, it should further not be allowed to disable the cache while running. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 1463662083-20814-2-git-send-email-pl@kamp.de Signed-off-by: Jeff Cody <jcody@redhat.com> (cherry picked from commit 38f8d5e0251ae7d8257cf099cb3e5a375ef60378) Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-04-29  vvfat: Fix default volume label  (Kevin Wolf)
Commit d5941dd documented that it leaves the default volume name as it was ("QEMU VVFAT"), but it doesn't actually implement this. You get an empty name (eleven space characters) instead. This fixes the implementation to apply the advertised default. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-04-29  vvfat: Fix volume name assertion  (Kevin Wolf)
Commit d5941dd made the volume name configurable, but it didn't consider that the rw code compares the volume name string to assert that the first directory entry is the volume name. This made vvfat crash in rw mode. This fixes the assertion to compare with the configured volume name instead of a literal string. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-04-22  mirror: Workaround for unexpected iohandler events during completion  (Fam Zheng)
Commit 5a7e7a0ba moved mirror_exit to a BH handler but didn't add any protection against new requests that could sneak in just before the BH is dispatched. For example (assuming a code base at that commit):

    main_loop_wait # 1
      os_host_main_loop_wait
        g_main_context_dispatch
          aio_ctx_dispatch
            aio_dispatch
              ...
                mirror_run
                  bdrv_drain
    (a)           block_job_defer_to_main_loop
      qemu_iohandler_poll
        virtio_queue_host_notifier_read
          ...
            virtio_submit_multiwrite
    (b)         blk_aio_multiwrite

    main_loop_wait # 2
      <snip>
        aio_dispatch
          aio_bh_poll
    (c)     mirror_exit

At (a) we know the BDS has no pending request. However, the same main_loop_wait call is going to dispatch iohandlers (EventNotifier events), which may lead to new I/O from the guest. So the invariant is already broken at (c). Data loss.

Commit f3926945c8 made iohandlers use the aio API. The order of virtio_queue_host_notifier_read and block_job_defer_to_main_loop within a main_loop_wait becomes unpredictable, and even worse, if the host notifier event arrives at the next main_loop_wait call, the unpredictable order between mirror_exit and virtio_queue_host_notifier_read is also a problem. As shown below, this commit made the bug easier to trigger:

- Bug case 1:

    main_loop_wait # 1
      os_host_main_loop_wait
        g_main_context_dispatch
          aio_ctx_dispatch (qemu_aio_context)
            ...
              mirror_run
                bdrv_drain
    (a)         block_job_defer_to_main_loop
          aio_ctx_dispatch (iohandler_ctx)
            virtio_queue_host_notifier_read
              ...
                virtio_submit_multiwrite
    (b)           blk_aio_multiwrite

    main_loop_wait # 2
      ...
        aio_dispatch
          aio_bh_poll
    (c)     mirror_exit

- Bug case 2:

    main_loop_wait # 1
      os_host_main_loop_wait
        g_main_context_dispatch
          aio_ctx_dispatch (qemu_aio_context)
            ...
              mirror_run
                bdrv_drain
    (a)         block_job_defer_to_main_loop

    main_loop_wait # 2
      ...
        aio_ctx_dispatch (iohandler_ctx)
          virtio_queue_host_notifier_read
            ...
              virtio_submit_multiwrite
    (b)           blk_aio_multiwrite
        aio_dispatch
          aio_bh_poll
    (c)     mirror_exit

In both cases, (b) breaks the invariant wanted by (a) and (c). Until then, the request loss had been silent. Later, 3f09bfbc7be added asserts at (c) to check the invariant (in bdrv_replace_in_backing_chain), and Max reported an assertion failure first visible there, triggered by doing active commit while the guest is running bonnie++. 2.5 added bdrv_drained_begin at (a) to protect the dataplane case from similar problems, but we never realized the main loop bug until now. As a bandage, this patch disables iohandler's external events temporarily together with bs->ctx. Launchpad Bug: 1570134 Cc: qemu-stable@nongnu.org Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-20  mirror: Don't extend the last sub-chunk  (Fam Zheng)
The last sub-chunk is rounded up to the copy granularity in the target image, resulting in a larger size than the source. Add a function to clip the copied sectors to the end. This undoes the "wrong" changes to tests/qemu-iotests/109.out in e5b43573e28. The remaining two offset changes are okay. [ kwolf: Use DIV_ROUND_UP to calculate nb_chunks now ] Reported-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
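A standalone sketch of the clipping described above, with made-up example numbers:

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        /* Illustrative numbers: a 100-sector device, copy granularity of
         * 16 sectors, last chunk starting at sector 96. Rounding up would
         * copy 16 sectors; clipping keeps it to the 4 that actually exist. */
        int64_t device_sectors = 100, sector_num = 96;
        int nb_sectors = 16;                          /* rounded-up request */

        nb_sectors = MIN(nb_sectors, device_sectors - sector_num);
        printf("copy %d sectors\n", nb_sectors);      /* prints 4 */
        return 0;
    }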
2016-04-20  block/mirror: Refresh stale bitmap iterator cache  (Max Reitz)
If the drive's dirty bitmap is dirtied while the mirror operation is running, the cache of the iterator used by the mirror code may become stale and not contain all dirty bits. This only becomes an issue if we are looking for contiguously dirty chunks on the drive. In that case, we can easily detect the discrepancy and just refresh the iterator if one occurs. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-20  block/mirror: Revive dead yielding code  (Max Reitz)
mirror_iteration() is supposed to wait if the current chunk is subject to a still in-flight mirroring operation. However, it mixed checking this conflict situation with checking the dirty status of a chunk. A simplification for the latter condition (the first chunk encountered is always dirty) led to neglecting the former: We just skip the first chunk and thus never test whether it conflicts with an in-flight operation. To fix this, pull out the code which waits for in-flight operations on the first chunk of the range to be mirrored to settle. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-19  block/gluster: prevent data loss after i/o error  (Jeff Cody)
Upon receiving an I/O error after an fsync, by default gluster will dump its cache. However, QEMU will retry the fsync, which is especially useful when encountering errors such as ENOSPC when using the werror=stop option. When using caching with gluster, however, the last written data will be lost upon encountering ENOSPC. Using the write-behind-cache xlator option of 'resync-failed-syncs-after-fsync' should cause gluster to retain the cached data after a failed fsync, so that ENOSPC and other transient errors are recoverable. Unfortunately, we have no way of knowing if the 'resync-failed-syncs-after-fsync' xlator option is supported, so for now close the fd and set the BDS driver to NULL upon fsync error. Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-04-19  block/gluster: code movement of qemu_gluster_close()  (Jeff Cody)
Move qemu_gluster_close() further up in the file, in preparation for the next patch, to avoid a forward declaration. Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-04-19  block/gluster: return correct error value  (Jeff Cody)
Upon error, gluster will call the aio callback function with a ret value of -1, with errno set to the proper error value. If we set the acb->ret value to the return value in the callback, that results in every error being EPERM (i.e. 1). Instead, set it to the proper error result. Reviewed-by: Niels de Vos <ndevos@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com>
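A standalone sketch of the corrected error handling; the callback shape and names are illustrative, not copied from block/gluster.c:

    #include <errno.h>
    #include <stdio.h>

    /* libgfapi reports failure as ret == -1 with errno set, so storing
     * ret directly would turn every error into -1 (i.e. -EPERM).
     * Store the real error instead. */
    static int finish_aio(long ret)
    {
        if (ret < 0) {
            return -errno;      /* proper error value */
        }
        return 0;               /* success */
    }

    int main(void)
    {
        errno = ENOSPC;
        printf("%d\n", finish_aio(-1));     /* -ENOSPC, not -1 */
        return 0;
    }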
2016-04-15  block: Don't ignore flags in blk_{,co,aio}_write_zeroes()  (Kevin Wolf)
Commit 57d6a428 neglected to pass the given flags to blk_aio_prwv(), which broke discard by WRITE SAME for scsi-disk (the UNMAP bit would be ignored). Commit fc1453cd introduced the same bug for blk_write_zeroes(). This is used for 'qemu-img convert' without has_zero_init (e.g. on a block device) and for preallocation=falloc in parallels. Commit 8896e088 is the version for blk_co_write_zeroes(). This function is only used in qemu-io. Reported-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2016-04-15  block/vpc: update comments to be compliant w/coding guidelines  (Jeff Cody)
Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-15  block/vpc: set errp in vpc_open  (Jeff Cody)
Add more useful error information to failure paths in vpc_open Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-15  block/vpc: make checks on max table size a bit more lax  (Jeff Cody)
The check on the max_table_size field not being larger than required is valid, and in accordance with the VHD spec. However, there have been VHD images encountered in the wild that have an out-of-spec max table size that is technically too large. There is no issue in allowing this larger table size, as we also later verify that the computed size (used for the pagetable) is large enough to fit all sectors. In addition, max_table_entries is bounds checked against SIZE_MAX and INT_MAX. Remove the strict check, so that we can accommodate these sorts of images that are benignly out of spec. Reported-by: Stefan Hajnoczi <stefanha@redhat.com> Reported-by: Grant Wu <grantwwu@gmail.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
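A standalone sketch of the bounds check that remains after dropping the strict spec check; names and the 4-byte entry size are illustrative assumptions, not copied from block/vpc.c:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Accept an over-large max_table_entries as long as the derived
     * pagetable allocation cannot overflow (assuming 4-byte BAT entries). */
    static int check_table_entries(uint32_t max_table_entries)
    {
        if (max_table_entries > SIZE_MAX / 4 ||
            max_table_entries > INT_MAX / 4) {
            return -1;          /* the driver would reject the image here */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_table_entries(0x80000));   /* in range: 0 */
        return 0;
    }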
2016-04-15  block/vpc: Use the correct max sector count for VHD images  (Jeff Cody)
The old VHD_MAX_SECTORS value is incorrect, and is a throwback to the CHS calculations. The VHD specification allows images up to 2040 GiB, which (using 512 byte sectors) corresponds to a maximum number of sectors of 0xff000000, rather than the old value of 0xfe0001ff. Update VHD_MAX_SECTORS to reflect the correct value. Also, update comment references to the actual size limit, and correct one compare so that we can have sizes up to the limit. Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
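A standalone check of the arithmetic behind the new limit:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The spec limit quoted above: 2040 GiB of 512-byte sectors. */
        const uint64_t sector_size = 512;
        const uint64_t max_bytes   = UINT64_C(2040) << 30;     /* 2040 GiB */

        printf("max sectors = 0x%" PRIx64 "\n", max_bytes / sector_size);
        /* prints 0xff000000, the new VHD_MAX_SECTORS value */
        return 0;
    }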
2016-04-15  block/vpc: use current_size field for XenConverter VHD images  (Jeff Cody)
XenConverter VHD images are another VHD image where current_size is different from the CHS values in the format header. Use current_size as the default, by looking at the creator_app signature field. Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-15  vpc: use current_size field for XenServer VHD images  (Stefan Hajnoczi)
The vpc driver has two methods of determining virtual disk size. The correct one to use depends on the software that generated the image file. Add the XenServer creator_app signature so that image size is correctly detected for those images. Reported-by: Grant Wu <grantwwu@gmail.com> Reported-by: Spencer Baugh <sbaugh@catern.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-15  block/vpc: set errp in vpc_create  (Jeff Cody)
Add more useful error information to failure paths in vpc_create(). Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-15  block: Fix blk_aio_write_zeroes()  (Kevin Wolf)
Commit 57d6a428 broke blk_aio_write_zeroes() because some write functions in the call path don't have an explicit length argument but reuse qiov->size instead. Which is great, except that write_zeroes doesn't have a qiov, which this commit interprets as 0 bytes. Consequently, blk_aio_write_zeroes() didn't effectively do anything. This patch introduces an explicit acb->bytes in BlkAioEmAIOCB and uses that instead of acb->rwco.size. The synchronous version of the function is okay because it does pass a qiov (with the right size and a NULL pointer as its base). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-04-12  qcow2: Prevent backing file names longer than 1023  (Max Reitz)
We reject backing file names with a length of more than 1023 characters when opening a qcow2 file, so we should not produce such files ourselves. Cc: qemu-stable@nongnu.org Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-04-12  vpc: fix return value check for blk_pwrite  (Paolo Bonzini)
bdrv_pwrite_sync used to return zero or negative error, while blk_pwrite returns the number of written bytes when successful. This caused VPC image creation to fail spectacularly: it wrote the first 512 bytes, and then exited immediately because of the non-zero answer from blk_pwrite. But the truly spectacular part is that it returns a positive value (the 512 that blk_pwrite returned) causing everyone to believe that it succeeded. This fixes qemu-iotests with vpc format. Fixes: b8f45cdf7827e39f9a1e6cc446f5972cc6144237 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
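A standalone sketch of the return-value normalization described above; the function shape is illustrative, not the actual vpc_create() code:

    #include <stdio.h>

    /* For a create path built on a write helper that returns the number
     * of bytes written on success (like blk_pwrite) rather than zero:
     * negative values propagate, positive ones become 0. */
    static int write_header(int written_or_error)
    {
        if (written_or_error < 0) {
            return written_or_error;    /* real error */
        }
        return 0;                       /* don't leak the byte count as "ret" */
    }

    int main(void)
    {
        printf("%d %d\n", write_header(512), write_header(-5));   /* 0 -5 */
        return 0;
    }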
2016-04-11  mirror: Replace bdrv_drain(bs) with bdrv_co_drain(bs)  (Fam Zheng)
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 1459855253-5378-3-git-send-email-famz@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-04-11  block: Fix bdrv_drain in coroutine  (Fam Zheng)
Using the nested aio_poll() in coroutine is a bad idea. This patch replaces the aio_poll loop in bdrv_drain with a BH, if called in coroutine.

For example, the bdrv_drain() in mirror.c can hang when a guest issued request is pending on it in qemu_co_mutex_lock(). Mirror coroutine in this case has just finished a request, and the block job is about to complete. It calls bdrv_drain() which waits for the other coroutine to complete. The other coroutine is a scsi-disk request. The deadlock happens when the latter is in turn pending on the former to yield/terminate, in qemu_co_mutex_lock(). The state flow is as below (assuming a qcow2 image):

    mirror coroutine                       scsi-disk coroutine
    -------------------------------------------------------------
    do last write
      qcow2:qemu_co_mutex_lock()
      ...
                                           scsi disk read
                                             tracked request begin
                                             qcow2:qemu_co_mutex_lock.enter
      qcow2:qemu_co_mutex_unlock()
    bdrv_drain
      while (has tracked request)
        aio_poll()

In the scsi-disk coroutine, the qemu_co_mutex_lock() will never return because the mirror coroutine is blocked in the aio_poll(blocking=true).

With this patch, the added qemu_coroutine_yield() allows the scsi-disk coroutine to make progress as expected:

    mirror coroutine                       scsi-disk coroutine
    -------------------------------------------------------------
    do last write
      qcow2:qemu_co_mutex_lock()
      ...
                                           scsi disk read
                                             tracked request begin
                                             qcow2:qemu_co_mutex_lock.enter
      qcow2:qemu_co_mutex_unlock()
    bdrv_drain.enter
    > schedule BH
    > qemu_coroutine_yield()
    >                                        qcow2:qemu_co_mutex_lock.return
    >                                        ...
                                             tracked request end
    ...
    (resumed from BH callback)
    bdrv_drain.return
    ...

Reported-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 1459855253-5378-2-git-send-email-famz@redhat.com Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-04-05  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging  (Peter Maydell)
Block layer patches for 2.6

# gpg: Signature made Tue 05 Apr 2016 16:32:25 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"

* remotes/kevin/tags/for-upstream:
  crypto: Avoid memory leak on failure
  qemu-iotests: 149: Use "/usr/bin/env python"
  block: Forbid I/O throttling on nodes with multiple parents for 2.6
  block: forbid x-blockdev-del from acting on DriveInfo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-04-05  crypto: Avoid memory leak on failure  (Eric Blake)
Commit 7836857 introduced a memory leak due to invalid use of Error vs. visit_type_end(). If visiting the intermediate members fails, we clear the error and unconditionally use visit_end_struct() on the same error object; but if that cleanup succeeds, we then skip the qapi_free call. Until a later patch adds visit_check_struct(), the only safe approach is to use two separate error objects. Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 1459526222-30052-1-git-send-email-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-04-05  nbd: don't request FUA on FLUSH  (Eric Blake)
The NBD protocol does not clearly document what will happen if a client sends NBD_CMD_FLAG_FUA on NBD_CMD_FLUSH. Historically, both the qemu and upstream NBD servers silently ignored that flag, but that feels a bit risky. Meanwhile, the qemu NBD client unconditionally sends the flag (without even bothering to check whether the caller cares; at least with NBD_CMD_WRITE the client only sends FUA if requested by a higher layer). There is ongoing discussion on the NBD list to fix the protocol documentation to require that the server MUST ignore the flag (unless the kernel folks can better explain what FUA means for a flush), but until those doc improvements land, the current nbd.git master was recently changed to reject the flag with EINVAL (see nbd commit ab22e082), which now makes it impossible for a qemu client to use FLUSH with an upstream NBD server. We should not send FUA with flush unless the upstream protocol documents what it will do, and even then, it should be something that the caller can opt into, rather than being unconditional. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <1459526902-32561-1-git-send-email-eblake@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-30block/nfs: add missing #include "qemu/cutils.h"Stefan Hajnoczi
parse_uint_full() used to be included from qemu-common.h but was moved to qemu/cutils.h in commit f348b6d1a53e5271cf1c9f9acc4646b4b98c1771 ("util: move declarations out of qemu-common.h"). Cc: Veronia Bahaa <veroniabahaa@gmail.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1459341994-20567-3-git-send-email-stefanha@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-03-30block/nfs: add missing #include "qapi/error.h"Stefan Hajnoczi
error_setg() used to be included indirectly through qemu/osdep.h. Since commit da34e65cb4025728566d6504a99916f6e7e1dd6a ("include/qemu/osdep.h: Don't include qapi/error.h") it requires an explicit include. Cc: Markus Armbruster <armbru@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1459341994-20567-2-git-send-email-stefanha@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-03-30  block/null-{co,aio}: Implement get_block_status()  (Max Reitz)
Signed-off-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-03-30  block/null-{co,aio}: Allow reading zeroes  (Max Reitz)
This is optional so that it does not impede the null block driver's performance unless this behavior is desired. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-03-30  block: Remove bdrv_(set_)enable_write_cache()  (Kevin Wolf)
The only remaining users were block jobs (mirror and backup) which unconditionally enabled WCE on the BlockBackend of the target image. As these block jobs don't go through BlockBackend for their I/O requests, they aren't affected by this setting anyway but always get a writeback mode, so that call can be removed. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  block: Remove BDRV_O_CACHE_WB  (Kevin Wolf)
The previous patches have successively made blk->enable_write_cache the true source for the information whether a writethrough mode must be implemented. The corresponding BDRV_O_CACHE_WB is only useless baggage we're carrying around, so now's the time to remove it. At the same time, we remove the 'cache.writeback' option parsing on the BDS level as the only effect was setting the BDRV_O_CACHE_WB flag. This change requires test cases that explicitly enabled the option to drop it. Other than that and the change of the error message when writethrough is enabled on the BDS level (from "Can't set writethrough mode" to "doesn't support the option"), there should be no change in behaviour. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  raw: Support BDRV_REQ_FUA  (Kevin Wolf)
Pass through the FUA flag to the lower layer so that the separate flush can be saved in practically relevant cases where a (raw) format driver sits on top of the protocol driver. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  nbd: Support BDRV_REQ_FUA  (Kevin Wolf)
The NBD server already used to send a FUA flag when the writethrough mode was set. This code was a remnant from the times where protocol drivers actually had to implement writethrough modes. Since nowadays the block layer sends flushes in writethrough mode and non-root nodes are always writeback, this was mostly dead code - only mostly because if NBD was configured to be used without a format, we sent _both_ FUA and an explicit flush afterwards, which makes the code not technically dead, but useless overhead. This patch changes the code so that the block layer's FUA flag is recognised and translated into a NBD FUA flag. The additional flush is avoided now. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
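A standalone sketch of the flag translation described above; the flag values are placeholders, and only the mapping from BDRV_REQ_FUA to NBD_CMD_FLAG_FUA reflects what the commit describes:

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder values for illustration only. */
    #define BDRV_REQ_FUA     0x01u      /* block layer request flag (assumed value) */
    #define NBD_CMD_FLAG_FUA 0x01u      /* NBD transmission flag (assumed value)    */

    static uint32_t nbd_write_flags(uint32_t blk_flags)
    {
        uint32_t nbd_flags = 0;

        if (blk_flags & BDRV_REQ_FUA) {
            nbd_flags |= NBD_CMD_FLAG_FUA;   /* no separate flush needed */
        }
        return nbd_flags;
    }

    int main(void)
    {
        printf("%u\n", nbd_write_flags(BDRV_REQ_FUA));
        return 0;
    }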
2016-03-30  iscsi: Support BDRV_REQ_FUA  (Kevin Wolf)
This replaces the existing hack in the iscsi driver that sent the FUA bit in writethrough mode and ignored the following flush in order to optimise the number of roundtrips (see commit 73b5394e). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  block: Introduce bdrv_co_writev_flags()  (Kevin Wolf)
This function will allow drivers to implement BDRV_REQ_FUA natively instead of sending a separate flush after the write. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  block/qapi: Use blk_enable_write_cache()  (Kevin Wolf)
Now that WCE is handled on the BlockBackend level, the flag is meaningless for BDSes. As the schema requires us to fill the field, we return an enabled write cache for them. Note that this means that querying the BlockBackend name may return writethrough as the cache information, whereas querying the node-name of the root of that same BlockBackend will return writeback. This may appear odd at first, but it actually makes sense because it correctly represents the layer that implements the WCE handling. This becomes more apparent when you consider nodes that are the root node of multiple BlockBackends, where each BB can have its own WCE setting. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  block: Move enable_write_cache to BB level  (Kevin Wolf)
Whether a write cache is used or not is a decision that concerns the user (e.g. the guest device) rather than the backend. It was already logically part of the BB level as bdrv_move_feature_fields() always kept it on top of the BDS tree; with this patch, the core of it (the actual flag and the additional flushes) is also implemented there. Direct callers of bdrv_open() must pass BDRV_O_CACHE_WB now if bs doesn't have a BlockBackend attached. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  block: Handle flush error in bdrv_pwrite_sync()  (Kevin Wolf)
We don't want to silently ignore a flush error. Also, there is little point in avoiding the flush for writethrough modes and once WCE is moved to the BB layer, we definitely need the flush here because bdrv_pwrite() won't involve one any more. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
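A standalone sketch of the error propagation described above; the function shape is illustrative and not the actual bdrv_pwrite_sync() signature:

    #include <stdio.h>

    /* A synchronous write helper that no longer ignores the flush result:
     * either step failing fails the whole call. */
    static int pwrite_sync(int write_ret, int flush_ret)
    {
        if (write_ret < 0) {
            return write_ret;
        }
        if (flush_ret < 0) {
            return flush_ret;       /* previously silently dropped */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", pwrite_sync(0, -5));     /* flush error now surfaces */
        return 0;
    }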
2016-03-30  block: Always set writeback mode in blk_new_open()  (Kevin Wolf)
All callers of blk_new_open() either don't rely on the WCE bit set after blk_new_open() because they explicitly set it anyway, or they pass BDRV_O_CACHE_WB unconditionally. This patch changes blk_new_open() so that it always enables writeback mode and asserts that BDRV_O_CACHE_WB is clear. For those callers that used to pass BDRV_O_CACHE_WB unconditionally, the flag is removed now. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-30  replay: introduce block devices record/replay  (Pavel Dovgalyuk)
This patch introduces a block driver that implements recording and replaying of block device operations. All block completion operations are added to the queue. The queue is flushed at checkpoints and information about processed requests is recorded to the log. In the replay phase the queue is matched with events read from the log. Therefore block device requests are processed deterministically. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> [ kwolf: Rebased onto modified and already applied part of the series ] Signed-off-by: Kevin Wolf <kwolf@redhat.com>
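A very small standalone sketch of the queue-and-checkpoint idea described above (entirely illustrative; it is not the actual driver code): completions are deferred into a queue and only delivered at checkpoints, so their order can be recorded and later matched against the log during replay.

    #include <stdio.h>

    enum { MAX_EVENTS = 16 };

    static int queue[MAX_EVENTS];
    static int queued;

    static void complete_request(int req_id)
    {
        queue[queued++] = req_id;           /* defer instead of completing now */
    }

    static void checkpoint(void)
    {
        for (int i = 0; i < queued; i++) {
            printf("record/replay event: request %d completed\n", queue[i]);
        }
        queued = 0;
    }

    int main(void)
    {
        complete_request(1);
        complete_request(2);
        checkpoint();                       /* deterministic completion point */
        return 0;
    }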
2016-03-30  block: add flush callback  (Pavel Dovgalyuk)
This patch adds a callback for flush requests. This callback is responsible for flushing the whole block device stack: the bdrv_flush function does not proceed to the underlying devices, so that has to be done by this callback function, if needed. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Signed-off-by: Kevin Wolf <kwolf@redhat.com>