path: root/block
Age  Commit message  Author
2014-05-28  block/sheepdog: Don't use qerror_report()  (Markus Armbruster)
qerror_report() is a transitional interface to help with converting existing HMP commands to QMP. It should not be used elsewhere. Replace by error_report(). Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Fix silent sd_open(), sd_create() failures  (Markus Armbruster)
Open and create methods must set an error when they fail. Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Propagate errors to open and create methods  (Markus Armbruster)
Completes the conversion to Error started in commit 015a103^..d5124c0, except for a few bugs fixed in the next commit. Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
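The conversions in this series follow QEMU's usual Error propagation idiom; a minimal sketch of that pattern, with made-up function names standing in for the sheepdog helpers:

    /* Sketch only: illustrates error_setg()/error_propagate(), not the
     * actual sheepdog code. */
    #include "qapi/error.h"

    static int sd_connect_example(const char *host, Error **errp)
    {
        int fd = -1;                      /* ... try to connect ... */
        if (fd < 0) {
            error_setg(errp, "failed to connect to %s", host);
            return -1;
        }
        return fd;
    }

    static int sd_open_example(const char *host, Error **errp)
    {
        Error *local_err = NULL;
        int fd = sd_connect_example(host, &local_err);
        if (fd < 0) {
            error_propagate(errp, local_err);   /* hand the error to the caller */
            return -1;
        }
        /* ... */
        return 0;
    }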
2014-05-28  block/sheepdog: Propagate errors through find_vdi_name()  (Markus Armbruster)
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Propagate errors through do_sd_create()  (Markus Armbruster)
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Propagate errors through sd_prealloc()  (Markus Armbruster)
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Propagate errors through get_sheep_fd()  (Markus Armbruster)
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/sheepdog: Propagate errors through connect_to_sdog()  (Markus Armbruster)
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/vvfat: Propagate errors through init_directories()  (Markus Armbruster)
Completes the conversion of the open method to Error started in commit 015a103. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/vvfat: Propagate errors through enable_write_target()  (Markus Armbruster)
Continues the conversion of the open method to Error started in commit 015a103. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/ssh: Propagate errors to open and create methods  (Markus Armbruster)
Completes the conversion to Error started in commit 015a103^..d5124c0. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/ssh: Propagate errors through connect_to_ssh()  (Markus Armbruster)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/ssh: Propagate errors through authenticate()  (Markus Armbruster)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/ssh: Propagate errors through check_host_key()  (Markus Armbruster)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block/ssh: Drop superfluous libssh2_session_last_errno() calls  (Markus Armbruster)
libssh2_session_last_error() already returns the error code. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
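For reference, a rough before/after sketch of the redundancy being removed (assuming a libssh2 session handle; this is not the literal driver code):

    #include <libssh2.h>

    /* Before: two calls to learn the same thing. */
    static int last_error_before(LIBSSH2_SESSION *session, char **msg)
    {
        int msg_len;
        libssh2_session_last_error(session, msg, &msg_len, 0);
        return libssh2_session_last_errno(session);   /* superfluous */
    }

    /* After: libssh2_session_last_error() already returns the error code. */
    static int last_error_after(LIBSSH2_SESSION *session, char **msg)
    {
        int msg_len;
        return libssh2_session_last_error(session, msg, &msg_len, 0);
    }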
2014-05-28  block/rbd: Propagate errors to open and create methods  (Markus Armbruster)
Completes the conversion to Error started in commit 015a103^..d5124c0. Cc: Josh Durgin <josh.durgin@inktank.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block: Add backing_blocker in BlockDriverState  (Fam Zheng)
This makes use of op_blocker and blocks all operations except the commit target on each BlockDriverState->backing_hd. The asserts for op_blocker in bdrv_swap are removed because, with this change, the target of block commit has at least the backing blocker of its child, so the assertion no longer holds. Callers should do their own checks. Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  block: Use bdrv_set_backing_hd everywhere  (Fam Zheng)
We need to handle the coming backing_blocker properly, so don't open-code the assignment; instead, call bdrv_set_backing_hd to change backing_hd. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-28  qcow2: Fix memory leak in COW error path  (Kevin Wolf)
This triggers if bs->drv becomes NULL in a concurrent request. This is currently only the case when corruption prevention kicks in (i.e. at most once per image, and after that it produces I/O errors). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-22  Merge remote-tracking branch 'remotes/bonzini/scsi-next' into staging  (Peter Maydell)
* remotes/bonzini/scsi-next:
  megasas: remove buildtime strings
  block: iscsi build fix if LIBISCSI_FEATURE_IOVECTOR is not defined
  virtio-scsi: Plug memory leak on virtio_scsi_push_event() error path
  scsi: Document intentional fall through in scsi_req_length()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-20  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging  (Peter Maydell)
Block patches
# gpg: Signature made Mon 19 May 2014 15:21:14 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
* remotes/kevin/tags/for-upstream: (22 commits)
  block: optimize zero writes with bdrv_write_zeroes
  blockdev: add a function to parse enum ids from strings
  util: add qemu_iovec_is_zero
  qcow1: Stricter backing file length check
  qcow1: Validate image size (CVE-2014-0223)
  qcow1: Validate L2 table size (CVE-2014-0222)
  qcow1: Check maximum cluster size
  qcow1: Make padding in the header explicit
  curl: Add usage documentation
  curl: Add sslverify option
  curl: Remove broken parsing of options from url
  curl: Fix build when curl_multi_socket_action isn't available
  qemu-iotests: Fix blkdebug in VM drive in 030
  qemu-iotests: Fix core dump suppression in test 039
  iotests: Add test for the JSON protocol
  block: Allow JSON filenames
  check-qdict: Add test for qdict_join()
  qdict: Add qdict_join()
  block: add test for vhdx image created by Disk2VHD
  block: vhdx - account for identical header sections
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-20  block: iscsi build fix if LIBISCSI_FEATURE_IOVECTOR is not defined  (Jeff Cody)
Commit b03c380 introduced the function iscsi_allocationmap_is_allocated(); however, it is only used within a code block that is conditionally compiled. This produces a "defined but not used" warning (an error with -Werror) for the function if LIBISCSI_FEATURE_IOVECTOR is not defined. This wraps iscsi_allocationmap_is_allocated() in the same conditional. Signed-off-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
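The shape of the fix is simply to guard the helper with the same feature test as its only caller (a sketch, with the signature abbreviated from memory):

    #if defined(LIBISCSI_FEATURE_IOVECTOR)
    /* Compiled only when its sole caller is compiled, so -Werror no longer
     * trips over an unused static function. */
    static bool iscsi_allocationmap_is_allocated(IscsiLun *iscsilun,
                                                 int64_t sector_num,
                                                 int nb_sectors)
    {
        /* body unchanged */
    }
    #endif /* LIBISCSI_FEATURE_IOVECTOR */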
2014-05-19  block: optimize zero writes with bdrv_write_zeroes  (Peter Lieven)
This patch tries to optimize zero write requests by automatically using bdrv_write_zeroes if it is supported by the format. This significantly speeds up file system initialization and should also speed up the zero write tests used to test backend storage performance.

I ran the following 2 tests on my internal SSD with a 50G QCOW2 container and on an attached iSCSI storage.

a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX

   QCOW2         [off]     [on]      [unmap]
   -----
   runtime:      14secs    1.1secs   1.1secs
   filesize:     937M      18M       18M

   iSCSI         [off]     [on]      [unmap]
   ----
   runtime:      9.3s      0.9s      0.9s

b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct

   QCOW2         [off]     [on]      [unmap]
   -----
   runtime:      246secs   18secs    18secs
   filesize:     51G       192K      192K
   throughput:   203M/s    2.3G/s    2.3G/s

   iSCSI*        [off]     [on]      [unmap]
   ----
   runtime:      8mins     45secs    33secs
   throughput:   106M/s    1.2G/s    1.6G/s
   allocated:    100%      100%      0%

* The storage was connected via a 1 Gbit interface. It seems to handle writing zeroes internally via WRITESAME16 very fast.

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
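Conceptually the hook sits in the common write path: if the new zero-detection setting is active and the payload is all zeroes, the request is converted into a zero write. The guard names below are hypothetical; only qemu_iovec_is_zero() and the request flags are taken from the series itself:

    /* Hypothetical sketch of the conversion, not the actual patch. */
    if (detect_zeroes_enabled && qemu_iovec_is_zero(qiov)) {
        flags |= BDRV_REQ_ZERO_WRITE;      /* driver can use bdrv_write_zeroes */
        if (detect_zeroes_unmap) {
            flags |= BDRV_REQ_MAY_UNMAP;   /* additionally allow discarding */
        }
    }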
2014-05-19  Merge remote-tracking branch 'remotes/bonzini/scsi-next' into staging  (Peter Maydell)
* remotes/bonzini/scsi-next:
  [PATCH] block/iscsi: bump year in copyright notice
  block/iscsi: allow cluster_size of 4K and greater
  block/iscsi: clarify the meaning of ISCSI_CHECKALLOC_THRES
  block/iscsi: speed up read for unallocated sectors
  block/iscsi: allow fall back to WRITE SAME without UNMAP
  MAINTAINERS: mark megasas as maintained
  megasas: Add MSI support
  megasas: Enable MSI-X support
  megasas: Implement LD_LIST_QUERY
  scsi: Improve error messages more
  scsi-disk: Improve error messager if can't get version number
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-19  qcow1: Stricter backing file length check  (Kevin Wolf)
Like qcow2 since commit 6d33e8e7, error out on invalid lengths instead of silently truncating them to 1023. Also don't rely on bdrv_pread() catching integer overflows that make len negative, but use unsigned variables in the first place. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net>
2014-05-19  qcow1: Validate image size (CVE-2014-0223)  (Kevin Wolf)
A huge image size could cause s->l1_size to overflow. Make sure that images never require an L1 table larger than what fits in s->l1_size. The overflow can not only cause unbounded allocations, but also the allocation of an L1 table that is too small, resulting in out-of-bounds array accesses (both reads and writes). Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com>
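The kind of check involved, sketched as a fragment with simplified qcow1-style header fields and QEMU's error_setg() (not the literal patch):

    /* Sketch: reject sizes whose L1 table would overflow or be huge. */
    uint64_t shift = header.cluster_bits + header.l2_bits;  /* log2 of bytes covered per L1 entry */

    if (header.size > UINT64_MAX - (1ULL << shift)) {
        error_setg(errp, "Image too large");
        return -EINVAL;
    }
    uint64_t l1_size = (header.size + (1ULL << shift) - 1) >> shift;
    if (l1_size > INT_MAX / sizeof(uint64_t)) {              /* bound the allocation */
        error_setg(errp, "Image too large");
        return -EINVAL;
    }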
2014-05-19  qcow1: Validate L2 table size (CVE-2014-0222)  (Kevin Wolf)
Too large L2 table sizes cause unbounded allocations. Images actually created by qemu-img only have 512 byte or 4k L2 tables. To keep things consistent with cluster sizes, allow ranges between 512 bytes and 64k (in fact, down to 1 entry = 8 bytes is technically working, but L2 table sizes smaller than a cluster don't make a lot of sense). This also means that the number of bytes on the virtual disk that are described by the same L2 table is limited to at most 8k * 64k or 2^29, preventively avoiding any integer overflows. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net>
2014-05-19  qcow1: Check maximum cluster size  (Kevin Wolf)
Huge values for header.cluster_bits cause unbounded allocations (e.g. for s->cluster_cache) and crash qemu this way. Less huge values may survive those allocations, but can cause integer overflows later on. The only cluster sizes that qemu can create are 4k (for standalone images) and 512 (for images with backing files), so we can limit it to 64k. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net>
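A sketch of the corresponding bounds check on the header field (the 512-byte to 64k range described above; fragment form, assuming QEMU's error_setg()):

    /* Sketch: only accept cluster sizes between 512 bytes and 64 KB. */
    if (header.cluster_bits < 9 || header.cluster_bits > 16) {
        error_setg(errp, "Cluster size must be between 512 bytes and 64k");
        return -EINVAL;
    }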
2014-05-19  qcow1: Make padding in the header explicit  (Kevin Wolf)
We were relying on all compilers inserting the same padding in the header struct that is used for the on-disk format. Let's not do that. Mark the struct as packed and insert an explicit padding field for compatibility. Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net>
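The general shape of the change, on a simplified struct (QEMU spells the packed attribute QEMU_PACKED; plain GCC syntax is used here to keep the example self-contained):

    #include <stdint.h>

    /* Simplified illustration: make the on-disk layout independent of the
     * compiler by packing the struct and spelling out the padding. */
    typedef struct __attribute__((packed)) ExampleDiskHeader {
        uint32_t magic;
        uint8_t  cluster_bits;
        uint8_t  l2_bits;
        uint16_t padding;      /* was implicit compiler padding; now explicit */
        uint64_t size;
    } ExampleDiskHeader;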
2014-05-19  curl: Add sslverify option  (Matthew Booth)
This allows qemu to use images over https with a self-signed certificate. It defaults to verifying the certificate. Signed-off-by: Matthew Booth <mbooth@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
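Under the hood this maps onto libcurl's standard verification knobs; a hedged sketch (the wiring of the QEMU-level sslverify option into curl.c is simplified away):

    #include <curl/curl.h>
    #include <stdbool.h>

    /* Sketch: apply an sslverify flag to a curl easy handle. */
    static void apply_sslverify(CURL *curl, bool sslverify)
    {
        /* verify the peer's certificate chain (defaults to on) */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, sslverify ? 1L : 0L);
        /* verify that the certificate matches the requested host name */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, sslverify ? 2L : 0L);
    }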
2014-05-19  curl: Remove broken parsing of options from url  (Matthew Booth)
The block layer now supports a generic json syntax for passing option parameters explicitly, making parsing of options from the url redundant. Signed-off-by: Matthew Booth <mbooth@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-05-19  curl: Fix build when curl_multi_socket_action isn't available  (Matthew Booth)
Signed-off-by: Matthew Booth <mbooth@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-05-19  block: vhdx - account for identical header sections  (Jeff Cody)
The VHDX spec v1.00 declares that "a header is current if it is the only valid header or if it is valid and its SequenceNumber field is greater than the other header's SequenceNumber field. The parser must only use data from the current header. If there is no current header, then the VHDX file is corrupt."

However, the Disk2VHD tool from Microsoft creates a VHDX image file that has 2 identical headers, including matching checksums and matching sequence numbers. Likely, as a shortcut, the tool just writes the header twice, for the active and inactive headers, during image creation. Technically, this should be considered a corrupt VHDX file (at least per the 1.00 spec, and that is how we currently treat it). But in order to accommodate images created with Disk2VHD, we can safely create an exception for this case. If we find identical sequence numbers, then we compare the VHDXHeader-sized chunk of each 64 KB header section (we won't rely on the crc32c alone to indicate that the headers are the same). If they are identical, then we go ahead and use the first one.

Reported-by: Nerijus Baliūnas <nerijus@users.sourceforge.net>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-05-09  block/raw-posix: Try both FIEMAP and SEEK_HOLE  (Max Reitz)
The current version of raw-posix always uses ioctl(FS_IOC_FIEMAP) if FIEMAP is available; lseek with SEEK_HOLE/SEEK_DATA are not even compiled in in this case. However, there may be implementations which support the latter but not the former (e.g., NFSv4.2) as well as vice versa. To cover both cases, try FIEMAP first (as this will return -ENOTSUP if not supported instead of returning a failsafe value (everything allocated as a single extent)) and if that does not work, fall back to SEEK_HOLE/SEEK_DATA. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
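A self-contained sketch of the probing order (error handling trimmed; the real code also remembers which method worked):

    #define _GNU_SOURCE            /* for SEEK_DATA/SEEK_HOLE on glibc */
    #include <errno.h>
    #include <linux/fiemap.h>
    #include <linux/fs.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Sketch: ask FIEMAP first; if the filesystem doesn't support it, fall
     * back to lseek(SEEK_DATA). Returns 1 if data is present at 'start',
     * 0 for a hole, -1 if both methods fail. */
    static int probe_allocated(int fd, off_t start)
    {
        struct {
            struct fiemap fm;
            struct fiemap_extent fe;
        } f;

        memset(&f, 0, sizeof(f));
        f.fm.fm_start = start;
        f.fm.fm_length = 1;
        f.fm.fm_flags = FIEMAP_FLAG_SYNC;
        f.fm.fm_extent_count = 1;

        if (ioctl(fd, FS_IOC_FIEMAP, &f) == 0) {
            return f.fm.fm_mapped_extents > 0;
        }

        /* FIEMAP unsupported (e.g. some NFS servers): try SEEK_DATA */
        off_t data = lseek(fd, start, SEEK_DATA);
        if (data < 0) {
            return errno == ENXIO ? 0 : -1;   /* ENXIO: no data past 'start' */
        }
        return data == start;
    }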
2014-05-09  gluster: Correctly propagate errors when volume isn't accessible  (Peter Krempa)
The docs for glfs_init suggest that the function sets errno on every failure. In fact it doesn't. As other functions, such as qemu_gluster_open() in the gluster block code, report their errors based on this assumption, we need to make sure that errno is set on each failure.

This fixes a crash of qemu-img/qemu when a gluster brick isn't accessible from the given host while the server serving the volume description is:

Thread 1 (Thread 0x7ffff7fba740 (LWP 203880)):
#0  0x00007ffff77673f8 in glfs_lseek () from /usr/lib64/libgfapi.so.0
#1  0x0000555555574a68 in qemu_gluster_getlength ()
#2  0x0000555555565742 in refresh_total_sectors ()
#3  0x000055555556914f in bdrv_open_common ()
#4  0x000055555556e8e8 in bdrv_open ()
#5  0x000055555556f02f in bdrv_open_image ()
#6  0x000055555556e5f6 in bdrv_open ()
#7  0x00005555555c5775 in bdrv_new_open ()
#8  0x00005555555c5b91 in img_info ()
#9  0x00007ffff62c9c05 in __libc_start_main () from /lib64/libc.so.6
#10 0x00005555555648ad in _start ()

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
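The essence of the fix, as a sketch (the wrapper name is made up; the point is forcing a usable errno when glfs_init() fails without setting one):

    #include <errno.h>
    #include <glusterfs/api/glfs.h>

    /* Sketch: make sure a failing glfs_init() leaves a usable errno behind. */
    static int gluster_init_checked(glfs_t *glfs)
    {
        if (glfs_init(glfs) != 0) {
            /* glfs_init() is documented to set errno on failure, but does
             * not always do so; callers convert errno into their return
             * code, so give them something sensible. */
            if (errno == 0) {
                errno = EINVAL;
            }
            return -errno;
        }
        return 0;
    }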
2014-05-09  vmdk: Implement .bdrv_get_info()  (Fam Zheng)
This will return cluster_size and needs_compressed_writes to the caller if all the extents have the same value (or there is only one extent); otherwise it returns -ENOTSUP. cluster_size is only reported for sparse formats. Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-09  vmdk: Implement .bdrv_write_compressed  (Fam Zheng)
Add a wrapper function to support the "compressed" path in qemu-img convert. Only the streamOptimized subformat case is supported for now (num_extents == 1 and extent compression is true). Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-09  block/iscsi: bump year in copyright notice  (Peter Lieven)
Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-09  block/nfs: Check for NULL server part  (Max Reitz)
After the URL has been parsed make sure the server part is valid in order to avoid a segmentation fault when calling nfs_mount(). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
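The guard amounts to validating the parsed URL before handing its pieces to libnfs; a sketch using QEMU's URI helper (surrounding code simplified, fragment only):

    /* Sketch: refuse nfs:// URLs without a server part instead of passing
     * NULL into nfs_mount() and crashing. */
    URI *uri = uri_parse(filename);
    if (!uri || !uri->server) {
        error_setg(errp, "Invalid URL specified");
        goto fail;
    }
    ret = nfs_mount(client->context, uri->server, uri->path);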
2014-05-09  qcow2: Fix alloc_clusters_noref() overflow detection  (Max Reitz)
If the very first allocation has a length of 0, the free_cluster_index is still 0 after the for loop, which means that subtracting one from it will underflow and signal an invalid range of clusters by returning -EFBIG. However, there is no such range, as its length is 0. Fix this by preventing underflows on free_cluster_index during the check. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-05-05  [PATCH] block/iscsi: bump year in copyright notice  (Peter Lieven)
Signed-off-by: Peter Lieven <pl@kamp.de>
2014-04-30  curl: Fix hang reading from slow connections  (Matthew Booth)
When receiving a new aio read request, we first look for an existing transaction whose range will cover the read request by the time it completes. However, we weren't checking that the existing transaction was still active. If it had timed out, we were adding the request to a transaction which would never complete and had already been cancelled, resulting in a hang. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-30  curl: Ensure all informationals are checked for completion  (Matthew Booth)
According to the documentation, the correct way to ensure all informationals have been returned by curl_multi_info_read is to loop until it returns NULL. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
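The documented pattern being adopted, as a self-contained sketch around a CURLM handle:

    #include <curl/curl.h>

    /* Sketch: drain every pending message; curl_multi_info_read() must be
     * called until it returns NULL, or completions can be missed. */
    static void check_completions(CURLM *multi)
    {
        CURLMsg *msg;
        int msgs_in_queue;

        while ((msg = curl_multi_info_read(multi, &msgs_in_queue))) {
            if (msg->msg == CURLMSG_DONE) {
                /* look up our state for msg->easy_handle and finish the request */
            }
        }
    }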
2014-04-30  curl: Eliminate unnecessary use of curl_multi_socket_all  (Matthew Booth)
curl_multi_socket_all is a deprecated catch-all which checks for activities on all open curl sockets. We have enough information from the event loop to check only the sockets with activity. This change removes use of curl_multi_socket_all in favour of curl_multi_socket_action called with the relevant handle. At the same time, it also ensures that the driver only checks for completion of read operations after reading from a socket, rather than both reading and writing. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
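In place of the deprecated catch-all, the event loop can tell libcurl exactly which socket became ready; a sketch (function name invented for the example):

    #include <curl/curl.h>
    #include <stdbool.h>

    /* Sketch: called from the read/write handlers for one specific socket. */
    static void socket_ready(CURLM *multi, curl_socket_t fd, bool readable)
    {
        int running;

        curl_multi_socket_action(multi, fd,
                                 readable ? CURL_CSELECT_IN : CURL_CSELECT_OUT,
                                 &running);
        /* only after reading do we need to look for finished transfers,
         * e.g. by draining curl_multi_info_read() as sketched above */
    }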
2014-04-30  curl: Remove unnecessary explicit calls to internal event handler  (Matthew Booth)
Remove calls to curl_multi_do where the relevant handles are already registered to the event loop. Ensure that we kick off socket handling with CURL_SOCKET_TIMEOUT after adding a new handle. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-30  curl: Remove erroneous sleep waiting for curl completion  (Matthew Booth)
The driver will not start more than a fixed number of curl sessions. If it needs more, it must wait for the completion of an existing one. The driver was sleeping, which prevents the main loop from running and therefore prevents the completion event it is waiting for from ever arriving. It was also directly calling its internal handler rather than waiting for the existing registered handlers to be called from the main loop. This change causes it simply to wait for a period of time while allowing the main loop to execute. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-30  curl: Fix return from curl_read_cb with invalid state  (Matthew Booth)
A curl write callback is supposed to return the number of bytes it handled. curl_read_cb would have erroneously reported it had handled all bytes in the event that the internal curl state was invalid. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
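For context, libcurl treats any return value other than the byte count it passed in as an error; a self-contained sketch of a write callback that reports its state honestly (struct and function names invented for the example):

    #include <stddef.h>
    #include <string.h>

    struct recv_state {
        char   *buf;
        size_t  pos;
    };

    /* CURLOPT_WRITEFUNCTION callback: returning anything other than
     * size * nmemb tells libcurl the transfer failed -- exactly what we
     * want when our own state is no longer valid. */
    static size_t curl_read_cb_sketch(char *ptr, size_t size, size_t nmemb,
                                      void *opaque)
    {
        struct recv_state *s = opaque;
        size_t realsize = size * nmemb;

        if (!s || !s->buf) {
            return 0;   /* invalid state: signal an error, don't claim success */
        }
        memcpy(s->buf + s->pos, ptr, realsize);
        s->pos += realsize;
        return realsize;
    }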
2014-04-30  curl: Remove unnecessary use of goto  (Matthew Booth)
This isn't any of the usually acceptable uses of goto. Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-30  curl: Fix long line  (Matthew Booth)
Signed-off-by: Matthew Booth <mbooth@redhat.com> Tested-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-30  block/vdi: Error out immediately in vdi_create()  (Max Reitz)
Currently, if an error occurs during the part of vdi_create() which actually writes the image, the function stores -errno, but continues anyway. Instead of trying to write data which (if it can be written at all) does not make any sense without the operations before succeeding (e.g., writing the image header), just error out immediately. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Kevin Wolf <kwolf@redhat.com>