path: root/block/qcow2-cluster.c
2014-05-28  qcow2: Fix memory leak in COW error path  (Kevin Wolf)
This triggers if bs->drv becomes NULL in a concurrent request. This is currently only the case when corruption prevention kicks in (i.e. at most once per image, and after that it produces I/O errors). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-04-30  qcow2: Check min_size in qcow2_grow_l1_table()  (Max Reitz)
First, new_l1_size is an int64_t, whereas min_size is a uint64_t. Therefore, during the loop which adjusts new_l1_size until it equals or exceeds min_size, new_l1_size might overflow and become negative. The comparison in the loop condition however will take it as an unsigned value (because min_size is unsigned) and therefore recognize it as exceeding min_size. Therefore, the loop is left with a negative new_l1_size, which is not correct. This could be fixed by making new_l1_size uint64_t. On the other hand, however, by doing this, the while loop may take forever. If min_size is e.g. UINT64_MAX, it will take new_l1_size probably multiple overflows to reach the exact same value (if it reaches it at all). Then, right after the loop, new_l1_size will be recognized as being too big anyway. Both problems require a ridiculously high min_size value, which is very unlikely to occur; but both problems are also simply avoided by checking whether min_size is sane before calculating new_l1_size (which should still be checked separately, though). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
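To illustrate the point, here is a minimal standalone sketch of such an up-front sanity check; QCOW_MAX_L1_BYTES, the function name and the exact bound are assumptions for illustration, not the actual QEMU code:

    #include <errno.h>
    #include <stdint.h>

    #define QCOW_MAX_L1_BYTES 0x2000000   /* hypothetical upper bound on the L1 table size in bytes */

    /* Grow new_l1_size until it covers at least min_size entries.  Rejecting an
     * insane min_size up front avoids both the overflow of new_l1_size and a
     * potentially endless doubling loop. */
    static int calc_new_l1_size(uint64_t min_size, uint64_t *new_l1_size_out)
    {
        uint64_t new_l1_size = 1;

        if (min_size > QCOW_MAX_L1_BYTES / sizeof(uint64_t)) {
            return -EFBIG;                /* check min_size before doing any arithmetic on it */
        }

        while (new_l1_size < min_size) {
            new_l1_size = (new_l1_size * 3 + 1) / 2;   /* growth policy similar to qcow2's */
        }

        *new_l1_size_out = new_l1_size;
        return 0;
    }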
2014-04-29  qcow2: Fix discard  (Max Reitz)
discard_single_l2() should not implement its own version of qcow2_get_cluster_type(), but rather rely on this already existing function. By doing so, it will work for compressed clusters as well (which it did not so far). Also, rename "old_offset" to "old_l2_entry", as both are quite different (and the value is indeed of the latter kind). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-04  qcow2: Put cache reference in error case  (Kevin Wolf)
When qcow2_get_cluster_offset() sees a zero cluster in a version 2 image, it (rightfully) returns an error. But in doing so it shouldn't leak an L2 table cache reference. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2014-04-01  qcow2: Fix copy_sectors() with VM state  (Kevin Wolf)
bs->total_sectors is not the highest possible sector number that could be involved in a copy on write operation: VM state is after the end of the virtual disk. This resulted in wrong values for the number of sectors to be copied (n). The code that checks for the end of the image isn't required any more because the code hasn't been calling the block layer's bdrv_read() for a long time; instead, it directly calls qcow2_readv(), which doesn't error out on VM state sector numbers. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-04-01  qcow2: Fix new L1 table size check (CVE-2014-0143)  (Kevin Wolf)
The size in bytes is assigned to an int later, so check that instead of the number of entries. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-03-13  qcow2: Check bs->drv in copy_sectors()  (Max Reitz)
Before dereferencing bs->drv for a call to its member bdrv_co_readv(), copy_sectors() should check whether that pointer is indeed valid, since it may have been set to NULL by e.g. a concurrent write triggering the corruption prevention mechanism. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
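As a rough sketch of the pattern being added (struct names and the error value are simplified stand-ins, not the real BlockDriverState API):

    #include <errno.h>
    #include <stddef.h>

    struct driver;                        /* stand-in for the real BlockDriver */
    struct blockdev {                     /* stand-in for the real BlockDriverState */
        struct driver *drv;               /* may be cleared by corruption prevention */
    };

    /* Check the driver pointer before calling through it: a concurrent request
     * may have detected corruption and set bs->drv to NULL in the meantime. */
    static int copy_sectors_checked(struct blockdev *bs)
    {
        if (bs->drv == NULL) {
            return -ENOMEDIUM;            /* fail the request instead of dereferencing NULL */
        }
        /* ... safe to call into bs->drv here ... */
        return 0;
    }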
2014-02-21  qcow2: Set zero flag for discarded clusters  (Kevin Wolf)
Instead of making the backing file contents visible again after a discard request, set the zero flag if possible (i.e. on version >= 3). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2014-02-09  qcow2: remove n_start and n_end of qcow2_alloc_cluster_offset()  (Hu Tao)
n_start can actually be calculated from offset, and the number of sectors to be allocated (n_end - n_start) can be passed in num. By removing n_start and n_end, we save two parameters. As a side effect, a bug in qcow2.c:preallocate(), which passed an incorrect n_start to qcow2_alloc_cluster_offset(), is fixed. The bug can be triggered by a cluster size larger than the default value (65536), for example:

    ./qemu-img create -f qcow2 \
        -o 'cluster_size=131072,preallocation=metadata' file.img 4G

Signed-off-by: Hu Tao <hutao@cn.fujitsu.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-12-06  qcow2: use start_of_cluster() and offset_into_cluster() everywhere  (Hu Tao)
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-11-28  block: add flags to bdrv_*_write_zeroes  (Peter Lieven)
Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-11-14  qcow2: fix possible corruption when reading multiple clusters  (Peter Lieven)
If multiple sectors spanning multiple clusters are read, count_contiguous_clusters() should ensure that the cluster type does not change between the clusters. In particular, the for loop should break when one or more normal clusters are followed by a compressed cluster. Unfortunately, the wrong macro was used in the mask to compare the flags. This was discovered while debugging a data corruption issue when converting a compressed qcow2 image to raw: qemu-img reads 2 MB chunks, which span multiple clusters. CC: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
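A simplified standalone sketch of the idea: keep the compressed flag in the mask of significant bits so the contiguity loop stops at a compressed cluster. The flag values follow the qcow2 format specification; the function itself is illustrative, not the actual count_contiguous_clusters():

    #include <stdint.h>

    /* L2 entry bits as documented in the qcow2 specification. */
    #define QCOW_OFLAG_COPIED      (1ULL << 63)
    #define QCOW_OFLAG_COMPRESSED  (1ULL << 62)
    #define L2E_OFFSET_MASK        0x00fffffffffffe00ULL

    /* Count how many L2 entries map contiguous host clusters with identical
     * significant flags. Because QCOW_OFLAG_COMPRESSED is part of the mask,
     * the loop stops as soon as a compressed cluster follows normal ones. */
    static unsigned count_contiguous(const uint64_t *l2_entries, unsigned nb_clusters,
                                     uint64_t cluster_size)
    {
        uint64_t mask = QCOW_OFLAG_COPIED | QCOW_OFLAG_COMPRESSED | L2E_OFFSET_MASK;
        uint64_t first = l2_entries[0] & mask;
        unsigned i;

        for (i = 0; i < nb_clusters; i++) {
            if ((l2_entries[i] & mask) != first + (uint64_t)i * cluster_size) {
                break;   /* different flags or a non-contiguous host offset */
            }
        }
        return i;
    }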
2013-11-05  bswap.h: Remove cpu_to_be64wu()  (Peter Maydell)
Replace the legacy cpu_to_be64wu() with stq_be_p(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-id: 1383669517-25598-9-git-send-email-peter.maydell@linaro.org Signed-off-by: Anthony Liguori <aliguori@amazon.com>
2013-10-11  qcow2: Use negated overflow check mask  (Max Reitz)
In qcow2_check_metadata_overlap and qcow2_pre_write_overlap_check, change the parameter signifying the checks to perform from its current positive form to a negative one, i.e., it will no longer explicitly specify every check to perform but rather a mask of checks not to perform. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
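A hedged sketch of how such a negated mask is used, with made-up check names (the real constants live in the qcow2 headers): a caller that wants every check performed simply passes 0, and only callers that must skip a specific check name it explicitly.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative overlap-check bits; not the actual qcow2 constants. */
    enum {
        CHECK_MAIN_HEADER    = 1 << 0,
        CHECK_ACTIVE_L1      = 1 << 1,
        CHECK_ACTIVE_L2      = 1 << 2,
        CHECK_REFCOUNT_TABLE = 1 << 3,
        CHECK_ALL            = (1 << 4) - 1,
    };

    /* With a negated mask, "ign" lists the checks NOT to perform; passing 0
     * therefore means "perform every check". */
    static bool overlaps_metadata(int ign, uint64_t offset, uint64_t size)
    {
        int checks = CHECK_ALL & ~ign;

        if (checks & CHECK_MAIN_HEADER) {
            /* ... compare [offset, offset + size) against the image header ... */
        }
        if (checks & CHECK_ACTIVE_L1) {
            /* ... compare against the active L1 table ... */
        }
        /* ... remaining checks elided ... */
        (void)offset;
        (void)size;
        return false;
    }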
2013-10-07  qcow2: Free allocated L2 cluster on error  (Max Reitz)
If an error occurs in l2_allocate, the allocated (but unused) L2 cluster should be freed. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Benoit Canet <benoit@irqsave.net> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-10-02  qcow2: Switch L1 table in a single sequence  (Max Reitz)
Switching the L1 table in memory should be an atomic operation, as far as possible. Calling qcow2_free_clusters on the old L1 table on disk is not a good idea when the old L1 table is no longer valid and the address to the new one hasn't yet been written into the corresponding BDRVQcowState field. To be more specific, this can lead to segfaults due to qcow2_check_metadata_overlap trying to access the L1 table during the free operation. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-09-27  qcow2: Remove useless count_contiguous_clusters() parameter  (Kevin Wolf)
All callers pass start = 0, and it's doubtful if any other value would actually do what you expect. Remove the parameter. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
2013-09-27  qcow2: COMPRESSED on count_contiguous_clusters  (Max Reitz)
Compressed clusters can never be contiguous, therefore the corresponding flag does not need to be given explicitly to count_contiguous_clusters. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-27  qcow2: count_contiguous_clusters and compression  (Max Reitz)
The function is not intended to be used on compressed clusters and will not work correctly if used anyway, since L2E_OFFSET_MASK is not the right mask for determining the offset of compressed clusters. Therefore, assert that the first cluster is not compressed and always include the compression flag in the mask of significant flags, i.e., stop the search as soon as a compressed cluster occurs. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-27  qcow2: Free only newly allocated clusters on error  (Max Reitz)
In expand_zero_clusters_in_l1, a new cluster is only allocated if it was not already preallocated. On error, such preallocated clusters should not be freed, but only the newly allocated ones. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-27  qcow2: Always use error path in l2_allocate  (Max Reitz)
Just returning -errno in some cases prevents trace_qcow2_l2_allocate_done from being executed (and, in one case, also the unused allocated L2 table from being freed). Always going down the error path fixes this. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-27  qcow2: Don't put invalid L2 table into cache  (Max Reitz)
In l2_allocate, the fail path is executed if qcow2_cache_flush fails. However, at that point the L2 table has not yet been fetched from the L2 table cache. The qcow2_cache_put in the fail path therefore passes an effectively undefined value as the L2 table address in this case. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-27  qcow2: Correct bitmap size in zero expansion  (Max Reitz)
Since the expanded_clusters bitmap is addressed using host offsets in the underlying image file, the correct size to use for allocating the bitmap is not determined by the guest disk image but by the underlying host image file. Furthermore, this size may change during the expansion due to cluster allocations on growable image files. In this case, the bitmap needs to be resized as well to reflect the growth. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
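A small standalone sketch of that resizing step, with illustrative names and a plain realloc() standing in for the actual allocation routine:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Grow a bitmap that has one bit per host cluster whenever the underlying
     * image file grows past the clusters currently covered. */
    static int resize_expanded_clusters_bitmap(uint8_t **bitmap, uint64_t *covered_clusters,
                                               uint64_t new_file_size, unsigned cluster_bits)
    {
        uint64_t needed = (new_file_size >> cluster_bits) + 1;
        if (needed <= *covered_clusters) {
            return 0;                        /* still large enough */
        }

        size_t old_bytes = (*covered_clusters + 7) / 8;
        size_t new_bytes = (needed + 7) / 8;
        uint8_t *p = realloc(*bitmap, new_bytes);
        if (!p) {
            return -1;
        }
        memset(p + old_bytes, 0, new_bytes - old_bytes);   /* new clusters start unexpanded */

        *bitmap = p;
        *covered_clusters = needed;
        return 0;
    }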
2013-09-25  qcow2: Assert against currently impossible overflow  (Max Reitz)
If qcow2_alloc_cluster_link_l2 is called with a QCowL2Meta describing a request crossing L2 boundaries, a buffer overflow will occur. This is impossible right now since such requests are never generated (every request is shortened to L2 boundaries before) and probably also completely unintended (considering the name "QCowL2Meta"), however, it is still worth an assertion. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-12  qcow2-cluster: Expand zero clusters  (Max Reitz)
Add functionality for expanding zero clusters. This is necessary for downgrading the image version to one without zero cluster support. For non-backed images, this function may also just discard zero clusters instead of truly expanding them. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-09-12  qcow2: Pass discard type to qcow2_discard_clusters()  (Kevin Wolf)
The function will be used internally instead of only being called for guest discard requests. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2013-08-30  qcow2-refcount: Repair OFLAG_COPIED errors  (Max Reitz)
Since the OFLAG_COPIED checks are now executed after the refcounts have been repaired (if repairing), it is safe to assume that the refcounts are correct, but the OFLAG_COPIED flag may not be. Therefore, if its value differs from what it should be (considering the corresponding refcount), that discrepancy can be repaired by correctly setting (or clearing) that flag. Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
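The repair rule itself is simple: with correct refcounts, OFLAG_COPIED must be set exactly when the cluster's refcount is 1. A minimal sketch (flag value as in the qcow2 spec; function and parameters are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define QCOW_OFLAG_COPIED (1ULL << 63)   /* as defined by the qcow2 format */

    /* Set or clear OFLAG_COPIED according to the refcount and report whether
     * the L2 entry changed, so the caller knows it must be written back. */
    static bool repair_copied_flag(uint64_t *l2_entry, uint16_t refcount)
    {
        uint64_t fixed = *l2_entry;

        if (refcount == 1) {
            fixed |= QCOW_OFLAG_COPIED;
        } else {
            fixed &= ~QCOW_OFLAG_COPIED;
        }

        if (fixed == *l2_entry) {
            return false;                    /* already consistent */
        }
        *l2_entry = fixed;
        return true;
    }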
2013-08-30  qcow2: Employ metadata overlap checks  (Max Reitz)
The pre-write overlap check function is now called before most of the qcow2 writes (aborting it on collision or other error). Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2013-06-24  qcow2: Batch discards  (Kevin Wolf)
This optimises the discard operation for freed clusters by batching discard requests (both snapshot deletion and bdrv_discard end up updating the refcounts cluster by cluster). Note that we don't discard asynchronously, but keep s->lock held. This avoids a freed cluster being reallocated and written to while the discard is still in flight. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
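As an illustration of the batching idea (names and structure are a sketch, not the actual qcow2 discard queue): contiguous freed ranges are merged and only flushed as a single discard when a gap appears.

    #include <stdint.h>

    struct discard_batch {
        uint64_t offset;   /* start of the pending range in the image file */
        uint64_t bytes;    /* length of the pending range; 0 means "empty" */
    };

    static void flush_batch(struct discard_batch *b)
    {
        if (b->bytes > 0) {
            /* ... issue one discard request for [offset, offset + bytes) ... */
            b->bytes = 0;
        }
    }

    static void queue_discard(struct discard_batch *b, uint64_t offset, uint64_t bytes)
    {
        if (b->bytes > 0 && offset == b->offset + b->bytes) {
            b->bytes += bytes;               /* contiguous: grow the pending range */
            return;
        }
        flush_batch(b);                      /* not contiguous: flush and start a new range */
        b->offset = offset;
        b->bytes = bytes;
    }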
2013-06-24  qcow2: Add refcount update reason to all callers  (Kevin Wolf)
This adds a refcount update reason to all callers of update_refcounts(), so that a follow-up patch can use this information to decide whether clusters that reach a refcount of 0 should be discarded in the image file. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-05-14  qcow2: Catch some L1 table index overflows  (Kevin Wolf)
This catches the situation that is described in the bug report at https://bugs.launchpad.net/qemu/+bug/865518 and goes like this:

    $ qemu-img create -f qcow2 huge.qcow2 $((1024*1024))T
    Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
    $ qemu-io /tmp/huge.qcow2 -c "write $((1024*1024*1024*1024*1024*1024 - 1024)) 512"
    Segmentation fault

With this patch applied the segfault will be avoided, however the case will still fail, though gracefully:

    $ qemu-img create -f qcow2 /tmp/huge.qcow2 $((1024*1024))T
    Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
    qemu-img: The image size is too large for file format 'qcow2'

Note that even long before these overflow checks kick in, you get insanely high memory usage (up to INT_MAX * sizeof(uint64_t) = 16 GB for the L1 table), so with somewhat smaller image sizes you'll probably see qemu aborting for a failed g_malloc(). If you need huge image sizes, you should increase the cluster size to the maximum of 2 MB in order to get higher limits.

Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
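For illustration, a standalone sketch of the kind of index check involved; parameter names and the exact error code are assumptions. The L1 index is the guest offset shifted right by (cluster_bits + l2_bits), and for huge images that value no longer fits in an int.

    #include <errno.h>
    #include <limits.h>
    #include <stdint.h>

    /* Reject guest offsets whose L1 index would not fit in an int instead of
     * letting the index wrap around and corrupt memory. */
    static int l1_index_for_offset(uint64_t guest_offset, unsigned cluster_bits,
                                   unsigned l2_bits, int *l1_index)
    {
        uint64_t idx = guest_offset >> (cluster_bits + l2_bits);

        if (idx > INT_MAX) {
            return -EFBIG;       /* image (or offset) too large for this cluster size */
        }
        *l1_index = (int)idx;
        return 0;
    }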
2013-03-28  qcow2: Gather clusters in a looping loop  (Kevin Wolf)
Instead of just checking once, in exactly this order, whether there are dependencies, non-COW clusters and new allocations, this starts looping around these. This way we can, for example, gather non-COW clusters after new allocations as long as the host cluster offsets stay contiguous. Once handle_dependencies() is extended so that COW areas of in-flight allocations can be overwritten, this allows us to continue gathering other clusters (we wouldn't be able to do that without this change because we would have missed a possible second dependency in one of the next clusters). This means that in the typical sequential write case, we can combine the COW overwrite of one cluster with the allocation of the next cluster as soon as something like Delayed COW actually gets implemented. It is only by avoiding splitting requests this way that Delayed COW actually starts improving performance noticeably. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Move cluster gathering to a non-looping loop  (Kevin Wolf)
This patch is mainly to separate the indentation change from the semantic changes. All that really changes here is that everything moves into a while loop, all 'goto done' statements become 'break', and at the end of the loop a new 'break' is inserted. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Allow requests with multiple l2metas  (Kevin Wolf)
Instead of expecting a single l2meta, have a list of them. This allows to still have a single I/O request for the guest data, even though multiple l2meta may be needed in order to describe both a COW overwrite and a new cluster allocation (typical sequential write case). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Use byte granularity in qcow2_alloc_cluster_offset()  (Kevin Wolf)
This gets rid of the nb_clusters and keep_clusters and the associated complicated calculations. Just advance the number of bytes that have been processed and everything is fine. This patch advances the variables even after the last operation even though they aren't used any more afterwards to make things look more uniform. A later patch will turn the whole thing into a loop and then it actually starts making sense. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Prepare handle_alloc/copied() for byte granularity  (Kevin Wolf)
This makes handle_alloc() and handle_copied() return byte-granularity host offsets instead of returning always the cluster start. This is required so that qcow2_alloc_cluster_offset() can stop aligning everything to cluster boundaries. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: handle_copied(): Implement non-zero host_offset  (Kevin Wolf)
Look only for clusters that start at a given physical offset. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: handle_copied(): Get rid of keep_clusters parameter  (Kevin Wolf)
Now *bytes is used to return the length of the area that can be written to without performing an allocation or COW. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: handle_copied(): Get rid of nb_clusters parameter  (Kevin Wolf)
handle_copied() uses its bytes parameter now to determine how many clusters it should try to find. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Factor out handle_copied()  (Kevin Wolf)
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Clean up handle_alloc()  (Kevin Wolf)
Things can be simplified a bit now. No semantic changes. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Finalise interface of handle_alloc()  (Kevin Wolf)
The interface works completely on a byte granularity now and duplicated parameters are removed. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: handle_alloc(): Get rid of keep_clusters parameter  (Kevin Wolf)
handle_alloc() is now called with the offset at which the actual new allocation starts instead of the offset at which the whole write request starts, part of which may already be processed. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: handle_alloc(): Get rid of nb_clusters parameter  (Kevin Wolf)
We already communicate the same information in *bytes. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Factor out handle_alloc()  (Kevin Wolf)
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Decouple cluster allocation from cluster reuse code  (Kevin Wolf)
This moves some code that prepares the allocation of new clusters to where the actual allocation happens. This is the minimum required to be able to move it to a separate function in the next patch. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Change handle_dependency to byte granularity  (Kevin Wolf)
This is a more precise description of what really constitutes a dependency. The behaviour doesn't change at this point because the COW area of the old request is still aligned to cluster boundaries, and therefore an overlap is detected whenever the requests touch any part of the same cluster. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-28  qcow2: Improve check for overlapping allocations  (Kevin Wolf)
The old code detected an overlapping allocation even when the allocations didn't actually overlap, but were only adjacent. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
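A minimal sketch of an end-exclusive range comparison that treats adjacent ranges as non-overlapping (the helper name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Two byte ranges overlap only if each one starts before the other ends
     * (end offsets exclusive). Ranges that merely touch are not an overlap. */
    static bool ranges_overlap(uint64_t start1, uint64_t len1,
                               uint64_t start2, uint64_t len2)
    {
        return start1 < start2 + len2 && start2 < start1 + len1;
    }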
2013-03-28  qcow2: Handle dependencies earlier  (Kevin Wolf)
Handling overlapping allocations isn't just a detail of cluster allocation. It is rather one of three ways to get the host cluster offset for a write request:

1. If a request overlaps an in-flight allocation, the cluster offset can be taken from there (this is what handle_dependencies will evolve into) or the request must just wait until the allocation has completed. Accessing the L2 table is not valid in this case, as it has outdated information.

2. Outside overlapping areas, check the clusters that can be written to as they are, with no COW involved.

3. If a COW is required, allocate new clusters.

Changing the code to reflect this doesn't change the behaviour, because overlaps cannot exist for clusters that are kept in step 2. It does however make it easier for later patches to work on clusters that belong to an allocation that is still in flight. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-03-15  qcow2: make is_allocated return true for zero clusters  (Paolo Bonzini)
Otherwise, live migration of the top layer will miss zero clusters and let the backing file show through. This also matches what is done in qed. QCOW2_CLUSTER_ZERO clusters are invalid in v2 image files. Check this directly in qcow2_get_cluster_offset instead of replicating the test everywhere. Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
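A tiny sketch of the resulting rule, with illustrative type names: every cluster type except "unallocated" counts as allocated, so zero clusters are migrated rather than exposing the backing file.

    #include <stdbool.h>

    typedef enum {
        CLUSTER_UNALLOCATED,
        CLUSTER_NORMAL,
        CLUSTER_COMPRESSED,
        CLUSTER_ZERO,
    } ClusterType;

    /* Zero clusters count as allocated: live migration of the top layer must
     * copy them instead of letting the backing file show through. */
    static bool cluster_is_allocated(ClusterType type)
    {
        return type != CLUSTER_UNALLOCATED;
    }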