path: root/block/qcow2-cluster.c
2018-03-02  qcow2: Replace align_offset() with ROUND_UP()  (Alberto Garcia)
The align_offset() function is equivalent to the ROUND_UP() macro so there's no need to use the former. The ROUND_UP() name is also a bit more explicit. This patch uses ROUND_UP() instead of the slower QEMU_ALIGN_UP() because align_offset() already requires that the second parameter is a power of two. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180215131008.5153-1-berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
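As a rough illustration of the trade-off mentioned above (these are simplified stand-ins, not the macros from qemu/osdep.h): the generic form needs a division, while the power-of-two form gets by with a plain bit mask, which is why it is only valid when the alignment is a power of two.

    /* Simplified stand-ins for illustration only. */
    #define ALIGN_UP_GENERIC(n, m)  (((n) + (m) - 1) / (m) * (m))   /* any m, costs a division */
    #define ALIGN_UP_POW2(n, m)     (((n) + (m) - 1) & ~((m) - 1))  /* m must be a power of two */

Since the alignments used in qcow2 (cluster size, entry sizes) are powers of two, the masked form is always applicable here.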
2018-02-13  qcow2: Rename l2_table in count_cow_clusters()  (Alberto Garcia)
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6107001fc79e6739242f1de7d191375e4f130aac.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in count_contiguous_clusters_unallocated()  (Alberto Garcia)
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 78bcc54bc632574dd0b900a77a00a1b6ffc359e6.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in count_contiguous_clusters()  (Alberto Garcia)
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 812b0c3505bb1687e51285dccf1a94f0cecb1f74.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in qcow2_alloc_compressed_cluster_offset()  (Alberto Garcia)
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 0c5d4b9bf163aa3b49ec19cc512a50d83563f2ad.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update expand_zero_clusters_in_l1() to support L2 slices  (Alberto Garcia)
expand_zero_clusters_in_l1() expands zero clusters as a necessary step to downgrade qcow2 images to a version that doesn't support metadata zero clusters. This function takes an L1 table (which may or may not be active) and iterates over all its L2 tables looking for zero clusters. Since we'll be loading L2 slices instead of full tables we need to add an extra loop that iterates over all slices of each L2 table, and we should also use the slice size when allocating the buffer used when the L1 table is not active. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Finally, and since we have to touch the bdrv_read() / bdrv_write() calls anyway, this patch takes the opportunity to replace them with the byte-based bdrv_pread() / bdrv_pwrite(). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 43590976f730501688096cff103f2923b72b0f32.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
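A standalone sketch of the extra loop this commit describes, with hypothetical sizes (512 entries per L2 table, 128 per slice) chosen only to show how consecutive slices cover one table:

    #include <stdio.h>

    int main(void)
    {
        const int l2_size = 512;        /* entries in a full L2 table (hypothetical) */
        const int l2_slice_size = 128;  /* entries in one cache slice (hypothetical) */
        int n_slices = l2_size / l2_slice_size;

        for (int slice = 0; slice < n_slices; slice++) {
            int first = slice * l2_slice_size;
            /* the real function loads this slice and scans entries
             * [first, first + l2_slice_size) for zero clusters */
            printf("slice %d covers L2 entries [%d, %d)\n",
                   slice, first, first + l2_slice_size);
        }
        return 0;
    }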
2018-02-13  qcow2: Prepare expand_zero_clusters_in_l1() for adding L2 slice support  (Alberto Garcia)
Adding support for L2 slices to expand_zero_clusters_in_l1() needs (among other things) an extra loop that iterates over all slices of each L2 table. Putting all changes in one patch would make it hard to read because all semantic changes would be mixed with pure indentation changes. To make things easier this patch simply creates a new block and changes the indentation of all lines of code inside it. Thus, all modifications in this patch are cosmetic. There are no semantic changes and no variables are renamed yet. The next patch will take care of that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: c2ae9f31ed5b6e591477ad4654448badd1c89d73.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Read refcount before L2 table in expand_zero_clusters_in_l1()  (Alberto Garcia)
At the moment it doesn't really make a difference whether we call qcow2_get_refcount() before or after reading the L2 table, but if we want to support L2 slices we'll need to read the refcount first. This patch simply changes the order of those two operations to prepare for that. The patch with the actual semantic changes will be easier to read because of this. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 947a91d934053a2dbfef979aeb9568f57ef57c5d.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update zero_single_l2() to support L2 slices  (Alberto Garcia)
zero_single_l2() limits the number of clusters to be zeroed to the amount that fits inside an L2 table. Since we'll be loading L2 slices instead of full tables we need to update that limit. The function is renamed to zero_in_l2_slice() for clarity. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: ebc16e7e79fa6969d8975ef487d679794de4fbcc.1517840877.git.berto@igalia.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
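A minimal sketch of the new limit, with hypothetical names and a plain-parameter form (the real code works on the qcow2 state struct): the cluster count is clamped to the entries left in the current slice rather than in the full L2 table.

    /* Clamp nb_clusters so the loop never walks past the end of the
     * currently loaded L2 slice. */
    static unsigned clamp_to_l2_slice(unsigned nb_clusters, int l2_index,
                                      int l2_slice_size)
    {
        unsigned remaining = l2_slice_size - l2_index;  /* entries left in this slice */
        return nb_clusters < remaining ? nb_clusters : remaining;
    }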
2018-02-13  qcow2: Update discard_single_l2() to support L2 slices  (Alberto Garcia)
discard_single_l2() limits the number of clusters to be discarded to the amount that fits inside an L2 table. Since we'll be loading L2 slices instead of full tables we need to update that limit. The function is renamed to discard_in_l2_slice() for clarity. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 1cb44a5b68be5334cb01b97a3db3a3c5a43396e5.1517840877.git.berto@igalia.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update handle_alloc() to support L2 slices  (Alberto Garcia)
handle_alloc() loads an L2 table and limits the number of checked clusters to the amount that fits inside that table. Since we'll be loading L2 slices instead of full tables we need to update that limit. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b243299c7136f7014c5af51665431ddbf5e99afd.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update handle_copied() to support L2 slices  (Alberto Garcia)
handle_copied() loads an L2 table and limits the number of checked clusters to the amount that fits inside that table. Since we'll be loading L2 slices instead of full tables we need to update that limit. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 541ac001a7d6b86bab2392554bee53c2b312148c.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_alloc_cluster_link_l2() to support L2 slices  (Alberto Garcia)
There's a loop in this function that iterates over the L2 entries in a table, so now we need to assert that it remains within the limits of an L2 slice. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: f9846a1c2efc51938e877e2a25852d9ab14797ff.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_get_cluster_offset() to support L2 slices  (Alberto Garcia)
qcow2_get_cluster_offset() checks how many contiguous bytes are available at a given offset. The returned number of bytes is limited by the amount that can be addressed without having to load more than one L2 table. Since we'll be loading L2 slices instead of full tables this patch changes the limit accordingly using the size of the L2 slice for the calculations instead of the full table size. One consequence of this is that with small L2 slices operations such as 'qemu-img map' will need to iterate in more steps because each qcow2_get_cluster_offset() call will potentially return a smaller number. However the code is already prepared for that so this doesn't break semantics. The l2_table variable is also renamed to l2_slice to reflect this, and offset_to_l2_index() is replaced with offset_to_l2_slice_index(). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6b602260acb33da56ed6af9611731cb7acd110eb.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update get_cluster_table() to support L2 slices  (Alberto Garcia)
This patch updates get_cluster_table() to return L2 slices instead of full L2 tables. The code itself needs almost no changes, it only needs to call offset_to_l2_slice_index() instead of offset_to_l2_index(). This patch also renames all the relevant variables and the documentation. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 64cf064c0021ba315d3f3032da0f95db1b615f33.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
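For reference, simplified standalone forms of the two index helpers (the real ones take the qcow2 driver state; the explicit parameter form here is an assumption made for illustration). Both the table size and the slice size are powers of two, so a mask extracts the index.

    #include <stdint.h>

    /* Index of a guest offset within its full L2 table (l2_size entries). */
    static int offset_to_l2_index(uint64_t offset, int cluster_bits, int l2_size)
    {
        return (offset >> cluster_bits) & (l2_size - 1);
    }

    /* Index of the same offset within its L2 slice (l2_slice_size entries). */
    static int offset_to_l2_slice_index(uint64_t offset, int cluster_bits,
                                        int l2_slice_size)
    {
        return (offset >> cluster_bits) & (l2_slice_size - 1);
    }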
2018-02-13  qcow2: Refactor get_cluster_table()  (Alberto Garcia)
After the previous patch we're now always using l2_load() in get_cluster_table() regardless of whether a new L2 table has to be allocated or not. This patch refactors that part of the code to use one single l2_load() call. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: ce31758c4a1fadccea7a6ccb93951eb01d95fd4c.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update l2_allocate() to support L2 slices  (Alberto Garcia)
This patch updates l2_allocate() to support the qcow2 cache returning L2 slices instead of full L2 tables. The old code simply gets an L2 table from the cache and initializes it with zeroes or with the contents of an existing table. With a cache that returns slices instead of tables the idea remains the same, but the code must now iterate over all the slices that are contained in an L2 table. Since now we're operating with slices the function can no longer return the newly-allocated table, so it's up to the caller to retrieve the appropriate L2 slice after calling l2_allocate() (note that with this patch the caller is still loading full L2 tables, but we'll deal with that in a separate patch). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-id: 20fc0415bf0e011e29f6487ec86eb06a11f37445.1517840877.git.berto@igalia.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Prepare l2_allocate() for adding L2 slice support  (Alberto Garcia)
Adding support for L2 slices to l2_allocate() needs (among other things) an extra loop that iterates over all slices of a new L2 table. Putting all changes in one patch would make it hard to read because all semantic changes would be mixed with pure indentation changes. To make things easier this patch simply creates a new block and changes the indentation of all lines of code inside it. Thus, all modifications in this patch are cosmetic. There are no semantic changes and no variables are renamed yet. The next patch will take care of that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: d0d7dca8520db304524f52f49d8157595a707a35.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update l2_load() to support L2 slices  (Alberto Garcia)
Each entry in the qcow2 L2 cache stores a full L2 table (which uses a complete cluster in the qcow2 image). A cluster is usually too large to be used efficiently as the size for a cache entry, so we want to decouple both values by allowing smaller cache entries. Therefore the qcow2 L2 cache will no longer return full L2 tables but slices instead. This patch updates l2_load() so it can handle L2 slices correctly. Apart from the offset of the L2 table (which we already had) we also need the guest offset in order to calculate which one of the slices we need. An L2 slice currently has the same size as an L2 table (one cluster), so for now this function will load exactly the same data as before. This patch also removes a stale comment about the return value being a pointer to the L2 table. This function returns an error code since 55c17e9821c474d5fcdebdc82ed2fc096777d611. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b830aa1fc5b6f8e3cb331d006853fe22facca847.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
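A self-contained sketch of the slice selection this commit describes (the parameter form and the byte arithmetic are assumptions for illustration, not the literal l2_load() code): the guest offset determines which slice of the table the cache is asked for.

    #include <stdint.h>

    /* Byte offset, within its L2 table, of the slice that maps 'offset':
     * the table-wide index minus the slice-local index gives the first
     * entry of that slice, and each entry is 8 bytes. */
    static uint64_t slice_offset_in_l2_table(uint64_t offset, int cluster_bits,
                                             int l2_size, int l2_slice_size)
    {
        int table_index = (offset >> cluster_bits) & (l2_size - 1);
        int slice_index = (offset >> cluster_bits) & (l2_slice_size - 1);
        return (uint64_t)(table_index - slice_index) * sizeof(uint64_t);
    }

The cache would then be asked for the table offset plus this value; while a slice is still a whole cluster the result is always 0, matching the old behaviour exactly.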
2018-02-13  qcow2: Add offset_to_l1_index()  (Alberto Garcia)
Similar to offset_to_l2_index(), this function returns the index in the L1 table for a given guest offset. This is only used in a couple of places and it's not a particularly complex calculation, but it makes the code a bit more readable. Although in the qcow2_get_cluster_offset() case the old code was taking advantage of the l1_bits variable, we're going to get rid of the other uses of l1_bits in a later patch anyway, so it doesn't make sense to keep it just for this. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: a5f626fed526b7459a0425fad06d823d18df8522.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
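A simplified standalone version of the calculation (the real helper takes the qcow2 state; the explicit bit-count parameters are assumptions for illustration):

    #include <stdint.h>

    /* Each L1 entry covers (1 << l2_bits) L2 entries of (1 << cluster_bits)
     * bytes each, so the L1 index is the guest offset shifted down by both. */
    static int offset_to_l1_index(uint64_t offset, int cluster_bits, int l2_bits)
    {
        return offset >> (l2_bits + cluster_bits);
    }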
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_put()  (Alberto Garcia)
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_idx(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6f98155489054a457563da77cdad1a66ebb3e896.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_entry_mark_dirty()  (Alberto Garcia)
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_idx(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 5c40516a91782b083c1428b7b6a41bb9e2679bfb.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Fix documentation of get_cluster_table()  (Alberto Garcia)
This function has not been returning the offset of the L2 table since commit 3948d1d4876065160583e79533bf604481063833 Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b498733b6706a859a03678d74ecbd26aeba129aa.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Use g_try_realloc() in qcow2_expand_zero_clusters()  (Alberto Garcia)
g_realloc() aborts the program if it fails to allocate the required amount of memory. We want to detect that scenario and return an error instead, so let's use g_try_realloc(). Signed-off-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
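A minimal sketch of the pattern (GLib's g_try_realloc(); the surrounding function and names are hypothetical):

    #include <glib.h>
    #include <errno.h>

    static int grow_entry_array(guint64 **array, gsize new_entries)
    {
        /* g_realloc() aborts the program on failure; g_try_realloc() returns
         * NULL instead, so the caller can see -ENOMEM and fail gracefully. */
        guint64 *tmp = g_try_realloc(*array, new_entries * sizeof(guint64));
        if (tmp == NULL) {
            return -ENOMEM;
        }
        *array = tmp;
        return 0;
    }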
2018-02-09  Include qapi/error.h exactly where needed  (Markus Armbruster)
This cleanup makes the number of objects depending on qapi/error.h drop from 1910 (out of 4743) to 1612 in my "build everything" tree. While there, separate #include from file comment with a blank line, and drop a useless comment on why qemu/osdep.h is included first. Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20180201111846.21846-5-armbru@redhat.com> [Semantic conflict with commit 34e304e975 resolved, OSX breakage fixed]
2017-11-17  qcow2: Unaligned zero cluster in handle_alloc()  (Max Reitz)
We should check whether the cluster offset we are about to use is actually valid; that is, whether it is aligned to cluster boundaries. Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com> Buglink: https://bugs.launchpad.net/qemu/+bug/1728643 Buglink: https://bugs.launchpad.net/qemu/+bug/1728657 Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20171110203111.7666-3-mreitz@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
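A hedged sketch of the kind of check described (the names and the error value are assumptions; the real code uses the driver's own helpers and also reports the corruption):

    #include <stdint.h>
    #include <errno.h>

    /* A data cluster offset taken from an L2 entry must sit on a cluster
     * boundary; anything else means the image metadata is corrupt. */
    static int check_cluster_alignment(uint64_t cluster_offset, int cluster_bits)
    {
        uint64_t cluster_size = UINT64_C(1) << cluster_bits;
        if (cluster_offset & (cluster_size - 1)) {
            return -EIO;
        }
        return 0;
    }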
2017-11-14  qcow2: Prevent allocating L2 tables at offset 0  (Alberto Garcia)
If the refcount data is corrupted then we can end up trying to allocate a new L2 table at offset 0 in the image, triggering an assertion in the qcow2 cache that would crash QEMU:

    qcow2_cache_entry_mark_dirty: Assertion `c->entries[i].offset != 0' failed

This patch adds an explicit check for this scenario and a new test case. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 92dac37191ae7844a2da22c122204eb493cc3133.1509718618.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
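A sketch of the guard being added, under the assumption that -EIO is returned to the caller (the actual patch also reports the corruption to the user):

    #include <stdint.h>
    #include <errno.h>

    /* Cluster 0 holds the image header, so an allocator that hands back
     * offset 0 for a new L2 table can only mean corrupt refcount data. */
    static int check_new_l2_offset(uint64_t l2_offset)
    {
        if (l2_offset == 0) {
            return -EIO;  /* fail cleanly instead of tripping the cache assertion */
        }
        return 0;
    }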
2017-10-26  block: Convert bdrv_get_block_status() to bytes  (Eric Blake)
We are gradually moving away from sector-based interfaces, towards byte-based. In the common case, allocation is unlikely to ever use values that are not naturally sector-aligned, but it is possible that byte-based values will let us be more precise about allocation at the end of an unaligned file that can do byte-based access. Changing the name of the function from bdrv_get_block_status() to bdrv_block_status() ensures that the compiler enforces that all callers are updated. For now, the io.c layer still assert()s that all callers are sector-aligned, but that can be relaxed when a later patch implements byte-based block status in the drivers. There was an inherent limitation in returning the offset via the return value: we only have room for BDRV_BLOCK_OFFSET_MASK bits, which means an offset can only be mapped for sector-aligned queries (or, if we declare that non-aligned input is at the same relative position modulo 512 of the answer), so the new interface also changes things to return the offset via output through a parameter by reference rather than mashed into the return value. We'll have some glue code that munges between the two styles until we finish converting all uses. For the most part this patch is just the addition of scaling at the callers followed by inverse scaling at bdrv_block_status(), coupled with the tweak in calling convention. But some code, particularly bdrv_is_allocated(), gets a lot simpler because it no longer has to mess with sectors. For ease of review, bdrv_get_block_status_above() will be tackled separately. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-06  block: convert qcrypto_block_encrypt|decrypt to take bytes offset  (Daniel P. Berrange)
Instead of sector offset, take the bytes offset when encrypting or decrypting data. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 20170927125340.12360-6-berrange@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-09-26  qcow2: add shrink image support  (Pavel Butsykin)
This patch adds shrinking of the image file for qcow2. As a result, this allows us to reduce the virtual image size and free up space on the disk without copying the image. The image can be fragmented, and shrinking is done by punching holes in the image file. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20170918124230.8152-4-pbutsykin@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-08-31  Merge remote-tracking branch 'remotes/elmarco/tags/tidy-pull-request' into staging  (Peter Maydell)
# gpg: Signature made Thu 31 Aug 2017 11:29:33 BST
# gpg: using RSA key 0xDAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <marcandre.lureau@redhat.com>"
# gpg: aka "Marc-André Lureau <marcandre.lureau@gmail.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5

* remotes/elmarco/tags/tidy-pull-request: (29 commits)
  eepro100: replace g_malloc()+memcpy() with g_memdup()
  test-iov: replace g_malloc()+memcpy() with g_memdup()
  i386: replace g_malloc()+memcpy() with g_memdup()
  i386: introduce ELF_NOTE_SIZE macro
  decnumber: use DIV_ROUND_UP
  kvm: use DIV_ROUND_UP
  i386/dump: use DIV_ROUND_UP
  ppc: use DIV_ROUND_UP
  msix: use DIV_ROUND_UP
  usb-hub: use DIV_ROUND_UP
  q35: use DIV_ROUND_UP
  piix: use DIV_ROUND_UP
  virtio-serial: use DIV_ROUND_UP
  console: use DIV_ROUND_UP
  monitor: use DIV_ROUND_UP
  virtio-gpu: use DIV_ROUND_UP
  vga: use DIV_ROUND_UP
  ui: use DIV_ROUND_UP
  vnc: use DIV_ROUND_UP
  vvfat: use DIV_ROUND_UP
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-08-31  qcow2: use DIV_ROUND_UP  (Marc-André Lureau)
I used the clang-tidy qemu-round check to generate the fix: https://github.com/elmarco/clang-tools-extra Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
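A tiny self-contained example of the mechanical change (the macro here mirrors the usual DIV_ROUND_UP definition; the values are made up):

    #include <stdio.h>
    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        uint64_t image_size = 10ULL * 1024 * 1024 + 1;  /* hypothetical sizes */
        uint64_t cluster_size = 65536;

        /* before: clusters = (image_size + cluster_size - 1) / cluster_size; */
        uint64_t clusters = DIV_ROUND_UP(image_size, cluster_size);
        printf("%llu clusters\n", (unsigned long long)clusters);
        return 0;
    }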
2017-08-30  qcow2: allocate cluster_cache/cluster_data on demand  (Stefan Hajnoczi)
Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1) * cluster_size + 512 bytes upfront. Allocate s->cluster_cache and s->cluster_data when the first read operation is performed on a compressed cluster. The buffers are freed in .bdrv_close(). .bdrv_open() no longer has any code paths that can allocate these buffers, so remove the free functions in the error code path. This patch can result in significant memory savings when many qcow2 disks are attached or backing file chains are long:

    Before: 12.81% (1,023,193,088B)
    After:   5.36% (393,893,888B)

Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20170821135530.32344-1-stefanha@redhat.com Cc: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
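A hedged sketch of the on-demand allocation pattern (the struct, field and function names are stand-ins for illustration; only the "(32 + 1) * cluster_size + 512" sizing comes from the commit message):

    #include <glib.h>
    #include <errno.h>
    #include <stdint.h>

    typedef struct CompressedBufs {      /* stand-in for the qcow2 driver state */
        size_t cluster_size;
        uint8_t *cluster_cache;          /* decompressed cluster cache */
        uint8_t *cluster_data;           /* compressed read buffer */
    } CompressedBufs;

    /* Called on the first read of a compressed cluster; before the change
     * these buffers were allocated unconditionally in .bdrv_open(). */
    static int ensure_compressed_buffers(CompressedBufs *s)
    {
        if (s->cluster_cache != NULL) {
            return 0;                    /* already allocated by an earlier read */
        }
        s->cluster_cache = g_try_malloc(s->cluster_size);
        s->cluster_data = g_try_malloc(33 * s->cluster_size + 512);
        if (s->cluster_cache == NULL || s->cluster_data == NULL) {
            g_free(s->cluster_cache);
            g_free(s->cluster_data);
            s->cluster_cache = s->cluster_data = NULL;
            return -ENOMEM;
        }
        return 0;
    }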
2017-07-11  qcow2: add support for LUKS encryption format  (Daniel P. Berrange)
This adds support for using LUKS as an encryption format with the qcow2 file, using the new encrypt.format parameter to request "luks" format. e.g.

    # qemu-img create --object secret,data=123456,id=sec0 \
        -f qcow2 -o encrypt.format=luks,encrypt.key-secret=sec0 \
        test.qcow2 10G

The legacy "encryption=on" parameter still results in creation of the old qcow2 AES format (and is equivalent to the new 'encryption-format=aes'). e.g. the following are equivalent:

    # qemu-img create --object secret,data=123456,id=sec0 \
        -f qcow2 -o encryption=on,encrypt.key-secret=sec0 \
        test.qcow2 10G

    # qemu-img create --object secret,data=123456,id=sec0 \
        -f qcow2 -o encryption-format=aes,encrypt.key-secret=sec0 \
        test.qcow2 10G

With the LUKS format it is necessary to store the LUKS partition header and key material in the QCow2 file. This data can be many MB in size, so cannot go into the QCow2 header region directly. Thus the spec defines a FDE (Full Disk Encryption) header extension that specifies the offset of a set of clusters to hold the FDE headers, as well as the length of that region. The LUKS header is thus stored in these extra allocated clusters before the main image payload. Aside from all the cryptographic differences implied by use of the LUKS format, there is one further key difference between the use of legacy AES and LUKS encryption in qcow2. For LUKS, the initialization vectors are generated using the host physical sector as the input, rather than the guest virtual sector. This guarantees unique initialization vectors for all sectors when qcow2 internal snapshots are used, thus giving stronger protection against watermarking attacks. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 20170623162419.26068-14-berrange@redhat.com Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-07-11  qcow2: convert QCow2 to use QCryptoBlock for encryption  (Daniel P. Berrange)
This converts the qcow2 driver to make use of the QCryptoBlock APIs for encrypting image content, using the legacy QCow2 AES scheme. With this change it is now required to use the QCryptoSecret object for providing passwords, instead of the current block password APIs / interactive prompting.

    $QEMU \
        -object secret,id=sec0,file=/home/berrange/encrypted.pw \
        -drive file=/home/berrange/encrypted.qcow2,encrypt.key-secret=sec0

The test 087 could be simplified since there is no longer a difference in behaviour when using blockdev_add with encrypted images for the running vs stopped CPU state. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 20170623162419.26068-12-berrange@redhat.com Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-07-11  qcow2: make qcow2_encrypt_sectors encrypt in place  (Daniel P. Berrange)
Instead of requiring separate input/output buffers for encrypting data, change qcow2_encrypt_sectors() to assume use of a single buffer, encrypting in place. The current callers all used the same buffer for input/output already. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 20170623162419.26068-11-berrange@redhat.com Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-06-26  qcow2: Use offset_into_cluster() and offset_to_l2_index()  (Alberto Garcia)
We already have functions for doing these calculations, so let's use them instead of doing everything by hand. This makes the code a bit more readable. Signed-off-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-06-26  qcow2: Merge the writing of the COW regions with the guest data  (Alberto Garcia)
If the guest tries to write data that results in the allocation of a new cluster, instead of writing the guest data first and then the data from the COW regions, write everything together using one single I/O operation. This can improve the write performance by 25% or more, depending on several factors such as the media type, the cluster size and the I/O request size. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
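A sketch of the merged write as an in-tree style fragment (QEMU's QEMUIOVector helpers are real; the buffer names and the single bdrv_co_pwritev() call are assumptions about how the pieces fit together, not the literal patch):

    QEMUIOVector qiov;
    qemu_iovec_init(&qiov, 3);                        /* up to three segments */
    qemu_iovec_add(&qiov, start_buf, start_bytes);    /* COW region before the guest data */
    qemu_iovec_add(&qiov, guest_buf, guest_bytes);    /* the guest write itself */
    qemu_iovec_add(&qiov, end_buf, end_bytes);        /* COW region after the guest data */
    /* one request instead of three: */
    ret = bdrv_co_pwritev(bs->file, cluster_offset, qiov.size, &qiov, 0);
    qemu_iovec_destroy(&qiov);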
2017-06-26  qcow2: Pass a QEMUIOVector to do_perform_cow_{read,write}()  (Alberto Garcia)
Instead of passing a single buffer pointer to do_perform_cow_write(), pass a QEMUIOVector. This will allow us to merge the write requests for the COW regions and the actual data into a single one. Although do_perform_cow_read() does not strictly need to change its API, we're doing it here as well for consistency. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-06-26  qcow2: Allow reading both COW regions with only one request  (Alberto Garcia)
Reading both COW regions requires two separate requests, but it's perfectly possible to merge them and perform only one. This generally improves performance, particularly on rotating disk drives. The downside is that the data in the middle region is read but discarded. This patch takes a conservative approach and only merges reads when the size of the middle region is <= 16KB. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-06-26  qcow2: Split do_perform_cow() into _read(), _encrypt() and _write()  (Alberto Garcia)
This patch splits do_perform_cow() into three separate functions to read, encrypt and write the COW regions. perform_cow() can now read both regions first, then encrypt them and finally write them to disk. The memory allocation is also done in this function now, using one single buffer large enough to hold both regions. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-06-26  qcow2: Make perform_cow() call do_perform_cow() twice  (Alberto Garcia)
Instead of calling perform_cow() twice with a different COW region each time, call it just once and make perform_cow() handle both regions. This patch simply moves code around. The next one will do the actual reordering of the COW operations. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-06-26  qcow2: Use unsigned int for both members of Qcow2COWRegion  (Alberto Garcia)
Qcow2COWRegion has two attributes: - The offset of the COW region from the start of the first cluster touched by the I/O request. Since it's always going to be positive and the maximum request size is at most INT_MAX, we can use a regular unsigned int to store this offset. - The size of the COW region in bytes. This is guaranteed to be >= 0, so we should use an unsigned type instead. In x86_64 this reduces the size of Qcow2COWRegion from 16 to 8 bytes. It will also help keep some assertions simpler now that we know that there are no negative numbers. The prototype of do_perform_cow() is also updated to reflect these changes. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
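The resulting layout is roughly the following (field names as commonly seen in the qcow2 code; treat the exact definition as an assumption and check the tree):

    typedef struct Qcow2COWRegion {
        /* Offset of the COW region in bytes from the start of the first
         * cluster touched by the I/O request (always >= 0, at most INT_MAX). */
        unsigned offset;

        /* Number of bytes to copy (always >= 0). */
        unsigned nb_bytes;
    } Qcow2COWRegion;   /* 8 bytes on x86_64, down from 16 */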
2017-06-26  qcow2: Remove unused Error variable in do_perform_cow()  (Alberto Garcia)
We are using the return value of qcow2_encrypt_sectors() to detect problems but we are throwing away the returned Error since we have no way to report it to the user. Therefore we can simply get rid of the local Error variable and pass NULL instead. Alternatively we could try to figure out a way to pass the original error instead of simply returning -EIO, but that would be more invasive, so let's keep the current approach. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-05-29  block: Tweak error message related to qemu-img amend  (Eric Blake)
When converting a 1.1 image down to 0.10, qemu-iotests 060 forces a contrived failure where allocating a cluster used to replace a zero cluster reads unaligned data. Since it is a zero cluster rather than a data cluster being converted, changing the error message to match our earlier change in 'qcow2: Make distinction between zero cluster types obvious' is worthwhile. Suggested-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170508171302.17805-1-eblake@redhat.com [mreitz: Commit message fixes] Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Discard/zero clusters by byte count  (Eric Blake)
Passing a byte offset, but sector count, when we ultimately want to operate on cluster granularity, is madness. Clean up the external interfaces to take both offset and count as bytes, while still keeping the assertion added previously that the caller must align the values to a cluster. Then rename things to make sure backports don't get confused by changed units: instead of qcow2_discard_clusters() and qcow2_zero_clusters(), we now have qcow2_cluster_discard() and qcow2_cluster_zeroize(). The internal functions still operate on clusters at a time, and return an int for number of cleared clusters; but on an image with 2M clusters, a single L2 table holds 256k entries that each represent a 2M cluster, totalling well over INT_MAX bytes if we ever had a request for that many bytes at once. All our callers currently limit themselves to 32-bit bytes (and therefore fewer clusters), but by making this function 64-bit clean, we have one less place to clean up if we later improve the block layer to support 64-bit bytes through all operations (with the block layer auto-fragmenting on behalf of more-limited drivers), rather than the current state where some interfaces are artificially limited to INT_MAX at a time. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-13-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
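A small standalone sketch of the 64-bit-clean byte-to-cluster conversion the commit argues for (the helper name and the alignment assertion are illustrative; the real qcow2 helper rounds up rather than asserting):

    #include <assert.h>
    #include <stdint.h>

    /* With 2 MiB clusters a single L2 table maps 256k * 2 MiB = 512 GiB,
     * far more than INT_MAX bytes, so the byte count must stay 64-bit. */
    static uint64_t aligned_bytes_to_clusters(uint64_t bytes, int cluster_bits)
    {
        uint64_t cluster_size = UINT64_C(1) << cluster_bits;
        assert((bytes & (cluster_size - 1)) == 0);  /* callers pass cluster-aligned values */
        return bytes >> cluster_bits;
    }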
2017-05-11  qcow2: Assert that cluster operations are aligned  (Eric Blake)
We already audited (in commit 0c1bd469) that qcow2_discard_clusters() is only passed cluster-aligned start values; but we can further tighten the assertion that the only unaligned end value is at EOF. Recent commits have taken advantage of an unaligned tail cluster, for both discard and write zeroes. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-12-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2017-05-11  qcow2: Optimize zero_single_l2() to minimize L2 churn  (Eric Blake)
Similar to discard_single_l2(), we should try to avoid dirtying the L2 cache when the cluster we are changing already has the right characteristics. Note that by the time we get to zero_single_l2(), BDRV_REQ_MAY_UNMAP is a requirement to unallocate a cluster (this is because the block layer clears that flag if discard.* flags during open requested that we never punch holes - see the conversation around commit 170f4b2e, https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg07306.html). Therefore, this patch can only reuse a zero cluster as-is if either unmapping is not requested, or if the zero cluster was not associated with an allocation. Technically, there are some cases where an unallocated cluster already reads as all zeroes (namely, when there is no backing file [easy: check bs->backing], or when the backing file also reads as zeroes [harder: we can't check bdrv_get_block_status since we are already holding the lock]), where the guest would not immediately see a difference if we left that cluster unallocated. But if the user did not request unmapping, leaving an unallocated cluster is wrong; and even if the user DID request unmapping, keeping a cluster unallocated risks a subtle semantic change of guest-visible contents if a backing file is later added, and it is not worth auditing whether all internal uses such as mirror properly avoid an unmap request. Thus, this patch is intentionally limited to just clusters that are already marked as zero. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20170507000552.20847-8-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
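A sketch of the skip condition described above, as an in-tree style fragment (the enum values come from the 'zero cluster types' commit below; the variable names are assumptions, not the literal patch):

    /* Leave the L2 entry untouched if it already reads as zero in the way
     * the caller needs: a plain zero cluster always qualifies, an allocated
     * zero cluster only if unmapping was not requested. */
    if (cluster_type == QCOW2_CLUSTER_ZERO_PLAIN ||
        (cluster_type == QCOW2_CLUSTER_ZERO_ALLOC && !unmap)) {
        continue;   /* nothing is written, so no L2 cache entry is dirtied */
    }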
2017-05-11  qcow2: Make distinction between zero cluster types obvious  (Eric Blake)
Treat plain zero clusters differently from allocated ones, so that we can simplify the logic of checking whether an offset is present. Do this by splitting QCOW2_CLUSTER_ZERO into two new enums, QCOW2_CLUSTER_ZERO_PLAIN and QCOW2_CLUSTER_ZERO_ALLOC. I tried to arrange the enum so that we could use 'ret <= QCOW2_CLUSTER_ZERO_PLAIN' for all unallocated types, and 'ret >= QCOW2_CLUSTER_ZERO_ALLOC' for allocated types, although I didn't actually end up taking advantage of the layout. In many cases, this leads to simpler code, by properly combining cases (sometimes, both zero types pair together, other times, plain zero is more like unallocated while allocated zero is more like normal). Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170507000552.20847-7-eblake@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
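The resulting enum looks roughly like this (the ordering is chosen so the comparisons mentioned above work; the typedef name comes from the 'Name typedef for cluster type' commit below, and the exact definition should be checked against the tree):

    typedef enum QCow2ClusterType {
        QCOW2_CLUSTER_UNALLOCATED,
        QCOW2_CLUSTER_ZERO_PLAIN,   /* zero flag set, no host cluster allocated */
        QCOW2_CLUSTER_ZERO_ALLOC,   /* zero flag set, host cluster allocated */
        QCOW2_CLUSTER_NORMAL,
        QCOW2_CLUSTER_COMPRESSED,
    } QCow2ClusterType;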
2017-05-11  qcow2: Name typedef for cluster type  (Eric Blake)
Although it doesn't add all that much type safety (this is C, after all), it does add a bit of legibility to use the name QCow2ClusterType instead of a plain int. In particular, qcow2_get_cluster_offset() has an overloaded return type; a QCow2ClusterType on success, and -errno on failure; keeping the cluster type in a separate variable makes it slightly easier for the next patch to make further computations based on the type. Suggested-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 20170507000552.20847-6-eblake@redhat.com [mreitz: Use the new type in two more places (one of them pulled from the next patch)] Signed-off-by: Max Reitz <mreitz@redhat.com>