path: root/block
Age  Commit message  Author
2018-03-02  Include qapi/qmp/qerror.h exactly where needed  Markus Armbruster
Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20180211093607.27351-2-armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
2018-03-01  nbd: Honor server's advertised minimum block size  Eric Blake
Commit 79ba8c98 (v2.7) changed the setting of request_alignment to occur only during bdrv_refresh_limits(), rather than at bdrv_open() time; but at the time, NBD was unaffected, because it still used sector-based callbacks, so the block layer defaulted NBD to use 512 request_alignment. Later, commit 70c4fb26 (also v2.7) changed NBD to use byte-based callbacks, without setting request_alignment. This resulted in NBD using request_alignment of 1, which works great when the server supports it (as is the case for qemu-nbd), but falls apart miserably if the server requires alignment (but only if qemu actually sends a sub-sector request; qemu-io can do it, but most qemu operations still perform on sectors or larger). Even later, the NBD protocol was updated to document that clients should learn the server's minimum alignment during NBD_OPT_GO; and recommended that clients should assume a minimum size of 512 unless the server understands NBD_OPT_GO and replied with a smaller size. Commit 081dd1fe (v2.10) attempted to do that, by assigning request_alignment to whatever was learned from the server; but it has two flaws: the assignment is done during bdrv_open() so it gets unconditionally wiped out back to 1 during any later bdrv_refresh_limits(); and the code is not using a default of 512 when the server did not report a minimum size. Fix these issues by moving the assignment to request_alignment to the right function, and by using a sane default when the server does not advertise a minimum size. CC: qemu-stable@nongnu.org Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180215032905.27146-1-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
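As a rough, standalone illustration of the fix described above (the helper name and the way the values are passed are assumptions, not QEMU's actual code), the alignment choice boils down to:

    #include <stdint.h>

    /* Hypothetical helper: pick the request alignment from the server's
     * advertised minimum block size, falling back to 512 bytes when the
     * server did not report one (e.g. no NBD_OPT_GO support). */
    static uint32_t pick_request_alignment(uint32_t server_min_block)
    {
        return server_min_block ? server_min_block : 512;
    }

The key point of the fix is that this value is applied from the refresh-limits callback, so a later bdrv_refresh_limits() no longer resets the alignment back to 1.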
2018-03-01  block/nvme: fix Coverity reports  Paolo Bonzini
1) string not null terminated in sysfs_find_group_file 2) NULL pointer dereference and dead local variable in nvme_init. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Message-Id: <20180213015240.9352-1-famz@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
2018-02-13  qcow2: Allow configuring the L2 slice size  Alberto Garcia
Now that the code is ready to handle L2 slices we can finally add an option to allow configuring their size. An L2 slice is the portion of an L2 table that is read by the qcow2 cache. Until now the cache was always reading full L2 tables, and since the L2 table size is equal to the cluster size this was not very efficient with large clusters. Here's a more detailed explanation of why it makes sense to have smaller cache entries in order to load L2 data: https://lists.gnu.org/archive/html/qemu-block/2017-09/msg00635.html This patch introduces a new command-line option to the qcow2 driver named l2-cache-entry-size (cf. l2-cache-size). The cache entry size has the same restrictions as the cluster size: it must be a power of two and it has the same range of allowed values, with the additional requirement that it must not be larger than the cluster size. The L2 cache entry size (L2 slice size) remains equal to the cluster size for now by default, so this feature must be explicitly enabled. Although my tests show that 4KB slices consistently improve performance and give the best results, let's wait and make more tests with different cluster sizes before deciding on an optimal default. Now that the cache entry size is not necessarily equal to the cluster size we need to reflect that in the MIN_L2_CACHE_SIZE documentation. That minimum value is a requirement of the COW algorithm: we need to read two L2 slices (and not two L2 tables) in order to do COW, see l2_allocate() for the actual code. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: c73e5611ff4a9ec5d20de68a6c289553a13d2354.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
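A minimal sketch of the restrictions spelled out above; the function name is made up and the 512-byte lower bound is an assumption taken from the minimum qcow2 cluster size, not code from the driver:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative check for an l2-cache-entry-size value: a power of two,
     * within the same range as valid cluster sizes (assumed to start at
     * 512 bytes here), and no larger than the cluster size itself. */
    static bool l2_cache_entry_size_is_valid(uint64_t entry_size,
                                             uint64_t cluster_size)
    {
        bool power_of_two = entry_size != 0 &&
                            (entry_size & (entry_size - 1)) == 0;
        return power_of_two && entry_size >= 512 && entry_size <= cluster_size;
    }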
2018-02-13  qcow2: Rename l2_table in count_cow_clusters()  Alberto Garcia
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6107001fc79e6739242f1de7d191375e4f130aac.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in count_contiguous_clusters_unallocated()  Alberto Garcia
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 78bcc54bc632574dd0b900a77a00a1b6ffc359e6.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in count_contiguous_clusters()  Alberto Garcia
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 812b0c3505bb1687e51285dccf1a94f0cecb1f74.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Rename l2_table in qcow2_alloc_compressed_cluster_offset()  Alberto Garcia
This function doesn't need any changes to support L2 slices, but since it's now dealing with slices instead of full tables, the l2_table variable is renamed for clarity. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 0c5d4b9bf163aa3b49ec19cc512a50d83563f2ad.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_truncate() to support L2 slices  Alberto Garcia
The qcow2_truncate() code is mostly independent from whether we're using L2 slices or full L2 tables, but in full and falloc preallocation modes new L2 tables are allocated using qcow2_alloc_cluster_link_l2(). Therefore the code needs to be modified to ensure that all nb_clusters that are processed in each call can be allocated with just one L2 slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 1fd7d272b5e7b66254a090b74cf2bed1cc334c0e.1517840877.git.berto@igalia.com Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update expand_zero_clusters_in_l1() to support L2 slices  Alberto Garcia
expand_zero_clusters_in_l1() expands zero clusters as a necessary step to downgrade qcow2 images to a version that doesn't support metadata zero clusters. This function takes an L1 table (which may or may not be active) and iterates over all its L2 tables looking for zero clusters. Since we'll be loading L2 slices instead of full tables we need to add an extra loop that iterates over all slices of each L2 table, and we should also use the slice size when allocating the buffer used when the L1 table is not active. This function doesn't need any additional changes so apart from that this patch simply updates the variable name from l2_table to l2_slice. Finally, and since we have to touch the bdrv_read() / bdrv_write() calls anyway, this patch takes the opportunity to replace them with the byte-based bdrv_pread() / bdrv_pwrite(). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 43590976f730501688096cff103f2923b72b0f32.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Prepare expand_zero_clusters_in_l1() for adding L2 slice support  Alberto Garcia
Adding support for L2 slices to expand_zero_clusters_in_l1() needs (among other things) an extra loop that iterates over all slices of each L2 table. Putting all changes in one patch would make it hard to read because all semantic changes would be mixed with pure indentation changes. To make things easier this patch simply creates a new block and changes the indentation of all lines of code inside it. Thus, all modifications in this patch are cosmetic. There are no semantic changes and no variables are renamed yet. The next patch will take care of that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: c2ae9f31ed5b6e591477ad4654448badd1c89d73.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Read refcount before L2 table in expand_zero_clusters_in_l1()  Alberto Garcia
At the moment it doesn't really make a difference whether we call qcow2_get_refcount() before or after reading the L2 table, but if we want to support L2 slices we'll need to read the refcount first. This patch simply changes the order of those two operations to prepare for that. The patch with the actual semantic changes will be easier to read because of this. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 947a91d934053a2dbfef979aeb9568f57ef57c5d.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_update_snapshot_refcount() to support L2 slices  Alberto Garcia
qcow2_update_snapshot_refcount() increases the refcount of all clusters of a given snapshot. In order to do that it needs to load all its L2 tables and iterate over their entries. Since we'll be loading L2 slices instead of full tables we need to add an extra loop that iterates over all slices of each L2 table. This function doesn't need any additional changes so apart from that this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 5f4db199b9637f4833b58487135124d70add8cf0.1517840877.git.berto@igalia.com Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Prepare qcow2_update_snapshot_refcount() for adding L2 slice support  Alberto Garcia
Adding support for L2 slices to qcow2_update_snapshot_refcount() needs (among other things) an extra loop that iterates over all slices of each L2 table. Putting all changes in one patch would make it hard to read because all semantic changes would be mixed with pure indentation changes. To make things easier this patch simply creates a new block and changes the indentation of all lines of code inside it. Thus, all modifications in this patch are cosmetic. There are no semantic changes and no variables are renamed yet. The next patch will take care of that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 8ffaa5e55bd51121f80e498f4045b64902a94293.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update zero_single_l2() to support L2 slices  Alberto Garcia
zero_single_l2() limits the number of clusters to be zeroed to the amount that fits inside an L2 table. Since we'll be loading L2 slices instead of full tables we need to update that limit. The function is renamed to zero_in_l2_slice() for clarity. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: ebc16e7e79fa6969d8975ef487d679794de4fbcc.1517840877.git.berto@igalia.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update discard_single_l2() to support L2 slices  Alberto Garcia
discard_single_l2() limits the number of clusters to be discarded to the amount that fits inside an L2 table. Since we'll be loading L2 slices instead of full tables we need to update that limit. The function is renamed to discard_in_l2_slice() for clarity. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 1cb44a5b68be5334cb01b97a3db3a3c5a43396e5.1517840877.git.berto@igalia.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update handle_alloc() to support L2 slices  Alberto Garcia
handle_alloc() loads an L2 table and limits the number of checked clusters to the amount that fits inside that table. Since we'll be loading L2 slices instead of full tables we need to update that limit. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b243299c7136f7014c5af51665431ddbf5e99afd.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update handle_copied() to support L2 slices  Alberto Garcia
handle_copied() loads an L2 table and limits the number of checked clusters to the amount that fits inside that table. Since we'll be loading L2 slices instead of full tables we need to update that limit. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 541ac001a7d6b86bab2392554bee53c2b312148c.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_alloc_cluster_link_l2() to support L2 slices  Alberto Garcia
There's a loop in this function that iterates over the L2 entries in a table, so now we need to assert that it remains within the limits of an L2 slice. Apart from that, this function doesn't need any additional changes, so this patch simply updates the variable name from l2_table to l2_slice. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: f9846a1c2efc51938e877e2a25852d9ab14797ff.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update qcow2_get_cluster_offset() to support L2 slices  Alberto Garcia
qcow2_get_cluster_offset() checks how many contiguous bytes are available at a given offset. The returned number of bytes is limited by the amount that can be addressed without having to load more than one L2 table. Since we'll be loading L2 slices instead of full tables this patch changes the limit accordingly using the size of the L2 slice for the calculations instead of the full table size. One consequence of this is that with small L2 slices operations such as 'qemu-img map' will need to iterate in more steps because each qcow2_get_cluster_offset() call will potentially return a smaller number. However the code is already prepared for that so this doesn't break semantics. The l2_table variable is also renamed to l2_slice to reflect this, and offset_to_l2_index() is replaced with offset_to_l2_slice_index(). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6b602260acb33da56ed6af9611731cb7acd110eb.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
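The limit described above can be pictured with a standalone sketch (names and parameter passing are illustrative; the slice entry count is assumed to be a power of two):

    #include <stdint.h>

    /* Contiguous guest bytes that can be described starting at guest_offset
     * without crossing into another L2 slice (l2_slice_size is in entries). */
    static uint64_t bytes_available_in_l2_slice(uint64_t guest_offset,
                                                unsigned cluster_bits,
                                                unsigned l2_slice_size)
    {
        uint64_t slice_index = (guest_offset >> cluster_bits)
                               & (l2_slice_size - 1);
        uint64_t offset_in_cluster = guest_offset
                                     & ((1ULL << cluster_bits) - 1);
        return ((uint64_t)(l2_slice_size - slice_index) << cluster_bits)
               - offset_in_cluster;
    }

With a smaller l2_slice_size this bound shrinks, which is why operations such as 'qemu-img map' may need more iterations, as noted above.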
2018-02-13  qcow2: Update get_cluster_table() to support L2 slices  Alberto Garcia
This patch updates get_cluster_table() to return L2 slices instead of full L2 tables. The code itself needs almost no changes, it only needs to call offset_to_l2_slice_index() instead of offset_to_l2_index(). This patch also renames all the relevant variables and the documentation. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 64cf064c0021ba315d3f3032da0f95db1b615f33.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Refactor get_cluster_table()  Alberto Garcia
After the previous patch we're now always using l2_load() in get_cluster_table() regardless of whether a new L2 table has to be allocated or not. This patch refactors that part of the code to use one single l2_load() call. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: ce31758c4a1fadccea7a6ccb93951eb01d95fd4c.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update l2_allocate() to support L2 slices  Alberto Garcia
This patch updates l2_allocate() to support the qcow2 cache returning L2 slices instead of full L2 tables. The old code simply gets an L2 table from the cache and initializes it with zeroes or with the contents of an existing table. With a cache that returns slices instead of tables the idea remains the same, but the code must now iterate over all the slices that are contained in an L2 table. Since now we're operating with slices the function can no longer return the newly-allocated table, so it's up to the caller to retrieve the appropriate L2 slice after calling l2_allocate() (note that with this patch the caller is still loading full L2 tables, but we'll deal with that in a separate patch). Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-id: 20fc0415bf0e011e29f6487ec86eb06a11f37445.1517840877.git.berto@igalia.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Prepare l2_allocate() for adding L2 slice support  Alberto Garcia
Adding support for L2 slices to l2_allocate() needs (among other things) an extra loop that iterates over all slices of a new L2 table. Putting all changes in one patch would make it hard to read because all semantic changes would be mixed with pure indentation changes. To make things easier this patch simply creates a new block and changes the indentation of all lines of code inside it. Thus, all modifications in this patch are cosmetic. There are no semantic changes and no variables are renamed yet. The next patch will take care of that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: d0d7dca8520db304524f52f49d8157595a707a35.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Update l2_load() to support L2 slices  Alberto Garcia
Each entry in the qcow2 L2 cache stores a full L2 table (which uses a complete cluster in the qcow2 image). A cluster is usually too large to be used efficiently as the size for a cache entry, so we want to decouple both values by allowing smaller cache entries. Therefore the qcow2 L2 cache will no longer return full L2 tables but slices instead. This patch updates l2_load() so it can handle L2 slices correctly. Apart from the offset of the L2 table (which we already had) we also need the guest offset in order to calculate which one of the slices we need. An L2 slice currently has the same size as an L2 table (one cluster), so for now this function will load exactly the same data as before. This patch also removes a stale comment about the return value being a pointer to the L2 table. This function returns an error code since 55c17e9821c474d5fcdebdc82ed2fc096777d611. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b830aa1fc5b6f8e3cb331d006853fe22facca847.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Add offset_to_l2_slice_index()  Alberto Garcia
Similar to offset_to_l2_index(), this function takes a guest offset and returns the index in the L2 slice that contains its L2 entry. An L2 slice currently has the same size as an L2 table (one cluster), so both functions return the same value for now. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: a1c45c5c5a76146dd1712d8d1e7b409ad539c718.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
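A standalone sketch of the two index helpers being compared (the real functions take the qcow2 state instead of explicit parameters; both entry counts are assumed to be powers of two):

    #include <stdint.h>

    /* Index of a guest offset's entry within a full L2 table. */
    static unsigned offset_to_l2_index(uint64_t offset, unsigned cluster_bits,
                                       unsigned l2_size)
    {
        return (offset >> cluster_bits) & (l2_size - 1);
    }

    /* Index of a guest offset's entry within an L2 slice; while a slice is
     * as large as a table, this returns the same value as the helper above. */
    static unsigned offset_to_l2_slice_index(uint64_t offset,
                                             unsigned cluster_bits,
                                             unsigned l2_slice_size)
    {
        return (offset >> cluster_bits) & (l2_slice_size - 1);
    }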
2018-02-13  qcow2: Add l2_slice_size field to BDRVQcow2State  Alberto Garcia
The BDRVQcow2State structure contains an l2_size field, which stores the number of 64-bit entries in an L2 table. For efficiency reasons we want to be able to load slices instead of full L2 tables, so we need to know how many entries an L2 slice can hold. An L2 slice is the portion of an L2 table that is loaded by the qcow2 cache. At the moment that cache can only load complete tables, therefore an L2 slice has the same size as an L2 table (one cluster) and l2_size == l2_slice_size. Later we'll allow smaller slices, but until then we have to use this new l2_slice_size field to make the rest of the code ready for that. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: adb048595f9fb5dfb110c802a8b3c3be3b937f37.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
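Since each L2 entry is 64 bits, the relation between the new field and the cache entry size is a simple division; a sketch with an illustrative name:

    #include <stdint.h>

    /* Number of 64-bit L2 entries that fit in one cache entry (one L2 slice).
     * While the cache entry size equals the cluster size, this matches
     * l2_size, the number of entries in a full L2 table. */
    static uint64_t l2_slice_entries(uint64_t cache_entry_size)
    {
        return cache_entry_size / sizeof(uint64_t);
    }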
2018-02-13  qcow2: Add offset_to_l1_index()  Alberto Garcia
Similar to offset_to_l2_index(), this function returns the index in the L1 table for a given guest offset. This is only used in a couple of places and it's not a particularly complex calculation, but it makes the code a bit more readable. Although in the qcow2_get_cluster_offset() case the old code was taking advantage of the l1_bits variable, we're going to get rid of the other uses of l1_bits in a later patch anyway, so it doesn't make sense to keep it just for this. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: a5f626fed526b7459a0425fad06d823d18df8522.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
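A standalone sketch of the calculation (parameters are passed explicitly here instead of coming from the qcow2 state):

    #include <stdint.h>

    /* Index into the L1 table for a guest offset: each L1 entry covers
     * 2^(l2_bits + cluster_bits) bytes, i.e. one full L2 table's worth
     * of clusters. */
    static uint64_t offset_to_l1_index(uint64_t offset, unsigned cluster_bits,
                                       unsigned l2_bits)
    {
        return offset >> (l2_bits + cluster_bits);
    }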
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_is_table_offset()  Alberto Garcia
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_addr(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: eb0ed90affcf302e5a954bafb5931b5215483d3a.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_discard()  Alberto Garcia
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_idx() and qcow2_cache_table_release(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 9724f7e38e763ad3be32627c6b7fe8df9edb1476.1517840877.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_clean_unused()  Alberto Garcia
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_table_release(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b74f17591af52f201de0ea3a3b2dd0a81932334d.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_destroy()  Alberto Garcia
This function was never using the BlockDriverState parameter so it can be safely removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 49c74fe8b3aead9056e61a85b145ce787d06262b.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_put()  Alberto Garcia
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_idx(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 6f98155489054a457563da77cdad1a66ebb3e896.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_entry_mark_dirty()  Alberto Garcia
This function was only using the BlockDriverState parameter to pass it to qcow2_cache_get_table_idx(). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 5c40516a91782b083c1428b7b6a41bb9e2679bfb.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_table_release()  Alberto Garcia
This function was only using the BlockDriverState parameter to get the cache table size (since it was equal to the cluster size). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 7c1b262344375d52544525f85bbbf0548d5ba575.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_get_table_idx()  Alberto Garcia
This function was only using the BlockDriverState parameter to get the cache table size (since it was equal to the cluster size). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: da3575d47c9a181a2cfd4715e53dd84a2c651017.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Remove BDS parameter from qcow2_cache_get_table_addr()  Alberto Garcia
This function was only using the BlockDriverState parameter to get the cache table size (since it was equal to the cluster size). This is no longer necessary so this parameter can be removed. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: e1f943a9e89e1deb876f45de1bb22419ccdb6ad3.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Add table size field to Qcow2Cache  Alberto Garcia
The table size in the qcow2 cache is currently equal to the cluster size. This doesn't allow us to use the cache memory efficiently, particularly with large cluster sizes, so we need to be able to have smaller cache tables that are independent from the cluster size. This patch adds a new field to Qcow2Cache that we can use instead of the cluster size. The current table size is still being initialized to the cluster size, so there are no semantic changes yet, but this patch will allow us to prepare the rest of the code and simplify a few function calls. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 67a1bf9e55f417005c567bead95a018dc34bc687.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  qcow2: Fix documentation of get_cluster_table()  Alberto Garcia
This function has not been returning the offset of the L2 table since commit 3948d1d4876065160583e79533bf604481063833 Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: b498733b6706a859a03678d74ecbd26aeba129aa.1517840876.git.berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  block: maintain persistent disabled bitmaps  Vladimir Sementsov-Ogievskiy
To maintain loading and storing of disabled bitmaps, the new approach is: - deprecate the @autoload flag of block-dirty-bitmap-add and make it ignored - store enabled bitmaps with the "auto" flag in qcow2 - store disabled bitmaps without the "auto" flag in qcow2 - on qcow2 open, load "auto" bitmaps as enabled and the others as disabled (except in_use bitmaps) Also, adjust iotests 165 and 176 appropriately. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20180202160752.143796-1-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-02-13  sheepdog: Allow fully preallocated truncation  Max Reitz
Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  sheepdog: Pass old and new size to sd_prealloc()  Max Reitz
sd_prealloc() will now preallocate the area [old_size, new_size). As before, it rounds to buf_size and may thus overshoot and preallocate areas that were not requested to be preallocated. For image creation, this is no change in behavior. For truncation, this is in accordance with the documentation for preallocated truncation. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
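The overshoot mentioned above comes from rounding the end of the requested range up to the buffer size; a hedged sketch of that bound (names are illustrative, not sheepdog's):

    #include <stdint.h>

    /* Upper bound of the region touched when preallocating [old_size,
     * new_size) in buf_size-sized chunks: the last chunk is rounded up, so
     * bytes past new_size may be preallocated as well. */
    static uint64_t prealloc_end(uint64_t new_size, uint64_t buf_size)
    {
        return ((new_size + buf_size - 1) / buf_size) * buf_size;
    }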
2018-02-13  sheepdog: Make sd_prealloc() take a BDS  Max Reitz
We want to use this function in sd_truncate() later on, so taking a filename is not exactly ideal. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  gluster: Add preallocated truncation  Max Reitz
By using qemu_gluster_do_truncate() in qemu_gluster_truncate(), we now automatically have preallocated truncation. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  gluster: Query current size in do_truncate()  Max Reitz
Instead of expecting the current size to be 0, query it and allocate only the area [current_size, offset) if preallocation is requested. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  gluster: Pull truncation from qemu_gluster_create  Max Reitz
Pull out the truncation code from the qemu_gluster_create() function so we can later reuse it in qemu_gluster_truncate(). Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  gluster: Move glfs_close() to create's clean-up  Max Reitz
glfs_close() is a classical clean-up operation, as can be seen by the fact that it is executed even if the truncation before it failed. Also, moving it to clean-up makes it more clear that if it fails, we do not want it to overwrite the current ret value if that signifies an error already. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-02-13  qcow2: Use g_try_realloc() in qcow2_expand_zero_clusters()  Alberto Garcia
g_realloc() aborts the program if it fails to allocate the required amount of memory. We want to detect that scenario and return an error instead, so let's use g_try_realloc(). Signed-off-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
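A minimal sketch of the pattern, not the actual qcow2 code: g_try_realloc() returns NULL on failure instead of aborting, so the caller can propagate -ENOMEM while keeping the old buffer valid.

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <glib.h>

    /* Grow a table of 64-bit entries, reporting failure instead of aborting. */
    static int grow_table(uint64_t **table, size_t n_entries)
    {
        uint64_t *new_table = g_try_realloc(*table,
                                            n_entries * sizeof(uint64_t));
        if (new_table == NULL && n_entries != 0) {
            return -ENOMEM;   /* *table is left untouched and still valid */
        }
        *table = new_table;
        return 0;
    }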
2018-02-09  block: Simplify bdrv_can_write_zeroes_with_unmap()  Eric Blake
We don't need the can_write_zeroes_with_unmap field in BlockDriverInfo, because it is redundant information with supported_zero_flags & BDRV_REQ_MAY_UNMAP. Note that BlockDriverInfo and supported_zero_flags are both per-device settings, rather than global state about the driver as a whole, which means one or both of these bits of information can already be conditional. Let's audit how they were set: crypto: always setting can_write_ to false is pointless (the struct starts life zero-initialized), no use of supported_ nbd: just recently fixed to set can_write_ if supported_ includes MAY_UNMAP (thus this commit effectively reverts bca80059e and solves the problem mentioned there in a more global way) file-posix, iscsi, qcow2: can_write_ is conditional, while supported_ was unconditional; but passing MAY_UNMAP would fail with ENOTSUP if the condition wasn't met qed: can_write_ is unconditional, but pwrite_zeroes lacks support for MAY_UNMAP and supported_ is not set. Perhaps support can be added later (since it would be similar to qcow2), but for now claiming false is no real loss all other drivers: can_write_ is not set, and supported_ is either unset or a passthrough Simplify the code by moving the conditional into supported_zero_flags for all drivers, then dropping the now-unused BDI field. For callers that relied on bdrv_can_write_zeroes_with_unmap(), we return the same per-device settings for drivers that had conditions (no observable change in behavior there); and can now return true (instead of false) for drivers that support passthrough (for example, the commit driver) which gives those drivers the same fix as nbd just got in bca80059e. For callers that relied on supported_zero_flags, we now have a few more places that can avoid a wasted call to pwrite_zeroes() that will just fail with ENOTSUP. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20180126193439.20219-1-eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
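After the simplification, the question "can this device write zeroes with unmap?" reduces to a single flag test; here is a sketch with a stand-in struct and flag constant (the real code uses BlockDriverState and BDRV_REQ_MAY_UNMAP):

    #include <stdbool.h>
    #include <stdint.h>

    #define REQ_MAY_UNMAP (1u << 0)   /* stand-in for BDRV_REQ_MAY_UNMAP */

    /* Stand-in for the per-device state: only the zero-write capability bits. */
    struct zero_limits {
        uint32_t supported_zero_flags;
    };

    /* True when a zero-write with the MAY_UNMAP flag would be honoured
     * rather than failing with ENOTSUP. */
    static bool can_write_zeroes_with_unmap(const struct zero_limits *zl)
    {
        return (zl->supported_zero_flags & REQ_MAY_UNMAP) != 0;
    }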
2018-02-09  Move include qemu/option.h from qemu-common.h to actual users  Markus Armbruster
qemu-common.h includes qemu/option.h, but most places that include the former don't actually need the latter. Drop the include, and add it to the places that actually need it. While there, drop superfluous includes of both headers, and separate #include from file comment with a blank line. This cleanup makes the number of objects depending on qemu/option.h drop from 4545 (out of 4743) to 284 in my "build everything" tree. Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20180201111846.21846-20-armbru@redhat.com> [Semantic conflict with commit bdd6a90a9e in block/nvme.c resolved]