data_file does not work with v2, and we probably want 026 to keep
working for v2 images. Thus, open a new file for v3-exclusive error
path test cases.
Fixes: 81311255f217859413c94f2cd9cebf2684bbda94
("iotests/026: Test EIO on allocation in a data-file")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200311140707.1243218-1-mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Test what happens when writing data to an external data file, where the
write requires an L2 entry to be allocated, but the data write fails.
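Roughly, such a case can be provoked by pointing the external data file at
a blkdebug node that fails the data write. A sketch (the blkdebug event
name, error number and exact option spelling are illustrative, not
necessarily what the new test file uses):

$ qemu-img create -f qcow2 -o data_file=data.raw test.qcow2 64M
$ qemu-io -c 'write 0 64k' "json:{'driver': 'qcow2',
      'file': {'driver': 'file', 'filename': 'test.qcow2'},
      'data-file': {'driver': 'blkdebug',
                    'inject-error': [{'event': 'write_aio', 'errno': 5}],
                    'image': {'driver': 'file', 'filename': 'data.raw'}}}"

The write should then fail with EIO after the L2 allocation has already
been started, which is the error path the test wants to exercise.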
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200225143130.111267-4-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Test what happens when writing data to a preallocated zero cluster, but
the data write fails.
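A rough sketch of how this situation can be set up (the commands and the
blkdebug event are illustrative): first create a preallocated zero cluster
by zeroing an allocated cluster in place, then retry a data write to it
with an error injected:

$ qemu-img create -f qcow2 test.qcow2 64M
$ qemu-io -c 'write 0 64k' -c 'write -z 0 64k' test.qcow2
$ qemu-io -c 'write 0 64k' "json:{'driver': 'qcow2',
      'file': {'driver': 'blkdebug',
               'inject-error': [{'event': 'write_aio', 'errno': 5}],
               'image': {'driver': 'file', 'filename': 'test.qcow2'}}}"

The zero write keeps the cluster's host allocation and only sets the zero
flag, so the final write hits a preallocated zero cluster and then fails.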
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200225143130.111267-3-mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Upcoming asynchronous handling of sub-parts of qcow2 requests will
change the number of leaked clusters and even make it racy. As a
preparation, ignore leaks in the failing parts of 026.
It's not trivial to just grep or substitute the qemu-img output for
this. Instead, do better: 3 is the error code of qemu-img check when only
leaks are found. Catch this case and print the success output instead.
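In shell terms the handling amounts to roughly the following (a sketch,
not the literal 026 code):

out=$(qemu-img check "$TEST_IMG" 2>&1)
ret=$?
if [ $ret -eq 3 ]; then
    # only leaked clusters, no corruption: report success instead
    echo "No errors were found on the image."
else
    echo "$out"
fi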
Suggested-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190916175324.18478-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
One of the recent commits changed the way qemu-io prints out its
errors and warnings - they are now prefixed with the program name.
We have to adapt the iotests accordingly to keep them from failing.
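For example, a reference-output line that used to read

  Failed to flush the L2 table cache: Input/output error

now has to be expected as

  qemu-io: Failed to flush the L2 table cache: Input/output error

(the particular message is only an illustration of the new prefix).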
Fixes: 99e98d7c9fc1a1639fad ("qemu-io: Use error_[gs]et_progname()")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
This adds a test for a temporary write failure, which simulates the
situation after werror=stop/enospc has stopped the VM. We shouldn't
leave leaked clusters behind in such cases.
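A sketch of the flow (the blkdebug event, error number and option
spelling are illustrative): inject a one-shot ENOSPC on the allocating
write, retry the same write as the guest would after 'cont', and verify
that qemu-img check reports no leaked clusters:

$ qemu-io -c 'write 0 64k' -c 'write 0 64k' "json:{'driver': 'qcow2',
      'file': {'driver': 'blkdebug',
               'inject-error': [{'event': 'cluster_alloc', 'errno': 28,
                                 'once': true}],
               'image': {'driver': 'file', 'filename': 'test.qcow2'}}}"
$ qemu-img check test.qcow2

The first write fails with ENOSPC, the second one succeeds because the
rule only fires once, and the check must not find any leaked clusters.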
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
When we try to allocate new clusters we first look for available ones
starting from s->free_cluster_index and once we find them we increase
their reference counts. Before we get to call update_refcount() to do
this last step, s->free_cluster_index is already pointing to the next
cluster after the ones we are trying to allocate.
During update_refcount() it may happen, however, that we also need to
allocate a new refcount block in order to store the refcounts of these
new clusters (and, to complicate things further, that may also require
us to grow the refcount table). After all this we don't know if the
clusters that we originally tried to allocate are still available, so
we return -EAGAIN to ask the caller to restart the search for free
clusters.
This is what can happen in a common scenario:
1) We want to allocate a new cluster and we see that cluster N is
free.
2) We try to increase N's refcount but all refcount blocks are full,
so we allocate a new one at N+1 (where s->free_cluster_index was
pointing at).
3) Once we're done we return -EAGAIN to look again for a free
cluster, but now s->free_cluster_index points at N+2, so that's
the one we allocate. Cluster N remains unallocated and we have a
hole in the qcow2 file.
This can be reproduced easily:
qemu-img create -f qcow2 -o cluster_size=512 hd.qcow2 1M
qemu-io -c 'write 0 124k' hd.qcow2
After this the image has 131072 bytes (256 clusters), and the refcount
block is full. If we write 512 more bytes it should allocate two new
clusters: the data cluster itself and a new refcount block.
qemu-io -c 'write 124k 512' hd.qcow2
However, the image now has three new clusters (259 in total), and the
first one of them is empty (and unallocated):
dd if=hd.qcow2 bs=512c skip=256 count=1 | hexdump -C
If we write larger amounts of data in the last step instead of the 512
bytes used in this example, we can create larger holes in the qcow2
file.
What this patch does is reset s->free_cluster_index to its previous
value when alloc_refcount_block() returns -EAGAIN. This way the caller
will try again to allocate the original clusters if they are still
free.
The output of iotest 026 also needs to be updated because now that
images have no holes some tests fail at a different point and the
number of leaked clusters is different.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
s/refcout/refcount/
CC: qemu-trivial@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Commit 3ff2f67a changed bdrv_co_flush() so that no flush is issued if
the image hasn't been dirtied since the last flush. This is not quite
correct: The condition should be that the image hasn't been dirtied
since the last _successful_ flush. This patch changes the logic
accordingly.
Without this fix, subsequent bdrv_co_flush() calls would return success
without actually doing anything even though the image is still dirty.
The difference is visible in some blkdebug test cases where error
messages incorrectly disappeared after commit 3ff2f67a.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1478300595-10090-1-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Some guests (win2008 server for example) do a lot of unnecessary
flushing when the underlying media has not changed. This adds additional
overhead on the host when calling fsync/fdatasync.
This change introduces a write generation scheme in BlockDriverState.
The current write generation is checked against the last flushed
generation to avoid unnecessary flushes.
The problem with excessive flushing was found by a performance test
which does parallel directory tree creation (from 2 processes).
Results improved from 0.424 loops/sec to 0.432 loops/sec.
Each loop creates 10^3 directories with 10 files in each.
This affected some blkdebug test cases that were expecting error logs
from failure-injected flushes, which are now skipped entirely
(tests 026 071 089).
This also affects the performance of block jobs: BLOCK_JOB_READY events
for drive-mirror and active block-commit commands now arrive faster,
before the QMP command returns successfully to the caller (tests 141 144).
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468870792-7411-5-git-send-email-den@openvz.org
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
Our qapi conventions document that '.' should only be used in
the prefix of downstream names. BlkdebugEvent was a lone
exception to this. Changing it is not backwards compatible
with the 'blockdev-add' QMP command; however, that command is
not yet fully stable. It can also be argued that the testsuite
is the biggest user of blkdebug, and that any other user can
be taught to deal with the change by paying attention to
introspection results.
Done with:
$ for str in \
l1_grow.{alloc,write,activate}_table \
l2_alloc.{cow_read,write} \
refblock_alloc.{hookup,write,write_blocks,write_table,switch_table} \
pwritev_rmw.{head,after_head,tail,after_tail}; do
str1=$(echo "$str" | sed 's/\./\\./')
str2=$(echo "$str" | sed 's/\./_/')
git grep -l "$str1" | xargs -r sed -i "s/$str1/$str2/g"
done
followed by a manual touchup to test 77 to keep the test working.
Reported-by: Markus Armbruster <armbru@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1447836791-369-21-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
Background:
The blkdebug scripts are currently engineered so that when a debug
event occurs, a prefilter browses a master list of parsed rules for a
certain event and adds them to an "active list" of rules to be used for
the forthcoming action, provided the events and state numbers match.
Then, once the request is received, the last active rule is used to
inject an error if certain parameters match.
This active list is cleared every time the prefilter injects a new
rule for the first time during a debug event.
The "once" rule currently causes the error injection, if it is
triggered, to only clear the active list. This is insufficient for
preventing future injections of the same rule.
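For reference, a "once" rule in a blkdebug configuration looks roughly
like this (the event and values are illustrative):

[inject-error]
event = "write_aio"
errno = "28"
once = "on"

Before this patch, triggering such a rule only cleared the active list,
so a later occurrence of the event could activate the same parsed rule
again.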
Remedy:
This patch /deletes/ the rule from the list that the prefilter
browses, so it is gone for good. In V2, we remove only the rule of
interest from the active list instead of allowing the "once" rule to
clear the entire list of active rules.
Impact:
This affects iotests 026. Several ENOSPC tests that used "once" can
be seen to have output that shows multiple failure messages. After
this patch, the error messages tend to be smaller and less severe, but
the injection can still be seen to be working. I have patched the
expected output to expect the smaller error messages.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1423257977-25630-1-git-send-email-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
This is simply:
$ cd tests/qemu-iotests; sed -i -e 's/ *$//' *.out
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1418110684-19528-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
qcow2_cache_flush() may fail; if one of the caches failed to be flushed
successfully to disk in qcow2_close() the image should not be marked
clean, and we should emit a warning.
This breaks the (qcow2-specific) iotests 026, 071 and 089; change their
output accordingly.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
(CVE-2014-0147)
free_cluster_index is only correct if update_refcount() was called from
an allocation function, and even there it's brittle because it's used to
protect unfinished allocations which still have a refcount of 0 - if it
moves in the wrong place, the unfinished allocation can be corrupted.
So not using it any more seems to be a good idea. Instead, use the
first requested cluster to do the calculations. Return -EAGAIN if
unfinished allocations could become invalid and let the caller restart
its search for some free clusters.
The context of creating a snapshot is one situation where
update_refcount() is called outside of a cluster allocation. For this
case, the change fixes a buffer overflow if a cluster is referenced in
an L2 table that cannot be represented by an existing refcount block.
(new_table[refcount_table_index] was out of bounds)
[Bump the qemu-iotests 026 refblock_alloc.write leak count from 10 to
11.
--Stefan]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Because l2_allocate now frees the unused L2 cluster on error, the
corresponding test cases in 026 no longer result in one leaked cluster.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The reference output for test case 026 hasn't been updated in a long
time and it's one of the "known failing" cases. This patch updates the
reference output so that unintentional changes can be reliably detected
again.
The problem with this test case is that it produces different output
depending on whether -nocache is used or not. The solution of this patch
is to actually have two different reference outputs. If nnn.out.nocache
exists, it is used as the reference output for -nocache; otherwise,
nnn.out stays valid for both cases.
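The selection logic boils down to something like this (a sketch with
hypothetical variable names, not the literal 'check' script code):

reference="$seq.out"
if [ -n "$NOCACHE" ] && [ -f "$seq.out.nocache" ]; then
    reference="$seq.out.nocache"
fi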
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Some image formats do have a cluster size, others don't, but there are
tests that work with both sets of images and currently we get failures
because the qemu-img create output doesn't mention the cluster size for
some formats.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The error message for leaked clusters has changed. qemu-iotests needs to be
updated to pass 026 again.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
This adds test cases for qcow2 error paths (using blkdebug).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|