Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-18-jsnow@redhat.com
[Removed 'auto' group, as per new testing config guidelines --js]
Signed-off-by: John Snow <jsnow@redhat.com>
Seems that it comes up enough.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-17-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Use "FilePaths" instead of "FilePath" to request multiple files be
cleaned up after we leave that object's scope.
This is not crucial, but it saves a little typing.
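For illustration, a minimal self-contained sketch of such a multi-file
cleanup helper (names and signature here are assumptions, not the actual
iotests.py code):

import contextlib
import os
import tempfile

@contextlib.contextmanager
def file_paths(names, base_dir=tempfile.gettempdir()):
    # Hand out one path per requested name; every path is removed again
    # when the with-block is left, even if the test raises an exception.
    paths = [os.path.join(base_dir, name) for name in names]
    try:
        yield paths
    finally:
        for path in paths:
            with contextlib.suppress(FileNotFoundError):
                os.remove(path)

# One context manager covers both images instead of nesting two of them:
with file_paths(['base.img', 'top.img']) as (base_img, top_img):
    open(base_img, 'w').close()
    open(top_img, 'w').close()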
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-16-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
run_job can cancel pending jobs to simulate failure. This lets us use
the pending callback to issue test commands while the job is open, but
then still have the job fail in the end.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-15-jsnow@redhat.com
[Maintainer edit: Merge conflict resolution in run_job]
Signed-off-by: John Snow <jsnow@redhat.com>
Because the new-style python tests don't use the iotests.main() test
launcher, we don't turn on debug logging for these scripts
when invoked via ./check -d.
Refactor the launcher shim into new and old style shims so that they
share environmental configuration.
Two cleanup notes: debug was not actually used as a global, and there
was no reason to create a class in an inner scope just to achieve
default variables; we can simply create an instance of the runner with
the values we want instead.
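Roughly the idea, using the standard unittest API (stream handling and
values here are illustrative, not the actual iotests defaults):

import sys
import unittest

def run_tests(debug=False):
    # No need for a TestRunner subclass defined in an inner scope just to
    # carry default arguments: configure a TextTestRunner instance directly.
    runner = unittest.TextTestRunner(stream=sys.stderr,
                                     verbosity=2 if debug else 1)
    unittest.main(testRunner=runner, argv=[sys.argv[0]], exit=False)

if __name__ == '__main__':
    run_tests(debug='-d' in sys.argv)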
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-14-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
We already have 030 for that in general, but this tests very specific
cases of both jobs finishing concurrently.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test case is motivated by commit 2b23f28639 ('block/copy-on-read:
Fix permissions for inactive node'). Instead of just testing
copy-on-read on migration, let's stack all sorts of filter nodes on top
of each other and check whether the resulting VM can still migrate
successfully. For good measure, put everything into an iothread, because
why not?
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
234 implements functions that are useful for doing migration between two
VMs. Move them to iotests.py so that other test cases can use them, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
The code path for -device with drive=<node-name> (or with no drive=...
option at all for empty drives), which is how -blockdev is supposed to
be used, differs enough from the -drive based path with a user-owned
BlockBackend that we want to test both paths, at least for the basic
tests implemented by TestInitiallyFilled and TestInitiallyEmpty.
This would have caught the bug recently fixed for inserting read-only
nodes into a scsi-cd created without a drive=... option.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
We're getting a ridiculous number of child classes of
TestInitiallyFilled and TestInitiallyEmpty that differ only in a few
attributes that we want to test in all combinations.
Instead of explicitly writing down every combination, let's use a loop
and create those classes dynamically.
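The pattern looks roughly like this (an illustrative sketch, not the
actual attributes the test combines):

import unittest

class TestInitiallyFilledBase(unittest.TestCase):
    interface = None     # filled in by the generated subclasses below
    use_drive = None

    def test_eject(self):
        if self.interface is None:
            self.skipTest('base class, not a concrete combination')
        # ... exercise the device built from self.interface/self.use_drive ...

# Generate one subclass per combination instead of spelling them all out.
for interface in ('ide-cd', 'scsi-cd', 'floppy'):
    for use_drive in (True, False):
        name = 'TestInitiallyFilled_%s_%s' % (
            interface.replace('-', '_'), 'drive' if use_drive else 'blockdev')
        globals()[name] = type(name, (TestInitiallyFilledBase,),
                               {'interface': interface, 'use_drive': use_drive})

if __name__ == '__main__':
    unittest.main()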
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
The test covered only floppy and ide-cd. Add scsi-cd as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190805113526.20319-1-mreitz@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190805152840.32190-1-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Perform two guest writes to not yet backed up areas of an image, where
the former touches an inner area of the latter.
Before HEAD^, copy offloading broke this in two ways:
(1) The target image differs from the reference image (what the source
was when the backup started).
(2) But you will not see that in the failing output, because the job
offset is reported as being greater than the job length. This is
because one cluster is copied twice, and thus accounted for twice,
but of course the job length does not increase.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190801173900.23851-3-mreitz@redhat.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
The patch "iotests: Set read-zeroes on in null block driver for Valgrind"
with the commit ID a6862418fec4072 needs the change in 051.out when
compared against on the s390 system.
Fixes: a6862418fec40727b392c86dc13d9ec980efcb15
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'seq' command is not available by default on OpenBSD, so these
iotests are currently failing there. It could be installed as 'gseq'
from the coreutils package - but since it is using a different name
there and we are running the iotests with the "bash" shell anyway,
let's simply use the built-in double parentheses for the for-loops
instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190723111201.1926-1-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Remove some more tests from the "auto" group that either have issues
in certain environments (like macOS or FreeBSD, or on certain file systems
like ZFS or tmpfs), do not work with the qcow2 format, or that are simply
taking too much time.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190717111947.30356-3-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The regular expressions in the "check" script currently expect that there
is always a space after the test number in the group file, so you can't
have a test in there without a group unless the line still ends with a
space - which is quite error prone since some editors might remove spaces
at the end of lines automatically.
Thus let's fix the regular expressions so that it is also possible to
have lines with one test number only in the group file.
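The difference can be illustrated with Python regular expressions (the
actual "check" script uses shell tools, so these patterns are only an
approximation):

import re

lines = ['030 rw auto backing', '299']   # with groups, and a bare test number

old = re.compile(r'^(\d{3}) ')           # insists on a space after the number
new = re.compile(r'^(\d{3})(\s.*)?$')    # also accepts a number on its own

print([bool(old.match(line)) for line in lines])   # [True, False]
print([bool(new.match(line)) for line in lines])   # [True, True]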
Suggested-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190717111947.30356-2-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
When qemu quits, all throttling should be ignored. That means, if there
is a mirror job running from a throttled node, it should be cancelled
immediately and qemu should close without blocking.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Before the previous patches, the first case resulted in a failed
assertion (which is noted as qemu receiving a SIGABRT in the test
output), and the second usually triggered a segmentation fault.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If a test has issued a quit command already (which may be useful to do
explicitly because the test wants to show its effects),
QEMUMachine.shutdown() should not do so again. Otherwise, the VM may
well return an ECONNRESET which will lead QEMUMachine.shutdown() to
killing it, which then turns into a "qemu received signal 9" line.
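A simplified sketch of that guard (attribute and helper names are made up;
the real logic lives in QEMUMachine):

class FakeMachine:
    def __init__(self):
        self._quit_issued = False

    def qmp_quit(self):
        # The test quits the VM itself, e.g. to show the effect in its output.
        self._send_qmp({'execute': 'quit'})
        self._quit_issued = True

    def shutdown(self):
        # Only send 'quit' if the test has not already done so; a second
        # attempt on a dying VM is what produced the ECONNRESET before.
        if not self._quit_issued:
            self._send_qmp({'execute': 'quit'})
            self._quit_issued = True
        self._wait_for_exit()

    def _send_qmp(self, cmd):
        print('QMP ->', cmd)

    def _wait_for_exit(self):
        print('waiting for the QEMU process to terminate')

vm = FakeMachine()
vm.qmp_quit()     # explicit quit from the test
vm.shutdown()     # no second 'quit', no ECONNRESET, no SIGKILL fallback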
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Valgrind reports the uninitialised buffer 'buf' instantiated on the
stack of the function guess_disk_lchs().
Pass 'read-zeroes=on' to the null block driver to make it deterministic.
The output of the tests 051, 186 and 227 now includes the parameter
'read-zeroes', so the reference output files are updated as well.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This tests that the stream job exits cleanly (without abort) when the
top node is read-only and cannot be reopened read/write.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190703172813.6868-12-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
We recently removed the dependency of the stream job on its base node.
That makes it OK to use a commit filter node there. Test that.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-11-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
unittest-style tests generally do not use the log file, but VM.run_job()
can still be useful to them. Add a parameter to it that hides its
output from the log file.
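Conceptually (the parameter name and the event handling below are
assumptions, not necessarily what iotests.py does):

def run_job(events, job_id, use_log=True):
    # Drain this job's status events; only write them to the log file when
    # the caller is an output-comparing test rather than a unittest one.
    for event in events:
        if event['data']['id'] != job_id:
            continue
        if use_log:
            print(event)                 # would end up in the reference output
        if event['data']['status'] == 'concluded':
            return event

sample = [{'data': {'id': 'job0', 'status': 'running'}},
          {'data': {'id': 'job0', 'status': 'concluded'}}]
run_job(sample, 'job0', use_log=False)   # unittest-style: nothing is logged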
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190703172813.6868-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, 030 just compares the error class, which does not say
anything.
Before HEAD^ added throttling to test_overlapping_4, that test actually
usually failed because node2 was already gone, not because the commit
and stream jobs were not allowed to overlap.
Prevent such problems in the future by comparing the error description
instead.
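In QMP terms the difference looks like this (the error text is invented
for the example):

reply = {'error': {'class': 'GenericError',
                   'desc': "Node 'node2' is busy: block device is in use by block job: stream"}}

# Checking the class alone accepts any GenericError, including "node2 does
# not exist"; checking the description pins down the actual failure reason.
assert reply['error']['class'] == 'GenericError'
assert 'is in use by block job' in reply['error']['desc']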
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-9-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, TestParallelOps in 030 creates images that are too small for
job throttling to be effective. This is reflected by the fact that it
never undoes the throttling.
Increase the image size and undo the throttling when the job should be
completed. Also, add throttling in test_overlapping_4, or the jobs may
not be so overlapping after all. In fact, the error usually emitted
here is that node2 simply does not exist, not that overlapping jobs are
not allowed -- the fact that this test ignores the exact error messages
and just checks the error class is something that should be fixed in a
follow-up patch.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190703172813.6868-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
A recent tweak to the '-o help' output for qemu-img needs to be
reflected into the iotests expected outputs.
Fixes: f7077c98
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
'remotes/ehabkost/tags/python-next-pull-request' into staging
Python queue, 2019-07-01
* Deprecate Python 2 support (Eduardo Habkost)
* qemu/__init__.py refactor (John Snow)
* make qmp-shell work with python3 (Igor Mammedov)
# gpg: Signature made Mon 01 Jul 2019 23:28:27 BST
# gpg: using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg: issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/python-next-pull-request:
Deprecate Python 2 support
machine.py: minor delinting
python/qemu: split QEMUMachine out from underneath __init__.py
qmp: make qmp-shell work with python3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The bottom node is the intermediate block device that has the base as
its backing image. It is used instead of the base node while a block
stream job is running, to avoid a dependency on the base, which may
change due to parallel jobs. Such a change can also be caused by a
filter node inserted between the base and the intermediate bottom node,
which happens when the base node is the top of another commit or stream
job.
After introducing the bottom node, no longer freeze its backing child,
that is, the base.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1559152576-281803-4-git-send-email-andrey.shinkevich@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
It's not obvious that something named __init__.py actually houses
important code that isn't relevant to python packaging glue. Move the
QEMUMachine and related error classes out into their own module.
Adjust users to the new import location.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20190627212816.27298-2-jsnow@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Tests should place their files into the test directory. This includes
Unix sockets. 205 currently fails to do so, which prevents it from
being run concurrently.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190618210238.9524-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Rewrite the implementation of the ssh block driver to use libssh instead
of libssh2. The libssh library has various advantages over libssh2:
- easier API for authentication (for example for using ssh-agent)
- easier API for known_hosts handling
- supports newer types of keys in known_hosts
Use APIs/features available in libssh 0.8 conditionally, to support
older versions (which are not recommended though).
Adjust the iotest 207 according to the different error message, and to
find the default key type for localhost (to properly compare the
fingerprint with).
Contributed-by: Max Reitz <mreitz@redhat.com>
Adjust the various Docker/Travis scripts to use libssh when available
instead of libssh2. The mingw/mxe testing is dropped for now, as there
are no packages for it.
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190620200840.17655-1-ptoscano@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 5873173.t2JhDm7DL7@lindworm.usersys.redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
512M of L1 entries is a very loose bound; only 32M are required to store
the maximal supported VMDK file size of 2TB.
Fixed qemu-iotest #59 - the failure now occurs earlier, on the impossible
L1 table size.
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com>
Message-id: 20190620091057.47441-3-shmuel.eiderman@oracle.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
COW (even empty/zero) areas require encryption too
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190516143028.81155-1-anton.nefedov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, the "thistime" variable is not reinitialized on every loop
iteration. This leads to tests that do not yield a run time (because
they failed or were skipped) printing the run time of the previous test
that did. Fix that by reinitializing "thistime" for every test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We do not support this combination (yet), so this should yield an error
message.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190507203508.18026-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This test converts a simple image to another, but blkdebug injects
block_status and read faults at some offsets. The resulting image
should be the same as the input image, except that sectors that could
not be read have to be 0.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190507203508.18026-7-mreitz@redhat.com
Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[mreitz: Dropped superfluous printf from _filter_offsets, as suggested
by Vladimir; disable test for VDI and IMGOPTSSYNTAX]
Signed-off-by: Max Reitz <mreitz@redhat.com>
There are error messages which refer to an overlay node as the snapshot.
That is wrong; those are two different things.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20190603202236.1342-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The test fails at least for qcow, because of different cluster sizes in
base and top (and therefore different granularities of the bitmaps we
are trying to merge).
The aim of the test is to check block-dirty-bitmap-merge functionality
between different nodes; there is no need to check all formats. So,
let's just drop support for anything except qcow2.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190605155405.104384-1-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
In 219, we wait for the job to make progress before we emit its status.
This makes the output reliable. We do not wait for any more progress if
the job's current-progress already matches its total-progress.
Unfortunately, there is a bug: Right after the job has been started,
it's possible that total-progress is still 0. In that case, we may skip
the first progress-making step and keep ending up 64 kB short.
To fix that bug, we can simply wait for total-progress to reach 4 MB
(the image size) after starting the job.
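A sketch of that extra wait (the polling helper is illustrative; the
fields are the ones query-jobs reports):

IMAGE_SIZE = 4 * 1024 * 1024   # 4 MB test image

def wait_for_total_progress(query_jobs, job_id, expected=IMAGE_SIZE):
    # Keep polling until total-progress has settled at the image size, so a
    # later comparison of current-progress against it cannot fall short.
    while True:
        job = next(j for j in query_jobs() if j['id'] == job_id)
        if job['total-progress'] >= expected:
            return job

# In the test, query_jobs would issue {'execute': 'query-jobs'} on the VM;
# a canned reply stands in for it here.
fake_query = lambda: [{'id': 'job0', 'total-progress': IMAGE_SIZE,
                       'current-progress': 65536}]
print(wait_for_total_progress(fake_query, 'job0'))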
Reported-by: Karen Mezick <kmezick@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1686651
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190516161114.27596-1-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
[mreitz: Adjusted commit message as per John's proposal]
Signed-off-by: Max Reitz <mreitz@redhat.com>
It is possible for an empty file to take up blocks on a filesystem, for
example:
$ qemu-img create -f raw test.img 1G
Formatting 'test.img', fmt=raw size=1073741824
$ mkfs.ext4 -I 128 -q test.img
$ mkdir test-mount
$ sudo mount -o loop test.img test-mount
$ sudo touch test-mount/test-file
$ stat -c 'blocks=%b' test-mount/test-file
blocks=8
These extra blocks (one cluster) are apparently used for metadata,
because they are always there, on top of blocks used for data:
$ sudo dd if=/dev/zero of=test-mount/test-file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00135339 s, 775 MB/s
$ stat -c 'blocks=%b' test-mount/test-file
blocks=2056
Make iotest 175 take this into account.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190516144319.12570-1-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-6-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: Moved from 250 to 256]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Don't pull events out of the queue that don't belong to us;
be choosier so that we can use this method to drive jobs that
were launched by transactions that may have more jobs.
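In essence (a stand-alone illustration, not the actual iotests event
plumbing):

def next_event_for_job(queue, job_id):
    # Take only the first event that belongs to job_id and leave everything
    # else queued, so jobs started by the same transaction are undisturbed.
    for i, event in enumerate(queue):
        if event.get('data', {}).get('id') == job_id:
            return queue.pop(i)
    return None

queue = [{'event': 'JOB_STATUS_CHANGE', 'data': {'id': 'other-job', 'status': 'ready'}},
         {'event': 'JOB_STATUS_CHANGE', 'data': {'id': 'job0', 'status': 'pending'}}]
print(next_event_for_job(queue, 'job0'))   # job0's event
print(len(queue))                          # 1: the other job's event remains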
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-5-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Cap waits to 60 seconds so that iotests can fail gracefully if something
goes wrong.
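For example, such a cap could look like this (a generic sketch; the real
change applies the timeout to the QMP event waits):

import time

TIMEOUT = 60.0   # seconds: long enough for slow CI machines, but finite

def wait_until(condition, timeout=TIMEOUT, interval=0.1):
    # Poll instead of blocking forever, so a wedged job fails the test
    # with a clear error rather than hanging the whole run.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError('condition not met within %s seconds' % timeout)

wait_until(lambda: True)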
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190523170643.20794-3-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
common.nbd's nbd_server_set_tcp_port() tries to find a free port, and
then uses it for the whole test run. However, this is racy because even
if the port was free at the beginning, there is no guarantee it will
continue to be available. Therefore, 233 currently cannot reliably be
run concurrently with other NBD TCP tests.
This patch addresses the problem by dropping nbd_server_set_tcp_port(),
and instead finding a new port every time nbd_server_start_tcp_socket()
is invoked. For this, we run qemu-nbd with --fork and on error evaluate
the output to see whether it contains "Address already in use". If so,
we try the next port.
On success, we still want to continually redirect the output from
qemu-nbd to stderr. To achieve both, we redirect qemu-nbd's stderr to a
FIFO that we then open in bash. If the parent process exits with status
0 (which means that the server has started successfully), we launch a
background cat process that copies the FIFO to stderr. On failure, we
read the whole content into a variable and then evaluate it.
While at it, use --fork in nbd_server_start_unix_socket(), too. Doing
so allows us to drop nbd_server_wait_for_*_socket().
Note that the reason common.nbd did not use --fork before is that
qemu-nbd did not have --pid-file.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190508211820.17851-6-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20190508211820.17851-5-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
qemu_nbd_pipe() currently unconditionally reads qemu-nbd's output. That
is not ideal because qemu-nbd may keep stderr open after the parent
process has exited.
Currently, the only user of qemu_nbd_pipe() is 147, which discards the
whole output if the parent process returned success and only evaluates
it on error. Therefore, we can replace qemu_nbd_pipe() by
qemu_nbd_early_pipe() that does the same: Discard the output on success,
and return it on error.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190508211820.17851-3-mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>