Since f6880b7f [qemu-log: support simple pid substitution for logs],
test-logging creates files with hard-coded names in /tmp. In the best
case, this prevents multiple developers from running "make check" on
the same machine. In the worst case, it allows for symlink attacks,
enabling an attacker to overwrite files that are writable by the
developer running "make check".
Instead of hard-coding the paths, create a temporary directory using
g_dir_make_tmp() and clean it up afterwards.
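For reference, a minimal sketch of the approach using the GLib calls named
above (the actual test code may differ in its details):

    #include <glib.h>
    #include <glib/gstdio.h>

    int main(void)
    {
        GError *err = NULL;
        /* unique scratch directory instead of fixed names under /tmp */
        char *dir = g_dir_make_tmp("qemu-test-logging.XXXXXX", &err);
        g_assert_no_error(err);

        char *file = g_build_filename(dir, "qemu.log", NULL);
        /* ... exercise the logging code against 'file' ... */

        /* clean it up afterwards */
        g_unlink(file);
        g_rmdir(dir);
        g_free(file);
        g_free(dir);
        return 0;
    }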
Fixes: f6880b7f ("qemu-log: support simple pid substitution for logs")
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1471545963-11720-3-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
(commit 80dcfb8532ae76343109a48f12ba8ca1c505c179)
Upon migration, the code uses a timer based on vm_clock, set for 1ns in
the future from post_load, to send the event in case host_connected
differs between migration source and target.
However, it's not guaranteed that the apic is ready to inject irqs into
the guest, and the irq line remains high, so any future interrupts go
unnoticed by the guest as well.
That's because 1) the migration coroutine is not blocked when it gets
EAGAIN while reading QEMUFile, and 2) the vm_clock is currently enabled
by default and does not rely on vm_start() being called, which means
vm_clock timers can run before VCPUs are running.
So, let's disable the vm_clock by default, keeping the original design
intention for vm_clock timers.
Meanwhile, change the test-aio test case to use QEMU_CLOCK_REALTIME
instead of QEMU_CLOCK_VIRTUAL, as the block code does.
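For illustration, arming a realtime-clock timer with the QEMU timer API
looks roughly like this (the callback and interval are made up for the
example):

    #include "qemu/osdep.h"
    #include "qemu/timer.h"

    static void timer_cb(void *opaque)
    {
        /* runs off the host clock, so it fires even before vCPUs start */
    }

    static void arm_test_timer(void)
    {
        /* QEMU_CLOCK_REALTIME keeps ticking while QEMU_CLOCK_VIRTUAL is stopped */
        QEMUTimer *t = timer_new_ns(QEMU_CLOCK_REALTIME, timer_cb, NULL);
        timer_mod(t, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + 1000000 /* 1ms */);
    }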
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1470728955-90600-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
109 iotest is broken for raw after 0965a41e998ab820b5
[mirror: double performance of the bulk stage if the disc is full]
The problem is with how the block job finishes with an error: before the
patch above, mirror was not very asynchronous; it created one big request
at the start of the disk, that request failed, and qemu produced
BLOCK_JOB_COMPLETED with zero progress.
After 0965a41, mirror starts several smaller requests in parallel, so by
the time BLOCK_JOB_COMPLETED is emitted we have some successful, non-zero
progress.
This patch solves the issue by filtering out the progress from the output
of test 109.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Since 7f0317cfc8da6 we have an API to specify the ID of block jobs, and
we also guarantee that they are well-formed and unique.
This patch adds tests to check some common scenarios.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We have three qtest tests which have test names ending with "error".
This is awkward because the output of verbose test runs looks like
/crypto/task/error: OK
/crypto/task/thread_error: OK
which gives false positives if you are grepping build logs for
errors by looking for "error:". Since there are only three tests
with this problem, just rename them all to 'failure' instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1470307178-22848-1-git-send-email-peter.maydell@linaro.org
|
|
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
# gpg: Signature made Fri 05 Aug 2016 10:24:34 BST
# gpg: using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
virtio-blk: Remove stale comment about draining
virtio-blk: Release s->rq queue at system_reset
throttle: Test burst limits lower than the normal limits
throttle: Don't allow burst limits to be lower than the normal limits
block/parallels: check new image size
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This checks that making FOO_max lower than FOO is not allowed.
We could also forbid having FOO_max == FOO, but that doesn't have
any odd side effects and it would require us to update several other
tests, so let's keep it simple.
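A sketch of the kind of constraint being tested (the names are
illustrative, not the actual throttle code):

    /* a burst limit of 0 means "unlimited"; otherwise it must not
     * undercut the corresponding normal limit */
    static bool burst_limit_valid(double avg, double max)
    {
        return max == 0 || max >= avg;
    }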
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 2f90f9ee58aa14b7bd985f67c5996b06e0ab6c19.1469693110.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
We don't have .git in the docker checkout, so pass --enable-werror
explicitly to keep -Werror enabled.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1469453510-658-1-git-send-email-famz@redhat.com
|
|
By not using "--format" with the docker images command. The option is
not available in the docker command shipped with RHEL 7; use an awk
matching command instead.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1470202928-3392-1-git-send-email-famz@redhat.com>
|
|
Printf'ing a NULL string is undefined behaviour. Avoid it.
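A minimal illustration of the pattern (the actual call site is in the
test code):

    #include <stdio.h>

    static void print_name(const char *name)
    {
        /* passing NULL for %s is undefined behaviour, so substitute a marker */
        printf("name: %s\n", name ? name : "(null)");
    }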
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1469459025-23606-4-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
So far, QHT functions assume that the passed qht has previously been
initialized--otherwise they segfault.
This patch makes an exception for qht_statistics_init, with the goal
of simplifying calling code. For instance, qht_statistics_init is
called from the 'info jit' dump, and given that under KVM the TB qht
is never initialized, we get a segfault. Thus, instead of complicating
the 'info jit' code with additional checks, let's allow passing an
uninitialized qht to qht_statistics_init.
While at it, add a test for this to test-qht.
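A sketch of what such a test can look like (assuming the qht stats API;
exact field names may differ):

    #include "qemu/osdep.h"
    #include "qemu/qht.h"

    static void test_qht_stats_uninitialized(void)
    {
        struct qht ht;
        struct qht_stats stats;

        memset(&ht, 0, sizeof(ht));   /* deliberately never qht_init()'d */

        /* must not segfault; should report an empty table instead */
        qht_statistics_init(&ht, &stats);
        g_assert_cmpuint(stats.entries, ==, 0);
        qht_statistics_destroy(&stats);
    }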
Before the patch (for $ qemu -enable-kvm [...]):
(qemu) info jit
[...]
direct jump count 0 (0%) (2 jumps=0 0%)
Program received signal SIGSEGV, Segmentation fault.
After the patch the "TB hash buckets", "TB hash occupancy"
and "TB hash avg chain" lines are omitted.
(qemu) info jit
[...]
direct jump count 0 (0%) (2 jumps=0 0%)
TB hash buckets 0/0 (-nan% head buckets used)
TB hash occupancy nan% avg chain occ. Histogram: (null)
TB hash avg chain nan buckets. Histogram: (null)
[...]
Reported by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1469205390-14369-1-git-send-email-cota@braap.org>
[Extract printing statistics to an entirely separate function. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.7-20160729' into staging
ppc patch queue 2016-07-29
Here are the current pending ppc and spapr related patches for
qemu-2.7. Given the freeze status, these are all bugfixes, with two
exceptions:
* There's some final rework of the vcpu hotplug model. Specifically
we add spapr specific code on the generic basis Igor established
to make cpu_index stable for pseries-2.7 and later machine types.
- This allows us to remove the limitation that cpu cores had to
be inserted in linear order, and removed in LIFO order.
- This is worth merging this late in 2.7 because it will avoid
considerable future grief with management layers needing to
discover whether out-of-order hotplug is possible, amongst
other things.
- For now we do add a constraint that the initial cpu cannot be
unplugged.
* We add two extra testcases to make check, for postcopy and
drive_del on ppc64.
- Not strictly bugfixes, but safe, because they don't affect the
actual code, and increase test coverage.
# gpg: Signature made Fri 29 Jul 2016 05:50:02 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160729:
tests: add drive_del-test to ppc/ppc64
spapr: Prevent boot CPU core removal
ppc: Fix fault PC reporting for lve*/stve* VMX instructions
test: port postcopy test to ppc64
Revert "spapr: Ensure CPU cores are added contiguously and removed in LIFO order"
spapr: init CPUState->cpu_index with index relative to core-id
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
pc, pci, virtio: cleanups, fixes
a bunch of bugfixes and a couple of cleanups
making these easier and/or making debugging easier
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 29 Jul 2016 04:11:01 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (41 commits)
mptsas: Fix a migration compatible issue
vhost: do not update last avail idx on get_vring_base() failure
vhost: add vhost_net_set_backend()
vhost-user: add error report in vhost_user_write()
tests: fix vhost-user-test leak
tests: plug some leaks in virtio-net-test
vhost-user: wait until backend init is completed
char: add and use tcp_chr_wait_connected
char: add chr_wait_connected callback
vhost: add assert() to check runtime behaviour
vhost-net: vhost_migration_done is vhost-user specific
Revert "vhost-net: do not crash if backend is not present"
vhost-user: add get_vhost_net() assertions
vhost-user: keep vhost_net after a disconnection
vhost-user: check vhost_user_{read,write}() return value
vhost-user: check qemu_chr_fe_set_msgfds() return value
vhost-user: call set_msgfds unconditionally
qemu-char: fix qemu_chr_fe_set_msgfds() crash when disconnected
vhost: use error_report() instead of fprintf(stderr,...)
vhost: add missing VHOST_OPS_DEBUG
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
As the userfaultfd syscall is available on powerpc, postcopy migration
can be used.
This patch adds the support needed to test this on powerpc: instead of
using a bootsector to run code that modifies memory, we use a FORTH
script in the "boot-command" property.
As the spapr machine doesn't support the "-prom-env" argument
(the nvram is initialized by SLOF and not by QEMU),
"boot-command" is provided to SLOF via a file-mapped nvram
(with "-drive file=...,if=pflash").
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Spotted by valgrind.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Found thanks to valgrind.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The previous commit refactoring iotests.py:
commit 66613974468fb6e1609fb3eabf55981b1ee436cf
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Wed Jul 20 14:23:10 2016 +0100
scripts: refactor the VM class in iotests for reuse
was not properly tested and included a number of broken
bits.
- The 'event_match' method was not moved into qemu.py
- The 'self._args' list parameter in QEMUMachine needs
to be copied otherwise modifications will affect the
global 'qemu_opts' variable in iotests.py
- The QEMUQtestMachine class methods had inverted
parameter order for the super() calls
- The QEMUQtestMachine class forgot to add
'-machine accel=qtest'
- The QEMUQtestMachine class constructor needs to set
a default 'name' value before using it as it may
be None
- The QEMUQtestMachine class constructor needs to use
named parameters when calling the super constructor
as it is leaving out some positional parameters.
- The 'qemu_prog' variable should be a string not a
list in iotests.py
- The VM class constructor needs to use named
parameters when calling the super constructor
as it is leaving out some positional parameters.
- The path to the socket-scm-helper needs to be
passed into the QEMUMachine class
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1469549767-27249-1-git-send-email-berrange@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Do not create a leaking temporary file, but use a static file instead.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
|
|
This introduces a moderately general purpose framework for
testing performance of migration.
The initial guest workload is provided by the included 'stress'
program, which is configured to spawn one thread per guest CPU
and run a maximally memory intensive workload. It will loop
over GB of memory, xor'ing each byte with data from a 4k array
of random bytes. This ensures heavy read and write load across
all of guest memory to stress the migration performance. While
running the 'stress' program will record how long it takes to
xor each GB of memory and print this data for later reporting.
The test engine will spawn a pair of QEMU processes, either on
the same host, or with the target on a remote host via ssh,
using the host kernel and a custom initrd built with 'stress'
as the /init binary. Kernel command line args are set to ensure
a fast kernel boot time (< 1 second) between launching QEMU and
the stress program starting execution.
Nonetheless, the test engine will initially wait N seconds for
the guest workload to stabilize, before starting the migration
operation. When migration is running, the engine will use pause,
post-copy, autoconverge, xbzrle compression and multithread
compression features, as well as downtime & bandwidth tuning
to encourage completion. If migration completes, the test engine
will wait N seconds again for the guest workload to stabilize on
the target host. If migration does not complete after a preset
number of iterations, it will be aborted.
While the QEMU process is running on the source host, the test
engine will sample the host CPU usage of QEMU as a whole, and
each vCPU thread. While migration is running, it will record
all the stats reported by 'query-migrate'. Finally, it will
capture the output of the stress program running in the guest.
All the data produced from a single test execution is recorded
in a structured JSON file. A separate program is then able to
create interactive charts using the "plotly" python + javascript
libraries, showing the characteristics of the migration.
The data output provides visualization of the effect on guest
vCPU workloads from the migration process, the corresponding
vCPU utilization on the host, and the overall CPU hit from
QEMU on the host. This is correlated from statistics from the
migration process, such as downtime, vCPU throttling and iteration
number.
While the tests can be run individually with arbitrary parameters,
there is also a facility for producing batch reports for a number
of pre-defined scenarios / comparisons, in order to be able to
get standardized results across different hardware configurations
(eg TCP vs RDMA, or comparing different VCPU counts / memory
sizes, etc).
To use this, first you must build the initrd image
$ make tests/migration/initrd-stress.img
To run a one-shot test with all default parameters
$ ./tests/migration/guestperf.py > result.json
This has many command line args for varying its behaviour.
For example, to increase the RAM size and CPU count and
bind it to specific host NUMA nodes
$ ./tests/migration/guestperf.py \
--mem 4 --cpus 2 \
--src-mem-bind 0 --src-cpu-bind 0,1 \
--dst-mem-bind 1 --dst-cpu-bind 2,3 \
> result.json
Using mem + cpu binding is strongly recommended on NUMA
machines, otherwise the guest performance results will
vary wildly between runs of the test due to lucky/unlucky
NUMA placement, making sensible data analysis impossible.
To make it run across separate hosts:
$ ./tests/migration/guestperf.py \
--dst-host somehostname > result.json
To request that post-copy is enabled, with switchover
after 5 iterations
$ ./tests/migration/guestperf.py \
--post-copy --post-copy-iters 5 > result.json
Once a result.json file is created, a graph of the data
can be generated, showing guest workload performance per
thread and the migration iteration points:
$ ./tests/migration/guestperf-plot.py --output result.html \
--migration-iters --split-guest-cpu result.json
To further include host vCPU utilization and overall QEMU
utilization
$ ./tests/migration/guestperf-plot.py --output result.html \
--migration-iters --split-guest-cpu \
--qemu-cpu --vcpu-cpu result.json
NB, the 'guestperf-plot.py' command requires that you have
the plotly python library installed. eg you must do
$ pip install --user plotly
Viewing the result.html file requires that you have the
plotly.min.js file in the same directory as the HTML
output. This js file is installed as part of the plotly
python library, so can be found in
$HOME/.local/lib/python2.7/site-packages/plotly/offline/plotly.min.js
The guestperf-plot.py program can accept multiple json files
to plot, enabling results from different configurations to
be compared.
Finally, to run the entire standardized set of comparisons
$ ./tests/migration/guestperf-batch.py \
--dst-host somehost \
--mem 4 --cpus 2 \
--src-mem-bind 0 --src-cpu-bind 0,1 \
--dst-mem-bind 1 --dst-cpu-bind 2,3 \
--output tcp-somehost-4gb-2cpu
will store JSON files from all scenarios in the directory
named tcp-somehost-4gb-2cpu
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1469020993-29426-7-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
|
|
The iotests module has a python class for controlling QEMU
processes. Pull the generic functionality out of this file
and create a scripts/qemu.py module containing a QEMUMachine
class. Put the QTest integration support into a subclass
QEMUQtestMachine.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1469020993-29426-4-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
|
|
pc, pci, virtio: new features, cleanups, fixes
- interrupt remapping for intel iommus
- a bunch of virtio cleanups
- fixes all over the place
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 21 Jul 2016 18:49:30 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (57 commits)
intel_iommu: avoid unnamed fields
virtio: Update migration docs
virtio-gpu: Wrap in vmstate
virtio-gpu: Use migrate_add_blocker for virgl migration blocking
virtio-input: Wrap in vmstate
9pfs: Wrap in vmstate
virtio-serial: Wrap in vmstate
virtio-net: Wrap in vmstate
virtio-balloon: Wrap in vmstate
virtio-rng: Wrap in vmstate
virtio-blk: Wrap in vmstate
virtio-scsi: Wrap in vmstate
virtio: Migration helper function and macro
virtio-serial: Remove old migration version support
virtio-net: Remove old migration version support
virtio-scsi: Replace HandleOutput typedef
Revert "mirror: Workaround for unexpected iohandler events during completion"
virtio-scsi: Call virtio_add_queue_aio
virtio-blk: Call virtio_add_queue_aio
virtio: Introduce virtio_add_queue_aio
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Merge remote-tracking branch 'remotes/famz/tags/docker-pull-request' into staging
# gpg: Signature made Wed 20 Jul 2016 12:19:56 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/docker-pull-request:
docker: pass EXECUTABLE to build script
docker: Don't start a container that doesn't exist
docker: Add "images" subcommand to docker.py
docker: Fix exit code if $CMD failed
docker: More sensible run script
tests/docker/docker.py: add update operation
tests/docker/dockerfiles: new debian-bootstrap.docker
tests/docker/docker.py: check and run .pre script
tests/docker/docker.py: support --include-executable
tests/docker/docker.py: docker_dir outside build
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
On a slower machine the test can take more than 30 seconds.
Increase the timeout to 100 seconds.
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
|
|
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2016-07-19' into staging
QAPI patches for 2016-07-19
# gpg: Signature made Tue 19 Jul 2016 19:35:27 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2016-07-19:
net: Use correct type for bool flag
qapi: Change Netdev into a flat union
block: Simplify drive-mirror
block: Simplify block_set_io_throttle
qapi: Implement boxed types for commands/events
qapi: Plumb in 'boxed' to qapi generator lower levels
qapi-event: Simplify visit of non-implicit data
qapi: Drop useless gen_err_check()
qapi: Add type.is_empty() helper
qapi: Hide tag_name data member of variants
qapi: Special case c_name() for empty type
qapi: Require all branches of flat union enum to be covered
net: use Netdev instead of NetClientOptions in client init
qapi: change QmpInputVisitor to QSLIST
qapi: change QmpOutputVisitor to QSLIST
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
To build a docker image which needs qemu linux-user emulation we need
to pass --include-executable to the build script. Using the same
mechanism as for other container controls, we enable the option if
EXECUTABLE is set on the make command line, e.g.:
make docker-image-debian-bootstrap V=1 J=9 DEB_ARCH=armhf \
DEB_TYPE=stable EXECUTABLE=./arm-linux-user/qemu-arm
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-11-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
Image building targets are dependencies of test running targets, so when
a docker image doesn't exist, it means it's skipped (due to dependency
checks in pre script). Therefore, skip the test too.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-10-git-send-email-famz@redhat.com
|
|
This is a wrapper for the 'docker images' command.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-9-git-send-email-famz@redhat.com
|
|
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-8-git-send-email-famz@redhat.com
|
|
It is very easy to figure out the current directory and the bash options
from within the script, so do less in the Makefile invocation command
line and determine both options in the script.
This makes the next patch easier.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-7-git-send-email-famz@redhat.com
|
|
This adds a new operation to the docker script to allow updating of
binaries in an existing container. This is because it would be
inefficient to re-build the whole container just for an update to the
QEMU binary.
To update the executable run:
./tests/docker/docker.py update \
debian:armhf ./arm-linux-user/qemu-arm
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-6-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
Together with the debian-bootstrap.pre script, this can now build an
arbitrary architecture of Debian using debootstrap. This allows
debootstrap to set up its first stage before the container is built.
To build a container you need a command line like:
DEB_ARCH=armhf DEB_TYPE=testing \
./tests/docker/docker.py build \
--include-executable=arm-linux-user/qemu-arm debian:armhf \
./tests/docker/dockerfiles/debian-bootstrap.docker
Although a number of non-Debian systems package the debootstrap script,
it is fairly portable in itself. Assuming we have some sort of fakeroot
implementation, we can just clone the upstream repository and use the
script from there.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-5-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
The docker script will now search for an associated $dockerfile.pre
script, which gets run in the same build context that the dockerfile
will be built in. This is to support pre-seeding the build context
before running the docker build.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-4-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
When passed the path to a binary we copy it and any linked libraries (if
it is dynamically linked) into the docker build context. These can then
be included by a dockerfile with the line:
# Copy all of context into container
ADD . /
This is mainly intended for setting up foreign architecture docker
images which use qemu-$arch to do cross-architecture linux-user
execution. It also relies on the host and guest file-system following
reasonable multi-arch layouts so the copied libraries don't clash with
the guest ones.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-3-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
Instead of letting the build_image create the temporary working dir we
move the creation to the build command. This is preparation for the
later patches where additional files can be added to the build context
before the build step is run.
We also ensure we remove the build context after we are done (mkdtemp
doesn't do this automatically for you).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-2-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
|
|
Turn on the ability to pass command and event arguments in
a single boxed parameter, which must name a non-empty type
(although the type can be a struct with all optional members).
For structs, it makes it possible to pass a single qapi type
instead of a breakout of all struct members (useful if the
arguments are already in a struct or if the number of members
is large); for other complex types, it is now possible to use
a union or alternate as the data for a command or event.
The empty type may be technically feasible if needed down the
road, but it's easier to forbid it now and relax things to allow
it later, than it is to allow it now and have to special case
how the generated 'q_empty' type is handled (see commit 7ce106a9
for reasons why nothing is generated for the empty type). An
alternate type is never considered empty, but now that a boxed
type can be either an object or an alternate, we have to provide
a trivial QAPISchemaAlternateType.is_empty(). The new call to
arg_type.is_empty() during QAPISchemaCommand.check() requires
that we first check the type in question; but there is no chance
of introducing a cycle since objects do not refer back to commands.
We still have a split in syntax checking between ad-hoc parsing
up front (merely validates that 'boxed' has a sane value) and
during .check() methods (if 'boxed' is set, then 'data' must name
a non-empty user-defined type).
Generated code is unchanged, as long as no client uses the
new feature.
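As a rough illustration of what the flag changes in the generated code,
here is the shape of a command handler with and without boxing, using
block_set_io_throttle from the series above (parameter lists abridged):

    /* Without 'boxed', the generated handler gets one C parameter per
     * member (plus has_* flags for optional ones), roughly:
     *
     *   void qmp_block_set_io_throttle(const char *device, int64_t bps,
     *                                  int64_t bps_rd, int64_t bps_wr,
     *                                  ..., Error **errp);
     *
     * With 'boxed': true, the handler receives the QAPI struct directly:
     */
    void qmp_block_set_io_throttle(BlockIOThrottle *arg, Error **errp);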
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1468468228-27827-10-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Test files renamed to *-boxed-*]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
The next patch will add support for passing a qapi union type
as the 'data' of a command. But to do that, the user function
for implementing the command, as called by the generated
marshal command, must take the corresponding C struct as a
single boxed pointer, rather than a breakdown into one
parameter per member. Even without a union, being able to use
a C struct rather than a list of parameters can make it much
easier to handle coding with QAPI.
This patch adds the internal plumbing of a 'boxed' flag
associated with each command and event. In several cases,
this means adding indentation, with one new dead branch and
the remaining branch being the original code more deeply
nested; this was done so that the new implementation in the
next patch is easier to review without also being mixed with
indentation changes.
For this patch, no behavior or generated output changes, other
than the testsuite outputting the value of the new flag
(always False for now).
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1468468228-27827-9-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Identifier box renamed to boxed in two places]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
Clean up the only remaining external use of the tag_name field of
QAPISchemaObjectTypeVariants, by explicitly listing the generated
'type' tag for all variants in the testsuite (you can still tell
simple unions by the -wrapper types). Then we can mark the
tag_name field as private by adding a leading underscore to prevent
any further use.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1468468228-27827-5-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
We were previously enforcing that all flat union branches were
found in the corresponding enum, but not that all enum values
were covered by branches. The resulting generated code would
abort() if the user passes the uncovered enum value.
We don't automatically treat non-present branches in a flat
union as empty types, for symmetry with simple unions (there,
the enum type is generated from the list of all branches, so
there is no way to omit a branch but still have it be part of
the union).
A later patch will add shorthand so that branches that are empty
in flat unions can be declared as 'branch':{} instead of
'branch':'Empty', to avoid the need for an otherwise useless
explicit empty type. [Such shorthand for simple unions is a bit
harder to justify, since we would still have to generate a
wrapper type that parses 'data':{}, rather than truly being an
empty branch with no additional siblings to the 'type' member.]
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1468468228-27827-3-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
Some guests (win2008 server for example) do a lot of unnecessary
flushing when underlying media has not changed. This adds additional
overhead on host when calling fsync/fdatasync.
This change introduces a write generation scheme in BlockDriverState.
The current write generation is checked against the last flushed
generation to avoid unnecessary flushes.
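A sketch of the write-generation idea (struct and helper names are
illustrative, not the actual block-layer code):

    typedef struct DiskState {
        unsigned int write_gen;    /* bumped whenever a write completes */
        unsigned int flushed_gen;  /* generation recorded by the last flush */
    } DiskState;

    static void flush_if_needed(DiskState *s)
    {
        if (s->write_gen == s->flushed_gen) {
            return;                /* nothing written since last flush: skip it */
        }
        /* ... issue the real fsync/fdatasync here ... */
        s->flushed_gen = s->write_gen;
    }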
The problem with excessive flushing was found by a performance test
which does parallel directory tree creation (from 2 processes).
Results improved from 0.424 loops/sec to 0.432 loops/sec.
Each loop creates 10^3 directories with 10 files in each.
This affected some blkdebug testcases that were expecting error logs from
failure-injected flushes which are now skipped entirely
(tests 026 071 089).
This also affects the performance of block jobs: BLOCK_JOB_READY events
for drive-mirror and active block-commit commands now arrive faster,
before the QMP send successfully returns to the caller (tests 141 144).
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468870792-7411-5-git-send-email-den@openvz.org
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
Due to changes in flush behaviour, clean disks stopped generating
flush_to_disk events, and the IDE and AHCI tests that exercise flush
commands started to fail.
This change adds additional DMA writes to the affected tests before
sending flush commands, so that bdrv_flush actually generates a
flush_to_disk event.
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468870792-7411-4-git-send-email-den@openvz.org
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
iotest 157 pretends not to care about the image format used, but in fact
it does due to the format name not being filtered in its output. This
patch adds filtering and changes the reference output accordingly.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160711132246.3152-1-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.
This patch adds a new test to check the naming of throttling groups.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: d87d02823a6b91609509d8bb18e2f5dbd9a6102c.1467986342.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
In practice the entry argument is always known at creation time, and
it is confusing that sometimes qemu_coroutine_enter is used with a
non-NULL argument to re-enter a coroutine (this happens in
block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value
at creation time, for consistency with e.g. aio_bh_new.
Mostly done with the following semantic patch:
@ entry1 @
expression entry, arg, co;
@@
- co = qemu_coroutine_create(entry);
+ co = qemu_coroutine_create(entry, arg);
...
- qemu_coroutine_enter(co, arg);
+ qemu_coroutine_enter(co);
@ entry2 @
expression entry, arg;
identifier co;
@@
- Coroutine *co = qemu_coroutine_create(entry);
+ Coroutine *co = qemu_coroutine_create(entry, arg);
...
- qemu_coroutine_enter(co, arg);
+ qemu_coroutine_enter(co);
@ entry3 @
expression entry, arg;
@@
- qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
+ qemu_coroutine_enter(qemu_coroutine_create(entry, arg));
@ reentry @
expression co;
@@
- qemu_coroutine_enter(co, NULL);
+ qemu_coroutine_enter(co);
except for the aforementioned few places where the semantic patch
stumbled (as expected) and for test_co_queue, which would otherwise
produce an uninitialized variable warning.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
The next patch moves the coroutine argument from first-enter to
creation time. In this case, coroutine has not been initialized
yet when the coroutine is created, so change to a pointer.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
When a new job is created, the job ID is taken from the device name of
the BDS. This patch adds a new 'job_id' parameter to let the caller
provide one instead.
This patch also verifies that the ID is always unique and well-formed.
This causes problems in a couple of places where no ID is being set,
because the BDS does not have a device name.
In the case of test_block_job_start() (from test-blockjob-txn.c) we
can simply use this new 'job_id' parameter to set the missing ID.
In the case of img_commit() (from qemu-img.c) we still don't have the
API to make commit_active_start() set the job ID, so we solve it by
setting a default value. We'll get rid of this as soon as we extend
the API.
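A sketch of the validation this enables on job creation (the wrapper
function name is made up, and the exact placement of the checks differs;
the helpers assumed are bdrv_get_device_name, id_wellformed and
block_job_get):

    static BlockJob *create_job_checked(BlockDriverState *bs,
                                        const char *job_id, Error **errp)
    {
        if (!job_id) {
            /* old behaviour: fall back to the device name of the BDS */
            job_id = bdrv_get_device_name(bs);
        }
        if (!id_wellformed(job_id)) {
            error_setg(errp, "Invalid job ID '%s'", job_id);
            return NULL;
        }
        if (block_job_get(job_id)) {
            error_setg(errp, "Job ID '%s' already in use", job_id);
            return NULL;
        }
        /* ... proceed with the actual job creation ... */
        return NULL;
    }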
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Cleaned up with scripts/clean-header-guards.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Tracked down with an ugly, brittle and probably buggy Perl script.
Also move includes converted to <...> up so they get included before
ours where that's obviously okay.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|