Fill in an element of the used ring with a single combined access to
guest physical memory, rather than using two separate accesses.
This reduces the overhead due to expensive address translation.
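A minimal standalone sketch of the before/after access pattern, with a
hypothetical guest_phys_write() standing in for the translated
guest-memory write path:

    #include <stddef.h>
    #include <stdint.h>

    /* Used-ring element layout from the virtio spec. */
    typedef struct VRingUsedElem {
        uint32_t id;
        uint32_t len;
    } VRingUsedElem;

    /* Hypothetical helper: one guest-physical write, including translation. */
    void guest_phys_write(uint64_t gpa, const void *buf, size_t len);

    /* Before: two translations, one per field. */
    static void fill_used_elem_slow(uint64_t elem_gpa, uint32_t id, uint32_t len)
    {
        guest_phys_write(elem_gpa + offsetof(VRingUsedElem, id), &id, sizeof(id));
        guest_phys_write(elem_gpa + offsetof(VRingUsedElem, len), &len, sizeof(len));
    }

    /* After: build the element locally, then do one combined write. */
    static void fill_used_elem_fast(uint64_t elem_gpa, uint32_t id, uint32_t len)
    {
        VRingUsedElem e = { .id = id, .len = len };
        guest_phys_write(elem_gpa, &e, sizeof(e));
    }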
Signed-off-by: Vincenzo Maffione <v.maffione@gmail.com>
Message-Id: <e4a89a767a4a92cbb6bcc551e151487eb36e1722.1450218353.git.v.maffione@gmail.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The virtqueue_pop() implementation needs to check if the avail ring
contains some pending buffers. To perform this check, it is not
always necessary to fetch the avail_idx in the VQ memory, which is
expensive. This patch introduces a shadow variable tracking avail_idx
and modifies virtio_queue_empty() to access avail_idx in physical
memory only when necessary.
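A simplified standalone model of the check, with vring_avail_idx_read()
as a hypothetical stand-in for the expensive guest-memory read:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct VirtQueueModel {
        uint16_t last_avail_idx;   /* next avail entry we will consume */
        uint16_t shadow_avail_idx; /* last avail->idx value read from the guest */
    } VirtQueueModel;

    /* Hypothetical helper: expensive read of avail->idx from guest memory. */
    uint16_t vring_avail_idx_read(VirtQueueModel *vq);

    static bool virtio_queue_empty_model(VirtQueueModel *vq)
    {
        /* If the shadow already shows pending buffers, skip the guest access. */
        if (vq->shadow_avail_idx != vq->last_avail_idx) {
            return false;
        }
        /* Only now refresh the shadow from guest memory. */
        vq->shadow_avail_idx = vring_avail_idx_read(vq);
        return vq->shadow_avail_idx == vq->last_avail_idx;
    }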
Signed-off-by: Vincenzo Maffione <v.maffione@gmail.com>
Message-Id: <b617d6459902773d9f4ab843bfaca764f5af8eda.1450218353.git.v.maffione@gmail.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Accessing used_idx in the VQ requires an expensive access to
guest physical memory. Before this patch, 3 accesses are normally
done for each pop/push/notify call. However, since the used_idx is
only written by us, we can track it in our internal data structure.
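The same shadowing idea, sketched for the write side: since the device
is the only writer of used_idx, the local copy is always authoritative
(vring_used_idx_write() is a hypothetical helper):

    #include <stdint.h>

    typedef struct UsedIdxModel {
        uint16_t used_idx;   /* local copy: we are the only writer */
    } UsedIdxModel;

    /* Hypothetical helper: single write of used->idx to guest memory. */
    void vring_used_idx_write(UsedIdxModel *vq, uint16_t val);

    static void virtqueue_flush_model(UsedIdxModel *vq, unsigned count)
    {
        /* No guest-memory read is needed: bump the local copy and publish it. */
        vq->used_idx += count;
        vring_used_idx_write(vq, vq->used_idx);
    }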
Signed-off-by: Vincenzo Maffione <v.maffione@gmail.com>
Message-Id: <3d062ec54e9a7bf9fb325c1fd693564951f2b319.1450218353.git.v.maffione@gmail.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Compared to vring, virtio has a performance penalty of 10%. Fix it
by combining all the reads for a descriptor in a single address_space_read
call. This also simplifies the code nicely.
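A standalone sketch of the combined read, with guest_phys_read()
standing in for address_space_read():

    #include <stddef.h>
    #include <stdint.h>

    /* Descriptor layout from the virtio spec. */
    typedef struct VRingDesc {
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
    } VRingDesc;

    /* Hypothetical helper standing in for address_space_read(). */
    void guest_phys_read(uint64_t gpa, void *buf, size_t len);

    /* One combined access instead of four field-by-field loads. */
    static void vring_desc_read_model(uint64_t desc_table_gpa, unsigned i,
                                      VRingDesc *desc)
    {
        guest_phys_read(desc_table_gpa + (uint64_t)i * sizeof(VRingDesc),
                        desc, sizeof(*desc));
        /* Byte swapping for cross-endian guests is omitted in this sketch. */
    }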
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Build the addresses and s/g lists on the stack, and then copy them
to a VirtQueueElement that is just as big as required to contain this
particular s/g list. The cost of the copy is minimal compared to that
of a large malloc.
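A minimal sketch of the allocation pattern (names are illustrative, not
the actual VirtQueueElement layout): the iovecs are built in fixed-size
stack arrays, then copied into a single allocation sized exactly for
this request:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/uio.h>

    typedef struct ElemModel {
        unsigned out_num, in_num;
        struct iovec *out_sg;   /* both arrays live in the same allocation */
        struct iovec *in_sg;
    } ElemModel;

    static ElemModel *elem_alloc(const struct iovec *out, unsigned out_num,
                                 const struct iovec *in, unsigned in_num)
    {
        size_t sz = sizeof(ElemModel) + (out_num + in_num) * sizeof(struct iovec);
        ElemModel *e = malloc(sz);

        if (!e) {
            return NULL;
        }
        e->out_num = out_num;
        e->in_num = in_num;
        e->out_sg = (struct iovec *)(e + 1);
        e->in_sg = e->out_sg + out_num;
        memcpy(e->out_sg, out, out_num * sizeof(struct iovec));
        memcpy(e->in_sg, in, in_num * sizeof(struct iovec));
        return e;
    }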
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Build the addresses and s/g lists on the stack, and then copy them
to a VirtQueueElement that is just as big as required to contain this
particular s/g list. The cost of the copy is minimal compared to that
of a large malloc.
When virtqueue_map is used on the destination side of migration or on
loadvm, the iovecs have already been split at memory region boundary,
so we can just reuse the out_num/in_num we find in the file.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Allocate the arrays for in_addr/out_addr/in_sg/out_sg outside the
VirtQueueElement. For now, virtqueue_pop and vring_pop keep
allocating a very large VirtQueueElement.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Move allocation to virtio functions also when loading/saving a
VirtQueueElement. This will also let the load/save functions
keep backwards compatibility when the VirtQueueElement layout
is changed.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The return code of virtqueue_pop/vring_pop is unused except to check for
errors or 0. We can thus easily move allocation inside the functions
and just return a pointer to the VirtQueueElement.
The advantage is that we will be able to allocate only the space that
is needed for the actual size of the s/g list instead of the full
VIRTQUEUE_MAX_SIZE items. Currently VirtQueueElement takes about 48K
of memory, and this kind of allocation puts a lot of stress on malloc.
By cutting the size by two or three orders of magnitude, malloc can
use much more efficient algorithms.
The patch is pretty large, but changes to each device are testable
more or less independently. Splitting it would mostly add churn.
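A sketch of the resulting calling convention (model types only; the
real QEMU functions take additional arguments):

    #include <stdlib.h>

    typedef struct VirtQueueModel VirtQueueModel;
    typedef struct ElemModel ElemModel;   /* sized for this request's s/g list */

    /* Old shape: caller supplies a worst-case sized element; the return
     * code only distinguishes "got one" from "empty/error". */
    int virtqueue_pop_old(VirtQueueModel *vq, ElemModel *elem);

    /* New shape: allocate exactly what this request needs and return it
     * (NULL when the queue is empty); the caller frees it when done. */
    ElemModel *virtqueue_pop_new(VirtQueueModel *vq);

    static void drain_queue(VirtQueueModel *vq)
    {
        ElemModel *elem;

        while ((elem = virtqueue_pop_new(vq)) != NULL) {
            /* ... process the request ... */
            free(elem);
        }
    }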
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
I misunderstood the vmstate macro definition when I reworked the
virtio .get/.put.
The VMSTATE_STRUCT_VARRAY_KNOWN macro was described as being for "a
variable length array (i.e. _type *_field) but we know the
length". However it actually specified operation for arrays embedded in
the struct (i.e. _type _field[]) since it lacked the VMS_POINTER
flag. This caused offset calculation to be completely off, examining and
potentially sending random data instead of the VirtQueue content.
Replace the otherwise unused VMSTATE_STRUCT_VARRAY_KNOWN with a
VMSTATE_STRUCT_VARRAY_POINTER_KNOWN that includes the VMS_POINTER flag
(so now actually doing what it advertises) and use it in the virtio
migration code.
Fixes and description as per Sascha's suggestions/debug.
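A simplified standalone model of why the missing flag matters (this is
not QEMU's actual vmstate code; the flag and type names are
illustrative): with a POINTER-style flag the array lives behind a
pointer stored at the field offset, without it the array is expected to
be embedded at the offset itself.

    #include <stdint.h>
    #include <stddef.h>

    enum { VMS_POINTER_MODEL = 1 << 0 };   /* illustrative, not QEMU's enum */

    typedef struct FieldModel {
        size_t offset;     /* offset of the field inside the parent struct */
        unsigned flags;
    } FieldModel;

    /* Resolve where the array data actually starts. */
    static void *field_data(void *parent, const FieldModel *f)
    {
        uint8_t *base = (uint8_t *)parent + f->offset;

        if (f->flags & VMS_POINTER_MODEL) {
            /* _type *_field: dereference the pointer stored at the offset. */
            return *(void **)base;
        }
        /* _type _field[]: the array is embedded directly at the offset. */
        return base;
    }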
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Tested-By: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-By: Sascha Silbe <silbe@linux.vnet.ibm.com>
Fixes: 50e5ae4dc3e4f21e874512f9e87b93b5472d26e0
Fixes: 2cf0148674430b6693c60d42b7eef721bfa9509f
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-15-git-send-email-peter.maydell@linaro.org
|
|
into staging
VirtFS update:
Cleanups mostly isolating virtio related details into separate files. This
is done to enable easy addition of Xen transport for VirtFS.
The changes include:
1. Rename a bunch of files and functions to make clear they are generic.
2. Disentangle virtio transport code from generic 9pfs code.
3. Some function name clean-up.
# gpg: Signature made Tue 12 Jan 2016 06:04:35 GMT using RSA key ID 04C4E23A
# gpg: Good signature from "Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4846 9DE7 1860 360F A6E9 968C DE41 A4FE 04C4 E23A
* remotes/kvaneesh/tags/for-upstream-signed: (25 commits)
9pfs: introduce V9fsVirtioState
9pfs: factor out v9fs_device_{,un}realize_common
9pfs: rename virtio-9p.c to 9p.c
9pfs: rename virtio_9p_set_fd_limit to use v9fs_ prefix
9pfs: move handle_9p_output and make it static function
9pfs: export pdu_{submit,alloc,free}
9pfs: factor out virtio_9p_push_and_notify
9pfs: break out 9p.h from virtio-9p.h
9pfs: break out virtio_init_iov_from_pdu
9pfs: factor out pdu_push_and_notify
9pfs: factor out virtio_pdu_{,un}marshal
9pfs: make pdu_{,un}marshal proper functions
9pfs: PDU processing functions should start pdu_ prefix
9pfs: PDU processing functions don't need to take V9fsState as argument
fsdev: rename virtio-9p-marshal.{c,h} to 9p-iov-marshal.{c,h}
fsdev: break out 9p-marshal.{c,h} from virtio-9p-marshal.{c,h}
9pfs: remove dead code
9pfs: merge hw/virtio/virtio-9p.h into hw/9pfs/virtio-9p.h
9pfs: rename virtio-9p-xattr{,-user}.{c,h} to 9p-xattr{,-user}.{c,h}
9pfs: rename virtio-9p-synth.{c,h} to 9p-synth.{c,h}
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
V9fsState now only contains generic fields. Introduce V9fsVirtioState
for virtio transport. Change virtio-pci and virtio-ccw to use
V9fsVirtioState.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
|
|
There's no such thing as "PCI queues" in the virtio core.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The 'virtqueue_state' and 'ringsize' can be saved using VMSTATE
macros rather than hand-coded .get/.put functions.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
|
|
The deleted file only contained V9fsConf, which wasn't virtio specific.
Merge that into the general header of 9pfs.
Header inclusions are fixed up along the way.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
|
|
realize method
In 1811e64 'hw/virtio: Add PCIe capability to virtio devices', the
QEMU_PCI_CAP_EXPRESS capability was added to virtio's pci_dev, within
'virtio_pci_realize' - the pci device object realization method.
This occurs too late, as 'pci_qdev_realize' (DeviceClass.realize of
TYPE_PCI_DEVICE) has already been called without knowing that the
device instance is indeed an "express" instance, thus allocating
insufficient PCI config space.
As a result, the device may crash upon an attempt to write to the PCIe
config space.
Fix this by setting the QEMU_PCI_CAP_EXPRESS capability early in
virtio-pci's own DeviceClass realize method.
This also makes code cleaner, as 'virtio_pci_realize' may now access the
'pci_is_express' predicate when needed.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@ravellosystems.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|
|
If you run a qemu advertising VERSION_1 with an old kernel where
vhost did not yet support VERSION_1, you'll end up with a device
that is {modern pci|ccw revision 1} but does not advertise VERSION_1.
This is not a sensible configuration and is rejected by the Linux
guest drivers.
To fix this, add a ->post_plugged() callback invoked after features
have been queried that can handle the VERSION_1 bit being withdrawn
and change ccw to fall back to revision 0 if VERSION_1 is gone.
Note that pci is _not_ fixed; we'll need to rethink the approach
for the next release but at least for pci it's not a regression.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This reverts commit 3a12f32229a046f4d4ab0a3a52fb01d2d5a1ab76.
In the case of live migration, several queues can be enabled, not only
the first one. So informing the backend that only the first queue is
enabled is wrong.
Reported-by: Thibaut Collet <thibaut.collet@6wind.com>
Cc: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
|
|
commit 2b8819c6eee517c1582983773f8555bb3f9ed645
("vhost-user: modify SET_LOG_BASE to pass mmap size and offset")
passes the log size in units of 4-byte chunks instead of the
expected size in bytes.
Fix this up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We currently send the VRING_ENABLE message only for the first ring;
that's wrong: we must start/stop them all.
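A minimal sketch of the fix, with vhost_user_set_vring_enable_one() as
a hypothetical per-ring helper:

    #include <stdbool.h>

    /* Hypothetical helper sending one SET_VRING_ENABLE message. */
    int vhost_user_set_vring_enable_one(int ring_index, bool enable);

    /* Start/stop must cover every ring, not just ring 0. */
    static int set_all_vrings_enable(int nvqs, bool enable)
    {
        for (int i = 0; i < nvqs; i++) {
            int r = vhost_user_set_vring_enable_one(i, enable);
            if (r < 0) {
                return r;
            }
        }
        return 0;
    }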
Reported-by: Victor Kaplansky <victork@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
When we get an unexpected response, print out
the original request.
This helps debug protocol errors tremendously.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
But do not depend on the PROTOCOL_F_MQ feature bit, so that we can use
SET_VRING_ENABLE to signal the backend on stop even if MQ is disabled.
That's reasonable, since we will always have at least one queue pair.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The virtio devices are converted to PCI-Express
if they are plugged into a PCI-Express bus and
the 'modern' protocol is enabled.
Devices plugged directly into the Root Complex as
Integrated Endpoints remain PCI.
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Send SET_VRING_ENABLE at start/stop, to give the backend
an explicit sign of our state.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This patch basically reverts commit d1f8b30e.
It turned out that it breaks stuff, so revert it:
http://lists.nongnu.org/archive/html/qemu-devel/2015-10/msg00949.html
CC: "Michael S. Tsirkin" <mst@redhat.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Unlike the kernel, a vhost-user application accesses the log table by
mmapping it into its user space. This change adds two new fields to the
VhostUserMsg payload, mmap_size and mmap_offset, and makes QEMU pass
them to the vhost-user application in the VHOST_USER_SET_LOG_BASE
request.
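A sketch of the new payload described above (the struct name is
illustrative; the field names follow the commit text):

    #include <stdint.h>

    /* Carried in the VHOST_USER_SET_LOG_BASE request alongside the log fd,
     * so the backend knows how much of the fd to mmap and at which offset
     * the log starts within that mapping. */
    typedef struct VhostUserLogModel {
        uint64_t mmap_size;     /* size of the region to mmap, in bytes */
        uint64_t mmap_offset;   /* offset of the log within the mapping */
    } VhostUserLogModel;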
Signed-off-by: Victor Kaplansky <victork@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The guest always gets zero when reading queue_enable. This violates the
spec. Fix this by setting queue_enable to true on any guest write and
clearing it to zero during reset.
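A two-line model of the intended behaviour (the state layout is
illustrative):

    #include <stdint.h>

    typedef struct VQEnableModel {
        uint16_t enabled;   /* value the guest reads back from queue_enable */
    } VQEnableModel;

    /* Any guest write to queue_enable marks the queue as enabled... */
    static void queue_enable_write(VQEnableModel *vq, uint16_t val)
    {
        (void)val;
        vq->enabled = 1;
    }

    /* ...and a device reset clears it again. */
    static void queue_reset(VQEnableModel *vq)
    {
        vq->enabled = 0;
    }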
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We used to use MMIO for notification. This could be slow on some
architectures (e.g. on x86 without EPT). So this patch introduces a PIO
BAR and a PIO notification capability for modern devices. This ability
is enabled through the "modern-pio-notify" property for virtio PCI
devices and is disabled by default. Management can enable it when it
thinks it is needed.
Benchmarks show almost no obvious difference compared to legacy devices
on machines without EPT. Thanks to Wenli Quan <wquan@redhat.com>
for the benchmarking.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We currently use a data-match eventfd for 1.0 notification. This can be
slow since software decoding is needed for the MMIO exit. To speed this
up, we can switch to a zero-length MMIO eventfd for 1.0 notification,
since the queue index can be derived directly from the write address.
The KVM kernel module can utilize this by registering it on the fast
MMIO bus, which can be as fast as PIO on EPT-capable machines when fast
MMIO is supported by the host kernel.
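A simplified model of the two registration schemes (this is not the
QEMU/KVM ioeventfd API, just the shape of the choice):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct IoeventfdModel {
        uint64_t addr;       /* guest address that triggers the eventfd */
        unsigned size;       /* 0 = wildcard: any write length matches */
        bool     match_data; /* must the written value equal 'data'? */
        uint64_t data;       /* queue index to match when match_data is set */
        int      fd;         /* eventfd signalled on a matching write */
    } IoeventfdModel;

    /* Old scheme: one notify address, match on the written queue index
     * (requires decoding the written data on the MMIO exit). */
    static IoeventfdModel datamatch_notify(uint64_t notify_base, uint16_t queue,
                                           int fd)
    {
        return (IoeventfdModel){ notify_base, 2, true, queue, fd };
    }

    /* New scheme: per-queue notify address, zero length, no data match; the
     * queue is identified by the address alone, so the write can be routed
     * on KVM's fast MMIO bus without decoding the instruction. */
    static IoeventfdModel fastmmio_notify(uint64_t notify_base, uint16_t queue,
                                          uint64_t notify_off_multiplier, int fd)
    {
        return (IoeventfdModel){ notify_base + queue * notify_off_multiplier,
                                 0, false, 0, fd };
    }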
Lots of improvements were seen on a ept capable machine:
Guest RX:(TCP)
size/session/+throughput%/+cpu%/-+per cpu%/
64/1/+1.6807%/[-16.2421%]/[+21.3984%]/
64/2/+0.6091%/[-11.0187%]/[+13.0678%]/
64/4/+0.0553%/[-5.9768%]/[+6.4155%]/
64/8/+0.1206%/[-4.0057%]/[+4.2984%]/
256/1/-0.0031%/[-10.1166%]/[+11.2517%]/
256/2/-0.5058%/[-6.1656%]/+6.0317%]/
...
Guest TX:(TCP)
size/session/+throughput%/+cpu%/-+per cpu%/
64/1/[+18.9183%]/-0.2823%/[+19.2550%]/
64/2/[+13.5714%]/[+2.2675%]/[+11.0533%]/
64/4/[+13.1070%]/[+2.1817%]/[+10.6920%]/
64/8/[+13.0426%]/[+2.0887%]/[+10.7299%]/
256/1/[+36.2761%]/+6.3434%/[+28.1471%]/
...
1024/1/[+44.8873%]/+2.0811%/[+41.9335%]/
...
1024/4/+0.0228%/[-2.2044%]/[+2.2774%]/
...
16384/2/+0.0127%/[-5.0346%]/[+5.3148%]/
...
65535/1/[+0.0062%]/[-4.1183%]/[+4.3017%]/
65535/2/+0.0004%/[-4.2311%]/[+4.4185%]/
65535/4/+0.0107%/[-4.6106%]/[+4.8446%]/
65535/8/-0.0090%/[-5.5178%]/[+5.8306%]/
Latency:(TCP_RR)
size/session/+transaction rate%/+cpu%/-+per cpu%/
64/1/[+6.5248%]/[-9.2882%]/[+17.4322%]/
64/25/[+11.0854%]/[+0.8000%]/[+10.2038%]/
64/50/[+12.1076%]/[+2.4627%]/[+9.4131%]/
256/1/[+5.3677%]/[+10.5669%]/-4.7024%/
256/25/[+5.6402%]/-0.8962%/[+6.5955%]/
256/50/[+5.9685%]/[+1.7766%]/[+4.1188%]/
4096/1/+0.2508%/[-10.4941%]/[+12.0047%]/
4096/25/[+1.8533%]/-0.0273%/+1.8812%/
4096/50/[+1.2156%]/-1.4134%/+2.6667%/
Notes: data with '[]' is the one whose significance is greater than 95%.
Thanks Wenli Quan <wquan@redhat.com> for the benchmarking.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We don't migrate the following fields for virtio-pci:
uint32_t dfselect;
uint32_t gfselect;
uint32_t guest_features[2];
struct {
uint16_t num;
bool enabled;
uint32_t desc[2];
uint32_t avail[2];
uint32_t used[2];
} vqs[VIRTIO_QUEUE_MAX];
This will confuse the driver if migrating during initialization. Solve
this issue by:
- introducing transport-specific callbacks to load and store extra
virtqueue state.
- adding a new subsection for virtio to migrate transport-specific
modern device state (see the sketch below).
- implementing PCI-specific callbacks.
- adding a new property for virtio-pci controlling whether or not to
migrate the extra state.
- keeping migration compatible for 2.4 and older machine types.
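As a sketch, the new subsection could be shaped roughly like this,
following QEMU's VMStateDescription conventions (the names, the .needed
condition, and the field list are illustrative, not the exact ones from
the patch):

    #include "migration/vmstate.h"

    typedef struct ModernPCIStateModel {
        uint32_t dfselect;
        uint32_t gfselect;
        uint32_t guest_features[2];
        /* ... plus the per-queue num/enabled/desc/avail/used state ... */
    } ModernPCIStateModel;

    static bool modern_state_needed(void *opaque)
    {
        /* Placeholder: only send the subsection for modern devices on
         * machine types that opted in, so 2.4 and older streams are
         * unchanged. */
        (void)opaque;
        return true;
    }

    static const VMStateDescription vmstate_modern_state_sketch = {
        .name = "virtio-pci/modern-state-sketch",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = &modern_state_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(dfselect, ModernPCIStateModel),
            VMSTATE_UINT32(gfselect, ModernPCIStateModel),
            VMSTATE_UINT32_ARRAY(guest_features, ModernPCIStateModel, 2),
            VMSTATE_END_OF_LIST()
        }
    };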
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Postcopy detects accesses to pages that haven't been transferred yet
using userfaultfd, and it causes exceptions on pages that are 'not
present'.
Ballooning also causes pages to be marked as 'not present' when the
guest inflates the balloon.
Potentially a balloon could be inflated to discard pages that are
currently inflight during postcopy and that may be arriving at about
the same time.
To avoid this confusion, disable ballooning during postcopy.
When disabled we drop balloon requests from the guest. Since ballooning
is generally initiated by the host, the management system should avoid
initiating any balloon instructions to the guest during migration,
although it's not possible to know how long it would take a guest to
process a request made prior to the start of migration.
Guest-initiated ballooning will not know whether it has really freed a
page of host memory or not.
Queueing the requests until after migration would be nice, but is
non-trivial, since the set of inflate/deflate requests has to be
compared with the state of the page to know what the final outcome is
allowed to be.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
vring_map currently fails if one of the entries it's mapping is
contiguous in GPA but not in HVA address space. Introduce a mapped_len
parameter so it can handle this, returning the actual mapped length.
This will still fail if there's no space left in the sg, but luckily the
max queue size in use is currently 256, while the max sg size is 1024,
so we should be OK even if all entries happen to cross a single DIMM
boundary.
Won't work well with very small DIMM sizes, unfortunately:
e.g. this will fail with 4K DIMMs where a single
request might span a large number of DIMMs.
Let's hope these are uncommon - at least we are not breaking things.
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Reported-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1446047243-3221-2-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Use address_space_read to make sure we handle the case of an indirect
descriptor crossing a DIMM boundary correctly.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1446047243-3221-1-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Deprecated in favor of virtqueue_map.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
Drop use of the deprecated virtio_map_sg in virtio core.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
virtio_map_sg currently fails if one of the entries it's mapping is
contiguous in GPA but not in HVA address space. Introduce virtio_map,
which handles this by splitting sg entries.
This new API generally turns out to be a good idea since it's harder to
misuse: at least in one case the existing one was used incorrectly.
This will still fail if there's no space left in the sg, but luckily the
max queue size in use is currently 256, while the max sg size is 1024,
so we should be OK even if all entries happen to cross a single DIMM
boundary.
Won't work well with very small DIMM sizes, unfortunately:
e.g. this will fail with 4K DIMMs where a single
request might span a large number of DIMMs.
Let's hope these are uncommon - at least we are not breaking things.
Note: virtio-scsi calls virtio_map_sg on data loaded from network, and
validates input, asserting on failure. Copy the validating code here -
it will be dropped from virtio-scsi in a follow-up patch.
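A standalone sketch of the splitting logic, with guest_map() as a
hypothetical helper that may return a mapping shorter than requested
when the range is not HVA-contiguous:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>

    /* Hypothetical helper: map a guest-physical range; on return *plen holds
     * the length actually mapped contiguously in host virtual memory. */
    void *guest_map(uint64_t gpa, size_t *plen);

    /* Split one (gpa, len) entry across as many iovecs as needed.
     * Returns the new iovec count, or -1 if the sg array is exhausted. */
    static int map_one_entry(struct iovec *sg, unsigned num_sg, unsigned max_sg,
                             uint64_t gpa, size_t len)
    {
        while (len) {
            size_t mapped = len;
            void *hva;

            if (num_sg == max_sg) {
                return -1;          /* no room left for another split piece */
            }
            hva = guest_map(gpa, &mapped);
            if (!hva) {
                return -1;
            }
            sg[num_sg].iov_base = hva;
            sg[num_sg].iov_len = mapped;
            num_sg++;
            gpa += mapped;
            len -= mapped;
        }
        return (int)num_sg;
    }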
Reported-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
We are sending msg fields, so use sizeof on those and not on local
variables that happen to have a matching type.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We are using local msg structures everywhere; use them
for sizeof as well.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
vhost: build fix
Fix build breakages when using older gcc.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 20:36:07 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
vhost-user: fix up rhel6 build
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Build on RHEL6 fails:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42875
Apparently unnamed unions couldn't use C99 named field initializers.
Let's just name the payload union field.
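A minimal reproduction of the issue and the fix (the message layout is
illustrative):

    #include <stdint.h>

    /* Anonymous union: designated initializers for its members trip the old
     * gcc bug referenced above. */
    typedef struct MsgAnon {
        uint32_t request;
        union {
            uint64_t u64;
            int fd;
        };
    } MsgAnon;

    /* Named union field: the designated initializer below works everywhere. */
    typedef struct MsgNamed {
        uint32_t request;
        union {
            uint64_t u64;
            int fd;
        } payload;
    } MsgNamed;

    static MsgNamed make_msg(uint32_t req, uint64_t val)
    {
        return (MsgNamed){ .request = req, .payload.u64 = val };
    }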
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
vhost, pc, virtio features, fixes, cleanups
New features:
VT-d support for devices behind a bridge
vhost-user migration support
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 12:39:19 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (37 commits)
hw/isa/lpc_ich9: inject the SMI on the VCPU that is writing to APM_CNT
i386: keep cpu_model field in MachineState uptodate
vhost: set the correct queue index in case of migration with multiqueue
piix: fix resource leak reported by Coverity
seccomp: add memfd_create to whitelist
vhost-user-test: check ownership during migration
vhost-user-test: add live-migration test
vhost-user-test: learn to tweak various qemu arguments
vhost-user-test: wrap server in TestServer struct
vhost-user-test: remove useless static check
vhost-user-test: move wait_for_fds() out
vhost: add migration block if memfd failed
vhost-user: use an enum helper for features mask
vhost user: add rarp sending after live migration for legacy guest
vhost user: add support of live migration
net: add trace_vhost_user_event
vhost-user: document migration log
vhost: use a function for each call
vhost-user: add a migration blocker
vhost-user: send log shm fd along with log_base
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
When a live migration is started, the log address used to mark dirty
pages is provided to the vhost backend through the vhost_dev_set_log
function.
This function is called for each queue pair, but the queue index is
wrongly set: it is always set to the first queue pair. The vhost backend
then loses the descriptor addresses of queue pairs greater than 1, and
its behaviour becomes unpredictable.
The queue index is now computed by taking account of the vq_index (to
retrieve the queue pair index) and calling the backend's
vhost_get_vq_index method.
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Cc: qemu-stable@nongnu.org
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
The VHOST_USER_PROTOCOL_FEATURE_MASK will be automatically updated when
adding new features to the enum.
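Roughly the pattern involved (the enum and macro names are
illustrative):

    /* Deriving the mask from the enum's final entry means a feature added
     * before FEAT_MAX_MODEL automatically extends the mask. */
    typedef enum VhostUserProtocolFeatureModel {
        FEAT_MQ_MODEL = 0,
        FEAT_LOG_SHMFD_MODEL = 1,
        FEAT_RARP_MODEL = 2,
        FEAT_MAX_MODEL,
    } VhostUserProtocolFeatureModel;

    #define PROTOCOL_FEATURE_MASK_MODEL ((1ULL << FEAT_MAX_MODEL) - 1)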
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
[Adapted from mailing list discussion - Marc-André]
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
A new vhost-user message is added to allow QEMU to ask the vhost-user
backend to broadcast a fake RARP after live migration, for guests
without the GUEST_ANNOUNCE capability.
This new message is sent only if the backend supports the new
VHOST_USER_PROTOCOL_F_RARP protocol feature.
The payload of this new message is the MAC address of the guest (not
known by the backend). The MAC address is copied into the first 6 bytes
of a u64 to avoid creating a new payload message type.
This new message has no equivalent ioctl, so a new callback is added to
the userOps structure to send the request.
Upon reception of this new message the vhost-user backend must generate
and broadcast a fake RARP request to announce that the migration has
finished.
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
[Rebased and fixed checkpatch errors - Marc-André]
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Replace the generic vhost_call() with specific functions for each call
to help with type safety and changing arguments.
While doing this, I found that "unsigned long long" and "uint64_t" were
used interchangeably, causing compilation warnings; use uint64_t
instead, as the vhost protocol specifies.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[Fix enum usage and MQ - Thibaut Collet]
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
If VHOST_USER_PROTOCOL_F_LOG_SHMFD is not announced, block vhost-user
migration. The blocker is removed in vhost_dev_cleanup().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Send the shm for the dirty pages logging if the backend supports
VHOST_USER_PROTOCOL_F_LOG_SHMFD. Wait for a reply to make sure
the old log is no longer used.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
If the backend requires it, allocate shareable memory.
vhost_log_get() now uses two globals, "vhost_log" and "vhost_log_shm";
that way there is a common non-shareable log and a common shareable one.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|