|
realize method
In 1811e64 'hw/virtio: Add PCIe capability to virtio devices', the
QEMU_PCI_CAP_EXPRESS capability was added to virtio's pci_dev, within
'virtio_pci_realize' - the pci device object realization method.
This occurs too late, as 'pci_qdev_realize' (the DeviceClass.realize of
TYPE_PCI_DEVICE) has already been called, without knowing that the
device instance is indeed an "express" instance, thus allocating
insufficient PCI config space.
As a result, the device may crash upon an attempt to write to the PCIe
config space.
Fix this by arming the QEMU_PCI_CAP_EXPRESS capability early, in
virtio-pci's own DeviceClass realize method.
This also makes the code cleaner, as 'virtio_pci_realize' may now use
the 'pci_is_express' predicate when needed.
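As a minimal sketch of the idea (function and field names here are
illustrative assumptions, not necessarily the exact patch):

    /* DeviceClass.realize of the virtio-pci proxy (sketch) */
    static void virtio_pci_dc_realize(DeviceState *qdev, Error **errp)
    {
        VirtIOPCIProxy *proxy = VIRTIO_PCI(qdev);
        VirtioPCIClass *k = VIRTIO_PCI_GET_CLASS(qdev);

        if (virtio_pci_modern_enabled(proxy)) {   /* hypothetical predicate */
            /* arm the flag *before* pci_qdev_realize sizes the config space */
            proxy->pci_dev.cap_present |= QEMU_PCI_CAP_EXPRESS;
        }

        k->parent_dc_realize(qdev, errp);   /* chain to TYPE_PCI_DEVICE realize */
    }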
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@ravellosystems.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|
|
If you run a qemu advertising VERSION_1 with an old kernel where
vhost did not yet support VERSION_1, you'll end up with a device
that is {modern pci|ccw revision 1} but does not advertise VERSION_1.
This is not a sensible configuration and is rejected by the Linux
guest drivers.
To fix this, add a ->post_plugged() callback, invoked after features
have been queried, that can handle the VERSION_1 bit being withdrawn,
and change ccw to fall back to revision 0 if VERSION_1 is gone.
Note that pci is _not_ fixed; we'll need to rethink the approach
for the next release but at least for pci it's not a regression.
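As a sketch, the hook could look roughly like this (the placement in
VirtioBusClass and the ccw field name are assumptions):

    /* VirtioBusClass (sketch): called after features have been queried */
    void (*post_plugged)(DeviceState *d, Error **errp);

    /* ccw implementation (sketch): fall back if VERSION_1 was withdrawn */
    static void virtio_ccw_post_plugged(DeviceState *d, Error **errp)
    {
        VirtioCcwDevice *dev = VIRTIO_CCW_DEVICE(d);
        VirtIODevice *vdev = virtio_bus_get_device(&dev->bus);

        if (!virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
            dev->max_rev = 0;   /* hypothetical field capping the revision */
        }
    }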
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This reverts commit 3a12f32229a046f4d4ab0a3a52fb01d2d5a1ab76.
In case of live migration, several queues can be enabled, not only the
first one. So informing the backend that only the first queue is
enabled is wrong.
Reported-by: Thibaut Collet <thibaut.collet@6wind.com>
Cc: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
|
|
commit 2b8819c6eee517c1582983773f8555bb3f9ed645
("vhost-user: modify SET_LOG_BASE to pass mmap size and offset")
passes the log size in units of 4-byte chunks instead of the
expected size in bytes.
Fix this up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We are currently only sending the VRING_ENABLE message for the first
ring; that's wrong: we must start/stop them all.
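Roughly, the fix makes the enable/disable path loop over every ring of
the device rather than ring 0 only (a sketch; the message helper name
is an assumption):

    static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
    {
        int i;

        for (i = 0; i < dev->nvqs; i++) {
            struct vhost_vring_state state = {
                .index = dev->vq_index + i,
                .num = enable,
            };

            /* assumed helper wrapping the VHOST_USER_SET_VRING_ENABLE message */
            vhost_user_set_vring(dev, VHOST_USER_SET_VRING_ENABLE, &state);
        }
        return 0;
    }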
Reported-by: Victor Kaplansky <victork@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
When we get an unexpected response, print out
the original request.
Helps debug protocol errors tremendously.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
But do not depend on the PROTOCOL_F_MQ feature bit, so that we can use
SET_VRING_ENABLE to signal the backend on stop, even if MQ is disabled.
That's reasonable, since we will have at least one queue pair.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The virtio devices are converted to PCI-Express
if they are plugged into a PCI-Express bus and
the 'modern' protocol is enabled.
Devices plugged directly into the Root Complex as
Integrated Endpoints remain PCI.
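A sketch of the gating condition in the realize path (the exact
modern-enabled check and capability setup are assumptions):

    /* convert to Express only on an Express bus, but not when plugged
     * directly into the Root Complex as an Integrated Endpoint */
    if (pci_is_express(pci_dev) &&
        pci_bus_is_express(pci_dev->bus) &&
        !pci_bus_is_root(pci_dev->bus)) {
        /* add the PCIe endpoint capability here,
         * e.g. via pcie_endpoint_cap_init() */
    }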
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Send SET_VRING_ENABLE at start/stop, to give the backend
an explicit sign of our state.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This patch basically reverts commit d1f8b30e.
It turned out that it breaks stuff, so revert it:
http://lists.nongnu.org/archive/html/qemu-devel/2015-10/msg00949.html
CC: "Michael S. Tsirkin" <mst@redhat.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Unlike the kernel, a vhost-user application accesses the log table by
mmapping it into its user space. This change adds two new fields to the
VhostUserMsg payload, mmap_size and mmap_offset, and makes QEMU pass
them to the vhost-user application in the VHOST_USER_SET_LOG_BASE
request.
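A sketch of the new payload (the struct name is an assumption):

    typedef struct VhostUserLog {
        uint64_t mmap_size;    /* size of the mmap-able log area, in bytes */
        uint64_t mmap_offset;  /* offset of the log within the mmap'ed fd */
    } VhostUserLog;

    /* carried in the VhostUserMsg payload for VHOST_USER_SET_LOG_BASE;
     * the log fd itself travels as ancillary data (SCM_RIGHTS) */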
Signed-off-by: Victor Kaplansky <victork@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The guest always gets zero when reading queue_enable. This violates the
spec. Fix this by setting queue_enable to true on any guest write and
setting it to zero during reset.
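A sketch of the behaviour, assuming the proxy keeps a per-queue
'enabled' flag (a fragment; names are illustrative):

    case VIRTIO_PCI_COMMON_Q_ENABLE:
        /* any guest write to queue_enable marks the selected queue enabled */
        proxy->vqs[vdev->queue_sel].enabled = 1;
        break;

    /* ...and on device reset: */
    for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
        proxy->vqs[i].enabled = 0;
    }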
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We used to use MMIO for notification. This could be slow on some archs
(e.g. on x86 without EPT). So this patch introduces a PIO BAR and a PIO
notification capability for modern devices. This ability is enabled
through the "modern-pio-notify" property of virtio PCI devices and is
disabled by default. Management can enable it when it thinks it is
needed.
Benchmarks show almost no obvious difference compared to a legacy
device on machines without EPT. Thanks to Wenli Quan <wquan@redhat.com>
for the benchmarking.
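The property could be declared along these lines (a sketch; the flag
bit name is an assumption):

    /* off by default; management opts in explicitly, e.g.
     *   -device virtio-net-pci,modern-pio-notify=on */
    DEFINE_PROP_BIT("modern-pio-notify", VirtIOPCIProxy, flags,
                    VIRTIO_PCI_FLAG_MODERN_PIO_NOTIFY_BIT, false),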
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We currently use a data-match eventfd for 1.0 notification. This could
be slow since software decoding is needed for the MMIO exit. To speed
this up, we can switch to a zero-length MMIO eventfd for 1.0
notification, since we can examine the queue index directly from the
write address. The KVM kernel module can utilize this by registering it
on the fast MMIO bus, which can be as fast as PIO on EPT-capable
machines when fast MMIO is supported by the host kernel.
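A sketch of the difference using memory_region_add_eventfd() (the call
sites and variable names are assumptions):

    /* old: 2-byte datamatch eventfd; the MMIO exit needs software decoding */
    memory_region_add_eventfd(notify_mr, notify_off, 2, true, vq_idx, notifier);

    /* new: zero length, no datamatch; the queue is identified purely by the
     * write address, so KVM can route it via the fast MMIO bus */
    memory_region_add_eventfd(notify_mr, notify_off, 0, false, 0, notifier);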
Lots of improvements were seen on an EPT-capable machine:
Guest RX:(TCP)
size/session/+throughput%/+cpu%/-+per cpu%/
64/1/+1.6807%/[-16.2421%]/[+21.3984%]/
64/2/+0.6091%/[-11.0187%]/[+13.0678%]/
64/4/+0.0553%/[-5.9768%]/[+6.4155%]/
64/8/+0.1206%/[-4.0057%]/[+4.2984%]/
256/1/-0.0031%/[-10.1166%]/[+11.2517%]/
256/2/-0.5058%/[-6.1656%]/[+6.0317%]/
...
Guest TX:(TCP)
size/session/+throughput%/+cpu%/-+per cpu%/
64/1/[+18.9183%]/-0.2823%/[+19.2550%]/
64/2/[+13.5714%]/[+2.2675%]/[+11.0533%]/
64/4/[+13.1070%]/[+2.1817%]/[+10.6920%]/
64/8/[+13.0426%]/[+2.0887%]/[+10.7299%]/
256/1/[+36.2761%]/+6.3434%/[+28.1471%]/
...
1024/1/[+44.8873%]/+2.0811%/[+41.9335%]/
...
1024/4/+0.0228%/[-2.2044%]/[+2.2774%]/
...
16384/2/+0.0127%/[-5.0346%]/[+5.3148%]/
...
65535/1/[+0.0062%]/[-4.1183%]/[+4.3017%]/
65535/2/+0.0004%/[-4.2311%]/[+4.4185%]/
65535/4/+0.0107%/[-4.6106%]/[+4.8446%]/
65535/8/-0.0090%/[-5.5178%]/[+5.8306%]/
Latency:(TCP_RR)
size/session/+transaction rate%/+cpu%/-+per cpu%/
64/1/[+6.5248%]/[-9.2882%]/[+17.4322%]/
64/25/[+11.0854%]/[+0.8000%]/[+10.2038%]/
64/50/[+12.1076%]/[+2.4627%]/[+9.4131%]/
256/1/[+5.3677%]/[+10.5669%]/-4.7024%/
256/25/[+5.6402%]/-0.8962%/[+6.5955%]/
256/50/[+5.9685%]/[+1.7766%]/[+4.1188%]/
4096/1/+0.2508%/[-10.4941%]/[+12.0047%]/
4096/25/[+1.8533%]/-0.0273%/+1.8812%/
4096/50/[+1.2156%]/-1.4134%/+2.6667%/
Note: data in '[]' has a significance greater than 95%.
Thanks Wenli Quan <wquan@redhat.com> for the benchmarking.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We don't migrate the following fields for virtio-pci:
    uint32_t dfselect;
    uint32_t gfselect;
    uint32_t guest_features[2];
    struct {
        uint16_t num;
        bool enabled;
        uint32_t desc[2];
        uint32_t avail[2];
        uint32_t used[2];
    } vqs[VIRTIO_QUEUE_MAX];
This will confuse the driver if migrating during initialization. Solve
this issue by:
- introducing transport-specific callbacks to load and store extra
  virtqueue state (see the sketch after this list).
- adding a new subsection for virtio to migrate transport-specific
  modern device state.
- implementing the pci-specific callbacks.
- adding a new property for virtio-pci to control whether or not to
  migrate the extra state.
- keeping migration compatible with 2.4 and older machine types.
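A sketch of the transport callbacks (exact names and their placement in
VirtioBusClass are assumptions):

    /* VirtioBusClass additions (sketch) */
    bool (*has_extra_state)(DeviceState *d);
    void (*save_extra_state)(DeviceState *d, QEMUFile *f);
    int (*load_extra_state)(DeviceState *d, QEMUFile *f);

The pci implementation would save/load the fields listed above, guarded
by a virtio subsection and the new compat property.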
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Postcopy detects accesses to pages that haven't been transferred yet
using userfaultfd, and it causes exceptions on pages that are 'not
present'.
Ballooning also causes pages to be marked as 'not present' when the
guest inflates the balloon.
Potentially a balloon could be inflated to discard pages that are
currently inflight during postcopy and that may be arriving at about
the same time.
To avoid this confusion, disable ballooning during postcopy.
When disabled, we drop balloon requests from the guest. Since ballooning
is generally initiated by the host, the management system should avoid
issuing any balloon instructions to the guest during migration, although
it's not possible to know how long it would take a guest to process a
request made prior to the start of migration.
Guest-initiated ballooning will not know whether it has really freed a
page of host memory or not.
Queueing the requests until after migration would be nice, but is
non-trivial, since the set of inflate/deflate requests has to be
compared with the state of the page to know what the final outcome is
allowed to be.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
vring_map currently fails if one of the entries it's mapping is
contiguous in GPA but not HVA address space. Introduce a mapped_len
parameter so it can handle this, returning the actual mapped length.
This will still fail if there's no space left in the sg, but luckily max
queue size in use is currently 256, while max sg size is 1024, so we
should be OK even if all entries happen to cross a single DIMM boundary.
Won't work well with very small DIMM sizes, unfortunately:
e.g. this will fail with 4K DIMMs where a single
request might span a large number of DIMMs.
Let's hope these are uncommon - at least we are not breaking things.
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Reported-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1446047243-3221-2-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Use address_space_read to make sure we handle the case of an indirect
descriptor crossing DIMM boundary correctly.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1446047243-3221-1-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Deprecated in favor of virtqueue_map.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
Drop use of the deprecated virtio_map_sg in virtio core.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
virtio_map_sg currently fails if one of the entries it's mapping is
contiguous in GPA but not HVA address space. Introduce virtio_map which
handles this by splitting sg entries.
This new API generally turns out to be a good idea since it's harder to
misuse: at least in one case the existing one was used incorrectly.
This will still fail if there's no space left in the sg, but luckily max
queue size in use is currently 256, while max sg size is 1024, so we
should be OK even if all entries happen to cross a single DIMM boundary.
Won't work well with very small DIMM sizes, unfortunately:
e.g. this will fail with 4K DIMMs where a single
request might span a large number of DIMMs.
Let's hope these are uncommon - at least we are not breaking things.
Note: virtio-scsi calls virtio_map_sg on data loaded from network, and
validates input, asserting on failure. Copy the validating code here -
it will be dropped from virtio-scsi in a follow-up patch.
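The core of the splitting loop could look roughly like this (a sketch;
the helper name and its exact shape are assumptions):

    /* sketch: map [pa, pa+sz) into iovecs, splitting at HVA discontinuities */
    static void map_range(struct iovec *sg, unsigned *p_num, unsigned max_num,
                          hwaddr pa, hwaddr sz, int is_write)
    {
        unsigned num = *p_num;

        while (sz) {
            hwaddr len = sz;
            void *p;

            if (num >= max_num) {
                error_report("virtio: too many sg entries");
                exit(1);   /* mirrors the validation copied from virtio-scsi */
            }
            p = cpu_physical_memory_map(pa, &len, is_write);
            if (!p) {
                error_report("virtio: map failure");
                exit(1);
            }
            sg[num].iov_base = p;
            sg[num].iov_len = len;   /* may be shorter than sz at a DIMM boundary */
            pa += len;
            sz -= len;
            num++;
        }
        *p_num = num;
    }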
Reported-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
|
|
We are sending msg fields, use sizeof on these
and not on local variables which happen to
have a matching type.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
We are using local msg structures everywhere, use them
for sizeof as well.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
vhost: build fix
Fix build breakages when using older gcc.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 20:36:07 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
vhost-user: fix up rhel6 build
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Build on RHEL6 fails:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42875
Apparently unnamed unions couldn't use C99 named field initializers.
Let's just name the payload union field.
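Roughly the resulting layout (member list abridged):

    typedef struct VhostUserMsg {
        VhostUserRequest request;
        uint32_t flags;
        uint32_t size;
        union {
            uint64_t u64;
            struct vhost_vring_state state;
            /* ... */
        } payload;   /* named, so initializers can say .payload.u64 = ... */
    } QEMU_PACKED VhostUserMsg;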
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
vhost, pc, virtio features, fixes, cleanups
New features:
VT-d support for devices behind a bridge
vhost-user migration support
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 12:39:19 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (37 commits)
hw/isa/lpc_ich9: inject the SMI on the VCPU that is writing to APM_CNT
i386: keep cpu_model field in MachineState uptodate
vhost: set the correct queue index in case of migration with multiqueue
piix: fix resource leak reported by Coverity
seccomp: add memfd_create to whitelist
vhost-user-test: check ownership during migration
vhost-user-test: add live-migration test
vhost-user-test: learn to tweak various qemu arguments
vhost-user-test: wrap server in TestServer struct
vhost-user-test: remove useless static check
vhost-user-test: move wait_for_fds() out
vhost: add migration block if memfd failed
vhost-user: use an enum helper for features mask
vhost user: add rarp sending after live migration for legacy guest
vhost user: add support of live migration
net: add trace_vhost_user_event
vhost-user: document migration log
vhost: use a function for each call
vhost-user: add a migration blocker
vhost-user: send log shm fd along with log_base
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
When a live migration is started, the log address used to mark dirty
pages is provided to the vhost backend through the vhost_dev_set_log
function.
This function is called for each queue pair, but the queue index is
wrongly set: it is always set to the first queue pair. The vhost backend
then loses the descriptor addresses of the queue pairs greater than 1
and its behaviour becomes unpredictable.
The queue index is now computed by taking the vq_index into account (to
retrieve the queue pair index) and calling the vhost_get_vq_index
method of the backend.
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Cc: qemu-stable@nongnu.org
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
The VHOST_USER_PROTOCOL_FEATURE_MASK will be automatically updated when
adding new features to the enum.
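For illustration, the pattern is a sentinel at the end of the enum (the
feature values shown already exist; only the shape matters here):

    enum VhostUserProtocolFeature {
        VHOST_USER_PROTOCOL_F_MQ = 0,
        VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1,
        /* new features get appended here... */
        VHOST_USER_PROTOCOL_F_MAX
    };

    /* grows automatically as features are added before F_MAX */
    #define VHOST_USER_PROTOCOL_FEATURE_MASK ((1 << VHOST_USER_PROTOCOL_F_MAX) - 1)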
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
[Adapted from mailing list discussion - Marc-André]
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
A new vhost-user message is added to allow QEMU to ask the vhost-user
backend to broadcast a fake RARP after live migration, for guests
without the GUEST_ANNOUNCE capability.
This new message is sent only if the backend supports the new
VHOST_USER_PROTOCOL_F_RARP protocol feature.
The payload of this new message is the MAC address of the guest (not
known by the backend). The MAC address is copied into the first 6 bytes
of a u64 to avoid creating a new payload message type.
This new message has no equivalent ioctl, so a new callback is added to
the userOps structure to send the request.
Upon reception of this new message the vhost-user backend must generate
and broadcast a fake RARP request to announce that the migration has
finished.
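A sketch of the request on the QEMU side (the message name is an
assumption; mac_addr stands for the guest MAC held by the net client):

    VhostUserMsg msg = {
        .request = VHOST_USER_SEND_RARP,    /* assumed message name */
        .flags = VHOST_USER_VERSION,
        .size = sizeof(msg.payload.u64),
    };

    /* guest MAC goes into the first 6 bytes of the u64 payload */
    memcpy(&msg.payload.u64, mac_addr, 6);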
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
[Rebased and fixed checkpatch errors - Marc-André]
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Replace the generic vhost_call() with specific functions for each call,
to help with type safety and with changing arguments.
While doing this, I found that "unsigned long long" and "uint64_t" were
used interchangeably, causing compilation warnings; use uint64_t
instead, as the vhost & protocol specify.
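A sketch of the shape of the change (the VhostOps member names shown
are illustrative):

    /* before: one untyped entry point */
    int (*vhost_call)(struct vhost_dev *dev, unsigned long request, void *arg);

    /* after: one typed callback per request, e.g. */
    int (*vhost_set_mem_table)(struct vhost_dev *dev, struct vhost_memory *mem);
    int (*vhost_set_features)(struct vhost_dev *dev, uint64_t features);
    int (*vhost_get_features)(struct vhost_dev *dev, uint64_t *features);
    int (*vhost_set_vring_num)(struct vhost_dev *dev, struct vhost_vring_state *ring);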
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[Fix enum usage and MQ - Thibaut Collet]
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
If VHOST_USER_PROTOCOL_F_LOG_SHMFD is not announced, block vhost-user
migration. The blocker is removed in vhost_dev_cleanup().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Send the shm for the dirty pages logging if the backend supports
VHOST_USER_PROTOCOL_F_LOG_SHMFD. Wait for a reply to make sure
the old log is no longer used.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
If the backend requires it, allocate shareable memory.
vhost_log_get() now uses two globals, "vhost_log" and "vhost_log_shm";
that way there is a common non-shareable log and a common shareable one.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Check if the backend has VHOST_USER_PROTOCOL_F_LOG_SHMFD feature and
require a shared log.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Split the VHOST_SET_LOG_BASE call into a separate function callback, so that
type safety works and more arguments can be added in the next patches.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
|
|
Don't initialize the vhost backend if the number of memslots exceeds
the supported limit. This prevents failures down the road when the
backend is actually started.
[MST: rewrite commit log]
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
It will allow other parts of QEMU to check whether it's safe to map a
memory region during hotplug/runtime.
That way the hotplug path will have a chance to cancel the hotplug
operation instead of crashing in vhost_commit().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
In-kernel ITS emulation on ARM64 will require supplying requester IDs.
These IDs can now be retrieved from the device pointer using the new
pci_requester_id() function.
This patch adds a pci_dev pointer to the KVM GSI routing functions and
makes the callers pass it.
The x86 architecture does not use requester IDs, but
hw/i386/kvm/pci-assign.c was also changed to pass the PCI device
pointer instead of NULL, for consistency with the rest of the code.
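A sketch of the resulting call shape (the requester-ID encoding follows
the PCI spec; variable names are illustrative):

    /* requester ID: bus number in the high byte, devfn in the low byte */
    uint16_t rid = pci_requester_id(pci_dev);

    /* the routing helper now takes the device so the in-kernel ITS can
     * derive the requester ID from it */
    virq = kvm_irqchip_add_msi_route(kvm_state, msg, pci_dev);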
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-Id: <ce081423ba2394a4efc30f30708fca07656bc500.1444916432.git.p.fedin@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
staging
Pull request
v2:
* Fix virtio 16lx -> HWADDR_PRIx format specifier [Peter]
# gpg: Signature made Mon 12 Oct 2015 11:19:06 BST using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
* remotes/stefanha/tags/block-pull-request:
sdhci.c: Limit the maximum block size
block: switch from g_slice allocator to malloc
virtio dataplane: adapt dataplane for virtio Version 1
virtio-blk: use blk_io_plug/unplug for Linux AIO batching
sdhci: Pass drive parameter to sdhci-pci via qdev property
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Let dataplane allocate separate regions for the desc/avail/used rings.
Take VIRTIO_RING_F_EVENT_IDX into account to increase the used/avail
ring sizes accordingly.
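The sizes follow directly from the virtio ring layout; a sketch of the
computation (num is the queue size, event_idx says whether
VIRTIO_RING_F_EVENT_IDX was negotiated):

    size_t event = event_idx ? 2 : 0;

    size_t desc_size  = 16 * num;                 /* 16 bytes per descriptor */
    size_t avail_size = 2 + 2 + 2 * num + event;  /* flags, idx, ring[], used_event */
    size_t used_size  = 2 + 2 + 8 * num + event;  /* flags, idx, ring[], avail_event */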
[Fix 32-bit builds by changing 16lx format specifier to HWADDR_PRIx.
--Stefan]
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Tested-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Message-id: 1441625636-23773-1-git-send-email-pmorel@linux.vnet.ibm.com
(changed __virtio16 into uint16_t,
map descriptor table and available ring read-only)
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
When CONFIG_LINUX is off, devices "virtio-keyboard-device",
"virtio-mouse-device", "virtio-tablet-device" and
"virtio-input-host-device" aren't compiled in, yet
"virtio-keyboard-pci", "virtio-mouse-pci", "virtio-tablet-pci" and
"virtio-input-host-pci" still are. Attempts to introspect them crash,
e.g.
$ qemu-system-x86_64 -device virtio-tablet-pci,help
**
ERROR:/work/armbru/qemu/qom/object.c:333:object_initialize_with_type: assertion failed: (type != NULL)
Broken in commit 710e2d9 and commit 006a5ed.
Fix by compiling the "virtio-FOO-pci" exactly when compiling the
"virtio-FOO-device": compile "virtio-keyboard-device",
"virtio-mouse-device", "virtio-tablet-device" regardless of
CONFIG_LINUX, and compile "virtio-input-host-pci" only for
CONFIG_LINUX.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <1444320700-26260-3-git-send-email-armbru@redhat.com>
|
|
And do not issue an error_report in that case.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This patch introduces virtqueue_discard() to discard a descriptor and
unmap the sgs. This will be used by the patch that will discard a
descriptor when a packet is truncated.
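The new helper's shape (a sketch; the exact prototype may differ):

    /* undo a virtqueue_pop(): unmap elem's sg lists and step the last
     * seen avail index back so the descriptor is processed again later */
    void virtqueue_discard(VirtQueue *vq, const VirtQueueElement *elem,
                           unsigned int len);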
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Factor out the sg unmapping logic. This will be reused by the patch
that can discard a descriptor.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Andrew James <andrew.james@hpe.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable a
specific virtqueue, similar to attaching/detaching a queue for a tap
device.
The virtio driver on the guest doesn't have to use the maximum number
of virtqueue pairs; it can enable any number of virtqueues, ranging
from 1 to the maximum number of virtqueue pairs.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|
|
This patch is initially based on a patch from Nikolay Nikolaev.
This patch adds vhost-user multiple queue support, by creating a nc
and vhost_net pair for each queue.
QEMU exits if it finds that the backend can't support the number of
requested queues (given by the queues=# option). The max number is
queried by a new message, VHOST_USER_GET_QUEUE_NUM, which is sent only
when the protocol feature VHOST_USER_PROTOCOL_F_MQ is present.
The max queue check is done at the vhost-user initiation stage. We
initiate one queue first, which, in the meantime, also gets the
max_queues the backend supports.
In an older version, it was reported that some messages were sent more
times than necessary. We came to an agreement with Michael that we
could categorize vhost-user messages into 2 types: non-vring-specific
messages, which should be sent only once, and vring-specific messages,
which should be sent per queue.
Here I introduce a helper function, vhost_user_one_time_request(),
which lists the following messages as non-vring-specific (see the
sketch after the list):
VHOST_USER_SET_OWNER
VHOST_USER_RESET_DEVICE
VHOST_USER_SET_MEM_TABLE
VHOST_USER_GET_QUEUE_NUM
For the above messages, we simply ignore them when they are not sent
for the first time.
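A sketch of the helper matching the list above:

    static bool vhost_user_one_time_request(VhostUserRequest request)
    {
        switch (request) {
        case VHOST_USER_SET_OWNER:
        case VHOST_USER_RESET_DEVICE:
        case VHOST_USER_SET_MEM_TABLE:
        case VHOST_USER_GET_QUEUE_NUM:
            return true;   /* non-vring specific: send for the first queue only */
        default:
            return false;  /* vring specific: send per queue */
        }
    }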
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|
|
Subtracting the base (dev->vq_index) from the idx for vhost-kernel, and
then adding it back for vhost-user, doesn't seem right. This introduces
a new method, vhost_backend_get_vq_index(), for getting the right vq
index for the following vhost message calls.
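A sketch of the two implementations implied here (the assert details
are assumptions):

    /* vhost-kernel: ioctls take a per-device ring index */
    static int vhost_kernel_get_vq_index(struct vhost_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx - dev->vq_index;
    }

    /* vhost-user: messages carry the absolute ring index, so keep it */
    static int vhost_user_get_vq_index(struct vhost_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx;
    }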
Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|
|
This is for querying how many queues the backend supports if it has MQ
support (i.e. when the VHOST_USER_PROTOCOL_F_MQ flag is set in the
queried protocol features).
vhost_net_get_max_queues() is the interface to export that value, and
to tell whether the backend supports the number of queues the user
requested, which is done in the following patch.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
|