path: root/net/vhost-vdpa.c
Age  Commit message  Author
2023-07-10  vdpa: Allow VIRTIO_NET_F_CTRL_RX in SVQ  (Hawkins Jiawei)
Enable SVQ with VIRTIO_NET_F_CTRL_RX feature. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <5d6173a6d7c4c514c98362b404c019f52d73b06c.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Avoid forwarding large CVQ command failures  (Hawkins Jiawei)
Due to the size limitation of the out buffer sent to the vdpa device, which is determined by vhost_vdpa_net_cvq_cmd_len(), an oversized CVQ command is truncated in QEMU. As a result, the vdpa device rejects this flawed CVQ command. The problem is that the VIRTIO_NET_CTRL_MAC_TABLE_SET CVQ command has a variable length, which may exceed vhost_vdpa_net_cvq_cmd_len() if the guest sets more than `MAC_TABLE_ENTRIES` MAC addresses for the filter table. This patch solves the problem with the following steps:
* Increase the out buffer size to vhost_vdpa_net_cvq_cmd_page_len(), which represents the size of the buffer that is allocated and mmapped. This ensures that everything works correctly as long as the guest sets fewer than `(vhost_vdpa_net_cvq_cmd_page_len() - sizeof(struct virtio_net_ctrl_hdr) - 2 * sizeof(struct virtio_net_ctrl_mac)) / ETH_ALEN` MAC addresses. Considering how unlikely it is for the guest to set more MAC addresses than that for the filter table, this should work fine for the majority of cases.
* If the CVQ command exceeds vhost_vdpa_net_cvq_cmd_page_len(), instead of directly sending this CVQ command, QEMU should send a VIRTIO_NET_CTRL_RX_PROMISC CVQ command to the vdpa device. Additionally, a fake VIRTIO_NET_CTRL_MAC_TABLE_SET command including (`MAC_TABLE_ENTRIES` + 1) non-multicast MAC addresses and (`MAC_TABLE_ENTRIES` + 1) multicast MAC addresses should be provided to the device model. By doing so, the vdpa device turns promiscuous mode on, aligning with the VirtIO standard. The device model marks `n->mac_table.uni_overflow` and `n->mac_table.multi_overflow`, which aligns with the state of the vdpa device.
Note that the bug cannot be triggered at the moment, since the VIRTIO_NET_F_CTRL_RX feature is not enabled for SVQ. Fixes: 7a7f87e94c ("vdpa: Move command buffers map to start of net device") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <267e15e4eed2d7aeb9887f193da99a13d22a2f1d.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
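A rough worked example of the limit mentioned above, assuming a 4 KiB page and the usual command layout (the numbers here are illustrative, not taken from the patch):

    #include <stdio.h>

    int main(void)
    {
        const int page_len = 4096;  /* assumed page size for the example */
        const int hdr_len  = 2;     /* struct virtio_net_ctrl_hdr: class + cmd */
        const int cnt_len  = 4;     /* le32 "entries" counter, one per MAC list */
        const int eth_alen = 6;

        int max_macs = (page_len - hdr_len - 2 * cnt_len) / eth_alen;
        printf("MAC addresses that still fit in one page: %d\n", max_macs);
        return 0;
    }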
2023-07-10  vdpa: Accessing CVQ header through its structure  (Hawkins Jiawei)
We can access the CVQ header through `struct virtio_net_ctrl_hdr`, instead of accessing it through a `uint8_t` pointer, which improves the code's readability and maintainability. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <cd522e06a4371e9d6b8a1c1a86f90a92401d56e8.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
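A minimal standalone sketch of the pattern this change describes: reading the control header through a typed struct rather than raw byte offsets. The struct here is a local stand-in for the VirtIO header definition, and the buffer contents are invented for the example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Local stand-in for struct virtio_net_ctrl_hdr from the VirtIO headers. */
    struct virtio_net_ctrl_hdr {
        uint8_t class;
        uint8_t cmd;
    };

    int main(void)
    {
        uint8_t out_buf[64] = { 0x04, 0x01 };   /* made-up class/cmd bytes */

        /* Before: offset arithmetic on a raw byte pointer. */
        uint8_t klass = out_buf[0];
        uint8_t cmd   = out_buf[1];

        /* After: copy into the typed header and use named fields. */
        struct virtio_net_ctrl_hdr hdr;
        memcpy(&hdr, out_buf, sizeof(hdr));
        printf("class=%u cmd=%u (raw: %u %u)\n", hdr.class, hdr.cmd, klass, cmd);
        return 0;
    }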
2023-07-10  vdpa: Restore packet receive filtering state relative with _F_CTRL_RX feature  (Hawkins Jiawei)
This patch introduces vhost_vdpa_net_load_rx_mode() and vhost_vdpa_net_load_rx() to restore the packet receive filtering state in relation to VIRTIO_NET_F_CTRL_RX feature at device's startup. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <804cedac93e19ba3b810d52b274ca5ec11469f09.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Restore MAC address filtering state  (Hawkins Jiawei)
This patch refactors vhost_vdpa_net_load_mac() to restore the MAC address filtering state at device's startup. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <4b9550c14bc8c98c8f48e04dbf3d3ac41489d3fd.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Use iovec for vhost_vdpa_net_load_cmd()  (Hawkins Jiawei)
According to the VirtIO standard, "The driver MUST follow the VIRTIO_NET_CTRL_MAC_TABLE_SET command by a le32 number, followed by that number of non-multicast MAC addresses, followed by another le32 number, followed by that number of multicast addresses." Considering that this data is not stored in contiguous memory, this patch refactors vhost_vdpa_net_load_cmd() to accept scattered data, eliminating the need for an additional data copy or for packing the data into s->cvq_cmd_out_buffer outside of vhost_vdpa_net_load_cmd(). Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <3482cc50eebd13db4140b8b5dec9d0cc25b20b1b.1688743107.git.yin31149@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Fix possible use-after-free for VirtQueueElement  (Hawkins Jiawei)
QEMU uses vhost_handle_guest_kick() to forward the guest's available buffers to the vdpa device in the SVQ avail ring. In vhost_handle_guest_kick(), a `g_autofree` `elem` is used to iterate through the available VirtQueueElements. This `elem` is then passed to `svq->ops->avail_handler`, specifically to vhost_vdpa_net_handle_ctrl_avail(). If this handler fails to process the CVQ command, vhost_handle_guest_kick() regains ownership of the `elem` and either frees it or requeues it. The problem is that vhost_vdpa_net_handle_ctrl_avail() mistakenly frees the `elem` even if it fails to forward the CVQ command to the vdpa device. This can result in a use-after-free of the `elem` in vhost_handle_guest_kick(). This patch solves the problem by refactoring vhost_vdpa_net_handle_ctrl_avail() to free the `elem` only if it owns it. Fixes: bd907ae4b0 ("vdpa: manual forward CVQ buffers") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <e3f2d7db477734afe5c6a5ab3fa8b8317514ea34.1688746840.git.yin31149@gmail.com> Reviewed-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
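A toy sketch of the ownership rule this fix establishes; all names here are hypothetical stand-ins, not QEMU's. The handler frees the element only on the path where it actually consumed it, so exactly one owner ever frees it:

    #include <stdlib.h>

    /* Toy stand-ins for QEMU's VirtQueueElement and the SVQ avail handler. */
    typedef struct { int index; } Elem;

    /* On success the handler consumes (frees) elem; on failure it must NOT
     * free it, so the caller can requeue or free it exactly once. */
    static int handle_ctrl(Elem *elem, int fail)
    {
        if (fail) {
            return -1;              /* caller still owns elem */
        }
        free(elem);                 /* ownership transferred; consumed here */
        return 0;
    }

    int main(void)
    {
        Elem *elem = malloc(sizeof(*elem));
        if (handle_ctrl(elem, /*fail=*/1) < 0) {
            free(elem);             /* single owner frees it: no double free / UAF */
        }
        return 0;
    }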
2023-07-10  vdpa: Return -EIO if device ack is VIRTIO_NET_ERR in _load_offloads()  (Hawkins Jiawei)
According to the VirtIO standard, "The class, command and command-specific-data are set by the driver, and the device sets the ack byte. There is little it can do except issue a diagnostic if ack is not VIRTIO_NET_OK." Therefore, QEMU should stop sending the queued SVQ commands and cancel the device startup if the device's ack is not VIRTIO_NET_OK. The problem is that vhost_vdpa_net_load_offloads() returns 1 based on `*s->status != VIRTIO_NET_OK` when the device's ack is VIRTIO_NET_ERR. As a result, net->nc->info->load() also returns 1, which makes vhost_net_start_one() incorrectly assume the device state was successfully loaded by vhost_vdpa_net_load() and return 0, instead of jumping to the `fail` label to cancel the device startup, as vhost_net_start_one() only cancels the device startup when net->nc->info->load() returns a negative value. This patch fixes the problem by returning -EIO when the device's ack is not VIRTIO_NET_OK. Fixes: 0b58d3686a ("vdpa: Add vhost_vdpa_net_load_offloads()") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <b0396b80e96322b86f1a0b10c098fc1edd947d72.1688438055.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Return -EIO if device ack is VIRTIO_NET_ERR in _load_mq()  (Hawkins Jiawei)
According to the VirtIO standard, "The class, command and command-specific-data are set by the driver, and the device sets the ack byte. There is little it can do except issue a diagnostic if ack is not VIRTIO_NET_OK." Therefore, QEMU should stop sending the queued SVQ commands and cancel the device startup if the device's ack is not VIRTIO_NET_OK. The problem is that vhost_vdpa_net_load_mq() returns 1 based on `*s->status != VIRTIO_NET_OK` when the device's ack is VIRTIO_NET_ERR. As a result, net->nc->info->load() also returns 1, which makes vhost_net_start_one() incorrectly assume the device state was successfully loaded by vhost_vdpa_net_load() and return 0, instead of jumping to the `fail` label to cancel the device startup, as vhost_net_start_one() only cancels the device startup when net->nc->info->load() returns a negative value. This patch fixes the problem by returning -EIO when the device's ack is not VIRTIO_NET_OK. Fixes: f64c7cda69 ("vdpa: Add vhost_vdpa_net_load_mq") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <ec515ebb0b4f56368751b9e318e245a5d994fa72.1688438055.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-10  vdpa: Return -EIO if device ack is VIRTIO_NET_ERR in _load_mac()  (Hawkins Jiawei)
According to the VirtIO standard, "The class, command and command-specific-data are set by the driver, and the device sets the ack byte. There is little it can do except issue a diagnostic if ack is not VIRTIO_NET_OK." Therefore, QEMU should stop sending the queued SVQ commands and cancel the device startup if the device's ack is not VIRTIO_NET_OK. The problem is that vhost_vdpa_net_load_mac() returns 1 based on `*s->status != VIRTIO_NET_OK` when the device's ack is VIRTIO_NET_ERR. As a result, net->nc->info->load() also returns 1, which makes vhost_net_start_one() incorrectly assume the device state was successfully loaded by vhost_vdpa_net_load() and return 0, instead of jumping to the `fail` label to cancel the device startup, as vhost_net_start_one() only cancels the device startup when net->nc->info->load() returns a negative value. This patch fixes the problem by returning -EIO when the device's ack is not VIRTIO_NET_OK. Fixes: f73c0c43ac ("vdpa: extract vhost_vdpa_net_load_mac from vhost_vdpa_net_load") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <a21731518644abbd0c495c5b7960527c5911f80d.1688438055.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-08  vdpa: Sort vdpa_feature_bits array alphabetically  (Hawkins Jiawei)
This patch sorts the vdpa_feature_bits array alphabetically in ascending order to avoid future duplicates. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-07-08  vdpa: Delete duplicated VIRTIO_NET_F_RSS in vdpa_feature_bits  (Hawkins Jiawei)
This entry was duplicated in the referenced commit. Remove it. Fixes: 402378407dbd ("vhost-vdpa: multiqueue support") Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-06-26  vhost-vdpa: do not cleanup the vdpa/vhost-net structures if peer nic is present  (Ani Sinha)
When a peer nic is still attached to the vdpa backend, it is too early to free up the vhost-net and vdpa structures. If these structures are freed here, then QEMU crashes when the guest is being shut down. The following call chain would result in an assertion failure, since the pointer returned from vhost_vdpa_get_vhost_net() would be NULL: do_vm_stop() -> vm_state_notify() -> virtio_set_status() -> virtio_net_vhost_status() -> get_vhost_net(). Therefore, we defer freeing up the structures until guest shutdown time, when qemu_cleanup() calls net_cleanup(), which then calls qemu_del_net_client(), which eventually calls vhost_vdpa_cleanup() again to free up the structures. This time, the loop in net_cleanup() ensures that vhost_vdpa_cleanup() will be called one last time once all the peer nics are detached and freed. All unit tests pass with this change. CC: imammedo@redhat.com CC: jusual@redhat.com CC: mst@redhat.com Fixes: CVE-2023-3301 Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2128929 Signed-off-by: Ani Sinha <anisinha@redhat.com> Message-Id: <20230619065209.442185-1-anisinha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-26  vdpa: fix not using CVQ buffer in case of error  (Eugenio Pérez)
Bug introduced when refactoring. Without this fix, the guest never receives the used buffer. Fixes: be4278b65fc1 ("vdpa: extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail") Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230602173451.1917999-1-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com>
2023-06-26  vdpa: mask _F_CTRL_GUEST_OFFLOADS for vhost vdpa devices  (Eugenio Pérez)
QEMU does not emulate it so it must be disabled as long as the backend does not support it. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230602173328.1917385-1-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com>
2023-06-26  vdpa: Allow VIRTIO_NET_F_CTRL_GUEST_OFFLOADS in SVQ  (Hawkins Jiawei)
Enable SVQ with VIRTIO_NET_F_CTRL_GUEST_OFFLOADS feature. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <778d642ecae6deed8a218b0e6232e4d7bb96b439.1685704856.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Eugenio Pérez <eperezma@redhat.com> Tested-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-26  vdpa: Add vhost_vdpa_net_load_offloads()  (Hawkins Jiawei)
This patch introduces vhost_vdpa_net_load_offloads() to restore offloads state at device's startup. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Message-Id: <7e2b5cad9c48c917df53d80dec27dbfeb513e1a3.1685704856.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Eugenio Pérez <eperezma@redhat.com> Tested-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-26  vdpa: reuse virtio_vdev_has_feature()  (Hawkins Jiawei)
We can use virtio_vdev_has_feature() instead of manually accessing the features. Signed-off-by: Hawkins Jiawei <yin31149@gmail.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <ff838d30206209fd865511b16ffb34cc0d5e8d8f.1685704856.git.yin31149@gmail.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Eugenio Pérez <eperezma@redhat.com> Tested-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-26  vdpa: map shadow vrings with MAP_SHARED  (Eugenio Pérez)
The vdpa devices that use VA addresses need these maps to be shared. Otherwise, vhost_vdpa checks will refuse to accept the maps. The mmap call always returns a page-aligned address, so the qemu_memalign call is removed. The ROUND_UP of the size is kept, as we still need to DMA-map the buffers in full. Not applying a Fixes tag, as this never worked with VA devices. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230602143854.1879091-4-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
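A minimal standalone sketch of the mapping pattern described above, assuming an anonymous shared mapping and a made-up size; QEMU's own page-size and ROUND_UP helpers are replaced with plain libc equivalents here:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ROUND_UP(n, d) ((((n) + (d) - 1) / (d)) * (d))

    int main(void)
    {
        size_t want = 12;                               /* arbitrary example size */
        size_t len  = ROUND_UP(want, (size_t)sysconf(_SC_PAGESIZE));

        /* MAP_SHARED (not MAP_PRIVATE) so another endpoint looking at the same
         * pages observes the writes; mmap already returns a page-aligned address. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* ... the buffer would be DMA-mapped in full here ... */
        munmap(buf, len);                               /* same rounded size for unmap */
        return 0;
    }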
2023-06-26  vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function  (Eugenio Pérez)
We need to call it from resource cleanup context, as munmap needs the size of the mappings. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230602143854.1879091-3-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-26  vdpa: do not block migration if device has cvq and x-svq=on  (Eugenio Pérez)
It was a mistake to forbid migration in all cases, as SVQ is already able to send all the CVQ messages before it starts forwarding the data vqs. This actually caused a regression, making it impossible to migrate devices that were previously migratable. Fixes: 36e4647247f2 ("vdpa: add vhost_vdpa_net_valid_svq_features") Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230602143854.1879091-2-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com>
2023-06-23  vdpa: move CVQ isolation check to net_init_vhost_vdpa  (Eugenio Pérez)
Evaluating it at start time instead of initialization time may make the guest capable of dynamically adding or removing migration blockers. Also, moving it to initialization reduces the number of ioctls during migration, reducing the possibilities of failure. As a drawback, we need to check for CVQ isolation twice: once with no MQ negotiated and once with MQ acked, as long as the device supports it. This is because vring ASID / group management is based on vq indexes, but we don't know the index of CVQ before negotiating MQ. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230526153143.470745-3-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-23  vdpa: return errno in vhost_vdpa_get_vring_group error  (Eugenio Pérez)
We need to tell the caller, as some errors are expected in a normal workflow. In particular, parent drivers in recent kernels with VHOST_BACKEND_F_IOTLB_ASID may not support vring groups. In that case, -ENOTSUP is returned. This is the case of vp_vdpa in Linux 6.2. The next patches in this series will use that information to know whether they must abort or not. Also, the next patches return an errp properly instead of printing with error_report. Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230526153143.470745-2-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
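A small sketch of the -errno convention, with an invented wrapper and group layout, showing how a caller can treat -ENOTSUP as an expected condition rather than a hard error:

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical ioctl wrapper: return the group on success, -errno on failure,
     * so the caller can tell an expected condition (-ENOTSUP) from a real error. */
    static int get_vring_group(int supported, unsigned vq_index)
    {
        if (!supported) {
            return -ENOTSUP;           /* e.g. parent driver has no group support */
        }
        return vq_index == 2 ? 1 : 0;  /* made-up layout: CVQ in its own group */
    }

    int main(void)
    {
        int r = get_vring_group(0, 2);
        if (r == -ENOTSUP) {
            printf("vring groups unsupported, fall back gracefully\n");
        } else if (r < 0) {
            printf("real error: %d\n", r);
        } else {
            printf("group %d\n", r);
        }
        return 0;
    }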
2023-04-21  vdpa: accept VIRTIO_NET_F_SPEED_DUPLEX in SVQ  (Eugenio Pérez)
There is no reason to block it as it has nothing to do with the vrings. All the support of the feature comes via config space. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Suggested-by: Alvaro Karsz <alvaro.karsz@solid-run.com> Message-Id: <20230307170018.260557-1-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa net: allow VHOST_F_LOG_ALL  (Eugenio Pérez)
Since some actions moved to the start function instead of init, the device features may not be the parent vdpa device's, but the ones returned by the vhost backend. If the transition to SVQ is supported, the vhost backend will return _F_LOG_ALL to signal that the device is migratable. Add VHOST_F_LOG_ALL. HW dirty page tracking can be added on top of this change if the device supports it in the future. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20230303172445.1089785-14-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa: block migration if device has unsupported features  (Eugenio Pérez)
A vdpa net device must initialize with SVQ in order to be migratable at this moment, and the initialization code verifies some conditions. If the device is not initialized with the x-svq parameter, it will not expose _F_LOG, so the vhost subsystem will block VM migration from its initialization. The next patches change this, so we need to verify the migration conditions differently. QEMU only supports a subset of net features in SVQ, and it cannot migrate state that it cannot track or restore in the destination. Add a migration blocker if the device offers an unsupported feature. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230303172445.1089785-12-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa net: block migration if the device has CVQ  (Eugenio Pérez)
Devices with CVQ need to migrate state beyond vq state. Leaving this to future series. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230303172445.1089785-11-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa: add vdpa net migration state notifier  (Eugenio Pérez)
This allows net to restart the device backend to configure SVQ on it. Ideally, these changes should not be net specific and they could be done in:
* vhost_vdpa_set_features (with VHOST_F_LOG_ALL)
* vhost_vdpa_set_vring_addr (with .enable_log)
* vhost_vdpa_set_log_base.
However, the vdpa net backend is the one with enough knowledge to configure everything, for several reasons:
* Queues might need to be shadowed or not depending on their kind (control vs data).
* Queues need to share the same map translations (iova tree).
Also, there are other problems that may have solutions but would complicate the implementation at this stage:
* We're basically duplicating vhost_dev_start and vhost_dev_stop, and they could go out of sync. If we want to reuse them, we need a way to skip some function calls to avoid recursiveness (either vhost_ops -> vhost_set_features, vhost_set_vring_addr, ...).
* We need to traverse all vhost_dev of a given net device twice: once to stop and get the vq state, and another time after the reset to configure properties like address, fd, etc.
Because of that it is cleaner to restart the whole net backend and configure it again as expected, similar to how vhost-kernel moves between userspace and passthrough. If more kinds of devices need dynamic switching to SVQ we can:
* Create a callback struct like VhostOps and move most of the code there. VhostOps cannot be reused since all vdpa backends share them, and personalizing them just for networking would be too heavy.
* Add a parent struct or link all the vhost_vdpa or vhost_dev structs so we can traverse them.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20230303172445.1089785-9-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-03-07  vdpa net: move iova tree creation from init to start  (Eugenio Pérez)
Only create the iova_tree if and when it is needed. The cleanup remains the responsibility of the last VQ, but this change allows both cleanup functions to be merged. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20230303172445.1089785-2-eperezma@redhat.com> Tested-by: Lei Yang <leiyang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-02-17  vdpa: fix VHOST_BACKEND_F_IOTLB_ASID flag check  (Eugenio Pérez)
VHOST_BACKEND_F_IOTLB_ASID is the feature bit, not the bitmask. Since the device under test also provided VHOST_BACKEND_F_IOTLB_MSG_V2 and VHOST_BACKEND_F_IOTLB_BATCH, this went unnoticed. Fixes: c1a1008685 ("vdpa: always start CVQ in SVQ mode if possible") Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
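A standalone illustration of this bug class (the feature values below are illustrative bit numbers, not authoritative): testing against the bit number instead of the bit mask can pass whenever any of the low bits happens to be set:

    #include <stdint.h>
    #include <stdio.h>

    #define BIT_ULL(nr) (1ULL << (nr))

    /* Bit numbers, in the spirit of the vhost backend features. */
    #define VHOST_BACKEND_F_IOTLB_MSG_V2 1
    #define VHOST_BACKEND_F_IOTLB_BATCH  2
    #define VHOST_BACKEND_F_IOTLB_ASID   3

    int main(void)
    {
        uint64_t features = BIT_ULL(VHOST_BACKEND_F_IOTLB_MSG_V2) |
                            BIT_ULL(VHOST_BACKEND_F_IOTLB_BATCH);

        /* Buggy: tests bits 0 and 1 (because 3 == 0b11), so it can pass even
         * though the ASID bit is not set. */
        int buggy = !!(features & VHOST_BACKEND_F_IOTLB_ASID);

        /* Fixed: tests the actual feature bit. */
        int fixed = !!(features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID));

        printf("buggy check: %d, correct check: %d\n", buggy, fixed);
        return 0;
    }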
2023-01-08  vdpa: harden the error path if get_iova_range failed  (Longpeng)
We should stop if the GET_IOVA_RANGE ioctl failed. Signed-off-by: Longpeng <longpeng2@huawei.com> Message-Id: <20221224114848.3062-3-longpeng2@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2023-01-08  vdpa-dev: get iova range explicitly  (Longpeng)
In commit a585fad26b ("vdpa: request iova_range only once") we removed GET_IOVA_RANGE from vhost_vdpa_init, so the generic vdpa device will start without iova_range populated and the device won't work. Let's call the GET_IOVA_RANGE ioctl explicitly. Fixes: a585fad26b2e6ccc ("vdpa: request iova_range only once") Signed-off-by: Longpeng <longpeng2@huawei.com> Message-Id: <20221224114848.3062-2-longpeng2@huawei.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2023-01-08  vdpa: do not handle VIRTIO_NET_F_GUEST_ANNOUNCE in vhost-vdpa  (Eugenio Pérez)
So that QEMU emulates it even when the device does not support it. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221221115015.1400889-5-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-01-08  vdpa: handle VIRTIO_NET_CTRL_ANNOUNCE in vhost_vdpa_net_handle_ctrl_avail  (Eugenio Pérez)
Since this capability is emulated by QEMU, the shadowed CVQ cannot forward it to the device. Process that command entirely within QEMU. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20221221115015.1400889-4-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2022-12-21  vdpa: always start CVQ in SVQ mode if possible  (Eugenio Pérez)
Isolate the control virtqueue in its own group, allowing QEMU to intercept control commands while letting the dataplane run fully passthrough to the guest. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20221215113144.322011-13-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2022-12-21  vdpa: add shadow_data to vhost_vdpa  (Eugenio Pérez)
The memory listener that tells the device how to convert GPA to QEMU's VA is registered against CVQ's vhost_vdpa. Memory listener translations are always in ASID 0, while CVQ ones are in ASID 1 if supported. Let's tell the listener whether it needs to register them on the iova tree or not. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221215113144.322011-12-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-21  vdpa: store x-svq parameter in VhostVDPAState  (Eugenio Pérez)
CVQ can be shadowed two ways:
- The device has the x-svq=on parameter (current way)
- The device can isolate CVQ in its own vq group
QEMU needs to check for the second condition dynamically, because the CVQ index is not known before the driver acks the features. Since this is dynamic, the CVQ isolation could vary with different conditions, making it possible to go from "not isolated group" to "isolated". Save the cmdline parameter in an extra field so we never disable CVQ SVQ in case the device was started with the x-svq cmdline option. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221215113144.322011-11-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-21  vdpa: add asid parameter to vhost_vdpa_dma_map/unmap  (Eugenio Pérez)
So the caller can choose which ASID the mapping is destined for. No need to update the batch functions, as they will always be called from memory listener updates at the moment. Memory listener updates will always update ASID 0, as it's the passthrough ASID. All vhost devices' ASIDs are 0 at this moment. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221215113144.322011-10-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-21  vdpa: move SVQ vring features check to net/  (Eugenio Pérez)
The next patches will start the control SVQ if possible. However, we will no longer know whether that is possible at qemu boot time. Since these checks will already be evaluated in net/ to know whether it is OK to shadow CVQ, move them there. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221215113144.322011-8-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-21  vdpa: request iova_range only once  (Eugenio Pérez)
Currently the iova range is requested once per queue pair in the case of net. Reduce the number of ioctls by asking for it once at initialization and reusing that value for each vhost_vdpa. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20221215113144.322011-7-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasonwang@redhat.com>
2022-12-21  vdpa: add vhost_vdpa_net_valid_svq_features  (Eugenio Pérez)
It will be reused at vdpa device start, so let's extract it into its own function. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221215113144.322011-6-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-12-14  qapi net: Elide redundant has_FOO in generated C  (Markus Armbruster)
The has_FOO for pointer-valued FOO are redundant, except for arrays. They are also a nuisance to work with. Recent commit "qapi: Start to elide redundant has_FOO in generated C" provided the means to elide them step by step. This is the step for qapi/net.json. Said commit explains the transformation in more detail. The invariant violations mentioned there do not occur here. Cc: Jason Wang <jasowang@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20221104160712.3005652-19-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> [Fixes for MacOS squashed in]
2022-11-22  vhost: mask VIRTIO_F_RING_RESET for vhost and vhost-user devices  (Stefano Garzarella)
Commit 69e1c14aa2 ("virtio: core: vq reset feature negotation support") enabled VIRTIO_F_RING_RESET by default for all virtio devices. This feature is not currently emulated by QEMU, so for vhost and vhost-user devices we need to make sure it is supported by the offloaded device emulation (in-kernel or in another process). To do this we need to add VIRTIO_F_RING_RESET to the features bitmap passed to vhost_get_features(). This way it will be masked if the device does not support it. This issue was initially discovered with vhost-vsock and vhost-user-vsock, and then also tested with vhost-user-rng which confirmed the same issue. They fail when sending features through VHOST_SET_FEATURES ioctl or VHOST_USER_SET_FEATURES message, since VIRTIO_F_RING_RESET is negotiated by the guest (Linux >= v6.0), but not supported by the device. Fixes: 69e1c14aa2 ("virtio: core: vq reset feature negotation support") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1318 Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Message-Id: <20221121101101.29400-1-sgarzare@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com> Acked-by: Jason Wang <jasowang@redhat.com>
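A simplified sketch of the masking idea, not QEMU's vhost_get_features() itself: only the feature bits listed as wanted survive, and each survives only if the backend also offers it, so an unsupported VIRTIO_F_RING_RESET gets stripped (the bit number is used here for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define BIT_ULL(nr) (1ULL << (nr))
    #define VIRTIO_F_RING_RESET 40   /* transport feature bit, for illustration */

    /* Keep only the wanted bits that the backend also offers. */
    static uint64_t mask_features(const unsigned *wanted, int n, uint64_t backend)
    {
        uint64_t out = 0;
        for (int i = 0; i < n; i++) {
            out |= backend & BIT_ULL(wanted[i]);
        }
        return out;
    }

    int main(void)
    {
        unsigned wanted[] = { 0, 32, VIRTIO_F_RING_RESET };
        uint64_t backend = BIT_ULL(0) | BIT_ULL(32);   /* backend without RING_RESET */

        printf("negotiable features: 0x%llx\n",
               (unsigned long long)mask_features(wanted, 3, backend));
        return 0;
    }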
2022-11-08  vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset  (Si-Wei Liu)
The citing commit has incorrect code in vhost_vdpa_receive() that returns zero instead of the full packet size to the caller. This renders pending packets unable to be freed, so they get clogged in the tx queue forever. When the device is reset later on, the assertion failure below ensues:
0 0x00007f86d53bb387 in raise () from /lib64/libc.so.6
1 0x00007f86d53bca78 in abort () from /lib64/libc.so.6
2 0x00007f86d53b41a6 in __assert_fail_base () from /lib64/libc.so.6
3 0x00007f86d53b4252 in __assert_fail () from /lib64/libc.so.6
4 0x000055b8f6ff6fcc in virtio_net_reset (vdev=<optimized out>) at /usr/src/debug/qemu/hw/net/virtio-net.c:563
5 0x000055b8f7012fcf in virtio_reset (opaque=0x55b8faf881f0) at /usr/src/debug/qemu/hw/virtio/virtio.c:1993
6 0x000055b8f71f0086 in virtio_bus_reset (bus=bus@entry=0x55b8faf88178) at /usr/src/debug/qemu/hw/virtio/virtio-bus.c:102
7 0x000055b8f71f1620 in virtio_pci_reset (qdev=<optimized out>) at /usr/src/debug/qemu/hw/virtio/virtio-pci.c:1845
8 0x000055b8f6fafc6c in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at /usr/src/debug/qemu/memory.c:483
9 0x000055b8f6fadce9 in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f867e7fb7e8, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x55b8f6fafc20 <memory_region_write_accessor>, mr=0x55b8faf80a50, attrs=...) at /usr/src/debug/qemu/memory.c:544
10 0x000055b8f6fb1d0b in memory_region_dispatch_write (mr=mr@entry=0x55b8faf80a50, addr=addr@entry=20, data=0, op=<optimized out>, attrs=attrs@entry=...) at /usr/src/debug/qemu/memory.c:1470
11 0x000055b8f6f62ada in flatview_write_continue (fv=fv@entry=0x7f86ac04cd20, addr=addr@entry=549755813908, attrs=..., attrs@entry=..., buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=len@entry=1, addr1=20, l=1, mr=0x55b8faf80a50) at /usr/src/debug/qemu/exec.c:3266
12 0x000055b8f6f62c8f in flatview_write (fv=0x7f86ac04cd20, addr=549755813908, attrs=..., buf=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=1) at /usr/src/debug/qemu/exec.c:3306
13 0x000055b8f6f674cb in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /usr/src/debug/qemu/exec.c:3396
14 0x000055b8f6f67575 in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=<optimized out>, is_write=<optimized out>) at /usr/src/debug/qemu/exec.c:3406
15 0x000055b8f6fc1cc8 in kvm_cpu_exec (cpu=cpu@entry=0x55b8f9aa0e10) at /usr/src/debug/qemu/accel/kvm/kvm-all.c:2410
16 0x000055b8f6fa5f5e in qemu_kvm_cpu_thread_fn (arg=0x55b8f9aa0e10) at /usr/src/debug/qemu/cpus.c:1318
17 0x000055b8f7336e16 in qemu_thread_start (args=0x55b8f9ac8480) at /usr/src/debug/qemu/util/qemu-thread-posix.c:519
18 0x00007f86d575aea5 in start_thread () from /lib64/libpthread.so.0
19 0x00007f86d5483b2d in clone () from /lib64/libc.so.6
Make vhost_vdpa_receive() return the size passed in as is, so that the caller, qemu_deliver_packet_iov(), will eventually propagate it back to virtio_net_flush_tx() to release pending packets from the async_tx queue. This corresponds to the drop path where qemu_sendv_packet_async() returns non-zero in virtio_net_flush_tx().
Fixes: 846a1e85da64 ("vdpa: Add dummy receive callback") Cc: Eugenio Perez Martin <eperezma@redhat.com> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20221108041929.18417-2-jasowang@redhat.com>
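A toy stand-in for the receive-callback contract this fix relies on (the names are invented for the example, not QEMU's): returning the full size tells the net layer the packet was consumed, which lets pending async-tx packets be released:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Toy receive callback: returning 0 means "queue full, keep the packet
     * pending"; returning the full size means "consumed", so the caller can
     * release the packet instead of leaving it clogged in the tx queue. */
    static ssize_t dummy_receive(void *nc, const uint8_t *buf, size_t size)
    {
        (void)nc;
        (void)buf;
        return size;   /* pretend the packet was delivered, then drop it */
    }

    int main(void)
    {
        uint8_t pkt[64] = { 0 };
        printf("consumed %zd bytes\n", dummy_receive(NULL, pkt, sizeof(pkt)));
        return 0;
    }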
2022-10-31  net/vhost-vdpa.c: Fix clang compilation failure  (Peter Maydell)
Commit 8801ccd0500437 introduced a compilation failure with clang version 10.0.0-4ubuntu1:
../../net/vhost-vdpa.c:654:16: error: variable 'vdpa_device_fd' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
    } else if (opts->has_vhostfd) {
               ^~~~~~~~~~~~~~~~~
../../net/vhost-vdpa.c:662:33: note: uninitialized use occurs here
    r = vhost_vdpa_get_features(vdpa_device_fd, &features, errp);
                                ^~~~~~~~~~~~~~
../../net/vhost-vdpa.c:654:12: note: remove the 'if' if its condition is always true
    } else if (opts->has_vhostfd) {
           ^~~~~~~~~~~~~~~~~~~~~~~
../../net/vhost-vdpa.c:629:23: note: initialize the variable 'vdpa_device_fd' to silence this warning
    int vdpa_device_fd;
                      ^
                       = 0
1 error generated.
It's a false positive -- the compiler doesn't manage to figure out that the error checks further up mean that there's no code path where vdpa_device_fd isn't initialized. Put another way, the problem is that we check "if (opts->has_vhostfd)" when in fact that condition must always be true. A cleverer static analyser would probably warn that we were checking an always-true condition. Fix the compilation failure by removing the unnecessary if(). Fixes: 8801ccd0500437 ("vhost-vdpa: allow passing opened vhostfd to vhost-vdpa") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20221031132901.1277150-1-peter.maydell@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
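A hypothetical reduction of the pattern behind this false positive (names and values are invented, not the actual QEMU code): the compiler cannot prove the final `else if` condition is always true given the earlier error checks, so it warns that the variable may be used uninitialized; a plain `else` removes both the warning and the redundant check:

    #include <stdio.h>

    /* With `else if (has_fd)` instead of the plain `else`, some compilers warn
     * that `fd` may be used uninitialized, even though the earlier checks make
     * that impossible. */
    static int open_thing(int has_path, int has_fd, int fd_opt)
    {
        int fd;

        if (has_path && has_fd) {
            return -1;              /* options are mutually exclusive */
        }
        if (!has_path && !has_fd) {
            return -1;              /* one of them is required */
        }
        if (has_path) {
            fd = 42;                /* stand-in for opening the path */
        } else {                    /* was: `else if (has_fd)` */
            fd = fd_opt;
        }
        return fd;
    }

    int main(void)
    {
        printf("%d\n", open_thing(0, 1, 7));
        return 0;
    }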
2022-10-28  net: introduce qemu_set_info_str() function  (Laurent Vivier)
Embed the setting of info_str in a function. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-10-28  vhost-vdpa: allow passing opened vhostfd to vhost-vdpa  (Si-Wei Liu)
Similar to other vhost backends, vhostfd can be passed to vhost-vdpa backend as another parameter to instantiate vhost-vdpa net client. This would benefit the use case where only open file descriptors, as opposed to raw vhost-vdpa device paths, are accessible from the QEMU process. (qemu) netdev_add type=vhost-vdpa,vhostfd=61,id=vhost-vdpa1 Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com> Acked-by: Eugenio Pérez <eperezma@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-10-28  vdpa: Remove shadow CVQ command check  (Eugenio Pérez)
The guest will see undefined behavior if it issues commands that were not negotiated, but that is somewhat expected. Simplify the code by deleting this check. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-10-28  vdpa: Delete duplicated vdpa_feature_bits entry  (Eugenio Pérez)
This entry was duplicated in the referenced commit. Remove it. Fixes: 402378407dbd ("vhost-vdpa: multiqueue support") Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-27  vdpa: Allow MQ feature in SVQ  (Eugenio Pérez)
Finally enable SVQ with MQ feature. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>