- 03 Jul, 2023 23 commits
-
-
Mike Christie authored
This patch drops the requirement that we can only switch workers if work has not been queued, by using RCU for the vq-based queueing paths and a mutex for the device-wide flush. We can also use this to properly support SIGKILL in the future, where we should exit almost immediately after getting that signal. With this patch, when get_signal returns true, we can set vq->worker to NULL and do a synchronize_rcu to prevent new work from being queued to the vhost_task that has been killed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-18-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
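A minimal sketch of the RCU scheme described here, assuming the vq->worker field and helper names from this series; the surrounding types come from drivers/vhost/vhost.h and the details are illustrative, not the committed implementation:

```c
#include <linux/rcupdate.h>

/* Queue path: readers run under rcu_read_lock(), so swapping a
 * worker only has to wait out a grace period, not take a lock. */
static bool vhost_vq_work_queue(struct vhost_virtqueue *vq,
				struct vhost_work *work)
{
	struct vhost_worker *worker;
	bool queued = false;

	rcu_read_lock();
	worker = rcu_dereference(vq->worker);
	if (worker)
		queued = vhost_worker_queue(worker, work);
	rcu_read_unlock();

	return queued;
}

/* Kill/swap path: publish NULL (or the new worker), then wait out
 * in-flight queuers so no new work reaches the dead vhost_task. */
static void vhost_vq_detach_worker(struct vhost_virtqueue *vq)
{
	rcu_assign_pointer(vq->worker, NULL);
	synchronize_rcu();
}
```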
-
Mike Christie authored
This has vhost-scsi support the worker ioctls by calling the vhost_worker_ioctl helper. With a single worker, the single thread becomes a bottleneck when trying to use 3 or more virtqueues like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

With the patches and doing a worker per vq, we can scale to at least 16 vCPUs/vqs (that's my system limit) with the same fio command as above but with numjobs=16:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=64 --numjobs=16

which gives around 2002K IOPs.

Note that for testing I dropped depth to 64 above because the vhost/virt layer supports only 1024 total commands per device. And the only tuning I did was set LIO's emulate_pr to 0 to avoid LIO's PR lock in the main IO path, which becomes an issue at around 12 jobs/virtqueues.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-17-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
For vhost-scsi with 3 vqs or more and a workload that tries to use them in parallel like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

the single vhost worker thread will become a bottleneck and we are stuck at around 500K IOPs no matter how many jobs, virtqueues, and CPUs are used.

To better utilize virtqueues and available CPUs, this patch allows userspace to create workers and bind them to vqs. You can have N workers per dev and also share N workers with M vqs on that dev.

This patch adds the interface-related code, and the next patch will hook vhost-scsi into it. The patches do not try to hook net and vsock into the interface because:

1. Multiple workers don't seem to help vsock. The problem is that with only 2 virtqueues we never fully use the existing worker when doing bidirectional tests. This seems to match vhost-scsi, where we don't see the worker as a bottleneck until 3 virtqueues are used.

2. Net already has a way to use multiple workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-16-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
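As a rough illustration of the interface this adds, a userspace sketch; the ioctl and struct names (VHOST_NEW_WORKER, VHOST_ATTACH_VRING_WORKER, vhost_worker_state, vhost_vring_worker) are my reading of the series' uapi and should be checked against include/uapi/linux/vhost.h:

```c
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Create one extra worker and bind a vq to it; with N workers and
 * M vqs, repeat as needed. Error paths trimmed for brevity. */
static int bind_vq_to_new_worker(int vhost_fd, unsigned int vq_idx)
{
	struct vhost_worker_state state = {};
	struct vhost_vring_worker ring_worker = {};

	if (ioctl(vhost_fd, VHOST_NEW_WORKER, &state) < 0)
		return -1;

	ring_worker.index = vq_idx;
	ring_worker.worker_id = state.worker_id;

	return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &ring_worker);
}
```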
-
Mike Christie authored
The next patch allows userspace to create multiple workers per device, so this patch replaces the vhost_worker pointer with an xarray so we can store multiple workers and look them up.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-15-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
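A minimal sketch of the xarray usage this implies; the worker_xa field name and helper names are assumptions:

```c
#include <linux/xarray.h>

/* Store a new worker under a kernel-chosen id that userspace can
 * later pass back in the worker ioctls. */
static int vhost_worker_store(struct vhost_dev *dev,
			      struct vhost_worker *worker, u32 *id)
{
	return xa_alloc(&dev->worker_xa, id, worker, xa_limit_32b,
			GFP_KERNEL);
}

/* Resolve an id from userspace back to a worker (NULL if unknown). */
static struct vhost_worker *vhost_worker_lookup(struct vhost_dev *dev,
						u32 id)
{
	return xa_load(&dev->worker_xa, id);
}
```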
-
Mike Christie authored
The next patches add new vhost worker ioctls which will need to get a vhost_virtqueue from a userspace struct that specifies the vq's index. This moves the vhost_vring_ioctl code that does this into a helper so it can be shared.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-14-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
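The helper's shape, as a hedged sketch (the name is illustrative, though vhost_vring_ioctl has historically returned -ENOBUFS for an out-of-range index):

```c
#include <linux/err.h>

/* Validate a userspace-supplied index and return the matching vq. */
static struct vhost_virtqueue *vhost_get_vq_by_index(struct vhost_dev *dev,
						     unsigned int idx)
{
	if (idx >= dev->nvqs)
		return ERR_PTR(-ENOBUFS);

	return dev->vqs[idx];
}
```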
-
Mike Christie authored
vhost_work_queue is no longer used. Each driver is using poll- or vq-based queueing, so remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-13-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
With one worker, we will always send the scsi cmd responses and then send the TMF rsp, because LIO will always complete the scsi cmds first and then call into us to send the TMF response. With multiple workers, the IO vq workers could be running while the TMF/ctl vq worker is running, so this has us do a flush before completing the TMF, to make sure cmds are completed when its work is later queued and run.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-12-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
Convert from vhost_work_queue to vhost_vq_work_queue so we can remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-11-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker/CPU. This will be useful with the next patches, which allow us to create multiple worker threads and bind them to different vqs, so completions can run on different threads/CPUs.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230626232307.97930-10-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
Convert from vhost_work_queue to vhost_vq_work_queue, so we can drop vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-9-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
This has the drivers pass in their poll-to-vq mapping and then converts the core poll code to use the vq-based helpers. In the next patches we will allow vqs to be handled by different workers, so to allow drivers to execute operations like queue, stop, flush, etc. on specific polls/vqs, we need to know the mappings.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-8-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
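A plausible shape of the changed init call, where the trailing vq argument is the mapping being described; the exact prototype and call site are assumptions, not verbatim code:

```c
/* Drivers now hand the core the poll-to-vq mapping at init time,
 * e.g. from vhost-net's open path (illustrative): */
vhost_poll_init(&n->poll[VHOST_NET_VQ_TX], handle_tx_net, EPOLLOUT,
		dev, n->vqs[VHOST_NET_VQ_TX].vq);
vhost_poll_init(&n->poll[VHOST_NET_VQ_RX], handle_rx_net, EPOLLIN,
		dev, n->vqs[VHOST_NET_VQ_RX].vq);
```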
-
Mike Christie authored
This patch has the core work flush function take a worker. When we support multiple workers, we can then flush each worker during device removal, stoppage, etc. It also adds a helper to flush specific virtqueues, so vhost-scsi can flush IO vqs from its ctl vq.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-7-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
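A hedged sketch of a worker-scoped flush built on the classic vhost flush pattern (queue a sentinel work item, wait for its completion); it reuses the vhost_worker_queue primitive sketched under the queueing patch below:

```c
#include <linux/completion.h>

struct vhost_flush_struct {
	struct vhost_work work;
	struct completion wait_event;
};

static void vhost_flush_work(struct vhost_work *work)
{
	struct vhost_flush_struct *s;

	s = container_of(work, struct vhost_flush_struct, work);
	complete(&s->wait_event);
}

/* Flushing a worker == waiting until everything queued before the
 * sentinel has run. A vq-level helper would just flush vq->worker. */
static void vhost_worker_flush(struct vhost_worker *worker)
{
	struct vhost_flush_struct flush;

	init_completion(&flush.wait_event);
	vhost_work_init(&flush.work, vhost_flush_work);

	if (vhost_worker_queue(worker, &flush.work))
		wait_for_completion(&flush.wait_event);
}
```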
-
Mike Christie authored
This patch has the core work queueing function take a worker, for when we support multiple workers. It also adds a helper that takes a vq during queueing, so modules can control which vq/worker to queue work on. This temporarily leaves vhost_work_queue in place; it will be removed when the drivers are converted in the next patches.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-6-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
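A hedged sketch of the two queueing layers described here; names follow the series but the details (flag bit, wake helper) are illustrative:

```c
#include <linux/llist.h>

/* Core primitive: enqueue on a specific worker's lockless list and
 * wake its vhost_task. Returns false if there is no worker. */
static bool vhost_worker_queue(struct vhost_worker *worker,
			       struct vhost_work *work)
{
	if (!worker)
		return false;

	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		llist_add(&work->node, &worker->work_list);
		vhost_task_wake(worker->vtsk);
	}

	return true;
}

/* vq-level helper so drivers can pick the worker via the vq. */
bool vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
{
	return vhost_worker_queue(vq->worker, work);
}
```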
-
Mike Christie authored
In the next patches each vq might have different workers, so one could have work but others do not. For net, we only want to check specific vqs, so this adds a helper to check if a vq has work pending and converts vhost-net to use it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230626232307.97930-5-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
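A one-line sketch of the helper, assuming the per-vq worker pointer added earlier in the series:

```c
#include <linux/llist.h>

/* Per-vq "is work pending?" check; vhost-net can now poll only the
 * vqs it cares about instead of the whole device. */
bool vhost_vq_has_work(struct vhost_virtqueue *vq)
{
	return !llist_empty(&vq->worker->work_list);
}
```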
-
Mike Christie authored
This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so in later patches in this set we can queue/flush specific vqs and their workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-4-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
This patchset allows us to allocate multiple workers, so this has us move from the vhost_worker that's embedded in the vhost_dev to dynamically allocating it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-3-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Mike Christie authored
vsock can start queueing work after VHOST_VSOCK_SET_GUEST_CID, so after we have called vhost_worker_create it can be calling vhost_work_queue and trying to access the vhost worker/task. If vhost_dev_alloc_iovecs fails, then vhost_worker_free could free the worker/task from under vsock.

This moves vhost_worker_create to the end of vhost_dev_set_owner, where we know we can no longer fail in that path. If it fails after the VHOST_SET_OWNER and userspace closes the device, then the normal vsock release handling will do the right thing.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-2-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Xianting Tian authored
For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb5104 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-4-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
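The general shape of this class of fix, as a hedged sketch (the actual loop and free routine differ per driver; this one applies to the two similar commits below as well):

```c
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/virtio.h>

/* Freeing a large backlog of unused buffers can hog the CPU long
 * enough to trigger stall warnings; yield between iterations. */
static void free_unused_bufs(struct virtqueue *vq)
{
	void *buf;

	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
		kfree(buf);	/* stand-in for the driver's real free */
		cond_resched();
	}
}
```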
-
Xianting Tian authored
For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb5104 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-3-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Xianting Tian authored
For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb5104 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-2-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Zhu Lingshan authored
This commit implements a better layout of the live migration bar; therefore, the accessors for virtqueue state have been refactored. This commit also adds a comment to the probing-ids list, indicating that this driver drives F2000X-PL virtio-net devices.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-4-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Zhu Lingshan authored
Rather than using a hardcoded value, this commit detects and reports the maximum allowed size of the virtqueues.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-3-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Zhu Lingshan authored
This commit dynamically allocates the data stores for the virtqueues based on virtio_pci_common_cfg.num_queues.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-2-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
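A hedged sketch of the allocation described; my_hw and its fields are stand-ins for the driver's private state, while vp_modern_get_num_queues is the modern-virtio-PCI helper that reads common_cfg.num_queues:

```c
#include <linux/slab.h>

/* Size the per-vq state from what the device actually reports
 * instead of a fixed-size array. */
static int alloc_vq_state(struct my_hw *hw)
{
	u16 num = vp_modern_get_num_queues(&hw->mdev);

	hw->vring = kcalloc(num, sizeof(*hw->vring), GFP_KERNEL);
	if (!hw->vring)
		return -ENOMEM;

	hw->num_queues = num;
	return 0;
}
```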
-
- 27 Jun, 2023 17 commits
-
-
Eli Cohen authored
Add support for generating interrupts from the device directly to the VM's vCPU, thus avoiding the overhead on the host CPU. When supported, the driver will attempt to allocate a vector for each data virtqueue. If a vector cannot be provided for a virtqueue, it will use the QP mode, where notifications go through the driver. In addition, we add a shutdown callback to make sure allocated interrupts are released on shutdown, allowing a clean shutdown.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Message-Id: <20230607190007.290505-1-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Simon Horman authored
Add missing documentation for the vqs_list_lock field of struct virtio_device, and the validate field of struct virtio_driver.

./scripts/kernel-doc says:

    .../virtio.h:131: warning: Function parameter or member 'vqs_list_lock' not described in 'virtio_device'
    .../virtio.h:192: warning: Function parameter or member 'validate' not described in 'virtio_driver'
    2 warnings as Errors

No functional changes intended.

Signed-off-by: Simon Horman <horms@kernel.org>
Message-Id: <20230510-virtio-kdoc-v3-1-e2681ed7a425@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
-
Shannon Nelson authored
Add the documentation and Kconfig entry for the pds_vdpa driver.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-12-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
Register for pds_core's notification events, primarily to find out when the FW has been reset, so we can pass this back up the chain.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-11-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
This is the vDPA device support, where we advertise that we can support the virtio queues and handle the configuration work through pds_core's adminq.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Message-Id: <20230519215632.12343-10-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
-
Shannon Nelson authored
These are the adminq commands that will be needed for setting up and using the vDPA device. There are a number of commands defined in the FW's API, but by making use of the FW's virtio BAR we only need a few of these commands for vDPA support.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-9-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
Prep and use the "modern" virtio bar utilities to get our virtio config space ready.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-8-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
Find the vDPA management information from the DSC in order to advertise it to the vdpa subsystem.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-7-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
Add new adminq definitions in support of vDPA operations.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-6-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
The pds_core_logical_qtype enum and IFNAMSIZ are not needed in the common PDS header, only when working with the adminq, so move them to the adminq header.

Note: This patch might conflict with pds_vfio patches that are in review, depending on which patchset gets pulled first.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-5-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
This is the initial auxiliary driver framework for a new vDPA device driver, an auxiliary_bus client of the pds_core driver. The pds_core driver supplies the PCI services for the VF device and for accessing the adminq in the PF device. This patch adds the very basics of registering for the auxiliary device and setting up debugfs entries.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-4-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Shannon Nelson authored
To add a bit of vendor flexibility with various virtio based devices, allow the caller to specify a different DMA mask. This adds a dma_mask field to struct virtio_pci_modern_device. If defined by the driver, this mask will be used in a call to dma_set_mask_and_coherent() instead of the traditional DMA_BIT_MASK(64). This allows limiting the DMA space on vendor devices with address limitations.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-3-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
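The described fallback is small enough to sketch inline; this is my reading of the change in vp_modern_probe(), not verbatim code:

```c
#include <linux/dma-mapping.h>

/* In vp_modern_probe(): honor a driver-supplied mask, falling back
 * to the traditional 64-bit mask when none is set. */
err = dma_set_mask_and_coherent(&pci_dev->dev,
				mdev->dma_mask ? : DMA_BIT_MASK(64));
if (err)
	return err;
```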
-
Shannon Nelson authored
To add a bit of vendor flexibility with various virtio based devices, allow the caller to check for a different device id. This adds a function pointer field to struct virtio_pci_modern_device to specify an override device id check. If defined by the driver, this function will be called to check that the PCI device is the vendor's expected device, and will return the found device id to be stored in mdev->id.device. This allows vendors with alternative vendor device ids to use this library on their own device BAR.

Note: A lot of the diff in this is simply indenting the existing code into an else block.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230519215632.12343-2-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
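A sketch of the override path as described; the device_id_check field name is an assumption, and the else branch shows the pre-existing standard-id check that, per the note above, simply gets indented:

```c
/* In vp_modern_probe(), roughly: let the vendor driver validate the
 * PCI device id; otherwise apply the standard virtio id ranges. */
if (mdev->device_id_check) {
	devid = mdev->device_id_check(pci_dev);
	if (devid < 0)
		return devid;
	mdev->id.device = devid;
} else {
	/* Only virtio's own 0x1000-0x107f device id range. */
	if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f)
		return -ENODEV;

	if (pci_dev->device < 0x1040)
		mdev->id.device = pci_dev->device - 0x1000; /* transitional */
	else
		mdev->id.device = pci_dev->device - 0x1040; /* modern */
}
```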
-
Feng Liu authored
Improve the size of the virtio_pci_device structure, which is commonly used to represent a virtio PCI device. A given virtio PCI device can be either legacy type or modern type, with the struct virtio_pci_legacy_device occupying 32 bytes and the struct virtio_pci_modern_device occupying 88 bytes. Make them a union, thereby saving 32 bytes of memory as shown by the pahole tool. This improvement is particularly beneficial when dealing with numerous devices, as it helps conserve memory resources.

Before the modification, the pahole tool reported the following:

    struct virtio_pci_device {
    [...]
            struct virtio_pci_legacy_device ldev;        /*   824    32 */
            /* --- cacheline 13 boundary (832 bytes) was 24 bytes ago --- */
            struct virtio_pci_modern_device mdev;        /*   856    88 */

            /* XXX last struct has 4 bytes of padding */
    [...]
            /* size: 1056, cachelines: 17, members: 19 */
    [...]
    };

After the modification, the pahole tool reported the following:

    struct virtio_pci_device {
    [...]
            union {
                    struct virtio_pci_legacy_device ldev; /*   824    32 */
                    struct virtio_pci_modern_device mdev; /*   824    88 */
            };                                            /*   824    88 */
    [...]
            /* size: 1024, cachelines: 16, members: 18 */
    [...]
    };

Signed-off-by: Feng Liu <feliu@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Message-Id: <20230516135446.16266-1-feliu@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
-
Peng Fan authored
"-mfunction-return=thunk -mindirect-branch-register" are only valid for x86. So introduce compiler operation check to avoid such issues Fixes: 0d0ed400 ("tools/virtio: enable to build with retpoline") Signed-off-by: Peng Fan <peng.fan@nxp.com> Message-Id: <20230323040024.3809108-1-peng.fan@oss.nxp.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Dragos Tatulea authored
The referenced patch calls set_vq_affinity without checking whether the op is valid. This patch adds the check.

Fixes: 3dad5682 ("virtio-vdpa: Support interrupt affinity spreading mechanism")
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20230504135053.2283816-1-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Feng Liu <feliu@nvidia.com>
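The fix pattern, sketched against the vdpa config-ops API (the wrapper function is illustrative; set_vq_affinity is an optional op in struct vdpa_config_ops, so it must be probed before use):

```c
#include <linux/vdpa.h>

/* vdpa config ops are optional; probe before calling. */
static void set_affinity_if_supported(struct vdpa_device *vdpa, u16 qid,
				      const struct cpumask *mask)
{
	const struct vdpa_config_ops *ops = vdpa->config;

	if (ops->set_vq_affinity)
		ops->set_vq_affinity(vdpa, qid, mask);
}
```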
-
Alvaro Karsz authored
The callback sends a resume command to the DPU through the control mechanism.

Signed-off-by: Alvaro Karsz <alvaro.karsz@solid-run.com>
Message-Id: <20230502131048.61134-1-alvaro.karsz@solid-run.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
-