- 01 Nov, 2023 34 commits
-
-
Si-Wei Liu authored
In order to reduce excessive memory mapping cost in live migration and VM reboot, it is desirable to decouple the vhost-vdpa IOTLB abstraction from the virtio device life cycle, i.e. mappings can be kept intact across virtio device reset. Leverage the .reset_map callback, which is meant to destroy the iotlb on the given ASID and recreate the 1:1 passthrough/identity mapping. To be consistent, the mapping on device creation is initialized to passthrough/identity, with PA 1:1 mapped as IOVA. With this, the device .reset op doesn't have to maintain and clean up memory mappings by itself. Additionally, implement .compat_reset to cater for older userspace, which may wish to see mappings cleared during reset.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <1697880319-4937-8-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
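For reference, a minimal sketch of what such a .reset_map implementation could look like in a simulator-style parent driver, assuming a vhost_iotlb-backed IOTLB per ASID (get_asid_iotlb() is a hypothetical lookup helper, not the actual driver API):

    /* Hedged sketch: drop all mappings for @asid and rebuild the 1:1
     * passthrough/identity mapping (PA mapped as IOVA), so that .reset
     * no longer needs to touch mappings at all.
     */
    static int example_reset_map(struct vdpa_device *vdev, unsigned int asid)
    {
            struct vhost_iotlb *iotlb = get_asid_iotlb(vdev, asid); /* hypothetical */

            if (!iotlb)
                    return -EINVAL;

            vhost_iotlb_reset(iotlb);
            /* VHOST_ACCESS_RW comes from the vhost uapi headers. */
            return vhost_iotlb_add_range(iotlb, 0, ULONG_MAX - 1, 0,
                                         VHOST_ACCESS_RW);
    }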
-
Si-Wei Liu authored
Since commit 6f5312f8 ("vdpa/mlx5: Add support for running with virtio_vdpa"), mlx5_vdpa starts with a preallocated 1:1 DMA MR at device creation time. This 1:1 DMA MR is implicitly destroyed when the first .set_map call is invoked, at which point callers like vhost-vdpa start to set up custom mappings. When the .reset callback is invoked, the custom mappings are cleared and the 1:1 DMA MR is re-created.

In order to reduce excessive memory mapping cost in live migration, it is desirable to decouple the vhost-vdpa IOTLB abstraction from the virtio device life cycle, i.e. mappings can be kept intact across virtio device reset. Leverage the .reset_map callback, which is meant to destroy the regular MR (including the cvq mapping) on the given ASID and recreate the initial DMA mapping. That way, the device .reset op no longer has to maintain and clean up memory mappings by itself. Additionally, implement .compat_reset to cater for older userspace, which may wish to see mappings cleared during reset.

Co-developed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Message-Id: <1697880319-4937-7-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
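A hedged sketch of how the two reset entry points might be wired up in such a driver; mlx5_vdpa_do_reset() and its flag plumbing are illustrative names, not the exact mlx5 code:

    /* Hedged sketch: plain .reset leaves mappings intact, while
     * .compat_reset forwards the clean-map request for old userspace
     * that expects the legacy map-clearing behaviour.
     */
    static int mlx5_vdpa_reset(struct vdpa_device *vdev)
    {
            return mlx5_vdpa_do_reset(vdev, 0);          /* keep maps */
    }

    static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
    {
            return mlx5_vdpa_do_reset(vdev, flags);      /* may clean maps */
    }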
-
Si-Wei Liu authored
Using the .compat_reset op from the previous patch, the buggy .reset behaviour can be kept as-is for older userspace apps which don't ack the IOTLB_PERSIST backend feature. As this compatibility quirk is limited to those drivers that used to be buggy in the past, it won't change the behaviour or affect ABI on setups with an API-compliant driver. The separation of .compat_reset from the regular .reset allows vhost-vdpa to know which drivers had broken behaviour before, so it can apply the corresponding compatibility quirk to the individual driver whenever needed. Compared to overloading the existing .reset with flags, .compat_reset won't cause any extra burden to the implementation of every compliant driver.

[mst: squashed in two fixup commits]

Message-Id: <1697880319-4937-6-git-send-email-si-wei.liu@oracle.com>
Message-Id: <1698102863-21122-1-git-send-email-si-wei.liu@oracle.com>
Reported-by: Dragos Tatulea <dtatulea@nvidia.com>
Tested-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <1698275594-19204-1-git-send-email-si-wei.liu@oracle.com>
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
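A hedged sketch of the selection logic on the vhost-vdpa side, with userspace_acked_iotlb_persist() as a hypothetical predicate standing in for the actual backend-feature check:

    /* Hedged sketch: only fall back to the legacy map-clearing reset
     * when userspace has not acked IOTLB_PERSIST and the parent
     * implements .compat_reset.
     */
    static int do_vdpa_reset(struct vhost_vdpa *v)
    {
            struct vdpa_device *vdpa = v->vdpa;
            const struct vdpa_config_ops *ops = vdpa->config;
            u32 flags = 0;

            if (!userspace_acked_iotlb_persist(v))       /* hypothetical */
                    flags |= VDPA_RESET_F_CLEAN_MAP;

            if (flags && ops->compat_reset)
                    return ops->compat_reset(vdpa, flags);
            return ops->reset(vdpa);
    }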
-
Si-Wei Liu authored
Some device-specific IOMMU parent drivers have long-standing bogus behaviour that mistakenly cleans up the maps during .reset. By definition, this is a violation of the on-chip IOMMU ops (i.e. .set_map, or .dma_map & .dma_unmap) in those offending drivers, as the removal of internal maps is completely invisible to the upper layer, causing an inconsistent view between userspace and the kernel. Some userspace apps like QEMU get around this brokenness by proactively removing and adding back all the maps around vdpa device reset, but such a workaround actually penalizes other well-behaved driver setups, where vdpa reset always comes with the associated mapping cost, especially for kernel vDPA devices (use_va=false) that have a high cost on pinning. It's imperative to rectify this behaviour and remove the problematic code from all those non-compliant parent drivers.

However, we cannot unconditionally remove the bogus map-cleaning code from the buggy .reset implementations, as there might exist userspace apps that already rely on the behaviour on some setups. Introduce a .compat_reset driver op to keep compatibility with older userspace. New and well-behaved parent drivers should not bother to implement such an op; only those drivers that are doing, or used to do, non-compliant map-cleaning reset will have to.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Message-Id: <1697880319-4937-5-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
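A hedged sketch of where the new op sits next to .reset in the config ops (the authoritative kerneldoc lives in include/linux/vdpa.h):

    struct vdpa_config_ops {
            /* ... existing ops ... */
            int (*reset)(struct vdpa_device *vdev);
            /* Optional: emulate the legacy map-clearing reset for old
             * userspace; compliant parents need not implement this.
             */
            int (*compat_reset)(struct vdpa_device *vdev, u32 flags);
            /* ... */
    };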
-
Si-Wei Liu authored
Userspace needs this feature flag to distinguish if the vhost-vdpa iotlb in the kernel can be trusted to persist IOTLB mappings across vDPA reset. Without it, userspace has no way to tell apart if it's running on an older kernel, which could silently drop all iotlb mappings across vDPA reset, especially with a broken parent driver implementation for the .reset driver op. A broken driver may incorrectly drop all mappings of its own as part of .reset, which inadvertently ends up with corrupted mapping state between vhost-vdpa userspace and the kernel. As a workaround, to make the mapping behaviour predictable across reset, userspace has to proactively remove all mappings before vDPA reset and then restore all of them afterwards. This workaround is done unconditionally on top of all parent drivers today, due to the parent driver implementation issue and no means to differentiate. It has been in QEMU since day one, when the corresponding vhost-vdpa userspace backend came to the world.

There are 3 cases where the backend may claim this feature bit:
- parent device that has to work with the platform IOMMU
- parent device with on-chip IOMMU that has the expected .reset_map support in the driver
- parent device with a vendor-specific IOMMU implementation that already has persistent IOTLB mapping and has to specifically declare this backend feature

The reason why .reset_map is one of the preconditions for persistent iotlb is that without it, vhost-vdpa can't switch the iotlb back to the initial state later on, especially for the on-chip IOMMU case, which starts with identity mapping at device creation. virtio-vdpa requires the on-chip IOMMU to perform 1:1 passthrough translation from PA to IOVA as-is to begin with, and .reset_map is the only means to turn the iotlb back to identity mapping mode after vhost-vdpa is gone.

The difference in behaviour did not matter as QEMU unmaps all the memory when unregistering the memory listener at vhost_vdpa_dev_start(started = false), but the backend acknowledging this feature flag allows QEMU to make sure it is safe to skip this unmap & map in the case of a vhost stop & start cycle. In that sense, this feature flag is actually a signal for userspace to know that the driver bug has been solved; not offering it indicates that userspace cannot trust the kernel to retain the maps.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <1697880319-4937-4-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
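A hedged userspace-side sketch of probing the flag (the ioctl and feature-bit macros come from the vhost uapi headers; error handling is trimmed):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Hedged sketch: if the kernel offers IOTLB_PERSIST, the unmap/remap
     * workaround around vDPA reset can be skipped; otherwise keep it for
     * old kernels.
     */
    static int iotlb_persists_across_reset(int vhost_fd)
    {
            uint64_t features = 0;

            if (ioctl(vhost_fd, VHOST_GET_BACKEND_FEATURES, &features))
                    return 0;   /* be conservative on error */

            return !!(features & (1ULL << VHOST_BACKEND_F_IOTLB_PERSIST));
    }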
-
Si-Wei Liu authored
Devices with an on-chip IOMMU or a vendor-specific IOTLB implementation may need to restore the iotlb mapping to the initial or default state using the .reset_map op, as it's desirable for some parent devices to not work with DMA ops and instead maintain a simple IOMMU model with .reset_map. In particular, device reset should not cause the mapping to go away in such an IOTLB model, so persistent mapping is implied across reset. Before the userspace process using vhost-vdpa is gone, give it a chance to reset the iotlb back to the initial state in vhost_vdpa_cleanup().

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <1697880319-4937-3-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
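A hedged sketch of the cleanup-time call, with for_each_used_asid() as a hypothetical iterator over the ASIDs the process actually used:

    /* Hedged sketch: on vhost-vdpa teardown, restore every ASID's IOTLB
     * to its initial state if the parent implements .reset_map.
     */
    static void reset_maps_on_cleanup(struct vhost_vdpa *v)
    {
            struct vdpa_device *vdpa = v->vdpa;
            const struct vdpa_config_ops *ops = vdpa->config;
            unsigned int asid;

            if (!ops->reset_map)
                    return;

            for_each_used_asid(v, asid)          /* hypothetical */
                    ops->reset_map(vdpa, asid);
    }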
-
Si-Wei Liu authored
Some device-specific IOMMU parent drivers have long-standing bogus behaviour that mistakenly cleans up the maps during .reset. By definition, this is a violation of the on-chip IOMMU ops (i.e. .set_map, or .dma_map & .dma_unmap) in those offending drivers, as the removal of internal maps is completely invisible to the upper layer, causing an inconsistent view between userspace and the kernel. Some userspace apps like QEMU get around this brokenness by proactively removing and adding back all the maps around vdpa device reset, but such a workaround actually penalizes other well-behaved driver setups, where vdpa reset always comes with the associated mapping cost, especially for kernel vDPA devices (use_va=false) that have a high cost on pinning. It's imperative to rectify this behaviour and remove the problematic code from all those non-compliant parent drivers.

The reason why a separate .reset_map op is introduced is that it allows a simple on-chip IOMMU model without exposing too much device implementation detail to the upper vdpa layer. The .dma_map/.dma_unmap or .set_map driver API is meant to be used to manipulate the IOTLB mappings, and has been abstracted in a way similar to how a real IOMMU device maps or unmaps pages for certain memory ranges. However, apart from this there also exist other mapping needs, in which case 1:1 passthrough mapping has to be used by other users (read: virtio-vdpa). To ease parent/vendor driver implementation and to avoid abusing DMA ops in an unexpected way, these on-chip IOMMU devices can start with 1:1 passthrough mapping mode initially at the time of creation. The .reset_map op can then be used to switch the iotlb back to this initial state without having to expose a complex two-dimensional IOMMU device model.

The .reset_map op is not a MUST for every parent that implements the .dma_map or .set_map API: devices may work with DMA ops directly by implementing their own way to manipulate system memory mappings, and so don't have to use .reset_map to achieve a simple IOMMU device model for 1:1 passthrough mapping.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1697880319-4937-2-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
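A hedged sketch of the op's signature and contract as described above (the authoritative kerneldoc is in include/linux/vdpa.h):

    /* Restore the IOTLB of @asid to the driver's initial/default state,
     * e.g. the 1:1 passthrough mapping set up at device creation time.
     * Optional; only meaningful for parents with an on-chip IOMMU or a
     * vendor-specific IOTLB implementation.
     */
    int (*reset_map)(struct vdpa_device *vdev, unsigned int asid);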
-
Xuan Zhuo authored
On some buggy devices, the common cfg size may not match the features. This patch checks the common cfg size against the features (VIRTIO_F_NOTIF_CONFIG_DATA, VIRTIO_F_RING_RESET). When the common cfg size does not match the corresponding feature, we fail the probe and print an error message.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231019034902.7346-1-xuanzhuo@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
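A hedged sketch of one such check; the common_len field (the stored length of the common cfg capability) is an assumption here, and the real patch performs the same offsetofend()-style check per feature:

    /* Hedged sketch: a device offering VIRTIO_F_RING_RESET must expose a
     * common cfg region large enough to contain the queue_reset field.
     */
    static int check_ring_reset_size(struct virtio_pci_modern_device *mdev)
    {
            u64 features = vp_modern_get_features(mdev);

            if ((features & BIT_ULL(VIRTIO_F_RING_RESET)) &&
                mdev->common_len <   /* assumed field name */
                offsetofend(struct virtio_pci_modern_common_cfg, queue_reset))
                    return -EINVAL;  /* fail the probe */
            return 0;
    }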
-
zhenwei pi authored
The following code has an implicit conversion from size_t to u32:

    (u32)max_size = (size_t)virtio_max_dma_size(vdev);

This may overflow, e.g. (size_t)4G -> (u32)0. Once virtio_max_dma_size() returns a size larger than U32_MAX, use U32_MAX instead.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20230904061045.510460-1-pizhenwei@bytedance.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
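A hedged one-line sketch of the clamp (the actual fix in the driver may be shaped slightly differently):

    /* Clamp before the implicit size_t -> u32 narrowing can wrap. */
    u32 max_size = min_t(size_t, virtio_max_dma_size(vdev), U32_MAX);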
-
Xuan Zhuo authored
Add checks to check_offsets(void) for queue_notify_data and queue_reset.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231010031120.81272-5-xuanzhuo@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
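A hedged sketch of the added compile-time checks; the VIRTIO_PCI_COMMON_Q_* constant names follow the existing style in the file but should be verified against the merged code:

    static inline void check_offsets(void)
    {
            /* ... existing checks ... */
            BUILD_BUG_ON(VIRTIO_PCI_COMMON_Q_NDATA !=
                         offsetof(struct virtio_pci_modern_common_cfg,
                                  queue_notify_data));
            BUILD_BUG_ON(VIRTIO_PCI_COMMON_Q_RESET !=
                         offsetof(struct virtio_pci_modern_common_cfg,
                                  queue_reset));
    }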
-
Xuan Zhuo authored
This patch adds the definition of the VIRTIO_F_NOTIF_CONFIG_DATA feature bit in the relevant header file. This feature indicates that the driver uses the data provided by the device as a virtqueue identifier in available buffer notifications. It comes from here: https://github.com/oasis-tcs/virtio-spec/issues/89

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Message-Id: <20231010031120.81272-2-xuanzhuo@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
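A hedged sketch of the uapi addition; the virtio spec assigns this feature bit number 39, but verify against the merged header:

    /* Driver passes the device-supplied queue notification data back in
     * available buffer notifications.
     */
    #define VIRTIO_F_NOTIF_CONFIG_DATA      39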
-
Greg Kroah-Hartman authored
Now that the driver core allows struct class to be in read-only memory, we should declare all 'class' structures at build time and place them into read-only memory, instead of dynamically allocating them at runtime.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: Xie Yongji <xieyongji@bytedance.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Message-Id: <2023100643-tricolor-citizen-6c2d@gregkh>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
Acked-by: Jason Wang <jasowang@redhat.com>
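A hedged sketch of the pattern (the class name here is illustrative):

    /* Build-time, read-only class instead of a runtime class_create(). */
    static const struct class example_class = {
            .name = "example",
    };

    static int __init example_init(void)
    {
            /* class_register() accepts a const class after the driver
             * core changes this patch depends on.
             */
            return class_register(&example_class);
    }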
-
Geert Uytterhoeven authored
Fix a misspelling of "preceding".

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Message-Id: <b57b882675809f1f9dacbf42cf6b920b2bea9cba.1695903476.git.geert+renesas@glider.be>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Shannon Nelson authored
Finally following up on Simon's suggestion for some kdoc attention on struct virtio_pci_modern_device.

Link: https://lore.kernel.org/netdev/ZE%2FQS0lnUvxFacjf@corigine.com/
Cc: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230911213104.14391-1-shannon.nelson@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
-
Shawn.Shao authored
Fix the wrong drivers_autoprobe path name in the documentation.

Signed-off-by: Shawn.Shao <shawn.shao@jaguarmicro.com>
Message-Id: <20230821011535.1117-1-shawn.shao@jaguarmicro.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
-
Dragos Tatulea authored
As Eli Cohen moved to other work, I'll be the contact point for mlx5_vdpa.

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20230925160654.1558627-1-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Xueshi Hu authored
After commit 68f2736a ("mm: Convert all PageMovable users to movable_operations"), the execution path has been changed to:

    move_to_new_folio
        movable_operations->migrate_page
            balloon_page_migrate
                balloon_page_migrate->balloon_page_migrate
                    balloon_page_migrate

Correct the outdated comment.

Signed-off-by: Xueshi Hu <xueshi.hu@smartx.com>
Message-Id: <20230813140709.835536-1-xueshi.hu@smartx.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
-
Eugenio Pérez authored
Offer this backend feature, as mlx5 is compatible with it. This allows live migration with CVQ, dynamically switching between passthrough and shadow virtqueue.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230703142514.363256-1-eperezma@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Dragos Tatulea authored
For the following sequence:
- cvq group is in ASID 0
- .set_map(1, cvq_iotlb)
- .set_group_asid(cvq_group, 1)

... the cvq mapping from ASID 0 will be used. This is not always correct behaviour.

This patch adds support for the above-mentioned flow by saving the iotlb on each .set_map and updating the cvq iotlb with it on a cvq group change, as sketched below.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-18-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
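A hedged sketch of the group-change side of that flow; saved_iotlb[] and update_cvq_iotlb() are illustrative stand-ins for the driver's actual bookkeeping:

    /* Hedged sketch: when the cvq group moves to a new ASID, rebuild the
     * cvq iotlb from the mappings last passed to .set_map for that ASID.
     */
    static int example_set_group_asid(struct vdpa_device *vdev, u32 group,
                                      unsigned int asid)
    {
            struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);

            mvdev->group2asid[group] = asid;
            if (group == MLX5_VDPA_CVQ_GROUP)
                    return update_cvq_iotlb(mvdev, mvdev->saved_iotlb[asid]);
            return 0;
    }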
-
Dragos Tatulea authored
They will be used in a follow-up patch. For dup_iotlb, avoid the src == dst case, which is an error.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-17-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
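A hedged sketch of what dup_iotlb's copy loop with that guard could look like (the real helper may have additional special cases this sketch omits):

    static int dup_iotlb(struct vhost_iotlb *dst, struct vhost_iotlb *src)
    {
            struct vhost_iotlb_map *map;
            int err;

            /* Duplicating an iotlb onto itself is a caller bug. */
            if (dst == src)
                    return -EINVAL;

            for (map = vhost_iotlb_itree_first(src, 0, ULLONG_MAX); map;
                 map = vhost_iotlb_itree_next(map, 0, ULLONG_MAX)) {
                    err = vhost_iotlb_add_range(dst, map->start, map->last,
                                                map->addr, map->perm);
                    if (err)
                            return err;
            }
            return 0;
    }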
-
Dragos Tatulea authored
Vq descriptor mappings are supported in hardware by filling in an additional mkey which contains the descriptor mappings to the hw vq.

A previous patch in this series added support for hw mkey (mr) creation for ASID 1. This patch fills in both the vq data and vq descriptor mkeys based on the group ASID mapping. The feature is signaled to the vdpa core through the presence of the .get_vq_desc_group op.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-16-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
Introduce the vq descriptor group and an mr per ASID. Until now, .set_map on ASID 1 was only updating the cvq iotlb; from now on it also creates an mkey for it. The current patch doesn't use it, but follow-up patches will add hardware support for mapping the vq descriptors.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-15-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
The current flow for updating an mr works directly on mvdev->mr, which makes it cumbersome to handle multiple new mr structs. This patch makes the flow more straightforward by having mlx5_vdpa_create_mr return a new mr which will update the old mr (if any). The old mr will be deleted and unlinked from mvdev. For the case when the iotlb is empty (not NULL), the old mr will be cleared. This change paves the way for adding mrs for different ASIDs. The "initialized" bool is no longer needed, as mr is now a pointer in the mlx5_vdpa_dev struct which will be NULL when not initialized.

Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-14-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
The mutex is named as if it protects only the mkey, but in reality it is a global lock for all mr resources. Shift the mutex to its rightful location (struct mlx5_vdpa_dev) and give it a more appropriate name.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231018171456.1624030-13-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
This patch adapts the mr creation/deletion code to be able to work with any given mr struct pointer. All the APIs are adapted to take an extra parameter for the mr. mlx5_vdpa_create/delete_mr doesn't need an ASID parameter anymore; the check is done in the caller instead (mlx5_set_map). This change is needed for a follow-up patch which will introduce an additional mr for the vq descriptor data.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231018171456.1624030-12-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
Make mlx5_destroy_mr symmetric to mlx5_create_mr.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-11-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
Now that the cvq code is out of mlx5_vdpa_create/destroy_mr, the "dvq" functions can be folded into their callers. Having "dvq" in the naming will no longer be accurate in the downstream patches.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-10-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
The reslock is taken while refresh is called, but iommu_lock is more specific to this resource. So take the iommu_lock during cvq iotlb refresh.

Based on Eugenio's patch [0].

[0] https://lore.kernel.org/lkml/20230112142218.725622-4-eperezma@redhat.com/

Acked-by: Jason Wang <jasowang@redhat.com>
Suggested-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-9-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
The handling of the cvq iotlb is currently coupled with the creation and destruction of the hardware mkeys (mr).

This patch moves cvq iotlb handling into its own function and shifts it to a scope that is not related to mr handling. As cvq handling is just a prune_iotlb + dup_iotlb cycle, put it all in the same "update" function. Finally, the destruction path is handled by directly pruning the iotlb.

After this move is done, the ASID mr code can be collapsed into a single function.

Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-8-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Dragos Tatulea authored
Necessary for the upcoming cvq separation from mr allocation.

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Message-Id: <20231018171456.1624030-7-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
-
Si-Wei Liu authored
With the _F_DESC_ASID backend feature, the device can now support the VHOST_VDPA_GET_VRING_DESC_GROUP ioctl, and it may expose the descriptor table (including avail and used ring) in a different group than the buffers it contains. This new uAPI will fetch the group ID of the descriptor table.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231018171456.1624030-6-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
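A hedged userspace sketch of calling the new ioctl, following the pattern of the existing VHOST_VDPA_GET_VRING_GROUP (error handling trimmed):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Fetch the group ID of virtqueue @qindex's descriptor table. */
    static int get_desc_group(int vhost_fd, unsigned int qindex,
                              uint32_t *group)
    {
            struct vhost_vring_state state = { .index = qindex };

            if (ioctl(vhost_fd, VHOST_VDPA_GET_VRING_DESC_GROUP, &state))
                    return -1;
            *group = state.num;
            return 0;
    }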
-
Si-Wei Liu authored
Userspace knows if the device has a dedicated descriptor group or not by checking this feature bit. It's only exposed if the vdpa driver backend implements the .get_vq_desc_group() operation callback. Userspace trying to negotiate this feature when it or the dependent _F_IOTLB_ASID feature hasn't been exposed will result in an error.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231018171456.1624030-5-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
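A hedged fragment of how the negotiation check might look in the backend-features path (macro names follow the vhost uapi style used elsewhere in this series):

    /* Refuse _F_DESC_ASID unless the parent implements the op and
     * _F_IOTLB_ASID is negotiated alongside it.
     */
    if ((features & BIT_ULL(VHOST_BACKEND_F_DESC_ASID)) &&
        (!ops->get_vq_desc_group ||
         !(features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID))))
            return -EOPNOTSUPP;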
-
Si-Wei Liu authored
In some cases, the access to the virtqueue's descriptor area, device area and driver area (precluding the indirect descriptor table in guest memory) may have to be confined to a different address space than where its buffers reside. Without loss of simplicity and generality with already established terminology, let's fold up these 3 areas and call them as a whole the descriptor table group, or descriptor group for short. Specifically, in the case of split virtqueues, the descriptor group consists of the regions for the Descriptor Table, Available Ring and Used Ring; for the packed virtqueue layout, the descriptor group contains the Descriptor Ring, plus the Driver and Device Event Suppression structures.

The group ID for a dedicated descriptor group can be obtained through a new .get_vq_desc_group() op. If a driver implements this op, it means that the descriptor, device and driver areas of the virtqueue may reside in a different group than where its buffers reside, a.k.a the default virtqueue group obtained through the .get_vq_group() op. In principle, the descriptor group may or may not have the same group ID as the default group. Even if the descriptor group has a different ID, meaning the vq's descriptor group areas can optionally move to a separate address space than where guest memory resides, the descriptor group may still start out in the default address space, same as where its buffers reside. To move the descriptor group to a different address space, .set_group_asid() has to be called to change the ASID binding for the group, which is no different than what needs to be done on any other virtqueue group. On the other hand, the .reset() semantics also apply to the descriptor table group, meaning device reset will clear all ASID bindings and move all virtqueue groups, including the descriptor group, back to the default address space, i.e. ASID 0.

QEMU's shadow virtqueue is going to utilize the dedicated descriptor group to speed up map and unmap operations, yielding tremendous downtime reduction by avoiding the full and slow remap cycle in SVQ switching.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231018171456.1624030-4-dtatulea@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Tested-by: Lei Yang <leiyang@redhat.com>
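A hedged sketch of the new optional op (the authoritative kerneldoc lives in include/linux/vdpa.h):

    /* Return the virtqueue group ID that vq @idx's descriptor table,
     * available ring and used ring (or their packed-layout equivalents)
     * belong to; may differ from .get_vq_group(), the buffers' group.
     */
    u32 (*get_vq_desc_group)(struct vdpa_device *vdev, u16 idx);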
-
https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git
Michael S. Tsirkin authored
This merges a single commit that contains changes to mlx5_ifc.h. It's required to support vq descriptor mappings in mlx5/vdpa.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 30 Oct, 2023 1 commit
-
-
Linus Torvalds authored
-
- 28 Oct, 2023 5 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull misc x86 fixes from Ingo Molnar:

- Fix a possible CPU hotplug deadlock bug caused by the new TSC synchronization code

- Fix a legacy PIC discovery bug that results in device troubles on affected systems, such as non-working keyboards, etc.

- Add a new Intel CPU model number to <asm/intel-family.h>

* tag 'x86-urgent-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/tsc: Defer marking TSC unstable to a worker
  x86/i8259: Skip probing when ACPI/MADT advertises PCAT compatibility
  x86/cpu: Add model number for Intel Arrow Lake mobile processor
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull irq fix from Ingo Molnar:
 "Restore unintentionally lost quirk settings in the GIC irqchip driver, which broke certain devices"

* tag 'irq-urgent-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic-v3-its: Don't override quirk settings with default values
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf event fix from Ingo Molnar:
 "Fix a potential NULL dereference bug"

* tag 'perf-urgent-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/core: Fix potential NULL deref
-
Linus Torvalds authored
Merge tag 'probes-fixes-v6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull probes fixes from Masami Hiramatsu:

- tracing/kprobes: Fix kernel-doc warnings for the variable length arguments

- tracing/kprobes: Fix to count the symbols in modules even if the module name is not specified, so that the user can probe symbols in modules without giving the module name

* tag 'probes-fixes-v6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing/kprobes: Fix symbol counting logic by looking at modules as well
  tracing/kprobes: Fix the description of variable length arguments
-
git://git.infradead.org/users/hch/dma-mapping
Linus Torvalds authored
Pull dma-mapping fix from Christoph Hellwig:

- reduce the initial dynamic swiotlb size to remove an annoying but harmless warning from the page allocator (Petr Tesarik)

* tag 'dma-mapping-6.6-2023-10-28' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: do not try to allocate a TLB bigger than MAX_ORDER pages
-