- 01 Dec, 2022 3 commits
-
-
Jason Gunthorpe authored
Following the pattern of io_uring, perf, skb, and bpf, iommufd will use user->locked_vm for accounting pinned pages. Ensure the value is included in the struct and export free_uid() as iommufd is modular.

user->locked_vm is the correct accounting to use for the ulimit because it is per-user, and the security sandboxing of locked pages is not supposed to be per-process. Other places (vfio, vdpa and infiniband) have used mm->pinned_vm and/or mm->locked_vm for accounting pinned pages, but this is only per-process and inconsistent with the new FOLL_LONGTERM users in the kernel.

Concurrent work is underway to try to put this in a cgroup, so everything can be consistent and the kernel can provide a FOLL_LONGTERM limit that actually provides security.

Link: https://lore.kernel.org/r/7-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
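A minimal sketch of the per-user accounting pattern referred to above, following the io_uring/perf style of charging against RLIMIT_MEMLOCK; the function name and error handling are illustrative, not iommufd's actual code:

    #include <linux/mm.h>
    #include <linux/sched/signal.h>
    #include <linux/sched/user.h>

    /* Try to charge npages to the user's locked-page budget. */
    static int example_account_pinned(struct user_struct *user,
                                      unsigned long npages)
    {
        unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
        unsigned long cur_pages, new_pages;

        do {
            cur_pages = atomic_long_read(&user->locked_vm);
            new_pages = cur_pages + npages;
            if (new_pages > lock_limit)
                return -ENOMEM;
        } while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
                                     new_pages) != cur_pages);

        return 0;
    }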
-
Jason Gunthorpe authored
This is the basic infrastructure of a new miscdevice to hold the iommufd IOCTL API. It provides:

- A miscdevice to create file descriptors to run the IOCTL interface over
- A table-based ioctl dispatch and centralized extendable pre-validation step
- An xarray mapping userspace IDs to kernel objects. The design has multiple inter-related objects held within a single IOMMUFD fd
- A simple usage count to build a graph of object relations and protect against hostile userspace racing ioctls

The only IOCTL provided in this patch is the generic 'destroy any object by handle' operation.

Link: https://lore.kernel.org/r/6-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
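A rough illustration of the ID-to-object pattern described above, using an xarray plus a usage count; all names here are hypothetical and the real iommufd object model is more involved:

    #include <linux/refcount.h>
    #include <linux/xarray.h>

    struct example_object {
        refcount_t users;
        u32 id;
    };

    static DEFINE_XARRAY_ALLOC(example_objects);

    /* Give the object a userspace-visible ID. */
    static int example_object_install(struct example_object *obj)
    {
        refcount_set(&obj->users, 1);
        return xa_alloc(&example_objects, &obj->id, obj, xa_limit_32b,
                        GFP_KERNEL);
    }

    /* Translate an ID from userspace back to its object, elevating the count. */
    static struct example_object *example_object_get(u32 id)
    {
        struct example_object *obj;

        xa_lock(&example_objects);
        obj = xa_load(&example_objects, id);
        if (obj && !refcount_inc_not_zero(&obj->users))
            obj = NULL;
        xa_unlock(&example_objects);
        return obj;
    }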
-
Kevin Tian authored
Add iommufd into the documentation tree, and supply initial documentation. Much of this is linked from code comments by kdoc.

Link: https://lore.kernel.org/r/5-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 29 Nov, 2022 4 commits
-
-
Jason Gunthorpe authored
Parse EXPORT_SYMBOL_NS_GPL() in addition to EXPORT_SYMBOL_GPL() for use with the -export flag.

Link: https://lore.kernel.org/r/4-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Acked-by: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
The span iterator travels over the indexes of the interval_tree, not the nodes, and classifies spans of indexes as either 'used' or 'hole'. 'used' spans are fully covered by nodes in the tree and 'hole' spans have no node intersecting the span. This is done greedily such that spans are maximally sized and every iteration step switches between used/hole.

As an example, a trivial allocator can be written as:

    for (interval_tree_span_iter_first(&span, itree, 0, ULONG_MAX);
         !interval_tree_span_iter_done(&span);
         interval_tree_span_iter_next(&span))
        if (span.is_hole &&
            span.last_hole - span.start_hole >= allocation_size - 1)
            return span.start_hole;

with all the tricky boundary conditions handled by the library code.

The following iommufd patches have several algorithms for their overlapping node interval trees that are significantly simplified with this kind of iteration primitive. As it seems generally useful, put it into lib/.

Link: https://lore.kernel.org/r/3-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
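For completeness, a small usage sketch of the iterator in its loop-macro form (assuming the interval_tree_for_each_span() convenience macro added alongside the low-level calls; the surrounding function is hypothetical):

    #include <linux/interval_tree.h>
    #include <linux/limits.h>
    #include <linux/printk.h>

    static void example_walk_spans(struct rb_root_cached *itree)
    {
        struct interval_tree_span_iter span;

        interval_tree_for_each_span(&span, itree, 0, ULONG_MAX) {
            if (span.is_hole)
                pr_debug("hole: %lx - %lx\n",
                         span.start_hole, span.last_hole);
            else
                pr_debug("used: %lx - %lx\n",
                         span.start_used, span.last_used);
        }
    }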
-
Lu Baolu authored
These complement the group interfaces used by VFIO and are for use by iommufd. The main difference is that multiple devices in the same group can all share the ownership by passing the same ownership pointer.

Move the common code into shared functions.

Link: https://lore.kernel.org/r/2-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
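A hedged sketch of the sharing rule described above: two devices in the same group may both claim DMA ownership as long as they pass the same owner cookie (the helper and cookie here are illustrative):

    #include <linux/iommu.h>

    static int example_claim_two(struct device *a, struct device *b,
                                 void *owner_cookie)
    {
        int ret;

        ret = iommu_device_claim_dma_owner(a, owner_cookie);
        if (ret)
            return ret;

        /* Same cookie, so this succeeds even if 'b' shares a's group. */
        ret = iommu_device_claim_dma_owner(b, owner_cookie);
        if (ret)
            iommu_device_release_dma_owner(a);
        return ret;
    }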
-
Jason Gunthorpe authored
This queries if a domain linked to a device should expect to support enforce_cache_coherency() so iommufd can negotiate the rules for when a domain should be shared or not.

For iommufd a device that declares IOMMU_CAP_ENFORCE_CACHE_COHERENCY will not be attached to a domain that does not support it.

Link: https://lore.kernel.org/r/1-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Tested-by: Yu He <yu.he@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
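A rough sketch of the negotiation the text describes; the helper is hypothetical, and iommufd's real logic lives in its hw_pagetable handling:

    #include <linux/iommu.h>

    static bool example_domain_usable(struct iommu_domain *domain,
                                      struct device *dev)
    {
        /*
         * A device that declares the capability must only be attached to
         * domains that can actually enforce cache coherency.
         */
        if (device_iommu_capable(dev, IOMMU_CAP_ENFORCE_CACHE_COHERENCY) &&
            !domain->ops->enforce_cache_coherency)
            return false;
        return true;
    }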
-
- 03 Nov, 2022 14 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd
Joerg Roedel authored
iommu: Define EINVAL as device/domain incompatibility

This series replaces the previous EMEDIUMTYPE patch in a VFIO series:
https://lore.kernel.org/kvm/Yxnt9uQTmbqul5lf@8bytes.org/

The purpose is to regulate all existing ->attach_dev callback functions to use EINVAL exclusively for an incompatibility error between a device and a domain. This allows VFIO and IOMMUFD to detect such a soft error, and then try a different domain with the same device.

Among all the patches, the first two are preparatory changes, followed by one patch to update kdocs and another three patches for the enforcement effort.

Link: https://lore.kernel.org/r/cover.1666042872.git.nicolinc@nvidia.com
-
Lu Baolu authored
Rename iommu-sva-lib.c[h] to iommu-sva.c[h] as it contains all the code for the SVA implementation in the iommu core.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-14-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
Tweak the I/O page fault handling framework to route the page faults to the domain and call the page fault handler retrieved from the domain. This enables the I/O page fault handling framework to serve more usage scenarios as long as they have an IOMMU domain and install a page fault handler in it. Some unused functions are also removed to avoid dead code.

The iommu_get_domain_for_dev_pasid(), which retrieves the attached domain for a {device, PASID} pair, is used. It will be used by the page fault handling framework which knows the {device, PASID} reported from the iommu driver. We have a guarantee that the SVA domain doesn't go away during IOPF handling, because unbind() won't free the domain until all the pending page requests have been flushed from the pipeline. The drivers either call iopf_queue_flush_dev() explicitly, or in the stall case, the device driver is required to flush all DMAs including stalled transactions before calling unbind().

This also renames iopf_handle_group() to iopf_handler() to avoid confusion.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-13-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
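A condensed sketch of the routing step the text describes, using the {device, PASID} lookup and the per-domain handler; this is illustrative rather than the exact io-pgfault.c code:

    #include <linux/iommu.h>

    static enum iommu_page_response_code
    example_route_iopf(struct iommu_fault *fault, struct device *dev)
    {
        struct iommu_domain *domain;

        /* Find the domain attached to this {device, PASID}. */
        domain = iommu_get_domain_for_dev_pasid(dev, fault->prm.pasid, 0);
        if (!domain || !domain->iopf_handler)
            return IOMMU_PAGE_RESP_INVALID;

        /* Hand the fault to whoever installed the handler on the domain. */
        return domain->iopf_handler(fault, domain->fault_data);
    }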
-
Lu Baolu authored
This adds some mechanisms around the iommu_domain so that the I/O page fault handling framework can route a page fault to the domain and call the fault handler from it.

Add pointers to the page fault handler and its private data in struct iommu_domain. The fault handler will be called with the private data as a parameter once a page fault is routed to the domain. Any kernel component which owns an iommu domain can install a handler and its private parameter so that the page fault can be further routed and handled.

This also prepares the SVA implementation to be the first consumer of the per-domain page fault handling model. The I/O page fault handler for SVA is copied to the SVA file with mmget_not_zero() added before mmap_read_lock().

Suggested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-12-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
These ops have been deprecated and there's no need for them anymore. Remove them to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-11-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
The existing iommu SVA interfaces are implemented by calling the SVA-specific iommu ops provided by the IOMMU drivers. There's no need for any SVA-specific ops in the iommu_ops vector anymore as we can achieve this through the generic attach/detach_dev_pasid domain ops.

This refactors the IOMMU SVA interfaces implementation by using the iommu_attach/detach_device_pasid interfaces and aligns them with the concept of the SVA iommu domain. Put the new SVA code in the SVA-related file in order to make it self-contained.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-10-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
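A simplified sketch of what the refactored bind path looks like when built on the generic PASID attach interfaces (locking, reference counting and PASID allocation are omitted; the function name is illustrative):

    #include <linux/iommu.h>

    static int example_sva_bind(struct device *dev, struct mm_struct *mm,
                                ioasid_t pasid)
    {
        struct iommu_domain *domain;
        int ret;

        /* One SVA domain wraps the mm's page table. */
        domain = iommu_sva_domain_alloc(dev, mm);
        if (!domain)
            return -ENOMEM;

        /* Attach the SVA domain to the device's PASID. */
        ret = iommu_attach_device_pasid(domain, dev, pasid);
        if (ret)
            iommu_domain_free(domain);
        return ret;
    }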
-
Lu Baolu authored
Add support for SVA domain allocation and provide an SVA-specific iommu_domain_ops. This implementation is based on the existing SVA code. Possible cleanup and refactoring are left for incremental changes later.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Link: https://lore.kernel.org/r/20221031005917.45690-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
Add support for SVA domain allocation and provide an SVA-specific iommu_domain_ops. This implementation is based on the existing SVA code. Possible cleanup and refactoring are left for incremental changes later.

The VT-d driver will also need to support setting a DMA domain to a PASID of a device. The current SVA implementation uses different data structures to track the domain and device PASID relationship. That's the reason why we need to check the domain type in the remove_dev_pasid callback. Eventually we'll consolidate the data structures and remove the need for the domain type check.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
The SVA iommu_domain represents a hardware pagetable that the IOMMU hardware could use for SVA translation. This adds some infrastructure to support SVA domains in the iommu core. It includes:

- Extend the iommu_domain to support a new IOMMU_DOMAIN_SVA domain type. The IOMMU drivers that support allocation of the SVA domain should provide their own SVA domain-specific iommu_domain_ops.
- Add a helper to allocate an SVA domain. The iommu_domain_free() is still used to free an SVA domain.

The report_iommu_fault() should be replaced by the new iommu_report_device_fault(). Leave the existing fault handler with the existing users; the newly added SVA members exclude it.

Suggested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
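A conceptual sketch of the new allocation helper, close in spirit to what the patch adds (fault plumbing and error details are simplified, and the function name is illustrative):

    #include <linux/iommu.h>
    #include <linux/sched/mm.h>

    static struct iommu_domain *example_sva_domain_alloc(struct device *dev,
                                                         struct mm_struct *mm)
    {
        const struct iommu_ops *ops = dev_iommu_ops(dev);
        struct iommu_domain *domain;

        /* Drivers that support SVA allocate the IOMMU_DOMAIN_SVA type. */
        domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
        if (!domain)
            return NULL;

        domain->type = IOMMU_DOMAIN_SVA;
        mmgrab(mm);             /* hold the mm for the domain's lifetime */
        domain->mm = mm;
        return domain;
    }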
-
Lu Baolu authored
Attaching an IOMMU domain to a PASID of a device is a generic operation for modern IOMMU drivers which support PASID-granular DMA address translation. Currently visible usage scenarios include (but are not limited to):

- SVA (Shared Virtual Address)
- kernel DMA with PASID
- hardware-assisted mediated device

This adds the set_dev_pasid domain op for setting the domain onto a PASID of a device and the remove_dev_pasid iommu op for removing any setup on a PASID of a device. This also adds interfaces for device drivers to attach/detach/retrieve a domain for a PASID of a device.

If multiple devices share a single group, it's fine as long as the fabric always routes every TLP marked with a PASID to the host bridge and only the host bridge. For example, ACS achieves this universally and has been checked when pci_enable_pasid() is called. As we can't reliably tell the source apart in a group, all the devices in a group have to be considered as the same source, and mapped to the same PASID table.

The DMA ownership is about the whole device (more precisely, the iommu group), including the RID and PASIDs. When the ownership is converted, the pasid array must be empty. This also adds the necessary checks in the DMA ownership interfaces.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
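Schematically, the ops an IOMMU driver wires up for this; the example_* functions are placeholders for driver-specific PASID table programming:

    #include <linux/iommu.h>

    static int example_set_dev_pasid(struct iommu_domain *domain,
                                     struct device *dev, ioasid_t pasid)
    {
        /* Install 'domain' as the translation for this device's PASID. */
        return 0;
    }

    static void example_remove_dev_pasid(struct device *dev, ioasid_t pasid)
    {
        /* Undo whatever was set up on this PASID of the device. */
    }

    static const struct iommu_domain_ops example_domain_ops = {
        .set_dev_pasid = example_set_dev_pasid,
    };

    /*
     * And in the driver's struct iommu_ops:
     *     .remove_dev_pasid = example_remove_dev_pasid,
     */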
-
Lu Baolu authored
The Requester ID/Process Address Space ID (PASID) combination identifies an address space distinct from the PCI bus address space, e.g., an address space defined by an IOMMU.

But the PCIe fabric routes Memory Requests based on the TLP address, ignoring any PASID (PCIe r6.0, sec 2.2.10.4), so a TLP with PASID that SHOULD go upstream to the IOMMU may instead be routed as a P2P Request if its address falls in a bridge window.

To ensure that all Memory Requests with PASID are routed upstream, only enable PASID if ACS P2P Request Redirect and Upstream Forwarding are enabled for the path leading to the device.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
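The enforcement described above reduces to a single path check before PASID is enabled; a hedged sketch, simplified from the shape of the actual change to pci_enable_pasid():

    #include <linux/pci.h>

    static bool example_pasid_path_safe(struct pci_dev *pdev)
    {
        /*
         * Require ACS P2P Request Redirect and Upstream Forwarding along
         * the whole path so that TLPs carrying a PASID always reach the
         * IOMMU instead of being redirected peer-to-peer.
         */
        return pci_acs_path_enabled(pdev, NULL, PCI_ACS_RR | PCI_ACS_UF);
    }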
-
Lu Baolu authored
The current kernel DMA with PASID support is based on SVA with the flag SVM_FLAG_SUPERVISOR_MODE. The IOMMU driver binds the kernel memory address space to a PASID of the device. The device driver programs the device with kernel virtual addresses (KVA) for DMA access. There have been security and functional issues with this approach:

- The lack of IOTLB synchronization upon kernel page table updates (vmalloc, module/BPF loading, CONFIG_DEBUG_PAGEALLOC etc.).
- Other than slightly more protection, using kernel virtual addresses (KVA) has little advantage over physical addresses.

There are also no use cases yet where DMA engines need kernel virtual addresses for in-kernel DMA.

This removes SVM_FLAG_SUPERVISOR_MODE support from the IOMMU interface. The device drivers are suggested to handle kernel DMA with PASID through the kernel DMA APIs.

The drvdata parameter in iommu_sva_bind_device() and all callbacks is not needed anymore. Clean it up as well.

Link: https://lore.kernel.org/linux-iommu/20210511194726.GP1002214@nvidia.com/
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
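With the drvdata parameter gone, a device driver's SVA usage looks roughly like this (a hedged sketch; the device programming step is driver specific and the function name is illustrative):

    #include <linux/err.h>
    #include <linux/iommu.h>
    #include <linux/sched.h>

    static int example_driver_use_sva(struct device *dev)
    {
        struct iommu_sva *handle;
        u32 pasid;

        handle = iommu_sva_bind_device(dev, current->mm);
        if (IS_ERR(handle))
            return PTR_ERR(handle);

        pasid = iommu_sva_get_pasid(handle);
        /* Program 'pasid' into the device and submit work here. */

        iommu_sva_unbind_device(handle);
        return 0;
    }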
-
Lu Baolu authored
Use this field to save the number of PASIDs that a device is able to consume. It is a generic attribute of a device and lifting it into the per-device dev_iommu struct could help to avoid the boilerplate code in various IOMMU drivers.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
Use this field to keep the number of PASIDs that the IOMMU hardware is able to support. This is a generic attribute of an IOMMU and lifting it into the per-IOMMU device structure makes it possible to allocate a PASID for a device without calls into the IOMMU drivers. Any iommu driver that supports PASID-related features should set this field before enabling them on the devices.

In the Intel IOMMU driver, intel_iommu_sm is moved to the CONFIG_INTEL_IOMMU enclave so that the pasid_supported() helper could be used in dmar.c without compilation errors.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Tony Zhu <tony.zhu@intel.com>
Link: https://lore.kernel.org/r/20221031005917.45690-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 01 Nov, 2022 5 commits
-
-
Nicolin Chen authored
The mtk_iommu and virtio drivers have places in the ->attach_dev callback functions that return hardcoded errnos instead of the values returned by the lower-level calls, but callers of these ->attach_dev callback functions may care. Propagate them directly without the extra conversions.

Link: https://lore.kernel.org/r/ca8c5a447b87002334f83325f28823008b4ce420.1666042873.git.nicolinc@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Nicolin Chen authored
Following the new rules in the include/linux/iommu.h kdocs, update all drivers' ->attach_dev callback functions to return EINVAL in the failure paths that are related to domain incompatibility. Also, drop adjacent error prints to prevent kernel log spam.

Link: https://lore.kernel.org/r/f52a07f7320da94afe575c9631340d0019a203a7.1666042873.git.nicolinc@nvidia.com
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Nicolin Chen authored
Following the new rules in the include/linux/iommu.h kdocs, EINVAL can now be used to indicate that a domain and a device are incompatible, by a caller that treats it as a soft failure and tries attaching to another domain.

On the other hand, some ->attach_dev callback functions return it for obvious device-specific errors, which results in some inefficiency in the caller's handling routine. Update these places to use corresponding errnos following the new rules.

Link: https://lore.kernel.org/r/5924c03bea637f05feb2a20d624bae086b555ec5.1666042872.git.nicolinc@nvidia.com
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Nicolin Chen authored
Cases like VFIO wish to attach a device to an existing domain that was not allocated specifically from the device. This raises a condition where the IOMMU driver can fail the domain attach because the domain and device are incompatible with each other. This is a soft failure that can be resolved by using a different domain.

Provide a dedicated errno (EINVAL) from the IOMMU driver during attach to indicate that the attach failed because of domain incompatibility. VFIO can use this to know that the attach is a soft failure and it should continue searching. Otherwise, the attach will be a hard failure and VFIO will return the code to userspace.

Update the kdocs to document the return value rules for the attach_dev op and APIs.

Link: https://lore.kernel.org/r/bd56d93c18621104a0fa1b0de31e9b760b81b769.1666042872.git.nicolinc@nvidia.com
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
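A schematic of the caller-side handling the text describes, treating -EINVAL as "incompatible, keep searching" and everything else as a hard error; the list walk and entry type are hypothetical, not VFIO's actual code:

    #include <linux/err.h>
    #include <linux/iommu.h>
    #include <linux/list.h>

    struct example_domain_entry {
        struct iommu_domain *domain;
        struct list_head node;
    };

    static struct iommu_domain *
    example_find_compatible(struct list_head *domains, struct iommu_group *group)
    {
        struct example_domain_entry *entry;
        int ret;

        list_for_each_entry(entry, domains, node) {
            ret = iommu_attach_group(entry->domain, group);
            if (!ret)
                return entry->domain;   /* compatible, reuse it */
            if (ret != -EINVAL)
                return ERR_PTR(ret);    /* hard failure, report it */
            /* -EINVAL: soft failure, try the next candidate domain. */
        }
        return NULL;                    /* caller allocates a new domain */
    }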
-
Nicolin Chen authored
The same checks are done in amd_iommu_probe_device(). If any of them fails there, then the device won't get a group, so there's no way for it to even reach amd_iommu_attach_device anymore.

Link: https://lore.kernel.org/r/c054654a81f2b675c73108fe4bf10e45335a721a.1666042872.git.nicolinc@nvidia.com
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 30 Oct, 2022 13 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/deller/linux-fbdev
Linus Torvalds authored
Pull fbdev fixes from Helge Deller:
 "A use-after-free bugfix in the smscufx driver and various minor error path fixes, smaller build fixes, sysfs fixes and typos in comments in the stifb, sisfb, da8xxfb, xilinxfb, sm501fb, gbefb and cyber2000fb drivers"

* tag 'fbdev-for-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/linux-fbdev:
  fbdev: cyber2000fb: fix missing pci_disable_device()
  fbdev: sisfb: use explicitly signed char
  fbdev: smscufx: Fix several use-after-free bugs
  fbdev: xilinxfb: Make xilinxfb_release() return void
  fbdev: sisfb: fix repeated word in comment
  fbdev: gbefb: Convert sysfs snprintf to sysfs_emit
  fbdev: sm501fb: Convert sysfs snprintf to sysfs_emit
  fbdev: stifb: Fall back to cfb_fillrect() on 32-bit HCRX cards
  fbdev: da8xx-fb: Fix error handling in .remove()
  fbdev: MIPS supports iomem addresses
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Linus Torvalds authored
Pull char/misc fixes from Greg KH:
 "Some small driver fixes for 6.1-rc3. They include:

  - iio driver bugfixes

  - counter driver bugfixes

  - coresight bugfixes, including a revert and then a second fix to get it right.

  All of these have been in linux-next with no reported problems"

* tag 'char-misc-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (21 commits)
  misc: sgi-gru: use explicitly signed char
  coresight: cti: Fix hang in cti_disable_hw()
  Revert "coresight: cti: Fix hang in cti_disable_hw()"
  counter: 104-quad-8: Fix race getting function mode and direction
  counter: microchip-tcb-capture: Handle Signal1 read and Synapse
  coresight: cti: Fix hang in cti_disable_hw()
  coresight: Fix possible deadlock with lock dependency
  counter: ti-ecap-capture: fix IS_ERR() vs NULL check
  counter: Reduce DEFINE_COUNTER_ARRAY_POLARITY() to defining counter_array
  iio: bmc150-accel-core: Fix unsafe buffer attributes
  iio: adxl367: Fix unsafe buffer attributes
  iio: adxl372: Fix unsafe buffer attributes
  iio: at91-sama5d2_adc: Fix unsafe buffer attributes
  iio: temperature: ltc2983: allocate iio channels once
  tools: iio: iio_utils: fix digit calculation
  iio: adc: stm32-adc: fix channel sampling time init
  iio: adc: mcp3911: mask out device ID in debug prints
  iio: adc: mcp3911: use correct id bits
  iio: adc: mcp3911: return proper error code on failure to allocate trigger
  iio: adc: mcp3911: fix sizeof() vs ARRAY_SIZE() bug
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds authored
Pull USB fixes from Greg KH:
 "A few small USB fixes for 6.1-rc3. Included here are:

  - MAINTAINERS update, including a big one for the USB gadget subsystem. Many thanks to Felipe for all of the years of hard work he has done on this codebase, it was greatly appreciated.

  - dwc3 driver fixes for reported problems.

  - xhci driver fixes for reported problems.

  - typec driver fixes for minor issues

  - uvc gadget driver change, and then revert as it wasn't relevant for 6.1-final, as it is a new feature and people are still reviewing and modifying it.

  All of these have been in the linux-next tree with no reported issues"

* tag 'usb-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb: dwc3: gadget: Don't set IMI for no_interrupt
  usb: dwc3: gadget: Stop processing more requests on IMI
  Revert "usb: gadget: uvc: limit isoc_sg to super speed gadgets"
  xhci: Remove device endpoints from bandwidth list when freeing the device
  xhci-pci: Set runtime PM as default policy on all xHC 1.2 or later devices
  xhci: Add quirk to reset host back to default state at shutdown
  usb: xhci: add XHCI_SPURIOUS_SUCCESS to ASM1042 despite being a V0.96 controller
  usb: dwc3: st: Rely on child's compatible instead of name
  usb: gadget: uvc: limit isoc_sg to super speed gadgets
  usb: bdc: change state when port disconnected
  usb: typec: ucsi: acpi: Implement resume callback
  usb: typec: ucsi: Check the connection on resume
  usb: gadget: aspeed: Fix probe regression
  usb: gadget: uvc: fix sg handling during video encode
  usb: gadget: uvc: fix sg handling in error case
  usb: gadget: uvc: fix dropped frame after missed isoc
  usb: dwc3: gadget: Don't delay End Transfer on delayed_status
  usb: dwc3: Don't switch OTG -> peripheral if extcon is present
  MAINTAINERS: Update maintainers for broadcom USB
  MAINTAINERS: move USB gadget and phy entries under the main USB entry
-
git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux
Linus Torvalds authored
Pull gpio fixes from Bartosz Golaszewski:

 - convert gpio-tegra to using an immutable irqchip

 - MAINTAINERS update

* tag 'gpio-fixes-for-v6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
  MAINTAINERS: Change myself to a maintainer
  gpio: tegra: Convert to immutable irq chip
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf fixes from Borislav Petkov:

 - Rename a perf memory level event define to denote it is of CXL type

 - Add Alder and Raptor Lakes support to RAPL

 - Make sure raw sample data is output with tracepoints

* tag 'perf_urgent_for_v6.1_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/mem: Rename PERF_MEM_LVLNUM_EXTN_MEM to PERF_MEM_LVLNUM_CXL
  perf/x86/rapl: Add support for Intel Raptor Lake
  perf/x86/rapl: Add support for Intel AlderLake-N
  perf: Fix missing raw data on tracepoint events
-
Linus Torvalds authored
Merge tag 'loongarch-fixes-6.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson

Pull LoongArch fixes from Huacai Chen:
 "Remove unused kernel stack padding, fix some build errors/warnings and two bugs in the laptop platform driver"

* tag 'loongarch-fixes-6.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  platform/loongarch: laptop: Fix possible UAF and simplify generic_acpi_laptop_init()
  platform/loongarch: laptop: Adjust resume order for loongson_hotkey_resume()
  LoongArch: BPF: Avoid declare variables in switch-case
  LoongArch: Use flexible-array member instead of zero-length array
  LoongArch: Remove unused kernel stack padding
-
git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds authored
Pull cifs fixes from Steve French:

 - use after free fix for reconnect race

 - two memory leak fixes

* tag '6.1-rc2-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: fix use-after-free caused by invalid pointer `hostname`
  cifs: Fix pages leak when writedata alloc failed in cifs_write_from_iter()
  cifs: Fix pages array leak when writedata alloc failed in cifs_writedata_alloc()
-
git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Linus Torvalds authored
Pull random number generator fix from Jason Donenfeld:
 "One fix from Jean-Philippe Brucker, addressing a regression in which early boot code on ARM64 would use the non-_early variant of the arch_get_random family of functions, resulting in the architectural random number generator appearing unavailable during that early phase of boot.

  The fix simply changes arch_get_random*() to arch_get_random*_early(). This distinction between these two functions is a bit of an old wart I'm not a fan of, and for 6.2 I'll see if I can make obsolete the _early variant, so that one function does the right thing in all contexts without overhead"

* tag 'random-6.1-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  random: use arch_get_random*_early() in random_init()
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fixes from James Bottomley:
 "Various small fixes, all in drivers. Some of these arrived during the merge window and got held over to make sure of testing on the -rc tree. The biggest change is for standards conformance in the target driver, closely followed by a set of bug fixes in megaraid_sas"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (21 commits)
  scsi: ufs: core: Fix typo in comment
  scsi: mpi3mr: Select CONFIG_SCSI_SAS_ATTRS
  scsi: ufs: core: Fix typo for register name in comments
  scsi: pm80xx: Display proc_name in sysfs
  scsi: ufs: core: Fix the error log in ufshcd_query_flag_retry()
  scsi: ufs: core: Remove unneeded casts from void *
  scsi: lpfc: Fix spelling mistake "unsolicted" -> "unsolicited"
  scsi: qla2xxx: Use transport-defined speed mask for supported_speeds
  scsi: target: iblock: Fold iblock_emulate_read_cap_with_block_size() into iblock_get_blocks()
  scsi: qla2xxx: Fix serialization of DCBX TLV data request
  scsi: ufs: qcom: Remove redundant dev_err() call
  scsi: megaraid_sas: Move megasas_dbg_lvl init to megasas_init()
  scsi: megaraid_sas: Remove unnecessary memset()
  scsi: megaraid_sas: Simplify megasas_update_device_list
  scsi: megaraid_sas: Correct an error message
  scsi: megaraid_sas: Correct value passed to scsi_device_lookup()
  scsi: target: core: UA on all LUNs after reset
  scsi: target: core: New key must be used for moved PR
  scsi: target: core: Abort all preempted regs if requested
  scsi: target: core: Fix memory leak in preempt_and_abort
  ...
-
git://git.kernel.dk/linux
Linus Torvalds authored
Pull block fixes from Jens Axboe:

 - NVMe pull request via Christoph:
     - make the multipath dma alignment match the non-multipath one (Keith Busch)
     - fix a bogus use of sg_init_marker() (Nam Cao)
     - fix circular locking in nvme-tcp (Sagi Grimberg)

 - Initialization fix for requests allocated via the special hw queue allocator (John)

 - Fix for a regression added in this release with the batched completions of end_io backed requests (Ming)

 - Error handling leak fix for rbd (Yang)

 - Error handling leak fix for add_disk() failure (Yu)

* tag 'block-6.1-2022-10-28' of git://git.kernel.dk/linux:
  blk-mq: Properly init requests from blk_mq_alloc_request_hctx()
  blk-mq: don't add non-pt request with ->end_io to batch
  rbd: fix possible memory leak in rbd_sysfs_init()
  nvme-multipath: set queue dma alignment to 3
  nvme-tcp: fix possible circular locking when deleting a controller under memory pressure
  nvme-tcp: replace sg_init_marker() with sg_init_table()
  block: fix memory leak for elevator on add_disk failure
-
git://git.kernel.dk/linux
Linus Torvalds authored
Pull io_uring fix from Jens Axboe:
 "Just a fix for a locking regression introduced with the deferred task_work running from this merge window"

* tag 'io_uring-6.1-2022-10-28' of git://git.kernel.dk/linux:
  io_uring: unlock if __io_run_local_work locked inside
  io_uring: use io_run_local_work_locked helper
-
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Linus Torvalds authored
Pull misc hotfixes from Andrew Morton:
 "Eight fix pre-6.0 bugs and the remainder address issues which were introduced in the 6.1-rc merge cycle, or address issues which aren't considered sufficiently serious to warrant a -stable backport"

* tag 'mm-hotfixes-stable-2022-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (23 commits)
  mm: multi-gen LRU: move lru_gen_add_mm() out of IRQ-off region
  lib: maple_tree: remove unneeded initialization in mtree_range_walk()
  mmap: fix remap_file_pages() regression
  mm/shmem: ensure proper fallback if page faults
  mm/userfaultfd: replace kmap/kmap_atomic() with kmap_local_page()
  x86: fortify: kmsan: fix KMSAN fortify builds
  x86: asm: make sure __put_user_size() evaluates pointer once
  Kconfig.debug: disable CONFIG_FRAME_WARN for KMSAN by default
  x86/purgatory: disable KMSAN instrumentation
  mm: kmsan: export kmsan_copy_page_meta()
  mm: migrate: fix return value if all subpages of THPs are migrated successfully
  mm/uffd: fix vma check on userfault for wp
  mm: prep_compound_tail() clear page->private
  mm,madvise,hugetlb: fix unexpected data loss with MADV_DONTNEED on hugetlbfs
  mm/page_isolation: fix clang deadcode warning
  fs/ext4/super.c: remove unused `deprecated_msg'
  ipc/msg.c: fix percpu_counter use after free
  memory tier, sysfs: rename attribute "nodes" to "nodelist"
  MAINTAINERS: git://github.com -> https://github.com for nilfs2
  mm/kmemleak: prevent soft lockup in kmemleak_scan()'s object iteration loops
  ...
-
- 29 Oct, 2022 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman:

 - Fix a case of rescheduling with user access unlocked, when preempt is enabled.

 - A follow-up fix for a recent fix, which could lead to IRQ state assertions firing incorrectly.

 - Two fixes for lockdep warnings seen when using kfence with the Hash MMU.

 - Two fixes for preempt warnings seen when using the Hash MMU.

 - Two fixes for the VAS coprocessor mechanism used on pseries.

 - Prevent building some of our older KVM backends when CONTEXT_TRACKING_USER is enabled, as it's known to cause crashes.

 - A couple of fixes for issues seen with PMU NMIs.

Thanks to Nicholas Piggin, Guenter Roeck, Frederic Barrat, Haren Myneni, Sachin Sant, and Samuel Holland.

* tag 'powerpc-6.1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64s/interrupt: Fix clear of PACA_IRQS_HARD_DIS when returning to soft-masked context
  powerpc/64s/interrupt: Perf NMI should not take normal exit path
  powerpc/64/interrupt: Prevent NMI PMI causing a dangerous warning
  KVM: PPC: BookS PR-KVM and BookE do not support context tracking
  powerpc: Fix reschedule bug in KUAP-unlocked user copy
  powerpc/64s: Fix hash__change_memory_range preemption warning
  powerpc/64s: Disable preemption in hash lazy mmu mode
  powerpc/64s: make linear_map_hash_lock a raw spinlock
  powerpc/64s: make HPTE lock and native_tlbie_lock irq-safe
  powerpc/64s: Add lockdep for HPTE lock
  powerpc/pseries: Use lparcfg to reconfig VAS windows for DLPAR CPU
  powerpc/pseries/vas: Add VAS IRQ primary handler
-