Commit 7a7c5bad authored by Robin Murphy, committed by Joerg Roedel

iommu: Indicate queued flushes via gather data

Since iommu_iotlb_gather exists to help drivers optimise flushing for a
given unmap request, it is also the logical place to indicate whether
the unmap is strict or not, and thus help them further optimise for
whether to expect a sync or a flush_all subsequently. As part of that,
it also seems fair to make the flush queue code take responsibility for
enforcing the really subtle ordering requirement it brings, so that we
don't need to worry about forgetting that if new drivers want to add
flush queue support, and can consolidate the existing versions.

While we're adding to the kerneldoc, also fill in some info for
@freelist which was overlooked previously.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent 8d971243
@@ -481,6 +481,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 	iommu_iotlb_gather_init(&iotlb_gather);
+	iotlb_gather.queued = cookie->fq_domain;

 	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
 	WARN_ON(unmapped != size);
...
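For context, a rough sketch of the __iommu_dma_unmap() flow that this one-line addition feeds into; this is paraphrased from memory rather than quoted from the patch, and the IOVA bookkeeping is omitted:

	struct iommu_iotlb_gather iotlb_gather;
	size_t unmapped;

	iommu_iotlb_gather_init(&iotlb_gather);
	/* Let the driver know up front whether this unmap uses a flush queue */
	iotlb_gather.queued = cookie->fq_domain;

	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
	WARN_ON(unmapped != size);

	if (!iotlb_gather.queued)
		/* Strict mode: invalidate the IOTLB before the IOVA is reused */
		iommu_iotlb_sync(domain, &iotlb_gather);
	/*
	 * Queued mode: the IOVA goes onto the flush queue instead, and a later
	 * batched ->flush_iotlb_all() makes all of the queued unmaps visible.
	 */

Propagating cookie->fq_domain into the gather means the driver learns at unmap time, rather than only at sync time, that no synchronous invalidation is coming.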
@@ -637,6 +637,13 @@ void queue_iova(struct iova_domain *iovad,
 	unsigned long flags;
 	unsigned idx;

+	/*
+	 * Order against the IOMMU driver's pagetable update from unmapping
+	 * @pte, to guarantee that iova_domain_flush() observes that if called
+	 * from a different CPU before we release the lock below.
+	 */
+	smp_wmb();
+
 	spin_lock_irqsave(&fq->lock, flags);

 	/*
...
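The comment above captures the "really subtle ordering requirement" the commit message mentions: the pagetable update must be visible before the IOVA can be observed on the flush queue. A minimal sketch of that producer/consumer pattern, with placeholder demo_* types and a flat ring standing in for the real iova_fq code (kernel headers assumed):

	struct demo_fq {
		spinlock_t	lock;
		unsigned long	pfns[64];
		unsigned int	tail;
	};

	/* Producer (queue_iova()): runs after the driver has cleared the PTEs */
	static void demo_queue(struct demo_fq *fq, unsigned long pfn)
	{
		unsigned long flags;

		/* Pagetable update must be visible before the pfn is seen queued */
		smp_wmb();

		spin_lock_irqsave(&fq->lock, flags);
		if (fq->tail < ARRAY_SIZE(fq->pfns))
			fq->pfns[fq->tail++] = pfn;
		spin_unlock_irqrestore(&fq->lock, flags);
	}

	/* Consumer (flush timer or full ring): may run on another CPU */
	static void demo_drain(struct demo_fq *fq, struct iommu_domain *domain)
	{
		unsigned long flags;

		spin_lock_irqsave(&fq->lock, flags);
		/*
		 * Any pfn visible here had its pagetable update made visible
		 * first, so flushing now cannot leave a stale IOTLB entry for
		 * a mapping the ring says is gone.
		 */
		iommu_flush_iotlb_all(domain);
		fq->tail = 0;	/* the IOVAs can now be freed for reuse */
		spin_unlock_irqrestore(&fq->lock, flags);
	}

Centralising the barrier in queue_iova() is what lets new flush-queue users (and the existing ones being consolidated) avoid each having to remember this.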
@@ -161,16 +161,22 @@ enum iommu_dev_features {
  * @start: IOVA representing the start of the range to be flushed
  * @end: IOVA representing the end of the range to be flushed (inclusive)
  * @pgsize: The interval at which to perform the flush
+ * @freelist: Removed pages to free after sync
+ * @queued: Indicates that the flush will be queued
  *
  * This structure is intended to be updated by multiple calls to the
  * ->unmap() function in struct iommu_ops before eventually being passed
- * into ->iotlb_sync().
+ * into ->iotlb_sync(). Drivers can add pages to @freelist to be freed after
+ * ->iotlb_sync() or ->iotlb_flush_all() have cleared all cached references to
+ * them. @queued is set to indicate when ->iotlb_flush_all() will be called
+ * later instead of ->iotlb_sync(), so drivers may optimise accordingly.
  */
 struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
 	struct page		*freelist;
+	bool			queued;
 };

 /**
...
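To illustrate what the kerneldoc means by drivers optimising on @queued, here is a sketch of a driver-side ->unmap/->iotlb_sync pair; the my_* helpers are hypothetical, not taken from an existing driver:

	static size_t my_unmap(struct iommu_domain *domain, unsigned long iova,
			       size_t size, struct iommu_iotlb_gather *gather)
	{
		size_t unmapped = my_clear_ptes(domain, iova, size);

		/*
		 * A queued flush ends in ->flush_iotlb_all() rather than
		 * ->iotlb_sync(), so there is no point gathering a precise
		 * range or issuing per-page invalidations here.
		 */
		if (!gather->queued)
			iommu_iotlb_gather_add_page(domain, gather, iova, size);

		return unmapped;
	}

	static void my_iotlb_sync(struct iommu_domain *domain,
				  struct iommu_iotlb_gather *gather)
	{
		/* @end is inclusive, hence the +1 */
		my_invalidate_range(domain, gather->start,
				    gather->end - gather->start + 1);
		/* Safe to free removed pagetable pages once invalidation is done */
		my_free_pages(gather->freelist);
	}

When @queued is set, the gathered range stays empty and the eventual ->flush_iotlb_all() issued when the flush queue drains performs the invalidation wholesale.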