Commit 972bf252 authored by John Garry via iommu, committed by Joerg Roedel

iommu/iova: Move fast alloc size roundup into alloc_iova_fast()

It really is a property of the IOVA rcache code that we need to allocate a
power-of-2 size, so relocate that rounding into alloc_iova_fast() itself,
rather than duplicating it at the callsites.
Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1638875846-23993-1-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent 9abe2ac8
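
For reference, here is a minimal userspace sketch of the rounding that this
patch centralizes in alloc_iova_fast(). It assumes IOVA_RANGE_CACHE_MAX_SIZE
is 6 (log2 of the largest cached range, in pages, as in the kernel tree this
patch targets) and reimplements roundup_pow_of_two() locally; both are
assumptions of the sketch, not part of the patch.

/*
 * Userspace sketch of the rcache size rounding. Assumes
 * IOVA_RANGE_CACHE_MAX_SIZE == 6; roundup_pow_of_two() below is a
 * local stand-in for the kernel macro of the same name.
 */
#include <stdio.h>

#define IOVA_RANGE_CACHE_MAX_SIZE 6

static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

static unsigned long cache_round(unsigned long size)
{
	/* Only sizes small enough to be cached are rounded up. */
	if (size < (1UL << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
		size = roundup_pow_of_two(size);
	return size;
}

int main(void)
{
	unsigned long sizes[] = { 1, 3, 5, 17, 31, 32, 100 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%3lu pages -> %3lu pages\n",
		       sizes[i], cache_round(sizes[i]));
	return 0;
}

Sizes of 32 pages and above (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)) bypass the
rcache, so they are left unrounded.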
drivers/iommu/dma-iommu.c
@@ -442,14 +442,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	shift = iova_shift(iovad);
 	iova_len = size >> shift;
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
drivers/iommu/iova.c
@@ -497,6 +497,15 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 	unsigned long iova_pfn;
 	struct iova *new_iova;
+	/*
+	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
+	 * will come back to bite us badly, so we have to waste a bit of space
+	 * rounding up anything cacheable to make sure that can't happen. The
+	 * order of the unadjusted size will still match upon freeing.
+	 */
+	if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+		size = roundup_pow_of_two(size);
 	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
 	if (iova_pfn)
 		return iova_pfn;
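
With the rounding folded into alloc_iova_fast(), a callsite can pass the
unadjusted length straight through. A hypothetical caller sketch follows;
my_alloc() and its parameters are illustrative only, while alloc_iova_fast()
and iova_shift() are the real kernel helpers.

#include <linux/iova.h>

/* Hypothetical caller: no local rounding needed after this patch. */
static unsigned long my_alloc(struct iova_domain *iovad, size_t size,
			      unsigned long limit_pfn)
{
	unsigned long iova_len = size >> iova_shift(iovad);

	/* alloc_iova_fast() now rounds cacheable sizes internally. */
	return alloc_iova_fast(iovad, iova_len, limit_pfn, true);
}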
drivers/vdpa/vdpa_user/iova_domain.c
@@ -292,14 +292,6 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
 	unsigned long iova_len = iova_align(iovad, size) >> shift;
 	unsigned long iova_pfn;
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 	iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
 	return iova_pfn << shift;
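
The moved comment's final sentence, that "the order of the unadjusted size
will still match upon freeing", holds because the rcache is indexed by log2
order, so an unrounded length and its power-of-2 roundup land in the same
cache bucket. A small userspace check, with order_base_2() reimplemented
locally as an assumption:

#include <stdio.h>

/*
 * Userspace stand-in for the kernel's order_base_2():
 * ceil(log2(n)), with order_base_2(1) == 0.
 */
static unsigned int order_base_2(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	/* An unrounded length and its power-of-2 roundup share an order. */
	unsigned long len = 5, rounded = 8;

	printf("order(%lu) = %u, order(%lu) = %u\n",
	       len, order_base_2(len), rounded, order_base_2(rounded));
	return 0;
}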