Commit 23f88e0a authored by Christoph Hellwig, committed by Joerg Roedel

iommu/dma: Use for_each_sg in iommu_dma_alloc

arch_dma_prep_coherent can handle physically contiguous ranges larger
than PAGE_SIZE just fine, which means we don't need a page-based
iterator.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent af751d43
drivers/iommu/dma-iommu.c
@@ -606,15 +606,11 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 		goto out_free_iova;
 
 	if (!(prot & IOMMU_CACHE)) {
-		struct sg_mapping_iter miter;
-		/*
-		 * The CPU-centric flushing implied by SG_MITER_TO_SG isn't
-		 * sufficient here, so skip it by using the "wrong" direction.
-		 */
-		sg_miter_start(&miter, sgt.sgl, sgt.orig_nents, SG_MITER_FROM_SG);
-		while (sg_miter_next(&miter))
-			arch_dma_prep_coherent(miter.page, PAGE_SIZE);
-		sg_miter_stop(&miter);
+		struct scatterlist *sg;
+		int i;
+
+		for_each_sg(sgt.sgl, sg, sgt.orig_nents, i)
+			arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}
 
 	if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, prot)
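
For context (not part of the commit): a minimal sketch of the iteration pattern the new code uses. Each scatterlist entry describes one physically contiguous run that may span several pages, so arch_dma_prep_coherent() can flush a whole entry in a single call instead of being fed one page at a time. The helper name flush_sg_entries() is hypothetical.

	/* Hypothetical helper illustrating the for_each_sg() pattern above. */
	#include <linux/scatterlist.h>
	#include <linux/dma-noncoherent.h>	/* arch_dma_prep_coherent() */

	static void flush_sg_entries(struct sg_table *sgt)
	{
		struct scatterlist *sg;
		int i;

		/*
		 * for_each_sg() visits each entry exactly once; sg->length
		 * may exceed PAGE_SIZE, which is fine because
		 * arch_dma_prep_coherent() takes an arbitrary byte length.
		 */
		for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
			arch_dma_prep_coherent(sg_page(sg), sg->length);
	}

By contrast, the removed sg_miter-based loop walked the table page by page and had to work around the CPU-centric flushing implied by the miter direction flags, which the commit message notes is unnecessary.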