    iommu/dma: Account for min_align_mask w/swiotlb · 2cbc61a1
    David Stevens authored
    Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce
    buffers in iommu_dma_map_page, to account for min_align_mask.
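
    A rough user-space sketch of the call-site shape (not the actual kernel
    diff; GRANULE, IOVA_ALIGN(), IOVA_OFFSET() and mock_iommu_dma_map() are
    made-up stand-ins for the driver's iova helpers) shows why handing the
    original size to the mapping routine is enough:

        #include <stdio.h>
        #include <stddef.h>

        /* Hypothetical stand-ins for the IOVA granule helpers. */
        #define GRANULE        0x1000UL
        #define IOVA_OFFSET(p) ((p) & (GRANULE - 1))
        #define IOVA_ALIGN(s)  (((s) + GRANULE - 1) & ~(GRANULE - 1))

        /* Models __iommu_dma_map as described below: it maps
         * iova_align(size + iova_off) bytes starting at phys - iova_off. */
        static void mock_iommu_dma_map(unsigned long phys, size_t size)
        {
                unsigned long iova_off = IOVA_OFFSET(phys);
                size_t mapped = IOVA_ALIGN(size + iova_off);

                printf("map %#zx bytes at %#lx (%lu granules)\n",
                       mapped, phys - iova_off,
                       (unsigned long)(mapped / GRANULE));
        }

        int main(void)
        {
                /* Bounce-buffer address whose low bits were preserved
                 * because the device sets min_align_mask. */
                unsigned long phys = 0x80000800UL;
                size_t size = 0x2800;   /* original, non-aligned size */

                /* Old call site: size pre-rounded to the granule. */
                mock_iommu_dma_map(phys, IOVA_ALIGN(size)); /* 4 granules */
                /* Fixed call site: original size; the rounding done
                 * inside the mapping routine is sufficient. */
                mock_iommu_dma_map(phys, size);             /* 3 granules */
                return 0;
        }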
    
    To deal with granule alignment, __iommu_dma_map maps iova_align(size +
    iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page
    passes the aligned size when using swiotlb, then this becomes
    iova_align(iova_align(orig_size) + iova_off). Normally iova_off will be
    zero when using swiotlb. However, this is not the case for devices that
    set min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up
    mapping an extra page at the end of the buffer. Beyond just being a
    security issue, the extra page is not cleaned up by __iommu_dma_unmap.
    This causes problems when the IOVA is reused, due to collisions in the
    iommu driver.  Just passing the original size is sufficient, since
    __iommu_dma_map will take care of granule alignment.
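
    A concrete example of the arithmetic, as a stand-alone C snippet (the
    granule size, the addresses, and the assumption that the unmap path
    rounds iova_align(size + iova_off) from the original size are
    illustrative, not taken verbatim from the driver):

        #include <stdio.h>
        #include <stddef.h>

        #define GRANULE        0x1000UL
        #define IOVA_OFFSET(p) ((p) & (GRANULE - 1))
        #define IOVA_ALIGN(s)  (((s) + GRANULE - 1) & ~(GRANULE - 1))

        int main(void)
        {
                /* swiotlb slot with low bits kept by min_align_mask */
                unsigned long phys = 0x80000800UL;
                size_t size = 0x2800;       /* size the driver asked for */
                unsigned long iova_off = IOVA_OFFSET(phys);

                /* Old map path: alignment applied twice. */
                size_t mapped   = IOVA_ALIGN(IOVA_ALIGN(size) + iova_off);
                /* Unmap path only ever sees the original size. */
                size_t unmapped = IOVA_ALIGN(size + iova_off);

                printf("mapped %#zx, unmapped %#zx, left over %#zx\n",
                       mapped, unmapped, mapped - unmapped); /* one granule */
                return 0;
        }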
    
    Fixes: 1f221a0d ("swiotlb: respect min_align_mask")
    Signed-off-by: David Stevens <stevensd@chromium.org>
    Link: https://lore.kernel.org/r/20210929023300.335969-8-stevensd@google.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
dma-iommu.c