Commit 3c120143 authored by Zhen Lei, committed by Joerg Roedel

iommu/amd: make sure the TLB is flushed before the IOVA is freed

Although the mapping has already been removed from the page table, it may
still exist in the TLB. If the freed IOVA is reused by another caller before
the flush operation has completed, the new user cannot correctly access its
memory.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Fixes: b1516a14 ('iommu/amd: Implement flush queue')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent 4674686d
@@ -2407,9 +2407,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
 	}
 
 	if (amd_iommu_unmap_flush) {
-		dma_ops_free_iova(dma_dom, dma_addr, pages);
 		domain_flush_tlb(&dma_dom->domain);
 		domain_flush_complete(&dma_dom->domain);
+		dma_ops_free_iova(dma_dom, dma_addr, pages);
 	} else {
 		pages = __roundup_pow_of_two(pages);
 		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
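
For illustration, below is a minimal user-space sketch of the ordering rule this patch enforces. This is not kernel code: the helpers are hypothetical stand-ins for dma_ops_free_iova(), domain_flush_tlb() and domain_flush_complete(), and only the call order matters. The IOVA range may be returned to the allocator only after the IOTLB invalidation has completed; otherwise the allocator could hand the same IOVA to a new mapping while stale translations are still cached.

/*
 * Ordering sketch only, not kernel code. The functions below are
 * hypothetical stand-ins for the real domain_flush_tlb(),
 * domain_flush_complete() and dma_ops_free_iova() calls.
 */
#include <stdio.h>

static void flush_tlb(void)      { printf("invalidate IOTLB entries for the range\n"); }
static void flush_complete(void) { printf("wait until the invalidation has completed\n"); }
static void free_iova(void)      { printf("return the IOVA range to the allocator\n"); }

static void unmap_range(void)
{
	/* 1) The page-table entries have already been cleared at this point. */

	/* 2) Invalidate cached translations and wait for completion ... */
	flush_tlb();
	flush_complete();

	/* 3) ... and only then make the IOVA reusable by other callers. */
	free_iova();
}

int main(void)
{
	unmap_range();
	return 0;
}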