Commit afeeb7ce authored by Yu Zhao, committed by David Woodhouse

intel-iommu: Fix address wrap on 32-bit kernel.

The problem is in dma_pte_clear_range and dma_pte_free_pagetable. When
intel_unmap_single and intel_unmap_sg call them, the end address may wrap
around to zero if 'start_addr + size' overflows, so no PTE gets cleared.
The stale PTE then fires the BUG_ON when the address is reused to create
new mappings.
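
To make the failure mode concrete, here is a minimal standalone sketch
(illustration only, not kernel code) of the arithmetic. The start address
and page size below are hypothetical example values; with 32-bit address
arithmetic, unmapping the last page of the 4 GiB space makes
'start_addr + size' wrap to zero, so a 'while (start < end)' loop never
executes:

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SIZE 4096u	/* 4 KiB VT-d page, as in the driver */

int main(void)
{
	/* hypothetical unmap of the last 4 KiB page of a 32-bit space */
	uint32_t start = 0xfffff000u;
	uint32_t size  = VTD_PAGE_SIZE;
	uint32_t end   = start + size;	/* 0x100000000 wraps to 0 */

	unsigned cleared = 0;
	while (start < end) {	/* 0xfffff000 < 0 is false: never runs */
		cleared++;	/* stand-in for dma_pte_clear_one() */
		start += VTD_PAGE_SIZE;
	}
	printf("end = %#x, pages cleared = %u\n", end, cleared);
	return 0;	/* prints: end = 0, pages cleared = 0 */
}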

After I modified dma_pte_clear_range a bit, the BUG_ON is gone.

Tested in both 32-bit and 32-bit PAE modes on Intel X58 and Q35 platforms.
Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
parent 4cf2e75d
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -718,15 +718,17 @@ static void dma_pte_clear_one(struct dmar_domain *domain, u64 addr)
 static void dma_pte_clear_range(struct dmar_domain *domain, u64 start, u64 end)
 {
 	int addr_width = agaw_to_width(domain->agaw);
+	int npages;
 
 	start &= (((u64)1) << addr_width) - 1;
 	end &= (((u64)1) << addr_width) - 1;
 	/* in case it's partial page */
 	start = PAGE_ALIGN(start);
 	end &= PAGE_MASK;
+	npages = (end - start) / VTD_PAGE_SIZE;
 
 	/* we don't need lock here, nobody else touches the iova range */
-	while (start < end) {
+	while (npages--) {
 		dma_pte_clear_one(domain, start);
 		start += VTD_PAGE_SIZE;
 	}
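
For comparison, the same sketch with the patched loop shape: the iteration
count is computed once up front, so a wrapped end value no longer
short-circuits the loop. This again uses 32-bit arithmetic purely for
illustration; the kernel function itself operates on u64 addresses after
masking them to the domain's address width:

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SIZE 4096u

int main(void)
{
	uint32_t start  = 0xfffff000u;
	uint32_t end    = start + VTD_PAGE_SIZE;	/* wraps to 0 */
	/* unsigned wraparound: 0 - 0xfffff000 == 0x1000, one page */
	uint32_t npages = (end - start) / VTD_PAGE_SIZE;

	unsigned cleared = 0;
	while (npages--) {
		cleared++;	/* stand-in for dma_pte_clear_one() */
		start += VTD_PAGE_SIZE;
	}
	printf("pages cleared = %u\n", cleared);	/* prints: 1 */
	return 0;
}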