Commit c6303ab9 authored by Barry Song, committed by Christoph Hellwig

arm64: mm: reserve per-numa CMA to localize coherent dma buffers

Right now, the SMMU uses dma_alloc_coherent() to allocate memory for its queues
and tables. Typically, on an ARM64 server, the default CMA is located on
node0, which can be far away from node2, node3, etc.
With this patch, the SMMU gets memory from the local NUMA node for its command
queues and page tables, which shrinks dma_unmap latency considerably.
Meanwhile, when iommu.passthrough is on, device drivers that call
dma_alloc_coherent() also get local memory and avoid traffic between
NUMA nodes.
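The effect on a caller can be sketched as follows. This is a hedged, non-runnable kernel-code fragment, not part of this patch: `dev` and the SZ_4K size are illustrative assumptions, and the allocation pattern is the standard DMA API usage that this change transparently localizes.

```c
/* Kernel-code sketch (illustrative, not from this commit).
 * After this series, dma_alloc_coherent() on a NUMA machine can be
 * backed by a per-node CMA area, so the buffer comes from the node
 * local to `dev` rather than the single default CMA on node0.
 */
dma_addr_t dma_handle;
void *cmdq;

/* `dev` stands in for the SMMU's struct device; SZ_4K is illustrative. */
cmdq = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
if (!cmdq)
	return -ENOMEM;

/* ... use the queue; the driver itself is unchanged ... */

dma_free_coherent(dev, SZ_4K, cmdq, dma_handle);
```

No driver changes are required: localization happens inside the DMA coherent allocator once the per-NUMA CMA areas are reserved at boot.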
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent b7176c26
@@ -429,6 +429,8 @@ void __init bootmem_init(void)
 	arm64_hugetlb_cma_reserve();
 #endif
 
+	dma_pernuma_cma_reserve();
+
 	/*
 	 * sparse_init() tries to allocate memory from memblock, so must be
 	 * done after the fixed reservations
...