Commit bff3b044 authored by Nicolas Saenz Julienne, committed by Catalin Marinas

arm64: mm: reserve CMA and crashkernel in ZONE_DMA32

With the introduction of ZONE_DMA in arm64 we moved the default CMA and
crashkernel reservation into that area. This caused a regression on large
machines that need large CMA and crashkernel reservations, since ZONE_DMA
is only 1GB in size.
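
As a rough, self-contained illustration of why the 1GB cap hurts (this is not
the kernel's memblock code; the free-range layout and sizes below are invented
for the example), a bounded bottom-up search for a 768MB crashkernel block
fails when limited to the ZONE_DMA boundary but succeeds under the ZONE_DMA32
boundary:

#include <stdint.h>
#include <stdio.h>

/* Toy free-memory ranges below 4GB; values are invented for illustration. */
struct range { uint64_t start, end; };

static const struct range free_ranges[] = {
	{ 0x00200000ULL, 0x30000000ULL },	/* ~766MB free below 1GB */
	{ 0x48000000ULL, 0x100000000ULL },	/* large free block above 1GB */
};

/* Find the lowest @size-byte block that ends at or below @limit. */
static uint64_t find_in_range(uint64_t limit, uint64_t size)
{
	for (size_t i = 0; i < sizeof(free_ranges) / sizeof(free_ranges[0]); i++) {
		uint64_t start = free_ranges[i].start;
		uint64_t end = free_ranges[i].end < limit ? free_ranges[i].end : limit;

		if (end > start && end - start >= size)
			return start;	/* first fit */
	}
	return 0;	/* no room below the limit */
}

int main(void)
{
	uint64_t crash_size = 768ULL << 20;	/* 768MB crashkernel request */
	uint64_t zone_dma_limit = 1ULL << 30;	/* 1GB  */
	uint64_t zone_dma32_limit = 1ULL << 32;	/* 4GB  */

	printf("limit 1GB : base = %#llx\n",
	       (unsigned long long)find_in_range(zone_dma_limit, crash_size));
	printf("limit 4GB : base = %#llx\n",
	       (unsigned long long)find_in_range(zone_dma32_limit, crash_size));
	return 0;
}

Built with any C99 compiler, this prints a zero base for the 1GB limit and a
valid base for the 4GB one.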

Restore the previous behavior, as the vast majority of devices are fine with
reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA will have
to configure the placement explicitly.
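
Explicit placement is typically done with the documented boot parameters, e.g.
crashkernel=<size>@<offset> for the crash kernel or cma=<size>@<start> for the
default CMA area, or with a reserved-memory node in the devicetree; the exact
sizes and addresses are platform specific, so any concrete values would be
purely illustrative here.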

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent 8b5369ea
@@ -91,7 +91,7 @@ static void __init reserve_crashkernel(void)
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
+		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
 				crash_size, SZ_2M);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -459,7 +459,7 @@ void __init arm64_memblock_init(void)
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
-	dma_contiguous_reserve(arm64_dma_phys_limit ? : arm64_dma32_phys_limit);
+	dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
 void __init bootmem_init(void)
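
The second hunk removes a GNU C conditional with an omitted middle operand:
a ? : b evaluates to a when a is nonzero and to b otherwise, so the old code
capped the default CMA area at the ZONE_DMA limit whenever ZONE_DMA was
configured and only fell back to the ZONE_DMA32 limit otherwise. A minimal
stand-alone sketch of that behaviour change (the variable names mirror the
kernel's, but the limit values are invented):

#include <stdio.h>

int main(void)
{
	/* Invented example limits: 1GB ZONE_DMA, 4GB ZONE_DMA32. */
	unsigned long long arm64_dma_phys_limit = 1ULL << 30;
	unsigned long long arm64_dma32_phys_limit = 1ULL << 32;

	/* Old behaviour: prefer the ZONE_DMA limit when set (GNU "?:" extension). */
	unsigned long long old_limit = arm64_dma_phys_limit ? : arm64_dma32_phys_limit;

	/* New behaviour after this commit: always use the ZONE_DMA32 limit. */
	unsigned long long new_limit = arm64_dma32_phys_limit;

	printf("old CMA limit: %#llx\nnew CMA limit: %#llx\n", old_limit, new_limit);
	return 0;
}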