Commit 330d6e48 authored by Cannon Matthews, committed by Linus Torvalds

mm/hugetlb.c: don't zero 1GiB bootmem pages

When using 1GiB pages during early boot, use the new
memblock_virt_alloc_try_nid_raw() to allocate memory without zeroing it.
Zeroing out hundreds or thousands of GiB in a single core memset() call
is very slow, and can make early boot last upwards of 20-30 minutes on
multi TiB machines.

The memory does not need to be zero'd as the hugetlb pages are always
zero'd on page fault.

Tested: booted with ~3800 1GiB pages; boot completed in roughly the same
amount of time as with no 1GiB pages configured, as opposed to the 25+
minutes it took before.

Link: http://lkml.kernel.org/r/20180711213313.92481-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d8a759b5
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2101,7 +2101,7 @@ int __alloc_bootmem_huge_page(struct hstate *h)
 	for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
 		void *addr;
 
-		addr = memblock_virt_alloc_try_nid_nopanic(
+		addr = memblock_virt_alloc_try_nid_raw(
 				huge_page_size(h), huge_page_size(h),
 				0, BOOTMEM_ALLOC_ACCESSIBLE, node);
 		if (addr) {
@@ -2119,6 +2119,7 @@ int __alloc_bootmem_huge_page(struct hstate *h)
 
 found:
 	BUG_ON(!IS_ALIGNED(virt_to_phys(m), huge_page_size(h)));
 	/* Put them into a private list first because mem_map is not up yet */
+	INIT_LIST_HEAD(&m->list);
 	list_add(&m->list, &huge_boot_pages);
 	m->hstate = h;
 	return 1;