Commit 3e04040d authored by Ard Biesheuvel, committed by Linus Torvalds

Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"

This reverts commit 864b75f9.

Commit 864b75f9 ("mm/page_alloc: fix memmap_init_zone pageblock
alignment") modified the logic in memmap_init_zone() to initialize
struct pages associated with invalid PFNs, to appease a VM_BUG_ON()
in move_freepages(), which is redundant by its own admission, and
dereferences struct page fields to obtain the zone without checking
whether the struct pages in question are valid to begin with.
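For context, page_zone() decodes the zone id from bits packed into
page->flags, so calling it on a struct page that was never initialized
reads garbage. A simplified sketch of the mainline accessors (not a
verbatim copy; the real definitions in include/linux/mm.h derive the
shift and mask from the configured section/node/zone layout):

	static inline enum zone_type page_zonenum(const struct page *page)
	{
		/* zone id sits in the upper bits of page->flags; for an
		 * uninitialized struct page these bits are garbage */
		return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
	}

	static inline struct zone *page_zone(const struct page *page)
	{
		return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
	}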

Commit 864b75f9 only makes it worse, since the rounding it does
may cause pfn to assume the same value it had in a prior iteration of
the loop, resulting in an infinite loop and a hang very early in the
boot. Also, since it doesn't perform the same rounding on start_pfn
itself but only on intermediate values following an invalid PFN, we
may still hit the same VM_BUG_ON() as before.
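To make the failure mode concrete, suppose pageblock_nr_pages is 512
and the loop hits an invalid pfn 1000 whose next valid pfn is 1010:
the rounding then lands *behind* the pfn being skipped. A minimal
userspace sketch of just the arithmetic, where next_valid_pfn() is a
hypothetical stand-in for memblock_next_valid_pfn():

	#include <stdio.h>

	#define PAGEBLOCK_NR_PAGES 512UL

	/* hypothetical stand-in for memblock_next_valid_pfn(): models a
	 * memory map in which pfns below 1010 are a hole */
	static unsigned long next_valid_pfn(unsigned long pfn,
					    unsigned long end_pfn)
	{
		return pfn < 1010 ? 1010 : pfn;
	}

	int main(void)
	{
		unsigned long pfn = 1000;	/* invalid pfn hit by the loop */

		/* the reverted rounding: (1010 & ~511) - 1 = 511 */
		pfn = (next_valid_pfn(pfn, 2048) & ~(PAGEBLOCK_NR_PAGES - 1)) - 1;

		/* the enclosing for loop then increments pfn to 512, which
		 * is before the original 1000, so the same hole is walked
		 * again and again: a hang very early in boot */
		printf("loop resumes at pfn %lu\n", pfn + 1);
		return 0;
	}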

So instead, let's fix this at the core, and ensure that the BUG
check doesn't dereference struct page fields of invalid pages.

Fixes: 864b75f9 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
Tested-by: Jan Glauber <jglauber@cavium.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Cc: Daniel Vacek <neelx@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 274a1ff0
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1910,7 +1910,9 @@ static int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * grouping pages by mobility
 	 */
-	VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
+	VM_BUG_ON(pfn_valid(page_to_pfn(start_page)) &&
+		  pfn_valid(page_to_pfn(end_page)) &&
+		  page_zone(start_page) != page_zone(end_page));
 #endif
 
 	if (num_movable)
@@ -5359,14 +5361,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-			 * on our next iteration of the loop. Note that it needs
-			 * to be pageblock aligned even when the region itself
-			 * is not. move_freepages_block() can shift ahead of
-			 * the valid region but still depends on correct page
-			 * metadata.
+			 * on our next iteration of the loop.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
-			       ~(pageblock_nr_pages-1)) - 1;
+			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
 #endif
 			continue;
 		}