Commit ca0d8683 authored by Vlastimil Babka, committed by Sasha Levin

mm, compaction: simplify handling restart position in free pages scanner

[ Upstream commit f5f61a32 ]

Handling the position where compaction free scanner should restart
(stored in cc->free_pfn) got more complex with commit e14c720e ("mm,
compaction: remember position within pageblock in free pages scanner").
Currently the position is updated in each loop iteration of
isolate_freepages(), although it should be enough to update it only when
breaking from the loop.  There's also an extra check outside the loop
that updates the position in case we have met the migration scanner.

This can be simplified if we move the test for having isolated enough
from the for-loop header next to the test for contention, and
determining the restart position only in these cases.  We can reuse the
isolate_start_pfn variable for this instead of setting cc->free_pfn
directly.  Outside the loop, we can simply set cc->free_pfn to current
value of isolate_start_pfn without any extra check.

Also add a VM_BUG_ON to catch possible mistake in the future, in case we
later add a new condition that terminates isolate_freepages_block()
prematurely without also considering the condition in
isolate_freepages().
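As a rough illustration of the control flow this patch arrives at (a
simplified model, not the kernel code itself: `PAGEBLOCK`,
`scan_block()` and `isolate_freepages_model()` are hypothetical
stand-ins for `pageblock_nr_pages`, `isolate_freepages_block()` and
`isolate_freepages()`), the restart position can be kept in a local
variable and written to `cc->free_pfn` exactly once, after the loop:

```c
#include <assert.h>
#include <stdbool.h>

#define PAGEBLOCK 8UL	/* stand-in for pageblock_nr_pages */

struct cc {
	unsigned long nr_freepages;
	unsigned long nr_migratepages;
	unsigned long free_pfn;
	bool contended;
};

/*
 * Stand-in for isolate_freepages_block(): pretend to isolate `got`
 * pages and to have scanned the whole pageblock, i.e. advance
 * *start_pfn to end_pfn.
 */
static void scan_block(struct cc *cc, unsigned long *start_pfn,
		       unsigned long end_pfn, unsigned long got)
{
	cc->nr_freepages += got;
	*start_pfn = end_pfn;
}

static void isolate_freepages_model(struct cc *cc, unsigned long low_pfn,
				    unsigned long block_start_pfn,
				    unsigned long per_block)
{
	unsigned long block_end_pfn = block_start_pfn + PAGEBLOCK;
	unsigned long isolate_start_pfn = block_start_pfn;

	/* the "isolated enough" test is no longer in the loop header */
	for (; block_start_pfn >= low_pfn;
	     block_end_pfn = block_start_pfn,
	     block_start_pfn -= PAGEBLOCK,
	     isolate_start_pfn = block_start_pfn) {
		scan_block(cc, &isolate_start_pfn, block_end_pfn, per_block);

		if (cc->nr_freepages >= cc->nr_migratepages ||
		    cc->contended) {
			/*
			 * If the whole pageblock was scanned, restart at
			 * the start of the previous pageblock.
			 */
			if (isolate_start_pfn >= block_end_pfn)
				isolate_start_pfn =
					block_start_pfn - PAGEBLOCK;
			break;
		} else {
			/* analogue of the patch's VM_BUG_ON */
			assert(isolate_start_pfn >= block_end_pfn);
		}
	}

	/*
	 * One unconditional assignment replaces both the per-iteration
	 * update and the extra migrate-scanner check outside the loop.
	 */
	cc->free_pfn = isolate_start_pfn;
}
```

For example, scanning downward from pfn 24 with a need for 4 pages and 4
isolated per block stops after the first block and records the previous
pageblock start; with an unreachable target the loop runs until it meets
`low_pfn` and the final assignment still records the correct position.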
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
parent 1602957c
@@ -948,8 +948,7 @@ static void isolate_freepages(struct compact_control *cc)
	 * pages on cc->migratepages. We stop searching if the migrate
	 * and free page scanners meet or enough free pages are isolated.
	 */
-	for (; block_start_pfn >= low_pfn &&
-			cc->nr_migratepages > cc->nr_freepages;
+	for (; block_start_pfn >= low_pfn;
				block_end_pfn = block_start_pfn,
				block_start_pfn -= pageblock_nr_pages,
				isolate_start_pfn = block_start_pfn) {
@@ -986,6 +985,8 @@ static void isolate_freepages(struct compact_control *cc)
			break;

		/*
+		 * If we isolated enough freepages, or aborted due to async
+		 * compaction being contended, terminate the loop.
		 * Remember where the free scanner should restart next time,
		 * which is where isolate_freepages_block() left off.
		 * But if it scanned the whole pageblock, isolate_start_pfn
@@ -994,27 +995,31 @@ static void isolate_freepages(struct compact_control *cc)
		 * In that case we will however want to restart at the start
		 * of the previous pageblock.
		 */
-		cc->free_pfn = (isolate_start_pfn < block_end_pfn) ?
-				isolate_start_pfn :
-				block_start_pfn - pageblock_nr_pages;
-
-		/*
-		 * isolate_freepages_block() might have aborted due to async
-		 * compaction being contended
-		 */
-		if (cc->contended)
-			break;
+		if ((cc->nr_freepages >= cc->nr_migratepages)
+							|| cc->contended) {
+			if (isolate_start_pfn >= block_end_pfn)
+				isolate_start_pfn =
+					block_start_pfn - pageblock_nr_pages;
+			break;
+		} else {
+			/*
+			 * isolate_freepages_block() should not terminate
+			 * prematurely unless contended, or isolated enough
+			 */
+			VM_BUG_ON(isolate_start_pfn < block_end_pfn);
+		}
	}

	/* split_free_page does not map the pages */
	map_pages(freelist);

	/*
-	 * If we crossed the migrate scanner, we want to keep it that way
-	 * so that compact_finished() may detect this
+	 * Record where the free scanner will restart next time. Either we
+	 * broke from the loop and set isolate_start_pfn based on the last
+	 * call to isolate_freepages_block(), or we met the migration scanner
+	 * and the loop terminated due to isolate_start_pfn < low_pfn
	 */
-	if (block_start_pfn < low_pfn)
-		cc->free_pfn = cc->migrate_pfn;
+	cc->free_pfn = isolate_start_pfn;
 }

 /*