Commit 97d0da22 authored by Wen Congyang, committed by Linus Torvalds

memory-hotplug: fix NR_FREE_PAGES mismatch

NR_FREE_PAGES ends up wrong after offlining pages.  We currently
add and decrement NR_FREE_PAGES like this:

1. move all free pages in the buddy system to MIGRATE_ISOLATE, and
   decrement NR_FREE_PAGES.

2. don't increment NR_FREE_PAGES when a page is freed and its
   migratetype is MIGRATE_ISOLATE.

3. decrement NR_FREE_PAGES when offlining isolated pages.

4. increment NR_FREE_PAGES when undoing page isolation.

By the time we reach step 3, all of the pages are already on MIGRATE_ISOLATE
lists and NR_FREE_PAGES is already correct: it was decremented once in step 1.
Pages that are actually offlined never reach step 4, because they have been
removed from the buddy system, so nothing increments NR_FREE_PAGES for them
again; step 3, however, decrements the counter a second time.  As a result
NR_FREE_PAGES is too low after offlining pages, so step 3 must not change
NR_FREE_PAGES at all.
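
As a concrete illustration (not part of the patch, and with made-up numbers),
the stand-alone sketch below tracks a hypothetical nr_free counter, standing
in for NR_FREE_PAGES, through the steps above for 512 free pages being
offlined, comparing the old accounting (decrement in step 1 and again in
step 3) against the fixed accounting (decrement in step 1 only):

  /* illustrative model only, not kernel code */
  #include <stdio.h>

  int main(void)
  {
          long nr_free = 4096;            /* free pages before offlining      */
          const long offlined = 512;      /* free pages in the offlined range */

          /* step 1: the pages move to MIGRATE_ISOLATE and leave the count */
          long old_way = nr_free - offlined;
          long new_way = nr_free - offlined;

          /* step 3: __offline_isolated_pages() */
          old_way -= offlined;    /* old code decremented a second time */
                                  /* fixed code leaves the counter alone */

          /* step 4 never runs for pages that were actually offlined */
          printf("expected %ld, old code %ld, fixed code %ld\n",
                 nr_free - offlined, old_way, new_way);
          return 0;
  }

The old accounting prints a value 512 pages lower than expected, and nothing
ever corrects it, which is exactly the mismatch described above.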

This patch also fixes a problem in step 2: if the pageblock's migratetype is
MIGRATE_ISOLATE, we should not increment NR_FREE_PAGES when we drain pages
from the pcp lists back into the buddy system.
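
For reference, a minimal sketch (simplified, with invented names; the real
code uses __mod_zone_page_state() inside free_pcppages_bulk(), as the first
hunk below shows) of the per-page rule this introduces: a page drained from
the per-cpu lists back into the buddy allocator is counted in NR_FREE_PAGES
only when its pageblock is not isolated, and in NR_FREE_CMA_PAGES only when
it is additionally a CMA page:

  #include <stdbool.h>

  /* simplified stand-ins for the zone counters touched by the hunk */
  struct zone_counters {
          long nr_free_pages;
          long nr_free_cma_pages;
  };

  /* hypothetical helper applying the rule added by the patch */
  static void count_drained_page(struct zone_counters *zc,
                                 bool pageblock_isolated, bool is_cma)
  {
          if (pageblock_isolated)
                  return;                 /* isolated pages stay uncounted */
          zc->nr_free_pages++;
          if (is_cma)
                  zc->nr_free_cma_pages++;
  }

  int main(void)
  {
          struct zone_counters zc = { 0, 0 };

          count_drained_page(&zc, false, false);  /* ordinary page: counted */
          count_drained_page(&zc, true, false);   /* isolated page: skipped */
          return (int)zc.nr_free_pages;           /* 1, not 2 */
  }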
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Jianguo Wu <wujianguo106@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7c72eb32
@@ -667,11 +667,13 @@ static void free_pcppages_bulk(struct zone *zone, int count,
                         /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
                         __free_one_page(page, zone, 0, mt);
                         trace_mm_page_pcpu_drain(page, 0, mt);
-                        if (is_migrate_cma(mt))
-                                __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
+                        if (likely(get_pageblock_migratetype(page) != MIGRATE_ISOLATE)) {
+                                __mod_zone_page_state(zone, NR_FREE_PAGES, 1);
+                                if (is_migrate_cma(mt))
+                                        __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
+                        }
                 } while (--to_free && --batch_free && !list_empty(list));
         }
-        __mod_zone_page_state(zone, NR_FREE_PAGES, count);
         spin_unlock(&zone->lock);
 }
@@ -6047,8 +6049,6 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
                 list_del(&page->lru);
                 rmv_page_order(page);
                 zone->free_area[order].nr_free--;
-                __mod_zone_page_state(zone, NR_FREE_PAGES,
-                                      - (1UL << order));
                 for (i = 0; i < (1 << order); i++)
                         SetPageReserved((page+i));
                 pfn += (1 << order);