Commit c0554329 authored by Vlastimil Babka, committed by Linus Torvalds

mm, memory_hotplug/failure: drain single zone pcplists

Memory hotplug and failure mechanisms have several places where pcplists
are drained so that pages are returned to the buddy allocator and can be
e.g. prepared for offlining.  Since this is always done in the context of
a single zone, we can now restrict the pcplists drain to that single zone.

The change should make memory offlining due to hot-remove or failure
faster, and it no longer disturbs unrelated pcplists.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 510f5507
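
Editor's note: as a rough illustration of the idea (not kernel source), the
self-contained C sketch below models per-zone pcplists and a drain_all_pages()
with the interface the commit relies on: a non-NULL zone argument drains only
that zone's per-CPU lists, while NULL keeps the "drain everything" behaviour.
The zone names, counters and helper drain_zone_pages() are invented for the
example.

/*
 * Toy model of per-zone pcplists -- illustrative only, not kernel code.
 */
#include <stdio.h>

#define NR_CPUS   4
#define NR_ZONES  3

struct zone {
	const char *name;
	int pcp_count[NR_CPUS];	/* pages parked on each CPU's pcplist */
	int free_pages;		/* pages returned to the "buddy" free list */
};

static struct zone zones[NR_ZONES] = {
	{ "DMA",     { 1, 0, 2, 0 }, 0 },
	{ "Normal",  { 8, 5, 7, 6 }, 0 },
	{ "Movable", { 3, 4, 0, 2 }, 0 },
};

/* Move one zone's pcplist pages, on every CPU, back to its free list. */
static void drain_zone_pages(struct zone *z)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		z->free_pages += z->pcp_count[cpu];
		z->pcp_count[cpu] = 0;
	}
}

/* NULL means "all zones"; a specific zone limits the work to that zone. */
static void drain_all_pages(struct zone *zone)
{
	int i;

	if (zone) {
		drain_zone_pages(zone);
		return;
	}
	for (i = 0; i < NR_ZONES; i++)
		drain_zone_pages(&zones[i]);
}

int main(void)
{
	int i;

	/* Offlining here only affects the "Movable" zone, so drain just it. */
	drain_all_pages(&zones[2]);

	for (i = 0; i < NR_ZONES; i++)
		printf("%-8s free=%d pcp_cpu0=%d\n",
		       zones[i].name, zones[i].free_pages, zones[i].pcp_count[0]);
	return 0;
}

In the kernel, drain_all_pages() does the actual per-CPU draining; the point
of the commit is simply that callers which already know the affected zone
(page_zone(p) in the memory-failure paths, the zone being offlined in memory
hotplug) do not need to touch any other zone's pcplists.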
@@ -233,7 +233,7 @@ void shake_page(struct page *p, int access)
 		lru_add_drain_all();
 		if (PageLRU(p))
 			return;
-		drain_all_pages(NULL);
+		drain_all_pages(page_zone(p));
 		if (PageLRU(p) || is_free_buddy_page(p))
 			return;
 	}
@@ -1661,7 +1661,7 @@ static int __soft_offline_page(struct page *page, int flags)
 			if (!is_free_buddy_page(page))
 				lru_add_drain_all();
 			if (!is_free_buddy_page(page))
-				drain_all_pages(NULL);
+				drain_all_pages(page_zone(page));
 			SetPageHWPoison(page);
 			if (!is_free_buddy_page(page))
 				pr_info("soft offline: %#lx: page leaked\n",
@@ -1725,7 +1725,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	if (drain) {
 		lru_add_drain_all();
 		cond_resched();
-		drain_all_pages(NULL);
+		drain_all_pages(zone);
 	}
 
 	pfn = scan_movable_pages(start_pfn, end_pfn);
@@ -1747,7 +1747,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	lru_add_drain_all();
 	yield();
 	/* drain pcp pages, this is synchronous. */
-	drain_all_pages(NULL);
+	drain_all_pages(zone);
 	/*
 	 * dissolve free hugepages in the memory block before doing offlining
 	 * actually in order to make hugetlbfs's object counting consistent.