Commit de047ce4 authored by Jaewon Kim, committed by Greg Kroah-Hartman

vmscan: fix increasing nr_isolated incurred by putback unevictable pages

commit c54839a7 upstream.

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list.  But shrink_page_list()
puts mlocked pages back on the LRU without passing them to the caller and
without counting them as nr_reclaimed.  This leaves NR_ISOLATED elevated.

To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller, who will then take care of those pages (the
caller-side accounting is sketched below).
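
For context, a minimal sketch of the caller-side accounting this relies
on, simplified from reclaim_clean_pages_from_list() in mm/vmscan.c of
that era (argument lists and surrounding setup are omitted, so this is
illustrative, not the verbatim source):

	ret = shrink_page_list(&clean_pages, zone, &sc, /* ... */);

	/* Unreclaimed pages go back on the caller's list ... */
	list_splice(&clean_pages, page_list);

	/*
	 * ... and only the reclaimed ones are subtracted here; the
	 * leftovers are subtracted one by one when the caller puts
	 * them back.  A page that shrink_page_list() put back on the
	 * LRU itself appears on neither side, so its NR_ISOLATED
	 * contribution is never removed.
	 */
	mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);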

Minchan said:

It fixes two issues.

1. With unevictable pages, cma_alloc will succeed.

Strictly speaking, cma_alloc in the current kernel can fail because of
unevictable pages.

2. It fixes the leak of the NR_ISOLATED vmstat counter.

With it, too_many_isolated() works correctly.  Otherwise, the leak could
cause a hang until the process gets SIGKILL.
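
To illustrate point 2, a minimal sketch of that check, simplified from
too_many_isolated() in mm/vmscan.c (the real function also distinguishes
anon/file LRUs and the allocation context): direct reclaim stalls while
this returns true, so a counter that only ever grows turns the stall
into a hang.

static int too_many_isolated(struct zone *zone)
{
	unsigned long inactive = zone_page_state(zone, NR_INACTIVE_FILE);
	unsigned long isolated = zone_page_state(zone, NR_ISOLATED_FILE);

	/*
	 * If isolated pages are never un-accounted, this eventually
	 * stays true forever and reclaim keeps waiting.
	 */
	return isolated > inactive;
}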
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 706ad8dc
@@ -925,7 +925,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		unlock_page(page);
-		putback_lru_page(page);
+		list_add(&page->lru, &ret_pages);
 		continue;
 activate_locked:
...