Commit fbbb602e authored by Johannes Weiner, committed by Linus Torvalds

mm: deactivations shouldn't bias the LRU balance

Operations like MADV_FREE, FADV_DONTNEED etc.  currently move any affected
active pages to the inactive list to accelerate their reclaim (good) but
also steer page reclaim toward that LRU type, or away from the other
(bad).
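
For reference, a minimal userspace sketch of how these paths get exercised (illustration only, not part of this patch; the helper name is hypothetical, but madvise(2) and posix_fadvise(2) are the standard interfaces that end up in the deactivation code changed below):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>

    /* Illustration only: hint that a working set is disposable. */
    static void drop_working_set_hints(void *buf, size_t len, int fd, off_t off)
    {
            /* anon memory: contents are disposable, pages become lazyfree */
            madvise(buf, len, MADV_FREE);
            /* page cache: this file range won't be needed again */
            posix_fadvise(fd, off, (off_t)len, POSIX_FADV_DONTNEED);
    }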

The reason why this is undesirable is that such operations are not part of
the regular page aging cycle, but rather a fluke that doesn't say much
about the remaining pages on that list; they might all be in heavy use,
and once the chunk of easy victims has been purged, the VM continues to
apply elevated pressure on those remaining hot pages.  The other LRU,
meanwhile, might have easily reclaimable pages, and there was never a need
to steer away from it in the first place.

As the previous patch outlined, we should focus on recording actually
observed cost to steer the balance rather than speculating about the
potential value of one LRU list over the other.  In that spirit, leave
explicitly deactivated pages to the LRU algorithm to pick up, and let
rotations decide which list is the easiest to reclaim.
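
As a rough illustration of that idea (a simplified sketch with hypothetical types and names, not the kernel's actual data structures or the code of this series): reclaim cost is recorded where rotations are actually observed, and scan pressure is split in proportion to those observed costs, so a burst of explicit deactivations alone no longer shifts the balance.

    /* Conceptual sketch only -- hypothetical, not mm code. */
    struct lru_cost {
            unsigned long anon;     /* rotations observed on the anon LRU */
            unsigned long file;     /* rotations observed on the file LRU */
    };

    /* How many of 'nr' pages to scan from the file list: scan the file
     * list harder when the anon list has been the expensive one. */
    static unsigned long file_scan_target(const struct lru_cost *c,
                                          unsigned long nr)
    {
            unsigned long total = c->anon + c->file;

            if (!total)
                    return nr / 2;  /* no signal yet: split evenly */
            return nr * c->anon / total;
    }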

[cai@lca.pw: fix set-but-not-used warning]
  Link: http://lkml.kernel.org/r/20200522133335.GA624@Qians-MacBook-Air.local
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Qian Cai <cai@lca.pw>
Link: http://lkml.kernel.org/r/20200520232525.798933-10-hannes@cmpxchg.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 1431d4d1
@@ -498,7 +498,7 @@ void lru_cache_add_active_or_unevictable(struct page *page,
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
			      void *arg)
 {
-	int lru, file;
+	int lru;
	bool active;

	if (!PageLRU(page))
@@ -512,7 +512,6 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
		return;

	active = PageActive(page);
-	file = page_is_file_lru(page);
	lru = page_lru_base_type(page);

	del_page_from_lru_list(page, lruvec, lru + active);
@@ -538,14 +537,12 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,

	if (active)
		__count_vm_event(PGDEACTIVATE);
-	lru_note_cost(lruvec, !file, hpage_nr_pages(page));
 }

 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
			    void *arg)
 {
	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_lru(page);
		int lru = page_lru_base_type(page);

		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
@@ -554,7 +551,6 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
		add_page_to_lru_list(page, lruvec, lru);

		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
-		lru_note_cost(lruvec, !file, hpage_nr_pages(page));
	}
 }

@@ -579,7 +575,6 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,

		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
		count_memcg_page_event(page, PGLAZYFREE);
-		lru_note_cost(lruvec, 0, hpage_nr_pages(page));
	}
 }