Commit b49547ad authored by Chengming Zhou, committed by Andrew Morton

mm/zswap: stop lru list shrinking when encounter warm region

When the shrinker encounters an existing folio in the swap cache, it means we
are shrinking into the warmer region.  We should terminate shrinking if
we're in the dynamic shrinker context.

This patch adds LRU_STOP to support this, to avoid over-shrinking.
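
A minimal userspace model of the idea (not the kernel implementation; every name apart from the lru_status values is hypothetical) illustrates how a walk callback can end the scan as soon as it reaches the warm end of the list:

    #include <stdbool.h>
    #include <stdio.h>

    /* Mirrors enum lru_status, including the new LRU_STOP value. */
    enum lru_status { LRU_REMOVED, LRU_SKIP, LRU_RETRY, LRU_STOP };

    struct item { bool in_swapcache; };

    /* Hypothetical callback: stop walking once a warm (cached) item is seen. */
    static enum lru_status shrink_cb(struct item *it)
    {
            if (it->in_swapcache)
                    return LRU_STOP;        /* warm region reached, stop over-shrinking */
            return LRU_REMOVED;             /* cold item: written back and removed */
    }

    /* Simplified walk loop, loosely modelled on __list_lru_walk_one(). */
    static unsigned long walk(struct item *items, unsigned long n)
    {
            unsigned long isolated = 0;

            for (unsigned long i = 0; i < n; i++) {
                    switch (shrink_cb(&items[i])) {
                    case LRU_REMOVED:
                            isolated++;
                            break;
                    case LRU_STOP:
                            return isolated;        /* terminate the whole walk */
                    default:
                            break;
                    }
            }
            return isolated;
    }

    int main(void)
    {
            struct item list[] = { {false}, {false}, {true}, {false} };

            /* Only the two cold items before the warm one are reclaimed. */
            printf("reclaimed %lu items\n", walk(list, 4));
            return 0;
    }
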

Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-3-99d4084260a0@bytedance.com
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Nhat Pham <nphamcs@gmail.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 0827a1fb
@@ -24,6 +24,8 @@ enum lru_status {
 	LRU_SKIP,		/* item cannot be locked, skip */
 	LRU_RETRY,		/* item not freeable. May drop the lock
 				   internally, but has to return locked. */
+	LRU_STOP,		/* stop lru list walking. May drop the lock
+				   internally, but has to return locked. */
 };
 
 struct list_lru_one {
@@ -243,6 +243,9 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx,
 			 */
 			assert_spin_locked(&nlru->lock);
 			goto restart;
+		case LRU_STOP:
+			assert_spin_locked(&nlru->lock);
+			goto out;
 		default:
 			BUG();
 		}
@@ -1315,8 +1315,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		 * into the warmer region. We should terminate shrinking (if we're in the dynamic
 		 * shrinker context).
 		 */
-		if (writeback_result == -EEXIST && encountered_page_in_swapcache)
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+			ret = LRU_STOP;
 			*encountered_page_in_swapcache = true;
+		}
 	} else {
 		zswap_written_back_pages++;
 	}