Commit 6867c7a3 authored by T.J. Mercier, committed by Andrew Morton

mm: multi-gen LRU: don't spin during memcg release

When a memcg is in the process of being released, mem_cgroup_tryget will
fail because its reference count has already reached 0.  This can happen
during reclaim if the memcg has already been offlined, and we reclaim all
remaining pages attributed to the offlined memcg.  shrink_many attempts to
skip the empty memcg in this case, and continue reclaiming from the
remaining memcgs in the old generation.  If there is only one memcg
remaining, or if all remaining memcgs are in the process of being released,
then shrink_many will spin until all memcgs have finished being released.
The release occurs through a workqueue, so it can take a while before
kswapd is able to make any further progress.
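
To make the failure mode concrete, here is a minimal user-space C model of
the pre-fix scan.  It is a sketch only: struct memcg, tryget() and the
reclaim step are stand-ins for the real kernel structures, and the
million-pass threshold merely simulates the workqueue eventually finishing
the release.

/* toy model of the pre-fix shrink_many() scan (not kernel code) */
#include <stdbool.h>
#include <stdio.h>

struct memcg { int refcnt; bool on_list; };

/* models mem_cgroup_tryget(): fails once the refcount has hit zero */
static bool tryget(struct memcg *m)
{
	if (m->refcnt == 0)
		return false;
	m->refcnt++;
	return true;
}

int main(void)
{
	/* one old-generation bin whose every memcg is being released */
	struct memcg gen[2] = { { 0, true }, { 0, true } };
	long spins = 0;

	/*
	 * Pre-fix behaviour: a dying memcg is skipped but stays on the
	 * list, so each pass finds no work and the scan restarts at once.
	 * Progress resumes only when the asynchronous release (a
	 * workqueue in the kernel) unhashes the entries -- simulated here
	 * after one million wasted passes.
	 */
	while (gen[0].on_list || gen[1].on_list) {
		bool progress = false;

		for (int i = 0; i < 2; i++) {
			if (!gen[i].on_list || !tryget(&gen[i]))
				continue;	/* skip and retry later */
			progress = true;	/* reclaim would happen here */
		}
		if (!progress && ++spins == 1000000)
			gen[0].on_list = gen[1].on_list = false;
	}
	printf("wasted %ld passes waiting for the release\n", spins);
	return 0;
}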

This fix results in reductions in kswapd activity and direct reclaim in
a test where 28 apps (working set size > total memory) are repeatedly
launched in a random sequence (column A is before the fix, B is after):

                                       A          B      delta   ratio(%)
           allocstall_movable       5962       3539      -2423     -40.64
            allocstall_normal       2661       2417       -244      -9.17
kswapd_high_wmark_hit_quickly      53152       7594     -45558     -85.71
                   pageoutrun      57365      11750     -45615     -79.52

Link: https://lkml.kernel.org/r/20230814151636.1639123-1-tjmercier@google.com
Fixes: e4dde56c ("mm: multi-gen LRU: per-node lru_gen_folio lists")
Signed-off-by: T.J. Mercier <tjmercier@google.com>
Acked-by: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent e2c1ab07
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4854,16 +4854,17 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
 
 		spin_lock_irq(&pgdat->memcg_lru.lock);
 
-		VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
+		if (hlist_nulls_unhashed(&lruvec->lrugen.list))
+			goto unlock;
 
 		gen = lruvec->lrugen.gen;
 
-		hlist_nulls_del_rcu(&lruvec->lrugen.list);
+		hlist_nulls_del_init_rcu(&lruvec->lrugen.list);
 		pgdat->memcg_lru.nr_memcgs[gen]--;
 
 		if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq))
 			WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
-
+unlock:
 		spin_unlock_irq(&pgdat->memcg_lru.lock);
 	}
 }
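
The hunk above also switches hlist_nulls_del_rcu() to
hlist_nulls_del_init_rcu(), which makes the release idempotent: _del_init_
clears the node's ->pprev, so a second call sees hlist_nulls_unhashed() as
true and takes the new goto unlock path instead of unlinking an
already-unlinked node.  A pointer-level sketch of that idiom (plain
user-space C, no RCU; the node layout only mirrors the nulls-list one):

/* why del_init makes a second release a no-op (not kernel code) */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct node { struct node *next, **pprev; };

/* like hlist_nulls_unhashed(): true once ->pprev has been cleared */
static bool unhashed(const struct node *n)
{
	return !n->pprev;
}

/* like hlist_nulls_del_init_rcu(): unlink and mark the node unhashed */
static void del_init(struct node *n)
{
	*n->pprev = n->next;
	n->pprev = NULL;
}

int main(void)
{
	struct node *head = NULL;
	struct node n = { .next = NULL, .pprev = &head };

	head = &n;

	/* first release: node is still hashed, so it is unlinked */
	if (!unhashed(&n))
		del_init(&n);

	/*
	 * second release: unhashed() is now true, so the goto unlock path
	 * fires; plain hlist_nulls_del_rcu() would have left ->pprev
	 * dangling and unhashed() false, risking a double delete
	 */
	printf("second call sees unhashed=%d -> no-op\n", unhashed(&n));
	return 0;
}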
@@ -5435,8 +5436,10 @@ static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 	rcu_read_lock();
 
 	hlist_nulls_for_each_entry_rcu(lrugen, pos, &pgdat->memcg_lru.fifo[gen][bin], list) {
-		if (op)
+		if (op) {
 			lru_gen_rotate_memcg(lruvec, op);
+			op = 0;
+		}
 
 		mem_cgroup_put(memcg);
 
@@ -5444,7 +5447,7 @@ static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 		memcg = lruvec_memcg(lruvec);
 
 		if (!mem_cgroup_tryget(memcg)) {
-			op = 0;
+			lru_gen_release_memcg(memcg);
 			memcg = NULL;
 			continue;
 		}
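
With both changes, a failed mem_cgroup_tryget() now unhashes the dying
memcg synchronously via lru_gen_release_memcg() instead of leaving it on
the list for the workqueue, so the next pass over the bin cannot land on
it again (moving op = 0 into the if (op) block clears the rotation op
right after it is applied, as the old failure path used to).  Extending
the earlier toy model with the fixed failure path (again a sketch, not
kernel code):

/* the same toy scan, with the patched tryget-failure path */
#include <stdbool.h>
#include <stdio.h>

struct memcg { int refcnt; bool on_list; };

static bool tryget(struct memcg *m)
{
	if (m->refcnt == 0)
		return false;
	m->refcnt++;
	return true;
}

/* models the patched path: the scanner unhashes the dying memcg
 * itself instead of waiting for the workqueue */
static void release(struct memcg *m)
{
	m->on_list = false;
}

int main(void)
{
	struct memcg gen[2] = { { 0, true }, { 0, true } };
	long passes = 0;

	while (gen[0].on_list || gen[1].on_list) {
		passes++;
		for (int i = 0; i < 2; i++) {
			if (!gen[i].on_list)
				continue;
			if (!tryget(&gen[i])) {
				release(&gen[i]);	/* was: skip and spin */
				continue;
			}
			/* reclaim would happen here */
		}
	}
	printf("done after %ld pass(es), no spinning\n", passes);
	return 0;
}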