Commit e3de0446 authored by Coly Li, committed by Jens Axboe

bcache: reap from tail of c->btree_cache in bch_mca_scan()

When bch_mca_scan() shrinks the btree node cache from c->btree_cache,
every selected node is rotated from the head to the tail of the
c->btree_cache list, no matter whether it is reaped or not. But in the
bcache journal code, when flushing the btree nodes holding the oldest
journal entry, btree_flush_write() iterates and selects btree nodes
from the tail of the c->btree_cache list. The list_rotate_left() call
in bch_mca_scan() therefore makes btree_flush_write() iterate over
more nodes of c->btree_cache in reverse order.
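
(For illustration, here is a minimal userspace sketch of what
list_rotate_left() does to such a list: the head entry moves to the
tail, so each scan pass reshuffles exactly the tail region that
btree_flush_write() walks. The list helpers and the "struct node" type
below are simplified stand-ins for the kernel's <linux/list.h> API,
not the real definitions.)

#include <stdio.h>

/* Simplified stand-ins for the kernel's struct list_head helpers. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Same effect as the kernel's list_rotate_left(): head entry -> tail. */
static void list_rotate_left(struct list_head *h)
{
	if (h->next != h) {
		struct list_head *first = h->next;

		list_del(first);
		list_add_tail(first, h);
	}
}

/* Hypothetical cache entry, standing in for struct btree. */
struct node {
	struct list_head list;	/* must stay first member for the cast */
	char name;
};

int main(void)
{
	struct node nodes[4] = {
		{ .name = 'A' }, { .name = 'B' },
		{ .name = 'C' }, { .name = 'D' },
	};
	struct list_head cache, *p;
	int i;

	list_init(&cache);
	for (i = 0; i < 4; i++)
		list_add_tail(&nodes[i].list, &cache);

	/*
	 * Two inspected nodes in one scan pass mean two rotations,
	 * as the old bch_mca_scan() loop would do: A B C D -> C D A B.
	 */
	list_rotate_left(&cache);
	list_rotate_left(&cache);

	for (p = cache.next; p != &cache; p = p->next)
		printf("%c ", ((struct node *)p)->name);
	printf("\n");	/* prints: C D A B */

	return 0;
}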

This patch only reaps the selected btree node, and does not move it
from the head to the tail of the c->btree_cache list. Then
bch_mca_scan() will no longer mess up the c->btree_cache list order
for btree_flush_write().
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent d5c9c470
@@ -747,19 +747,19 @@ static unsigned long bch_mca_scan(struct shrinker *shrink,
 		i++;
 	}
 
-	for (; (nr--) && i < btree_cache_used; i++) {
-		if (list_empty(&c->btree_cache))
+	list_for_each_entry_safe_reverse(b, t, &c->btree_cache, list) {
+		if (nr <= 0 || i >= btree_cache_used)
 			goto out;
 
-		b = list_first_entry(&c->btree_cache, struct btree, list);
-		list_rotate_left(&c->btree_cache);
-
 		if (!mca_reap(b, 0, false)) {
 			mca_bucket_free(b);
 			mca_data_free(b);
 			rw_unlock(true, b);
 			freed++;
 		}
+
+		nr--;
+		i++;
 	}
 out:
 	mutex_unlock(&c->bucket_lock);
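
(The new loop frees the node it is visiting, so it must use the safe
reverse iterator, which caches the previous entry in "t" before the
loop body runs. The userspace sketch below mimics that pattern; the
open-coded list, the container_of() macro and "struct btree_stub" are
simplified, hypothetical stand-ins for the kernel definitions.)

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

/* Simplified version of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-in for struct btree. */
struct btree_stub {
	struct list_head list;
	int id;
};

int main(void)
{
	struct list_head cache = { &cache, &cache };
	struct list_head *pos, *tmp;
	int i;

	/* Populate the list; id 3 ends up at the tail. */
	for (i = 0; i < 4; i++) {
		struct btree_stub *b = malloc(sizeof(*b));

		b->id = i;
		b->list.prev = cache.prev;
		b->list.next = &cache;
		cache.prev->next = &b->list;
		cache.prev = &b->list;
	}

	/*
	 * Reverse, deletion-safe walk: save pos->prev in tmp *before*
	 * the body runs, so freeing the current entry cannot corrupt
	 * the traversal. This is the guarantee that
	 * list_for_each_entry_safe_reverse() provides.
	 */
	for (pos = cache.prev, tmp = pos->prev; pos != &cache;
	     pos = tmp, tmp = pos->prev) {
		struct btree_stub *b =
			container_of(pos, struct btree_stub, list);

		printf("reaping %d\n", b->id);	/* 3, 2, 1, 0 */
		pos->prev->next = pos->next;	/* list_del(pos) */
		pos->next->prev = pos->prev;
		free(b);			/* safe: tmp already saved */
	}

	return 0;
}

(Walking from the tail also means the shrinker now reaps nodes
starting from the same end of the list that btree_flush_write() scans,
instead of rotating the whole list while it works.)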