Commit 83e68f25 authored by Yosry Ahmed, committed by Andrew Morton

mm: zswap: remove unnecessary trees cleanups in zswap_swapoff()

During swapoff, try_to_unuse() makes sure that zswap_invalidate() is
called for all swap entries before zswap_swapoff() is called.  This means
that all zswap entries should already be removed from the tree.  Simplify
zswap_swapoff() by removing the trees cleanup code, and leave an assertion
in its place.

Link: https://lkml.kernel.org/r/20240124045113.415378-3-yosryahmed@google.com
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 64cf264c
@@ -1808,19 +1808,9 @@ void zswap_swapoff(int type)
 	if (!trees)
 		return;
 
-	for (i = 0; i < nr_zswap_trees[type]; i++) {
-		struct zswap_tree *tree = trees + i;
-		struct zswap_entry *entry, *n;
-
-		/* walk the tree and free everything */
-		spin_lock(&tree->lock);
-		rbtree_postorder_for_each_entry_safe(entry, n,
-						     &tree->rbroot,
-						     rbnode)
-			zswap_free_entry(entry);
-		tree->rbroot = RB_ROOT;
-		spin_unlock(&tree->lock);
-	}
+	/* try_to_unuse() invalidated all the entries already */
+	for (i = 0; i < nr_zswap_trees[type]; i++)
+		WARN_ON_ONCE(!RB_EMPTY_ROOT(&trees[i].rbroot));
 
 	kvfree(trees);
 	nr_zswap_trees[type] = 0;