    mm: swap: enforce updating inuse_pages at the end of swap_range_free() · 64cf264c
    Yosry Ahmed authored
    Patch series "mm: zswap: simplify zswap_swapoff()", v2.
    
    These patches aim to simplify zswap_swapoff() by removing the unnecessary
    trees cleanup code.  Patch 1 makes sure that the order of operations
    during swapoff is enforced correctly, making sure the simplification in
    patch 2 is correct in a future-proof manner.
    
    
    This patch (of 2):
    
    In swap_range_free(), we update inuse_pages then do some cleanups (arch
    invalidation, zswap invalidation, swap cache cleanups, etc).  During
    swapoff, try_to_unuse() checks that inuse_pages is 0 to make sure all swap
    entries are freed.  Make sure we only update inuse_pages after we are done
    with the cleanups in swap_range_free(), and use the proper memory barriers
    to enforce it.  This makes sure that code following try_to_unuse() can
    safely assume that swap_range_free() ran for all entries in the swapfile
    (e.g.  swap cache cleanup, zswap_swapoff()).
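
    The ordering requirement above can be sketched in userspace with C11
    atomics (the kernel code itself uses its own barrier primitives; the
    names and structure here are hypothetical stand-ins, not the actual
    swapfile.c code).  The freeing side performs its cleanups first and then
    drops the count with a release store; the waiter's acquire load of
    inuse_pages == 0 then guarantees all those cleanups are visible:

    ```c
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <assert.h>

    #define NR_ENTRIES 4

    /* Hypothetical stand-ins for per-swapfile state. */
    static atomic_uint inuse_pages = NR_ENTRIES;
    static int cleaned[NR_ENTRIES]; /* plain data protected only by the ordering */

    /* Analogue of swap_range_free(): cleanup first, release store last. */
    static void *free_entries(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NR_ENTRIES; i++) {
            cleaned[i] = 1; /* stand-in for zswap/arch/swap-cache cleanup */
            /* The release ordering publishes the cleanup before the
             * decrement; it pairs with the acquire load in main(). */
            atomic_fetch_sub_explicit(&inuse_pages, 1, memory_order_release);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, free_entries, NULL);

        /* Analogue of the check after try_to_unuse(): once inuse_pages
         * is observed as 0 with acquire ordering, every cleanup done
         * before the matching release stores is guaranteed visible. */
        while (atomic_load_explicit(&inuse_pages, memory_order_acquire) != 0)
            ;
        for (int i = 0; i < NR_ENTRIES; i++)
            assert(cleaned[i] == 1);

        pthread_join(t, NULL);
        puts("all cleanups visible after inuse_pages hit 0");
        return 0;
    }
    ```

    Without the release/acquire pairing, the waiter could observe the count
    reach zero while an earlier cleanup store is still not visible to it,
    which is exactly the fragility the patch closes.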
    
    In practice, this currently isn't a problem because swap_range_free() is
    called with the swap info lock held, and the swapoff code happens to spin
    for that after try_to_unuse().  However, this seems fragile and
    unintentional, so make it more reliable and future-proof.  This also
    facilitates a following simplification of zswap_swapoff().
    
    Link: https://lkml.kernel.org/r/20240124045113.415378-1-yosryahmed@google.com
    Link: https://lkml.kernel.org/r/20240124045113.415378-2-yosryahmed@google.com
    Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
    Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
    Cc: Chengming Zhou <zhouchengming@bytedance.com>
    Cc: Chris Li <chrisl@kernel.org>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Nhat Pham <nphamcs@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>