Commit 155b5f88 authored by Huang Ying, committed by Linus Torvalds

mm/swapfile.c: sort swap entries before free

To reduce lock contention on swap_info_struct->lock when freeing swap
entries, the freed entries are first collected in a per-CPU buffer and
really freed later in a batch.  During the batch free, if consecutive
swap entries in the per-CPU buffer belong to the same swap device,
swap_info_struct->lock needs to be acquired/released only once, so lock
contention is greatly reduced.  But if there are multiple swap devices,
the lock may be released/acquired unnecessarily, because entries that
belong to the same swap device can be non-consecutive in the per-CPU
buffer.

To solve the issue, the per-CPU buffer is sorted according to the swap
device before freeing the swap entries.

With the patch, the free time for memory (some of it swapped out) is
reduced by 11.6% (from 2.65s to 2.35s) in the vm-scalability
swap-w-rand test case with 16 processes.  The test is done on a Xeon E5
v3 system.  The swap device used is a RAM-simulated PMEM (persistent
memory) device.  To exercise swapping, the test case creates 16
processes, which allocate and write to anonymous pages until the RAM
and part of the swap device are used up; finally the memory (some of it
swapped out) is freed before exit.

[akpm@linux-foundation.org: tweak comment]
Link: http://lkml.kernel.org/r/20170525005916.25249-1-ying.huang@intel.com
Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Tim Chen <tim.c.chen@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8e675f7a
@@ -37,6 +37,7 @@
 #include <linux/swapfile.h>
 #include <linux/export.h>
 #include <linux/swap_slots.h>
+#include <linux/sort.h>
 
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -1198,6 +1199,13 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 		swapcache_free_cluster(entry);
 }
 
+static int swp_entry_cmp(const void *ent1, const void *ent2)
+{
+	const swp_entry_t *e1 = ent1, *e2 = ent2;
+
+	return (int)swp_type(*e1) - (int)swp_type(*e2);
+}
+
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
 	struct swap_info_struct *p, *prev;
@@ -1208,6 +1216,14 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
 	prev = NULL;
 	p = NULL;
+
+	/*
+	 * Sort swap entries by swap device, so each lock is only taken once.
+	 * nr_swapfiles isn't absolutely correct, but the overhead of sort() is
+	 * so low that it isn't necessary to optimize further.
+	 */
+	if (nr_swapfiles > 1)
+		sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
 	for (i = 0; i < n; ++i) {
 		p = swap_info_get_cont(entries[i], prev);
 		if (p)