Commit 9b15b817 authored by Hugh Dickins, committed by Linus Torvalds

swap: fix shmem swapping when more than 8 areas

Minchan Kim reports that when a system has many swap areas, and tmpfs
swaps out to the ninth or more, shmem_getpage_gfp()'s attempts to read
back the page cannot locate it, and the read fails with -ENOMEM.

Whoops.  Yes, I blindly followed read_swap_header()'s pte_to_swp_entry(
swp_entry_to_pte()) technique for determining maximum usable swap
offset, without stopping to realize that that actually depends upon the
pte swap encoding shifting swap offset to the higher bits and truncating
it there.  Whereas our radix_tree swap encoding leaves offset in the
lower bits: it's swap "type" (that is, index of swap area) that was
truncated.
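
To make the truncation concrete: with MAX_SWAPFILES_SHIFT of 5 and
RADIX_TREE_EXCEPTIONAL_SHIFT of 2, the old layout keeps swap type in the
top five bits of the swp_entry_t, so shifting the whole value up by two
bits for the radix_tree exceptional encoding throws away the two highest
type bits.  A minimal standalone sketch of that round trip (an
illustration only, not kernel code; the helpers merely model swp_entry(),
swp_to_radix_entry() and radix_to_swp_entry()):

#include <stdio.h>

#define BITS_PER_LONG			(sizeof(unsigned long) * 8)
#define MAX_SWAPFILES_SHIFT		5
#define RADIX_EXCEPTIONAL_SHIFT		2
#define RADIX_EXCEPTIONAL_ENTRY		2UL

/* Pre-fix layout: swap type in the top five bits, offset in the rest. */
#define OLD_TYPE_SHIFT			(BITS_PER_LONG - MAX_SWAPFILES_SHIFT)
#define OLD_OFFSET_MASK			((1UL << OLD_TYPE_SHIFT) - 1)

/* Model of swp_entry(): arch-independent value, type high, offset low. */
static unsigned long make_entry(unsigned long type, unsigned long offset)
{
	return (type << OLD_TYPE_SHIFT) | (offset & OLD_OFFSET_MASK);
}

/* Model of swp_to_radix_entry() followed by radix_to_swp_entry(). */
static unsigned long radix_round_trip(unsigned long val)
{
	unsigned long radix = (val << RADIX_EXCEPTIONAL_SHIFT) | RADIX_EXCEPTIONAL_ENTRY;
	return radix >> RADIX_EXCEPTIONAL_SHIFT;
}

int main(void)
{
	unsigned long entry = make_entry(8, 1234);	/* ninth swap area: type 8 */
	unsigned long back = radix_round_trip(entry);

	/* Offset survives, but type 8 comes back as 0: the entry now names the wrong swap area. */
	printf("offset: %lu -> %lu\n", entry & OLD_OFFSET_MASK, back & OLD_OFFSET_MASK);
	printf("type:   %lu -> %lu\n", entry >> OLD_TYPE_SHIFT, back >> OLD_TYPE_SHIFT);
	return 0;
}

Types 0 through 7 keep all their bits through the shift; type 8 and above
lose theirs, which matches the ninth-swap-area symptom reported.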

Fix it by reducing the SWP_TYPE_SHIFT() in swapops.h, and removing the
broken radix_to_swp_entry(swp_to_radix_entry()) from read_swap_header().

This does not reduce the usable size of a swap area any further, it
leaves it as claimed when making the original commit: no change from 3.0
on x86_64, nor on i386 without PAE; but 3.0's 512GB is reduced to 128GB
per swapfile on i386 with PAE.  It's not a change I would have risked
five years ago, but with x86_64 supported for ten years, I believe it's
appropriate now.
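
For the record, the i386-with-PAE arithmetic behind those figures, as a
back-of-the-envelope sketch (assuming 4KB pages and a 32-bit unsigned
long in swp_entry_t; not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long long page_size = 4096;
	unsigned int bits = 32;				/* bits in swp_entry_t.val on i386 */
	unsigned int old_offset_bits = bits - 5;	/* 32 - MAX_SWAPFILES_SHIFT = 27 */
	unsigned int new_offset_bits = bits - (5 + 2);	/* also subtract RADIX_TREE_EXCEPTIONAL_SHIFT */

	printf("old: 2^%u pages = %lluGB per swapfile\n", old_offset_bits,
	       ((1ULL << old_offset_bits) * page_size) >> 30);
	printf("new: 2^%u pages = %lluGB per swapfile\n", new_offset_bits,
	       ((1ULL << new_offset_bits) * page_size) >> 30);
	return 0;
}

It prints 512GB and 128GB.  The x86_64 and non-PAE i386 cases stay as
they were because there, as far as I can tell, the architecture's swap
pte already allows fewer offset bits than either figure, so it remains
the tighter limit.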

Hmm, and what if some architecture implements its swap pte with offset
encoded below type? That would equally break the maximum usable swap
offset check.  Happily, they all follow the same tradition of encoding
offset above type, but I'll prepare a check on that for next.
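
For reference, the shape of that maximum-offset check, modelled on a toy
"architecture" (not any real pte format) whose swap pte has 32 usable
bits with a five-bit type at the bottom and the offset above it, so that
excess offset bits fall off the top of the word, exactly the behaviour
the check relies on:

#include <stdio.h>

#define PTE_BITS	32	/* usable bits in the toy swap pte */
#define TYPE_BITS	5	/* toy type field, at the bottom of the pte */

typedef unsigned long long toy_pte_t;	/* wide enough to hold PTE_BITS */

/* Toy __swp_entry(): offset above type; bits beyond PTE_BITS are dropped. */
static toy_pte_t toy_swp_entry_to_pte(unsigned long type, unsigned long long offset)
{
	toy_pte_t pte = ((toy_pte_t)offset << TYPE_BITS) | type;
	return pte & ((1ULL << PTE_BITS) - 1);
}

/* Toy __swp_offset(): recover the offset from above the type field. */
static unsigned long long toy_pte_to_swp_offset(toy_pte_t pte)
{
	return pte >> TYPE_BITS;
}

int main(void)
{
	/* Encode type 0 with an all-ones offset, decode, read the offset back. */
	unsigned long long max = toy_pte_to_swp_offset(toy_swp_entry_to_pte(0, ~0ULL));

	/* Prints 134217727, i.e. 2^27 - 1: the largest offset this toy pte can hold. */
	printf("maximum usable swap offset: %llu\n", max);
	return 0;
}

If instead the toy encoding put the offset below the type, the excess
offset bits would presumably spill into the type field rather than fall
off the top, and the round trip would no longer report a sane maximum:
that is what the check I'll prepare next needs to guard against.
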
Reported-and-Reviewed-and-Tested-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org [3.1, 3.2, 3.3, 3.4]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent a2c2df86
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -9,13 +9,15 @@
  * get good packing density in that tree, so the index should be dense in
  * the low-order bits.
  *
- * We arrange the `type' and `offset' fields so that `type' is at the five
+ * We arrange the `type' and `offset' fields so that `type' is at the seven
  * high-order bits of the swp_entry_t and `offset' is right-aligned in the
- * remaining bits.
+ * remaining bits.  Although `type' itself needs only five bits, we allow for
+ * shmem/tmpfs to shift it all up a further two bits: see swp_to_radix_entry().
  *
  * swp_entry_t's are *never* stored anywhere in their arch-dependent format.
  */
-#define SWP_TYPE_SHIFT(e)	(sizeof(e.val) * 8 - MAX_SWAPFILES_SHIFT)
+#define SWP_TYPE_SHIFT(e)	((sizeof(e.val) * 8) - \
+			(MAX_SWAPFILES_SHIFT + RADIX_TREE_EXCEPTIONAL_SHIFT))
 #define SWP_OFFSET_MASK(e)	((1UL << SWP_TYPE_SHIFT(e)) - 1)
 
 /*
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1916,24 +1916,20 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
 	/*
 	 * Find out how many pages are allowed for a single swap
-	 * device. There are three limiting factors: 1) the number
+	 * device. There are two limiting factors: 1) the number
 	 * of bits for the swap offset in the swp_entry_t type, and
 	 * 2) the number of bits in the swap pte as defined by the
-	 * the different architectures, and 3) the number of free bits
-	 * in an exceptional radix_tree entry. In order to find the
+	 * different architectures. In order to find the
 	 * largest possible bit mask, a swap entry with swap type 0
 	 * and swap offset ~0UL is created, encoded to a swap pte,
 	 * decoded to a swp_entry_t again, and finally the swap
 	 * offset is extracted. This will mask all the bits from
 	 * the initial ~0UL mask that can't be encoded in either
 	 * the swp_entry_t or the architecture definition of a
-	 * swap pte. Then the same is done for a radix_tree entry.
+	 * swap pte.
	 */
	maxpages = swp_offset(pte_to_swp_entry(
-			swp_entry_to_pte(swp_entry(0, ~0UL))));
-	maxpages = swp_offset(radix_to_swp_entry(
-			swp_to_radix_entry(swp_entry(0, maxpages)))) + 1;
+			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
 	if (maxpages > swap_header->info.last_page) {
 		maxpages = swap_header->info.last_page + 1;
 		/* p->max is an unsigned int: don't overflow it */