Commit 9a1ea439 authored by Hugh Dickins, committed by Linus Torvalds

mm: put_and_wait_on_page_locked() while page is migrated

Waiting on a page migration entry has used wait_on_page_locked() all along
since 2006: but you cannot safely wait_on_page_locked() without holding a
reference to the page, and that extra reference is enough to make
migrate_page_move_mapping() fail with -EAGAIN, when a racing task faults
on the entry before migrate_page_move_mapping() gets there.
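
To make the failure mode concrete, here is a minimal sketch, under simplified
assumptions, of the refcount check at the heart of migrate_page_move_mapping():
page_ref_freeze() only succeeds when page_count() exactly matches the
references migration can account for, so a waiter's extra reference is enough
to fail it.  This is not the upstream function (the real expected_count
calculation is more involved), and the helper name is made up for illustration.

	#include <linux/errno.h>
	#include <linux/mm.h>
	#include <linux/page_ref.h>

	/* Illustrative helper, not the real migrate_page_move_mapping(). */
	static int sketch_freeze_page_for_migration(struct page *page,
						    int expected_count)
	{
		if (!page_ref_freeze(page, expected_count)) {
			/*
			 * Someone holds a reference that migration cannot
			 * account for, e.g. a faulter sleeping in
			 * wait_on_page_locked(): give up with -EAGAIN.
			 */
			return -EAGAIN;
		}
		/* The real code would now replace the page cache entry. */
		page_ref_unfreeze(page, expected_count);
		return 0;
	}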

And that failure is retried nine times, amplifying the pain when trying to
migrate a popular page.  With a single persistent faulter, migration
sometimes succeeds; with two or three concurrent faulters, success becomes
much less likely (and the more the page was mapped, the worse the overhead
of unmapping and remapping it on each try).
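
For reference, the "nine times" comes from the outer retry loop in
migrate_pages(); a heavily condensed sketch of its shape follows (declarations
and most error handling omitted, so this is a fragment rather than the exact
upstream code):

	for (pass = 0; pass < 10 && retry; pass++) {
		retry = 0;
		list_for_each_entry_safe(page, page2, from, lru) {
			rc = unmap_and_move(get_new_page, put_new_page,
					    private, page, pass > 2,
					    mode, reason);
			if (rc == -EAGAIN)
				retry++;	/* e.g. unexpected page_count */
			/* successes and permanent failures drop out here */
		}
	}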

This is especially a problem for memory offlining, where the outer level
retries forever (or until terminated from userspace), because a heavy
refault workload can trigger an endless loop of migration failures.
wait_on_page_locked() is the wrong tool for the job.

David Herrmann (but was he the first?) noticed this issue in 2014:
https://marc.info/?l=linux-mm&m=140110465608116&w=2

Tim Chen started a thread in August 2017 which appears relevant:
https://marc.info/?l=linux-mm&m=150275941014915&w=2 where Kan Liang went
on to implicate __migration_entry_wait():
https://marc.info/?l=linux-mm&m=150300268411980&w=2 and the thread ended
up with the v4.14 commits 2554db91 ("sched/wait: Break up long wake
list walk") and 11a19c7b ("sched/wait: Introduce wakeup boomark in
wake_up_page_bit").

Baoquan He reported "Memory hotplug softlock issue" 14 November 2018:
https://marc.info/?l=linux-mm&m=154217936431300&w=2

We have all assumed that it is essential to hold a page reference while
waiting on a page lock: partly to guarantee that there is still a struct
page when MEMORY_HOTREMOVE is configured, but also to protect against
reuse of the struct page going to someone who then holds the page locked
indefinitely, when the waiter can reasonably expect timely unlocking.

But in fact, so long as wait_on_page_bit_common() does the put_page(), and
is careful not to rely on struct page contents thereafter, there is no
need to hold a reference to the page while waiting on it.  That does mean
that this case cannot go back through the loop: but that's fine for the
page migration case, and even if used more widely, is limited by the "Stop
walking if it's locked" optimization in wake_page_function().

Add interface put_and_wait_on_page_locked() to do this, using "behavior"
enum in place of "lock" arg to wait_on_page_bit_common() to implement it.
No interruptible or killable variant needed yet, but they might follow: I
have a vague notion that reporting -EINTR should take precedence over
return from wait_on_page_bit_common() without knowing the page state, so
arrange it accordingly - but that may be nothing but pedantic.
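
Purely hypothetical, to illustrate the point about -EINTR (no such function is
added by this patch): a killable variant would presumably pass TASK_KILLABLE
with the same DROP behavior, and its caller would have to treat -EINTR as
"page state unknown", since the reference has already been dropped:

	/* Hypothetical sketch only, not part of this patch. */
	int put_and_wait_on_page_locked_killable(struct page *page)
	{
		wait_queue_head_t *q;

		page = compound_head(page);
		q = page_waitqueue(page);
		return wait_on_page_bit_common(q, page, PG_locked,
					       TASK_KILLABLE, DROP);
	}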

__migration_entry_wait() still has to take a brief reference to the page,
prior to calling put_and_wait_on_page_locked(): but now that it is dropped
before waiting, the chance of impeding page migration is very much
reduced.  Should we perhaps disable preemption across this?

shrink_page_list()'s __ClearPageLocked(): that was a surprise!  This
survived a lot of testing before that showed up.  PageWaiters may have
been set by wait_on_page_bit_common(), and the reference dropped, just
before shrink_page_list() succeeds in freezing its last page reference: in
such a case, unlock_page() must be used.  Follow the suggestion from
Michal Hocko, just revert a978d6f5 ("mm: unlockless reclaim") now:
that optimization predates PageWaiters, and won't buy much these days; but
we can reinstate it for the !PageWaiters case if anyone notices.
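
For context, unlock_page() (roughly as it reads in mm/filemap.c here) does an
atomic clear that also reports whether PG_waiters was set, and wakes the
waiters if so; a plain __ClearPageLocked() does neither, so a
put_and_wait_on_page_locked() sleeper, which no longer holds a reference to
keep the page alive, could be left waiting forever:

	void unlock_page(struct page *page)
	{
		BUILD_BUG_ON(PG_waiters != 7);
		page = compound_head(page);
		VM_BUG_ON_PAGE(!PageLocked(page), page);
		if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags))
			wake_up_page_bit(page, PG_locked);
	}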

It does raise the question: should vmscan.c's is_page_cache_freeable() and
__remove_mapping() now treat a PageWaiters page as if an extra reference
were held?  Perhaps, but I don't think it matters much, since
shrink_page_list() already had to win its trylock_page(), so waiters are
not very common there: I noticed no difference when trying the bigger
change, and it's surely not needed while put_and_wait_on_page_locked() is
only used for page migration.

[willy@infradead.org: add put_and_wait_on_page_locked() kerneldoc]
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261121330.1116@eggly.anvils
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Baoquan He <bhe@redhat.com>
Tested-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Nick Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f0c867d9
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -537,6 +537,8 @@ static inline int wait_on_page_locked_killable(struct page *page)
 	return wait_on_page_bit_killable(compound_head(page), PG_locked);
 }
 
+extern void put_and_wait_on_page_locked(struct page *page);
+
 /*
  * Wait for a page to complete writeback
  */
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -981,7 +981,14 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 	if (wait_page->bit_nr != key->bit_nr)
 		return 0;
 
-	/* Stop walking if it's locked */
+	/*
+	 * Stop walking if it's locked.
+	 * Is this safe if put_and_wait_on_page_locked() is in use?
+	 * Yes: the waker must hold a reference to this page, and if PG_locked
+	 * has now already been set by another task, that task must also hold
+	 * a reference to the *same usage* of this page; so there is no need
+	 * to walk on to wake even the put_and_wait_on_page_locked() callers.
+	 */
 	if (test_bit(key->bit_nr, &key->page->flags))
 		return -1;
 
@@ -1049,25 +1056,44 @@ static void wake_up_page(struct page *page, int bit)
 	wake_up_page_bit(page, bit);
 }
 
+/*
+ * A choice of three behaviors for wait_on_page_bit_common():
+ */
+enum behavior {
+	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
+			 * __lock_page() waiting on then setting PG_locked.
+			 */
+	SHARED,		/* Hold ref to page and check the bit when woken, like
+			 * wait_on_page_writeback() waiting on PG_writeback.
+			 */
+	DROP,		/* Drop ref to page before wait, no check when woken,
+			 * like put_and_wait_on_page_locked() on PG_locked.
+			 */
+};
+
 static inline int wait_on_page_bit_common(wait_queue_head_t *q,
-	struct page *page, int bit_nr, int state, bool lock)
+	struct page *page, int bit_nr, int state, enum behavior behavior)
 {
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
+	bool bit_is_set;
 	bool thrashing = false;
+	bool delayacct = false;
 	unsigned long pflags;
 	int ret = 0;
 
 	if (bit_nr == PG_locked &&
 	    !PageUptodate(page) && PageWorkingset(page)) {
-		if (!PageSwapBacked(page))
+		if (!PageSwapBacked(page)) {
 			delayacct_thrashing_start();
+			delayacct = true;
+		}
 		psi_memstall_enter(&pflags);
 		thrashing = true;
 	}
 
 	init_wait(wait);
-	wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0;
+	wait->flags = behavior == EXCLUSIVE ? WQ_FLAG_EXCLUSIVE : 0;
 	wait->func = wake_page_function;
 	wait_page.page = page;
 	wait_page.bit_nr = bit_nr;
@@ -1084,14 +1110,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		spin_unlock_irq(&q->lock);
 
-		if (likely(test_bit(bit_nr, &page->flags))) {
+		bit_is_set = test_bit(bit_nr, &page->flags);
+		if (behavior == DROP)
+			put_page(page);
+
+		if (likely(bit_is_set))
 			io_schedule();
-		}
 
-		if (lock) {
+		if (behavior == EXCLUSIVE) {
 			if (!test_and_set_bit_lock(bit_nr, &page->flags))
 				break;
-		} else {
+		} else if (behavior == SHARED) {
 			if (!test_bit(bit_nr, &page->flags))
 				break;
 		}
 
@@ -1100,12 +1129,23 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 			ret = -EINTR;
 			break;
 		}
+
+		if (behavior == DROP) {
+			/*
+			 * We can no longer safely access page->flags:
+			 * even if CONFIG_MEMORY_HOTREMOVE is not enabled,
+			 * there is a risk of waiting forever on a page reused
+			 * for something that keeps it locked indefinitely.
+			 * But best check for -EINTR above before breaking.
+			 */
+			break;
+		}
 	}
 
 	finish_wait(q, wait);
 
 	if (thrashing) {
-		if (!PageSwapBacked(page))
+		if (delayacct)
 			delayacct_thrashing_end();
 		psi_memstall_leave(&pflags);
 	}
@@ -1124,17 +1164,36 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 void wait_on_page_bit(struct page *page, int bit_nr)
 {
 	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, false);
+	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
 }
 EXPORT_SYMBOL(wait_on_page_bit);
 
 int wait_on_page_bit_killable(struct page *page, int bit_nr)
 {
 	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, false);
+	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED);
 }
 EXPORT_SYMBOL(wait_on_page_bit_killable);
 
+/**
+ * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
+ * @page: The page to wait for.
+ *
+ * The caller should hold a reference on @page.  They expect the page to
+ * become unlocked relatively soon, but do not wish to hold up migration
+ * (for example) by holding the reference while waiting for the page to
+ * come unlocked.  After this function returns, the caller should not
+ * dereference @page.
+ */
+void put_and_wait_on_page_locked(struct page *page)
+{
+	wait_queue_head_t *q;
+
+	page = compound_head(page);
+	q = page_waitqueue(page);
+	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP);
+}
+
 /**
  * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
  * @page: Page defining the wait queue of interest
@@ -1264,7 +1323,8 @@ void __lock_page(struct page *__page)
 {
 	struct page *page = compound_head(__page);
 	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, true);
+	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+				EXCLUSIVE);
 }
 EXPORT_SYMBOL(__lock_page);
 
@@ -1272,7 +1332,8 @@ int __lock_page_killable(struct page *__page)
 {
 	struct page *page = compound_head(__page);
 	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, true);
+	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+					EXCLUSIVE);
 }
 EXPORT_SYMBOL_GPL(__lock_page_killable);
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1490,8 +1490,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
 		if (!get_page_unless_zero(page))
 			goto out_unlock;
 		spin_unlock(vmf->ptl);
-		wait_on_page_locked(page);
-		put_page(page);
+		put_and_wait_on_page_locked(page);
 		goto out;
 	}
 
@@ -1527,8 +1526,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
 		if (!get_page_unless_zero(page))
 			goto out_unlock;
 		spin_unlock(vmf->ptl);
-		wait_on_page_locked(page);
-		put_page(page);
+		put_and_wait_on_page_locked(page);
 		goto out;
 	}
 
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -327,16 +327,13 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 
 	/*
 	 * Once page cache replacement of page migration started, page_count
-	 * *must* be zero. And, we don't want to call wait_on_page_locked()
-	 * against a page without get_page().
-	 * So, we use get_page_unless_zero(), here. Even failed, page fault
-	 * will occur again.
+	 * is zero; but we must not call put_and_wait_on_page_locked() without
+	 * a ref. Use get_page_unless_zero(), and just fault again if it fails.
 	 */
 	if (!get_page_unless_zero(page))
 		goto out;
 	pte_unmap_unlock(ptep, ptl);
-	wait_on_page_locked(page);
-	put_page(page);
+	put_and_wait_on_page_locked(page);
 	return;
 out:
 	pte_unmap_unlock(ptep, ptl);
@@ -370,8 +367,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	if (!get_page_unless_zero(page))
 		goto unlock;
 	spin_unlock(ptl);
-	wait_on_page_locked(page);
-	put_page(page);
+	put_and_wait_on_page_locked(page);
 	return;
 unlock:
 	spin_unlock(ptl);
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1460,14 +1460,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true))
 			goto keep_locked;
-		/*
-		 * At this point, we have no other references and there is
-		 * no way to pick any more up (removed from LRU, removed
-		 * from pagecache). Can use non-atomic bitops now (and
-		 * we obviously don't have to worry about waking up a process
-		 * waiting on the page lock, because there are no references.
-		 */
-		__ClearPageLocked(page);
+
+		unlock_page(page);
 free_it:
 		nr_reclaimed++;