Commit dd18dbc2 authored by Kirill A. Shutemov, committed by Linus Torvalds

mm, thp: close race between mremap() and split_huge_page()

It's critical for split_huge_page() (and migration) to catch and freeze
all PMDs on the rmap walk.  It gets tricky if there's a concurrent fork()
or mremap(), since we usually copy/move page table entries in dup_mm() or
move_page_tables() without the rmap lock taken.  To make this work we rely
on the rmap walk order not to miss any entry: the walk must see the
destination VMA after the source one.  Otherwise the walk can visit the
destination before the entry has been copied or moved there and therefore
miss it.

But after switching the rmap implementation to an interval tree it's not
always possible to preserve the expected walk order.

It works fine for dup_mm(), since the new VMA has the same
vma_start_pgoff() / vma_last_pgoff() and we explicitly insert the dst VMA
after the src one with vma_interval_tree_insert_after().

But on move_vma() the destination VMA can be merged into an adjacent one
and, as a result, shifted left in the interval tree.  Fortunately, we can
detect the situation and prevent the race with the rmap walk by moving
page table entries under the rmap lock.  See commit 38a76013.
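
For context, a trimmed sketch of the PTE-level protection that commit
added (modeled on mm/mremap.c as of this fix, not a verbatim excerpt):
copy_vma() reports via need_rmap_locks whether the walk order can still
be relied upon, and move_ptes() takes the rmap locks when it cannot:

	/*
	 * Sketch of move_ptes(): with need_rmap_locks set, hold the rmap
	 * locks across the move so a concurrent rmap walk observes either
	 * the old or the new ptes, never neither.
	 */
	struct address_space *mapping = NULL;
	struct anon_vma *anon_vma = NULL;

	if (need_rmap_locks) {
		if (vma->vm_file) {
			mapping = vma->vm_file->f_mapping;
			mutex_lock(&mapping->i_mmap_mutex);
		}
		if (vma->anon_vma) {
			anon_vma = vma->anon_vma;
			anon_vma_lock_write(anon_vma);
		}
	}

	/* ... move the pte entries from old_pmd to new_pmd ... */

	if (anon_vma)
		anon_vma_unlock_write(anon_vma);
	if (mapping)
		mutex_unlock(&mapping->i_mmap_mutex);

The fix below applies the same conditional locking to the huge-PMD path
in move_page_tables().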

The problem is that we miss the lock when we move a transhuge PMD.  Most
likely this bug caused the crash [1].

[1] http://thread.gmane.org/gmane.linux.kernel.mm/96473

Fixes: 108d6642 ("mm anon rmap: remove anon_vma_moveto_tail")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@vger.kernel.org>        [3.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 3551a928
@@ -194,10 +194,17 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			break;
 		if (pmd_trans_huge(*old_pmd)) {
 			int err = 0;
-			if (extent == HPAGE_PMD_SIZE)
+			if (extent == HPAGE_PMD_SIZE) {
+				VM_BUG_ON(vma->vm_file || !vma->anon_vma);
+				/* See comment in move_ptes() */
+				if (need_rmap_locks)
+					anon_vma_lock_write(vma->anon_vma);
 				err = move_huge_pmd(vma, new_vma, old_addr,
 						    new_addr, old_end,
 						    old_pmd, new_pmd);
+				if (need_rmap_locks)
+					anon_vma_unlock_write(vma->anon_vma);
+			}
 			if (err > 0) {
 				need_flush = true;
 				continue;