Commit b38af472 authored by Hugh Dickins, committed by Linus Torvalds

x86,mm: fix pte_special versus pte_numa

Sasha Levin has shown oopses on ffffea0003480048 and ffffea0003480008 at
mm/memory.c:1132, running Trinity on different 3.16-rc-next kernels: that
is where zap_pte_range() checks page->mapping to see if PageAnon(page).

Those addresses fit struct pages for pfns d2001 and d2000, and in each
dump a register or a stack slot showed d2001730 or d2000730: pte flags
0x730 are PCD ACCESSED PROTNONE SPECIAL IOMAP; and Sasha's e820 map has
a hole between cfffffff and 100000000, which would need special access.
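
A rough decoding of those flags (assuming the 3.16-era x86 layout, where
_PAGE_PROTNONE reuses the GLOBAL bit for not-present ptes, _PAGE_SPECIAL
is software bit 1 and _PAGE_IOMAP software bit 2):

	0x730 = _PAGE_IOMAP    0x400
	      | _PAGE_SPECIAL  0x200
	      | _PAGE_PROTNONE 0x100
	      | _PAGE_ACCESSED 0x020
	      | _PAGE_PCD      0x010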

Commit c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on
the PMD and PTE levels") has broken vm_normal_page(): a PROTNONE SPECIAL
pte no longer passes the pte_special() test, so zap_pte_range() goes on
to try to access a non-existent struct page.

Fix this by refining pte_special() (SPECIAL with PRESENT or PROTNONE) to
complement pte_numa() (SPECIAL with neither PRESENT nor PROTNONE).  A
hint that this was a problem was that c46a7c81 added a pte_numa() test
to vm_normal_page(), and moved its is_zero_pfn() test from the slow to
the fast path: this was papering over a pte_special() snag when the zero
page was encountered during zap.  This patch reverts vm_normal_page() to
how it was before, relying on pte_special().
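
For reference, the CONFIG_NUMA_BALANCING pte_numa() in
include/asm-generic/pgtable.h that the refined pte_special() complements
effectively tests the following (a sketch of the logic, not verbatim
kernel source):

	/* NUMA-hinting pte: the shared NUMA/SPECIAL bit is set while
	 * both _PAGE_PRESENT and _PAGE_PROTNONE are clear.
	 */
	static inline int pte_numa(pte_t pte)
	{
		return (pte_flags(pte) &
			(_PAGE_NUMA | _PAGE_PROTNONE | _PAGE_PRESENT))
				== _PAGE_NUMA;
	}

Since _PAGE_NUMA and _PAGE_SPECIAL share the same bit on x86, the two
predicates partition SPECIAL-bit ptes by whether PRESENT or PROTNONE is
also set.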

It still appears that this patch may be incomplete: aren't there other
places which need to be handling PROTNONE along with PRESENT?  For
example, pte_mknuma() clears _PAGE_PRESENT and sets _PAGE_NUMA, but on a
PROT_NONE area that would make it pte_special().  This is side-stepped
by the fact that NUMA hinting faults skip PROT_NONE VMAs, and there is
no case in which a NUMA hinting fault on a PROT_NONE VMA would be
interesting.
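
To spell out that corner case as flag arithmetic (a sketch, assuming the
x86 bit aliasing described in the comment added below):

	PROT_NONE pte:         _PAGE_PROTNONE set, _PAGE_PRESENT clear
	after pte_mknuma():    _PAGE_PROTNONE | _PAGE_NUMA
	                       (and _PAGE_NUMA is _PAGE_SPECIAL on x86)

	refined pte_special(): SPECIAL && (PRESENT || PROTNONE) -> true
	pte_numa():            SPECIAL && !PRESENT && !PROTNONE -> false

so such a pte would look special rather than NUMA; harmless only
because, as noted above, NUMA hinting never reaches PROT_NONE VMAs.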

Fixes: c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on the PMD and PTE levels")
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
Cc: <stable@vger.kernel.org>	[3.16]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7ea8574e
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -131,8 +131,13 @@ static inline int pte_exec(pte_t pte)
 
 static inline int pte_special(pte_t pte)
 {
-	return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
-		(_PAGE_PRESENT|_PAGE_SPECIAL);
+	/*
+	 * See CONFIG_NUMA_BALANCING pte_numa in include/asm-generic/pgtable.h.
+	 * On x86 we have _PAGE_BIT_NUMA == _PAGE_BIT_GLOBAL+1 ==
+	 * __PAGE_BIT_SOFTW1 == _PAGE_BIT_SPECIAL.
+	 */
+	return (pte_flags(pte) & _PAGE_SPECIAL) &&
+		(pte_flags(pte) & (_PAGE_PRESENT|_PAGE_PROTNONE));
 }
 
 static inline unsigned long pte_pfn(pte_t pte)
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -751,7 +751,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long pfn = pte_pfn(pte);
 
 	if (HAVE_PTE_SPECIAL) {
-		if (likely(!pte_special(pte) || pte_numa(pte)))
+		if (likely(!pte_special(pte)))
 			goto check_pfn;
 		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
@@ -777,15 +777,14 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		}
 	}
 
-	if (is_zero_pfn(pfn))
-		return NULL;
-
 check_pfn:
 	if (unlikely(pfn > highest_memmap_pfn)) {
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
 	}
+	if (is_zero_pfn(pfn))
+		return NULL;
 
 	/*
 	 * NOTE! We still have PageReserved() pages in the page tables.
 	 * eg. VDSO mappings can cause them to exist.