Commit eb372d22 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] hugepage pagetable freeing fix

From: "Seth, Rohit" <rohit.seth@intel.com>

We recently uncovered a bug in mm/mmap.c on IA-64.  While unmapping an address
space, unmap_region calls free_pgtables to possibly free the pages that are
used for page tables.  Currently no distinction is made between freeing a
region that is mapped by normal pages and one that is mapped by hugepages.
Architecture-specific code needs to handle the case where PTEs corresponding
to a region mapped by hugepages are properly unmapped.  Attached is a patch
that makes the required changes in the generic part of the kernel.  We will
need to send a separate IA-64 patch to use these new semantics.  For now, so
as not to disturb PPC (the only arch that has ARCH_HAS_HUGEPAGE_ONLY_RANGE
defined), we map the definition of the new function hugetlb_free_pgtables
back to free_pgtables.
parent 17095c07
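As a rough illustration of the hook the message describes (not part of this
commit; the region bounds HPAGE_REGION_BASE/HPAGE_REGION_END below are
placeholder names), an architecture opting into the new semantics might
define something like:

#define ARCH_HAS_HUGEPAGE_ONLY_RANGE

/* True if [addr, addr+len) lies entirely inside the hugepage-only region;
 * HPAGE_REGION_BASE and HPAGE_REGION_END are hypothetical arch constants. */
#define is_hugepage_only_range(addr, len) \
	((addr) >= HPAGE_REGION_BASE && (addr) + (len) <= HPAGE_REGION_END)

/* Free the page tables backing a hugepage-only range.  An arch that keeps
 * hugepage PTEs in the ordinary page tables (as ppc64 does today) can simply
 * map this back to free_pgtables, as the first hunk below does. */
#define hugetlb_free_pgtables(tlb, prev, start, end) \
	free_pgtables(tlb, prev, start, end)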
@@ -41,6 +41,7 @@
 	( ((addr > (TASK_HPAGE_BASE-len)) && (addr < TASK_HPAGE_END)) || \
 	  ((current->mm->context & CONTEXT_LOW_HPAGES) && \
 	   (addr > (TASK_HPAGE_BASE_32-len)) && (addr < TASK_HPAGE_END_32)) )
+#define hugetlb_free_pgtables free_pgtables
 #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
 #define in_hugepage_area(context, addr) \
@@ -39,6 +39,7 @@ mark_mm_hugetlb(struct mm_struct *mm, struct vm_area_struct *vma)
 #ifndef ARCH_HAS_HUGEPAGE_ONLY_RANGE
 #define is_hugepage_only_range(addr, len) 0
+#define hugetlb_free_pgtables(tlb, prev, start, end) do { } while (0)
 #endif
 #else /* !CONFIG_HUGETLB_PAGE */
@@ -63,6 +64,7 @@ static inline int is_vm_hugetlb_page(struct vm_area_struct *vma)
 #define is_aligned_hugepage_range(addr, len) 0
 #define pmd_huge(x) 0
 #define is_hugepage_only_range(addr, len) 0
+#define hugetlb_free_pgtables(tlb, prev, start, end) do { } while (0)
 #ifndef HPAGE_MASK
 #define HPAGE_MASK 0 /* Keep the compiler happy */
@@ -1138,6 +1138,10 @@ static void unmap_region(struct mm_struct *mm,
 	tlb = tlb_gather_mmu(mm, 0);
 	unmap_vmas(&tlb, mm, vma, start, end, &nr_accounted);
 	vm_unacct_memory(nr_accounted);
+	if (is_hugepage_only_range(start, end - start))
+		hugetlb_free_pgtables(tlb, prev, start, end);
+	else
+		free_pgtables(tlb, prev, start, end);
 	tlb_finish_mmu(tlb, start, end);
 }
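On architectures that leave ARCH_HAS_HUGEPAGE_ONLY_RANGE undefined, the stub
macros added above make this branch vanish; roughly, the preprocessor reduces
the new code to the old behaviour (a sketch of the expansion, not actual
compiler output):

	/* is_hugepage_only_range(start, end - start) expands to 0 and
	 * hugetlb_free_pgtables() to an empty statement, so only the
	 * original free_pgtables() call survives: */
	if (0)
		do { } while (0);
	else
		free_pgtables(tlb, prev, start, end);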