Commit 5f0af70a authored by David Sterba, committed by Linus Torvalds

mm: remove call to find_vma in pagewalk for non-hugetlbfs

Commit d33b9f45 ("mm: hugetlb: fix hugepage memory leak in
walk_page_range()") introduced a check whether a vma is a hugetlbfs one,
and commit 5dc37642 ("mm hugetlb: add hugepage support to pagemap") later
moved that check under #ifdef CONFIG_HUGETLB_PAGE, but a needless
find_vma() call was left behind and its result is not used anywhere else
in the function.

The side effect of caching the vma for @addr inside walk->mm is used
neither by walk_page_range() itself nor by the functions it calls.
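
For context, the side effect mentioned above comes from find_vma() remembering the vma it found in mm->mmap_cache, as the kernel of that era did. Below is a minimal user-space sketch of that caching pattern, using simplified stand-in types and a linked list instead of the kernel's rbtree; it is an illustration of the idea, not the kernel code.

#include <stdio.h>

/* Simplified stand-ins for the kernel's vma/mm structures (illustration only). */
struct vm_area_struct {
	unsigned long vm_start;
	unsigned long vm_end;
	struct vm_area_struct *vm_next;	/* sorted, singly linked list of vmas */
};

struct mm_struct {
	struct vm_area_struct *mmap;		/* head of the vma list */
	struct vm_area_struct *mmap_cache;	/* last vma returned by find_vma() */
};

/*
 * Toy find_vma(): return the first vma with vm_end > addr, checking the
 * per-mm cache first and refreshing it on a successful lookup.  Updating
 * mm->mmap_cache is the side effect the log message refers to; if no
 * later code reads the cache, dropping the call changes nothing observable.
 */
static struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma = mm->mmap_cache;

	if (vma && vma->vm_start <= addr && addr < vma->vm_end)
		return vma;			/* cache hit */

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (addr < vma->vm_end) {
			mm->mmap_cache = vma;	/* the cached side effect */
			return vma;
		}
	}
	return NULL;
}

int main(void)
{
	struct vm_area_struct hi = { 0x3000, 0x4000, NULL };
	struct vm_area_struct lo = { 0x1000, 0x2000, &hi };
	struct mm_struct mm = { &lo, NULL };

	struct vm_area_struct *vma = find_vma(&mm, 0x3800);

	printf("found [%#lx, %#lx), cache now %p\n",
	       vma->vm_start, vma->vm_end, (void *)mm.mmap_cache);
	return 0;
}

Because walk_page_range() and the entry callbacks it invokes never consult this cache, removing the unconditional find_vma() call has no observable effect for configurations without CONFIG_HUGETLB_PAGE.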
Signed-off-by: David Sterba <dsterba@suse.cz>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Matt Mackall <mpm@selenic.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e9959f0f
@@ -139,7 +139,6 @@ int walk_page_range(unsigned long addr, unsigned long end,
 	pgd_t *pgd;
 	unsigned long next;
 	int err = 0;
-	struct vm_area_struct *vma;
 
 	if (addr >= end)
 		return err;
@@ -149,15 +148,17 @@ int walk_page_range(unsigned long addr, unsigned long end,
 	pgd = pgd_offset(walk->mm, addr);
 	do {
+		struct vm_area_struct *uninitialized_var(vma);
 		next = pgd_addr_end(addr, end);
+#ifdef CONFIG_HUGETLB_PAGE
 		/*
 		 * handle hugetlb vma individually because pagetable walk for
 		 * the hugetlb page is dependent on the architecture and
 		 * we can't handled it in the same manner as non-huge pages.
 		 */
 		vma = find_vma(walk->mm, addr);
-#ifdef CONFIG_HUGETLB_PAGE
 		if (vma && is_vm_hugetlb_page(vma)) {
			if (vma->vm_end < next)
				next = vma->vm_end;
...
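
As the log notes, the vma returned by find_vma() is not consumed anywhere else in walk_page_range(). For reference, a typical caller of that era only supplied an mm and per-level callbacks through struct mm_walk. The following is a hypothetical kernel-context sketch, assuming the 2.6.33-era struct mm_walk and walk_page_range() interface; count_present() and present_pages() are invented names for illustration.

#include <linux/mm.h>

/* Hypothetical pte-level callback: count present ptes in the walked range. */
static int count_present(pte_t *pte, unsigned long addr,
			 unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte))
		(*count)++;
	return 0;
}

/* Hypothetical helper: walk [start, end) of @mm and count present pages. */
static unsigned long present_pages(struct mm_struct *mm,
				   unsigned long start, unsigned long end)
{
	unsigned long count = 0;
	struct mm_walk walk = {
		.pte_entry	= count_present,
		.mm		= mm,
		.private	= &count,
	};

	walk_page_range(start, end, &walk);
	return count;
}

Nothing in this interface reads the vma cached by find_vma(), which is why the unconditional call could be dropped and the lookup confined to the CONFIG_HUGETLB_PAGE path.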