Commit f003c03b authored by Hugh Dickins, committed by Linus Torvalds

mm: page_vma_mapped_walk(): use page for pvmw->page

Patch series "mm: page_vma_mapped_walk() cleanup and THP fixes".

I've marked all of these for stable: many are merely cleanups, but I
think they are much better before the main fix than after.

This patch (of 11):

page_vma_mapped_walk() cleanup: sometimes the local copy of pvmw->page
was used, sometimes pvmw->page itself: use the local copy "page"
throughout.
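
For readers outside mm/, here is a minimal userspace sketch of the idiom
being applied: take a local copy of the struct field once at the top of
the function, then use it for every later test.  The struct and function
names below are invented for illustration only; they do not match the
kernel's real struct page or page_vma_mapped_walk() internals.

	#include <stdio.h>

	/*
	 * Invented stand-ins, for illustration only: not the real
	 * struct page or struct page_vma_mapped_walk layouts.
	 */
	struct page {
		unsigned long flags;
	};

	struct walk_state {
		struct page *page;
		unsigned long address;
	};

	static int walk(struct walk_state *pvmw)
	{
		struct page *page = pvmw->page;	/* local copy, taken once */

		if (!page)		/* every check below uses "page", */
			return 0;	/* never pvmw->page again */
		printf("page %p at address %#lx\n",
		       (void *)page, pvmw->address);
		return 1;
	}

	int main(void)
	{
		struct page p = { .flags = 0 };
		struct walk_state pvmw = { .page = &p, .address = 0x1000 };

		return walk(&pvmw) ? 0 : 1;
	}

Mixing the two spellings is harmless while pvmw->page never changes, but
a single local copy reads consistently and gives the later patches in
this series one obvious name to work with.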

Link: https://lkml.kernel.org/r/589b358c-febc-c88e-d4c2-7834b37fa7bf@google.com
Link: https://lkml.kernel.org/r/88e67645-f467-c279-bf5e-af4b5c6b13eb@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Wang Yugui <wangyugui@e16-tech.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Will Deacon <will@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4a09d388
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -156,7 +156,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pte)
 		goto next_pte;
 
-	if (unlikely(PageHuge(pvmw->page))) {
+	if (unlikely(PageHuge(page))) {
 		/* when pud is not present, pte will be NULL */
 		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
 		if (!pvmw->pte)
@@ -217,8 +217,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			 * cannot return prematurely, while zap_huge_pmd() has
 			 * cleared *pmd but not decremented compound_mapcount().
 			 */
-			if ((pvmw->flags & PVMW_SYNC) &&
-			    PageTransCompound(pvmw->page)) {
+			if ((pvmw->flags & PVMW_SYNC) && PageTransCompound(page)) {
 				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
 
 				spin_unlock(ptl);
@@ -234,9 +233,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		return true;
 next_pte:
 		/* Seek to next pte only makes sense for THP */
-		if (!PageTransHuge(pvmw->page) || PageHuge(pvmw->page))
+		if (!PageTransHuge(page) || PageHuge(page))
 			return not_found(pvmw);
-		end = vma_address_end(pvmw->page, pvmw->vma);
+		end = vma_address_end(page, pvmw->vma);
 		do {
 			pvmw->address += PAGE_SIZE;
 			if (pvmw->address >= end)