Commit 67e4eb07 authored by Yang Shi, committed by Linus Torvalds

mm: thp: don't need to drain lru cache when splitting and mlocking THP

Since commit 8f182270 ("mm/swap.c: flush lru pvecs on compound page
arrival"), a THP can no longer stay in a pagevec.  So the optimization
made by commit d9654322 ("thp: increase split_huge_page() success
rate"), which tried to unpin munlocked THPs by draining the pagevec,
no longer makes sense.
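
To illustrate why (a sketch of the mm/swap.c logic as it looked after
8f182270, not necessarily the exact current code): the per-CPU pagevec
is flushed as soon as a compound page arrives, so a THP never lingers
there holding an extra pin.

	static void __lru_cache_add(struct page *page)
	{
		struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

		get_page(page);
		/*
		 * Flush immediately if the pagevec is full *or* the
		 * page is compound: a THP therefore never sits in a
		 * per-CPU pagevec, and draining cannot unpin anything.
		 */
		if (!pagevec_add(pvec, page) || PageCompound(page))
			__pagevec_lru_add(pvec);
		put_cpu_var(lru_add_pvec);
	}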

Draining the lru cache before isolating a THP in the mlock path is
also unnecessary.  Commit b676b293 ("mm, thp: fix mapped pages
avoiding unevictable list on mlock") added it, and commit 9a73f61b
("thp, mlock: do not mlock PTE-mapped file huge pages") accidentally
carried it over after the above optimization went in.
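
For context on why the drain was ever needed: each pagevec entry holds
a page reference, and the split path only succeeds if it can freeze the
compound page's refcount down to the pins it can account for.  A
simplified sketch of that check (not the exact upstream code):

	/*
	 * A page parked in a per-CPU pagevec contributes one extra
	 * reference, so page_ref_freeze() would fail and the split
	 * would be aborted -- hence the old lru_add_drain() call.
	 * With compound pages flushed on arrival, that cannot happen.
	 */
	if (!page_ref_freeze(head, 1 + extra_pins)) {
		ret = -EBUSY;
		goto fail;
	}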
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Link: http://lkml.kernel.org/r/1585946493-7531-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 88590253
mm/huge_memory.c

@@ -1378,7 +1378,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		goto skip_mlock;
 	if (!trylock_page(page))
 		goto skip_mlock;
-	lru_add_drain();
 	if (page->mapping && !PageDoubleMap(page))
 		mlock_vma_page(page);
 	unlock_page(page);
@@ -2582,7 +2581,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
@@ -2641,14 +2639,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}

-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);

-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);