Commit 11de9927 authored by Mel Gorman, committed by Linus Torvalds

mm: numa: add migrated transhuge pages to LRU the same way as base pages

Migration of misplaced transhuge pages uses page_add_new_anon_rmap() when
putting the page back, as it avoids an atomic operation and adds the new
page to the correct LRU.  A side-effect is that the page gets marked
active as part of the migration, meaning that transhuge pages are treated
differently from an aging perspective than base pages undergoing
migration.

This patch uses page_add_anon_rmap() and putback_lru_page() on completion
of a transhuge migration, similarly to base page migration.  It would
require fewer atomic operations to use lru_cache_add without taking an
additional reference to the page.  The downside would be that it is still
different from base page migration, and unevictable pages could be added
to the wrong LRU, to be cleaned up later.  Testing of the usual workloads
did not show any adverse impact from the change.
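The reason for the extra get_page() in the diff below is that putback_lru_page() consumes a page reference when it places the page on the LRU, so the caller must take an "isolate" reference first to keep its own pin on the page alive. A simplified sketch of that completion path (kernel context, illustrative only, not runnable standalone):

```c
/* Sketch of the post-migration completion path (illustrative only).
 * putback_lru_page() drops one page reference when it adds the page
 * to the appropriate LRU, so an extra reference is taken first to
 * avoid freeing new_page out from under the caller.
 */
get_page(new_page);          /* "isolate" reference for putback        */
putback_lru_page(new_page);  /* add to LRU; consumes that reference    */

unlock_page(new_page);
unlock_page(page);
put_page(page);              /* drop the old page's rmap reference     */
```

This mirrors what base page migration already does, which is the point of the patch: both page sizes now go through the same LRU-addition path.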
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent bd673145
@@ -1852,7 +1852,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	 * guarantee the copy is visible before the pagetable update.
 	 */
 	flush_cache_range(vma, mmun_start, mmun_end);
-	page_add_new_anon_rmap(new_page, vma, mmun_start);
+	page_add_anon_rmap(new_page, vma, mmun_start);
 	pmdp_clear_flush(vma, mmun_start, pmd);
 	set_pmd_at(mm, mmun_start, pmd, entry);
 	flush_tlb_range(vma, mmun_start, mmun_end);
@@ -1877,6 +1877,10 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 
+	/* Take an "isolate" reference and put new page on the LRU. */
+	get_page(new_page);
+	putback_lru_page(new_page);
+
 	unlock_page(new_page);
 	unlock_page(page);
 	put_page(page);			/* Drop the rmap reference */