Commit e693de18 authored by Anshuman Khandual, committed by Linus Torvalds

mm/hugetlb: enable arch specific huge page size support for migration

Architectures like arm64 have HugeTLB page sizes which are different from
the generic sizes at PMD, PUD and PGD level and are implemented via
contiguous bits.  At present these special-size HugeTLB pages cannot be
identified through macros like (PMD|PUD|PGDIR)_SHIFT and hence are not
chosen for migration.

Enabling migration support for these special HugeTLB page sizes along
with the generic ones (PMD|PUD|PGD) would require identifying all of
them on a given platform.  A platform specific hook can precisely
enumerate all huge page sizes supported for migration.  Instead of
comparing against standard huge page orders, let
hugepage_migration_supported() call a platform hook,
arch_hugetlb_migration_supported().  The default definition of the
platform hook maintains the existing semantics and checks the standard
huge page orders, but an architecture can choose to override the default
and provide support for a more comprehensive set of huge page sizes.

Link: http://lkml.kernel.org/r/1545121450-1663-4-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9b553bf5
@@ -493,18 +493,29 @@ static inline pgoff_t basepage_index(struct page *page)
 extern int dissolve_free_huge_page(struct page *page);
 extern int dissolve_free_huge_pages(unsigned long start_pfn,
 				    unsigned long end_pfn);
-static inline bool hugepage_migration_supported(struct hstate *h)
-{
+
 #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+#ifndef arch_hugetlb_migration_supported
+static inline bool arch_hugetlb_migration_supported(struct hstate *h)
+{
 	if ((huge_page_shift(h) == PMD_SHIFT) ||
 		(huge_page_shift(h) == PUD_SHIFT) ||
 			(huge_page_shift(h) == PGDIR_SHIFT))
 		return true;
 	else
 		return false;
+}
+#endif
 #else
+static inline bool arch_hugetlb_migration_supported(struct hstate *h)
+{
 	return false;
-#endif
 }
+#endif
+
+static inline bool hugepage_migration_supported(struct hstate *h)
+{
+	return arch_hugetlb_migration_supported(h);
+}
 
 /*