Commit 1c563432 authored by Minchan Kim, committed by akpm

mm: fix is_pinnable_page against a cma page

Pages in the CMA area can have MIGRATE_ISOLATE as well as MIGRATE_CMA, so
the current is_pinnable_page() can miss CMA pages that have
MIGRATE_ISOLATE.  The pin_user_pages() API then ends up taking a long-term
pin on those CMA pages, so CMA allocations keep failing until the pin is
released.

     CPU 0                                   CPU 1 - Task B

cma_alloc
alloc_contig_range
                                        pin_user_pages_fast(FOLL_LONGTERM)
change pageblock as MIGRATE_ISOLATE
                                        internal_get_user_pages_fast
                                        lockless_pages_from_mm
                                        gup_pte_range
                                        try_grab_folio
                                        is_pinnable_page
                                          return true;
                                        So, pinned the page successfully.
page migration failure with pinned page
                                        ..
                                        .. After 30 sec
                                        unpin_user_page(page)

CMA allocation succeeded after 30 sec.
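For illustration, a minimal sketch of the CPU 1 side of the trace above,
as a driver might issue it; longterm_pin_demo() is a hypothetical name,
while pin_user_pages_fast(), FOLL_LONGTERM and unpin_user_page() are the
actual GUP entry points involved:

	#include <linux/mm.h>

	static int longterm_pin_demo(unsigned long uaddr)
	{
		struct page *page;
		int ret;

		/* FOLL_LONGTERM tells GUP the pin may be held indefinitely,
		 * so placement matters: CMA/movable pages must be refused or
		 * migrated away before pinning. */
		ret = pin_user_pages_fast(uaddr, 1,
					  FOLL_WRITE | FOLL_LONGTERM, &page);
		if (ret != 1)
			return ret < 0 ? ret : -EFAULT;

		/* ... long-lived use of the page, e.g. as a DMA buffer ... */

		unpin_user_page(page);	/* until here, the CMA range is stuck */
		return 0;
	}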

The CMA allocation path protects against this migration-type change race
using zone->lock, but all the GUP path needs to know is whether the page is
in the CMA area at all, not its exact migration type.  A pageblock in the
CMA area only ever carries MIGRATE_CMA or MIGRATE_ISOLATE, so we do not
need zone->lock; it is enough to check whether the migration type is either
MIGRATE_ISOLATE or MIGRATE_CMA.

Adding the MIGRATE_ISOLATE check to is_pinnable_page() can reject pinning
pages on MIGRATE_ISOLATE pageblocks even when they belong to neither the
CMA area nor a movable zone, since such a page is only temporarily
unmovable.  However, migration failure caused by an unexpected transient
refcount is a general problem that does not come only from MIGRATE_ISOLATE,
and MIGRATE_ISOLATE is just as transient a state as those other temporarily
elevated refcounts.

Link: https://lkml.kernel.org/r/20220524171525.976723-1-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent ba6851b4
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1594,8 +1594,13 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 #ifdef CONFIG_MIGRATION
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
-		is_zero_pfn(page_to_pfn(page));
+#ifdef CONFIG_CMA
+	int mt = get_pageblock_migratetype(page);
+
+	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
+		return false;
+#endif
+	return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)
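For reference, the consumer of this check in the trace above is
try_grab_folio() in mm/gup.c.  Paraphrased (not verbatim, and not part of
this diff), the lockless fast path simply refuses the pin so the slow path
can migrate the page out of the CMA area before pinning:

	/* Paraphrase of the mm/gup.c fast-path bail-out, not this patch: */
	if (unlikely((flags & FOLL_LONGTERM) && !is_pinnable_page(page)))
		return NULL;	/* caller falls back to the migrating slow path */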
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -482,8 +482,12 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
 	bitidx = pfn_to_bitidx(page, pfn);
 	word_bitidx = bitidx / BITS_PER_LONG;
 	bitidx &= (BITS_PER_LONG-1);
 
-	word = bitmap[word_bitidx];
+	/*
+	 * This races, without locks, with set_pfnblock_flags_mask(). Ensure
+	 * a consistent read of the memory array, so that results, even though
+	 * racy, are not corrupted.
+	 */
+	word = READ_ONCE(bitmap[word_bitidx]);
 	return (word >> bitidx) & mask;
 }
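To make the pairing in the new comment concrete, a simplified sketch
follows; demo_word, demo_set_flags() and demo_get_flags() are hypothetical
stand-ins for bitmap[word_bitidx], set_pfnblock_flags_mask() and
__get_pfnblock_flags_mask():

	static unsigned long demo_word;	/* stands in for bitmap[word_bitidx] */

	/* Writer side: updates the word atomically with a cmpxchg() retry
	 * loop, as set_pfnblock_flags_mask() does. */
	static void demo_set_flags(unsigned long flags, unsigned long mask)
	{
		unsigned long old, word = READ_ONCE(demo_word);

		for (;;) {
			old = cmpxchg(&demo_word, word, (word & ~mask) | flags);
			if (old == word)
				break;
			word = old;
		}
	}

	/* Reader side: a plain load could be torn or re-issued by the
	 * compiler while the writer's cmpxchg() lands; READ_ONCE() forces a
	 * single access, so the reader sees either the old word or the new
	 * one, never a mix of both. */
	static unsigned long demo_get_flags(unsigned long bitidx,
					    unsigned long mask)
	{
		return (READ_ONCE(demo_word) >> bitidx) & mask;
	}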