Commit 85207ad8 authored by Hugh Dickins, committed by Linus Torvalds

mm: filemap_unaccount_folio() large skip mapcount fixup

The page_mapcount_reset() when folio_mapped() while mapping_exiting() was
devised long before there were huge or compound pages in the cache.  It is
still valid for small pages, but not at all clear what's right to check
and reset on large pages.  Just don't try when folio_test_large().

Link: https://lkml.kernel.org/r/879c4426-4122-da9c-1a86-697f2c9a083@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent bb43b14b
@@ -152,27 +152,27 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
 	if (!IS_ENABLED(CONFIG_DEBUG_VM) && unlikely(folio_mapped(folio))) {
-		int mapcount;
-
 		pr_alert("BUG: Bad page cache in process %s  pfn:%05lx\n",
 			 current->comm, folio_pfn(folio));
 		dump_page(&folio->page, "still mapped when deleted");
 		dump_stack();
 		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
-		mapcount = page_mapcount(&folio->page);
-		if (mapping_exiting(mapping) &&
-		    folio_ref_count(folio) >= mapcount + 2) {
-			/*
-			 * All vmas have already been torn down, so it's
-			 * a good bet that actually the folio is unmapped,
-			 * and we'd prefer not to leak it: if we're wrong,
-			 * some other bad page check should catch it later.
-			 */
-			page_mapcount_reset(&folio->page);
-			folio_ref_sub(folio, mapcount);
+		if (mapping_exiting(mapping) && !folio_test_large(folio)) {
+			int mapcount = page_mapcount(&folio->page);
+
+			if (folio_ref_count(folio) >= mapcount + 2) {
+				/*
+				 * All vmas have already been torn down, so it's
+				 * a good bet that actually the page is unmapped
+				 * and we'd rather not leak it: if we're wrong,
+				 * another bad page check should catch it later.
+				 */
+				page_mapcount_reset(&folio->page);
+				folio_ref_sub(folio, mapcount);
+			}
 		}
 	}
 
 	/* hugetlb folios do not participate in page cache accounting. */
 	if (folio_test_hugetlb(folio))