Commit 2af8ff29 authored by Hugh Dickins, committed by Linus Torvalds

mm/khugepaged: collapse_shmem() remember to clear holes

Huge tmpfs testing reminds us that there is no __GFP_ZERO in the gfp
flags khugepaged uses to allocate a huge page - in all common cases it
would just be a waste of effort - so collapse_shmem() must remember to
clear out any holes that it instantiates.
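
As a userspace analogy only (not the kernel allocation path; the sizes and the
hole position below are made up for illustration), memory that is not requested
pre-zeroed keeps whatever happened to be there before, which is why each hole
subpage has to be zeroed by hand before the huge page is exposed:

#include <stdlib.h>
#include <string.h>

#define SUBPAGE_SIZE 4096	/* stands in for PAGE_SIZE */
#define NR_SUBPAGES 512		/* stands in for HPAGE_PMD_NR */

int main(void)
{
	/* Like a huge page allocated without __GFP_ZERO: contents undefined. */
	unsigned char *huge = malloc((size_t)NR_SUBPAGES * SUBPAGE_SIZE);
	if (!huge)
		return 1;

	/*
	 * A slot with no existing page to copy from (a "hole") keeps stale
	 * data unless it is cleared explicitly before anyone can read it.
	 */
	size_t hole_index = 3;	/* made-up hole position */
	memset(huge + hole_index * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);

	free(huge);
	return 0;
}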

The obvious place to do so, where the holes are put into the page cache tree,
is not a good choice, because interrupts are disabled there.  Leave it
until further down, once success is assured, where the other pages are
copied (before setting PageUptodate).
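
A minimal userspace sketch of that ordering (not the kernel code): memcpy() and
memset() stand in for copy_highpage() and clear_highpage(), the assemble()
helper, NR_SUBPAGES, SUBPAGE_SIZE, and the present[] layout are all made up for
illustration, and the caller would only mark the result usable (the
PageUptodate analogue) after assemble() returns.

#include <stdlib.h>
#include <string.h>

#define NR_SUBPAGES 8		/* scaled-down stand-in for HPAGE_PMD_NR */
#define SUBPAGE_SIZE 64		/* scaled-down stand-in for PAGE_SIZE */

/*
 * Copy the pages that exist over the index range [start, end), clearing
 * every hole between and after them; only then would the caller mark the
 * assembled huge page usable.
 */
static void assemble(unsigned char *huge, size_t start, size_t end,
		     const size_t *present, const unsigned char *src[],
		     size_t nr_present)
{
	size_t index = start;

	for (size_t i = 0; i < nr_present; i++) {
		/* Clear any holes preceding this existing page. */
		while (index < present[i]) {
			memset(huge + (index % NR_SUBPAGES) * SUBPAGE_SIZE,
			       0, SUBPAGE_SIZE);
			index++;
		}
		/* Copy the existing page's contents into its slot. */
		memcpy(huge + (present[i] % NR_SUBPAGES) * SUBPAGE_SIZE,
		       src[i], SUBPAGE_SIZE);
		index++;
	}
	/* Clear any trailing holes up to the end of the range. */
	while (index < end) {
		memset(huge + (index % NR_SUBPAGES) * SUBPAGE_SIZE,
		       0, SUBPAGE_SIZE);
		index++;
	}
}

int main(void)
{
	unsigned char *huge = malloc((size_t)NR_SUBPAGES * SUBPAGE_SIZE);
	unsigned char page_a[SUBPAGE_SIZE], page_b[SUBPAGE_SIZE];
	const size_t start = 16, end = start + NR_SUBPAGES;	/* made-up range */
	const size_t present[] = { 18, 21 };	/* made-up offsets of existing pages */
	const unsigned char *src[] = { page_a, page_b };

	if (!huge)
		return 1;
	memset(page_a, 'A', sizeof(page_a));
	memset(page_b, 'B', sizeof(page_b));

	assemble(huge, start, end, present, src, 2);

	free(huge);
	return 0;
}

Interleaving the clearing with the copy, as the diff below does, keeps it out
of the irq-disabled region where the pages were inserted into the page cache
tree, and it happens only once success is assured.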

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261525080.2275@eggly.anvils
Fixes: f3f0e1d2 ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org>	[4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent aaa52e34
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1467,7 +1467,12 @@ static void collapse_shmem(struct mm_struct *mm,
 		 * Replacing old pages with new one has succeeded, now we
 		 * need to copy the content and free the old pages.
 		 */
+		index = start;
 		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+			while (index < page->index) {
+				clear_highpage(new_page + (index % HPAGE_PMD_NR));
+				index++;
+			}
 			copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
 					page);
 			list_del(&page->lru);
@@ -1477,6 +1482,11 @@ static void collapse_shmem(struct mm_struct *mm,
 			ClearPageActive(page);
 			ClearPageUnevictable(page);
 			put_page(page);
+			index++;
 		}
+		while (index < end) {
+			clear_highpage(new_page + (index % HPAGE_PMD_NR));
+			index++;
+		}
 
 		local_irq_disable();