    mm/mmu_gather: add __tlb_remove_folio_pages() · d7f861b9
    David Hildenbrand authored
    Add __tlb_remove_folio_pages(), which will remove multiple consecutive
    pages that belong to the same large folio, instead of only a single page. 
    We'll be using this function when optimizing unmapping/zapping of large
    folios that are mapped by PTEs.
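    
    For orientation, a rough sketch of the shape of the new entry point
    (parameter names and ordering are illustrative, not a verbatim copy of
    the header):
    
        /*
         * Sketch only: like __tlb_remove_page(), but queues nr_pages
         * consecutive pages of the same large folio in one call.  In this
         * sketch, a true return value means the gather batch is full and
         * the caller has to flush.
         */
        bool __tlb_remove_folio_pages(struct mmu_gather *tlb,
                                      struct page *page,
                                      unsigned int nr_pages,
                                      bool delay_rmap);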
    
    We use the remaining spare bit in an encoded_page to indicate that the
    next encoded page in the array actually contains the shifted "nr_pages"
    value.  Teach the swap/freeing code about putting multiple folio
    references, and the delayed rmap handling about removing page ranges of
    a folio.
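    
    As a mental model of the encoding (purely illustrative userspace C; the
    kernel's actual encoded_page helpers, flag bits, and names differ):
    page pointers are sufficiently aligned, so a spare low bit can signal
    that the following array slot is not a pointer at all but a shifted
    page count:
    
        #include <stdint.h>
        #include <stdio.h>
    
        /* Toy flag: "the next slot holds a shifted nr_pages, not a pointer". */
        #define NEXT_IS_NR_PAGES 0x1UL
    
        struct page { long _pad; };     /* stand-in; sufficiently aligned */
    
        static uintptr_t encode_ptr(struct page *page, int nr_follows)
        {
                return (uintptr_t)page | (nr_follows ? NEXT_IS_NR_PAGES : 0);
        }
    
        /* Store nr_pages shifted, so the flag bit of that slot stays clear. */
        static uintptr_t encode_nr(unsigned int nr_pages)
        {
                return (uintptr_t)nr_pages << 1;
        }
    
        static void consume(const uintptr_t *slots, unsigned int n)
        {
                for (unsigned int i = 0; i < n; i++) {
                        struct page *page =
                                (struct page *)(slots[i] & ~NEXT_IS_NR_PAGES);
                        unsigned int nr = 1;
    
                        if (slots[i] & NEXT_IS_NR_PAGES)
                                nr = (unsigned int)(slots[++i] >> 1);
                        printf("put %u ref(s), free %u page(s) at %p\n",
                               nr, nr, (void *)page);
                }
        }
    
        int main(void)
        {
                static struct page pages[8];
                uintptr_t batch[4];
                unsigned int n = 0;
    
                /* A page of a small folio costs one slot... */
                batch[n++] = encode_ptr(&pages[0], 0);
                /* ...a run of 4 pages of one large folio costs two slots. */
                batch[n++] = encode_ptr(&pages[1], 1);
                batch[n++] = encode_nr(4);
    
                consume(batch, n);
                return 0;
        }
    
    The consumer side is why references can be put in bulk: a flagged entry
    plus its follow-up slot describe one folio fragment, so the freeing code
    can drop all nr references on that folio at once.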
    
    This extension still allows us to gather almost as many small folios as
    before (one fewer, because we have to leave room for a possibly bigger
    next entry), while also allowing us to gather consecutive pages that
    belong to the same large folio.
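    
    The "one fewer" comes from the batch-full check: because any entry may
    need a second slot for its "nr_pages", a batch has to be treated as full
    one slot early.  Conceptually (not the literal mmu_gather code; field
    names are for illustration only):
    
        /* Keep one spare slot free for a possible "nr_pages" follow-up. */
        if (batch->nr >= batch->max - 1) {
                /* open a new batch, or ask the caller to flush */
        }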
    
    Note that we don't pass the folio pointer, because it is not required
    for now.  Further, we don't support page_size != PAGE_SIZE; it won't be
    required for simple PTE batching.
    
    We have to provide a separate s390 implementation, but it's fairly
    straightforward.
    
    Another, more invasive and likely more expensive, approach would be to use
    folio+range or a PFN range instead of page+nr_pages.  But, we should do
    that consistently for the whole mmu_gather.  For now, let's keep it simple
    and add "nr_pages" only.
    
    Note that it is now possible to gather significantly more pages: in the
    past, we were able to gather ~10000 pages; now we can also gather ~5000
    folio fragments that span multiple pages.  A folio fragment on x86-64
    can span up to 512 pages (2 MiB THP) and, on arm64 with 64k base pages,
    in theory up to 8192 pages (512 MiB THP).  Gathering more memory is not
    considered something we should worry about, especially because these
    are already corner cases.
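    For a rough sense of scale: ~10000 gathered 4 KiB pages correspond to
    about 40 MiB of deferred memory, whereas ~5000 fragments of 2 MiB THPs
    on x86-64 could, in theory, defer around 10 GiB.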
    
    While we can gather more total memory, we won't free more folio
    fragments.  As long as page freeing time primarily depends on the
    number of involved folios, there is no effective change for !preempt
    configurations.  However, we'll adjust tlb_batch_pages_flush()
    separately to handle corner cases where page freeing time grows
    proportionally with the actual memory size.
    
    Link: https://lkml.kernel.org/r/20240214204435.167852-9-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
    Cc: Alexander Gordeev <agordeev@linux.ibm.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Sven Schnelle <svens@linux.ibm.com>
    Cc: Vasily Gorbik <gor@linux.ibm.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Yin Fengwei <fengwei.yin@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>