    mm/swap: remember PG_anon_exclusive via a swp pte bit · 1493a191
    David Hildenbrand authored
    Patch series "mm: COW fixes part 3: reliable GUP R/W FOLL_GET of anonymous pages", v2.
    
    This series fixes memory corruptions that occur when a GUP R/W reference
    (FOLL_WRITE | FOLL_GET) is taken on an anonymous page and COW logic fails
    to detect exclusivity of the page, then replaces the anonymous page by a
    copy in the page tables: the GUP reference loses synchronicity with the
    pages mapped into the page tables.  This series focuses on x86, arm64, s390x and
    ppc64/book3s -- other architectures are fairly easy to support by
    implementing __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
    
    This primarily fixes the O_DIRECT memory corruptions that can happen on
    concurrent swapout, whereby we lose DMA reads to a page (modifying the
    user page by writing to it).
    
    O_DIRECT currently uses FOLL_GET for short-term (!FOLL_LONGTERM) DMA
    from/to a user page.  In the long run, we want to convert it to properly
    use FOLL_PIN, and John is working on it, but that might take a while and
    might not be easy to backport.  In the meantime, let's restore what used
    to work before we started modifying our COW logic: make R/W FOLL_GET
    references reliable as long as there is no fork() after GUP involved.
    
    This is the natural follow-up to part 2; it will also further
    reduce "wrong COW" on the swapin path, for example, when we cannot remove
    a page from the swapcache due to concurrent writeback, or if we have two
    threads faulting on the same swapped-out page.  Fixing O_DIRECT is just a
    nice side product.
    
    This issue, including other related COW issues, has been summarized in [3]
    under 2):
    "
      2. Intra Process Memory Corruptions due to Wrong COW (FOLL_GET)
    
      It was discovered that we can create a memory corruption by reading a
      file via O_DIRECT to a part (e.g., first 512 bytes) of a page,
      concurrently writing to an unrelated part (e.g., last byte) of the same
      page, and concurrently write-protecting the page via clear_refs
      SOFTDIRTY tracking [6].
    
      For the reproducer, the issue is that O_DIRECT grabs a reference of the
      target page (via FOLL_GET) and clear_refs write-protects the relevant
      page table entry. On successive write access to the page from the
      process itself, we wrongly COW the page when resolving the write fault,
      resulting in a loss of synchronicity and consequently a memory corruption.
    
      While some people might think that using clear_refs in this combination
      is a corner case, it unfortunately turns out to be a more generic problem.
    
      For example, it was just recently discovered that we can similarly
      create a memory corruption without clear_refs, simply by concurrently
      swapping out the buffer pages [7]. Note that we nowadays even use the
      swap infrastructure in Linux without an actual swap disk/partition: the
      prime example is zram which is enabled as default under Fedora [10].
    
      The root issue is that a write-fault on a page that has additional
      references results in a COW and thereby a loss of synchronicity
      and consequently a memory corruption if two parties believe they are
      referencing the same page.
    "
    
    We don't particularly care about R/O FOLL_GET references: they were never
    reliable and O_DIRECT doesn't expect to observe modifications from a page
    after DMA was started.
    
    Note that:
    * this only fixes the issue on x86, arm64, s390x and ppc64/book3s
      ("enterprise architectures"). Other architectures have to implement
      __HAVE_ARCH_PTE_SWP_EXCLUSIVE to achieve the same.
    * this does *not* consider any kind of fork() after taking the reference:
      fork() after GUP never worked reliably with FOLL_GET.
    * Not losing PG_anon_exclusive during swapout was the last remaining
      piece. KSM already makes sure that there are no other references on
      a page before considering it for sharing. Page migration maintains
      PG_anon_exclusive and simply fails when there are additional references
      (freezing the refcount fails). Only swapout code dropped the
      PG_anon_exclusive flag because it requires more work to remember +
      restore it.
    
    With this series in place, most COW issues of [3] are fixed on said
    architectures. Other architectures can implement
    __HAVE_ARCH_PTE_SWP_EXCLUSIVE fairly easily.
    
    [1] https://lkml.kernel.org/r/20220329160440.193848-1-david@redhat.com
    [2] https://lkml.kernel.org/r/20211217113049.23850-1-david@redhat.com
    [3] https://lore.kernel.org/r/3ae33b08-d9ef-f846-56fb-645e3b9b4c66@redhat.com
    
    
    This patch (of 8):
    
    Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
    it.  We do this to keep fork() logic on swap entries easy and efficient:
    for example, if we didn't clear it when unmapping, we'd have to look up
    the page in the swapcache for each and every swap entry during fork() and
    clear PG_anon_exclusive if set.
    
    Instead, we want to store that information directly in the swap pte,
    protected by the page table lock, similarly to how we handle
    SWP_MIGRATION_READ_EXCLUSIVE for migration entries.  However, for actual
    swap entries, we don't want to mess with the swap type (e.g., still one
    bit) because it overcomplicates swap code.
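    The idea of keeping the swap type and offset untouched while spending one
    spare software bit on exclusivity can be illustrated with a small
    user-space model (a sketch only -- the field widths, shift values and the
    choice of bit 1 here are hypothetical, not the kernel's actual swap pte
    encoding on any architecture):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical swap pte layout: bits 0-1 spare, then 5 type bits,
     * then the swap offset.  One spare bit remembers "anon exclusive". */
    #define SWP_TYPE_SHIFT    2
    #define SWP_TYPE_BITS     5
    #define SWP_OFFSET_SHIFT  (SWP_TYPE_SHIFT + SWP_TYPE_BITS)
    #define PTE_SWP_EXCLUSIVE (1UL << 1)   /* hypothetical spare sw bit */

    typedef uint64_t pte_t;

    static pte_t swp_entry_to_pte(unsigned type, uint64_t offset)
    {
            return ((pte_t)type << SWP_TYPE_SHIFT) |
                   (offset << SWP_OFFSET_SHIFT);
    }

    static pte_t pte_swp_mkexclusive(pte_t pte)
    {
            return pte | PTE_SWP_EXCLUSIVE;
    }

    static pte_t pte_swp_clear_exclusive(pte_t pte)
    {
            return pte & ~PTE_SWP_EXCLUSIVE;
    }

    static int pte_swp_exclusive(pte_t pte)
    {
            return !!(pte & PTE_SWP_EXCLUSIVE);
    }

    int main(void)
    {
            pte_t pte = swp_entry_to_pte(3, 0x1234);

            pte = pte_swp_mkexclusive(pte);
            assert(pte_swp_exclusive(pte));

            /* Marking exclusivity must not disturb type or offset. */
            assert(((pte >> SWP_TYPE_SHIFT) & ((1 << SWP_TYPE_BITS) - 1)) == 3);
            assert((pte >> SWP_OFFSET_SHIFT) == 0x1234);

            pte = pte_swp_clear_exclusive(pte);
            assert(!pte_swp_exclusive(pte));
            printf("ok\n");
            return 0;
    }
    ```

    Because the marker lives outside the type and offset fields, setting or
    clearing it never changes which swap slot the pte refers to.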
    
    In try_to_unmap(), we already refuse to unmap if the page might be
    pinned, because we must not lose PG_anon_exclusive on pinned pages ever.
    Reliably checking *before* completely unmapping a page whether there are
    other unexpected references is unfortunately not really possible: THPs
    heavily overcomplicate the situation.  Once fully unmapped it's easier --
    we, for example, make sure that there are no unexpected references *after*
    unmapping a page before starting writeback on that page.
    
    So, we currently might end up unmapping a page and clearing
    PG_anon_exclusive if that page has additional references, for example, due
    to a FOLL_GET.
    
    do_swap_page() has to re-determine if a page is exclusive, which will
    easily fail if there are other references on a page, most prominently GUP
    references via FOLL_GET.  This can currently result in memory corruptions
    when taking a FOLL_GET | FOLL_WRITE reference on a page even when fork()
    is never involved: try_to_unmap() will succeed, and when refaulting the
    page, it cannot be marked exclusive and will get replaced by a copy in the
    page tables on the next write access, resulting in writes via the GUP
    reference to the page being lost.
    
    In an ideal world, everybody that uses GUP and wants to modify page
    content, such as O_DIRECT, would properly use FOLL_PIN.  However, that
    conversion will take a while.  It's easier to fix what used to work in the
    past (FOLL_GET | FOLL_WRITE) remembering PG_anon_exclusive.  In addition,
    by remembering PG_anon_exclusive we can further reduce unnecessary COW in
    some cases, so it's the natural thing to do.
    
    So let's transfer the PG_anon_exclusive information to the swap pte and
    store it via an architecture-dependent pte bit; use that information when
    restoring the swap pte in do_swap_page() and unuse_pte().  During fork(),
    we simply have to clear the pte bit and are done.
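    The lifecycle described above -- set the marker at unmap time, clear it
    during fork(), consult it at refault -- can be modeled in a few lines (all
    names here are hypothetical stand-ins, not kernel APIs):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of a swap pte carrying only the exclusivity marker. */
    struct swp_pte {
            bool exclusive;
    };

    /* try_to_unmap(): transfer PG_anon_exclusive into the swap pte. */
    static void unmap_anon_page(struct swp_pte *pte, bool page_was_exclusive)
    {
            pte->exclusive = page_was_exclusive;
    }

    /* fork(): the page is now shared, so simply clear the bit. */
    static void fork_copy(struct swp_pte *child, const struct swp_pte *parent)
    {
            *child = *parent;
            child->exclusive = false;
    }

    /* do_swap_page(): only an exclusive page may be reused without COW. */
    static bool refault_can_reuse(const struct swp_pte *pte)
    {
            return pte->exclusive;
    }

    int main(void)
    {
            struct swp_pte parent = { 0 }, child = { 0 };

            unmap_anon_page(&parent, true);
            assert(refault_can_reuse(&parent));  /* no fork(): keep writable */

            fork_copy(&child, &parent);
            assert(!refault_can_reuse(&child));  /* after fork(): must COW */

            printf("ok\n");
            return 0;
    }
    ```

    Clearing the bit during fork() is what keeps fork() cheap: no swapcache
    lookup is needed per swap entry, the child simply loses exclusivity.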
    
    Of course, there is one corner case to handle: swap backends that don't
    support concurrent page modifications while the page is under writeback. 
    Special case these, and drop the exclusive marker.  Add a comment why that
    is just fine (also, reuse_swap_page() would have done the same in the
    past).
    
    In the future, we'll hopefully have all architectures support
    __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty stubs
    and the define completely.  Then, we can also convert
    SWP_MIGRATION_READ_EXCLUSIVE.  For architectures it's fairly easy to
    support: either simply use a yet unused pte bit that can be used for swap
    entries, steal one from the arch type bits if they exceed 5, or steal one
    from the offset bits.
    
    Note: R/O FOLL_GET references were never really reliable, especially when
    taking one on a shared page and then writing to the page (e.g., GUP after
    fork()).  FOLL_GET, including R/W references, were never really reliable
    once fork was involved (e.g., GUP before fork(), GUP during fork()).  KSM
    steps back in case it stumbles over unexpected references and is,
    therefore, fine.
    
    [david@redhat.com: fix SWP_STABLE_WRITES test]
      Link: https://lkml.kernel.org/r/ac725bcb-313a-4fff-250a-68ba9a8f85fb@redhat.com
    Link: https://lkml.kernel.org/r/20220329164329.208407-1-david@redhat.com
    Link: https://lkml.kernel.org/r/20220329164329.208407-2-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: John Hubbard <jhubbard@nvidia.com>
    Cc: Jason Gunthorpe <jgg@nvidia.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Mike Rapoport <rppt@linux.ibm.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Cc: Jann Horn <jannh@google.com>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Roman Gushchin <guro@fb.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Don Dutile <ddutile@redhat.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Liang Zhang <zhangliang5@huawei.com>
    Cc: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
    Cc: Oded Gabbay <oded.gabbay@gmail.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Vasily Gorbik <gor@linux.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>