    mm: introduce arch_get_unmapped_area_vmflags() · 96114870
    Rick Edgecombe authored
    When memory is being placed, mmap() will take care to respect the guard
    gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
    VM_GROWSDOWN).  In order to ensure guard gaps between mappings, mmap()
    needs to consider two things:
    
 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    fall within *its* guard gaps.
    
The longstanding behavior of mmap() is to ensure 1, but not to take any
care around 2.  So, for example, if there is a PAGE_SIZE free area, and a
shadow stack (a type of mapping that needs a guard gap) of PAGE_SIZE is
being placed, mmap() may place the shadow stack in the PAGE_SIZE free
area.  Then the mapping that is supposed to have a guard gap will not
have a gap to the adjacent VMA.
    
    In order to take the start gap into account, the maple tree search needs
    to know the size of start gap the new mapping will need.  The call chain
    from do_mmap() to the actual maple tree search looks like this:
    
    do_mmap(size, vm_flags, map_flags, ..)
    	mm/mmap.c:get_unmapped_area(size, map_flags, ...)
    		arch_get_unmapped_area(size, map_flags, ...)
    			vm_unmapped_area(struct vm_unmapped_area_info)
    
    One option would be to add another MAP_ flag to mean a one page start gap
    (as is for shadow stack), but this consumes a flag unnecessarily.  Another
    option could be to simply increase the size passed in do_mmap() by the
    start gap size, and adjust after the fact, but this will interfere with
    the alignment requirements passed in struct vm_unmapped_area_info, and
    unknown to mmap.c.  Instead, introduce variants of
    arch_get_unmapped_area/_topdown() that take vm_flags.  In future changes,
    these variants can be used in mmap.c:get_unmapped_area() to allow the
    vm_flags to be passed through to vm_unmapped_area(), while preserving the
    normal arch_get_unmapped_area/_topdown() for the existing callers.
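A sketch of what the new variants' prototypes could look like.  The added vm_flags_t parameter is the point of the change; the remaining arguments are an assumption modeled on the existing arch_get_unmapped_area() signature:

```c
/* Sketch only, not the exact upstream prototypes.  The _vmflags
 * variants carry vm_flags down so vm_unmapped_area() can compute
 * the start guard gap for the mapping type being placed. */
unsigned long
arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
			       unsigned long len, unsigned long pgoff,
			       unsigned long flags, vm_flags_t vm_flags);

unsigned long
arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
				       unsigned long len, unsigned long pgoff,
				       unsigned long flags, vm_flags_t vm_flags);
```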
    
Link: https://lkml.kernel.org/r/20240326021656.202649-4-rick.p.edgecombe@intel.com
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Cc: Alexei Starovoitov <ast@kernel.org>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
    Cc: Borislav Petkov (AMD) <bp@alien8.de>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Deepak Gupta <debug@rivosinc.com>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Helge Deller <deller@gmx.de>
    Cc: H. Peter Anvin (Intel) <hpa@zytor.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>