Commit eddb1c22 authored by John Hubbard, committed by Linus Torvalds

mm/gup: introduce pin_user_pages*() and FOLL_PIN

Introduce pin_user_pages*() variations of the get_user_pages*() calls.

For now, these are placeholder calls, until the various call sites are
converted to use the correct get_user_pages*() or pin_user_pages*() API.

These variants will eventually all set FOLL_PIN, which is also
introduced, and thoroughly documented.

    pin_user_pages()
    pin_user_pages_remote()
    pin_user_pages_fast()

All pages that are pinned via the above calls must be unpinned via
put_user_page().
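
As an illustration of the intended pairing (a sketch only, from inside a
hypothetical DIO-style caller; "user_addr" and "NR_PAGES" are invented for
the example):

    struct page *pages[NR_PAGES];
    int i, nr;

    nr = pin_user_pages_fast(user_addr, NR_PAGES, FOLL_WRITE, pages);
    if (nr <= 0)
        return nr;

    /* ... do Direct IO against the pinned pages ... */

    for (i = 0; i < nr; i++)
        put_user_page(pages[i]);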

The underlying rules are:

* FOLL_PIN is a gup-internal flag, so the call sites should not directly
  set it.  That behavior is enforced with assertions.

* Call sites that want to indicate that they are going to do DirectIO
  ("DIO"), or something with similar characteristics, should call a
  get_user_pages()-like wrapper that sets FOLL_PIN.  These wrappers
  will:

    * Start with "pin_user_pages" instead of "get_user_pages".  That
      makes it easy to find and audit the call sites.

    * Set FOLL_PIN

* Pages that are received via FOLL_PIN must be returned via
  put_user_page() (see the pairing summary below).
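
To summarize the acquire/release pairing that these rules produce:

    FOLL_GET:  get_user_pages*() to acquire, put_page() to release
    FOLL_PIN:  pin_user_pages*() to acquire, put_user_page() to release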

Thanks to Jan Kara and Vlastimil Babka for explaining the 4 cases in
this documentation.  (I've reworded it and expanded upon it.)

Link: http://lkml.kernel.org/r/20200107224558.2362728-12-jhubbard@nvidia.com
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>	[Documentation]
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Björn Töpel <bjorn.topel@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 3c7470b6
Documentation/core-api/index.rst:

@@ -31,6 +31,7 @@ Core utilities
    generic-radix-tree
    memory-allocation
    mm-api
+   pin_user_pages
    gfp_mask-from-fs-io
    timekeeping
    boot-time-mm

Documentation/core-api/pin_user_pages.rst: new file (diff collapsed).

include/linux/mm.h:

@@ -1042,16 +1042,14 @@ static inline void put_page(struct page *page)
  * put_user_page() - release a gup-pinned page
  * @page: pointer to page to be released
  *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
+ * Pages that were pinned via pin_user_pages*() must be released via either
+ * put_user_page(), or one of the put_user_pages*() routines. This is so that
+ * eventually such pages can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special handling.
  *
  * put_user_page() and put_page() are not interchangeable, despite this early
  * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
+ * be perfectly matched up with pin*() calls.
  */
 static inline void put_user_page(struct page *page)
 {

@@ -1509,9 +1507,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked);
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+		unsigned long start, unsigned long nr_pages,
+		unsigned int gup_flags, struct page **pages,
+		struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas);
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		unsigned int gup_flags, struct page **pages,
+		struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages, int *locked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,

@@ -1519,6 +1524,8 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages);
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,

@@ -2583,13 +2590,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via put_user_page() */
 
 /*
- * NOTE on FOLL_LONGTERM:
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other. Here is what they mean, and how to use them:
  *
  * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control. This is contrasted with
- * iov_iter_get_pages() where usages which are transient.
+ * period _often_ under userspace control. This is in contrast to
+ * iov_iter_get_pages(), whose usages are transient.
  *
  * FIXME: For pages which are part of a filesystem, mappings are subject to the
  * lifetime enforced by the filesystem and we need guarantees that longterm

@@ -2604,11 +2613,39 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 * Currently only get_user_pages() and get_user_pages_fast() support this flag
 * and calls to get_user_pages_[un]locked are specifically not allowed. This
 * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY
+ * FAULT_FLAG_ALLOW_RETRY.
 *
- * In the CMA case: longterm pins in a CMA region would unnecessarily fragment
- * that region. And so CMA attempts to migrate the page before pinning when
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region. And so, CMA attempts to migrate the page before pinning, when
 * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO). This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
+ * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
+ * a call to put_user_page().
+ *
+ * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
+ * and separate refcounting mechanisms, however, and that means that each has
+ * its own acquire and release mechanisms:
+ *
+ * FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
+ *
+ * FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages() to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via put_user_page().
+ *
+ * Please see Documentation/vm/pin_user_pages.rst for more information.
 */
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
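
To make the matched-release rule concrete: a caller-side helper might look
like the following (hypothetical code, not part of this patch; the helper
name and the idea of keying the release off of gup_flags are illustrative
only):

	/*
	 * Release pages according to how they were acquired: FOLL_PIN
	 * pages via put_user_page(), FOLL_GET pages via put_page().
	 */
	static void release_gup_pages(struct page **pages,
				      unsigned long npages,
				      unsigned int gup_flags)
	{
		unsigned long i;

		for (i = 0; i < npages; i++) {
			if (gup_flags & FOLL_PIN)
				put_user_page(pages[i]);
			else
				put_page(pages[i]);
		}
	}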

mm/gup.c:

@@ -194,6 +194,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t *ptep, pte;
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return ERR_PTR(-EINVAL);
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);

@@ -811,7 +815,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 	start = untagged_addr(start);
 
-	VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));
+	VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
 
 	/*
 	 * If FOLL_FORCE is set then do not force a full fault as the hinting

@@ -1035,7 +1039,16 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		BUG_ON(*locked != 1);
 	}
 
-	if (pages)
+	/*
+	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
+	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
+	 * carelessly failed to specify FOLL_GET), so keep doing that, but only
+	 * for FOLL_GET, not for the newer FOLL_PIN.
+	 *
+	 * FOLL_PIN always expects pages to be non-null, but no need to assert
+	 * that here, as any failures will be obvious enough.
+	 */
+	if (pages && !(flags & FOLL_PIN))
 		flags |= FOLL_GET;
 
 	pages_done = 0;
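
For illustration, the preserved traditional behavior means that a caller
passing a non-NULL pages array with gup_flags == 0 still gets page
references, exactly as if FOLL_GET had been specified (hypothetical
snippet, not part of this patch; "addr" is invented for the example):

	struct page *page;

	/* pages != NULL and no FOLL_PIN, so FOLL_GET is added internally: */
	if (get_user_pages(addr, 1, 0, &page, NULL) == 1)
		put_page(page);	/* matches the implicitly-set FOLL_GET */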

@@ -1606,11 +1619,19 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
  */
+#ifdef CONFIG_MMU
 long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	/*
 	 * Parts of FOLL_LONGTERM behavior are incompatible with
 	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on

@@ -1636,6 +1657,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
+#else /* CONFIG_MMU */
+long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	return 0;
+}
+#endif /* !CONFIG_MMU */
+
 /*
  * This is the same as get_user_pages_remote(), just with a
  * less-flexible calling convention where we assume that the task

@@ -1647,6 +1678,13 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	return __gup_longterm_locked(current, current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }

@@ -2389,30 +2427,15 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	return ret;
 }
 
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
+static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+					unsigned int gup_flags,
+					struct page **pages)
 {
 	unsigned long addr, len, end;
 	int nr = 0, ret = 0;
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-				       FOLL_FORCE)))
+				       FOLL_FORCE | FOLL_PIN)))
 		return -EINVAL;
 
 	start = untagged_addr(start) & PAGE_MASK;

@@ -2452,4 +2475,103 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	return ret;
 }
 
+/**
+ * get_user_pages_fast() - pin user pages in memory
+ * @start:	starting user address
+ * @nr_pages:	number of pages from start to pin
+ * @gup_flags:	flags modifying pin behaviour
+ * @pages:	array that receives pointers to the pages pinned.
+ *		Should be at least nr_pages long.
+ *
+ * Attempt to pin user pages in memory without taking mm->mmap_sem.
+ * If not successful, it will fall back to taking the lock and
+ * calling get_user_pages().
+ *
+ * Returns number of pages pinned. This may be fewer than the number requested.
+ * If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns
+ * -errno.
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
+	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
+
+/**
+ * pin_user_pages_fast() - pin user pages in memory without taking locks
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_fast().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast);
+
+/**
+ * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_remote().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, pages,
+				     vmas, locked);
+}
+EXPORT_SYMBOL(pin_user_pages_remote);
+
+/**
+ * pin_user_pages() - pin user pages in memory for use by other devices
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+}
+EXPORT_SYMBOL(pin_user_pages);
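
As a usage sketch for the remote variant (hypothetical caller, not part of
this patch; "tsk", "mm" and "addr" are assumed valid): as with
get_user_pages_remote(), the target mm's mmap_sem must be held across the
call:

	struct page *page;
	long nr;

	down_read(&mm->mmap_sem);
	nr = pin_user_pages_remote(tsk, mm, addr, 1, FOLL_WRITE, &page,
				   NULL, NULL);
	up_read(&mm->mmap_sem);

	if (nr == 1) {
		/* ... access the remote task's page ... */
		put_user_page(page);
	}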