Commit 0744f280 authored by Ralph Campbell, committed by Linus Torvalds

mm/migrate: optimize migrate_vma_setup() for holes

Patch series "mm/migrate: optimize migrate_vma_setup() for holes".

A simple optimization for migrate_vma_*() when the source VMA is not
anonymous, and a new test case to exercise it.

This patch (of 2):

When migrating system memory to device private memory, if the source
address range is a valid VMA range but a page is either not populated or
is the zero page, the corresponding source PFN array entry is marked as
migratable but carries no PFN.
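Concretely, the collected entry for such a hole carries only the migrate
flag and no PFN bits. A minimal sketch of the two encodings, using the
MIGRATE_PFN_* helpers from include/linux/migrate.h (the index i is
illustrative):

	/* Hole or zero page: migratable, but no source PFN encoded. */
	migrate->src[i] = MIGRATE_PFN_MIGRATE;

	/* A populated page, by contrast, encodes the PFN as well. */
	migrate->src[i] = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;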

Marking the entry this way lets the device driver allocate private memory
and clear it, then insert the new device private struct page into the
CPU's page tables when migrate_vma_pages() is called.  However,
migrate_vma_pages() only inserts the new page if the VMA is anonymous.

There is no point in telling the device driver to allocate device private
memory and then not migrate the page.  Instead, mark the source PFN array
entries as not migrating to avoid this overhead.
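For illustration, here is a minimal sketch of the driver-side loop this
saves work in, loosely modeled on lib/test_hmm.c; driver_alloc_device_page()
is a hypothetical helper that returns a locked device private page, not an
API added by this patch:

	static void driver_migrate_range(struct migrate_vma *args)
	{
		unsigned long i;

		if (migrate_vma_setup(args))
			return;

		for (i = 0; i < args->npages; i++) {
			struct page *dpage;

			/*
			 * With this patch, holes in non-anonymous VMAs no
			 * longer have MIGRATE_PFN_MIGRATE set, so they are
			 * skipped here instead of getting a device page
			 * that migrate_vma_pages() would refuse to insert.
			 */
			if (!(args->src[i] & MIGRATE_PFN_MIGRATE))
				continue;

			dpage = driver_alloc_device_page(); /* hypothetical */
			if (!dpage)
				continue;
			args->dst[i] = migrate_pfn(page_to_pfn(dpage)) |
				       MIGRATE_PFN_LOCKED;
		}

		migrate_vma_pages(args);
		migrate_vma_finalize(args);
	}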

[rcampbell@nvidia.com: v2]
  Link: http://lkml.kernel.org/r/20200710194840.7602-2-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: "Bharata B Rao" <bharata@linux.ibm.com>
Cc: Shuah Khan <shuah@kernel.org>
Link: http://lkml.kernel.org/r/20200710194840.7602-1-rcampbell@nvidia.com
Link: http://lkml.kernel.org/r/20200709165711.26584-1-rcampbell@nvidia.com
Link: http://lkml.kernel.org/r/20200709165711.26584-2-rcampbell@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 34ae204f
mm/migrate.c
@@ -2168,6 +2168,16 @@ static int migrate_vma_collect_hole(unsigned long start,
 	struct migrate_vma *migrate = walk->private;
 	unsigned long addr;
 
+	/* Only allow populating anonymous memory. */
+	if (!vma_is_anonymous(walk->vma)) {
+		for (addr = start; addr < end; addr += PAGE_SIZE) {
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			migrate->npages++;
+		}
+		return 0;
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -2260,8 +2270,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		pte = *ptep;
 
 		if (pte_none(pte)) {
-			mpfn = MIGRATE_PFN_MIGRATE;
-			migrate->cpages++;
+			if (vma_is_anonymous(vma)) {
+				mpfn = MIGRATE_PFN_MIGRATE;
+				migrate->cpages++;
+			}
 			goto next;
 		}
...