Commit da391d64 authored by William Kucharski, committed by Linus Torvalds

mm: correct comments regarding do_fault_around()

There are multiple comments surrounding do_fault_around() that mention
fault_around_pages() and fault_around_mask(), two routines that do not
exist.  These comments should be reworded to reference
fault_around_bytes, the value which is used to determine how much
do_fault_around() will attempt to read when processing a fault.

These comments should have been updated when fault_around_pages() and
fault_around_mask() were removed in commit aecd6f44 ("mm: close race
between do_fault_around() and fault_around_bytes_set()").

Fixes: aecd6f44 ("mm: close race between do_fault_around() and fault_around_bytes_set()")
Link: http://lkml.kernel.org/r/302D0B14-C7E9-44C6-8BED-033F9ACBD030@oracle.com
Signed-off-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Larry Bassel <larry.bassel@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 859d4adc
@@ -3511,9 +3511,8 @@ static int fault_around_bytes_get(void *data, u64 *val)
 }
 
 /*
- * fault_around_pages() and fault_around_mask() expects fault_around_bytes
- * rounded down to nearest page order. It's what do_fault_around() expects to
- * see.
+ * fault_around_bytes must be rounded down to the nearest page order as it's
+ * what do_fault_around() expects to see.
  */
 static int fault_around_bytes_set(void *data, u64 val)
 {
@@ -3556,13 +3555,14 @@ late_initcall(fault_around_debugfs);
  * This function doesn't cross the VMA boundaries, in order to call map_pages()
  * only once.
  *
- * fault_around_pages() defines how many pages we'll try to map.
- * do_fault_around() expects it to return a power of two less than or equal to
- * PTRS_PER_PTE.
+ * fault_around_bytes defines how many bytes we'll try to map.
+ * do_fault_around() expects it to be set to a power of two less than or equal
+ * to PTRS_PER_PTE.
  *
- * The virtual address of the area that we map is naturally aligned to the
- * fault_around_pages() value (and therefore to page order). This way it's
- * easier to guarantee that we don't cross page table boundaries.
+ * The virtual address of the area that we map is naturally aligned to
+ * fault_around_bytes rounded down to the machine page size
+ * (and therefore to page order). This way it's easier to guarantee
+ * that we don't cross page table boundaries.
  */
 static int do_fault_around(struct vm_fault *vmf)
 {
@@ -3579,8 +3579,8 @@ static int do_fault_around(struct vm_fault *vmf)
 	start_pgoff -= off;
 
 	/*
-	 * end_pgoff is either end of page table or end of vma
-	 * or fault_around_pages() from start_pgoff, depending what is nearest.
+	 * end_pgoff is either the end of the page table, the end of
+	 * the vma or nr_pages from start_pgoff, depending what is nearest.
 	 */
 	end_pgoff = start_pgoff -
 		((vmf->address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) +
...