Commit 58f327f2 authored by ZhangPeng, committed by Andrew Morton

filemap: avoid unnecessary major faults in filemap_fault()

A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE) in an
application, leading to an unexpected issue[1].

This is caused by the PTE being temporarily cleared during a
read+clear/modify/write update of the PTE, e.g. in
do_numa_page()/change_pte_range().
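
For illustration only, here is a minimal sketch (not the kernel's actual
code; the helper name is made up) of the read+clear/modify/write pattern
that do_numa_page() and change_pte_range() build on top of
ptep_modify_prot_start()/ptep_modify_prot_commit().  Between start and
commit the PTE is cleared, so a reader sampling the PTE without the PT
lock can observe a transient pte_none():

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Hypothetical sketch of the transient-none window, not the real code. */
static void numa_pte_update_sketch(struct vm_area_struct *vma,
				   unsigned long addr, pte_t *ptep)
{
	pte_t oldpte, newpte;

	/*
	 * Clears the PTE under the PT lock; a lockless reader sampling the
	 * PTE right now sees pte_none() even though the page is mapped.
	 */
	oldpte = ptep_modify_prot_start(vma, addr, ptep);
	newpte = pte_modify(oldpte, vma->vm_page_prot);
	/* Re-installs the (modified) PTE, closing the window. */
	ptep_modify_prot_commit(vma, addr, ptep, oldpte, newpte);
}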

The data segment of a user-mode program (the global variable area) is a
private file mapping.  After the pagecache is populated, the first write
triggers COW and a private anonymous page is created.  mlockall() can lock
the COW'ed (anonymous) pages, but the original file pages cannot be locked
and may be reclaimed.  If the global variable (private anon page) is
accessed while vmf->pte is transiently zeroed during a NUMA fault, a file
page fault is triggered.  By then the original private file page may
already have been reclaimed; if the pagecache is not available, a major
fault is taken and the file is read again, causing additional overhead.
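
A hypothetical userspace sketch of this scenario (illustrative only, not
the reproducer from [1]):

#include <stdio.h>
#include <sys/mman.h>

/* Lives in the data segment, i.e. a MAP_PRIVATE file-backed mapping. */
static long counter = 1;

int main(void)
{
	/*
	 * Lock current and future mappings; COW'ed anon pages get mlocked,
	 * but the pagecache pages backing the data segment do not.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE))
		perror("mlockall");

	/*
	 * The first write triggers COW, so a private anon page now backs
	 * 'counter'.  If a later NUMA-hinting update transiently clears the
	 * PTE and the original pagecache page has been reclaimed, an access
	 * can take an unexpected major fault without this patch.
	 */
	counter++;
	printf("%ld\n", counter);
	return 0;
}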

This issue affects our traffic analysis service, which handles heavy
inbound traffic.  When a major fault occurs, I/O scheduling is triggered
and the original I/O is suspended.  I/O scheduling typically takes 0.7 ms,
and if other applications are using the disks, the wait can exceed 10 ms.
Because the inbound traffic is heavy and the NIC buffer is small, this
causes packet loss, which the traffic analysis service cannot tolerate.

Fix this by holding the PTL and rechecking the PTE in filemap_fault()
before triggering a major fault (see filemap_fault_recheck_pte_none() in
the diff below).  We do this check only if the vma is VM_LOCKED, to reduce
the performance impact in common scenarios.

In our production environment, there were 7 major faults every 12 hours.
After applying the patch, no major faults have been triggered.

File page read and write page fault performance was tested on ext4 and
ramdisk using will-it-scale[2] on an x86 physical machine.  The data below
is the average change relative to mainline after the patch is applied; the
results are within the range of normal fluctuation.  Since the check is
done only if the vma is VM_LOCKED, no performance regression is introduced
for the most common cases.

The test results are as follows:
                             processes  processes_idle  threads  threads_idle
ext4    private file write:      0.22%           0.26%    1.21%        -0.15%
ext4    private file  read:      0.03%           1.00%    1.39%         0.34%
ext4    shared  file write:     -0.50%          -0.02%   -0.14%        -0.02%
ramdisk private file write:      0.07%           0.02%    0.53%         0.04%
ramdisk private file  read:      0.01%           1.60%   -0.32%        -0.02%

[1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
[2] https://github.com/antonblanchard/will-it-scale/

Link: https://lkml.kernel.org/r/20240306083809.1236634-1-zhangpeng362@huawei.com
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Suggested-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 4839e79c
@@ -3181,6 +3181,48 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
	return fpin;
}

static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	vm_fault_t ret = 0;
	pte_t *ptep;

	/*
	 * We might have COW'ed a pagecache folio and might now have an mlocked
	 * anon folio mapped. The original pagecache folio is not mlocked and
	 * might have been evicted. During a read+clear/modify/write update of
	 * the PTE, such as done in do_numa_page()/change_pte_range(), we
	 * temporarily clear the PTE under PT lock and might detect it here as
	 * "none" when not holding the PT lock.
	 *
	 * Not rechecking the PTE under PT lock could result in an unexpected
	 * major fault in an mlock'ed region. Recheck only for this special
	 * scenario while holding the PT lock, to not degrade non-mlocked
	 * scenarios. Recheck the PTE without PT lock firstly, thereby reducing
	 * the number of times we hold PT lock.
	 */
	if (!(vma->vm_flags & VM_LOCKED))
		return 0;

	if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
		return 0;

	ptep = pte_offset_map(vmf->pmd, vmf->address);
	if (unlikely(!ptep))
		return VM_FAULT_NOPAGE;

	if (unlikely(!pte_none(ptep_get_lockless(ptep)))) {
		ret = VM_FAULT_NOPAGE;
	} else {
		spin_lock(vmf->ptl);
		if (unlikely(!pte_none(ptep_get(ptep))))
			ret = VM_FAULT_NOPAGE;
		spin_unlock(vmf->ptl);
	}
	pte_unmap(ptep);
	return ret;
}

/**
 * filemap_fault - read in file data for page fault handling
 * @vmf: struct vm_fault containing details of the fault
@@ -3236,6 +3278,10 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
			mapping_locked = true;
		}
	} else {
		ret = filemap_fault_recheck_pte_none(vmf);
		if (unlikely(ret))
			return ret;

		/* No page in the page cache at all */
		count_vm_event(PGMAJFAULT);
		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
...