Commit bce617ed authored by Peter Xu, committed by Linus Torvalds

mm: do page fault accounting in handle_mm_fault

Patch series "mm: Page fault accounting cleanups", v5.

This is v5 of the page fault accounting cleanup series.  It originates from
Gerald Schaefer's report a week ago of incorrect page fault accounting for
retried page faults after commit 4064b982 ("mm: allow VM_FAULT_RETRY for
multiple times"):

  https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/

What this series did:

  - Correct page fault accounting: we account a page fault (no matter
    whether it comes from #PF handling, gup, or anything else) only for
    the attempt that completed the fault.  For example, page fault
    retries should not be counted in the page fault counters, and the
    same applies to the perf events (see the sketch after this list).

  - Unify the definition of PERF_COUNT_SW_PAGE_FAULTS: currently this perf
    event is used in an ad-hoc way across different archs.

    Case (1): for many archs it is done at the entry of the page fault
    handler, so that it also covers e.g. erroneous faults.

    Case (2): for some other archs, it is only accounted when the page
    fault is resolved successfully.

    Case (3): quite a few archs have not enabled this perf event at all.

    Since this series touches nearly all the archs, we unify this perf
    event to always follow case (1), which is the one that makes the most
    sense.  And since we moved the accounting into handle_mm_fault(), the
    other two MAJ/MIN perf events are naturally taken care of as well.

  - Unify definition of "major faults": the definition of "major
    fault" is slightly changed when used in accounting (not
    VM_FAULT_MAJOR).  More information in patch 1.

  - Always account the page fault to the task that triggered it.  This
    does not matter much for #PF handling, but it does for gup.  More
    information on this in patch 25.
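
  To make the rule concrete, below is a condensed sketch of the accounting
  decision (it mirrors mm_account_fault() introduced in patch 1 and shown in
  the diff below; the helper name here is only illustrative, and the kernel
  context -- current, perf_sw_event(), the VM_FAULT_*/FAULT_FLAG_* bits -- is
  assumed):

    /* Condensed form of the rule implemented by mm_account_fault(). */
    static void account_completed_fault(struct pt_regs *regs,
                                        unsigned long address,
                                        unsigned int flags, vm_fault_t ret)
    {
            bool major;

            /* Only completed faults are accounted; errors and retries are not. */
            if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
                    return;

            /* "Major" = final attempt was VM_FAULT_MAJOR, or the fault was retried. */
            major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

            /* regs == NULL (e.g. gup at this point in the series): skip accounting. */
            if (!regs)
                    return;

            if (major) {
                    current->maj_flt++;
                    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
            } else {
                    current->min_flt++;
                    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
            }
    }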

Patchset layout:

Patch 1:     Introduces the accounting in handle_mm_fault(), not yet enabled.
Patch 2-23:  Enable the new accounting in the arch #PF handlers, one by one.
Patch 24:    Enable the new accounting for the remaining outliers (gup, iommu, etc.)
Patch 25:    Clean up the GUP task_struct pointer, since it is no longer needed

This patch (of 25):

This is a preparation patch to move page fault accounting into the generic
code in handle_mm_fault().  This includes both the per-task maj_flt/min_flt
counters and the major/minor page fault perf events.  To do this, the
pt_regs pointer is passed into handle_mm_fault().

PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
handlers.

So far, every pt_regs pointer passed into handle_mm_fault() is NULL, which
means this patch should have no intended functional change.
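
For reference, this is the interface change in include/linux/mm.h (shown in
full in the diff below), together with an illustrative call-site shape; the
call site is a sketch, not any particular arch's code:

    /* New prototype: the caller may pass its pt_regs, or NULL. */
    vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
                               unsigned int flags, struct pt_regs *regs);

    /*
     * After this patch, every call site simply appends NULL, e.g.:
     *
     *         perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
     *         ...
     *         fault = handle_mm_fault(vma, address, flags, NULL);
     *
     * Later patches in the series replace that NULL with the arch's real
     * regs pointer, while PERF_COUNT_SW_PAGE_FAULTS stays at the entry of
     * the arch handler.
     */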
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com
Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ed03d924
@@ -148,7 +148,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -130,7 +130,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 			goto bad_area;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -224,7 +224,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -428,7 +428,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
 }
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -150,7 +150,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0);
+	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
+				NULL);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -88,7 +88,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 		break;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -143,7 +143,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -134,7 +134,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 	if (fault_signal_pending(fault, regs))
@@ -214,7 +214,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -152,7 +152,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -206,7 +206,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -131,7 +131,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -159,7 +159,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -302,7 +302,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -64,7 +64,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 	}
 	ret = 0;
-	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0);
+	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0, NULL);
 	if (unlikely(*flt & VM_FAULT_ERROR)) {
 		if (*flt & VM_FAULT_OOM) {
 			ret = -ENOMEM;
@@ -511,7 +511,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	major |= fault & VM_FAULT_MAJOR;
@@ -109,7 +109,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -476,7 +476,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
@@ -482,7 +482,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
 		if (mm_fault_error(regs, error_code, address, fault))
@@ -234,7 +234,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -410,7 +410,7 @@ static void force_user_fault(unsigned long address, int write)
 		if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
 			goto bad_area;
 	}
-	switch (handle_mm_fault(vma, address, flags)) {
+	switch (handle_mm_fault(vma, address, flags, NULL)) {
 	case VM_FAULT_SIGBUS:
 	case VM_FAULT_OOM:
 		goto do_sigbus;
@@ -422,7 +422,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 			goto bad_area;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		goto exit_exception;
@@ -71,7 +71,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 	do {
 		vm_fault_t fault;
-		fault = handle_mm_fault(vma, address, flags);
+		fault = handle_mm_fault(vma, address, flags, NULL);
 		if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
 			goto out_nosemaphore;
@@ -1291,7 +1291,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	major |= fault & VM_FAULT_MAJOR;
 	/* Quick path to respond to signals */
@@ -107,7 +107,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -495,7 +495,7 @@ static void do_fault(struct work_struct *work)
 	if (access_error(vma, fault))
 		goto out;
-	ret = handle_mm_fault(vma, address, flags);
+	ret = handle_mm_fault(vma, address, flags, NULL);
 out:
 	mmap_read_unlock(mm);
@@ -872,7 +872,8 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 			goto invalid;
 		ret = handle_mm_fault(vma, address,
-				      req->wr_req ? FAULT_FLAG_WRITE : 0);
+				      req->wr_req ? FAULT_FLAG_WRITE : 0,
+				      NULL);
 		if (ret & VM_FAULT_ERROR)
 			goto invalid;
@@ -38,6 +38,7 @@ struct file_ra_state;
 struct user_struct;
 struct writeback_control;
 struct bdi_writeback;
+struct pt_regs;
 
 void init_mm_internals(void);
@@ -1658,7 +1659,8 @@ int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags);
+			unsigned long address, unsigned int flags,
+			struct pt_regs *regs);
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			unsigned long address, unsigned int fault_flags,
 			bool *unlocked);
@@ -1668,7 +1670,8 @@ void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
 static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-		unsigned long address, unsigned int flags)
+		unsigned long address, unsigned int flags,
+		struct pt_regs *regs)
 {
 	/* should never happen if there's no MMU */
 	BUG();
@@ -884,7 +884,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_TRIED;
 	}
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, *flags);
@@ -1238,7 +1238,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 	    fatal_signal_pending(current))
 		return -EINTR;
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	major |= ret & VM_FAULT_MAJOR;
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, 0);
@@ -75,7 +75,8 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 	}
 	for (; addr < end; addr += PAGE_SIZE)
-		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
+		if (handle_mm_fault(vma, addr, fault_flags, NULL) &
+		    VM_FAULT_ERROR)
 			return -EFAULT;
 	return -EBUSY;
 }
@@ -480,7 +480,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 			break;
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
-					FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE);
+					FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+					NULL);
 		else
 			ret = VM_FAULT_WRITE;
 		put_page(page);
@@ -71,6 +71,8 @@
 #include <linux/dax.h>
 #include <linux/oom.h>
 #include <linux/numa.h>
+#include <linux/perf_event.h>
+#include <linux/ptrace.h>
 
 #include <trace/events/kmem.h>
@@ -4356,6 +4358,64 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
 
+/**
+ * mm_account_fault - Do page fault accountings
+ *
+ * @regs: the pt_regs struct pointer.  When set to NULL, will skip accounting
+ *        of perf event counters, but we'll still do the per-task accounting to
+ *        the task who triggered this page fault.
+ * @address: the faulted address.
+ * @flags: the fault flags.
+ * @ret: the fault retcode.
+ *
+ * This will take care of most of the page fault accountings.  Meanwhile, it
+ * will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf counter
+ * updates.  However note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
+ * still be in per-arch page fault handlers at the entry of page fault.
+ */
+static inline void mm_account_fault(struct pt_regs *regs,
+				    unsigned long address, unsigned int flags,
+				    vm_fault_t ret)
+{
+	bool major;
+
+	/*
+	 * We don't do accounting for some specific faults:
+	 *
+	 * - Unsuccessful faults (e.g. when the address wasn't valid).  That
+	 *   includes arch_vma_access_permitted() failing before reaching here.
+	 *   So this is not a "this many hardware page faults" counter.  We
+	 *   should use the hw profiling for that.
+	 *
+	 * - Incomplete faults (VM_FAULT_RETRY).  They will only be counted
+	 *   once they're completed.
+	 */
+	if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
+		return;
+
+	/*
+	 * We define the fault as a major fault when the final successful fault
+	 * is VM_FAULT_MAJOR, or if it retried (which implies that we couldn't
+	 * handle it immediately previously).
+	 */
+	major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);
+
+	/*
+	 * If the fault is done for GUP, regs will be NULL, and we will skip
+	 * the fault accounting.
+	 */
+	if (!regs)
+		return;
+
+	if (major) {
+		current->maj_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	} else {
+		current->min_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
+	}
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -4363,7 +4423,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+		unsigned int flags, struct pt_regs *regs)
 {
 	vm_fault_t ret;
@@ -4404,6 +4464,8 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_oom_synchronize(false);
 	}
 
+	mm_account_fault(regs, address, flags, ret);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);