06 Mar, 2019 (40 commits)
    • powerpc/vdso: don't clear PG_reserved · f55b7417
      David Hildenbrand authored
      The VDSO is part of the kernel image and therefore the struct pages are
      marked as reserved during boot.
      
      As we install a special mapping, the actual struct pages will never be
      exposed to MM via the page tables.  We can therefore leave the pages
      marked as reserved.
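
      A sketch of the kind of hunk this removes (illustrative only; the
      vdso_pages/vdso_kbase/vdso_pagelist names are simplified stand-ins for
      the actual setup loop in arch/powerpc/kernel/vdso.c):

              for (i = 0; i < vdso_pages; i++) {
                      struct page *pg = virt_to_page(vdso_kbase + i * PAGE_SIZE);

                      /* ClearPageReserved(pg);  -- dropped: the special
                       * mapping keeps the page out of the page tables,
                       * so the reserved bit can simply stay set. */
                      get_page(pg);
                      vdso_pagelist[i] = pg;
              }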
      
      Link: http://lkml.kernel.org/r/20190114125903.24845-4-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>		[powerpc]
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • s390/vdso: don't clear PG_reserved · 446d2964
      David Hildenbrand authored
      The VDSO is part of the kernel image and therefore the struct pages are
      marked as reserved during boot.
      
      As we install a special mapping, the actual struct pages will never be
      exposed to MM via the page tables.  We can therefore leave the pages
      marked as reserved.
      
      Link: http://lkml.kernel.org/r/20190114125903.24845-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • agp: efficeon: no need to set PG_reserved on GATT tables · 750b317f
      David Hildenbrand authored
      Patch series "mm: PG_reserved cleanups and documentation", v2.
      
      I was recently going over all users of PG_reserved.  Short story: it is
      difficult and sometimes not really clear whether setting/checking for
      PG_reserved is only a relic from the past.  It is easy to break things.  I
      guess I now have a pretty good idea why things are like that nowadays and
      how they evolved.
      
      I had way more cleanups in this series initially, but some
      architectures take PG_reserved as a way to apply a different caching
      strategy (for MMIO pages).  So I decided to only include the most
      obvious changes (that are less likely to break something).  So the big
      chunk of manual SetPageReserved users are MMIO/DMA related things on
      device buffers.
      
      Most notably, for device memory we will hopefully soon stop setting
      PG_reserved.  Then the documentation has to be updated.
      
      This patch (of 9):
      
      The l1 GATT page table is kept in a special on-chip page with 64
      entries.  We allocate the l2 page table pages via get_zeroed_page() and
      enter them into the table.  These l2 pages are modified accordingly when
      inserting/removing memory via efficeon_insert_memory and
      efficeon_remove_memory.
      
      Apart from that, these pages are not exposed or ioremap'ed.  We can stop
      setting them reserved (probably copied from generic code).
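
      As a rough sketch (names loosely follow drivers/char/agp/efficeon-agp.c;
      treat this as illustrative, not the literal diff), the l2 allocation
      becomes:

              unsigned long page = get_zeroed_page(GFP_KERNEL);

              if (!page)
                      return -ENOMEM;
              /* No SetPageReserved() here anymore: the page is only
               * referenced via the GATT, never mapped or ioremap'ed. */
              efficeon_private.l1_table[index] = page;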
      
      Link: http://lkml.kernel.org/r/20190114125903.24845-2-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rid swapoff of quadratic complexity · b56a2d8a
      Vineeth Remanan Pillai authored
      This patch was initially posted by Kelley Nielsen.  Reposting the patch
      with all review comments addressed and with minor modifications and
      optimizations.  Also, folding in the fixes offered by Hugh Dickins and
      Huang Ying.  Tests were rerun and commit message updated with new
      results.
      
      try_to_unuse() is of quadratic complexity, with a lot of wasted effort.
      It unuses swap entries one by one, potentially iterating over all the
      page tables for all the processes in the system for each one.
      
      This newly proposed implementation of try_to_unuse() reduces its
      complexity to linear.  It iterates over the system's mms once, unusing
      all the affected entries as it walks each set of page tables.  It also
      makes similar changes to shmem_unuse.
      
      Improvement
      
      swapoff was called on a swap partition containing about 6G of data, in a
      VM(8cpu, 16G RAM), and calls to unuse_pte_range() were counted.
      
      Present implementation....about 1200M calls(8min, avg 80% cpu util).
      Prototype.................about  9.0K calls(3min, avg 5% cpu util).
      
      Details
      
      In shmem_unuse(), iterate over the shmem_swaplist and, for each
      shmem_inode_info that contains a swap entry, pass it to
      shmem_unuse_inode(), along with the swap type.  In shmem_unuse_inode(),
      iterate over its associated xarray, and store the index and value of
      each swap entry in an array for passing to shmem_swapin_page() outside
      of the RCU critical section.
      
      In try_to_unuse(), instead of iterating over the entries in the type and
      unusing them one by one, perhaps walking all the page tables for all the
      processes for each one, iterate over the mmlist, making one pass.  Pass
      each mm to unuse_mm() to begin its page table walk, and during the walk,
      unuse all the ptes that have backing store in the swap type received by
      try_to_unuse().  After the walk, check the type for orphaned swap
      entries with find_next_to_unuse(), and remove them from the swap cache.
      If find_next_to_unuse() starts over at the beginning of the type, repeat
      the check of the shmem_swaplist and the walk a maximum of three times.
      
      Change unuse_mm() and the intervening walk functions down to
      unuse_pte_range() to take the type as a parameter, and to iterate over
      their entire range, calling the next function down on every iteration.
      In unuse_pte_range(), make a swap entry from each pte in the range using
      the passed in type.  If it has backing store in the type, call
      swapin_readahead() to retrieve the page and pass it to unuse_pte().
      
      Pass the count of pages_to_unuse down the page table walks in
      try_to_unuse(), and return from the walk when the desired number of
      pages has been swapped back in.
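
      A heavily simplified sketch of the linear pass (the real code in
      mm/swapfile.c also handles mm refcounting, locking and the retry
      logic described above):

              int try_to_unuse(unsigned int type, bool frontswap,
                               unsigned long pages_to_unuse)
              {
                      struct mm_struct *mm;

                      /* One pass over every mm: unuse all entries of this
                       * swap type while walking each set of page tables. */
                      list_for_each_entry(mm, &init_mm.mmlist, mmlist)
                              unuse_mm(mm, type, frontswap, &pages_to_unuse);
                      ...
              }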
      
      Link: http://lkml.kernel.org/r/20190114153129.4852-2-vpillai@digitalocean.com
      Signed-off-by: Vineeth Remanan Pillai <vpillai@digitalocean.com>
      Signed-off-by: Kelley Nielsen <kelleynnn@gmail.com>
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@surriel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor swap-in logic out of shmem_getpage_gfp · c5bf121e
      Vineeth Remanan Pillai authored
      The swap-in logic can be reused independently of the rest of the logic in
      shmem_getpage_gfp().  So let's refactor it out as an independent function.
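
      The extracted helper has roughly this shape (the argument list as
      sketched here may differ in detail from mm/shmem.c):

              /* Swap in the page at the given index and return it, locked,
               * in *pagep on success. */
              static int shmem_swapin_page(struct inode *inode, pgoff_t index,
                                           struct page **pagep, enum sgp_type sgp,
                                           gfp_t gfp, struct vm_area_struct *vma,
                                           vm_fault_t *fault_type);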
      
      Link: http://lkml.kernel.org/r/20190114153129.4852-1-vpillai@digitalocean.com
      Signed-off-by: Vineeth Remanan Pillai <vpillai@digitalocean.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kelley Nielsen <kelleynnn@gmail.com>
      Cc: Rik van Riel <riel@surriel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmscan.c: remove 7th argument of isolate_lru_pages() · a9e7c39f
      Kirill Tkhai authored
      We may simply check for sc->may_unmap in isolate_lru_pages() instead of
      doing that in both of its callers.
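
      The check moves into the isolation loop itself, along these lines
      (a sketch, not the literal hunk):

              /* In isolate_lru_pages(): honour sc->may_unmap directly
               * instead of having both callers pass an isolation mode. */
              if (!sc->may_unmap && page_mapped(page))
                      continue;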
      
      Link: http://lkml.kernel.org/r/154748280735.29962.15867846875217618569.stgit@localhost.localdomain
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, mempolicy: fix uninit memory access · 2e25644e
      Vlastimil Babka authored
      Syzbot with KMSAN reports (excerpt):
      
      ==================================================================
      BUG: KMSAN: uninit-value in mpol_rebind_policy mm/mempolicy.c:353 [inline]
      BUG: KMSAN: uninit-value in mpol_rebind_mm+0x249/0x370 mm/mempolicy.c:384
      CPU: 1 PID: 17420 Comm: syz-executor4 Not tainted 4.20.0-rc7+ #15
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
      Google 01/01/2011
      Call Trace:
        __dump_stack lib/dump_stack.c:77 [inline]
        dump_stack+0x173/0x1d0 lib/dump_stack.c:113
        kmsan_report+0x12e/0x2a0 mm/kmsan/kmsan.c:613
        __msan_warning+0x82/0xf0 mm/kmsan/kmsan_instr.c:295
        mpol_rebind_policy mm/mempolicy.c:353 [inline]
        mpol_rebind_mm+0x249/0x370 mm/mempolicy.c:384
        update_tasks_nodemask+0x608/0xca0 kernel/cgroup/cpuset.c:1120
        update_nodemasks_hier kernel/cgroup/cpuset.c:1185 [inline]
        update_nodemask kernel/cgroup/cpuset.c:1253 [inline]
        cpuset_write_resmask+0x2a98/0x34b0 kernel/cgroup/cpuset.c:1728
      
      ...
      
      Uninit was created at:
        kmsan_save_stack_with_flags mm/kmsan/kmsan.c:204 [inline]
        kmsan_internal_poison_shadow+0x92/0x150 mm/kmsan/kmsan.c:158
        kmsan_kmalloc+0xa6/0x130 mm/kmsan/kmsan_hooks.c:176
        kmem_cache_alloc+0x572/0xb90 mm/slub.c:2777
        mpol_new mm/mempolicy.c:276 [inline]
        do_mbind mm/mempolicy.c:1180 [inline]
        kernel_mbind+0x8a7/0x31a0 mm/mempolicy.c:1347
        __do_sys_mbind mm/mempolicy.c:1354 [inline]
      
      As it's difficult to report where exactly the uninit value resides in
      the mempolicy object, we have to guess a bit.  mm/mempolicy.c:353
      contains this part of mpol_rebind_policy():
      
              if (!mpol_store_user_nodemask(pol) &&
                  nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
      
      "mpol_store_user_nodemask(pol)" is testing pol->flags, which I couldn't
      ever see being uninitialized after leaving mpol_new().  So I'll guess
      it's actually about accessing pol->w.cpuset_mems_allowed on line 354,
      but still part of the statement starting on line 353.
      
      For w.cpuset_mems_allowed to be uninitialized, and the nodes_equal()
      check reachable for a mempolicy where mpol_set_nodemask() is called in
      do_mbind(), it seems the only possibility is an MPOL_PREFERRED policy
      with an empty set of nodes, i.e.  the MPOL_LOCAL equivalent, with the
      MPOL_F_LOCAL flag.  Let's exclude such policies from the nodes_equal()
      check.  Note
      the uninit access should be benign anyway, as rebinding this kind of
      policy is always a no-op.  Therefore no actual need for stable
      inclusion.
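
      The fix then boils down to one extra condition in mpol_rebind_policy()
      (roughly):

              if (!mpol_store_user_nodemask(pol) &&
                  !(pol->flags & MPOL_F_LOCAL) &&
                  nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
                      return;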
      
      Link: http://lkml.kernel.org/r/a71997c3-e8ae-a787-d5ce-3db05768b27c@suse.cz
      Link: http://lkml.kernel.org/r/73da3e9c-cc84-509e-17d9-0c434bb9967d@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: syzbot+b19c2dc2c990ea657a71@syzkaller.appspotmail.com
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Yisheng Xie <xieyisheng1@huawei.com>
      Cc: zhong jiang <zhongjiang@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: killed threads should not invoke memcg OOM killer · 7775face
      Tetsuo Handa authored
      If a memory cgroup contains a single process with many threads
      (including a different process group sharing the mm) then it is possible
      to trigger a race where the oom killer complains that there are no
      oom-eligible tasks, and complains into the log, which is both annoying and
      confusing because there is no actual problem.  The race looks as
      follows:
      
      P1				oom_reaper		P2
      try_charge						try_charge
        mem_cgroup_out_of_memory
          mutex_lock(oom_lock)
            out_of_memory
              oom_kill_process(P1,P2)
               wake_oom_reaper
          mutex_unlock(oom_lock)
          				oom_reap_task
      							  mutex_lock(oom_lock)
      							    select_bad_process # no victim
      
      The problem is more visible with many threads.
      
      Fix this by checking for fatal_signal_pending from
      mem_cgroup_out_of_memory when the oom_lock is already held.
      
      The oom bypass is safe because we do the same early in the try_charge
      path already.  The situation might have changed in the meantime.  It
      should be safe to check for fatal_signal_pending and tsk_is_oom_victim
      but for better code readability abstract the current charge bypass
      condition into should_force_charge and reuse it from that path.
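
      The extracted helper is essentially the existing bypass condition from
      the try_charge path (sketch):

              static bool should_force_charge(void)
              {
                      return tsk_is_oom_victim(current) ||
                             fatal_signal_pending(current) ||
                             (current->flags & PF_EXITING);
              }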
      
      Link: http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: check return value of memblock_alloc_node_nopanic() · 23a7052a
      Mike Rapoport authored
      There are two early memory allocations that use
      memblock_alloc_node_nopanic() and do not check its return value.
      
      While this happens very early during boot and the chances that the
      allocation will fail are vanishingly small, it is still worth having
      proper checks for the allocation errors.
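
      The added checks follow the usual early-boot pattern, along the lines
      of (illustrative; the exact messages differ):

              usemap = memblock_alloc_node_nopanic(usemapsize, nid);
              if (!usemap)
                      panic("%s: Failed to allocate %ld bytes\n",
                            __func__, usemapsize);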
      
      Link: http://lkml.kernel.org/r/1547734941-944-1-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/powerpc/mm/hugetlb: NestMMU workaround for hugetlb mprotect RW upgrade · 8ef5cbde
      Aneesh Kumar K.V authored
      NestMMU requires us to mark the pte invalid and flush the tlb when we do
      a RW upgrade of pte.  We fixed a variant of this in the fault path in
      bd5050e3 ("powerpc/mm/radix: Change pte relax sequence to handle
      nest MMU hang").
      
      Link: http://lkml.kernel.org/r/20190116085035.29729-6-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Reviewed-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/hugetlb: add prot_modify_start/commit sequence for hugetlb update · 023bdd00
      Aneesh Kumar K.V authored
      Architectures like ppc64 require a conditional tlb flush based on
      the old and new values of the pte.  Follow the regular pte change protection
      sequence for hugetlb too.  This allows the architectures to override the
      update sequence.
      
      Link: http://lkml.kernel.org/r/20190116085035.29729-5-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Reviewed-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/powerpc/mm: Nest MMU workaround for mprotect RW upgrade · 5b323367
      Aneesh Kumar K.V authored
      NestMMU requires us to mark the pte invalid and flush the tlb when we do
      a RW upgrade of pte.  We fixed a variant of this in the fault path in
      bd5050e3 ("powerpc/mm/radix: Change pte relax sequence to handle
      nest MMU hang").
      
      Do the same for mprotect upgrades.
      
      Hugetlb is handled in the next patch.
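
      The mprotect path then follows the same invalidate-then-set sequence;
      roughly, in change_pte_range() (sketch):

              /* Clear the pte (and, on radix, flush the TLB) ... */
              oldpte = ptep_modify_prot_start(vma, addr, pte);
              ptent = pte_modify(oldpte, newprot);
              /* ... then install the upgraded pte. */
              ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);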
      
      Link: http://lkml.kernel.org/r/20190116085035.29729-4-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: update ptep_modify_prot_commit to take old pte value as arg · 04a86453
      Aneesh Kumar K.V authored
      Architectures like ppc64 require a conditional tlb flush based on
      the old and new values of the pte.  Enable that by passing the old pte value as
      the arg.
      
      Link: http://lkml.kernel.org/r/20190116085035.29729-3-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: update ptep_modify_prot_start/commit to take vm_area_struct as arg · 0cbe3e26
      Aneesh Kumar K.V authored
      Patch series "NestMMU pte upgrade workaround for mprotect", v5.
      
      We can upgrade pte access (R -> RW transition) via mprotect.  We need to
      make sure we follow the recommended pte update sequence as outlined in
      commit bd5050e3 ("powerpc/mm/radix: Change pte relax sequence to
      handle nest MMU hang") for such updates.  This patch series does that.
      
      This patch (of 5):
      
      Some architectures may want to call flush_tlb_range from these helpers.
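
      After the full series the helpers take the vma (and, on the commit
      side, the old pte value), roughly:

              pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
                                           unsigned long addr, pte_t *ptep);
              void ptep_modify_prot_commit(struct vm_area_struct *vma,
                                           unsigned long addr, pte_t *ptep,
                                           pte_t old_pte, pte_t pte);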
      
      Link: http://lkml.kernel.org/r/20190116085035.29729-2-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_owner: move config option to mm/Kconfig.debug · 8aa49762
      Changbin Du authored
      Move the PAGE_OWNER option from submenu "Compile-time checks and
      compiler options" to dedicated submenu "Memory Debugging".
      
      Link: http://lkml.kernel.org/r/20190120024254.6270-1-changbin.du@gmail.com
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmap.c: remove some redundancy in arch_get_unmapped_area_topdown() · 43cca0b1
      Yang Fan authored
      The variable 'addr' is redundant in arch_get_unmapped_area_topdown(),
      just use parameter 'addr0' directly.  Then remove the const qualifier of
      the parameter, and change its name to 'addr'.
      
      And, in accordance with other functions, remove the const qualifier from all
      other non-pointer parameters of arch_get_unmapped_area_topdown().
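
      The resulting declaration in mm/mmap.c is roughly:

              unsigned long
              arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
                                             unsigned long len, unsigned long pgoff,
                                             unsigned long flags);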
      
      Link: http://lkml.kernel.org/r/20190127041112.25599-1-nullptr.cpp@gmail.com
      Signed-off-by: Yang Fan <nullptr.cpp@gmail.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom: remove 'prefer children over parent' heuristic · bbbe4802
      Shakeel Butt authored
      Since the start of the git history of Linux, the kernel, after selecting
      the worst process to be oom-killed, has preferred to kill its child (if the
      child does not share the mm with the parent).  Later it was changed to
      prefer to kill the child which is worst.  If the parent is still the worst
      then the parent will be killed.
      
      This heuristic assumes that the children did less work than their parent
      and that by killing one of them, less work will be lost.  However this is
      very workload dependent.  A workload which can benefit from
      this heuristic can use oom_score_adj to prefer that children be killed
      before the parent.
      
      select_bad_process() has already selected the worst process in the
      system/memcg.  There is no need to recheck the badness of its children
      hoping to find a worse candidate.  That's a lot of unneeded racy
      work.  The heuristic is also dangerous because it makes fork-bomb-like
      workloads recover much later, because we constantly pick and kill
      processes which are not memory hogs.  So, let's remove this whole
      heuristic.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/20190121215850.221745-2-shakeelb@google.com
      Signed-off-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: no need to check return value of debugfs_create functions · d9f7979c
      Greg Kroah-Hartman authored
      When calling debugfs functions, there is no need to ever check the
      return value.  The function can work or not, but the code logic should
      never do something different based on this.
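
      The pattern being removed, sketched ("foo" is a made-up name):

              /* Before: pointless error handling */
              dentry = debugfs_create_dir("foo", NULL);
              if (!dentry)
                      return -ENOMEM;

              /* After: just call it; a debugfs failure must not
               * change the code logic */
              debugfs_create_dir("foo", NULL);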
      
      Link: http://lkml.kernel.org/r/20190122152151.16139-14-gregkh@linuxfoundation.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory.c: prevent mapping typed pages to userspace · 0ee930e6
      Matthew Wilcox authored
      Pages which use page_type must never be mapped to userspace as it would
      destroy their page type.  Add an explicit check for this instead of
      assuming that kernel drivers always get this right.
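
      The check in the insert_page()/vm_insert_page() path is essentially:

              /* page_type overlays _mapcount, so mapping such a page
               * to userspace would corrupt it */
              if (page_has_type(page))
                      return -EINVAL;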
      
      Link: http://lkml.kernel.org/r/20190129053830.3749-1-willy@infradead.org
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: prevent mapping slab pages to userspace · 2d432cb7
      Matthew Wilcox authored
      It's never appropriate to map a page allocated by SLAB into userspace.
      A buggy device driver might try this, or an attacker might be able to
      find a way to make it happen.
      
      Christoph said:
      
      : Let's just fail the code.  Currently this may work with SLUB.  But SLAB
      : and SLOB overlay fields with mapcount.  So you would have a corrupted page
      : struct if you mapped a slab page to user space.
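
      Together with the page_type check from the companion patch, the gate
      in insert_page() ends up roughly as:

              if (PageAnon(page) || PageSlab(page) || page_has_type(page))
                      return -EINVAL;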
      
      Link: http://lkml.kernel.org/r/20190125173827.2658-1-willy@infradead.org
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc.c: fix kernel BUG at mm/vmalloc.c:512! · afd07389
      Uladzislau Rezki (Sony) authored
      One of the vmalloc stress test case triggers the kernel BUG():
      
        <snip>
        [60.562151] ------------[ cut here ]------------
        [60.562154] kernel BUG at mm/vmalloc.c:512!
        [60.562206] invalid opcode: 0000 [#1] PREEMPT SMP PTI
        [60.562247] CPU: 0 PID: 430 Comm: vmalloc_test/0 Not tainted 4.20.0+ #161
        [60.562293] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
        [60.562351] RIP: 0010:alloc_vmap_area+0x36f/0x390
        <snip>
      
      It can happen due to a big align request resulting in overflow of the
      calculated address, i.e.  it becomes 0 after ALIGN()'s fixup.
      
      Fix it by checking that the calculated address is within the
      vstart/vend range.
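
      The guard amounts to validating the ALIGN()ed address against the
      requested range, along the lines of:

              addr = ALIGN(addr, align);
              if (addr < vstart || addr + size > vend)
                      goto overflow;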
      
      Link: http://lkml.kernel.org/r/20190124115648.9433-2-urezki@gmail.com
      Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memcg: extract memcg maxable seq_file logic to seq_show_memcg_tunable · 677dc973
      Chris Down authored
      memcg has a significant number of files exposed to kernfs where their
      value is either exposed directly or is "max" in the case of
      PAGE_COUNTER_MAX.
      
      This patch makes this generic by providing a single function to do this
      work.  In combination with the previous patch adding
      mem_cgroup_from_seq, this makes all of the seq_show feeder functions
      significantly more simple.
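
      The helper boils down to something like this (a sketch; see
      mm/memcontrol.c for the exact name and signature):

              static int seq_show_memcg_tunable(struct seq_file *m,
                                                unsigned long value)
              {
                      if (value == PAGE_COUNTER_MAX)
                              seq_puts(m, "max\n");
                      else
                              seq_printf(m, "%llu\n", (u64)value * PAGE_SIZE);
                      return 0;
              }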
      
      Link: http://lkml.kernel.org/r/20190124194100.GA31425@chrisdown.name
      Signed-off-by: Chris Down <chris@chrisdown.name>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memcg: create mem_cgroup_from_seq · aa9694bb
      Chris Down authored
      This is the start of a series of patches similar to my earlier
      DEFINE_MEMCG_MAX_OR_VAL work, but with less Macro Magic(tm).
      
      There are a bunch of places we go from seq_file to mem_cgroup, which
      currently requires manually getting the css, then getting the mem_cgroup
      from the css.  It's in enough places now that having mem_cgroup_from_seq
      makes sense (and also makes the next patch a bit nicer).
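
      The helper itself is a one-liner:

              static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
              {
                      return mem_cgroup_from_css(seq_css(m));
              }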
      
      Link: http://lkml.kernel.org/r/20190124194050.GA31341@chrisdown.name
      Signed-off-by: Chris Down <chris@chrisdown.name>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel: cgroup: add poll file operation · dc50537b
      Johannes Weiner authored
      Cgroup has a standardized poll/notification mechanism for waking all
      pollers on all fds when a filesystem node changes.  To allow polling for
      custom events, add a .poll callback that can override the default.
      
      This is in preparation for pollable cgroup pressure files which have
      per-fd trigger configurations.
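
      The new cftype callback mirrors the kernfs one (sketch):

              __poll_t (*poll)(struct kernfs_open_file *of,
                               struct poll_table_struct *pt);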
      
      Link: http://lkml.kernel.org/r/20190124211518.244221-3-surenb@google.com
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs: kernfs: add poll file operation · 147e1a97
      Johannes Weiner authored
      Patch series "psi: pressure stall monitors", v3.
      
      Android is adopting psi to detect and remedy memory pressure that
      results in stuttering and decreased responsiveness on mobile devices.
      
      Psi gives us the stall information, but because we're dealing with
      latencies in the millisecond range, periodically reading the pressure
      files to detect stalls in a timely fashion is not feasible.  Psi also
      doesn't aggregate its averages at a high enough frequency right now.
      
      This patch series extends the psi interface such that users can
      configure sensitive latency thresholds and use poll() and friends to be
      notified when these are breached.
      
      As high-frequency aggregation is costly, it implements an aggregation
      method that is optimized for fast, short-interval averaging, and makes
      the aggregation frequency adaptive, such that high-frequency updates
      only happen while monitored stall events are actively occurring.
      
      With these patches applied, Android can monitor for, and ward off,
      mounting memory shortages before they cause problems for the user.  For
      example, using memory stall monitors in the userspace low memory killer
      daemon (lmkd) we can detect mounting pressure and kill less important
      processes before the device becomes visibly sluggish.
      
      In our memory stress testing, psi memory monitors produce roughly 10x
      fewer false positives compared to vmpressure signals.  Having the ability
      to specify multiple triggers for the same psi metric allows other parts
      of the Android framework to monitor the memory state of the device and
      act accordingly.
      
      The new interface is straightforward.  The user opens one of the
      pressure files for writing and writes a trigger description into the
      file descriptor that defines the stall state - some or full, and the
      maximum stall time over a given window of time.  E.g.:
      
              /* Signal when stall time exceeds 100ms of a 1s window */
              char trigger[] = "full 100000 1000000";
              struct pollfd fds;
              int fd;
      
              fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
              write(fd, trigger, sizeof(trigger));
              fds.fd = fd;
              fds.events = POLLPRI;   /* monitor events arrive as POLLPRI */
              while (poll(&fds, 1, -1) >= 0) {
                      ...
              }
              close(fd);
      
      When the monitored stall state is entered, psi adapts its aggregation
      frequency according to what the configured time window requires in order
      to emit event signals in a timely fashion.  Once the stalling subsides,
      aggregation reverts back to normal.
      
      The trigger is associated with the open file descriptor.  To stop
      monitoring, the user only needs to close the file descriptor and the
      trigger is discarded.
      
      Patches 1-4 prepare the psi code for polling support.  Patch 5
      implements the adaptive polling logic, the pressure growth detection
      optimized for short intervals, and hooks up write() and poll() on the
      pressure files.
      
      The patches were developed in collaboration with Johannes Weiner.
      
      This patch (of 5):
      
      Kernfs has a standardized poll/notification mechanism for waking all
      pollers on all fds when a filesystem node changes.  To allow polling for
      custom events, add a .poll callback that can override the default.
      
      This is in preparation for pollable cgroup pressure files which have
      per-fd trigger configurations.
      
      Link: http://lkml.kernel.org/r/20190124211518.244221-2-surenb@google.com
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: capture a page under direct compaction · 5e1f0f09
      Mel Gorman authored
      Compaction is inherently race-prone as a suitable page freed during
      compaction can be allocated by any parallel task.  This patch uses a
      capture_control structure to isolate a page immediately when it is freed
      by a direct compactor in the slow path of the page allocator.  The
      intent is to avoid redundant scanning.
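
      Capture is wired through a small control structure, roughly (see
      mm/internal.h for the real definition):

              struct capture_control {
                      struct compact_control *cc;
                      struct page *page;      /* page captured as it is freed */
              };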
      
                                           5.0.0-rc1              5.0.0-rc1
                                     selective-v3r17          capture-v3r19
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2582.11 (   0.00%)     2563.68 (   0.71%)
      Amean     fault-both-5      4500.26 (   0.00%)     4233.52 (   5.93%)
      Amean     fault-both-7      5819.53 (   0.00%)     6333.65 (  -8.83%)
      Amean     fault-both-12     9321.18 (   0.00%)     9759.38 (  -4.70%)
      Amean     fault-both-18     9782.76 (   0.00%)    10338.76 (  -5.68%)
      Amean     fault-both-24    15272.81 (   0.00%)    13379.55 *  12.40%*
      Amean     fault-both-30    15121.34 (   0.00%)    16158.25 (  -6.86%)
      Amean     fault-both-32    18466.67 (   0.00%)    18971.21 (  -2.73%)
      
      Latency is only moderately affected but the devil is in the details.  A
      closer examination indicates that base page fault latency is reduced but
      latency of huge pages is increased as it takes greater care to succeed.
      Part of the "problem" is that allocation success rates are close to 100%
      even when under pressure and compaction gets harder.
      
                                      5.0.0-rc1              5.0.0-rc1
                                selective-v3r17          capture-v3r19
      Percentage huge-3        96.70 (   0.00%)       98.23 (   1.58%)
      Percentage huge-5        96.99 (   0.00%)       95.30 (  -1.75%)
      Percentage huge-7        94.19 (   0.00%)       97.24 (   3.24%)
      Percentage huge-12       94.95 (   0.00%)       97.35 (   2.53%)
      Percentage huge-18       96.74 (   0.00%)       97.30 (   0.58%)
      Percentage huge-24       97.07 (   0.00%)       97.55 (   0.50%)
      Percentage huge-30       95.69 (   0.00%)       98.50 (   2.95%)
      Percentage huge-32       96.70 (   0.00%)       99.27 (   2.65%)
      
      And scan rates are reduced as expected by 6% for the migration scanner
      and 29% for the free scanner indicating that there is less redundant
      work.
      
      Compaction migrate scanned    20815362    19573286
      Compaction free scanned       16352612    11510663
      
      [mgorman@techsingularity.net: remove redundant check]
        Link: http://lkml.kernel.org/r/20190201143853.GH9565@techsingularity.net
      Link: http://lkml.kernel.org/r/20190118175136.31341-23-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: be selective about what pageblocks to clear skip hints · e332f741
      Mel Gorman authored
      Pageblock hints are cleared when compaction restarts or kswapd makes
      enough progress that it can sleep but it's over-eager in that the bit is
      cleared for migration sources with no LRU pages and migration targets
      with no free pages.  As pageblock skip hint flushes are relatively rare
      and out-of-band with respect to kswapd, this patch makes a few more
      expensive checks to see if it's appropriate to even clear the bit.
      Every pageblock that is not cleared will avoid 512 pages being scanned
      unnecessarily on x86-64.
      
      The impact is variable with different workloads showing small
      differences in latency, success rates and scan rates.  This is expected
      as clearing the hints is not that common but doing a small amount of
      work out-of-band to avoid a large amount of work in-band later is
      generally a good thing.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-22-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      [cai@lca.pw: no stuck in __reset_isolation_pfn()]
        Link: http://lkml.kernel.org/r/20190206034732.75687-1-cai@lca.pw
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: sample pageblocks for free pages · 4fca9730
      Mel Gorman authored
      Once fast searching finishes, there is a possibility that the linear
      scanner is scanning full blocks found by the fast scanner earlier.  This
      patch uses an adaptive stride to sample pageblocks for free pages.  The
      more consecutive full pageblocks encountered, the larger the stride
      until a pageblock with free pages is found.  The scanners might meet
      slightly sooner but it is an acceptable risk given that the search of
      the free lists may still encounter the pages and adjust the cached PFN
      of the free scanner accordingly.
      
                                           5.0.0-rc1              5.0.0-rc1
                                    roundrobin-v3r17       samplefree-v3r17
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2752.37 (   0.00%)     2729.95 (   0.81%)
      Amean     fault-both-5      4341.69 (   0.00%)     4397.80 (  -1.29%)
      Amean     fault-both-7      6308.75 (   0.00%)     6097.61 (   3.35%)
      Amean     fault-both-12    10241.81 (   0.00%)     9407.15 (   8.15%)
      Amean     fault-both-18    13736.09 (   0.00%)    10857.63 *  20.96%*
      Amean     fault-both-24    16853.95 (   0.00%)    13323.24 *  20.95%*
      Amean     fault-both-30    15862.61 (   0.00%)    17345.44 (  -9.35%)
      Amean     fault-both-32    18450.85 (   0.00%)    16892.00 (   8.45%)
      
      The latency is mildly improved, offsetting some overhead from earlier
      patches that are prerequisites for the rest of the series.  However, a
      major impact is on the free scan rate with an 82% reduction.
      
                                      5.0.0-rc1      5.0.0-rc1
                               roundrobin-v3r17 samplefree-v3r17
      Compaction migrate scanned    21607271            20116887
      Compaction free scanned       95336406            16668703
      
      It's also the first time in the series where the number of pages scanned
      by the migration scanner is greater than the free scanner due to the
      increased search efficiency.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-21-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: round-robin the order while searching the free lists for a target · dbe2d4e4
      Mel Gorman authored
      As compaction proceeds and creates high-order blocks, the free list
      search gets less efficient as the larger blocks are used as compaction
      targets.  Eventually, the larger blocks will be behind the migration
      scanner for partially migrated pageblocks and the search fails.  This
      patch round-robins what orders are searched so that larger blocks can be
      ignored and find smaller blocks that can be used as migration targets.
      
      The overall impact was small on 1-socket but it avoids corner cases
      where the migration/free scanners meet prematurely or situations where
      many of the pageblocks encountered by the free scanner are almost full
      instead of being properly packed.  Previous testing had indicated that
      without this patch there were occasional large spikes in the free
      scanner.
      
      [dan.carpenter@oracle.com: fix static checker warning]
      Link: http://lkml.kernel.org/r/20190118175136.31341-20-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: reduce premature advancement of the migration target scanner · d097a6f6
      Mel Gorman authored
      The fast isolation of free pages allows the cached PFN of the free
      scanner to advance faster than necessary depending on the contents of
      the free list.  The key is that fast_isolate_freepages() can update
      zone->compact_cached_free_pfn via isolate_freepages_block().  When the
      fast search fails, the linear scan can start from a point that has
      skipped valid migration targets, particularly pageblocks with just
      low-order free pages.  This can cause the migration source/target
      scanners to meet prematurely causing a reset.
      
      This patch starts by avoiding an update of the pageblock skip
      information and cached PFN from isolate_freepages_block() and puts the
      responsibility of updating that information in the callers.  The fast
      scanner will update the cached PFN if and only if it finds a block that
      is higher than the existing cached PFN and sets the skip if the
      pageblock is full or nearly full.  The linear scanner will update
      skipped information and the cached PFN only when a block is completely
      scanned.  The total impact is that the free scanner advances more slowly
      as it is primarily driven by the linear scanner instead of the fast
      search.
      
                                           5.0.0-rc1              5.0.0-rc1
                                     noresched-v3r17         slowfree-v3r17
      Amean     fault-both-3      2965.68 (   0.00%)     3036.75 (  -2.40%)
      Amean     fault-both-5      3995.90 (   0.00%)     4522.24 * -13.17%*
      Amean     fault-both-7      5842.12 (   0.00%)     6365.35 (  -8.96%)
      Amean     fault-both-12     9550.87 (   0.00%)    10340.93 (  -8.27%)
      Amean     fault-both-18    13304.72 (   0.00%)    14732.46 ( -10.73%)
      Amean     fault-both-24    14618.59 (   0.00%)    16288.96 ( -11.43%)
      Amean     fault-both-30    16650.96 (   0.00%)    16346.21 (   1.83%)
      Amean     fault-both-32    17145.15 (   0.00%)    19317.49 ( -12.67%)
      
      The impact to latency is higher than the last version but it appears to
      be due to a slight increase in the free scan rates which is a potential
      side-effect of the patch.  However, this is necessary for later patches
      that are more careful about how pageblocks are treated as earlier
      iterations of those patches hit corner cases where the restarts were
      punishing and very visible.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-19-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: do not consider a need to reschedule as contention · cf66f070
      Mel Gorman authored
      Scanning on large machines can take a considerable length of time and
      eventually need to be rescheduled.  This is treated as an abort event
      but that's not appropriate as the attempt is likely to be retried after
      making numerous checks and taking another cycle through the page
      allocator.  This patch checks the need to reschedule and yields if
      necessary, but then continues the scanning.
      
      The main benefit is reduced scanning when compaction is taking a long
      time or the machine is over-saturated.  It also avoids an unnecessary
      exit of compaction that ends up being retried by the page allocator in
      the outer loop.
      
                                           5.0.0-rc1              5.0.0-rc1
                                    synccached-v3r16        noresched-v3r17
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2958.27 (   0.00%)     2965.68 (  -0.25%)
      Amean     fault-both-5      4091.90 (   0.00%)     3995.90 (   2.35%)
      Amean     fault-both-7      5803.05 (   0.00%)     5842.12 (  -0.67%)
      Amean     fault-both-12     9481.06 (   0.00%)     9550.87 (  -0.74%)
      Amean     fault-both-18    14141.51 (   0.00%)    13304.72 (   5.92%)
      Amean     fault-both-24    16438.00 (   0.00%)    14618.59 (  11.07%)
      Amean     fault-both-30    17531.72 (   0.00%)    16650.96 (   5.02%)
      Amean     fault-both-32    17101.96 (   0.00%)    17145.15 (  -0.25%)
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: rework compact_should_abort as compact_check_resched · cb810ad2
      Mel Gorman authored
      With incremental changes, compact_should_abort no longer makes any
      documented sense.  Rename to compact_check_resched and update the
      associated comments.  There is no benefit other than reducing redundant
      code and making the intent slightly clearer.  It could potentially be
      merged with earlier patches but it just makes the review slightly
      harder.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-17-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: keep cached migration PFNs synced for unusable pageblocks · 8854c55f
      Mel Gorman authored
      Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
      basis that some migrations will fail in ASYNC mode.  However, if the
      cached PFNs match at the start of scanning and pageblocks are skipped
      due to having no isolation candidates, then the sync state does not
      matter.  This patch keeps matching cached PFNs in sync until a pageblock
      with isolation candidates is found.
      
      The actual benefit is marginal given that the sync scanner following the
      async scanner will often skip a number of pageblocks anyway, but that
      skipping is useless work.  Any benefit depends heavily on whether the
      scanners restarted recently.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-16-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: check early for huge pages encountered by the migration scanner · 9bebefd5
      Mel Gorman authored
      When scanning for sources or targets, PageCompound is checked to detect
      huge pages, as they can be skipped quickly, but the check happens
      relatively late, after a lot of setup and checking.  This patch
      short-cuts the check by making it earlier.  The page might still change
      by the time the lock is acquired, but this has less overhead overall.
      The free scanner advances but the migration scanner does not.  Typically
      the free scanner encounters more movable blocks that change state over
      the lifetime of the system and also tends to scan more aggressively, as
      it is actively filling its portion of the physical address space with
      data.  This could change in the future but, for the moment, this worked
      better in practice and incurred fewer scan restarts.
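
      As a sketch of the short-cut (illustrative types, not the kernel's
      struct page or scanner loop):

        struct page_stub { int compound_order; };          /* stand-in */

        /* Per-page step of the migration scanner: test for a huge page
         * first, before any locking or isolation setup, and jump past
         * the whole compound page in one go. */
        static unsigned long scan_step(struct page_stub *page,
                                       unsigned long pfn)
        {
                if (page->compound_order > 0)
                        return pfn + (1UL << page->compound_order);
                /* ... expensive setup, lock, recheck, isolate ... */
                return pfn + 1;
        }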
      
      The impact on latency and allocation success rates is marginal but the
      free scan rates are reduced by 15% and system CPU usage is reduced by
      3.3%.  The 2-socket results are not materially different.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-15-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9bebefd5
    • mm, compaction: finish pageblock scanning on contention · cb2dcaf0
      Mel Gorman authored
      Async migration aborts on spinlock contention, but contention can be
      high when there are multiple compaction attempts and kswapd is active.
      The consequence is that the migration scanners move forward uselessly
      while still contending on locks for longer and leaving suitable
      migration sources behind.

      This patch acquires the lock but tracks when contention occurs.  When
      it does, the current pageblock is finished, as compaction may succeed
      for that block, and only then does compaction abort.  This has a
      variable impact on latency: in some cases useless scanning is avoided
      (reducing latency), but a lock will be contended (increasing latency)
      or a single contended pageblock is scanned that would otherwise have
      been skipped (increasing latency).
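
      A minimal pthread-based sketch of the locking policy (the kernel uses
      spinlocks and its own compact_control, so this only shows the shape of
      the idea):

        #include <pthread.h>
        #include <stdbool.h>

        struct cc_stub { bool contended; };

        /* Take the lock unconditionally but remember that we contended;
         * the caller finishes the current pageblock before aborting. */
        static void compact_lock_sketch(pthread_mutex_t *lock,
                                        struct cc_stub *cc)
        {
                if (pthread_mutex_trylock(lock) == 0)
                        return;                 /* uncontended fast path */
                cc->contended = true;           /* record it, don't abort */
                pthread_mutex_lock(lock);       /* still acquire the lock */
        }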
      
                                           5.0.0-rc1              5.0.0-rc1
                                      norescan-v3r16    finishcontend-v3r16
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3002.07 (   0.00%)     3153.17 (  -5.03%)
      Amean     fault-both-5      4684.47 (   0.00%)     4280.52 (   8.62%)
      Amean     fault-both-7      6815.54 (   0.00%)     5811.50 *  14.73%*
      Amean     fault-both-12    10864.02 (   0.00%)     9276.85 (  14.61%)
      Amean     fault-both-18    12247.52 (   0.00%)    11032.67 (   9.92%)
      Amean     fault-both-24    15683.99 (   0.00%)    14285.70 (   8.92%)
      Amean     fault-both-30    18620.02 (   0.00%)    16293.76 *  12.49%*
      Amean     fault-both-32    19250.28 (   0.00%)    16721.02 *  13.14%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                 norescan-v3r16    finishcontend-v3r16
      Percentage huge-1         0.00 (   0.00%)        0.00 (   0.00%)
      Percentage huge-3        95.00 (   0.00%)       96.82 (   1.92%)
      Percentage huge-5        94.22 (   0.00%)       95.40 (   1.26%)
      Percentage huge-7        92.35 (   0.00%)       95.92 (   3.86%)
      Percentage huge-12       91.90 (   0.00%)       96.73 (   5.25%)
      Percentage huge-18       89.58 (   0.00%)       96.77 (   8.03%)
      Percentage huge-24       90.03 (   0.00%)       96.05 (   6.69%)
      Percentage huge-30       89.14 (   0.00%)       96.81 (   8.60%)
      Percentage huge-32       90.58 (   0.00%)       97.41 (   7.54%)
      
      There is a variable impact that is mostly good on latency, while
      allocation success rates are slightly higher.  System CPU usage is
      reduced by about 10%, but the scan rate impact is mixed:
      
      Compaction migrate scanned    27997659.00    20148867
      Compaction free scanned      120782791.00   118324914
      
      Migration scan rates are reduced by 28%, which is expected, as a
      pageblock is now used by the async scanner instead of skipped.  The
      impact on the free scanner is known to be variable.  Overall, the
      primary justification for this patch is that completing the scan of a
      pageblock is very important for later patches.
      
      [yuehaibing@huawei.com: fix unused variable warning]
      Link: http://lkml.kernel.org/r/20190118175136.31341-14-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb2dcaf0
    • mm, compaction: avoid rescanning the same pageblock multiple times · 804d3121
      Mel Gorman authored
      Pageblocks are marked for skip when no pages are isolated after a scan.
      However, it is possible to hit corner cases where the migration scanner
      gets stuck near the boundary between the source and target scanners.
      Due to pages being migrated in blocks of COMPACT_CLUSTER_MAX, pages
      that are migrated can be reallocated before the pageblock is complete.
      The pageblock is not necessarily skipped, so it can be rescanned
      multiple times.  Similarly, a pageblock with some dirty/writeback pages
      may fail to migrate and be rescanned until writeback completes, which
      is wasteful.
      
      This patch tracks whether a pageblock is being rescanned.  If so, the
      entire pageblock will be migrated as one operation.  This narrows the
      race window during which pages can be reallocated during migration.
      Secondly, if there are pages that cannot be isolated, the pageblock
      will still be fully scanned and marked for skipping.  On the second
      rescan, the pageblock skip bit is set and the migration scanner makes
      progress.
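
      A small C sketch of the rescan handling (field and constant names are
      illustrative, not the kernel's):

        #include <stdbool.h>

        #define CLUSTER_MAX 32     /* stand-in for COMPACT_CLUSTER_MAX */

        struct cc_rescan { bool rescan; };

        /* On a rescan, isolate the whole pageblock as one operation,
         * narrowing the window in which migrated pages can be
         * reallocated behind the scanner. */
        static unsigned long isolate_limit(const struct cc_rescan *cc,
                                           unsigned long block_pages)
        {
                return cc->rescan ? block_pages : CLUSTER_MAX;
        }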
      
                                           5.0.0-rc1              5.0.0-rc1
                                      findfree-v3r16         norescan-v3r16
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3200.68 (   0.00%)     3002.07 (   6.21%)
      Amean     fault-both-5      4847.75 (   0.00%)     4684.47 (   3.37%)
      Amean     fault-both-7      6658.92 (   0.00%)     6815.54 (  -2.35%)
      Amean     fault-both-12    11077.62 (   0.00%)    10864.02 (   1.93%)
      Amean     fault-both-18    12403.97 (   0.00%)    12247.52 (   1.26%)
      Amean     fault-both-24    15607.10 (   0.00%)    15683.99 (  -0.49%)
      Amean     fault-both-30    18752.27 (   0.00%)    18620.02 (   0.71%)
      Amean     fault-both-32    21207.54 (   0.00%)    19250.28 *   9.23%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                 findfree-v3r16         norescan-v3r16
      Percentage huge-3        96.86 (   0.00%)       95.00 (  -1.91%)
      Percentage huge-5        93.72 (   0.00%)       94.22 (   0.53%)
      Percentage huge-7        94.31 (   0.00%)       92.35 (  -2.08%)
      Percentage huge-12       92.66 (   0.00%)       91.90 (  -0.82%)
      Percentage huge-18       91.51 (   0.00%)       89.58 (  -2.11%)
      Percentage huge-24       90.50 (   0.00%)       90.03 (  -0.52%)
      Percentage huge-30       91.57 (   0.00%)       89.14 (  -2.65%)
      Percentage huge-32       91.00 (   0.00%)       90.58 (  -0.46%)
      
      There is a negligible difference, but this was likely a run in which
      the specific corner case was not hit.  A previous run of the same
      patch, based on an earlier iteration of the series, showed large
      differences: migration scan rates could be halved when the corner case
      was hit.

      The specific corner case in which migration scan rates go through the
      roof, due to a dirty/writeback pageblock located at the boundary of the
      migration/free scanners, did not happen in this case.  When it does
      happen, the scan rates multiply by massive margins.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-13-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      804d3121
    • mm, compaction: use free lists to quickly locate a migration target · 5a811889
      Mel Gorman authored
      Similar to the migration scanner, this patch uses the free lists to
      quickly locate a migration target.  The search is different in that
      lower orders will be searched for a suitably high PFN if necessary, but
      the search is still bound.  This is justified on the grounds that the
      free scanner typically scans linearly much more than the migration
      scanner.

      If a free page is found, it is isolated and compaction continues if
      enough pages were isolated.  For SYNC* scanning, the full pageblock is
      scanned for any remaining free pages so that it can be marked for
      skipping in the near future.
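
      The bounded free-list search might look like this C sketch (an array
      stands in for the buddy free lists; the real search also honours
      isolation limits and zone boundaries):

        #define NR_ORDERS 11               /* stand-in for MAX_ORDER */

        struct free_area_stub { unsigned long *pfns; int nr; };

        /* Walk orders from high to low looking for a free page with a
         * suitably high PFN; 0 means "fall back to the linear scan". */
        static unsigned long fast_find_target(struct free_area_stub area[],
                                              unsigned long min_pfn,
                                              int budget)
        {
                for (int order = NR_ORDERS - 1; order >= 0; order--)
                        for (int i = 0; i < area[order].nr; i++) {
                                if (budget-- <= 0)
                                        return 0;  /* keep search bounded */
                                if (area[order].pfns[i] >= min_pfn)
                                        return area[order].pfns[i];
                        }
                return 0;
        }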
      
      1-socket thpfioscale
                                           5.0.0-rc1              5.0.0-rc1
                                       isolmig-v3r15         findfree-v3r16
      Amean     fault-both-3      3024.41 (   0.00%)     3200.68 (  -5.83%)
      Amean     fault-both-5      4749.30 (   0.00%)     4847.75 (  -2.07%)
      Amean     fault-both-7      6454.95 (   0.00%)     6658.92 (  -3.16%)
      Amean     fault-both-12    10324.83 (   0.00%)    11077.62 (  -7.29%)
      Amean     fault-both-18    12896.82 (   0.00%)    12403.97 (   3.82%)
      Amean     fault-both-24    13470.60 (   0.00%)    15607.10 * -15.86%*
      Amean     fault-both-30    17143.99 (   0.00%)    18752.27 (  -9.38%)
      Amean     fault-both-32    17743.91 (   0.00%)    21207.54 * -19.52%*
      
      The impact on latency is variable, but the search is optimistic and
      sensitive to the exact system state.  Success rates are similar, but
      the major impact is on the rate of scanning:
      
                                      5.0.0-rc1      5.0.0-rc1
                                  isolmig-v3r15 findfree-v3r16
      Compaction migrate scanned    25646769          29507205
      Compaction free scanned      201558184         100359571
      
      The free scan rates are reduced by 50%.  The 2-socket reductions for
      the free scanner are more dramatic, which likely reflects that the
      machine has more memory.
      
      [dan.carpenter@oracle.com: fix static checker warning]
      [vbabka@suse.cz: correct number of pages scanned for lower orders]
      Link: http://lkml.kernel.org/r/20190118175136.31341-12-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5a811889
    • mm, compaction: keep migration source private to a single compaction instance · e380bebe
      Mel Gorman authored
      Due to either a fast search of the free list or a linear scan, it is
      possible for multiple compaction instances to pick the same pageblock
      for migration.  This is lucky for one scanner but means increased
      scanning for all the others.  It also allows a race between requests
      over which one first allocates the resulting free block.

      This patch tests and updates the pageblock skip for the migration
      scanner carefully.  When isolating a block, it checks the skip bit and
      skips the block if it is already in use.  Once the zone lock is
      acquired, the bit is rechecked so that only one scanner can set the
      pageblock skip for exclusive use.  Any scanner that loses the race
      continues with a linear scan.  The skip bit is still set if no pages
      can be isolated in a range.  While this may result in redundant
      scanning, it avoids unnecessarily acquiring the zone lock when there
      are no suitable migration sources.
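
      The exclusive claim can be modelled as a test-and-set on the skip bit,
      as in this hedged C sketch (the kernel rechecks under the zone lock
      rather than using C11 atomics):

        #include <stdatomic.h>
        #include <stdbool.h>

        /* skip must be initialised with ATOMIC_FLAG_INIT */
        struct pageblock_stub { atomic_flag skip; };

        /* Returns true for the one scanner that wins the block;
         * losers fall back to the linear scan. */
        static bool claim_pageblock(struct pageblock_stub *pb)
        {
                return !atomic_flag_test_and_set(&pb->skip);
        }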
      
      1-socket thpscale
                                           5.0.0-rc1              5.0.0-rc1
                                       findmig-v3r15          isolmig-v3r15
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3390.40 (   0.00%)     3024.41 (  10.80%)
      Amean     fault-both-5      5082.28 (   0.00%)     4749.30 (   6.55%)
      Amean     fault-both-7      7012.51 (   0.00%)     6454.95 (   7.95%)
      Amean     fault-both-12    11346.63 (   0.00%)    10324.83 (   9.01%)
      Amean     fault-both-18    15324.19 (   0.00%)    12896.82 *  15.84%*
      Amean     fault-both-24    16088.50 (   0.00%)    13470.60 *  16.27%*
      Amean     fault-both-30    18723.42 (   0.00%)    17143.99 (   8.44%)
      Amean     fault-both-32    18612.01 (   0.00%)    17743.91 (   4.66%)
      
                                      5.0.0-rc1              5.0.0-rc1
                                  findmig-v3r15          isolmig-v3r15
      Percentage huge-3        89.83 (   0.00%)       92.96 (   3.48%)
      Percentage huge-5        91.96 (   0.00%)       93.26 (   1.41%)
      Percentage huge-7        92.85 (   0.00%)       93.63 (   0.84%)
      Percentage huge-12       92.74 (   0.00%)       92.80 (   0.07%)
      Percentage huge-18       91.71 (   0.00%)       91.62 (  -0.10%)
      Percentage huge-24       92.13 (   0.00%)       91.50 (  -0.69%)
      Percentage huge-30       93.79 (   0.00%)       92.73 (  -1.13%)
      Percentage huge-32       91.27 (   0.00%)       91.94 (   0.74%)
      
      This shows a reasonable reduction in latency, as multiple compaction
      scanners no longer operate on the same blocks, with a similar
      allocation success rate.

      Compaction migrate scanned    41093126    25646769

      Migration scan rates are reduced by 38%.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-11-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e380bebe
    • mm, compaction: use free lists to quickly locate a migration source · 70b44595
      Mel Gorman authored
      The migration scanner is a linear scan of a zone with a potentially
      large search space.  Furthermore, many pageblocks are unusable, such as
      those filled with reserved pages or partially filled with pages that
      cannot migrate.  These still get scanned in the common case of
      allocating a THP, and the cost accumulates.
      
      The patch uses a partial search of the free lists to locate a migration
      source candidate that is marked as MOVABLE when allocating a THP.  It
      prefers picking a block with a larger number of free pages already, on
      the basis that there are fewer pages to migrate to free the entire
      block.  The lowest PFN found during the searches is tracked as the
      start of the linear search used after the first search of the free list
      fails.  After a search, the free list is shuffled so that the next
      search will not encounter the same page.  If the search fails, the
      subsequent searches will be shorter and the linear scanner is used.

      If this search fails, or if the request is for a small or
      unmovable/reclaimable allocation, then the linear scanner is still
      used.  It is somewhat pointless to use the list search in those cases:
      small free pages must be used for the search, and there is no guarantee
      that contiguous movable pages are located within that block.
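
      A sketch of the source search in C (an array stands in for the free
      list; the real code also rotates the list after a search and caps the
      number of entries examined):

        #include <limits.h>

        struct free_hint {
                unsigned long pfn;
                int movable;
                int free_in_block;
        };

        /* Prefer MOVABLE blocks with the most free pages (fewest pages
         * left to migrate) and remember the lowest PFN seen as the
         * restart point for the linear scanner. */
        static unsigned long fast_find_source(struct free_hint hint[], int n,
                                              unsigned long *linear_start)
        {
                unsigned long best = 0, lowest = ULONG_MAX;
                int best_free = -1;

                for (int i = 0; i < n; i++) {
                        if (hint[i].pfn < lowest)
                                lowest = hint[i].pfn;
                        if (hint[i].movable &&
                            hint[i].free_in_block > best_free) {
                                best_free = hint[i].free_in_block;
                                best = hint[i].pfn;
                        }
                }
                if (lowest != ULONG_MAX)
                        *linear_start = lowest;
                return best;    /* 0 => use the linear scanner */
        }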
      
                                           5.0.0-rc1              5.0.0-rc1
                                       noboost-v3r10          findmig-v3r15
      Amean     fault-both-3      3771.41 (   0.00%)     3390.40 (  10.10%)
      Amean     fault-both-5      5409.05 (   0.00%)     5082.28 (   6.04%)
      Amean     fault-both-7      7040.74 (   0.00%)     7012.51 (   0.40%)
      Amean     fault-both-12    11887.35 (   0.00%)    11346.63 (   4.55%)
      Amean     fault-both-18    16718.19 (   0.00%)    15324.19 (   8.34%)
      Amean     fault-both-24    21157.19 (   0.00%)    16088.50 *  23.96%*
      Amean     fault-both-30    21175.92 (   0.00%)    18723.42 *  11.58%*
      Amean     fault-both-32    21339.03 (   0.00%)    18612.01 *  12.78%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                  noboost-v3r10          findmig-v3r15
      Percentage huge-3        86.50 (   0.00%)       89.83 (   3.85%)
      Percentage huge-5        92.52 (   0.00%)       91.96 (  -0.61%)
      Percentage huge-7        92.44 (   0.00%)       92.85 (   0.44%)
      Percentage huge-12       92.98 (   0.00%)       92.74 (  -0.25%)
      Percentage huge-18       91.70 (   0.00%)       91.71 (   0.02%)
      Percentage huge-24       91.59 (   0.00%)       92.13 (   0.60%)
      Percentage huge-30       90.14 (   0.00%)       93.79 (   4.04%)
      Percentage huge-32       90.03 (   0.00%)       91.27 (   1.37%)
      
      This shows an improvement in allocation latencies with similar
      allocation success rates.  While not presented, there was a 31%
      reduction in migration scanning and an 8% reduction in system CPU
      usage.  A 2-socket machine showed similar benefits.
      A 2-socket machine showed similar benefits.
      
      [mgorman@techsingularity.net: several fixes]
        Link: http://lkml.kernel.org/r/20190204120111.GL9565@techsingularity.net
      [vbabka@suse.cz: migrate block that was found-fast, some optimisations]
      Link: http://lkml.kernel.org/r/20190118175136.31341-10-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <Vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70b44595