15 Apr, 2015 (40 commits)
    • dax: use pfn_mkwrite to update c/mtime + freeze protection · 0e3b210c
      Boaz Harrosh authored
      From: Yigal Korman <yigal@plexistor.com>
      
      [v1]
      Without this patch, c/mtime is not updated correctly when an mmap'ed page
      is first read from and then written to.
      
      A new xfstest is submitted for testing this (generic/080)
      
      [v2]
      Jan Kara has pointed out that if we add the sb_start/end_pagefault pair in
      the new pfn_mkwrite handler, we also fix another bug: a user could start
      writing to the page while the filesystem is frozen.
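
      A rough sketch of the shape of such a handler (a sketch, not necessarily
      the verbatim patch) brackets the timestamp update with the freeze
      protection:

          static int dax_pfn_mkwrite(struct vm_area_struct *vma,
                                     struct vm_fault *vmf)
          {
                  struct super_block *sb = file_inode(vma->vm_file)->i_sb;

                  /* block the write fault while the filesystem is frozen */
                  sb_start_pagefault(sb);
                  /* the mmap write dirtied the inode: update c/mtime */
                  file_update_time(vma->vm_file);
                  sb_end_pagefault(sb);

                  /* no struct page here; do_wp_page() makes the pte writable */
                  return VM_FAULT_NOPAGE;
          }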
      Signed-off-by: Yigal Korman <yigal@plexistor.com>
      Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: new pfn_mkwrite same as page_mkwrite for VM_PFNMAP · dd906184
      Boaz Harrosh authored
      This will allow a filesystem that uses VM_PFNMAP | VM_MIXEDMAP (no page
      structs) to get notified when an access is a write to a read-only PFN.

      This can happen if we mmap() a file, then first mmap-read from it to
      page-in a read-only PFN, and then mmap-write to the same page.
      
      We need this functionality to fix a DAX bug, where in the scenario above
      we fail to set ctime/mtime though we modified the file.  An xfstest is
      attached to this patchset that shows the failure and the fix.  (A DAX
      patch will follow)
      
      This functionality is extra important for us, because upon dirtying of a
      pmem page we also want to RDMA the page to a remote cluster node.
      
      We define a new pfn_mkwrite and do not reuse page_mkwrite because
        1 - The name ;-)
        2 - But mainly because reusing page_mkwrite would take a very long and
            tedious audit of all page_mkwrite implementations of
            VM_MIXEDMAP/VM_PFNMAP users, to make sure they do not crash on a
            missing page.  For example, the current DAX code (which this is for)
            would crash.
            If we wanted to reuse page_mkwrite, we would need to first patch all
            users so they do not crash on a missing page, and only then enable
            this patch.  But even then I would not sleep well at night.
            Adding a new vector is the safest thing to do, and it is not that
            expensive: an extra pointer in a static function vector per driver.
            The new vector is also better for performance, because otherwise we
            would call every current kernel vector just so it can check that
            there is no page, do nothing, and return.
      
      No need to call it from do_shared_fault because do_wp_page is called to
      change pte permissions anyway.
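
      For a VM_PFNMAP/VM_MIXEDMAP driver or filesystem, wiring this up is just
      one more entry in its vm_operations_struct (the example_* names below are
      hypothetical, for illustration only):

          static const struct vm_operations_struct example_vm_ops = {
                  .fault        = example_fault,
                  .page_mkwrite = example_page_mkwrite,
                  /* new: called instead of ->page_mkwrite when a write hits a
                   * read-only PFN with no struct page behind it */
                  .pfn_mkwrite  = example_pfn_mkwrite,
          };

      The write-protect fault path (do_wp_page()) invokes it for shared
      VM_PFNMAP/VM_MIXEDMAP mappings before making the pte writable.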
      Signed-off-by: Yigal Korman <yigal@plexistor.com>
      Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory: also print a_ops->readpage in print_bad_pte() · 2682582a
      Konstantin Khlebnikov authored
      A lot of filesystems use generic_file_mmap() and filemap_fault(), so
      f_op->mmap and vm_ops->fault aren't enough to identify the filesystem.
      
      This prints file name, vm_ops->fault, f_op->mmap and a_ops->readpage
      (which is almost always implemented and filesystem-specific).
      
      Example:
      
      [   23.676410] BUG: Bad page map in process sh  pte:1b7e6025 pmd:19bbd067
      [   23.676887] page:ffffea00006df980 count:4 mapcount:1 mapping:ffff8800196426c0 index:0x97
      [   23.677481] flags: 0x10000000000000c(referenced|uptodate)
      [   23.677896] page dumped because: bad pte
      [   23.678205] addr:00007f52fcb17000 vm_flags:00000075 anon_vma:          (null) mapping:ffff8800196426c0 index:97
      [   23.678922] file:libc-2.19.so fault:filemap_fault mmap:generic_file_readonly_mmap readpage:v9fs_vfs_readpage
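
      The added line is roughly of this shape (a sketch; the exact guards and
      format string may differ from the final patch):

          pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
                   vma->vm_file,
                   vma->vm_ops ? vma->vm_ops->fault : NULL,
                   vma->vm_file ? vma->vm_file->f_op->mmap : NULL,
                   mapping ? mapping->a_ops->readpage : NULL);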
      
      [akpm@linux-foundation.org: use pr_alert, per Kirill]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mempool.c: kasan: poison mempool elements · 92393615
      Andrey Ryabinin authored
      Mempools keep allocated objects in reserve for situations when an ordinary
      allocation cannot be satisfied.  These objects shouldn't be accessed
      before they leave the pool.

      This patch poisons elements when they get into the pool and unpoisons them
      when they leave it.  This lets KASan detect use-after-free of mempool
      elements.
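
      A minimal illustration of the kind of bug this catches (sketch only; the
      pool and sizes are made up, the mempool API is the mainline one):

          mempool_t *pool = mempool_create_kmalloc_pool(4, 128);
          char *elem = mempool_alloc(pool, GFP_KERNEL);

          mempool_free(elem, pool);  /* element returns to the reserve, poisoned */
          elem[0] = 1;               /* use-after-free: KASan now reports this */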
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Tested-by: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Chernenkov <drcheren@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/cma_debug.c: remove blank lines before DEFINE_SIMPLE_ATTRIBUTE() · bda6d330
      Andrew Morton authored
      Like EXPORT_SYMBOL(): the positioning communicates that the macro pertains
      to the immediately preceding function.
      
      Cc: Dmitry Safonov <d.safonov@partner.samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Cc: Weijie Yang <weijie.yang@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Vyacheslav Tyrtov <v.tyrtov@samsung.com>
      Cc: Aleksei Mateosian <a.mateosian@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: cma: add functions to get region pages counters · 2e32b947
      Dmitry Safonov authored
      Here are two functions that provide an interface to compute/get the used
      size and the size of the biggest free chunk in a cma region.  Add that
      information to debugfs.
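
      A sketch of what one of the debugfs getters can look like, assuming the
      usual cma fields (bitmap, count, order_per_bit, lock); not necessarily the
      exact code:

          static int cma_used_get(void *data, u64 *val)
          {
                  struct cma *cma = data;
                  unsigned long used;

                  mutex_lock(&cma->lock);
                  /* count allocated bits, then scale by the per-bit order */
                  used = bitmap_weight(cma->bitmap, cma->count);
                  mutex_unlock(&cma->lock);
                  *val = (u64)used << cma->order_per_bit;
                  return 0;
          }
          DEFINE_SIMPLE_ATTRIBUTE(cma_used_fops, cma_used_get, NULL, "%llu\n");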
      
      [akpm@linux-foundation.org: move debug code from cma.c into cma_debug.c]
      [stefan.strogin@gmail.com: move code from cma_get_used() and cma_get_maxchunk() to cma_used_get() and cma_maxchunk_get()]
      Signed-off-by: Dmitry Safonov <d.safonov@partner.samsung.com>
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Cc: Weijie Yang <weijie.yang@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Vyacheslav Tyrtov <v.tyrtov@samsung.com>
      Cc: Aleksei Mateosian <a.mateosian@samsung.com>
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: cleanup khugepaged startup · 79553da2
      Kirill A. Shutemov authored
      A few trivial cleanups:
      
       - no need to call set_recommended_min_free_kbytes() from
         late_initcall() -- start_khugepaged() calls it;
      
       - no need to call set_recommended_min_free_kbytes() from
         start_khugepaged() if khugepaged is not started;
      
       - there isn't much point in running start_khugepaged() if we've just
         set transparent_hugepage_flags to zero;
      
       - start_khugepaged() is misnamed -- it also used to stop the thread;
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: uninline and cleanup page-mapping related helpers · e39155ea
      Kirill A. Shutemov authored
      The most-used page->mapping helper -- page_mapping() -- has already been
      uninlined.  Let's also uninline page_rmapping() and page_anon_vma().
      Depending on configuration, this saves us around 400 bytes of text:
      
         text	   data	    bss	    dec	    hex	filename
       660318	  99254	 410000	1169572	 11d8a4	mm/built-in.o-before
       659854	  99254	 410000	1169108	 11d6d4	mm/built-in.o
      
      I also tried to make the code a bit cleaner.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: cma: add trace events for CMA allocations and freeings · 99e8ea6c
      Stefan Strogin authored
      Add trace events for cma_alloc() and cma_release().
      
      The cma_alloc tracepoint is used for both successful and failed
      allocations; in case of allocation failure, pfn=-1UL is stored and printed.
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Nazarewicz <mpn@google.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/mm.h: simplify flag check · cdd7875e
      Borislav Petkov authored
      Flip the flag test so that it is the simplest.  No functional change, just
      a small readability improvement:
      
      No code changed:
      
        # arch/x86/kernel/sys_x86_64.o:
      
         text    data     bss     dec     hex filename
         1551      24       0    1575     627 sys_x86_64.o.before
         1551      24       0    1575     627 sys_x86_64.o.after
      
      md5:
         70708d1b1ad35cc891118a69dc1a63f9  sys_x86_64.o.before.asm
         70708d1b1ad35cc891118a69dc1a63f9  sys_x86_64.o.after.asm
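
      For illustration, a flip of this kind in vm_unmapped_area() (a sketch of
      the before/after shape, not the verbatim diff):

          /* before */
          if (!(info->flags & VM_UNMAPPED_AREA_TOPDOWN))
                  return unmapped_area(info);
          else
                  return unmapped_area_topdown(info);

          /* after: test the flag directly, no negation */
          if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
                  return unmapped_area_topdown(info);
          else
                  return unmapped_area(info);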
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memblock.c: add debug output for memblock_add() · 6a4055bc
      Alexander Kuleshov authored
      memblock_reserve() calls memblock_reserve_region() which prints debugging
      information if 'memblock=debug' was passed on the command line.  This
      patch adds the same behaviour, but for the memblock_add() function.
      
      [akpm@linux-foundation.org: s/memblock_memory/memblock_add/ in message]
      Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Emil Medve <Emilian.Medve@freescale.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: cleanup using paeg_huge_active() · 7e1f049e
      Naoya Horiguchi authored
      Now that we have easy access to hugepages' activeness, the existing
      helpers to get that information can be cleaned up.
      
      [akpm@linux-foundation.org: s/PageHugeActive/page_huge_active/]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: introduce page_huge_active · bcc54222
      Naoya Horiguchi authored
      We are not safe from calling isolate_huge_page() on a hugepage
      concurrently, which can leave the victim hugepage in an invalid state and
      trigger a BUG_ON().

      The root problem is that we don't have any easily accessible information
      in the struct page about a hugepage's activeness.  Note that hugepages'
      activeness means just being linked to hstate->hugepage_activelist, which
      is not the same as normal pages' activeness represented by the PageActive
      flag.

      Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
      before isolation, so let's do the same for hugetlb with a new
      page_huge_active().

      set/clear_page_huge_active() should be called within hugetlb_lock.  But
      hugetlb_cow() and hugetlb_no_page() don't do this, which is justified
      because in these functions set_page_huge_active() is called right after
      the hugepage is allocated and no other thread tries to isolate it.
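
      A sketch of the idea (the real patch encodes activeness on the head page;
      treat this as an outline rather than the exact code):

          /* a hugepage is "active" iff it is linked to hstate->hugepage_activelist */
          bool page_huge_active(struct page *page)
          {
                  VM_BUG_ON_PAGE(!PageHuge(page), page);
                  return PageHead(page) && PagePrivate(&page[1]);
          }

      isolate_huge_page() can then precheck it under hugetlb_lock, the same way
      isolate_lru_page() prechecks PageLRU, and bail out instead of isolating a
      hugepage that is not (yet) on the active list.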
      
      [akpm@linux-foundation.org: s/PageHugeActive/page_huge_active/, make it return bool]
      [fengguang.wu@intel.com: set_page_huge_active() can be static]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: don't call __page_cache_release for hugetlb · 822fc613
      Naoya Horiguchi authored
      __put_compound_page() calls __page_cache_release() to do some freeing
      work, but it's obviously for thps, not for hugetlb.  In practice this
      doesn't matter, because PageLRU is always cleared and page->mem_cgroup is
      always NULL for hugetlb, but it's not correct and has potential risks, so
      let's make the call conditional.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmap.c: use while instead of if+goto · 9fcd1457
      Rasmus Villemoes authored
      The creators of the C language gave us the while keyword. Let's use
      that instead of synthesizing it from if+goto.
      
      Made possible by 6597d783 ("mm/mmap.c: replace find_vma_prepare()
      with clearer find_vma_links()").
      
      [akpm@linux-foundation.org: fix 80-col overflows]
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, selftests: test return value of munmap for MAP_HUGETLB memory · 215ba781
      David Rientjes authored
      When MAP_HUGETLB memory is unmapped, the length must be hugepage aligned,
      otherwise it fails with -EINVAL.
      
      All tests currently behave correctly, but it's better to explicitly test
      the return value for completeness and document the requirement, especially
      if users copy map_hugetlb.c as a sample implementation.
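
      The pattern being tested, in miniature (userspace sketch; the mapping
      length must stay hugepage aligned, otherwise munmap() fails with EINVAL):

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <sys/mman.h>

          #define LENGTH (256UL * 1024 * 1024)   /* multiple of the huge page size */

          int main(void)
          {
                  void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                  if (addr == MAP_FAILED) {
                          perror("mmap");
                          return 1;
                  }
                  /* check the munmap return value instead of ignoring it */
                  if (munmap(addr, LENGTH)) {
                          perror("munmap");
                          return 1;
                  }
                  return 0;
          }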
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Eric B Munson <emunson@akamai.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, doc: cleanup and clarify munmap behavior for hugetlb memory · 80d6b94b
      David Rientjes authored
      munmap(2) of hugetlb memory requires a length that is hugepage aligned,
      otherwise it may fail.  Add this to the documentation.
      
      This also cleans up the documentation and separates it into logical units:
      one part refers to MAP_HUGETLB and another part refers to requirements for
      shared memory segments.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Eric B Munson <emunson@akamai.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: do not adjust zone water marks if khugepaged is not started · ae7efa50
      Kirill A. Shutemov authored
      set_recommended_min_free_kbytes() adjusts zone watermarks to be suitable
      for khugepaged.  We avoid doing this if khugepaged is disabled, but we
      don't catch the case when khugepaged fails to start.
      
      Let's address this by checking khugepaged_thread instead of
      khugepaged_enabled() in set_recommended_min_free_kbytes().
      It's NULL if the kernel thread is stopped or failed to start.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: handle errors in hugepage_init() properly · 65ebb64f
      Kirill A. Shutemov authored
      We miss error handling in a few cases in hugepage_init().  Let's fix that.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, mempool: poison elements backed by slab allocator · bdfedb76
      David Rientjes authored
      Mempools keep elements in a reserved pool for contexts in which allocation
      may not be possible.  When an element is allocated from the reserved pool,
      its memory contents are the same as when it was added to the reserved pool.
      
      Because of this, elements lack any free poisoning to detect use-after-free
      errors.
      
      This patch adds free poisoning for elements backed by the slab allocator.
      This is possible because the mempool layer knows the object size of each
      element.
      
      When an element is added to the reserved pool, it is poisoned with
      POISON_FREE.  When it is removed from the reserved pool, the contents are
      checked for POISON_FREE.  If there is a mismatch, a warning is emitted to
      the kernel log.
      
      This is only effective for configs with CONFIG_DEBUG_SLAB or
      CONFIG_SLUB_DEBUG_ON.
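
      Roughly, for slab- and kmalloc-backed pools (a sketch; the real helpers
      also split the poisoning and checking into separate functions and report
      mismatches with the object address):

          static void poison_element(mempool_t *pool, void *element)
          {
                  /* only pools whose element size the mempool layer can know */
                  if (pool->alloc == mempool_alloc_slab ||
                      pool->alloc == mempool_kmalloc) {
                          size_t size = ksize(element);
                          u8 *obj = element;

                          memset(obj, POISON_FREE, size - 1);
                          obj[size - 1] = POISON_END;
                  }
          }

      On allocation from the reserve, the corresponding check verifies the
      POISON_FREE pattern and emits a warning if the element was written to
      while it sat in the pool.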
      
      [fabio.estevam@freescale.com: use '%zu' for printing 'size_t' variable]
      [arnd@arndb.de: add missing include]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, mempool: disallow mempools based on slab caches with constructors · e244c9e6
      David Rientjes authored
      All occurrences of mempools based on slab caches with object constructors
      have been removed from the tree, so disallow creating them.
      
      We can only dereference mem->ctor in mm/mempool.c without including
      mm/slab.h in include/linux/mempool.h.  So simply note the restriction,
      just like the comment restricting usage of __GFP_ZERO, and warn on kernels
      with CONFIG_DEBUG_VM if such a mempool is allocated from.
      
      We don't want to incur this check on every element allocation, so use
      VM_BUG_ON().
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs, jfs: remove slab object constructor · ee146245
      David Rientjes authored
      Mempools based on slab caches with object constructors are risky because
      element allocation can happen either from the slab cache itself, meaning
      the constructor is properly called before returning, or from the mempool
      reserve pool, meaning the constructor is not called before returning,
      depending on the allocation context.
      
      For this reason, we should disallow creating mempools based on slab caches
      that have object constructors.  Callers of mempool_alloc() will be
      responsible for properly initializing the returned element.
      
      Then, it doesn't matter if the element came from the slab cache or the
      mempool reserved pool.
      
      The only occurrence of a mempool being based on a slab cache with an
      object constructor in the tree is in fs/jfs/jfs_metapage.c.  Remove it and
      properly initialize the element in alloc_metapage().
      
      At the same time, META_free is never used, so remove it as well.
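
      After the change, the initialization that used to live in the slab
      constructor moves into alloc_metapage() itself; roughly (field list taken
      from jfs_metapage, treat it as a sketch):

          static struct metapage *alloc_metapage(gfp_t gfp_mask)
          {
                  struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);

                  if (mp) {
                          /* formerly done by the slab constructor */
                          mp->lid = 0;
                          mp->lsn = 0;
                          mp->data = NULL;
                          mp->clsn = 0;
                          mp->log = NULL;
                          init_waitqueue_head(&mp->wait);
                  }
                  return mp;
          }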
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove rest of ACCESS_ONCE() usages · 4db0c3c2
      Jason Low authored
      We converted some of the usages of ACCESS_ONCE to READ_ONCE in the mm/
      tree since ACCESS_ONCE doesn't work reliably on non-scalar types.

      This patch removes the rest of the usages of ACCESS_ONCE and uses the new
      READ_ONCE API for the read accesses.  This makes things cleaner, instead
      of using separate/multiple sets of APIs.
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use READ_ONCE() for non-scalar types · 9d8c47e4
      Jason Low authored
      Commit 38c5ce93 ("mm/gup: Replace ACCESS_ONCE with READ_ONCE")
      converted ACCESS_ONCE usage in gup_pmd_range() to READ_ONCE, since
      ACCESS_ONCE doesn't work reliably on non-scalar types.
      
      This patch also fixes the other ACCESS_ONCE usages in gup_pte_range()
      and __get_user_pages_fast() in mm/gup.c.
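
      The conversions are all of this shape (pte_t can be a multi-word
      structure, which is exactly why ACCESS_ONCE's scalar cast is unreliable
      here):

          pte_t pte = READ_ONCE(*ptep);   /* was: ACCESS_ONCE(*ptep) */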
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap.c: clean up goto just return ERR_PTR · 6cd57613
      Derek authored
      As suggested by Kirill, the "goto"s in vma_to_resize() aren't necessary;
      just change them to explicit returns.
      Signed-off-by: Derek Che <crquan@ymail.com>
      Suggested-by: "Kirill A. Shutemov" <kirill@shutemov.name>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mremap should return -ENOMEM when __vm_enough_memory fail · 12215182
      Derek authored
      Recently I straced bash's behaviour in this dd-zero-pipe-to-read test, as
      part of testing under vm.overcommit_memory=2 (OVERCOMMIT_NEVER mode):
      
          # dd if=/dev/zero | read x
      
      The bash subshell calls mremap to reallocate more and more memory until it
      finally fails with -ENOMEM (as I expected), or gets killed by the system
      OOM killer (which should not happen under OVERCOMMIT_NEVER mode).  But the
      mremap system call actually failed with -EFAULT, which was a surprise to
      me -- I think it's supposed to be -ENOMEM.  I then wrote this piece of C
      code, and testing confirmed it:
      https://gist.github.com/crquan/326bde37e1ddda8effe5
      
          $ ./remap
          allocated one page @0x7f686bf71000, (PAGE_SIZE: 4096)
          grabbed 7680512000 bytes of memory (1875125 pages) @ 00007f6690993000.
          mremap failed Bad address (14).
      
      The -EFAULT comes from the security_vm_enough_memory_mm failure branch;
      underneath it calls __vm_enough_memory, which returns only 0 for success
      or -ENOMEM.  So why does vma_to_resize need to return -EFAULT in this
      case?  This sounds like a mistake to me (see the sketch after the history
      below).

      Some more digging into git history:

      1) Before commit 119f657c ("RLIMIT_AS checking fix") in May 1 2005
         (pre 2.6.12 days) it was returning -ENOMEM for this failure;

      2) but commit 119f657c ("untangling do_mremap(), part 1") changed it
         accidentally, to whatever was preserved in the local ret, which
         happened to be -EFAULT from a previous assignment;

      3) then the code refactoring in commit 54f5de70 made it explicitly
         return -EFAULT, which should be wrong.
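
      The fix in vma_to_resize() is then simply to return the right error from
      that branch (a sketch of the resulting code):

          if (vma->vm_flags & VM_ACCOUNT) {
                  unsigned long charged = (new_len - old_len) >> PAGE_SHIFT;
                  if (security_vm_enough_memory_mm(mm, charged))
                          return ERR_PTR(-ENOMEM);        /* was -EFAULT */
          }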
      Signed-off-by: Derek Che <crquan@ymail.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: get rid of dirty bitmap inside vmap_block structure · 7d61bfe8
      Roman Pen authored
      In the original implementation of vm_map_ram, made by Nick Piggin, there
      were two bitmaps: alloc_map and dirty_map.  Neither was used as one would
      expect -- for finding a suitable free hole for the next allocation in a
      block.  vm_map_ram allocates space sequentially in a block and on free
      marks pages as dirty, so freed space can't be reused anymore.

      Actually it would be very interesting to know the real purpose of those
      bitmaps; maybe the implementation was incomplete, etc.

      But a long time ago Zhang Yanfei removed alloc_map with these two commits:
      
        mm/vmalloc.c: remove dead code in vb_alloc
           3fcd76e8
        mm/vmalloc.c: remove alloc_map from vmap_block
           b8e748b6
      
      In this patch I replace dirty_map with two range variables: dirty min and
      max.  These variables store the minimum and maximum position of dirty
      space in a block, since we only need to know the dirty range, not the
      exact position of dirty pages.

      Why was this done?  Several reasons: at first glance it seems that the
      vm_map_ram allocator cares about fragmentation and thus uses bitmaps for
      finding a free hole, but that is not true.  To avoid complexity it seems
      better to use something simple, like min/max range values.  Secondly, the
      code also becomes simpler, without iteration over a bitmap -- just a
      comparison of values in the min and max macros.  Thirdly, the bitmap
      occupies up to 1024 bits (4MB is the maximum size of a block); here I
      replace the whole bitmap with two longs.
      
      Finally vm_unmap_aliases should be slightly faster and the whole
      vmap_block structure occupies less memory.
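
      In outline, the bookkeeping becomes (a sketch, not the full diff; offsets
      are in pages within the block):

          /* vmap_block: instead of DECLARE_BITMAP(dirty_map, VMAP_BBMAP_BITS) */
          unsigned long dirty_min, dirty_max;

          /* vb_free(): freeing just widens the dirty range */
          vb->dirty_min = min(vb->dirty_min, offset);
          vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

      vm_unmap_aliases() can then flush [dirty_min, dirty_max) per block instead
      of walking a bitmap.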
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: Rob Jones <rob.jones@codethink.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: occupy newly allocated vmap block just after allocation · cf725ce2
      Roman Pen authored
      The previous implementation allocates a new vmap block and then repeats
      the search for a free block from the very beginning, iterating over the
      CPU free list.

      Why can this be better?

      1. Allocation can happen on one CPU, but the search can be done on another
         CPU.  In the worst case we preallocate as many vmap blocks as there are
         CPUs on the system.

      2. In the previous patch I added the newly allocated block to the tail of
         the free list, to avoid early exhaustion of virtual space and to give a
         chance to occupy blocks which were allocated a long time ago.  Thus, to
         find the newly allocated block, the whole search sequence would have to
         be repeated, which does not seem efficient.

      In this patch the newly allocated block is occupied right away and the
      address of the virtual space is returned to the caller, so there is no
      need to repeat the search sequence -- the allocation job is done.
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: Rob Jones <rob.jones@codethink.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: fix possible exhaustion of vmalloc space caused by vm_map_ram allocator · 68ac546f
      Roman Pen authored
      Recently I came across high fragmentation in the vm_map_ram allocator: a
      vmap_block has free space, but new blocks still continue to appear.
      Further investigation showed that certain mapping/unmapping sequences can
      exhaust vmalloc space.  On small 32-bit systems that's not a big problem,
      because purging will be called soon on the first allocation failure
      (alloc_vmap_area), but on 64-bit machines, e.g. x86_64, which has 45 bits
      of vmalloc space, that can be a disaster.
      
      1) I came up with a simple allocation sequence, which exhausts virtual
         space very quickly:
      
        while (iters) {
      
                      /* Map/unmap big chunk */
                      vaddr = vm_map_ram(pages, 16, -1, PAGE_KERNEL);
                      vm_unmap_ram(vaddr, 16);
      
                      /* Map/unmap small chunks.
                       *
                       * -1 for hole, which should be left at the end of each block
                       * to keep it partially used, with some free space available */
                      for (i = 0; i < (VMAP_BBMAP_BITS - 16) / 8 - 1; i++) {
                              vaddr = vm_map_ram(pages, 8, -1, PAGE_KERNEL);
                              vm_unmap_ram(vaddr, 8);
                      }
        }
      
      The idea behind it is simple:

       1. We have to map a big chunk, e.g. 16 pages.

       2. Then we have to occupy the remaining space with smaller chunks, i.e.
          8 pages.  At the end a small hole should remain, to keep the block on
          the free list but not let a big chunk occupy the remaining space.

       3. Goto 1 -- an allocation request of 16 pages can't be completed (only 8
          slots are left free in the block after step #2), so a new block will
          be allocated, and all further requests will land in the newly
          allocated block.
      
      To have some measurement numbers for all further tests I set up ftrace and
      enabled 4 basic calls in a function profile:
      
              echo vm_map_ram              > /sys/kernel/debug/tracing/set_ftrace_filter;
              echo alloc_vmap_area        >> /sys/kernel/debug/tracing/set_ftrace_filter;
              echo vm_unmap_ram           >> /sys/kernel/debug/tracing/set_ftrace_filter;
              echo free_vmap_block        >> /sys/kernel/debug/tracing/set_ftrace_filter;
      
      So for this scenario I got these results:
      
      BEFORE (all new blocks are put to the head of a free list)
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                          126000    30683.30 us     0.243 us        30819.36 us
        vm_unmap_ram                        126000    22003.24 us     0.174 us        340.886 us
        alloc_vmap_area                       1000    4132.065 us     4.132 us        0.903 us
      
      AFTER (all new blocks are put to the tail of a free list)
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                          126000    28713.13 us     0.227 us        24944.70 us
        vm_unmap_ram                        126000    20403.96 us     0.161 us        1429.872 us
        alloc_vmap_area                        993    3916.795 us     3.944 us        29.370 us
        free_vmap_block                        992    654.157 us      0.659 us        1.273 us
      
      SUMMARY:
      
      The most interesting numbers in those tables are the numbers of block
      allocations and deallocations: the alloc_vmap_area and free_vmap_block
      calls, which show that before the change blocks were not freed, and
      virtual space and physical memory (vmap_block structure allocations, etc.)
      were consumed.

      The average time spent in vm_map_ram/vm_unmap_ram became slightly better.
      That can be explained by the reasonable number of blocks in the free list,
      which we need to iterate to find a suitable free block.
      
      2) Another scenario is a random allocation:
      
        while (iters) {
      
                      /* Randomly take number from a range [1..32/64] */
                      nr = rand(1, VMAP_MAX_ALLOC);
                      vaddr = vm_map_ram(pages, nr, -1, PAGE_KERNEL);
                      vm_unmap_ram(vaddr, nr);
        }
      
      I chose the Mersenne Twister PRNG to generate a persistent random state,
      to guarantee that both runs have the same random sequence.  For each
      vm_map_ram call a random number from [1..32/64] was taken to represent the
      number of pages to map.
      
      I did 10'000 vm_map_ram calls and got these two tables:
      
      BEFORE (all new blocks are put to the head of a free list)
      
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                           10000    10170.01 us     1.017 us        993.609 us
        vm_unmap_ram                         10000    5321.823 us     0.532 us        59.789 us
        alloc_vmap_area                        420    2150.239 us     5.119 us        3.307 us
        free_vmap_block                         37    159.587 us      4.313 us        134.344 us
      
      AFTER (all new blocks are put to the tail of a free list)
      
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                           10000    7745.637 us     0.774 us        395.229 us
        vm_unmap_ram                         10000    5460.573 us     0.546 us        67.187 us
        alloc_vmap_area                        414    2201.650 us     5.317 us        5.591 us
        free_vmap_block                        412    574.421 us      1.394 us        15.138 us
      
      SUMMARY:
      
      The 'BEFORE' table shows that 420 blocks were allocated and only 37 were
      freed.  The remaining 383 blocks are still on the free list, consuming
      virtual space and physical memory.

      The 'AFTER' table shows that 414 blocks were allocated and 412 were really
      freed.  2 blocks remained on the free list.

      So fragmentation was dramatically reduced.  Why?  Because when we put a
      newly allocated block at the head, all further requests will occupy the
      new block, regardless of the space remaining in other blocks.  In this
      scenario all requests come randomly.  Eventually the remaining free space
      will be less than the requested size, the free list will be iterated, and
      it is possible that nothing will be found there -- finally a new block
      will be created.  So exhaustion in the random scenario happens for the
      maximum possible allocation size: 32 pages on a 32-bit system and 64 pages
      on a 64-bit system.

      Also, the average cost of vm_map_ram was reduced from 1.017 us to 0.774
      us.  Again this can be explained by iteration through a smaller list of
      free blocks.
      
      3) The next simple scenario is a sequential allocation, where the
         allocation order is increased for each block.  This scenario forces the
         allocator to reach the maximum number of partially free blocks in a
         free list:
      
        while (iters) {

                      /* Populate free list with blocks with remaining space */
                      for (order = 0; order <= ilog2(VMAP_MAX_ALLOC); order++) {
                              nr = VMAP_BBMAP_BITS / (1 << order);

                              /* Leave a hole */
                              nr -= 1;

                              for (i = 0; i < nr; i++) {
                                      vaddr = vm_map_ram(pages, (1 << order), -1, PAGE_KERNEL);
                                      vm_unmap_ram(vaddr, (1 << order));
                              }
                      }

                      /* Completely occupy blocks from a free list */
                      for (order = 0; order <= ilog2(VMAP_MAX_ALLOC); order++) {
                              vaddr = vm_map_ram(pages, (1 << order), -1, PAGE_KERNEL);
                              vm_unmap_ram(vaddr, (1 << order));
                      }
        }
      
      Results which I got:
      
      BEFORE (all new blocks are put to the head of a free list)
      
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                         2032000    399545.2 us     0.196 us        467123.7 us
        vm_unmap_ram                       2032000    363225.7 us     0.178 us        111405.9 us
        alloc_vmap_area                       7001    30627.76 us     4.374 us        495.755 us
        free_vmap_block                       6993    7011.685 us     1.002 us        159.090 us
      
      AFTER (all new blocks are put to the tail of a free list)
      
      # cat /sys/kernel/debug/tracing/trace_stat/function0
        Function                               Hit    Time            Avg             s^2
        --------                               ---    ----            ---             ---
        vm_map_ram                         2032000    394259.7 us     0.194 us        589395.9 us
        vm_unmap_ram                       2032000    292500.7 us     0.143 us        94181.08 us
        alloc_vmap_area                       7000    31103.11 us     4.443 us        703.225 us
        free_vmap_block                       7000    6750.844 us     0.964 us        119.112 us
      
      SUMMARY:
      
      No surprises here; almost all numbers are the same.

      While fixing this fragmentation problem I also made some improvements in
      the allocation logic of a new vmap block: occupy the block immediately and
      get rid of the extra search in the free list.

      Also I replaced the dirty bitmap with min/max dirty range values to make
      the logic simpler and slightly faster, since comparing two longs costs
      less than looping through a bitmap.
      
      This patchset raises several questions:
      
       Q: I think the problem you comment on is already known -- that is why I
          wrote a comment about it saying "it could consume lots of address
          space through fragmentation".  Could you tell me about your situation
          and the reason why it should be avoided?
                                                                        Gioh Kim

       A: Indeed, there was a commit 36437638 which adds an explicit comment
          about fragmentation.  But the fragmentation described in that comment
          is caused by mixing long-lived and short-lived objects, when a whole
          block is pinned in memory because some page slots are still in use.
          Here I am talking about blocks which are free, which nobody uses, and
          which the allocator keeps alive forever, continuously allocating new
          blocks.
      
       Q: I think that if you put a newly allocated block at the tail of the
          free list, the example below would result in enormous performance
          degradation.

          new block: 1MB (256 pages)

          while (iters--) {
            vm_map_ram(3 or something else not divisible by 256) * 85
            vm_unmap_ram(3) * 85
          }

          On every iteration it needs a newly allocated block, and since that
          block is put at the tail of the free list, finding it consumes a large
          amount of time.
                                                                     Joonsoo Kim
      
       A: The second patch in the current patchset gets rid of the extra search
          in the free list, so the new block will be immediately occupied.

          Also, the scenario above is impossible, because vm_map_ram allocates
          virtual ranges in orders, i.e. 2^n.  I.e. passing 3 to vm_map_ram you
          will allocate 4 slots in a block, and 256 slots (the capacity of a
          block) is of course divisible by 4, so the block will be completely
          occupied.

          But there is a worst case which we can hit: each free block has a hole
          equal to the order size.

          The maximum size of an allocation is 64 pages for a 64-bit system
          (if you try to map more, the original alloc_vmap_area will be called).

          So the maximum order is 6.  That means that the worst case, before the
          allocator makes a decision to allocate a new block, is to iterate over
          7 blocks:

          HEAD
          1st block - has 1  page slot  free (order 0)
          2nd block - has 2  page slots free (order 1)
          3rd block - has 4  page slots free (order 2)
          4th block - has 8  page slots free (order 3)
          5th block - has 16 page slots free (order 4)
          6th block - has 32 page slots free (order 5)
          7th block - has 64 page slots free (order 6)
          TAIL

          So the worst scenario on a 64-bit system is that each CPU queue can
          have 7 blocks in its free list.

          This can happen if and only if you allocate blocks with increasing
          order (as I did in the function written in the comment of the first
          patch).  This is a weird and rare case, but still it is possible.
          Afterwards you will get 7 blocks in a list.

          All further requests should be placed in a newly allocated block or
          some free slots should be found in the free list.  That does not look
          dramatically awful.
      
      This patch (of 3):

      If a suitable block can't be found, a new block is allocated and put at
      the head of the free list, so on the next iteration this new block will be
      found first.

      That's bad, because old blocks in the free list will not get a chance to
      be fully used, and thus fragmentation will grow.
      
      Let's consider this simple example:
      
       #1 We have one block in a free list which is partially used, and where only
          one page is free:
      
          HEAD |xxxxxxxxx-| TAIL
                         ^
                         free space for 1 page, order 0
      
       #2 A new allocation request of order 1 (2 pages) comes, and a new block is
          allocated, since we do not have free space to complete this request.
          The new block is put at the head of the free list:

          HEAD |----------|xxxxxxxxx-| TAIL

       #3 Two pages were occupied in the newly found block:
      
          HEAD |xx--------|xxxxxxxxx-| TAIL
                ^
                two pages mapped here
      
       #4 A new allocation request of order 0 (1 page) comes.  The block which
          was created in step #2 is located at the beginning of the free list,
          so it will be found first:

        HEAD |xxX-------|xxxxxxxxx-| TAIL
                ^                 ^
                page mapped here, but better to use this hole

      It is obvious that it is better to complete the request of step #4 using
      the old block, where free space is left, because otherwise fragmentation
      will increase significantly.
      
      But fragmentation is not the only issue.  The worst thing is that I can
      easily create a scenario in which the whole vmalloc space is exhausted by
      blocks which are not used, but are already dirty and have several free
      pages.

      Let's consider this function, whose execution should be pinned to one CPU:
      
      static void exhaust_virtual_space(struct page *pages[16], int iters)
      {
              /* Firstly we have to map a big chunk, e.g. 16 pages.
               * Then we have to occupy the remaining space with smaller
               * chunks, i.e. 8 pages. At the end small hole should remain.
               * So at the end of our allocation sequence block looks like
               * this:
               *                XX  big chunk
               * |XXxxxxxxx-|    x  small chunk
               *                 -  hole, which is enough for a small chunk,
               *                    but is not enough for a big chunk
               */
              while (iters--) {
                      int i;
                      void *vaddr;
      
                      /* Map/unmap big chunk */
                      vaddr = vm_map_ram(pages, 16, -1, PAGE_KERNEL);
                      vm_unmap_ram(vaddr, 16);
      
                      /* Map/unmap small chunks.
                       *
                       * -1 for hole, which should be left at the end of each block
                       * to keep it partially used, with some free space available */
                      for (i = 0; i < (VMAP_BBMAP_BITS - 16) / 8 - 1; i++) {
                              vaddr = vm_map_ram(pages, 8, -1, PAGE_KERNEL);
                              vm_unmap_ram(vaddr, 8);
                      }
              }
      }
      
      On every iteration a new block (1MB of vm area in my case) will be
      allocated and then occupied, without any attempt to resolve the small
      allocation request using previously allocated blocks in the free list.

      In the case of random allocation (size randomly taken from the range
      [1..64] in the 64-bit case or [1..32] in the 32-bit case) the situation is
      the same: new blocks continue to appear if the maximum possible allocation
      size (32 or 64) is passed to the allocator, because none of the remaining
      blocks in the free list have enough free space to complete this allocation
      request.

      In summary, if new blocks are put at the head of the free list, virtual
      space will eventually be exhausted.

      In the current patch I simply put the newly allocated block at the tail of
      the free list, thus reducing fragmentation and giving a chance to resolve
      allocation requests using older blocks with possible holes left.
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: Rob Jones <rob.jones@codethink.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlbfs: document min_size mount option and cleanup · 8c9b9703
      Mike Kravetz authored
      Add the min_size mount option to the hugetlbfs documentation.  Also, add
      the missing pagesize option and mention that size can be specified as
      bytes or as a percentage of the huge page pool.
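
      For example (values illustrative):

          mount -t hugetlbfs -o pagesize=2M,size=2G,min_size=512M none /mnt/huge

      Here size may also be given as a percentage of the huge page pool, and
      min_size reserves that many pages for the filesystem at mount time.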
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlbfs: accept subpool min_size mount option and setup accordingly · 7ca02d0a
      Mike Kravetz authored
      Make 'min_size=<value>' be an option when mounting a hugetlbfs.  This
      option takes the same value as the 'size' option.  min_size can be
      specified without specifying size.  If both are specified, min_size must
      be less than or equal to size, else the mount will fail.  If min_size is
      specified, then at mount time an attempt is made to reserve min_size
      pages.  If the reservation fails, the mount fails.  At umount time, the
      reserved pages are released.
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlbfs: add minimum size accounting to subpools · 1c5ecae3
      Mike Kravetz authored
      The same routines that perform subpool maximum size accounting,
      hugepage_subpool_get/put_pages(), are modified to also perform minimum size
      accounting.  When a delta value is passed to these routines, calculate how
      global reservations must be adjusted to maintain the subpool minimum size.
       The routines now return this global reserve count adjustment.  This
      global reserve count adjustment is then passed to the global accounting
      routine hugetlb_acct_memory().
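
      The following userspace sketch illustrates only the accounting idea;
      the struct and function are simplified stand-ins of my own, not the
      kernel's hugepage_subpool_get_pages():

        #include <stdio.h>

        struct subpool {
                long rsv_hpages;  /* pages still reserved toward the minimum */
        };

        /*
         * On a "get" of delta pages, part of the request may already be
         * covered by the subpool's reserve; only the remainder has to be
         * charged against the global pool.  The returned adjustment is
         * what a routine such as hugetlb_acct_memory() would then account.
         */
        static long subpool_get_pages(struct subpool *sp, long delta)
        {
                long covered = sp->rsv_hpages < delta ? sp->rsv_hpages : delta;

                sp->rsv_hpages -= covered;
                return delta - covered;   /* global reserve count adjustment */
        }

        int main(void)
        {
                struct subpool sp = { .rsv_hpages = 8 };

                /* Prints 2: the reserve already covers 8 of the 10 pages. */
                printf("global adjustment for 10 pages: %ld\n",
                       subpool_get_pages(&sp, 10));
                return 0;
        }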
      Signed-off-by: default avatarMike Kravetz <mike.kravetz@oracle.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      1c5ecae3
    • Mike Kravetz's avatar
      hugetlbfs: add minimum size tracking fields to subpool structure · c6a91820
      Mike Kravetz authored
      hugetlbfs allocates huge pages from the global pool as needed.  Even
      if the global pool contains a sufficient number of pages for the
      filesystem size at mount time, those global pages could be grabbed
      for some other use.  As a result, filesystem huge page allocations
      may fail due to lack of pages.
      
      Applications such as a database want to use huge pages for performance
      reasons.  hugetlbfs filesystem semantics with ownership and modes work
      well to manage access to a pool of huge pages.  However, the application
      would like some reasonable assurance that allocations will not fail due to
      a lack of huge pages.  At application startup time, the application would
      like to configure itself to use a specific number of huge pages.  Before
      starting, the application can check to make sure that enough huge pages
      exist in the system global pools.  However, there are no guarantees that
      those pages will be available when needed by the application.  What the
      application wants is exclusive use of a subset of huge pages.
      
      Add a new hugetlbfs mount option 'min_size=<value>' to indicate that the
      specified number of pages will be available for use by the filesystem.  At
      mount time, this number of huge pages will be reserved for exclusive use
      of the filesystem.  If there is not a sufficient number of free pages, the
      mount will fail.  As pages are allocated to and freed from the
      filesystem, the number of reserved pages is adjusted so that the specified
      minimum is maintained.
      
      This patch (of 4):
      
      Add a field to the subpool structure to indicate the minimum number of
      huge pages to always be used by this subpool.  This minimum count includes
      allocated pages as well as reserved pages.  If the minimum number of pages
      for the subpool have not been allocated, pages are reserved up to this
      minimum.  An additional field (rsv_hpages) is used to track the number of
      pages reserved to meet this minimum size.  The hstate pointer in the
      subpool is convenient to have when reserving and unreserving the pages.
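
      A hedged sketch of the shape of these fields (rsv_hpages follows the
      commit text; the other names and the layout are illustrative, not the
      kernel definition):

        #include <stdio.h>

        struct hugepage_subpool_sketch {
                const void *hstate;  /* hstate pointer, handy when (un)reserving  */
                long min_hpages;     /* minimum pages: allocated plus reserved    */
                long rsv_hpages;     /* pages currently reserved for that minimum */
        };

        int main(void)
        {
                struct hugepage_subpool_sketch sp = {
                        .min_hpages = 8, .rsv_hpages = 8,
                };

                printf("reserved %ld of %ld minimum pages\n",
                       sp.rsv_hpages, sp.min_hpages);
                return 0;
        }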
      Signed-off-by: default avatarMike Kravetz <mike.kravetz@oracle.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      c6a91820
    • Gioh Kim's avatar
      mm/compaction: reset compaction scanner positions · 195b0c60
      Gioh Kim authored
      When compaction is activated via /proc/sys/vm/compact_memory it
      should scan the whole zone.  However, some platforms, for instance
      ARM, have the start_pfn of a zone at zero, so the first attempt to
      compact via /proc does not work.  The compaction scanner positions
      need to be reset first.
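
      A userspace sketch of what "resetting the scanner positions" means
      (the struct and field names are illustrative, not the kernel's): the
      cached positions are put back at the zone boundaries so the next run
      covers the whole zone.

        #include <stdio.h>

        struct zone_sketch {
                unsigned long start_pfn, end_pfn;
                unsigned long cached_migrate_pfn;  /* migrate scanner resumes here */
                unsigned long cached_free_pfn;     /* free scanner resumes here    */
        };

        /* Forget any cached progress before an explicit compaction run. */
        static void reset_scanner_positions(struct zone_sketch *z)
        {
                z->cached_migrate_pfn = z->start_pfn;  /* forward from the start */
                z->cached_free_pfn = z->end_pfn - 1;   /* backward from the end  */
        }

        int main(void)
        {
                struct zone_sketch z = { .start_pfn = 0, .end_pfn = 1UL << 18 };

                reset_scanner_positions(&z);  /* done for each /proc trigger */
                printf("migrate=%lu free=%lu\n",
                       z.cached_migrate_pfn, z.cached_free_pfn);
                return 0;
        }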
      Signed-off-by: default avatarGioh Kim <gioh.kim@lge.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarDavid Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      195b0c60
    • Michal Hocko's avatar
      mm, memcg: sync allocation and memcg charge gfp flags for THP · 3b363692
      Michal Hocko authored
      memcg currently uses hardcoded GFP_TRANSHUGE gfp flags for all THP
      charges.  THP allocations, however, might be using different flags
      depending on /sys/kernel/mm/transparent_hugepage/{,khugepaged/}defrag and
      the current allocation context.
      
      The primary difference is that defrag configured to the "madvise"
      value will clear the __GFP_WAIT flag from the core gfp mask to make
      the allocation lighter for all mappings which are not backed by
      VM_HUGEPAGE vmas.  If the memcg charge path ignores this fact we will
      get a light allocation, but a potential memcg reclaim would kill the
      whole point of the configuration.
      
      Fix the mismatch by providing the same gfp mask used for the
      allocation to the charge functions.  This is quite easy for all paths
      except for the khugepaged kernel thread with !CONFIG_NUMA, which does
      a pre-allocation long before the allocated page is used in
      collapse_huge_page via khugepaged_alloc_page.  To avoid cluttering
      the whole code path from khugepaged_do_scan, we simply return the
      flags as per the current khugepaged_defrag() value, which might have
      changed since the preallocation.  If somebody changed the value of
      the knob we would charge differently, but this shouldn't happen often
      and it is definitely not critical, because it would only lead to a
      reduced success rate of one-off THP promotion.
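
      An illustrative userspace sketch of the fix's shape (toy flag and
      function names, not the kernel API): derive the gfp mask once from
      the defrag decision and hand the very same mask to both the
      allocation and the charge path, so the charge can never reclaim
      harder than the allocation was allowed to.

        #include <stdbool.h>
        #include <stdio.h>

        #define TOY_GFP_WAIT 0x1u  /* stand-in for __GFP_WAIT */

        /* defrag=madvise and no MADV_HUGEPAGE hint: drop the WAIT flag. */
        static unsigned int hugepage_gfpmask(bool vma_wants_hugepage)
        {
                return vma_wants_hugepage ? TOY_GFP_WAIT : 0;
        }

        static bool thp_alloc_may_reclaim(unsigned int gfp)
        {
                return gfp & TOY_GFP_WAIT;
        }

        static bool memcg_charge_may_reclaim(unsigned int gfp)
        {
                return gfp & TOY_GFP_WAIT;
        }

        int main(void)
        {
                unsigned int gfp = hugepage_gfpmask(false);  /* light allocation */

                /* Both sides see the same mask, so they agree on reclaim. */
                printf("alloc may reclaim: %d, charge may reclaim: %d\n",
                       thp_alloc_may_reclaim(gfp), memcg_charge_may_reclaim(gfp));
                return 0;
        }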
      
      [akpm@linux-foundation.org: fix weird code layout while we're there]
      [rientjes@google.com: clean up around alloc_hugepage_gfpmask()]
      Signed-off-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: default avatarDavid Rientjes <rientjes@google.com>
      Signed-off-by: default avatarDavid Rientjes <rientjes@google.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      3b363692
    • Minchan Kim's avatar
      mm: rename deactivate_page to deactivate_file_page · cc5993bd
      Minchan Kim authored
      "deactivate_page" was created for file invalidation so it has too
      specific logic for file-backed pages.  So, let's change the name of the
      function and date to a file-specific one and yield the generic name.
      Signed-off-by: default avatarMinchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Wang, Yalin <Yalin.Wang@sonymobile.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      cc5993bd
    • Eric B Munson's avatar
      Documentation/vm/unevictable-lru.txt: document interaction between compaction... · 922c0551
      Eric B Munson authored
      Documentation/vm/unevictable-lru.txt: document interaction between compaction and the unevictable LRU
      
      The memory compaction code uses the migration code to do most of the
      work in compaction.  However, the compaction code interacts with the
      unevictable LRU differently than migration code and this difference
      should be noted in the documentation.
      
      [akpm@linux-foundation.org: identify /proc/sys/vm/compact_unevictable directly]
      Signed-off-by: default avatarEric B Munson <emunson@akamai.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      922c0551
    • Eric B Munson's avatar
      mm: allow compaction of unevictable pages · 5bbe3547
      Eric B Munson authored
      Currently, pages which are marked as unevictable are protected from
      compaction, but not from other types of migration.  The POSIX real time
      extension explicitly states that mlock() will prevent a major page
      fault, but the spirit of this is that mlock() should give a process the
      ability to control sources of latency, including minor page faults.
      However, the mlock manpage only explicitly says that a locked page
      will not be written to swap, and this can cause some confusion.  The
      compaction code today gives a developer who wants to avoid swap, yet
      still wants large contiguous areas available, no method to achieve
      this state.  This patch introduces a sysctl for controlling compaction
      behavior with respect to the unevictable lru.  Users who demand no page
      faults after a page is present can set compact_unevictable_allowed to 0
      and users who need the large contiguous areas can enable compaction on
      locked memory by leaving the default value of 1.
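
      For example, a process that must never take a page fault on mlocked
      memory could switch the knob off before locking its working set
      (minimal sketch; the path follows the sysctl name introduced by this
      patch):

        #include <stdio.h>

        int main(void)
        {
                /* 0: never migrate unevictable (mlocked) pages during
                 * compaction; 1 (the default): allow it, keeping large
                 * contiguous areas obtainable. */
                FILE *f = fopen("/proc/sys/vm/compact_unevictable_allowed", "w");

                if (!f) {
                        perror("fopen");
                        return 1;
                }
                fputs("0\n", f);
                fclose(f);
                return 0;
        }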
      
      To illustrate this problem I wrote a quick test program that mmaps a
      large number of 1MB files filled with random data.  These maps are
      created locked and read only.  Then every other mmap is unmapped and I
      attempt to allocate huge pages to the static huge page pool.  When the
      compact_unevictable_allowed sysctl is 0, I cannot allocate hugepages
      after fragmenting memory.  When the value is set to 1, allocations
      succeed.
      Signed-off-by: default avatarEric B Munson <emunson@akamai.com>
      Acked-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarDavid Rientjes <rientjes@google.com>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      5bbe3547
    • Naoya Horiguchi's avatar
      mm/page-writeback: check-before-clear PageReclaim · a4bb3ecd
      Naoya Horiguchi authored
      With the page flag sanitization patchset, an invalid usage of
      ClearPageReclaim() is detected in set_page_dirty().  This can be called
      from __unmap_hugepage_range(), so let's check PageReclaim() before trying
      to clear it to avoid the misuse.
      Signed-off-by: default avatarNaoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: default avatarKirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      a4bb3ecd
    • Naoya Horiguchi's avatar
      mm/migrate: check-before-clear PageSwapCache · b3b3a99c
      Naoya Horiguchi authored
      With the page flag sanitization patchset, an invalid usage of
      ClearPageSwapCache() is detected in migrate_page_copy().
      migrate_page_copy() is shared by both normal and hugepage (both thp and
      hugetlb) code path, so let's check PageSwapCache() and clear it if it's
      set to avoid misuse of the invalid clear operation.
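
      Both this fix and the previous PageReclaim one follow the same
      check-before-clear pattern; a hedged userspace sketch of its shape
      (toy flag helpers, not the kernel's page-flag macros):

        #include <stdbool.h>
        #include <stdio.h>

        #define PG_SWAPCACHE 0x1u

        static bool test_flag(unsigned int flags, unsigned int f)
        {
                return flags & f;
        }

        static void clear_flag(unsigned int *flags, unsigned int f)
        {
                *flags &= ~f;
        }

        int main(void)
        {
                unsigned int flags = 0;  /* e.g. hugetlb path: flag never set */

                /* With flag sanitization, clearing a bit that is not set is
                 * reported as invalid, so test first and clear only if set. */
                if (test_flag(flags, PG_SWAPCACHE))
                        clear_flag(&flags, PG_SWAPCACHE);

                printf("flags=%#x\n", flags);
                return 0;
        }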
      Signed-off-by: default avatarNaoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: default avatarKirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      b3b3a99c