1. 15 Apr, 2015 40 commits
    • zram: remove `num_migrated' device attr · 10447b60
      Sergey Senozhatsky authored
      This patch series reworks zram stats.  We have per-stat sysfs nodes,
      which makes things hard to use from user space: there is no immediate
      stats 'snapshot', and user space needs more syscalls - open, read,
      close for every stat file, with appropriate error checks at every
      step, etc.
      
      First, zram now accounts block layer statistics, available in
      /sys/block/zram<id>/stat and /proc/diskstats files.  So some new stats are
      available (see Documentation/block/stat.txt), besides, zram's activities
      now can be monitored by sysstat's iostat or similar tools.
      
      Example:
      cat /sys/block/zram0/stat
      248     0    1984    0   251029     0  2008232   5120   0   5116   5116
      
      Second, the nodes currently exported on a per-stat basis are grouped
      into two categories (files):
      
      -- zram<id>/io_stat
      accumulates device's IO stats, that are not accounted by block layer,
      and contains:
              failed_reads
              failed_writes
              invalid_io
              notify_free
      
      Example:
      cat /sys/block/zram0/io_stat
      0        0        0   652572
      
      -- zram<id>/mm_stat
      accumulates zram mm stats and contains:
              orig_data_size
              compr_data_size
              mem_used_total
              mem_limit
              mem_used_max
              zero_pages
              num_migrated
      
      Example:
      cat /sys/block/zram0/mm_stat
      434634752 270288572 279158784        0 579895296    15060        0
      
      The per-stat sysfs nodes are now considered deprecated and we plan to
      remove them (and clean up some of the existing stat code) in two years
      (as of now, no warning about the deprecated stats is printed to
      syslog).  User space is advised to use the three files described
      above.
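      
      For illustration only (not part of the patch; the path assumes a zram0
      device), a snapshot of all mm stats can now be taken with a single
      open/read/close:
      
      #include <stdio.h>
      
      int main(void)
      {
              unsigned long long v[7];   /* orig_data_size .. num_migrated */
              FILE *f = fopen("/sys/block/zram0/mm_stat", "r");
      
              if (!f) {
                      perror("fopen");
                      return 1;
              }
              /* one open/read/close instead of seven open/read/close triples */
              if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu",
                         &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6]) == 7)
                      printf("orig %llu -> compr %llu bytes\n", v[0], v[1]);
              fclose(f);
              return 0;
      }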
      
      This patch (of 7):
      
      Remove sysfs `num_migrated' attribute.  We are moving away from per-stat
      device attrs towards 3 stat files that will accumulate io and mm stats in
      a format similar to block layer statistics in /sys/block/<dev>/stat.  That
      will be easier to use in user space, and reduce the number of syscalls
      needed to read zram device statistics.
      
      `num_migrated' will come back in the zram<id>/mm_stat file.
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: zsmalloc documentation · d02be50d
      Minchan Kim authored
      Create a zsmalloc doc which explains the design concept and stat
      information.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: add fullness into stat · 248ca1b0
      Minchan Kim authored
      When investigating compaction, per-class fullness information is
      helpful for understanding how well compaction works.  With it, we can
      see more clearly how compaction behaves on each size class.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: record handle in page->private for huge object · 7b60a685
      Minchan Kim authored
      We store the handle in the header of each allocated object, which
      increases the size of every object by sizeof(unsigned long).
      
      If zram stores 4096 bytes to zsmalloc (ie, bad compression), zsmalloc
      needs the 4104B class to fit the handle.
      
      However, the 4104B class has pages_per_zspage == 1, so the size wasted
      to internal fragmentation is 8192 - 4104 = 4088 bytes, which is
      terrible.
      
      So this patch records the handle in page->private for such huge
      objects (ie, pages_per_zspage == 1 && maxobj_per_zspage == 1) instead
      of in the header of each object, so we can use the 4096B class rather
      than the 4104B class.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zram: support compaction · 4e3ba878
      Minchan Kim authored
      Now that zsmalloc supports compaction, zram can use it.  As a first
      step, this patch exports a compact knob via sysfs, so users can
      trigger compaction via "echo 1 > /sys/block/zram0/compact".
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: adjust ZS_ALMOST_FULL · d3d07c92
      Minchan Kim authored
      Currently, zsmalloc regards a zspage as ZS_ALMOST_EMPTY if the zspage
      has under 1/4 used objects (ie, fullness_threshold_frac).  That can
      result in loose packing, since zsmalloc migrates only ZS_ALMOST_EMPTY
      zspages out.
      
      This patch changes the rule so that zsmalloc marks a zspage
      ZS_ALMOST_FULL only when it has above 3/4 used objects, which allows
      tighter packing.
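      
      A minimal sketch of the new rule (illustrative code, not the kernel's;
      the empty/full edge cases are shown for completeness):
      
      #include <stdio.h>
      
      enum fullness { ZS_EMPTY, ZS_ALMOST_EMPTY, ZS_ALMOST_FULL, ZS_FULL };
      
      static enum fullness get_fullness(unsigned int inuse,
                                        unsigned int max_objects)
      {
              if (inuse == 0)
                      return ZS_EMPTY;
              if (inuse == max_objects)
                      return ZS_FULL;
              /* new rule: only zspages above 3/4 utilization are ALMOST_FULL */
              if (inuse > 3 * max_objects / 4)
                      return ZS_ALMOST_FULL;
              return ZS_ALMOST_EMPTY;
      }
      
      int main(void)
      {
              /* 5 of 16 objects used: previously ALMOST_FULL, now ALMOST_EMPTY */
              printf("%d\n", get_fullness(5, 16) == ZS_ALMOST_EMPTY);
              return 0;
      }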
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: support compaction · 312fcae2
      Minchan Kim authored
      This patch provides core functions for migration of zsmalloc.  The
      migration policy is simple, as follows:
      
      for each size class {
              while {
                      src_page = get zs_page from ZS_ALMOST_EMPTY
                      if (!src_page)
                              break;
                      dst_page = get zs_page from ZS_ALMOST_FULL
                      if (!dst_page)
                              dst_page = get zs_page from ZS_ALMOST_EMPTY
                      if (!dst_page)
                              break;
                      migrate(from src_page, to dst_page);
              }
      }
      
      For migration, we need to identify which objects in a zspage are
      allocated in order to migrate them out.  We could learn this by
      iterating over the free objects in a zspage, because first_page of a
      zspage keeps the free objects in a singly-linked list, but that is not
      efficient.  Instead, this patch adds a tag (ie, OBJ_ALLOCATED_TAG) in
      the header of each object (ie, the handle) so we can easily check
      whether an object is allocated.
      
      This patch adds another status bit in the handle to synchronize
      between user access through zs_map_object and migration.  During
      migration, we cannot move objects a user is using, due to data
      coherency between the old object and the new one.
      
      [akpm@linux-foundation.org: zsmalloc.c needs sched.h for cond_resched()]
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: factor out obj_[malloc|free] · c7806261
      Minchan Kim authored
      A later patch's migration code needs parts of zs_malloc and zs_free,
      so this patch factors them out.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: decouple handle and object · 2e40e163
      Minchan Kim authored
      Recently, we started to use zram heavily, and some issues
      popped up.
      
      1) external fragmentation
      
      I got a report from Juneho Choi that fork failed although there were
      plenty of free pages in the system.  His investigation revealed that
      zram is one of the culprits behind heavy fragmentation, so that on ARM
      there was no contiguous 16K page left for a pgd, and fork failed.
      
      2) non-movable pages
      
      Another problem with zram is that its pages are inherently
      non-movable.  Users want to use zram as swap on small-memory systems,
      so they combine zram with CMA to use memory efficiently.
      Unfortunately, that doesn't work well, because zram cannot use CMA's
      movable pages unless it supports compaction.  I got several reports
      that OOM happened with zram although there was lots of swap space and
      free space in the CMA area.
      
      3) internal fragmentation
      
      zram has started to support a memory limit feature to bound its memory
      usage, and I sent a patchset (https://lkml.org/lkml/2014/9/21/148) for
      the VM to be harmonized with zram-swap, stopping anonymous page
      reclaim once zram has consumed memory up to the limit even though
      there is free space on the swap.  One problem with that direction is
      that zram has no way to know about holes, caused by internal
      fragmentation, in the memory space zsmalloc allocated, so zram would
      regard the swap as full although there is free space in zsmalloc.  To
      solve the issue, zram wants to trigger compaction of zsmalloc before
      it decides whether it is full.
      
      This patchset is the first step towards addressing the above issues.
      For that, it adds an indirection layer between handle and object
      location, and supports manual compaction to solve the 3rd problem
      first of all.
      
      After this patchset is merged, the next step is to make the VM aware
      of zsmalloc compaction so that generic compaction will move
      zsmalloc'ed pages automatically at runtime.
      
      In my synthetic experiment (ie, high compress ratio data with heavy
      swap in/out on an 8G zram-swap), the data is as follows:
      
      Before =
      zram allocated object :      60212066 bytes
      zram total used:     140103680 bytes
      ratio:         42.98 percent
      MemFree:          840192 kB
      
      Compaction
      
      After =
      frag ratio after compaction
      zram allocated object :      60212066 bytes
      zram total used:      76185600 bytes
      ratio:         79.03 percent
      MemFree:          901932 kB
      
      Juneho reported the results below from his real platform with light
      aging.  So I think the benefit would be bigger on a real system that
      has aged for a long time.
      
      - frag_ratio increased 3% (ie, higher is better)
      - memfree increased about 6MB
      - In buddy info, Normal 2^3: 4, 2^2: 1: 2^1 increased, Highmem: 2^1 21 increased
      
      frag ratio after swap fragment
      used :        156677 kbytes
      total:        166092 kbytes
      frag_ratio :  94
      meminfo before compaction
      MemFree:           83724 kB
      Node 0, zone   Normal  13642   1364     57     10     61     17      9      5      4      0      0
      Node 0, zone  HighMem    425     29      1      0      0      0      0      0      0      0      0
      
      num_migrated :  23630
      compaction done
      
      frag ratio after compaction
      used :        156673 kbytes
      total:        160564 kbytes
      frag_ratio :  97
      meminfo after compaction
      MemFree:           89060 kB
      Node 0, zone   Normal  14076   1544     67     14     61     17      9      5      4      0      0
      Node 0, zone  HighMem    863     50      1      0      0      0      0      0      0      0      0
      
      This patchset adds more logic (about 480 lines) to zsmalloc, but when
      I tested a heavy swapin/out program, the regression in swapin/out
      speed was marginal, because most of the overhead comes from
      compress/decompress and other MM reclaim work.
      
      This patch (of 7):
      
      Currently, a zsmalloc handle encodes the object's location directly,
      which makes migration hard to support.
      
      This patch decouples handle and object by adding an indirection layer.
      For that, it allocates the handle dynamically and returns it to the
      user.  The handle is an address allocated by the slab allocator, so it
      is unique, and we can keep the object's location in the memory space
      allocated for the handle.
      
      With this, we can change an object's position without changing the
      handle itself.
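      
      The indirection can be sketched in user space as below (illustrative
      types and names; the kernel allocates handles from a slab cache and
      keeps an encoded object location in the slot):
      
      #include <stdio.h>
      #include <stdlib.h>
      
      typedef unsigned long *handle_t;        /* handle = address of a slot */
      
      static handle_t alloc_handle(unsigned long obj_location)
      {
              handle_t h = malloc(sizeof(*h));        /* kernel: slab cache */
      
              if (h)
                      *h = obj_location;      /* slot stores the location */
              return h;
      }
      
      static void migrate(handle_t h, unsigned long new_location)
      {
              *h = new_location;      /* handle seen by the user never changes */
      }
      
      int main(void)
      {
              handle_t h = alloc_handle(0x1000);
      
              if (!h)
                      return 1;
              migrate(h, 0x2000);
              printf("handle %p now resolves to %#lx\n", (void *)h, *h);
              free(h);
              return 0;
      }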
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Juneho Choi <juno.choi@lge.com>
      Cc: Gunho Lee <gunho.lee@lge.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Seth Jennings <sjennings@variantweb.net>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/compaction.c: fix "suitable_migration_target() unused" warning · 018e9a49
      Andrew Morton authored
      mm/compaction.c:250:13: warning: 'suitable_migration_target' defined but not used [-Wunused-function]
      Reported-by: Fengguang Wu <fengguang.wu@gmail.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dax: unify ext2/4_{dax,}_file_operations · be64f884
      Boaz Harrosh authored
      The original dax patchset split the ext2/4_file_operations because of the
      two NULL splice_read/splice_write in the dax case.
      
      In the VFS, if splice_read/splice_write are NULL, we call
      default_splice_read/write instead.
      
      What we do here is make generic_file_splice_read aware of IS_DAX() so the
      original ext2/4_file_operations can be used as is.
      
      For write it appears that iter_file_splice_write is just fine.  It uses
      the regular f_op->write(file,..) or new_sync_write(file, ...).
      Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dax: use pfn_mkwrite to update c/mtime + freeze protection · 0e3b210c
      Boaz Harrosh authored
      From: Yigal Korman <yigal@plexistor.com>
      
      [v1]
      Without this patch, c/mtime is not updated correctly when an mmap'ed
      page is first read from and then written to.
      
      A new xfstest is submitted for testing this (generic/080)
      
      [v2]
      Jan Kara has pointed out that if we add the sb_start/end_pagefault
      pair in the new pfn_mkwrite, we also fix another bug: a user could
      start writing to the page while the filesystem is frozen.
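      
      A rough user-space check in the spirit of that xfstest might look like
      this sketch (file name and size are illustrative):
      
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>
      
      int main(void)
      {
              struct stat before, after;
              char *p;
              int fd = open("testfile", O_RDWR | O_CREAT, 0644);
      
              if (fd < 0 || ftruncate(fd, 4096) < 0)
                      return 1;
              p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
              if (p == MAP_FAILED)
                      return 1;
              (void)p[0];             /* read fault: page mapped read-only */
              fstat(fd, &before);
              sleep(1);
              p[0] = 'x';             /* write fault: should go via *_mkwrite */
              msync(p, 4096, MS_SYNC);
              fstat(fd, &after);
              printf("mtime %s\n", after.st_mtime > before.st_mtime ?
                     "updated" : "NOT updated (the bug)");
              return 0;
      }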
      Signed-off-by: Yigal Korman <yigal@plexistor.com>
      Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: new pfn_mkwrite same as page_mkwrite for VM_PFNMAP · dd906184
      Boaz Harrosh authored
      This will allow an FS that uses VM_PFNMAP | VM_MIXEDMAP (no page
      structs) to get notified when an access is a write to a read-only PFN.
      
      This can happen if we mmap() a file, then first mmap-read from it to
      page-in a read-only PFN, and then mmap-write to the same page.
      
      We need this functionality to fix a DAX bug, where in the scenario above
      we fail to set ctime/mtime though we modified the file.  An xfstest is
      attached to this patchset that shows the failure and the fix.  (A DAX
      patch will follow)
      
      This functionality is extra important for us, because upon dirtying of a
      pmem page we also want to RDMA the page to a remote cluster node.
      
      We define a new pfn_mkwrite and do not reuse page_mkwrite because
        1 - The name ;-)
        2 - But mainly because it would take a very long and tedious
            audit of all page_mkwrite functions of VM_MIXEDMAP/VM_PFNMAP
            users, to make sure they do not crash now.  For example, current
            DAX code (which this is for) would crash.
            If we wanted to reuse page_mkwrite, we would need to first patch
            all users so they do not crash on no-page, and then enable this
            patch.  But even if I did that I would not sleep so well at
            night.  Adding a new vector is the safest thing to do, and it is
            not that expensive: an extra pointer in a static function vector
            per driver.  The new vector is also better for performance,
            because otherwise we would call all current kernel vectors just
            to check-has-no-page-do-nothing and return.
      
      No need to call it from do_shared_fault because do_wp_page is called to
      change pte permissions anyway.
      Signed-off-by: Yigal Korman <yigal@plexistor.com>
      Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory: also print a_ops->readpage in print_bad_pte() · 2682582a
      Konstantin Khlebnikov authored
      A lot of filesystems use generic_file_mmap() and filemap_fault(), so
      f_op->mmap and vm_ops->fault aren't enough to identify the filesystem.
      
      This patch prints the file name, vm_ops->fault, f_op->mmap and
      a_ops->readpage (which is almost always implemented and
      filesystem-specific).
      
      Example:
      
      [   23.676410] BUG: Bad page map in process sh  pte:1b7e6025 pmd:19bbd067
      [   23.676887] page:ffffea00006df980 count:4 mapcount:1 mapping:ffff8800196426c0 index:0x97
      [   23.677481] flags: 0x10000000000000c(referenced|uptodate)
      [   23.677896] page dumped because: bad pte
      [   23.678205] addr:00007f52fcb17000 vm_flags:00000075 anon_vma:          (null) mapping:ffff8800196426c0 index:97
      [   23.678922] file:libc-2.19.so fault:filemap_fault mmap:generic_file_readonly_mmap readpage:v9fs_vfs_readpage
      
      [akpm@linux-foundation.org: use pr_alert, per Kirill]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mempool.c: kasan: poison mempool elements · 92393615
      Andrey Ryabinin authored
      Mempools keep allocated objects in reserve for situations when an
      ordinary allocation may not be possible to satisfy.  These objects
      shouldn't be accessed before they leave the pool.
      
      This patch poisons elements when they enter the pool and unpoisons
      them when they leave it.  This lets KASan detect use-after-free of
      mempool elements.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Tested-by: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Chernenkov <drcheren@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/cma_debug.c: remove blank lines before DEFINE_SIMPLE_ATTRIBUTE() · bda6d330
      Andrew Morton authored
      Like EXPORT_SYMBOL(): the positioning communicates that the macro pertains
      to the immediately preceding function.
      
      Cc: Dmitry Safonov <d.safonov@partner.samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Cc: Weijie Yang <weijie.yang@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Vyacheslav Tyrtov <v.tyrtov@samsung.com>
      Cc: Aleksei Mateosian <a.mateosian@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: cma: add functions to get region pages counters · 2e32b947
      Dmitry Safonov authored
      Here are two functions that provide an interface to compute/get the
      used size and the size of the biggest free chunk in a cma region.  Add
      that information to debugfs.
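      
      Conceptually the two helpers compute the following over the region
      bitmap; here is a self-contained sketch using a plain bit array (names
      are illustrative, not the kernel's):
      
      #include <stdio.h>
      
      static unsigned long cma_used(const unsigned char *bitmap,
                                    unsigned long nbits)
      {
              unsigned long i, used = 0;
      
              for (i = 0; i < nbits; i++)     /* used size = set bits */
                      used += (bitmap[i / 8] >> (i % 8)) & 1;
              return used;
      }
      
      static unsigned long cma_maxchunk(const unsigned char *bitmap,
                                        unsigned long nbits)
      {
              unsigned long i, run = 0, best = 0;
      
              for (i = 0; i < nbits; i++) {   /* longest run of clear bits */
                      if ((bitmap[i / 8] >> (i % 8)) & 1)
                              run = 0;
                      else if (++run > best)
                              best = run;
              }
              return best;
      }
      
      int main(void)
      {
              unsigned char map[] = { 0x0f, 0x00 };   /* pages 0-3 used */
      
              printf("used: %lu, max free chunk: %lu\n",
                     cma_used(map, 16), cma_maxchunk(map, 16));
              return 0;
      }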
      
      [akpm@linux-foundation.org: move debug code from cma.c into cma_debug.c]
      [stefan.strogin@gmail.com: move code from cma_get_used() and cma_get_maxchunk() to cma_used_get() and cma_maxchunk_get()]
      Signed-off-by: Dmitry Safonov <d.safonov@partner.samsung.com>
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Cc: Weijie Yang <weijie.yang@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Vyacheslav Tyrtov <v.tyrtov@samsung.com>
      Cc: Aleksei Mateosian <a.mateosian@samsung.com>
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: cleanup khugepaged startup · 79553da2
      Kirill A. Shutemov authored
      A few trivial cleanups:
      
       - no need to call set_recommended_min_free_kbytes() from
         late_initcall() -- start_khugepaged() calls it;
      
       - no need to call set_recommended_min_free_kbytes() from
         start_khugepaged() if khugepaged is not started;
      
       - there isn't much point in running start_khugepaged() if we've just
         set transparent_hugepage_flags to zero;
      
       - start_khugepaged() is misnamed -- it also used to stop the thread;
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: uninline and cleanup page-mapping related helpers · e39155ea
      Kirill A. Shutemov authored
      The most-used page->mapping helper -- page_mapping() -- has already
      been uninlined.  Let's also uninline page_rmapping() and
      page_anon_vma().  Depending on configuration, this saves us around 400
      bytes of text:
      
         text	   data	    bss	    dec	    hex	filename
       660318	  99254	 410000	1169572	 11d8a4	mm/built-in.o-before
       659854	  99254	 410000	1169108	 11d6d4	mm/built-in.o
      
      I also tried to make the code a bit cleaner.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: cma: add trace events for CMA allocations and freeings · 99e8ea6c
      Stefan Strogin authored
      Add trace events for cma_alloc() and cma_release().
      
      The cma_alloc tracepoint is used for both successful and failed
      allocations; in case of allocation failure, pfn=-1UL is stored and
      printed.
      Signed-off-by: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Nazarewicz <mpn@google.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/mm.h: simplify flag check · cdd7875e
      Borislav Petkov authored
      Flip the flag test so that it takes the simplest form.  No functional
      change, just a small readability improvement:
      
      No code changed:
      
        # arch/x86/kernel/sys_x86_64.o:
      
         text    data     bss     dec     hex filename
         1551      24       0    1575     627 sys_x86_64.o.before
         1551      24       0    1575     627 sys_x86_64.o.after
      
      md5:
         70708d1b1ad35cc891118a69dc1a63f9  sys_x86_64.o.before.asm
         70708d1b1ad35cc891118a69dc1a63f9  sys_x86_64.o.after.asm
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memblock.c: add debug output for memblock_add() · 6a4055bc
      Alexander Kuleshov authored
      memblock_reserve() calls memblock_reserve_region(), which prints
      debugging information if 'memblock=debug' was passed on the command
      line.  This patch adds the same behaviour for the memblock_add()
      function.
      
      [akpm@linux-foundation.org: s/memblock_memory/memblock_add/ in message]
      Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Emil Medve <Emilian.Medve@freescale.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: cleanup using page_huge_active() · 7e1f049e
      Naoya Horiguchi authored
      Now that we have easy access to hugepages' activeness, the existing
      helpers for getting that information can be cleaned up.
      
      [akpm@linux-foundation.org: s/PageHugeActive/page_huge_active/]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: introduce page_huge_active · bcc54222
      Naoya Horiguchi authored
      We are not safe from calling isolate_huge_page() on a hugepage
      concurrently, which can put the victim hugepage in an invalid state
      and result in BUG_ON().
      
      The root problem of this is that we don't have any (easily accessible)
      information on the struct page about hugepages' activeness.  Note that
      hugepages' activeness means just being linked to
      hstate->hugepage_activelist, which is not the same as normal pages'
      activeness represented by the PageActive flag.
      
      Normal pages are isolated by isolate_lru_page(), which prechecks
      PageLRU before isolation, so let's do similarly for hugetlb with a new
      page_huge_active().
      
      set/clear_page_huge_active() should be called within hugetlb_lock.
      But hugetlb_cow() and hugetlb_no_page() don't do this, which is
      justified because in these functions set_page_huge_active() is called
      right after the hugepage is allocated and no other thread tries to
      isolate it.
      
      [akpm@linux-foundation.org: s/PageHugeActive/page_huge_active/, make it return bool]
      [fengguang.wu@intel.com: set_page_huge_active() can be static]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: don't call __page_cache_release for hugetlb · 822fc613
      Naoya Horiguchi authored
      __put_compound_page() calls __page_cache_release() to do some freeing
      work, but it's obviously for thps, not for hugetlb.  It happens to be
      harmless because PageLRU is always cleared and page->mem_cgroup is
      always NULL for hugetlb, but it's not correct and has potential risks,
      so let's make it conditional.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmap.c: use while instead of if+goto · 9fcd1457
      Rasmus Villemoes authored
      The creators of the C language gave us the while keyword. Let's use
      that instead of synthesizing it from if+goto.
      
      Made possible by 6597d783 ("mm/mmap.c: replace find_vma_prepare()
      with clearer find_vma_links()").
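      
      A generic before/after illustration of the transformation (a sketch,
      not the actual mm/mmap.c hunk):
      
      #include <stdio.h>
      
      struct node { int v; struct node *next; };
      
      /*
       * Before the cleanup, the loop would be synthesized from if+goto:
       *
       *     retry:
       *             if (n && n->v != key) {
       *                     n = n->next;
       *                     goto retry;
       *             }
       *
       * The while keyword says the same thing directly:
       */
      static struct node *find(struct node *n, int key)
      {
              while (n && n->v != key)
                      n = n->next;
              return n;
      }
      
      int main(void)
      {
              struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
      
              printf("found: %d\n", find(&a, 2)->v);
              return 0;
      }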
      
      [akpm@linux-foundation.org: fix 80-col overflows]
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, selftests: test return value of munmap for MAP_HUGETLB memory · 215ba781
      David Rientjes authored
      When MAP_HUGETLB memory is unmapped, the length must be hugepage
      aligned, otherwise the call fails with -EINVAL.
      
      All tests currently behave correctly, but it's better to explicitly
      test the return value for completeness and to document the
      requirement, especially if users copy map_hugetlb.c as a sample
      implementation.
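      
      A minimal sketch of such a check (assumes hugepages are configured;
      the size is illustrative):
      
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sys/mman.h>
      
      #define LENGTH (256UL * 1024 * 1024)    /* multiple of hugepage size */
      
      int main(void)
      {
              void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                                -1, 0);
      
              if (addr == MAP_FAILED) {
                      perror("mmap");
                      return 1;
              }
              /* check the return value instead of ignoring it */
              if (munmap(addr, LENGTH)) {
                      perror("munmap");
                      return 1;
              }
              return 0;
      }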
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Eric B Munson <emunson@akamai.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, doc: cleanup and clarify munmap behavior for hugetlb memory · 80d6b94b
      David Rientjes authored
      munmap(2) of hugetlb memory requires a length that is hugepage aligned,
      otherwise it may fail.  Add this to the documentation.
      
      This also cleans up the documentation and separates it into logical units:
      one part refers to MAP_HUGETLB and another part refers to requirements for
      shared memory segments.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Eric B Munson <emunson@akamai.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: do not adjust zone water marks if khugepaged is not started · ae7efa50
      Kirill A. Shutemov authored
      set_recommended_min_free_kbytes() adjusts zone watermarks to be
      suitable for khugepaged.  We avoid doing this if khugepaged is
      disabled, but we don't catch the case where khugepaged fails to start.
      
      Let's address this by checking khugepaged_thread instead of
      khugepaged_enabled() in set_recommended_min_free_kbytes().
      It's NULL if the kernel thread is stopped or failed to start.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: handle errors in hugepage_init() properly · 65ebb64f
      Kirill A. Shutemov authored
      We miss error handling in a few cases in hugepage_init().  Let's fix
      that.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, mempool: poison elements backed by slab allocator · bdfedb76
      David Rientjes authored
      Mempools keep elements in a reserved pool for contexts in which
      allocation may not be possible.  When an element is allocated from the
      reserved pool, its memory contents are the same as when it was added
      to the reserved pool.
      
      Because of this, elements lack any free poisoning to detect use-after-free
      errors.
      
      This patch adds free poisoning for elements backed by the slab allocator.
      This is possible because the mempool layer knows the object size of each
      element.
      
      When an element is added to the reserved pool, it is poisoned with
      POISON_FREE.  When it is removed from the reserved pool, the contents are
      checked for POISON_FREE.  If there is a mismatch, a warning is emitted to
      the kernel log.
      
      This is only effective for configs with CONFIG_DEBUG_SLAB or
      CONFIG_SLUB_DEBUG_ON.
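      
      A user-space analogue of the scheme (POISON_FREE is the kernel's 0x6b
      fill byte; the pool is simulated here):
      
      #include <stdio.h>
      #include <string.h>
      
      #define POISON_FREE 0x6b        /* the kernel's free-poison fill byte */
      
      static void poison_element(unsigned char *element, size_t size)
      {
              memset(element, POISON_FREE, size);  /* element enters the pool */
      }
      
      static void check_element(const unsigned char *element, size_t size)
      {
              for (size_t i = 0; i < size; i++)    /* element leaves the pool */
                      if (element[i] != POISON_FREE) {
                              fprintf(stderr,
                                      "corruption at offset %zu\n", i);
                              return;
                      }
      }
      
      int main(void)
      {
              unsigned char buf[64];
      
              poison_element(buf, sizeof(buf));
              buf[13] = 0;                    /* simulated use-after-free */
              check_element(buf, sizeof(buf));
              return 0;
      }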
      
      [fabio.estevam@freescale.com: use '%zu' for printing 'size_t' variable]
      [arnd@arndb.de: add missing include]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, mempool: disallow mempools based on slab caches with constructors · e244c9e6
      David Rientjes authored
      All occurrences of mempools based on slab caches with object constructors
      have been removed from the tree, so disallow creating them.
      
      We can only dereference mem->ctor in mm/mempool.c without including
      mm/slab.h in include/linux/mempool.h.  So simply note the restriction,
      just like the comment restricting usage of __GFP_ZERO, and warn on kernels
      with CONFIG_DEBUG_VM if such a mempool is allocated from.
      
      We don't want to incur this check on every element allocation, so use
      VM_BUG_ON().
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs, jfs: remove slab object constructor · ee146245
      David Rientjes authored
      Mempools based on slab caches with object constructors are risky because
      element allocation can happen either from the slab cache itself, meaning
      the constructor is properly called before returning, or from the mempool
      reserve pool, meaning the constructor is not called before returning,
      depending on the allocation context.
      
      For this reason, we should disallow creating mempools based on slab caches
      that have object constructors.  Callers of mempool_alloc() will be
      responsible for properly initializing the returned element.
      
      Then, it doesn't matter if the element came from the slab cache or the
      mempool reserved pool.
      
      The only occurrence of a mempool being based on a slab cache with an
      object constructor in the tree is in fs/jfs/jfs_metapage.c.  Remove it and
      properly initialize the element in alloc_metapage().
      
      At the same time, META_free is never used, so remove it as well.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove rest of ACCESS_ONCE() usages · 4db0c3c2
      Jason Low authored
      We converted some of the usages of ACCESS_ONCE to READ_ONCE in the mm/
      tree, since ACCESS_ONCE doesn't work reliably on non-scalar types.
      
      This patch removes the rest of the ACCESS_ONCE usages and uses the new
      READ_ONCE API for the read accesses.  This makes things cleaner,
      instead of using separate/multiple sets of APIs.
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use READ_ONCE() for non-scalar types · 9d8c47e4
      Jason Low authored
      Commit 38c5ce93 ("mm/gup: Replace ACCESS_ONCE with READ_ONCE")
      converted ACCESS_ONCE usage in gup_pmd_range() to READ_ONCE, since
      ACCESS_ONCE doesn't work reliably on non-scalar types.
      
      This patch also fixes the other ACCESS_ONCE usages in gup_pte_range()
      and __get_user_pages_fast() in mm/gup.c.
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap.c: clean up goto just return ERR_PTR · 6cd57613
      Derek authored
      As suggested by Kirill, the "goto"s in vma_to_resize() aren't
      necessary; just change them to explicit returns.
      Signed-off-by: Derek Che <crquan@ymail.com>
      Suggested-by: "Kirill A. Shutemov" <kirill@shutemov.name>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mremap should return -ENOMEM when __vm_enough_memory fail · 12215182
      Derek authored
      Recently I straced bash's behavior in this dd zero-pipe read test, as
      part of testing under vm.overcommit_memory=2 (OVERCOMMIT_NEVER mode):
      
          # dd if=/dev/zero | read x
      
      The bash subshell calls mremap to reallocate more and more memory
      until it finally fails with -ENOMEM (I expect), or is killed by the
      system OOM killer (which should not happen under OVERCOMMIT_NEVER
      mode).  But the mremap system call actually failed with -EFAULT, which
      was a surprise to me; I think it's supposed to be -ENOMEM.  Then I
      wrote this piece of C test code, which confirmed it:
      https://gist.github.com/crquan/326bde37e1ddda8effe5
      
          $ ./remap
          allocated one page @0x7f686bf71000, (PAGE_SIZE: 4096)
          grabbed 7680512000 bytes of memory (1875125 pages) @ 00007f6690993000.
          mremap failed Bad address (14).
      
      The -EFAULT comes from the branch where security_vm_enough_memory_mm
      fails; underneath, it calls __vm_enough_memory, which returns only 0
      for success or -ENOMEM.  So why does vma_to_resize need to return
      -EFAULT in this case?  This sounds like a mistake to me.
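      
      A minimal reproducer along the lines of the linked gist might look
      like this sketch (sizes are illustrative; run with
      vm.overcommit_memory=2):
      
      #define _GNU_SOURCE
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      
      int main(void)
      {
              size_t size = 1UL << 20;
              void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      
              if (p == MAP_FAILED)
                      return 1;
              for (;;) {
                      void *q = mremap(p, size, size * 2, MREMAP_MAYMOVE);
      
                      if (q == MAP_FAILED) {
                              /* expected ENOMEM; the bug returned EFAULT */
                              printf("mremap failed %s (%d)\n",
                                     strerror(errno), errno);
                              return 0;
                      }
                      p = q;
                      size *= 2;
              }
      }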
      
      Some more digging into git history:
      
      1) Before commit 119f657c ("RLIMIT_AS checking fix") in May 1 2005
         (pre 2.6.12 days) it was returning -ENOMEM for this failure;
      
      2) but commit 119f657c ("untangling do_mremap(), part 1") changed it
         accidentally, to whatever was preserved in the local ret, which
         happened to be -EFAULT from a previous assignment;
      
      3) then in the commit 54f5de70 code refactoring, it explicitly returns
         -EFAULT, which seems wrong.
      Signed-off-by: Derek Che <crquan@ymail.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: get rid of dirty bitmap inside vmap_block structure · 7d61bfe8
      Roman Pen authored
      In the original implementation of vm_map_ram, made by Nick Piggin,
      there were two bitmaps: alloc_map and dirty_map.  Neither was used as
      it was supposed to be: for finding a suitable free hole for the next
      allocation in the block.  vm_map_ram allocates space sequentially in a
      block and on free marks the pages dirty, so freed space can't be
      reused anymore.
      
      Actually, it would be very interesting to know the real intent of
      those bitmaps; maybe the implementation was incomplete, etc.
      
      But a long time ago Zhang Yanfei removed alloc_map in these two
      commits:
      
        mm/vmalloc.c: remove dead code in vb_alloc
           3fcd76e8
        mm/vmalloc.c: remove alloc_map from vmap_block
           b8e748b6
      
      In this patch I replaced dirty_map with two range variables: dirty min
      and max.  These variables store the minimum and maximum position of
      dirty space in a block, since we only need to know the dirty range,
      not the exact positions of dirty pages.
      
      Why was it done this way?  Several reasons: at first glance it seems
      that the vm_map_ram allocator is concerned about fragmentation, and
      thus uses bitmaps for finding a free hole, but that is not true.  To
      avoid complexity, it seems better to use something simple, like min or
      max range values.  Secondly, the code also becomes simpler, with no
      iteration over a bitmap, just comparisons of values in min and max
      macros.  Thirdly, the bitmap occupies up to 1024 bits (4MB is the max
      size of a block); here I replaced the whole bitmap with two longs.
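      
      A sketch of the replacement bookkeeping (illustrative struct; the real
      vmap_block carries more state and locking):
      
      #include <stdio.h>
      
      #define VMAP_BBMAP_BITS 1024UL  /* pages per block, as in the message */
      
      struct vmap_block {
              unsigned long dirty_min;        /* first dirty page index */
              unsigned long dirty_max;        /* one past the last dirty page */
      };
      
      static void vb_init(struct vmap_block *vb)
      {
              vb->dirty_min = VMAP_BBMAP_BITS;        /* empty range */
              vb->dirty_max = 0;
      }
      
      /* free path: instead of setting bits, just widen the dirty range */
      static void vb_mark_dirty(struct vmap_block *vb,
                                unsigned long start, unsigned long npages)
      {
              if (start < vb->dirty_min)
                      vb->dirty_min = start;
              if (start + npages > vb->dirty_max)
                      vb->dirty_max = start + npages;
      }
      
      int main(void)
      {
              struct vmap_block vb;
      
              vb_init(&vb);
              vb_mark_dirty(&vb, 10, 4);
              vb_mark_dirty(&vb, 100, 2);
              printf("purge pages [%lu, %lu)\n", vb.dirty_min, vb.dirty_max);
              return 0;
      }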
      
      Finally vm_unmap_aliases should be slightly faster and the whole
      vmap_block structure occupies less memory.
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: Rob Jones <rob.jones@codethink.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: occupy newly allocated vmap block just after allocation · cf725ce2
      Roman Pen authored
      The previous implementation allocates a new vmap block and then
      repeats the search for a free block from the very beginning, iterating
      over the CPU free list.
      
      Why can the new approach be better?
      
      1. Allocation can happen on one CPU while the search is done on another
         CPU.  In the worst case we preallocate as many vmap blocks as there
         are CPUs on the system.
      
      2. In the previous patch I added newly allocated blocks to the tail of
         the free list, to avoid early exhaustion of virtual space and to
         give a chance to occupy blocks which were allocated long ago.  Thus,
         to find a newly allocated block, the whole search sequence has to be
         repeated, which seems inefficient.
      
      In this patch a newly allocated block is occupied right away, and the
      address of the virtual space is returned to the caller, so there is no
      need to repeat the search sequence; the allocation job is done.
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: Rob Jones <rob.jones@codethink.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>