1. 24 Jan, 2014 36 commits
    • mm/mm_init.c: make creation of the mm_kobj happen earlier than device_initcall · da29bd36
      Paul Gortmaker authored
      The use of __initcall is eventually to be replaced by choosing one from
      the prioritized groupings laid out in the init.h header:
      
      	pure_initcall               0
      	core_initcall               1
      	postcore_initcall           2
      	arch_initcall               3
      	subsys_initcall             4
      	fs_initcall                 5
      	device_initcall             6
      	late_initcall               7
      
      In the interim, all __initcall uses are mapped onto device_initcall,
      which, as can be seen above, comes quite late in the ordering.
      
      Currently the mm_kobj is created with __initcall in mm_sysfs_init().
      This means that any other initcalls that want to reference the mm_kobj
      have to be device_initcall (or later); otherwise we will, for example,
      trip the BUG_ON(!kobj) in sysfs's internal_create_group().  This
      unfairly restricts those users; for example something that clearly makes
      sense to be an arch_initcall will not be able to choose that.
      
      However, upon examination, it is only this way for historical reasons
      (i.e.  simply not reprioritized yet).  We see that sysfs is ready quite
      early in init/main.c via:
      
       vfs_caches_init
       |_ mnt_init
          |_ sysfs_init
      
      well ahead of the processing of the prioritized calls listed above.
      
      So we can recategorize mm_sysfs_init to be a pure_initcall, which in
      turn allows any mm_kobj initcall users a wider range (1 --> 7) of
      initcall priorities to choose from.
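      
      The change itself is a one-line recategorization in mm/mm_init.c; a
      minimal sketch of the result (the function body is reproduced from
      memory and may differ slightly from the tree):
      
        static int __init mm_sysfs_init(void)
        {
                mm_kobj = kobject_create_and_add("mm", kernel_kobj);
                if (!mm_kobj)
                        return -ENOMEM;
      
                return 0;
        }
        pure_initcall(mm_sysfs_init);   /* was: __initcall(mm_sysfs_init); */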
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: show message when updating min_free_kbytes in thp · 42aa83cb
      Han Pingtian authored
      min_free_kbytes may be raised during THP's initialization, and sometimes
      this will change a value which was set by the user.  Printing a message
      when that happens clears up the confusion.
      
      Per Michal Hocko's suggestion, only show the message when changing a
      value which was set by the user.
      
      Per Dave Hansen's suggestion, show the old value of min_free_kbytes as
      well.  This gives the user the chance to restore the old value of
      min_free_kbytes.
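      
      A sketch of the resulting message in set_recommended_min_free_kbytes()
      (identifiers such as user_min_free_kbytes are assumptions based on the
      description above, not necessarily the exact names in the patch):
      
        if (recommended_min > min_free_kbytes) {
                /* only report if the user explicitly set a value */
                if (user_min_free_kbytes >= 0)
                        pr_info("raising min_free_kbytes from %d to %lu to help transparent hugepage allocations\n",
                                min_free_kbytes, recommended_min);
                min_free_kbytes = recommended_min;
                setup_per_zone_wmarks();
        }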
      Signed-off-by: Han Pingtian <hanpt@linux.vnet.ibm.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory_hotplug.c: move register_memory_resource out of the lock_memory_hotplug · ac13c462
      Nathan Zimmer authored
      We don't need to do register_memory_resource() under
      lock_memory_hotplug() since it has its own lock and doesn't make any
      callbacks.
      
      Also, register_memory_resource() returns NULL on failure, so we don't
      have anything to clean up at this point.
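      
      A sketch of the reordering in add_memory() (abbreviated):
      
        /* done before taking the hotplug lock: it has its own locking
           and makes no callbacks */
        res = register_memory_resource(start, size);
        if (!res)
                return -EEXIST;         /* nothing to clean up on failure */
      
        lock_memory_hotplug();
        /* ... the rest of the hotplug work ... */
        unlock_memory_hotplug();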
      
      The reason for this RFC is that I was doing some experiments with
      hotplugging of memory on some of our larger systems.  While it seems to
      work, it can be quite slow.  With some preliminary digging I found that
      lock_memory_hotplug is clearly ripe for breakup.
      
      It could be broken up per nid or something but it also covers the
      online_page_callback.  The online_page_callback shouldn't be very hard
      to break out.
      
      Also there is the issue of various structures (watermarks come to mind)
      that are updated only under lock_memory_hotplug and would need to be
      dealt with.
      
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Hedi <hedi@sgi.com>
      Cc: Mike Travis <travis@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/nobootmem: free_all_bootmem again · 354f17e1
      Philipp Hachtmann authored
      get_allocated_memblock_reserved_regions_info() should work if it is
      compiled in.  Extended the ifdef around
      get_allocated_memblock_memory_regions_info() to include
      get_allocated_memblock_reserved_regions_info() as well.  Similar changes
      in nobootmem.c/free_low_memory_core_early() where the two functions are
      called.
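      
      A sketch of the intended shape: both helpers live under one guard, so
      either both are available or neither is (CONFIG_ARCH_DISCARD_MEMBLOCK
      is the guard symbol of this era, stated here as an assumption):
      
        #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
        phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr);
        phys_addr_t get_allocated_memblock_memory_regions_info(phys_addr_t *addr);
        #endif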
      
      [akpm@linux-foundation.org: cleanup]
      Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
      Cc: qiuxishi <qiuxishi@huawei.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Daeseok Youn <daeseok.youn@gmail.com>
      Cc: Jiang Liu <liuj97@gmail.com>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmscan: call NUMA-unaware shrinkers irrespective of nodemask · ec97097b
      Vladimir Davydov authored
      If a shrinker is not NUMA-aware, shrink_slab() should call it exactly
      once with nid=0, but currently it is not true: if node 0 is not set in
      the nodemask or if it is not online, we will not call such shrinkers at
      all.  As a result some slabs will be left untouched under some
      circumstances.  Let us fix it.
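      
      A sketch of the fix inside the shrinker loop of shrink_slab():
      NUMA-unaware shrinkers now get exactly one call with nid = 0,
      regardless of the nodemask (details may differ from the patch):
      
        if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) {
                shrinkctl->nid = 0;
                freed += shrink_slab_node(shrinkctl, shrinker,
                                          nr_pages_scanned, lru_pages);
                continue;
        }
      
        for_each_node_mask(shrinkctl->nid, shrinkctl->nodes_to_scan) {
                if (node_online(shrinkctl->nid))
                        freed += shrink_slab_node(shrinkctl, shrinker,
                                                  nr_pages_scanned, lru_pages);
        }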
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reported-by: Dave Chinner <dchinner@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmscan: shrink all slab objects if tight on memory · 0b1fb40a
      Vladimir Davydov authored
      When reclaiming kmem, we currently don't scan slabs that have less than
      batch_size objects (see shrink_slab_node()):
      
              while (total_scan >= batch_size) {
                      shrinkctl->nr_to_scan = batch_size;
                      shrinker->scan_objects(shrinker, shrinkctl);
                      total_scan -= batch_size;
              }
      
      If there are only a few shrinkers available, such behavior won't cause
      any problems, because the batch_size is usually small, but if we have a
      lot of slab shrinkers, which is perfectly possible since FS shrinkers
      are now per-superblock, we can end up with hundreds of megabytes of
      practically unreclaimable kmem objects.  For instance, mounting a
      thousand ext2 FS images with a hundred files in each and iterating
      over all the files using du(1) will result in about 200 MB of FS caches
      that cannot be dropped even with the aid of the vm.drop_caches sysctl!
      
      This problem was initially pointed out by Glauber Costa [*].  Glauber
      proposed to fix it by making shrink_slab() always take at least one
      pass, i.e. turning the scan loop above into a do{}while() loop.
      However, this proposal was rejected, because it could result in more
      aggressive and frequent slab shrinking even under low memory pressure
      when total_scan is naturally very small.
      
      This patch is a slightly modified version of Glauber's approach.
      Similarly to Glauber's patch, it makes shrink_slab() scan less than
      batch_size objects, but only if the total number of objects we want to
      scan (total_scan) is greater than the total number of objects available
      (max_pass).  Since total_scan is clamped to half of max_pass when the
      current delta is small:
      
              if (delta < max_pass / 4)
                      total_scan = min(total_scan, max_pass / 2);
      
      this is only possible if we are scanning at high prio.  That said, this
      patch shouldn't change the vmscan behaviour if the memory pressure is
      low, but if we are tight on memory, we will do our best by trying to
      reclaim all available objects, which sounds reasonable.
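      
      A sketch of the adjusted scan loop in shrink_slab_node() (a best-effort
      reconstruction of the change described above):
      
        while (total_scan >= batch_size ||
               total_scan >= max_pass) {
                unsigned long ret;
                unsigned long nr_to_scan = min(batch_size, total_scan);
      
                shrinkctl->nr_to_scan = nr_to_scan;
                ret = shrinker->scan_objects(shrinker, shrinkctl);
                if (ret == SHRINK_STOP)
                        break;
                freed += ret;
      
                total_scan -= nr_to_scan;
        }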
      
      [*] http://www.spinics.net/lists/cgroups/msg06913.html
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched/numa: fix setting of cpupid on page migration twice · baae911b
      Wanpeng Li authored
      Commit 7851a45c ("mm: numa: Copy cpupid on page migration") copies
      over the cpupid at page migration time.  It is unnecessary to set it
      again in migrate_misplaced_transhuge_page().
      Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: do_mincore() cleanup · c980e66a
      Jianguo Wu authored
      Two cleanups:
      1. Remove redundant code for hugetlb pages.
      2. end = pmd_addr_end(addr, end) restricts [addr, end) to within
         PMD_SIZE; this may increase the number of do_mincore() calls, so
         remove it.
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: qiuxishi <qiuxishi@huawei.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/genalloc.h: spinlock_t needs spinlock_types.h · b30afea0
      Shawn Guo authored
      When compiling a C file that includes genalloc.h without
      spinlock_types.h having been included first, we will see the compile
      error below.
      
       include/linux/genalloc.h:54:2: error: unknown type name `spinlock_t'
      
      Include spinlock_types.h from genalloc.h to fix the problem.
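      
      The fix is a single include at the top of genalloc.h:
      
        #include <linux/spinlock_types.h>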
      Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Documentation/trace/postprocess/trace-vmscan-postprocess.pl: fix the traceevent regex · bd727816
      Vinayak Menon authored
      When irq, preempt and lockdep fields are printed (field 3 in the example
      below) in the trace output, the script fails.
      
      An example entry:
        kswapd0-610   [000] ...1   158.112152: mm_vmscan_kswapd_wake: nid=0 order=0
      Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: prevent setting of a value less than 0 to min_free_kbytes · da8c757b
      Han Pingtian authored
      If one does echo -1 > /proc/sys/vm/min_free_kbytes, the system will
      hang.  Changing proc_dointvec() to proc_dointvec_minmax() in
      min_free_kbytes_sysctl_handler() prevents this from happening.
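      
      A sketch of the handler after the change (surrounding logic
      abbreviated):
      
        int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
                void __user *buffer, size_t *length, loff_t *ppos)
        {
                int rc;
      
                /* honors .extra1 = &zero, rejecting negative values */
                rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
                if (rc)
                        return rc;
      
                /* ... recompute watermarks on write ... */
                return 0;
        }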
      
      mhocko said:
      
      : You can still do echo $BIG_VALUE > /proc/sys/vm/min_free_kbytes and make
      : your machine unusable but I agree that proc_dointvec_minmax is more
      : suitable here as we already have:
      :
      : 	.proc_handler   = min_free_kbytes_sysctl_handler,
      : 	.extra1         = &zero,
      :
      : It used to work properly but then 6fce56ec ("sysctl: Remove references
      : to ctl_name and strategy from the generic sysctl table") has removed
      : sysctl_intvec strategy and so extra1 is ignored.
      Signed-off-by: Han Pingtian <hanpt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: new_vma_page() cannot see NULL vma for hugetlb pages · cc81717e
      Michal Hocko authored
      Commit 11c731e8 ("mm/mempolicy: fix !vma in new_vma_page()") has
      removed BUG_ON(!vma) from new_vma_page which is partially correct
      because page_address_in_vma will return EFAULT for non-linear mappings
      and at least shared shmem might be mapped this way.
      
      The patch also tried to prevent a NULL ptr dereference for hugetlb
      pages, which is not correct AFAICS, because hugetlb pages cannot be
      mapped as VM_NONLINEAR and the other conditions in page_address_in_vma
      seem to be legitimate and catch real bugs.
      
      This patch restores the BUG_ON for PageHuge to catch potential issues
      when the to-be-migrated page is not set up properly.
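      
      A sketch of new_vma_page() with the restored assertion:
      
        if (PageHuge(page)) {
                BUG_ON(!vma);   /* hugetlb pages must come with a valid vma */
                return alloc_huge_page_noerr(vma, address, 1);
        }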
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Reviewed-by: Bob Liu <bob.liu@oracle.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory-failure.c: shift page lock from head page to tail page after thp split · 54b9dd14
      Naoya Horiguchi authored
      After a thp split in hwpoison_user_mappings(), we do not hold the page
      lock on the raw error page (only on the former head page) around
      try_to_unmap(), hence we are in danger of a race condition.
      
      I found in RHEL7 MCE-relay testing that we get a "bad page" error
      when a memory error happens on a thp tail page used by qemu-kvm:
      
        Triggering MCE exception on CPU 10
        mce: [Hardware Error]: Machine check events logged
        MCE exception done on CPU 10
        MCE 0x38c535: Killing qemu-kvm:8418 due to hardware memory corruption
        MCE 0x38c535: dirty LRU page recovery: Recovered
        qemu-kvm[8418]: segfault at 20 ip 00007ffb0f0f229a sp 00007fffd6bc5240 error 4 in qemu-kvm[7ffb0ef14000+420000]
        BUG: Bad page state in process qemu-kvm  pfn:38c400
        page:ffffea000e310000 count:0 mapcount:0 mapping:          (null) index:0x7ffae3c00
        page flags: 0x2fffff0008001d(locked|referenced|uptodate|dirty|swapbacked)
        Modules linked in: hwpoison_inject mce_inject vhost_net macvtap macvlan ...
        CPU: 0 PID: 8418 Comm: qemu-kvm Tainted: G   M        --------------   3.10.0-54.0.1.el7.mce_test_fixed.x86_64 #1
        Hardware name: NEC NEC Express5800/R120b-1 [N8100-1719F]/MS-91E7-001, BIOS 4.6.3C19 02/10/2011
        Call Trace:
          dump_stack+0x19/0x1b
          bad_page.part.59+0xcf/0xe8
          free_pages_prepare+0x148/0x160
          free_hot_cold_page+0x31/0x140
          free_hot_cold_page_list+0x46/0xa0
          release_pages+0x1c1/0x200
          free_pages_and_swap_cache+0xad/0xd0
          tlb_flush_mmu.part.46+0x4c/0x90
          tlb_finish_mmu+0x55/0x60
          exit_mmap+0xcb/0x170
          mmput+0x67/0xf0
          vhost_dev_cleanup+0x231/0x260 [vhost_net]
          vhost_net_release+0x3f/0x90 [vhost_net]
          __fput+0xe9/0x270
          ____fput+0xe/0x10
          task_work_run+0xc4/0xe0
          do_exit+0x2bb/0xa40
          do_group_exit+0x3f/0xa0
          get_signal_to_deliver+0x1d0/0x6e0
          do_signal+0x48/0x5e0
          do_notify_resume+0x71/0xc0
          retint_signal+0x48/0x8c
      
      The reason for this bug is that a page fault happens before the head
      page is unlocked at the end of memory_failure().  This strange page
      fault tries to access address 0x20 and I'm not sure why qemu-kvm does
      this, but anyway as a result the SIGSEGV makes qemu-kvm exit, and on
      the way we catch the bad page bug/warning because we try to free a
      locked page (which was the former head page).
      
      To fix this, this patch shifts the page lock from the head page to the
      tail page just after the thp split.  The SIGSEGV still happens, but it
      affects only the error-affected VMs, not the whole system.
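      
      A sketch of the lock shift in hwpoison_user_mappings() (simplified;
      the parameter name hpagep is an assumption, and the actual patch also
      transfers the page refcount):
      
        if (hpage != p) {       /* thp was split; p is the raw error page */
                lock_page(p);
                unlock_page(hpage);
                *hpagep = p;    /* callers now unlock the correct page */
        }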
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: <stable@vger.kernel.org>        [3.9+] # a3e0f9e4 "mm/memory-failure.c: transfer page count from head page to tail page after split thp"
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • numa: add a sysctl for numa_balancing · 54a43d54
      Andi Kleen authored
      Add a working sysctl to enable/disable automatic numa memory balancing
      at runtime.
      
      This allows us to track down performance problems with this feature and
      is generally a good idea.
      
      This was possible earlier through debugfs, but only with special
      debugging options set.  Also fix the boot message.
      
      [akpm@linux-foundation.org: s/sched_numa_balancing/sysctl_numa_balancing/]
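      
      A sketch of the new kernel/sysctl.c entry (field values are a plausible
      reconstruction, not a verbatim quote of the patch):
      
        #ifdef CONFIG_NUMA_BALANCING
        {
                .procname       = "numa_balancing",
                .maxlen         = sizeof(unsigned int),
                .mode           = 0644,
                .proc_handler   = sysctl_numa_balancing,
        },
        #endif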
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: free memblock.memory in free_all_bootmem · 5e270e25
      Philipp Hachtmann authored
      When calling free_all_bootmem() the free areas under memblock's control
      are released to the buddy allocator.  Additionally the reserved list is
      freed if it was reallocated by memblock.  The same should apply for the
      memory list.
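      
      A sketch of free_low_memory_core_early() with the additional call,
      mirroring the existing handling of the reserved list:
      
        size = get_allocated_memblock_reserved_regions_info(&start);
        if (size)
                count += __free_memory_core(start, start + size);
      
        /* new: also release the reallocated memory-region array itself */
        size = get_allocated_memblock_memory_regions_info(&start);
        if (size)
                count += __free_memory_core(start, start + size);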
      Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/nobootmem.c: add return value check in __alloc_memory_core_early() · 87379ec8
      Philipp Hachtmann authored
      When memblock_reserve() fails because memblock.reserved.regions cannot
      be resized, the caller (e.g.  alloc_bootmem()) is not informed of the
      failed allocation.  Therefore alloc_bootmem() silently returns the same
      pointer again and again.
      
      This patch adds a check for the return value of memblock_reserve() in
      __alloc_memory_core_early().
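      
      A sketch of the check in __alloc_memory_core_early() (argument order of
      memblock_find_in_range_node() is approximate):
      
        addr = memblock_find_in_range_node(goal, limit, size, align, nid);
        if (!addr)
                return NULL;
      
        /* new: report the reservation failure instead of handing out
           the same address again and again */
        if (memblock_reserve(addr, size))
                return NULL;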
      Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: rework memcg_update_kmem_limit synchronization · d6441637
      Vladimir Davydov authored
      Currently we take both the memcg_create_mutex and the set_limit_mutex
      when we enable kmem accounting for a memory cgroup, which makes kmem
      activation events serialize with both memcg creations and other memcg
      limit updates (memory.limit, memory.memsw.limit).  However, there is no
      point in such strict synchronization rules there.
      
      First, the set_limit_mutex was introduced to keep the memory.limit and
      memory.memsw.limit values in sync.  Since memory.kmem.limit can be set
      independently of them, it is better to introduce a separate mutex to
      synchronize against concurrent kmem limit updates.
      
      Second, we take the memcg_create_mutex in order to make sure all
      children of this memcg will be kmem-active as well.  For achieving that,
      it is enough to hold this mutex only while checking if
      memcg_has_children() though.  This guarantees that if a child is added
      after we checked that the memcg has no children, the newly added cgroup
      will see its parent kmem-active (of course if the latter succeeded), and
      call kmem activation for itself.
      
      This patch simplifies the locking rules of memcg_update_kmem_limit()
      according to these considerations.
      
      [vdavydov@parallels.com: fix uninitialized var warning]
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: remove KMEM_ACCOUNTED_ACTIVATED flag · 6de64beb
      Vladimir Davydov authored
      Currently we have two state bits in mem_cgroup::kmem_account_flags
      regarding kmem accounting activation, ACTIVATED and ACTIVE.  We start
      kmem accounting only if both flags are set (memcg_can_account_kmem()),
      plus throughout the code there are several places where we check only
      the ACTIVE flag, but we never check the ACTIVATED flag alone.  These
      flags are both set from memcg_update_kmem_limit() under the
      set_limit_mutex, the ACTIVE flag always being set after ACTIVATED, and
      they never get cleared.  That said, checking if both flags are set is
      equivalent to checking only for the ACTIVE flag, and since there are no
      ACTIVATED-only flag checks, we can safely remove the ACTIVATED flag, and
      nothing will change.
      
      Let's try to understand what was the reason for introducing these flags.
      The purpose of the ACTIVE flag is clear - it states that kmem should be
      accounting to the cgroup.  The only requirement for it is that it should
      be set after we have fully initialized kmem accounting bits for the
      cgroup and patched all static branches relating to kmem accounting.
      Since we always check if static branch is enabled before actually
      considering if we should account (otherwise we wouldn't benefit from
      static branching), this guarantees us that we won't skip a commit or
      uncharge after a charge due to an unpatched static branch.
      
      Now let's move on to the ACTIVATED bit.  As I proved at the beginning of
      this message, it is absolutely useless, and removing it will change
      nothing.  So what was the reason for introducing it?
      
      The ACTIVATED flag was introduced by commit a8964b9b ("memcg: use
      static branches when code not in use") in order to guarantee that
      static_key_slow_inc(&memcg_kmem_enabled_key) would be called only once
      for each memory cgroup when its kmem accounting was activated.  The
      point was that at that time the memcg_update_kmem_limit() function's
      work-flow looked like this:
      
              bool must_inc_static_branch = false;
      
              cgroup_lock();
              mutex_lock(&set_limit_mutex);
              if (!memcg->kmem_account_flags && val != RESOURCE_MAX) {
                      /* The kmem limit is set for the first time */
                      ret = res_counter_set_limit(&memcg->kmem, val);
      
                      memcg_kmem_set_activated(memcg);
                      must_inc_static_branch = true;
              } else
                      ret = res_counter_set_limit(&memcg->kmem, val);
              mutex_unlock(&set_limit_mutex);
              cgroup_unlock();
      
              if (must_inc_static_branch) {
                      /* We can't do this under cgroup_lock */
                      static_key_slow_inc(&memcg_kmem_enabled_key);
                      memcg_kmem_set_active(memcg);
              }
      
      Thus, without the ACTIVATED flag we could race with other threads
      trying to set the limit and increment the static branching ref-counter
      more than once.  Today we call the whole memcg_update_kmem_limit()
      function under the set_limit_mutex and this race is impossible.
      
      Now that we understand why the ACTIVATED bit was introduced, why we
      don't need it anymore, and know that removing it will change nothing
      anyway, let's get rid of it.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg, slab: RCU protect memcg_params for root caches · f8570263
      Vladimir Davydov authored
      We relocate a root cache's memcg_params whenever we need to grow the
      memcg_caches array to accommodate all kmem-active memory cgroups.
      Currently on relocation we free the old version immediately, which can
      lead to use-after-free, because the memcg_caches array is accessed
      lock-free (see cache_from_memcg_idx()).  This patch fixes this by making
      memcg_params RCU-protected for root caches.
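      
      A sketch of the intended RCU discipline (assuming memcg_params gains an
      rcu_head; helper and field names here are illustrative):
      
        /* reader side, e.g. cache_from_memcg_idx() */
        rcu_read_lock();
        params = rcu_dereference(cachep->memcg_params);
        memcg_cachep = params->memcg_caches[idx];
        rcu_read_unlock();
      
        /* writer side: publish the grown copy, reclaim the old one only
           after a grace period */
        rcu_assign_pointer(cachep->memcg_params, new_params);
        kfree_rcu(old_params, rcu_head);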
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: do not panic if we fail to create memcg cache · f717eb3a
      Vladimir Davydov authored
      There is no point in flooding the logs with warnings or, especially,
      crashing the system if we fail to create a cache for a memcg.  In this
      case we will be accounting the memcg allocation to the root cgroup
      until we succeed in creating its own cache, but it isn't that critical.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: get rid of kmem_cache_dup() · 842e2873
      Vladimir Davydov authored
      kmem_cache_dup() is only called from memcg_create_kmem_cache().  The
      latter, in fact, does nothing besides this, so let's fold
      kmem_cache_dup() into memcg_create_kmem_cache().
      
      This patch also makes the memcg_cache_mutex private to
      memcg_create_kmem_cache(), because it is not used anywhere else.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg, slab: fix races in per-memcg cache creation/destruction · 2edefe11
      Vladimir Davydov authored
      We obtain a per-memcg cache from a root kmem_cache by dereferencing an
      entry of the root cache's memcg_params::memcg_caches array.  If we find
      no cache for a memcg there on allocation, we initiate the memcg cache
      creation (see memcg_kmem_get_cache()).  The cache creation proceeds
      asynchronously in memcg_create_kmem_cache() in order to avoid lock
      clashes, so there can be several threads trying to create the same
      kmem_cache concurrently, but only one of them may succeed.  However, due
      to a race in the code, it is not always true.  The point is that the
      memcg_caches array can be relocated when we activate kmem accounting for
      a memcg (see memcg_update_all_caches(), memcg_update_cache_size()).  If
      memcg_update_cache_size() and memcg_create_kmem_cache() proceed
      concurrently as described below, we can leak a kmem_cache.
      
      Assume two threads schedule creation of the same kmem_cache.  One of
      them successfully creates it.  The other should then fail, but if
      memcg_create_kmem_cache() interleaves with memcg_update_cache_size() as
      follows, it won't:
      
        memcg_create_kmem_cache()             memcg_update_cache_size()
        (called w/o mutexes held)             (called with slab_mutex,
                                               set_limit_mutex held)
        -------------------------             -------------------------
      
        mutex_lock(&memcg_cache_mutex)
      
                                              s->memcg_params=kzalloc(...)
      
        new_cachep=cache_from_memcg_idx(cachep,idx)
        // new_cachep==NULL => proceed to creation
      
                                              s->memcg_params->memcg_caches[i]
                                                  =cur_params->memcg_caches[i]
      
        // kmem_cache_create_memcg takes slab_mutex
        // so we will hang around until
        // memcg_update_cache_size finishes, but
        // nothing will prevent it from succeeding so
        // memcg_caches[idx] will be overwritten in
        // memcg_register_cache!
      
        new_cachep = kmem_cache_create_memcg(...)
        mutex_unlock(&memcg_cache_mutex)
      
      Let's fix this by moving the check for existence of the memcg cache to
      kmem_cache_create_memcg() to be called under the slab_mutex and make it
      return NULL if so.
      
      A similar race is possible when destroying a memcg cache (see
      kmem_cache_destroy()).  Since memcg_unregister_cache(), which clears the
      pointer in the memcg_caches array, is called w/o protection, we can race
      with memcg_update_cache_size() and omit clearing the pointer.  Therefore
      memcg_unregister_cache() should be moved before we release the
      slab_mutex.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: fix possible NULL deref while traversing memcg_slab_caches list · 96403da2
      Vladimir Davydov authored
      All caches of the same memory cgroup are linked in the memcg_slab_caches
      list via kmem_cache::memcg_params::list.  This list is traversed, for
      example, when we read memory.kmem.slabinfo.
      
      Since the list actually consists of memcg_cache_params objects, we have
      to convert an element of the list to a kmem_cache object using
      memcg_params_to_cache(), which obtains the pointer to the cache from the
      memcg_params::memcg_caches array of the corresponding root cache.  That
      said, the pointer to a kmem_cache in its parent's memcg_params must be
      initialized before adding the cache to the list, and cleared only after
      it has been unlinked.  Currently it is the other way around, which can
      result in a NULL ptr dereference while traversing the memcg_slab_caches
      list.  This patch restores the correct order.
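      
      A sketch of the corrected ordering (locking omitted; field names
      approximate):
      
        /* register: publish the pointer before linking into the list */
        root->memcg_params->memcg_caches[id] = s;
        list_add(&s->memcg_params->list, &memcg->memcg_slab_caches);
      
        /* unregister: unlink first, clear the pointer last */
        list_del(&s->memcg_params->list);
        root->memcg_params->memcg_caches[id] = NULL;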
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg, slab: fix barrier usage when accessing memcg_caches · 959c8963
      Vladimir Davydov authored
      Each root kmem_cache has pointers to per-memcg caches stored in its
      memcg_params::memcg_caches array.  Whenever we want to allocate a slab
      for a memcg, we access this array to get per-memcg cache to allocate
      from (see memcg_kmem_get_cache()).  The access must be lock-free for
      performance reasons, so we should use barriers to guarantee that the
      kmem_cache we see is fully up-to-date.
      
      First, we should place a write barrier immediately before setting the
      pointer to it in the memcg_caches array in order to make sure nobody
      will see a partially initialized object.  Second, we should issue a read
      barrier before dereferencing the pointer, to pair with the write
      barrier.
      
      However, currently the barrier usage looks rather strange.  We have a
      write barrier *after* setting the pointer and a read barrier *before*
      reading the pointer, which is incorrect.  This patch fixes this.
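      
      A sketch of the corrected pairing (field names approximate):
      
        /* writer: make the cache fully initialized before publishing it */
        smp_wmb();
        root->memcg_params->memcg_caches[id] = cachep;
      
        /* reader: order the pointer load before any dereference */
        cachep = root->memcg_params->memcg_caches[id];
        smp_read_barrier_depends();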
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg, slab: clean up memcg cache initialization/destruction · 1aa13254
      Vladimir Davydov authored
      Currently, we have a rather messy set of functions relating to per-memcg
      kmem cache initialization/destruction.
      
      Per-memcg caches are created in memcg_create_kmem_cache().  This
      function calls kmem_cache_create_memcg() to allocate and initialize a
      kmem cache and then "registers" the new cache in the
      memcg_params::memcg_caches array of the parent cache.
      
      During its work-flow, kmem_cache_create_memcg() executes the following
      memcg-related functions:
      
       - memcg_alloc_cache_params(), to initialize memcg_params of the newly
         created cache;
       - memcg_cache_list_add(), to add the new cache to the memcg_slab_caches
         list.
      
      On the other hand, kmem_cache_destroy() called on a cache destruction
      only calls memcg_release_cache(), which does all the work: it cleans the
      reference to the cache in its parent's memcg_params::memcg_caches,
      removes the cache from the memcg_slab_caches list, and frees
      memcg_params.
      
      Such an inconsistency between the destruction and initialization paths
      makes the code difficult to read, so let's clean this up a bit.
      
      This patch moves all the code relating to registration of per-memcg
      caches (adding to memcg list, setting the pointer to a cache from its
      parent) to the newly created memcg_register_cache() and
      memcg_unregister_cache() functions making the initialization and
      destruction paths look symmetrical.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg, slab: kmem_cache_create_memcg(): fix memleak on fail path · 363a044f
      Vladimir Davydov authored
      We do not free the cache's memcg_params if __kmem_cache_create fails.
      Fix this.
      
      Plus, rename memcg_register_cache() to memcg_alloc_cache_params(),
      because it actually does not register the cache anywhere, but simply
      initializes kmem_cache::memcg_params.
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: clean up kmem_cache_create_memcg() error handling · 3965fc36
      Vladimir Davydov authored
      Currently kmem_cache_create_memcg() backs off on failure inside
      conditionals, without using gotos.  This results in duplicated rollback
      code, which makes the function look cumbersome even though on error we
      should only free the allocated cache.  Since in the next patch I am
      going to add yet another rollback function call on the error path
      there, let's employ labels instead of conditionals for undoing any
      changes on failure to keep things clean.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: dump page when hitting a VM_BUG_ON using VM_BUG_ON_PAGE · 309381fe
      Sasha Levin authored
      Most of the VM_BUG_ON assertions are performed on a page.  Usually, when
      one of these assertions fails we'll get a BUG_ON with a call stack and
      the registers.
      
      Based on recent requests to add a small piece of code that dumps the
      page at various VM_BUG_ON sites, I've noticed that the page dump is
      quite useful to people debugging issues in mm.
      
      This patch adds a VM_BUG_ON_PAGE(cond, page) which beyond doing what
      VM_BUG_ON() does, also dumps the page before executing the actual
      BUG_ON.
      
      [akpm@linux-foundation.org: fix up includes]
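      
      A minimal sketch of the new assertion (the exact reason string passed
      to dump_page() may differ from the patch):
      
        #define VM_BUG_ON_PAGE(cond, page)                                    \
                do {                                                          \
                        if (unlikely(cond)) {                                 \
                                dump_page(page, "VM_BUG_ON_PAGE(" #cond ")"); \
                                BUG();                                        \
                        }                                                     \
                } while (0)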
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs/proc/page.c: add PageAnon check to surely detect thp · e3bba3c3
      Naoya Horiguchi authored
      stable_page_flags() checks !PageHuge && PageTransCompound && PageLRU to
      determine whether a specified page is thp or not.  But sometimes it's
      not enough and we fail to detect thp when the thp is on a pagevec.
      This happens only for a few seconds after LRU list operations, but it
      makes it difficult to control our applications depending on this flag.
      
      So this patch adds another check, PageAnon, to detect thps on a
      pagevec.  It might not give the future extensibility for thp pagecache,
      but it's OK at least for now.
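      
      A sketch of the widened check in stable_page_flags():
      
        /* PageAnon additionally catches a thp still sitting on a pagevec */
        if (!PageHuge(page) && PageTransCompound(page) &&
            (PageLRU(page) || PageAnon(page)))
                u |= 1 << KPF_THP;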
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: do not use vmalloc for mem_cgroup allocations · 8ff69e2c
      Vladimir Davydov authored
      The vmalloc was introduced by 33327948 ("memcgroup: use vmalloc for
      mem_cgroup allocation"), because at that time MAX_NUMNODES was used for
      defining the per-node array in the mem_cgroup structure, so that the
      structure could be huge even if the system had only one NUMA node.
      
      The situation was significantly improved by commit 45cf7ebd ("memcg:
      reduce the size of struct memcg 244-fold"), which made the size of the
      mem_cgroup structure calculated dynamically depending on the real number
      of NUMA nodes installed on the system (nr_node_ids), so now there is no
      point in using vmalloc here: the structure is allocated rarely and on
      most systems its size is about 1K.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Glauber Costa <glommer@openvz.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: munlock: fix potential race with THP page split · 01cc2e58
      Vlastimil Babka authored
      Since commit ff6a6da6 ("mm: accelerate munlock() treatment of THP
      pages") munlock skips tail pages of a munlocked THP page.  There is some
      attempt to prevent bad consequences of racing with a THP page split, but
      code inspection indicates that there are two problems that may lead to a
      non-fatal, yet wrong outcome.
      
      First, __split_huge_page_refcount() copies flags including PageMlocked
      from the head page to the tail pages.  Clearing PageMlocked by
      munlock_vma_page() in the middle of this operation might result in part
      of tail pages left with PageMlocked flag.  As the head page still
      appears to be a THP page until all tail pages are processed,
      munlock_vma_page() might think it munlocked the whole THP page and skip
      all the former tail pages.  Before ff6a6da6, those pages would be
      cleared in further iterations of munlock_vma_pages_range(), but NR_MLOCK
      would still become undercounted (related to the next point).
      
      Second, NR_MLOCK accounting is based on a call to hpage_nr_pages() after
      PageMlocked is cleared.  The accounting might also become
      inconsistent due to a race with __split_huge_page_refcount():
      
      - undercount when HUGE_PMD_NR is subtracted, but some tail pages are
        left with PageMlocked set and counted again (only possible before
        ff6a6da6)
      
      - overcount when hpage_nr_pages() sees a normal page (split has already
        finished), but the parallel split has meanwhile cleared PageMlocked from
        additional tail pages
      
      This patch prevents both problems via extending the scope of lru_lock in
      munlock_vma_page().  This is convenient because:
      
      - __split_huge_page_refcount() takes lru_lock for its whole operation
      
      - munlock_vma_page() typically takes lru_lock anyway for page isolation
      
      As this becomes a second function where page isolation is done with
      lru_lock already held, factor this out to a new
      __munlock_isolate_lru_page() function and clean up the code around.
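      
      A sketch of the factored-out helper, called with lru_lock already held
      (a close paraphrase of the description; details may differ):
      
        static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
        {
                if (PageLRU(page)) {
                        struct lruvec *lruvec;
      
                        lruvec = mem_cgroup_page_lruvec(page, page_zone(page));
                        if (getpage)
                                get_page(page);
                        ClearPageLRU(page);
                        del_page_from_lru_list(page, lruvec, page_lru(page));
                        return true;
                }
                return false;
        }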
      
      [akpm@linux-foundation.org: avoid a coding-style ugly]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: print more details for bad_page() · f0b791a3
      Dave Hansen authored
      bad_page() is cool in that it prints out a bunch of data about the page.
      But, I can never remember which page flags are good and which are bad,
      or whether ->index or ->mapping is required to be NULL.
      
      This patch allows bad/dump_page() callers to specify a string about why
      they are dumping the page and adds explanation strings to a number of
      places.  It also adds a 'bad_flags' argument to bad_page(), which it
      then dumps out separately from the flags which are actually set.
      
      This way, the messages will show why the page was bad and, if a
      page-flag combination was the problem, *specifically* which flags it is
      complaining about.
      
      [akpm@linux-foundation.org: switch to pr_alert]
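      
      A sketch of the resulting call style in the page allocator (reason
      strings illustrative):
      
        if (unlikely(page_mapcount(page)))
                bad_page(page, "nonzero mapcount", 0);
        if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE))
                bad_page(page, "PAGE_FLAGS_CHECK_AT_FREE flag(s) set",
                         page->flags & PAGE_FLAGS_CHECK_AT_FREE);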
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/zswap.c: change params from hidden to ro · 12ab028b
      Dan Streetman authored
      The "compressor" and "enabled" params are currently hidden, this changes
      them to read-only, so userspace can tell if zswap is enabled or not and
      see what compressor is in use.
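      
      The change amounts to flipping the module_param permissions from 0
      (hidden) to 0444 (read-only), roughly (variable declarations omitted):
      
        module_param_named(enabled, zswap_enabled, bool, 0444);
        module_param_named(compressor, zswap_compressor, charp, 0444);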
      Signed-off-by: Dan Streetman <ddstreet@ieee.org>
      Cc: Vladimir Murzin <murzin.v@gmail.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Weijie Yang <weijie.yang@samsung.com>
      Acked-by: Seth Jennings <sjennings@variantweb.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: documentation: remove hopelessly out-of-date locking doc · 57ea8171
      Dave Hansen authored
      Documentation/vm/locking is a blast from the past.  In the entire git
      history, it has had precisely three modifications.  Two of those look to
      be pure renames, and the third was from 2005.
      
      The doc contains such gems as:
      
      > The page_table_lock is grabbed while holding the
      > kernel_lock spinning monitor.
      
      > Page stealers hold kernel_lock to protect against a bunch of
      > races.
      
      Or this which talks about mmap_sem:
      
      > 4. The exception to this rule is expand_stack, which just
      >    takes the read lock and the page_table_lock, this is ok
      >    because it doesn't really modify fields anybody relies on.
      
      expand_stack() doesn't take any locks any more directly, and the
      mmap_sem acquisition was long ago moved up in to the page fault code
      itself.
      
      It could be argued that we need to rewrite this, but it is dangerous to
      leave it as-is.  It will confuse more people than it helps.
      Signed-off-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • microblaze: extable: sort the exception table at build time · 372c7209
      Michal Simek authored
      Sort the exception table at build-time rather than during boot.
      
      Microblaze is the same case as AARCH64; that's why an EM_MICROBLAZE
      conditional check was added, to allow cross-compilation on machines
      which are not running the latest libc-dev.
      
      Inspired by AARCH64 commit adace895 ("arm64: extable: sort the
      exception table at build time").
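      
      A sketch of the scripts/sortextable.c change (the EM_MICROBLAZE
      fallback value 189 matches the ELF spec, stated here as an assumption):
      
        #ifndef EM_MICROBLAZE
        #define EM_MICROBLAZE   189     /* for hosts with an older elf.h */
        #endif
        ...
        case EM_AARCH64:
        case EM_MICROBLAZE:
                custom_sort = sort_relative_table;
                break;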
      Signed-off-by: Michal Simek <michal.simek@xilinx.com>
      Acked-by: David Daney <david.daney@cavium.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cris: provide {in,out}[wl]_p() · 3fdb38bd
      Geert Uytterhoeven authored
        drivers/staging/comedi/drivers/das6402.c: In function 'intr_handler':
        drivers/staging/comedi/drivers/das6402.c:164:3: error: implicit declaration of function 'outw_p' [-Werror=implicit-function-declaration]
        drivers/staging/speakup/speakup_dtlk.c: In function 'synth_probe':
        drivers/staging/speakup/speakup_dtlk.c:362:2: error: implicit declaration of function 'inw_p' [-Werror=implicit-function-declaration]
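      
      A sketch of the cris <asm/io.h> addition; the "pause" variants can
      simply alias the plain accessors:
      
        #define inw_p(port)             inw(port)
        #define inl_p(port)             inl(port)
        #define outw_p(val, port)       outw((val), (port))
        #define outl_p(val, port)       outl((val), (port))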
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 23 Jan, 2014 4 commits
    • Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs · 90804ed6
      Linus Torvalds authored
      Pull UDF & jbd fixes from Jan Kara:
       "A cleanup of JBD log messages and UDF fix of a lockdep warning"
      
      * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
        udf: Fix lockdep warning from udf_symlink()
        jbd: Revise KERN_EMERG error messages
    • assoc_array: remove global variable · 30b02c4b
      Stephen Hemminger authored
      The associative array code creates an unnecessary and potentially
      problematic global variable, 'status'.  Remove it since it is never used.
      Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse · 5ee7a81a
      Linus Torvalds authored
      Pull fuse update from Miklos Szeredi:
       "This contains a fix for a potential use-after-module-unload bug
        noticed by Al and caching improvements for read-only fuse filesystems
        by Andrew Gallagher"
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
        fuse: support clients that don't implement 'open'
        fuse: don't invalidate attrs when not using atime
        fuse: fix SetPageUptodate() condition in STORE
        fuse: fix pipe_buf_operations
    • Merge tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs · 0d90d638
      Linus Torvalds authored
      Pull f2fs updates from Jaegeuk Kim:
       "In this round, a couple of sysfs entries were introduced to tune the
        f2fs at runtime.
      
        In addition, f2fs starts to support inline_data and improves the
        read/write performance in some workloads by refactoring bio-related
        flows.
      
        This patch-set includes the following major enhancement patches.
         - support inline_data
         - refactor bio operations such as merge operations and rw type
           assignment
         - enhance the direct IO path
         - enhance bio operations
         - truncate a node page when it becomes obsolete
         - add sysfs entries: small_discards, max_victim_search, and
           in-place-update
         - add a sysfs entry to control max_victim_search
      
        The other bug fixes are as follows.
         - fix a bug in truncate_partial_nodes
         - avoid warnings during sparse and build process
         - fix error handling flows
         - fix potential bit overflows
      
        And, there are a bunch of cleanups"
      
      * tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (95 commits)
        f2fs: drop obsolete node page when it is truncated
        f2fs: introduce NODE_MAPPING for code consistency
        f2fs: remove the orphan block page array
        f2fs: add help function META_MAPPING
        f2fs: move a branch for code redability
        f2fs: call mark_inode_dirty to flush dirty pages
        f2fs: clean checkpatch warnings
        f2fs: missing REQ_META and REQ_PRIO when sync_meta_pages(META_FLUSH)
        f2fs: avoid f2fs_balance_fs call during pageout
        f2fs: add delimiter to seperate name and value in debug phrase
        f2fs: use spinlock rather than mutex for better speed
        f2fs: move alloc new orphan node out of lock protection region
        f2fs: move grabing orphan pages out of protection region
        f2fs: remove the needless parameter of f2fs_wait_on_page_writeback
        f2fs: update documents and a MAINTAINERS entry
        f2fs: add a sysfs entry to control max_victim_search
        f2fs: improve write performance under frequent fsync calls
        f2fs: avoid to read inline data except first page
        f2fs: avoid to left uninitialized data in page when read inline data
        f2fs: fix truncate_partial_nodes bug
        ...