15 Dec, 2009 (40 commits)
    • drivers/misc: add driver for Texas Instruments DAC7512 · 4d00928c
      Daniel Mack authored
      Signed-off-by: Daniel Mack <daniel@caiaq.de>
      Cc: "H Hartley Sweeten" <hartleys@visionengravers.com>
      Cc: David Brownell <dbrownell@users.sourceforge.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • generic-ipi: cleanup for generic_smp_call_function_interrupt() · c0f68c2f
      Xiao Guangrong authored
      Use smp_processor_id() instead of get_cpu() and put_cpu() in
      generic_smp_call_function_interrupt().  There is no need to disable
      preemption here, because generic_smp_call_function_interrupt() must be
      called with interrupts disabled.
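
      A hedged sketch of the change (surrounding code omitted):

              void generic_smp_call_function_interrupt(void)
              {
                      /*
                       * Interrupts are off here, so preemption cannot
                       * happen: the get_cpu()/put_cpu() pair is not needed.
                       */
                      int cpu = smp_processor_id();
                      ...
              }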
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ad525x_dpot: new driver for AD525x digital potentiometers · 4eb174be
      Michael Hennerich authored
      This driver supports the non-volatile digital potentiometers via I2C:
      AD5258, AD5259, AD5251, AD5252, AD5253, AD5254, and AD5255
      
      It provides a sysfs interface to each device for reading and writing,
      documented in Documentation/misc-devices/ad525x_dpot.txt.
      Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
      Signed-off-by: Chris Verges <chrisv@cyberswitching.com>
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      Cc: Jean Delvare <khali@linux-fr.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dynamic_debug.h/kernel.h: Remove KBUILD_MODNAME from dynamic_pr_debug · 00b55864
      Joe Perches authored
      If CONFIG_DYNAMIC_DEBUG is enabled and a source file has:
      
      #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
      #include <linux/kernel.h>
      
      dynamic_debug.h will duplicate KBUILD_MODNAME in the output string.

      Remove the use of KBUILD_MODNAME from the output format string
      generated by dynamic_debug.h.

      If CONFIG_DYNAMIC_DEBUG is not enabled, no compile-time check is done
      on the printk/dev_printk arguments.  Add it.
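
      A hedged sketch of the usual compile-time-check idiom: the if (0)
      branch lets gcc type-check the format string and arguments while the
      call itself is optimized away.

              #define pr_debug(fmt, ...) \
                      ({ if (0) printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__); 0; })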
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • WARN_ONCE(): use bool for boolean flag · 42f247c8
      Cesar Eduardo Barros authored
      Commit 70867453 ("printk_once(): use bool
      for boolean flag") changed printk_once() to use bool instead of int for
      its guard variable.  Do the same change to WARN_ONCE() and WARN_ON_ONCE(),
      for the same reasons.
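
      The macro after the change looks roughly like this (__warned was
      previously an int set to 1):

              #define WARN_ONCE(condition, format...) ({      \
                      static bool __warned;                   \
                      int __ret_warn_once = !!(condition);    \
                                                              \
                      if (unlikely(__ret_warn_once))          \
                              if (WARN(!__warned, format))    \
                                      __warned = true;        \
                      __ret_warn_once;                        \
              })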
      
      This resulted in a reduction of 1462 bytes on an x86-64 defconfig:
      
         text    data     bss     dec     hex filename
      8101271 1207116  992764 10301151         9d2edf vmlinux.before
      8100553 1207148  991988 10299689         9d2929 vmlinux.after
      Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
      Cc: Roland Dreier <rolandd@cisco.com>
      Cc: Daniel Walker <dwalker@fifo99.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • uml: convert to seq_file/proc_fops · 6613c5e8
      Alexey Dobriyan authored
      Convert code away from ->read_proc/->write_proc interfaces.  Switch to
      proc_create()/proc_create_data() which make addition of proc entries
      reliable wrt NULL ->proc_fops, NULL ->data and so on.
      
      The problem with ->read_proc et al is described in commit
      786d7e16 ("Fix rmmod/read/write races in /proc entries").
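
      A hedged sketch of the standard conversion pattern (the foo_* names
      are hypothetical placeholders, not the actual UML entries):

              static int foo_value;   /* hypothetical state shown in /proc */

              static int foo_proc_show(struct seq_file *m, void *v)
              {
                      seq_printf(m, "%d\n", foo_value);
                      return 0;
              }

              static int foo_proc_open(struct inode *inode, struct file *file)
              {
                      return single_open(file, foo_proc_show, NULL);
              }

              static const struct file_operations foo_proc_fops = {
                      .owner   = THIS_MODULE,
                      .open    = foo_proc_open,
                      .read    = seq_read,
                      .llseek  = seq_lseek,
                      .release = single_release,
              };

              /* registration: no window with a NULL ->proc_fops */
              proc_create("foo", 0444, NULL, &foo_proc_fops);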
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • floppy: Add an extra bound check on ioctl arguments · 2886a8bd
      Arjan van de Ven authored
      gcc is not convinced that the floppy.c ioctl has sufficient bound checks:
      
      In function `copy_from_user',
          inlined from `fd_copyin' at drivers/block/floppy.c:3080,
          inlined from `fd_ioctl' at drivers/block/floppy.c:3503:
          arch/x86/include/asm/uaccess_32.h:211:
      warning: call to `copy_from_user_overflow' declared with attribute
      warning: copy_from_user buffer size is not provably correct
      
      And frankly, as a human I have a hard time proving the same more or less
      (the size comes from the ioctl argument.  humpf.  maybe.  the code isn't
      very nice)
      
      This patch adds an explicit check to make 100% sure it's safe, better than
      finding out later that there indeed was a gap.
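
      A hedged sketch of the kind of check added (placement and the exact
      condition may differ from the final patch):

              size = _IOC_SIZE(cmd);
              if (WARN_ON(size < 0 || size > sizeof(inparam)))
                      return -EINVAL;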
      
      [akpm@linux-foundation.org: add WARN_ON()]
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • drivers/cpuidle: Move dereference after NULL test · faa7b7dd
      Julia Lawall authored
      It does not seem possible that ldev can be NULL, so drop the unnecessary
      test.  If ldev can somehow be NULL, then the initialization of last_idx
      should be moved below the test.
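
      A hedged before/after sketch (field names are assumptions):

              /* before: ldev is dereferenced before the NULL test */
              int last_idx = ldev->last_state_idx;
              ...
              if (unlikely(!ldev))
                      return 0;

              /* after: the unreachable NULL test is simply dropped */
              int last_idx = ldev->last_state_idx;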
      
      A simplified version of the semantic match that detects this problem is as
      follows (http://coccinelle.lip6.fr/):
      
      // <smpl>
      @match exists@
      expression x, E;
      identifier fld;
      @@
      
      * x->fld
        ... when != \(x = E\|&x\)
      * x == NULL
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • alpha: convert srm code to seq_file · 0ead0f84
      Alexey Dobriyan authored
      Convert code away from ->read_proc/->write_proc interfaces.  Switch to
      proc_create()/proc_create_data() which make addition of proc entries
      reliable wrt NULL ->proc_fops, NULL ->data and so on.
      
      The problem with ->read_proc et al is described in commit
      786d7e16 ("Fix rmmod/read/write races in /proc entries").
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • procfs: allow threads to rename siblings via /proc/pid/task/tid/comm · 4614a696
      john stultz authored
      Setting a thread's comm to be something unique is a very useful ability
      and is helpful for debugging complicated threaded applications.  However
      currently the only way to set a thread name is for the thread to name
      itself via the PR_SET_NAME prctl.
      
      However, there may be situations where it would be advantageous for a
      thread dispatcher to name the threads it is managing, rather than
      having the threads name themselves.  This sort of behavior is
      available on other systems via the pthread_setname_np() interface.
      
      This patch exports a task's comm via proc/pid/comm and
      proc/pid/task/tid/comm interfaces, and allows thread siblings to write to
      these values.
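
      A hedged usage sketch: naming the current task through the new procfs
      file instead of prctl(PR_SET_NAME) (error handling kept minimal):

              #define _GNU_SOURCE
              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/syscall.h>
              #include <unistd.h>

              int main(void)
              {
                      char path[64];
                      int fd;

                      /* a sibling thread could open another tid's comm file */
                      snprintf(path, sizeof(path), "/proc/self/task/%ld/comm",
                               (long)syscall(SYS_gettid));
                      fd = open(path, O_WRONLY);
                      if (fd >= 0) {
                              write(fd, "worker-0", 8);
                              close(fd);
                      }
                      return 0;
              }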
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Mike Fulton <fultonm@ca.ibm.com>
      Cc: Sean Foley <Sean_Foley@ca.ibm.com>
      Cc: Darren Hart <dvhltc@us.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • procfs: use proper units for noMMU statm · 7e1e0ef2
      Steven J. Magnani authored
      On no-MMU systems, sizes reported in /proc/n/statm have units of bytes.
      Per Documentation/filesystems/proc.txt, these values should be in pages.
      Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
      Cc: Greg Ungerer <gerg@snapgear.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • nommu: fix malloc performance by adding uninitialized flag · ea637639
      Jie Zhang authored
      The NOMMU code currently clears all anonymous mmapped memory.  While this
      is what we want in the default case, all memory allocation from userspace
      under NOMMU has to go through this interface, including malloc() which is
      allowed to return uninitialized memory.  This can easily be a significant
      performance penalty.  So for constrained embedded systems where
      security is irrelevant, allow people to avoid clearing memory
      unnecessarily.
      
      This also alters the ELF-FDPIC binfmt such that it obtains uninitialised
      memory for the brk and stack region.
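
      The mechanism is a new mmap() flag, MAP_UNINITIALIZED, honoured only
      on NOMMU kernels built with the matching config option and ignored
      elsewhere.  A hedged usage sketch (the fallback define for old
      headers is an assumption):

              #include <stddef.h>
              #include <sys/mman.h>

              #ifndef MAP_UNINITIALIZED
              #define MAP_UNINITIALIZED 0x4000000
              #endif

              static void *fast_anon_alloc(size_t len)
              {
                      /* may return uninitialized (dirty) memory */
                      return mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALIZED,
                                  -1, 0);
              }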
      Signed-off-by: Jie Zhang <jie.zhang@analog.com>
      Signed-off-by: Robin Getz <rgetz@blackfin.uclinux.org>
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Greg Ungerer <gerg@snapgear.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm hugetlb: add hugepage support to pagemap · 5dc37642
      Naoya Horiguchi authored
      This patch enables extraction of the pfn of a hugepage from
      /proc/pid/pagemap in an architecture independent manner.
      
      Details
      -------
      My test program (leak_pagemap) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages,)
       - read()/write() something on it,
       - call page-types with option -p,
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patches
      ------------------
      $ ./leak_pagemap
                   flags page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          1        0  __________________________________
      0x0000000000000804          1        0  __R________M______________________ referenced,mmap
      0x000000000000086c         81        0  __RU_lA____M______________________ referenced,uptodate,lru,active,mmap
      0x0000000000005808          5        0  ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked
      0x0000000000005868         12        0  ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c          1        0  __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total        101        0
      
      The output of page-types doesn't show any hugepages.
      
      With my patches
      ---------------
      $ ./leak_pagemap
                   flags page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          1        0  __________________________________
      0x0000000000030000      51100      199  ________________TG________________ compound_tail,huge
      0x0000000000028018        100        0  ___UD__________H_G________________ uptodate,dirty,compound_head,huge
      0x0000000000000804          1        0  __R________M______________________ referenced,mmap
      0x000000000000080c          1        0  __RU_______M______________________ referenced,uptodate,mmap
      0x000000000000086c         80        0  __RU_lA____M______________________ referenced,uptodate,lru,active,mmap
      0x0000000000005808          4        0  ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked
      0x0000000000005868         12        0  ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c          1        0  __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total      51300      200
      
      The output of page-types shows 51200 pages contributing to hugepages,
      containing 100 head pages and 51100 tail pages as expected.
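
      A hedged reconstruction of the test program described above
      (leak_pagemap itself is not in the tree; the mount point, hugepage
      size and omitted error handling are assumptions):

              #include <fcntl.h>
              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>
              #include <sys/mman.h>
              #include <unistd.h>

              #define LENGTH (200UL * 1024 * 1024)  /* 100 x 2MB hugepages */

              int main(void)
              {
                      char cmd[64];
                      int fd = open("/hugetlbfs/test", O_CREAT | O_RDWR, 0600);
                      char *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);

                      memset(p, 1, LENGTH);         /* touch every hugepage */
                      snprintf(cmd, sizeof(cmd), "page-types -p %d", getpid());
                      system(cmd);                  /* walks /proc/pid/pagemap */
                      munmap(p, LENGTH);
                      close(fd);
                      unlink("/hugetlbfs/test");
                      return 0;
              }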
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: fix hugepage memory leak in walk_page_range() · d33b9f45
      Naoya Horiguchi authored
      Most callers of pmd_none_or_clear_bad() check whether the target page
      is in a hugepage or not, but walk_page_range() does not check it.  So
      if we read /proc/pid/pagemap for a hugepage on an x86 machine, the
      hugepage memory is leaked as shown below.  This patch fixes it.
      
      Details
      =======
      My test program (leak_pagemap) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages,)
       - read()/write() something on it,
       - call page-types with option -p (walk around the page tables),
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patches
      ------------------
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_pagemap
      [snip output]
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:      900
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      100 hugepages are accounted as used while there is no file on hugetlbfs.
      
      With my patches
      ---------------
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_pagemap
      [snip output]
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs
      $
      
      No memory leaks.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: fix hugepage memory leak in mincore() · 4f16fc10
      Naoya Horiguchi authored
      Most callers of pmd_none_or_clear_bad() check whether the target page
      is in a hugepage or not, but mincore() and walk_page_range() do not
      check it.  So if we use mincore() on a hugepage on an x86 machine, the
      hugepage memory is leaked as shown below.  This patch fixes it by
      extending the mincore() system call to support hugepages.
      
      Details
      =======
      My test program (leak_mincore) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages,)
       - read()/write() something on it,
       - call mincore() for first ten pages and printf() the values of *vec
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patch
      ----------------
      $ cat /proc/meminfo| grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_mincore
      vec[0] 0
      vec[1] 0
      vec[2] 0
      vec[3] 0
      vec[4] 0
      vec[5] 0
      vec[6] 0
      vec[7] 0
      vec[8] 0
      vec[9] 0
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:      999
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      The values in *vec returned by mincore() are 0 even though the
      hugepage is in memory, and 1 hugepage is still accounted as used while
      there is no file on hugetlbfs.
      
      With my patch
      -------------
      $ cat /proc/meminfo| grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_mincore
      vec[0] 1
      vec[1] 1
      vec[2] 1
      vec[3] 1
      vec[4] 1
      vec[5] 1
      vec[6] 1
      vec[7] 1
      vec[8] 1
      vec[9] 1
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      The values in *vec are set to 1, and there are no memory leaks.
      
      [akpm@linux-foundation.org: cleanup]
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: abort a hugepage pool resize if a signal is pending · 536240f2
      Mel Gorman authored
      If a user asks for a hugepage pool resize but specifies a large
      number, the machine can begin thrashing.  In response, they might hit
      ctrl-c, but signals are ignored and the pool resize continues until it
      fails an allocation.  This can take a considerable amount of time, so
      this patch aborts a pool resize if a signal is pending.
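
      A hedged sketch of the loop change in set_max_huge_pages()
      (allocation details omitted):

              while (count > persistent_huge_pages(h)) {
                      ...
                      /* Bail for signals. Probably ctrl-c from user */
                      if (signal_pending(current))
                              goto out;
                      ...
              }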
      
      Suggested by Dave Hansen.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mlock: replace stale comments in munlock_vma_page() · 6927c1dd
      Lee Schermerhorn authored
      Clean up stale comments in munlock_vma_page().
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove unevictable_migrate_page function · 418b27ef
      Lee Schermerhorn authored
      unevictable_migrate_page() in mm/internal.h is a relic of the since
      removed UNEVICTABLE_LRU Kconfig option.  This patch removes the function
      and open codes the test in migrate_page_copy().
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: acquire the i_mmap_lock before walking the prio_tree to unmap a page · 4eb2b1dc
      Mel Gorman authored
      When the owner of a mapping fails COW because a child process is holding a
      reference, the children VMAs are walked and the page is unmapped.  The
      i_mmap_lock is taken for the unmapping of the page but not the walking of
      the prio_tree.  In theory, that tree could be changing if the lock is not
      held.  This patch takes the i_mmap_lock properly for the duration of the
      prio_tree walk.
      
      [hugh.dickins@tiscali.co.uk: Spotted the problem in the first place]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • 'sysctl_max_map_count' should be non-negative · 70da2340
      Amerigo Wang authored
      Jan Engelhardt reported we have this problem:
      
      setting max_map_count to a value large enough results in programs dying at
      first try.  This is on 2.6.31.6:
      
      15:59 borg:/proc/sys/vm # echo $[1<<31-1] >max_map_count
      15:59 borg:/proc/sys/vm # cat max_map_count
      1073741824
      15:59 borg:/proc/sys/vm # echo $[1<<31] >max_map_count
      15:59 borg:/proc/sys/vm # cat max_map_count
      Killed
      
      This is because 'max_map_count' can be set to a negative value, which
      is meaningless.  Make it accept only non-negative values.
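
      A hedged sketch of the kernel/sysctl.c change: switch the handler to
      proc_dointvec_minmax and clamp the value at zero (sysctl.c already
      keeps a static int zero for this purpose):

              {
                      .procname     = "max_map_count",
                      .data         = &sysctl_max_map_count,
                      .maxlen       = sizeof(sysctl_max_map_count),
                      .mode         = 0644,
                      .proc_handler = proc_dointvec_minmax,
                      .extra1       = &zero,  /* reject negative values */
              },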
      Reported-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: James Morris <jmorris@namei.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/mm.h: remove unneeded ifdef · f096e59e
      Huang Shijie authored
      The CONFIG_SWAP check is redundant, because there is a non-CONFIG_SWAP
      version of PageSwapCache() which just returns 0.
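
      The stub comes from include/linux/page-flags.h (roughly), so callers
      need no ifdef of their own:

              #ifdef CONFIG_SWAP
              PAGEFLAG(SwapCache, swapcache)
              #else
              PAGEFLAG_FALSE(SwapCache)   /* PageSwapCache() == 0 */
              #endif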
      Signed-off-by: Huang Shijie <shijie8@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: uncached vma support with writenotify · c9d0bf24
      Magnus Damm authored
      Modify the generic mmap() code to keep the cache attribute in
      vma->vm_page_prot regardless of whether writenotify is enabled.
      Without this patch the cache configuration selected by f_op->mmap() is
      overwritten if writenotify is enabled, making it impossible to keep
      the vma uncached.
      
      Needed by drivers such as drivers/video/sh_mobile_lcdcfb.c which uses
      deferred io together with uncached memory.
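
      A hedged sketch of the mmap_region() change: the protection is still
      recomputed for writenotify, but an uncached attribute is carried over
      (details may differ from the final patch).

              if (vma_wants_writenotify(vma)) {
                      pgprot_t pprot = vma->vm_page_prot;
                      vma->vm_page_prot = vm_get_page_prot(vm_flags & ~VM_SHARED);
                      if (pgprot_val(pprot) == pgprot_val(pgprot_noncached(pprot)))
                              vma->vm_page_prot =
                                      pgprot_noncached(vma->vm_page_prot);
              }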
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jaya Kumar <jayakumar.lkml@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: simplify code · 62c0c2f1
      Huang Shijie authored
      Simplify the code for shrink_inactive_list().
      Signed-off-by: Huang Shijie <shijie8@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: do not evict inactive pages when skipping an active list scan · b39415b2
      Rik van Riel authored
      In AIM7 runs, recent kernels start swapping out anonymous pages well
      before they should.  This is due to shrink_list falling through to
      shrink_inactive_list if !inactive_anon_is_low(zone, sc), when all we
      really wanted to do is pre-age some anonymous pages to give them extra
      time to be referenced while on the inactive list.
      
      The obvious fix is to make sure that shrink_list does not fall through to
      scanning/reclaiming inactive pages when we called it to scan one of the
      active lists.
      
      This change should be safe because the loop in shrink_zone ensures that we
      will still shrink the anon and file inactive lists whenever we should.
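
      A hedged sketch of shrink_list() after the change: a call for an
      active list can no longer fall through to the inactive scan (helper
      name and signature are close to, but may differ from, the final
      patch).

              if (is_active_lru(lru)) {
                      if (inactive_list_is_low(zone, sc, file))
                              shrink_active_list(nr_to_scan, zone,
                                                 sc, priority, file);
                      return 0;
              }
              return shrink_inactive_list(nr_to_scan, zone, sc,
                                          priority, file);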
      
      [kosaki.motohiro@jp.fujitsu.com: inactive_file_is_low() should be inactive_anon_is_low()]
      Reported-by: Larry Woodman <lwoodman@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tomasz Chmielewski <mangoo@wpkg.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/bootmem.c: properly __init-annotate helper functions · 8aa043d7
      Jan Beulich authored
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: slab-allocate memory section nodemask for large systems · 9ae49fab
      David Rientjes authored
      Nodemasks should not be allocated on the stack for large systems (when
      a nodemask is larger than 256 bytes), since they risk overflowing the
      stack.
      
      This patch causes the unregister_mem_sect_under_nodes() nodemask to be
      allocated on the stack for smaller systems and be allocated by slab for
      larger systems.
      
      GFP_KERNEL is used since remove_memory_block() can block.
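
      A hedged sketch using the NODEMASK_ALLOC/NODEMASK_FREE helpers, which
      expand to a stack variable on small systems and a kmalloc() on large
      ones (the gfp argument and the void return are assumptions):

              NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL);

              if (!unlinked_nodes)
                      return;                 /* allocation failed; give up */
              nodes_clear(*unlinked_nodes);
              ...
              NODEMASK_FREE(unlinked_nodes);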
      
      Cc: Gary Hade <garyhade@us.ibm.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Alex Chiang <achiang@hp.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: simplify try_to_unmap_one() · caed0f48
      KOSAKI Motohiro authored
      SWAP_MLOCK means "we marked the page as PG_mlocked, please move it to
      the unevictable LRU".  So the following code is confusing:
      
              if (vma->vm_flags & VM_LOCKED) {
                      ret = SWAP_MLOCK;
                      goto out_unmap;
              }
      
      Also, if the VMA doesn't have VM_LOCKED, we don't need to check
      whether mlock_vma_page() needs to be called.
      
      Also, add some commentary to try_to_unmap_one().
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix section mismatch in memory_hotplug.c · 23ce932a
      Rakib Mullick authored
      __free_pages_bootmem() is a __meminit function - it is called from
      put_page_bootmem(), which causes a section mismatch warning.

      The build warns as follows:
      
        LD      mm/built-in.o
      WARNING: mm/built-in.o(.text+0x26b22): Section mismatch in reference
      from the function put_page_bootmem() to the function
      .meminit.text:__free_pages_bootmem()
      The function put_page_bootmem() references
      the function __meminit __free_pages_bootmem().
      This is often because put_page_bootmem lacks a __meminit
      annotation or the annotation of __free_pages_bootmem is wrong.
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails · b76c8cfb
      Larry Woodman authored
      hugetlb_fault() takes the mm->page_table_lock spinlock then calls
      hugetlb_cow().  If the alloc_huge_page() in hugetlb_cow() fails due to an
      insufficient huge page pool it calls unmap_ref_private() with the
      mm->page_table_lock held.  unmap_ref_private() then calls
      unmap_hugepage_range() which tries to acquire the mm->page_table_lock.
      
      [<ffffffff810928c3>] print_circular_bug_tail+0x80/0x9f
       [<ffffffff8109280b>] ? check_noncircular+0xb0/0xe8
       [<ffffffff810935e0>] __lock_acquire+0x956/0xc0e
       [<ffffffff81093986>] lock_acquire+0xee/0x12e
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff814c348d>] _spin_lock+0x40/0x89
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111afee>] ? alloc_huge_page+0x218/0x318
       [<ffffffff8111a7a6>] unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111b2d0>] hugetlb_cow+0x1e2/0x3f4
       [<ffffffff8111b935>] ? hugetlb_fault+0x453/0x4f6
       [<ffffffff8111b962>] hugetlb_fault+0x480/0x4f6
       [<ffffffff8111baee>] follow_hugetlb_page+0x116/0x2d9
       [<ffffffff814c31a7>] ? _spin_unlock_irq+0x3a/0x5c
       [<ffffffff81107b4d>] __get_user_pages+0x2a3/0x427
       [<ffffffff81107d0f>] get_user_pages+0x3e/0x54
       [<ffffffff81040b8b>] get_user_pages_fast+0x170/0x1b5
       [<ffffffff81160352>] dio_get_page+0x64/0x14a
       [<ffffffff8116112a>] __blockdev_direct_IO+0x4b7/0xb31
       [<ffffffff8115ef91>] blkdev_direct_IO+0x58/0x6e
       [<ffffffff8115e0a4>] ? blkdev_get_blocks+0x0/0xb8
       [<ffffffff810ed2c5>] generic_file_aio_read+0xdd/0x528
       [<ffffffff81219da3>] ? avc_has_perm+0x66/0x8c
       [<ffffffff81132842>] do_sync_read+0xf5/0x146
       [<ffffffff8107da00>] ? autoremove_wake_function+0x0/0x5a
       [<ffffffff81211857>] ? security_file_permission+0x24/0x3a
       [<ffffffff81132fd8>] vfs_read+0xb5/0x126
       [<ffffffff81133f6b>] ? fget_light+0x5e/0xf8
       [<ffffffff81133131>] sys_read+0x54/0x8c
       [<ffffffff81011e42>] system_call_fastpath+0x16/0x1b
      
      This can be fixed by dropping the mm->page_table_lock around the call
      to unmap_ref_private() if alloc_huge_page() fails; it's dropped right
      below in the normal path anyway.  However, earlier in that function,
      it's also possible to call into the page allocator with the same
      spinlock held.
      
      What this patch does is drop the spinlock before the page allocator is
      potentially entered.  The check for page allocation failure can be made
      without the page_table_lock as well as the copy of the huge page.  Even if
      the PTE changed while the spinlock was held, the consequence is that a
      huge page is copied unnecessarily.  This resolves both the double taking
      of the lock and sleeping with the spinlock held.
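
      A hedged sketch of the reordering in hugetlb_cow() (error handling
      omitted):

              /* drop page_table_lock: the allocator may sleep */
              spin_unlock(&mm->page_table_lock);
              new_page = alloc_huge_page(vma, address, outside_reserve);
              ...
              copy_huge_page(new_page, old_page, address, vma);

              /* retake the lock before re-checking the pte */
              spin_lock(&mm->page_table_lock);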
      
      [mel@csn.ul.ie: Cover also the case where process can sleep with spinlock]
      Signed-off-by: Larry Woodman <lwooman@redhat.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: memory_hotplug: make offline_pages() static · b4e655a4
      Andrew Morton authored
      It has no references outside memory_hotplug.c.
      
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: remove unswappable max_kernel_pages · d0f209f6
      Hugh Dickins authored
      Now that ksm pages are swappable, and the known holes plugged, remove
      mention of unswappable kernel pages from KSM documentation and comments.
      
      Remove the totalram_pages/4 initialization of max_kernel_pages.  In fact,
      remove max_kernel_pages altogether - we can reinstate it if removal turns
      out to break someone's script; but if we later want to limit KSM's memory
      usage, limiting the stable nodes would not be an effective approach.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: memory hotremove migration only · 62b61f61
      Hugh Dickins authored
      The previous patch enables page migration of ksm pages, but that soon gets
      into trouble: not surprising, since we're using the ksm page lock to lock
      operations on its stable_node, but page migration switches the page whose
      lock is to be used for that.  Another layer of locking would fix it, but
      do we need that yet?
      
      Do we actually need page migration of ksm pages?  Yes, memory hotremove
      needs to offline sections of memory: and since we stopped allocating ksm
      pages with GFP_HIGHUSER, they will tend to be GFP_HIGHUSER_MOVABLE
      candidates for migration.
      
      But KSM is currently unconscious of NUMA issues, happily merging pages
      from different NUMA nodes: at present the rule must be, not to use
      MADV_MERGEABLE where you care about NUMA.  So no, NUMA page migration of
      ksm pages does not make sense yet.
      
      So, to complete support for ksm swapping we need to make hotremove
      safe.  ksm_memory_callback() takes ksm_thread_mutex on
      MEM_GOING_OFFLINE and releases it on MEM_OFFLINE or
      MEM_CANCEL_OFFLINE.  But if mapped pages
      are freed before migration reaches them, stable_nodes may be left still
      pointing to struct pages which have been removed from the system: the
      stable_node needs to identify a page by pfn rather than page pointer, then
      it can safely prune them when MEM_OFFLINE.
      
      And make NUMA migration skip PageKsm pages where it skips PageReserved.
      But it's only when we reach unmap_and_move() that the page lock is taken
      and we can be sure that raised pagecount has prevented a PageAnon from
      being upgraded: so add offlining arg to migrate_pages(), to migrate ksm
      page when offlining (has sufficient locking) but reject it otherwise.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: rmap_walk to remove_migration_ptes · e9995ef9
      Hugh Dickins authored
      A side-effect of making ksm pages swappable is that they have to be placed
      on the LRUs: which then exposes them to isolate_lru_page() and hence to
      page migration.
      
      Add rmap_walk() for remove_migration_ptes() to use: rmap_walk_anon() and
      rmap_walk_file() in rmap.c, but rmap_walk_ksm() in ksm.c.  Perhaps some
      consolidation with existing code is possible, but don't attempt that yet
      (try_to_unmap needs to handle nonlinears, but migration pte removal does
      not).
      
      rmap_walk() is sadly less general than it appears: rmap_walk_anon(), like
      remove_anon_migration_ptes() which it replaces, avoids calling
      page_lock_anon_vma(), because that includes a page_mapped() test which
      fails when all migration ptes are in place.  That was valid when NUMA page
      migration was introduced (holding mmap_sem provided the missing guarantee
      that anon_vma's slab had not already been destroyed), but I believe not
      valid in the memory hotremove case added since.
      
      For now do the same as before, and consider the best way to fix that
      unlikely race later on.  When fixed, we can probably use rmap_walk() on
      hwpoisoned ksm pages too: for now, they remain among hwpoison's various
      exceptions (its PageKsm test comes before the page is locked, but its
      page_lock_anon_vma fails safely if an anon gets upgraded).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: mem cgroup charge swapin copy · 407f9c8b
      Hugh Dickins authored
      But ksm swapping does require one small change in mem cgroup handling.
      When do_swap_page()'s call to ksm_might_need_to_copy() does indeed
      substitute a duplicate page to accommodate a different anon_vma (or a
      different location), that duplicate page hits the !PageSwapCache check
      in mem_cgroup_try_charge_swapin().
      
      That was returning success without charging, on the assumption that
      pte_same() would fail afterwards, which is not the case here.
      Originally I proposed keeping that success, so that an unshrinkable
      mem cgroup at its limit
      would not fail unnecessarily; but that's a minor point, and there are
      plenty of other places where we may fail an overallocation which might
      later prove unnecessary.  So just go ahead and do what all the other
      exceptions do: proceed to charge current mm.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: share anon page without allocating · 80e14822
      Hugh Dickins authored
      When ksm pages were unswappable, it made no sense to include them in mem
      cgroup accounting; but now that they are swappable (although I see no
      strict logical connection) the principle of least surprise implies that
      they should be accounted (with the usual dissatisfaction, that a shared
      page is accounted to only one of the cgroups using it).
      
      This patch was intended to add mem cgroup accounting where necessary; but
      turned inside out, it now avoids allocating a ksm page, instead upgrading
      an anon page to ksm - which brings its existing mem cgroup accounting with
      it.  Thus mem cgroups don't appear in the patch at all.
      
      This upgrade from PageAnon to PageKsm takes place under page lock (via a
      somewhat hacky NULL kpage interface), and audit showed only one place
      which needed to cope with the race - page_referenced() is sometimes used
      without page lock, so page_lock_anon_vma() needs an ACCESS_ONCE() to be
      sure of getting anon_vma and flags together (no problem if the page goes
      ksm an instant after, the integrity of that anon_vma list is unaffected).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: take keyhole reference to page · 4035c07a
      Hugh Dickins authored
      There's a lamentable flaw in KSM swapping: the stable_node holds a
      reference to the ksm page, so the page to be freed cannot actually be
      freed until ksmd works its way around to removing the last rmap_item from
      its stable_node.  Which in some configurations may take minutes: not quite
      responsive enough for memory reclaim.  And we don't want to twist KSM and
      its locking more tightly into the rest of mm.  What a pity.
      
      But although the stable_node needs to hold a pointer to the ksm page, does
      it actually need to raise the reference count of that page?
      
      No.  It would need to do so if struct pages were ordinary kmalloc'ed
      objects; but they are more stable than that, and reused in particular ways
      according to particular rules.
      
      Access to stable_node from its pointer in struct page is no problem, so
      long as we never free a stable_node before the ksm page itself has been
      freed.  Access to struct page from its pointer in stable_node: reintroduce
      get_ksm_page(), and let that peep out through its keyhole (the stable_node
      pointer to ksm page), to see if that struct page still holds the right key
      to open it (the ksm page mapping pointer back to this stable_node).
      
      This relies upon the established way in which free_hot_cold_page() sets an
      anon (including ksm) page->mapping to NULL; and relies upon no other user
      of a struct page to put something which looks like the original
      stable_node pointer (with two low bits also set) into page->mapping.  It
      also needs get_page_unless_zero() technique pioneered by speculative
      pagecache; and uses rcu_read_lock() to keep the guarantees that gives.
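
      A hedged sketch of the keyhole check (details may differ from the
      final get_ksm_page()):

              static struct page *get_ksm_page(struct stable_node *stable_node)
              {
                      struct page *page = stable_node->page;
                      void *expected_mapping = (void *)stable_node +
                                      (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);

                      rcu_read_lock();
                      if (page->mapping != expected_mapping)
                              goto stale;
                      if (!get_page_unless_zero(page))
                              goto stale;
                      /* re-check: the page may have been freed and reused */
                      if (page->mapping != expected_mapping) {
                              put_page(page);
                              goto stale;
                      }
                      rcu_read_unlock();
                      return page;
              stale:
                      rcu_read_unlock();
                      return NULL;
              }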
      
      There are several drivers which put pointers of their own into page->
      mapping; but none of those could coincide with our stable_node pointers,
      since KSM won't free a stable_node until it sees that the page has gone.
      
      The only problem case found is the pagetable spinlock USE_SPLIT_PTLOCKS
      places in struct page (my own abuse): to accommodate GENERIC_LOCKBREAK's
      break_lock on 32-bit, that spans both page->private and page->mapping.
      Since break_lock is only 0 or 1, again no confusion for get_ksm_page().
      
      But what of DEBUG_SPINLOCK on 64-bit bigendian?  When owner_cpu is 3
      (matching PageKsm low bits), it might see 0xdead4ead00000003 in page->
      mapping, which might coincide?  We could get around that by...  but a
      better answer is to suppress USE_SPLIT_PTLOCKS when DEBUG_SPINLOCK or
      DEBUG_LOCK_ALLOC, to stop bloating sizeof(struct page) in their case -
      already proposed in an earlier mm/Kconfig patch.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: hold anon_vma in rmap_item · db114b83
      Hugh Dickins authored
      For full functionality, page_referenced_one() and try_to_unmap_one() need
      to know the vma: to pass vma down to arch-dependent flushes, or to observe
      VM_LOCKED or VM_EXEC.  But KSM keeps no record of vma: nor can it, since
      vmas get split and merged without its knowledge.
      
      Instead, note page's anon_vma in its rmap_item when adding to stable tree:
      all the vmas which might map that page are listed by its anon_vma.
      
      page_referenced_ksm() and try_to_unmap_ksm() then traverse the anon_vma,
      first to find the probable vma, that which matches rmap_item's mm; but if
      that is not enough to locate all instances, traverse again to try the
      others.  This catches those occasions when fork has duplicated a pte of a
      ksm page, but ksmd has not yet come around to assign it an rmap_item.
      
      But each rmap_item in the stable tree which refers to an anon_vma needs to
      take a reference to it.  Andrea's anon_vma design cleverly avoided a
      reference count (an anon_vma was free when its list of vmas was empty),
      but KSM now needs to add that.  Is a 32-bit count sufficient?  I believe
      so - the anon_vma is only free when both count is 0 and list is empty.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: let shared pages be swappable · 5ad64688
      Hugh Dickins authored
      Initial implementation for swapping out KSM's shared pages: add
      page_referenced_ksm() and try_to_unmap_ksm(), which rmap.c calls when
      faced with a PageKsm page.
      
      Most of what's needed can be got from the rmap_items listed from the
      stable_node of the ksm page, without discovering the actual vma: so in
      this patch just fake up a struct vma for page_referenced_one() or
      try_to_unmap_one(), then refine that in the next patch.
      
      Add VM_NONLINEAR to ksm_madvise()'s list of exclusions: it has always been
      implicit there (being only set with VM_SHARED, already excluded), but
      let's make it explicit, to help justify the lack of nonlinear unmap.
      
      Rely on the page lock to protect against concurrent modifications to that
      page's node of the stable tree.
      
      The awkward part is not swapout but swapin: do_swap_page() and
      page_add_anon_rmap() now have to allow for new possibilities - perhaps a
      ksm page still in swapcache, perhaps a swapcache page associated with one
      location in one anon_vma now needed for another location or anon_vma.
      (And the vma might even be no longer VM_MERGEABLE when that happens.)
      
      ksm_might_need_to_copy() checks for that case, and supplies a duplicate
      page when necessary, simply leaving it to a subsequent pass of ksmd to
      rediscover the identity and merge them back into one ksm page.
      Disappointingly primitive: but the alternative would have to accumulate
      unswappable info about the swapped out ksm pages, limiting swappability.
      
      Remove page_add_ksm_rmap(): page_add_anon_rmap() now has to allow for the
      particular case it was handling, so just use it instead.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: fix mlockfreed to munlocked · 73848b46
      Hugh Dickins authored
      When KSM merges an mlocked page, it has been forgetting to munlock it:
      that's been left to free_page_mlock(), which reports it in /proc/vmstat as
      unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked (and
      whinges "Page flag mlocked set for process" in mmotm, whereas mainline is
      silently forgiving).  Call munlock_vma_page() to fix that.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>