14 Apr, 2015 (40 commits)
    • memcg: print cgroup information when system panics due to panic_on_oom · 2415b9f5
      Balasubramani Vivekanandan authored
      If the kernel panics due to an OOM caused by a cgroup reaching its
      limit while 'compulsory panic_on_oom' is enabled, we only see that the
      OOM happened because "compulsory panic_on_oom is enabled", which does
      not distinguish between a mempolicy OOM and a memcg OOM.  Dumping
      system-wide information in that case is plain wrong and only adds
      confusion.  This patch prints the information of the cgroup whose
      limit triggered the panic.
      Signed-off-by: Balasubramani Vivekanandan <balasubramani_vivekanandan@mentor.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: numa: remove migrate_ratelimited · 2a8e7002
      Mel Gorman authored
      This code is dead since commit 9e645ab6 ("sched/numa: Continue PTE
      scanning even if migrate rate limited") so remove it.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fold arch_randomize_brk into ARCH_HAS_ELF_RANDOMIZE · 204db6ed
      Kees Cook authored
      The arch_randomize_brk() function is used on several architectures,
      even those that don't support ET_DYN ASLR. To avoid bulky extern/#define
      tricks, consolidate the support under CONFIG_ARCH_HAS_ELF_RANDOMIZE for
      the architectures that support it, while still handling CONFIG_COMPAT_BRK.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: "David A. Long" <dave.long@linaro.org>
      Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Arun Chandran <achandran@mvista.com>
      Cc: Yann Droneaud <ydroneaud@opteya.com>
      Cc: Min-Hua Chen <orca.chen@gmail.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Alex Smith <alex@alex-smith.me.uk>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Vineeth Vijayan <vvijayan@mvista.com>
      Cc: Jeff Bailey <jeffbailey@google.com>
      Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Behan Webster <behanw@converseincode.com>
      Cc: Ismael Ripoll <iripoll@upv.es>
      Cc: Jan-Simon Möller <dl9pf@gmx.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: split ET_DYN ASLR from mmap ASLR · d1fd836d
      Kees Cook authored
      This fixes the "offset2lib" weakness in ASLR for arm, arm64, mips,
      powerpc, and x86.  The problem is that if there is a leak of ASLR from
      the executable (ET_DYN), it means a leak of shared library offset as
      well (mmap), and vice versa.  Further details and a PoC of this
      attack are available here:
      
        http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html
      
      With this patch, a PIE linked executable (ET_DYN) has its own ASLR
      region:
      
        $ ./show_mmaps_pie
        54859ccd6000-54859ccd7000 r-xp  ...  /tmp/show_mmaps_pie
        54859ced6000-54859ced7000 r--p  ...  /tmp/show_mmaps_pie
        54859ced7000-54859ced8000 rw-p  ...  /tmp/show_mmaps_pie
        7f75be764000-7f75be91f000 r-xp  ...  /lib/x86_64-linux-gnu/libc.so.6
        7f75be91f000-7f75beb1f000 ---p  ...  /lib/x86_64-linux-gnu/libc.so.6
        7f75beb1f000-7f75beb23000 r--p  ...  /lib/x86_64-linux-gnu/libc.so.6
        7f75beb23000-7f75beb25000 rw-p  ...  /lib/x86_64-linux-gnu/libc.so.6
        7f75beb25000-7f75beb2a000 rw-p  ...
        7f75beb2a000-7f75beb4d000 r-xp  ...  /lib64/ld-linux-x86-64.so.2
        7f75bed45000-7f75bed46000 rw-p  ...
        7f75bed46000-7f75bed47000 r-xp  ...
        7f75bed47000-7f75bed4c000 rw-p  ...
        7f75bed4c000-7f75bed4d000 r--p  ...  /lib64/ld-linux-x86-64.so.2
        7f75bed4d000-7f75bed4e000 rw-p  ...  /lib64/ld-linux-x86-64.so.2
        7f75bed4e000-7f75bed4f000 rw-p  ...
        7fffb3741000-7fffb3762000 rw-p  ...  [stack]
        7fffb377b000-7fffb377d000 r--p  ...  [vvar]
        7fffb377d000-7fffb377f000 r-xp  ...  [vdso]
      
      The change adds a call to the newly created arch_mmap_rnd() in the
      ELF loader, handling ET_DYN ASLR in a region separate from mmap ASLR,
      as was already done on s390.  It also removes
      CONFIG_BINFMT_ELF_RANDOMIZE_PIE, which is no longer needed.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: "David A. Long" <dave.long@linaro.org>
      Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Arun Chandran <achandran@mvista.com>
      Cc: Yann Droneaud <ydroneaud@opteya.com>
      Cc: Min-Hua Chen <orca.chen@gmail.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Alex Smith <alex@alex-smith.me.uk>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Vineeth Vijayan <vvijayan@mvista.com>
      Cc: Jeff Bailey <jeffbailey@google.com>
      Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Behan Webster <behanw@converseincode.com>
      Cc: Ismael Ripoll <iripoll@upv.es>
      Cc: Jan-Simon Möller <dl9pf@gmx.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
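      For reference, a minimal sketch of the ET_DYN base selection this
      change describes, as it would appear in load_elf_binary() in
      fs/binfmt_elf.c (variable names are illustrative; ELF_ET_DYN_BASE,
      arch_mmap_rnd(), and PF_RANDOMIZE come from the commit text):

        if (elf_ex->e_type == ET_DYN) {
                /* PIE executables get their own randomized region,
                 * based on ELF_ET_DYN_BASE rather than mmap_base. */
                load_bias = ELF_ET_DYN_BASE - vaddr;
                if (current->flags & PF_RANDOMIZE)
                        load_bias += arch_mmap_rnd();
                load_bias = ELF_PAGESTART(load_bias);
        }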
    • s390: redefine randomize_et_dyn for ELF_ET_DYN_BASE · c6f5b001
      Kees Cook authored
      In preparation for moving ET_DYN randomization into the ELF loader
      (which requires a static ELF_ET_DYN_BASE), this reimplements s390's
      existing ET_DYN randomization as a call to arch_mmap_rnd().  This
      refactoring results in the same ET_DYN randomization on s390.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: expose arch_mmap_rnd when available · 2b68f6ca
      Kees Cook authored
      When an architecture fully supports randomizing the ELF load location,
      a per-arch mmap_rnd() function is used to find a randomized mmap base.
      In preparation for randomizing the location of ET_DYN binaries
      separately from mmap, this renames and exports these functions as
      arch_mmap_rnd().  It additionally introduces
      CONFIG_ARCH_HAS_ELF_RANDOMIZE to describe this feature on the
      architectures that support it (a superset of
      ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already separates ET_DYN
      ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic).
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: "David A. Long" <dave.long@linaro.org>
      Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Arun Chandran <achandran@mvista.com>
      Cc: Yann Droneaud <ydroneaud@opteya.com>
      Cc: Min-Hua Chen <orca.chen@gmail.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Alex Smith <alex@alex-smith.me.uk>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Vineeth Vijayan <vvijayan@mvista.com>
      Cc: Jeff Bailey <jeffbailey@google.com>
      Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Behan Webster <behanw@converseincode.com>
      Cc: Ismael Ripoll <iripoll@upv.es>
      Cc: Jan-Simon Möller <dl9pf@gmx.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
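      For reference, a hedged sketch of what a per-arch arch_mmap_rnd()
      looks like, modeled on the x86 version of this era (the entropy
      widths are illustrative of 32-bit vs 64-bit tasks):

        unsigned long arch_mmap_rnd(void)
        {
                unsigned long rnd;

                if (mmap_is_ia32())
                        rnd = (unsigned long)get_random_int() % (1 << 8);
                else
                        rnd = (unsigned long)get_random_int() % (1 << 28);

                return rnd << PAGE_SHIFT;   /* entropy in page-sized units */
        }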
    • s390: standardize mmap_rnd() usage · 8e89a356
      Kees Cook authored
      In preparation for splitting out ET_DYN ASLR, this refactors the use of
      mmap_rnd() to be used similarly to arm and x86, and extracts the
      checking of PF_RANDOMIZE.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: standardize mmap_rnd() usage · ed632274
      Kees Cook authored
      In preparation for splitting out ET_DYN ASLR, this refactors the use of
      mmap_rnd() to be used similarly to arm and x86.
      
      (Can mmap ASLR be safely enabled in the legacy mmap case here?  Other
      archs use "mm->mmap_base = TASK_UNMAPPED_BASE + random_factor".)
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mips: extract logic for mmap_rnd() · 1f0569df
      Kees Cook authored
      In preparation for splitting out ET_DYN ASLR, extract the mmap ASLR
      selection into a separate function.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm64: standardize mmap_rnd() usage · dd04cff1
      Kees Cook authored
      In preparation for splitting out ET_DYN ASLR, this refactors the use
      of mmap_rnd() to be used similarly to arm and x86.  It additionally
      enables mmap ASLR on legacy mmap layouts, which appeared to be
      missing on arm64 even though it was already supported on arm.  It
      also removes a copy/pasted declaration of an unused function.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86: standardize mmap_rnd() usage · 82168140
      Kees Cook authored
      In preparation for splitting out ET_DYN ASLR, this refactors the use of
      mmap_rnd() to be used similarly to arm, and extracts the checking of
      PF_RANDOMIZE.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm: factor out mmap ASLR into mmap_rnd · fbbc400f
      Kees Cook authored
      To address the "offset2lib" ASLR weakness[1], this separates ET_DYN ASLR
      from mmap ASLR, as already done on s390.  The architectures that are
      already randomizing mmap (arm, arm64, mips, powerpc, s390, and x86), have
      their various forms of arch_mmap_rnd() made available via the new
      CONFIG_ARCH_HAS_ELF_RANDOMIZE.  For these architectures,
      arch_randomize_brk() is collapsed as well.
      
      This is an alternative to the solutions in:
      https://lkml.org/lkml/2015/2/23/442
      
      I've been able to test x86 and arm, and the buildbot (so far) seems happy
      with building the rest.
      
      [1] http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html
      
      This patch (of 10):
      
      In preparation for splitting out ET_DYN ASLR, this moves the ASLR
      calculations for mmap on ARM into a separate routine, similar to x86.
      This also removes the redundant check of personality (PF_RANDOMIZE is
      already set before calling arch_pick_mmap_layout).
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: "David A. Long" <dave.long@linaro.org>
      Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Arun Chandran <achandran@mvista.com>
      Cc: Yann Droneaud <ydroneaud@opteya.com>
      Cc: Min-Hua Chen <orca.chen@gmail.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Alex Smith <alex@alex-smith.me.uk>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Vineeth Vijayan <vvijayan@mvista.com>
      Cc: Jeff Bailey <jeffbailey@google.com>
      Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Behan Webster <behanw@converseincode.com>
      Cc: Ismael Ripoll <iripoll@upv.es>
      Cc: Jan-Simon Möller <dl9pf@gmx.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs/binfmt_elf.c: fix bug in loading of PIE binaries · a87938b2
      Michael Davidson authored
      With CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE enabled, and a normal top-down
      address allocation strategy, load_elf_binary() will attempt to map a PIE
      binary into an address range immediately below mm->mmap_base.
      
      Unfortunately, load_elf_binary() does not take into account the need
      to allocate sufficient space for the entire binary, which means that,
      while the first PT_LOAD segment is mapped below mm->mmap_base, the
      subsequent PT_LOAD segment(s) end up being mapped above mm->mmap_base
      into the area that is supposed to be the "gap" between the stack and
      the binary.

      Since the size of the "gap" on x86_64 is only guaranteed to be 128MB,
      binaries with data segments larger than 128MB can end up mapping part
      of their data segment over their stack, resulting in corruption of
      the stack (and of the data segment once the binary starts to run).
      
      Any PIE binary with a data segment > 128MB is vulnerable to this although
      address randomization means that the actual gap between the stack and the
      end of the binary is normally greater than 128MB.  The larger the data
      segment of the binary the higher the probability of failure.
      
      Fix this by calculating the total size of the binary in the same way as
      load_elf_interp().
      Signed-off-by: Michael Davidson <md@google.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
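      The fix computes the span of all PT_LOAD segments up front so that
      the first mapping reserves enough room for the whole binary.  A
      sketch of that calculation, modeled on the total_mapping_size()
      helper used by load_elf_interp():

        static unsigned long total_mapping_size(struct elf_phdr *cmds, int nr)
        {
                int i, first = -1, last = -1;

                for (i = 0; i < nr; i++) {
                        if (cmds[i].p_type == PT_LOAD) {
                                last = i;
                                if (first == -1)
                                        first = i;
                        }
                }
                if (first == -1)
                        return 0;

                /* span from the page-aligned start of the first PT_LOAD
                 * segment to the end of the last one */
                return cmds[last].p_vaddr + cmds[last].p_memsz -
                       ELF_PAGESTART(cmds[first].p_vaddr);
        }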
    • mm: memcontrol: let mem_cgroup_move_account() have effect only if MMU enabled · b1b0deab
      Chen Gang authored
      With !MMU, the build reports a warning.  The related warning, from
      allmodconfig under c6x:
      
          CC      mm/memcontrol.o
        mm/memcontrol.c:2802:12: warning: 'mem_cgroup_move_account' defined but not used [-Wunused-function]
         static int mem_cgroup_move_account(struct page *page,
                    ^
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
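      A sketch of the shape of the fix, assuming the mover is simply
      compiled out on !MMU configs (the CONFIG_MMU guard is the obvious
      candidate; treat the exact placement as illustrative):

        #ifdef CONFIG_MMU  /* charge moving is only meaningful with an MMU */
        static int mem_cgroup_move_account(struct page *page,
                                           unsigned int nr_pages,
                                           struct mem_cgroup *from,
                                           struct mem_cgroup *to)
        {
                /* ... charge transfer logic unchanged ... */
        }
        #endif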
    • x86, mm: support huge KVA mappings on x86 · 6b637835
      Toshi Kani authored
      Implement huge KVA mapping interfaces on x86.
      
      On x86, MTRRs can override PAT memory types with a 4KB granularity.
      When using a huge page, MTRRs can override the memory type of the
      huge page, which may lead to a performance penalty.  The processor
      can also behave in an undefined manner if a huge page is mapped to a
      memory range that MTRRs have mapped with multiple different memory
      types.  Therefore, the mapping code falls back to smaller page sizes,
      down to 4KB, when a mapping range is covered by non-WB types of
      MTRRs.  The WB type of MTRRs has no effect on the PAT memory types.
      
      pud_set_huge() and pmd_set_huge() call mtrr_type_lookup() to see if a
      given range is covered by MTRRs.  MTRR_TYPE_WRBACK indicates that the
      range is either covered by WB or not covered and the MTRR default value is
      set to WB.  0xFF indicates that MTRRs are disabled.
      
      HAVE_ARCH_HUGE_VMAP is selected when X86_64 or X86_32 with X86_PAE is
      set.  X86_32 without X86_PAE is not supported, since such a config is
      unlikely to benefit from this feature, and an issue was found in
      testing.
      
      [fengguang.wu@intel.com: ioremap_pud_capable can be static]
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
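      A hedged sketch of the MTRR check described above, as pud_set_huge()
      would implement it (simplified; MTRR_TYPE_WRBACK and the 0xFF "MTRRs
      disabled" value are from the commit text):

        int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
        {
                u8 mtrr;

                /* Refuse a huge mapping when the range is covered by
                 * non-WB MTRRs; 0xFF means MTRRs are disabled. */
                mtrr = mtrr_type_lookup(addr, addr + PUD_SIZE);
                if (mtrr != MTRR_TYPE_WRBACK && mtrr != 0xFF)
                        return 0;  /* caller falls back to smaller pages */

                set_pte((pte_t *)pud,
                        pfn_pte((u64)addr >> PAGE_SHIFT,
                                __pgprot(pgprot_val(prot) | _PAGE_PSE)));
                return 1;
        }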
    • x86, mm: support huge I/O mapping capability I/F · 5d72b4fb
      Toshi Kani authored
      Implement huge I/O mapping capability interfaces for ioremap() on x86.
      
      IOREMAP_MAX_ORDER is defined to PUD_SHIFT on x86/64 and PMD_SHIFT on
      x86/32, which overrides the default value defined in <linux/vmalloc.h>.
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: change vunmap to tear down huge KVA mappings · b9820d8f
      Toshi Kani authored
      Change vunmap_pmd_range() and vunmap_pud_range() to tear down huge
      KVA mappings when they are set.  pud_clear_huge() and
      pmd_clear_huge() return zero when no operation is performed, i.e.
      when a huge page mapping was not used.
      
      These changes are only enabled when CONFIG_HAVE_ARCH_HUGE_VMAP is defined
      on the architecture.
      
      [akpm@linux-foundation.org: use consistent code layout]
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
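      A sketch of how vunmap_pmd_range() uses the new helper; a nonzero
      return from pmd_clear_huge() means a huge mapping was torn down at
      this level, so no PTE-level walk is needed (simplified from the
      commit's description):

        do {
                next = pmd_addr_end(addr, end);
                if (pmd_clear_huge(pmd))
                        continue;   /* huge mapping cleared at PMD level */
                if (pmd_none_or_clear_bad(pmd))
                        continue;
                vunmap_pte_range(pmd, addr, next);
        } while (pmd++, addr = next, addr != end);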
    • mm: change ioremap to set up huge I/O mappings · e61ce6ad
      Toshi Kani authored
      ioremap_pud_range() and ioremap_pmd_range() are changed to create
      huge I/O mappings when their capability is enabled and a request
      meets the required conditions -- both the virtual and physical
      addresses are aligned to the huge page size, and the requested range
      covers at least one full huge page.  When pud_set_huge() or
      pmd_set_huge() returns zero, i.e. no operation is performed, the
      code simply falls back to the next level.
      
      The changes are only enabled when CONFIG_HAVE_ARCH_HUGE_VMAP is defined on
      the architecture.
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
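      A sketch of the PMD-level path described above: when the capability
      is off, the alignment or size conditions fail, or pmd_set_huge()
      declines by returning zero, the code falls through to 4KB PTEs
      (simplified; the PUD-level path is analogous):

        if (ioremap_pmd_enabled() &&
            (next - addr) == PMD_SIZE &&
            IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
                if (pmd_set_huge(pmd, phys_addr + addr, prot))
                        continue;   /* mapped with a single 2MB entry */
        }
        /* fall back to 4KB mappings */
        if (ioremap_pte_range(pmd, addr, next, phys_addr + addr, prot))
                return -ENOMEM;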
    • lib/ioremap.c: add huge I/O map capability interfaces · 0ddab1d2
      Toshi Kani authored
      Add ioremap_pud_enabled() and ioremap_pmd_enabled(), which return 1
      when I/O mappings with pud/pmd are enabled in the kernel.

      ioremap_huge_init() calls arch_ioremap_pud_supported() and
      arch_ioremap_pmd_supported() to initialize the capabilities at
      boot time.

      A new kernel option "nohugeiomap" is also added, so that users can
      disable the huge I/O map capabilities when necessary.
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
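      A sketch of the boot-time capability wiring described above, assuming
      the usual early_param() pattern for the "nohugeiomap" option:

        static int __read_mostly ioremap_pud_capable;
        static int __read_mostly ioremap_pmd_capable;
        static int __read_mostly ioremap_huge_disabled;

        static int __init set_nohugeiomap(char *str)
        {
                ioremap_huge_disabled = 1;
                return 0;
        }
        early_param("nohugeiomap", set_nohugeiomap);

        void __init ioremap_huge_init(void)
        {
                if (ioremap_huge_disabled)
                        return;
                if (arch_ioremap_pud_supported())
                        ioremap_pud_capable = 1;
                if (arch_ioremap_pmd_supported())
                        ioremap_pmd_capable = 1;
        }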
    • mm: change __get_vm_area_node() to use fls_long() · 0f616be1
      Toshi Kani authored
      ioremap() and its related interfaces are used to create I/O mappings to
      memory-mapped I/O devices.  The mapping sizes of the traditional I/O
      devices are relatively small.  Non-volatile memory (NVM), however,
      comes in many gigabytes and will soon reach terabytes.  It is not
      very efficient to create such large I/O mappings with 4KB pages.
      
      This patchset extends the ioremap() interfaces to transparently create I/O
      mappings with huge pages whenever possible.  ioremap() continues to use
      4KB mappings when a huge page does not fit into a requested range.  There
      is no change necessary to the drivers using ioremap().  A requested
      physical address must be aligned to a huge page size (1GB or 2MB on
      x86) to use huge page mapping, though.  Kernel huge I/O mapping will
      improve performance of NVM and other devices with large memory, and
      will reduce the time to create their mappings as well.
      
      On x86, MTRRs can override PAT memory types with a 4KB granularity.
      When using a huge page, MTRRs can override the memory type of the
      huge page, which may lead to a performance penalty.  The processor
      can also behave in an undefined manner if a huge page is mapped to a
      memory range that MTRRs have mapped with multiple different memory
      types.  Therefore, the mapping code falls back to smaller page sizes,
      down to 4KB, when a mapping range is covered by non-WB types of
      MTRRs.  The WB type of MTRRs has no effect on the PAT memory types.
      
      The patchset introduces HAVE_ARCH_HUGE_VMAP, which indicates that the
      arch supports huge KVA mappings for ioremap().  Users may specify a
      new kernel option "nohugeiomap" to disable the huge I/O mapping
      capability of ioremap() when necessary.

      Patches 1-4 change common files to support huge I/O mappings.  There
      is no change in functionality unless HAVE_ARCH_HUGE_VMAP is defined
      on the architecture of the system.

      Patches 5-6 implement the HAVE_ARCH_HUGE_VMAP functions on x86, and
      set HAVE_ARCH_HUGE_VMAP on x86.
      
      This patch (of 6):
      
      __get_vm_area_node() takes an unsigned long size, which is a 64-bit
      value on a 64-bit kernel.  However, fls(size) simply ignores the
      upper 32 bits.  Change it to use fls_long(), which handles the size
      properly.
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
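      The fls() truncation is easy to demonstrate in plain userspace C.
      This standalone sketch mimics the kernel semantics of fls() and
      fls_long() to show how a size above 4GB loses its upper bits:

        #include <stdio.h>

        /* Kernel-style fls(): position of the most significant set bit
         * of an int (1-based), or 0 if no bit is set. */
        static int fls_demo(int x)
        {
                unsigned int v = (unsigned int)x;
                int r = 0;

                while (v) { v >>= 1; r++; }
                return r;
        }

        static int fls_long_demo(unsigned long x)
        {
                int r = 0;

                while (x) { x >>= 1; r++; }
                return r;
        }

        int main(void)
        {
                unsigned long size = 1UL << 40;  /* a 1TB mapping request */

                printf("fls((int)size) = %d\n", fls_demo((int)size)); /* 0 */
                printf("fls_long(size) = %d\n", fls_long_demo(size)); /* 41 */
                return 0;
        }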
    • 42ff2703
      Yaowei Bai authored
    • sparc: clarify __GFP_NOFAIL allocation · f91e8d6d
      Michal Hocko authored
      Commit 920c3ed7 ("[SPARC64]: Add basic infrastructure for MD
      add/remove notification") added __GFP_NOFAIL for the allocation
      request, but it didn't mention why this strict requirement is really
      needed.  The code handled an allocation failure and propagated it
      properly up the call chain, so it was not clear why the flag was
      needed.

      Dave clarified the intention when I tried to remove the flag as
      unnecessary:
      
      : It is a serious failure.
      :
      : If we miss an MDESC update due to this allocation failure, the update
      : is not an event which gets retransmitted so we will lose the updated
      : machine description forever.
      :
      : We really need this allocation to succeed.
      
      So add a comment to clarify the nofail flag and get rid of the
      failure check, because a __GFP_NOFAIL allocation doesn't fail.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Vipul Pandya <vipul@chelsio.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: clarify __GFP_NOFAIL deprecation status · 64775719
      Michal Hocko authored
      __GFP_NOFAIL is documented as a deprecated flag since commit
      478352e7 ("mm: add comment about deprecation of __GFP_NOFAIL").
      
      This has discouraged people from using it, but in some cases an
      opencoded endless loop around the allocator has been used instead.
      The allocator is then not aware of the de facto __GFP_NOFAIL
      allocation, because this information was not communicated properly.
      
      Let's make clear that if the allocation context really cannot afford
      failure because there is no good failure policy then using __GFP_NOFAIL
      is preferable to opencoding the loop outside of the allocator.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Vipul Pandya <vipul@chelsio.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
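      The pattern the commit argues against, next to the preferred
      alternative, in a hedged sketch:

        /* Discouraged: an opencoded endless loop hides the hard
         * requirement from the allocator, which then cannot treat the
         * request accordingly. */
        while (!(ptr = kmalloc(size, GFP_KERNEL)))
                ;

        /* Preferable when there is genuinely no failure policy: the
         * allocator now knows this request must not fail. */
        ptr = kmalloc(size, GFP_KERNEL | __GFP_NOFAIL);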
    • mm: cma: constify and use correct signness in mm/cma.c · ac173824
      Sasha Levin authored
      Constify function parameters and use correct signness where needed.
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Acked-by: Gregory Fong <gregory.0xf0@gmail.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel, cpuset: remove exception for __GFP_THISNODE · 6e276d2a
      David Rientjes authored
      Nothing calls __cpuset_node_allowed() with __GFP_THISNODE set anymore, so
      remove the obscure comment about it and its special-case exception.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Cc: Jarno Rajahalme <jrajahalme@nicira.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, thp: really limit transparent hugepage allocation to local node · 5265047a
      David Rientjes authored
      Commit 077fcf11 ("mm/thp: allocate transparent hugepages on local
      node") restructured alloc_hugepage_vma() with the intent of only
      allocating transparent hugepages locally when there was not an effective
      interleave mempolicy.
      
      alloc_pages_exact_node(), however, does not limit the allocation to
      the single node, but rather prefers it.  This is because
      __GFP_THISNODE, which would cause the node-local nodemask to be
      passed, is not set.  Without it, only a nodemask that prefers the
      local node is passed.
      
      Fix this by passing __GFP_THISNODE and falling back to small pages when
      the allocation fails.
      
      Commit 9f1b868a ("mm: thp: khugepaged: add policy for finding target
      node") suffers from a similar problem for khugepaged, which is also fixed.
      
      Fixes: 077fcf11 ("mm/thp: allocate transparent hugepages on local node")
      Fixes: 9f1b868a ("mm: thp: khugepaged: add policy for finding target node")
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Cc: Jarno Rajahalme <jrajahalme@nicira.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove GFP_THISNODE · 4167e9b2
      David Rientjes authored
      NOTE: this is not about __GFP_THISNODE, this is only about GFP_THISNODE.
      
      GFP_THISNODE is a secret combination of gfp bits that have different
      behavior than expected.  It is a combination of __GFP_THISNODE,
      __GFP_NORETRY, and __GFP_NOWARN and is special-cased in the page
      allocator slowpath to fail without trying reclaim even though it may be
      used in combination with __GFP_WAIT.
      
      An example of the problem this creates: commit e97ca8e5 ("mm: fix
      GFP_THISNODE callers and clarify") fixed up many users of
      GFP_THISNODE that really just wanted __GFP_THISNODE.  The problem
      doesn't end there, however, because even that fix was a no-op for
      alloc_misplaced_dst_page(), which also sets __GFP_NORETRY and
      __GFP_NOWARN, and migrate_misplaced_transhuge_page(), where
      __GFP_NORETRY and __GFP_NOWARN are set in GFP_TRANSHUGE.  Converting
      GFP_THISNODE to __GFP_THISNODE is a no-op in these cases, since the
      page allocator special-cases __GFP_THISNODE && __GFP_NORETRY &&
      __GFP_NOWARN.
      
      It's time to just remove GFP_THISNODE entirely.  We leave __GFP_THISNODE
      to restrict an allocation to a local node, but remove GFP_THISNODE and
      its obscurity.  Instead, we require that a caller clear __GFP_WAIT if it
      wants to avoid reclaim.
      
      This allows the aforementioned functions to actually reclaim as they
      should.  It also enables any future callers that want to do
      __GFP_THISNODE but also __GFP_NORETRY && __GFP_NOWARN to reclaim.  The
      rule is simple: if you don't want to reclaim, then don't set __GFP_WAIT.
      
      Aside: ovs_flow_stats_update() really wants to avoid reclaim as well, so
      it is unchanged.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Cc: Jarno Rajahalme <jrajahalme@nicira.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
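      For reference, the combination being removed and the replacement
      rule, sketched from the commit text (the usage line is illustrative):

        /* The removed "secret combination", special-cased in the page
         * allocator slowpath to fail without trying reclaim: */
        #define GFP_THISNODE  (__GFP_THISNODE | __GFP_NORETRY | __GFP_NOWARN)

        /* New rule: restrict to the local node with __GFP_THISNODE, and
         * clear __GFP_WAIT if reclaim must be avoided. */
        gfp_t gfp = (GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_WAIT;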
    • mm, mempolicy: migrate_to_node should only migrate to node · b360edb4
      David Rientjes authored
      migrate_to_node() is intended to migrate a page from one source node to
      a target node.
      
      Today, migrate_to_node() can end up migrating to any node, not only
      the target node.  This is because the page migration allocator,
      new_node_page(), does not pass __GFP_THISNODE to
      alloc_pages_exact_node().  This causes the target node to be
      preferred but allows fallback to any other node in order of affinity.
      
      Prevent this by allocating with __GFP_THISNODE.  If memory is not
      available, -ENOMEM will be returned as appropriate.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
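      A sketch of the fixed allocation callback, assuming the usual
      new_node_page() shape (names per the commit text):

        static struct page *new_node_page(struct page *page,
                                          unsigned long node, int **x)
        {
                /* __GFP_THISNODE forbids fallback to other nodes: the
                 * migration fails with -ENOMEM instead of landing the
                 * page on an unexpected node. */
                return alloc_pages_exact_node(node,
                                GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0);
        }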
    • cleancache: remove limit on the number of cleancache enabled filesystems · 3cb29d11
      Vladimir Davydov authored
      The limit equals 32 and is imposed by the number of entries in the
      fs_poolid_map and shared_fs_poolid_map.  Nowadays it is insufficient,
      because with containers on board a Linux host can have hundreds of
      active fs mounts.
      
      These maps were introduced by commit 49a9ab81 ("mm: cleancache:
      lazy initialization to allow tmem backends to build/run as modules") in
      order to allow compiling cleancache drivers as modules.  Real pool ids
      are stored in these maps while super_block->cleancache_poolid points to
      an entry in the map, so that on cleancache registration we can walk over
      all (if there are <= 32 of them, of course) cleancache-enabled super
      blocks and assign real pool ids.
      
      Actually, there is no need for these maps at all, because we can
      iterate over all super blocks directly using iterate_supers.  This is
      not racy, because cleancache_init_ops is called from mount_fs with
      super_block->s_umount held for writing, while iterate_supers takes
      this semaphore for reading.  So if we call iterate_supers after
      setting cleancache_ops, all super blocks that had been created before
      cleancache_register_ops was called will be assigned pool ids by the
      action function of iterate_supers, while all newer super blocks will
      receive them in cleancache_init_fs.
      
      This patch therefore removes the maps and hence the artificial limit on
      the number of cleancache enabled filesystems.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
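      A hedged sketch of the registration-time walk described above,
      assuming an action function handed to iterate_supers() (the
      CLEANCACHE_NO_BACKEND* markers are illustrative of "mounted before
      the backend appeared"):

        static void cleancache_register_ops_sb(struct super_block *sb,
                                               void *unused)
        {
                /* Assign a real pool id to filesystems mounted before the
                 * backend registered; newer mounts get one directly in
                 * cleancache_init_fs(). */
                switch (sb->cleancache_poolid) {
                case CLEANCACHE_NO_BACKEND:
                        __cleancache_init_fs(sb);
                        break;
                case CLEANCACHE_NO_BACKEND_SHARED:
                        __cleancache_init_shared_fs(sb);
                        break;
                }
        }

        /* in cleancache_register_ops(), after setting cleancache_ops: */
        iterate_supers(cleancache_register_ops_sb, NULL);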
    • cleancache: forbid overriding cleancache_ops · 53d85c98
      Vladimir Davydov authored
      Currently, cleancache_register_ops returns the previous value of
      cleancache_ops to allow chaining.  However, chaining, as it is
      implemented now, is extremely dangerous due to possible pool id
      collisions.  Suppose a new cleancache driver is registered after the
      previous one assigned an id to a super block.  If the new driver
      assigns the same id to another super block, which is perfectly
      possible, we will have two different filesystems using the same id.
      Whether or not the new driver implements chaining, we are likely to
      get data corruption with such a configuration eventually.
      
      This patch therefore disables the ability to override cleancache_ops
      altogether, as potentially dangerous.  If there is already a
      cleancache driver registered, all further calls to
      cleancache_register_ops will return EBUSY.  Since no user of
      cleancache implements chaining, we only need to make minor changes
      to the code outside the cleancache core.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cleancache: zap uuid arg of cleancache_init_shared_fs · 9de16262
      Vladimir Davydov authored
      Use super_block->s_uuid instead.  Every shared filesystem using
      cleancache must now initialize super_block->s_uuid before calling
      cleancache_init_shared_fs.  The only one in the tree, ocfs2, already
      meets this requirement.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ocfs2: copy fs uuid to superblock · 58be19dc
      Vladimir Davydov authored
      Currently, the maximal number of cleancache-enabled filesystems is
      32, which is insufficient nowadays, because a Linux host can have
      hundreds of containers on board, each of which might want its own
      filesystem.  This patch set targets removing this limitation - see
      patch 4 for more details.  Patches 1-3 prepare the code for this
      change.
      
      This patch (of 4):
      
      This will allow us to remove the uuid argument from
      cleancache_init_shared_fs.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor do_wp_page handling of shared vma into a function · 93e478d4
      Shachar Raindel authored
      The do_wp_page function is extremely long.  Extract the logic for
      handling a page belonging to a shared vma into a function of its own.
      This helps the readability of the code, without making any functional
      change to it.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor do_wp_page, extract the page copy flow · 2f38ab2c
      Shachar Raindel authored
      In some cases, do_wp_page had to copy the page suffering a write
      fault to a new location.  When the function logic decided to do this,
      it did so by jumping with a "goto" operation to the relevant code
      block.  This made the code really hard to understand.  It is also
      against the kernel coding style guidelines.

      This patch extracts the page copy and page table update logic to a
      separate function.  It also cleans up the naming, from "gotten" to
      "wp_page_copy", and adds a few comments.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor do_wp_page - rewrite the unlock flow · 28766805
      Shachar Raindel authored
      When do_wp_page is ending, in several cases it needs to unlock the pages
      and ptls it was accessing.
      
      Currently, this logic is "called" by using a goto jump.  This makes
      following the control flow of the function harder.  Readability is
      further hampered by the unlock case containing a large amount of
      logic needed in only one of the three cases.

      Using goto for cleanup is generally allowed.  However, moving the
      trivial unlocking flows to the relevant call sites allows deeper
      refactoring in the next patch.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor do_wp_page, extract the reuse case · 4e047f89
      Shachar Raindel authored
      Currently do_wp_page contains 265 code lines.  It also contains 9 goto
      statements, of which 5 are targeting labels which are not cleanup
      related.  This makes the function extremely difficult to understand.
      
      The following patches are an attempt at breaking the function to its
      basic components, and making it easier to understand.
      
      The patches are straightforward function extractions from do_wp_page.
      As we extract functions, we remove unneeded parameters and simplify
      the code as much as possible.  However, the functionality is supposed
      to remain completely unchanged.  The patches also attempt to document
      the functionality of each extracted function.  In patch 2, we split
      the unlock logic so that each flow contains only the logic relevant
      to its specific needs, instead of having a huge number of conditional
      decisions in a single unlock flow.
      
      This patch (of 4):
      
      When do_wp_page is ending, in several cases it needs to reuse the existing
      page.  This is achieved by making the page table writable, and possibly
      updating the page-cache state.
      
      Currently, this logic is "called" by using a goto jump.  This makes
      following the control flow of the function harder.  It also runs
      against the coding style guidelines on goto usage.

      As the code can easily be refactored into a specialized function,
      refactor it out and simplify the code flow in do_wp_page.
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: do not add nr_pmds into mm_struct if PMD is folded · 5a3fbef3
      Kirill A. Shutemov authored
      CONFIG_PGTABLE_LEVELS is now available on every architecture and we can
      use it to check if we need to add nr_pmds into mm_struct.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
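      The resulting conditional in the mm_struct definition, sketched from
      the commit text:

        struct mm_struct {
                /* ... */
        #if CONFIG_PGTABLE_LEVELS > 2
                atomic_long_t nr_pmds;  /* PMD page table pages; pointless
                                         * to track when the PMD level is
                                         * folded away */
        #endif
                /* ... */
        };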
    • mm: define default PGTABLE_LEVELS to two · 235a8f02
      Kirill A. Shutemov authored
      By this time all architectures which support more than two page table
      levels should be covered.  This patch adds a default definition of
      PGTABLE_LEVELS equal to 2.

      We also add an assert to detect inconsistency between
      CONFIG_PGTABLE_LEVELS and __PAGETABLE_PMD_FOLDED/__PAGETABLE_PUD_FOLDED.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
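      A sketch of the default definition and one plausible form of the
      folding assert (the Kconfig fragment follows the commit's
      description; the preprocessor check is illustrative):

        config PGTABLE_LEVELS
                int
                default 2

        /* and, on the C side, something of this shape: */
        #if defined(__PAGETABLE_PMD_FOLDED) && CONFIG_PGTABLE_LEVELS > 2
        #error "CONFIG_PGTABLE_LEVELS and __PAGETABLE_PMD_FOLDED disagree"
        #endif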
    • x86: expose number of page table levels on Kconfig level · 98233368
      Kirill A. Shutemov authored
      We want to use the number of page table levels to define mm_struct.
      Let's expose it as CONFIG_PGTABLE_LEVELS.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • um: expose number of page table levels · 6b8ce2a1
      Kirill A. Shutemov authored
      We want to use the number of page table levels to define mm_struct.
      Let's expose it as CONFIG_PGTABLE_LEVELS.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Richard Weinberger <richard@nod.at>
      Cc: Jeff Dike <jdike@addtoit.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>