  1. 23 Feb, 2019 1 commit
  2. 04 Jan, 2019 1 commit
    • mm: treewide: remove unused address argument from pte_alloc functions · 4cf58924
      Joel Fernandes (Google) authored
      Patch series "Add support for fast mremap".
      
      This series speeds up the mremap(2) syscall by copying page tables at
      the PMD level even for non-THP systems.  There is a concern that the
      extra 'address' argument that mremap passes to pte_alloc may do
      something subtly architecture-related in the future that would make
      the scheme not work.  We also find that there is no point in passing
      the 'address' to pte_alloc since it's unused.  This patch therefore
      removes the argument tree-wide, resulting in a nice negative diff,
      while also ensuring along the way that the enabled architectures do
      not do anything funky with the 'address' argument that would go
      unnoticed by the optimization.
      
      Build and boot tested on x86-64.  Build tested on arm64.  The config
      enablement patch for arm64 will be posted in the future after more
      testing.
      
      The changes were obtained by applying the following Coccinelle script.
      (Thanks to Julia for answering all the Coccinelle questions!)
      The following fix-ups were done manually:
      * Removal of the address argument from pte_fragment_alloc
      * Removal of the pte_alloc_one_fast definitions from m68k and microblaze.
      
      // Options: --include-headers --no-includes
      // Note: I split the 'identifier fn' line, so if you are manually
      // running it, please unsplit it so it runs for you.
      
      virtual patch
      
      @pte_alloc_func_def depends on patch exists@
      identifier E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      type T2;
      @@
      
       fn(...
      - , T2 E2
       )
       { ... }
      
      @pte_alloc_func_proto_noarg depends on patch exists@
      type T1, T2, T3, T4;
      identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1, T2);
      + T3 fn(T1);
      |
      - T3 fn(T1, T2, T4);
      + T3 fn(T1, T2);
      )
      
      @pte_alloc_func_proto depends on patch exists@
      identifier E1, E2, E4;
      type T1, T2, T3, T4;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1 E1, T2 E2);
      + T3 fn(T1 E1);
      |
      - T3 fn(T1 E1, T2 E2, T4 E4);
      + T3 fn(T1 E1, T2 E2);
      )
      
      @pte_alloc_func_call depends on patch exists@
      expression E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
       fn(...
      -,  E2
       )
      
      @pte_alloc_macro depends on patch exists@
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      identifier a, b, c;
      expression e;
      position p;
      @@
      
      (
      - #define fn(a, b, c) e
      + #define fn(a, b) e
      |
      - #define fn(a, b) e
      + #define fn(a) e
      )
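
      To make the effect concrete, here is a hedged before/after sketch of a
      typical pte_alloc_one_kernel() (illustrative only, not lifted from any
      particular architecture):

      /* Before: the 'address' argument is accepted but never used. */
      pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
      {
              return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
      }

      /* After: the unused argument is gone, and the call sites lose the
       * matching argument via the script above. */
      pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
      {
              return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
      }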
      
      Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Julia Lawall <Julia.Lawall@lip6.fr>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 19 Dec, 2018 1 commit
    • powerpc: implement CONFIG_DEBUG_VIRTUAL · 6bf752da
      Christophe Leroy authored
      This patch implements CONFIG_DEBUG_VIRTUAL to warn about
      incorrect use of virt_to_phys() and page_to_phys().
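
      Roughly speaking, the conversion helpers gain a check along these lines
      (a simplified sketch, not the exact powerpc hunk):

      static inline unsigned long virt_to_phys(volatile void *address)
      {
              /* Warn when a non-linear-map address is converted. */
              WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) &&
                      !virt_addr_valid(address));
              return __pa((unsigned long)address);
      }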
      
      Below is the result of test_debug_virtual:
      
      [    1.438746] WARNING: CPU: 0 PID: 1 at ./arch/powerpc/include/asm/io.h:808 test_debug_virtual_init+0x3c/0xd4
      [    1.448156] CPU: 0 PID: 1 Comm: swapper Not tainted 4.20.0-rc5-00560-g6bfb52e23a00-dirty #532
      [    1.457259] NIP:  c066c550 LR: c0650ccc CTR: c066c514
      [    1.462257] REGS: c900bdb0 TRAP: 0700   Not tainted  (4.20.0-rc5-00560-g6bfb52e23a00-dirty)
      [    1.471184] MSR:  00029032 <EE,ME,IR,DR,RI>  CR: 48000422  XER: 20000000
      [    1.477811]
      [    1.477811] GPR00: c0650ccc c900be60 c60d0000 00000000 006000c0 c9000000 00009032 c7fa0020
      [    1.477811] GPR08: 00002400 00000001 09000000 00000000 c07b5d04 00000000 c00037d8 00000000
      [    1.477811] GPR16: 00000000 00000000 00000000 00000000 c0760000 c0740000 00000092 c0685bb0
      [    1.477811] GPR24: c065042c c068a734 c0685b8c 00000006 00000000 c0760000 c075c3c0 ffffffff
      [    1.512711] NIP [c066c550] test_debug_virtual_init+0x3c/0xd4
      [    1.518315] LR [c0650ccc] do_one_initcall+0x8c/0x1cc
      [    1.523163] Call Trace:
      [    1.525595] [c900be60] [c0567340] 0xc0567340 (unreliable)
      [    1.530954] [c900be90] [c0650ccc] do_one_initcall+0x8c/0x1cc
      [    1.536551] [c900bef0] [c0651000] kernel_init_freeable+0x1f4/0x2cc
      [    1.542658] [c900bf30] [c00037ec] kernel_init+0x14/0x110
      [    1.547913] [c900bf40] [c000e1d0] ret_from_kernel_thread+0x14/0x1c
      [    1.553971] Instruction dump:
      [    1.556909] 3ca50100 bfa10024 54a5000e 3fa0c076 7c0802a6 3d454000 813dc204 554893be
      [    1.564566] 7d294010 7d294910 90010034 39290001 <0f090000> 7c3e0b78 955e0008 3fe0c062
      [    1.572425] ---[ end trace 6f6984225b280ad6 ]---
      [    1.577467] PA: 0x09000000 for VA: 0xc9000000
      [    1.581799] PA: 0x061e8f50 for VA: 0xc61e8f50
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 04 Dec, 2018 1 commit
  5. 26 Nov, 2018 1 commit
  6. 31 Oct, 2018 1 commit
    • memblock: rename memblock_alloc{_nid,_try_nid} to memblock_phys_alloc* · 9a8dd708
      Mike Rapoport authored
      Make it explicit that the caller gets a physical address rather than a
      virtual one.
      
      This will also allow using the memblock_alloc prefix for memblock
      allocations returning a virtual address, which is done in the following
      patches.
      
      The conversion is done using the following semantic patch:
      
      @@
      expression e1, e2, e3;
      @@
      (
      - memblock_alloc(e1, e2)
      + memblock_phys_alloc(e1, e2)
      |
      - memblock_alloc_nid(e1, e2, e3)
      + memblock_phys_alloc_nid(e1, e2, e3)
      |
      - memblock_alloc_try_nid(e1, e2, e3)
      + memblock_phys_alloc_try_nid(e1, e2, e3)
      )
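
      As a hypothetical caller (not part of this patch), the renamed API now
      reads naturally as returning a physical address:

      /* 'size' and setup_table() are made up for illustration. */
      phys_addr_t pa = memblock_phys_alloc(size, SMP_CACHE_BYTES);
      if (!pa)
              panic("out of memory");
      setup_table(__va(pa));  /* convert to a virtual address before use */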
      
      Link: http://lkml.kernel.org/r/1536927045-23536-7-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 14 Oct, 2018 4 commits
  8. 13 Oct, 2018 1 commit
  9. 31 Mar, 2018 2 commits
  10. 16 Jan, 2018 1 commit
    • powerpc/mm: extend _PAGE_PRIVILEGED to all CPUs · 812fadcb
      Christophe Leroy authored
      Commit ac29c640 ("powerpc/mm: Replace _PAGE_USER with
      _PAGE_PRIVILEGED") introduced _PAGE_PRIVILEGED for BOOK3S/64.

      This patch generalises _PAGE_PRIVILEGED to all CPUs, allowing each
      platform to have either _PAGE_PRIVILEGED or _PAGE_USER or both.

      PPC_8xx has a _PAGE_SHARED flag which is set for, and only for, all
      non-user pages. Let's rename it _PAGE_PRIVILEGED to remove confusion,
      as it has nothing to do with Linux shared pages.

      On BookE, there's a _PAGE_BAP_SR bit which has to be set for kernel
      pages: defining _PAGE_PRIVILEGED as _PAGE_BAP_SR will make this
      generic.
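
      For the BookE case, a minimal sketch of the idea (not necessarily the
      exact hunk) is:

      /* Kernel pages need _PAGE_BAP_SR on BookE, so expressing
       * _PAGE_PRIVILEGED in terms of it lets generic code set one flag. */
      #define _PAGE_PRIVILEGED        _PAGE_BAP_SR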
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  11. 04 Oct, 2017 1 commit
    • powerpc/mm: Call flush_tlb_kernel_range with interrupts enabled · 7c6a4f3b
      Guenter Roeck authored
      flush_tlb_kernel_range() may call smp_call_function_many() which expects
      interrupts to be enabled. This results in a traceback.
      
      WARNING: CPU: 0 PID: 1 at kernel/smp.c:416 smp_call_function_many+0xcc/0x2fc
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc1-00009-g0666f560 #1
      task: cf830000 task.stack: cf82e000
      NIP:  c00a93c8 LR: c00a9634 CTR: 00000001
      REGS: cf82fde0 TRAP: 0700   Not tainted  (4.14.0-rc1-00009-g0666f560)
      MSR:  00021000 <CE,ME>  CR: 24000082  XER: 00000000
      
      GPR00: c00a9634 cf82fe90 cf830000 c050ad3c c0015a54 00000000 00000001 00000001
      GPR08: 00000001 00000000 00000000 cf82e000 24000084 00000000 c0003150 00000000
      GPR16: 00000000 00000000 00000000 00000000 00000000 00000001 00000000 c0510000
      GPR24: 00000000 c0015a54 00000000 c050ad3c c051823c c050ad3c 00000025 00000000
      NIP [c00a93c8] smp_call_function_many+0xcc/0x2fc
      LR [c00a9634] smp_call_function+0x3c/0x50
      Call Trace:
      [cf82fe90] [00000010] 0x10 (unreliable)
      [cf82fed0] [c00a9634] smp_call_function+0x3c/0x50
      [cf82fee0] [c0015d2c] flush_tlb_kernel_range+0x20/0x38
      [cf82fef0] [c001524c] mark_initmem_nx+0x154/0x16c
      [cf82ff20] [c001484c] free_initmem+0x20/0x4c
      [cf82ff30] [c000316c] kernel_init+0x1c/0x108
      [cf82ff40] [c000f3a8] ret_from_kernel_thread+0x5c/0x64
      Instruction dump:
      7c0803a6 7d808120 38210040 4e800020 3d20c052 812981a0 2f890000 40beffac
      3d20c051 8929ac64 2f890000 40beff9c <0fe00000> 4bffff94 7fc3f378 7f64db78
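
      A hedged sketch of the pattern the fix implies (hypothetical helper,
      not the literal patch): do the PTE updates with interrupts off, but
      defer the flush until interrupts are enabled again.

      local_irq_save(flags);
      change_kernel_ptes(start, end);      /* hypothetical helper */
      local_irq_restore(flags);
      /* May IPI other CPUs via smp_call_function_many(), so it must run
       * with interrupts enabled. */
      flush_tlb_kernel_range(start, end);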
      
      Fixes: 3184cc4b ("powerpc/mm: Fix kernel RAM protection after freeing ...")
      Fixes: e611939f ("powerpc/mm: Ensure change_page_attr() doesn't ...")
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  12. 15 Aug, 2017 5 commits
  13. 05 Jun, 2017 2 commits
  14. 02 Jun, 2017 1 commit
  15. 10 Dec, 2016 1 commit
    • powerpc: port 64 bits pgtable_cache to 32 bits · 9b081e10
      Christophe Leroy authored
      Today powerpc64 uses a set of pgtable_caches, while powerpc32 uses
      standard pages when using 4k pages and a single pgtable_cache when
      using other page sizes.

      In preparation for implementing huge pages on the 8xx, this patch
      replaces the specific powerpc32 handling with the 64-bit approach.
      
      This is done by:
      * moving the 64-bit pgtable_cache_add() and pgtable_cache_init()
      into a new file called init-common.c
      * modifying pgtable_cache_init() to also handle the case
      without PMD
      * removing the 32-bit version of pgtable_cache_add() and
      pgtable_cache_init()
      * copying the related header contents from 64-bit into both the
      book3s/32 and nohash/32 header files
      
      On the 8xx, the following cache sizes will be used:
      * 4k pages mode:
      - PGT_CACHE(10) for PGD
      - PGT_CACHE(3) for 512k hugepage tables
      * 16k pages mode:
      - PGT_CACHE(6) for PGD
      - PGT_CACHE(7) for 512k hugepage tables
      - PGT_CACHE(3) for 8M hugepage tables
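
      As a rough illustration of the resulting allocation path (a simplified
      sketch, not the exact powerpc32 code):

      /* Once the caches exist, a 32-bit PGD is carved out of the matching
       * kmem_cache instead of being backed by full pages. */
      pgd_t *pgd_alloc(struct mm_struct *mm)
      {
              return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE), GFP_KERNEL);
      }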
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Scott Wood <oss@buserror.net>
  16. 02 Aug, 2016 1 commit
  17. 25 Jun, 2016 1 commit
    • tree wide: get rid of __GFP_REPEAT for order-0 allocations part I · 32d6bd90
      Michal Hocko authored
      This is the third version of the patchset previously sent [1].  I have
      basically only rebased it on top of the 4.7-rc1 tree and dropped "dm: get
      rid of superfluous gfp flags", which went through the dm tree.  I am
      sending it now because it is tree-wide and the chances for conflicts are
      reduced considerably when we target rc2.  I plan to send the next step,
      renaming the flag and moving to a better semantic, later during this
      release cycle, so we will hopefully have the new semantic ready for the
      4.8 merge window.
      
      Motivation:
      
      While working on something unrelated I've checked the current usage of
      __GFP_REPEAT in the tree.  It seems that a majority of the usage is, and
      always has been, bogus, because __GFP_REPEAT has always been about costly
      high-order allocations while we are using it for order-0 or very small
      orders very often.  It seems that a big pile of them is just copy&paste
      from when code was adapted from one arch to another.
      
      I think it makes some sense to get rid of them because they are just
      making the semantic more unclear.  Please note that __GFP_REPEAT is
      documented as

       * __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
       * _might_ fail.  This depends upon the particular VM implementation.

      while !costly requests have basically a nofail semantic.  So one could
      reasonably expect that an order-0 request with __GFP_REPEAT will not loop
      forever.  This is not implemented right now though.
      
      I would like to move on with __GFP_REPEAT and define a better semantic
      for it.
      
        $ git grep __GFP_REPEAT origin/master | wc -l
        111
        $ git grep __GFP_REPEAT | wc -l
        36
      
      So we are down to about a third after this patch series.  The remaining
      places really seem to be relying on __GFP_REPEAT due to large allocation
      requests.  This still needs some double checking, which I will do later
      after all the simple ones are sorted out.
      
      I am touching a lot of arch-specific code here and I hope I got it right,
      but as a matter of fact I didn't even compile-test some archs as I do not
      have cross compilers for them.  The patches should be quite trivial to
      review for stupid compile mistakes, though.  The tricky parts are usually
      hidden by macro definitions, and that's where I would appreciate help
      from the arch maintainers.
      
      [1] http://lkml.kernel.org/r/1461849846-27209-1-git-send-email-mhocko@kernel.org
      
      This patch (of 19):
      
      __GFP_REPEAT has a rather weak semantic, but since it was introduced
      around 2.6.12 it has been ignored for low-order allocations.  Yet we
      have the full kernel tree with its usage for apparently order-0
      allocations.  This is really confusing because __GFP_REPEAT is
      explicitly documented to allow allocation failures, which is a weaker
      semantic than the current order-0 has (basically nofail).
      
      Let's simply drop __GFP_REPEAT from those places.  This will allow us to
      identify the places which really need the allocator to retry harder, and
      to formulate a more specific semantic for what the flag is actually
      supposed to do.
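
      The per-site change is mechanical; a hedged example of the kind of hunk
      this produces (illustrative, not taken from a specific file):

      /* An order-0 page table allocation: __GFP_REPEAT buys nothing here. */
      -       pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
      +       pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);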
      
      Link: http://lkml.kernel.org/r/1464599699-30131-2-git-send-email-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com> [for tile]
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: John Crispin <blogic@openwrt.org>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 12 Mar, 2016 1 commit
    • powerpc32: PAGE_EXEC required for inittext · 060ef9d8
      Christophe Leroy authored
      PAGE_EXEC is required for inittext, otherwise CONFIG_DEBUG_PAGEALLOC
      ends up with an Oops:
      
      [    0.000000] Inode-cache hash table entries: 8192 (order: 1, 32768 bytes)
      [    0.000000] Sorting __ex_table...
      [    0.000000] bootmem::free_all_bootmem_core nid=0 start=0 end=2000
      [    0.000000] Unable to handle kernel paging request for instruction fetch
      [    0.000000] Faulting instruction address: 0xc045b970
      [    0.000000] Oops: Kernel access of bad area, sig: 11 [#1]
      [    0.000000] PREEMPT DEBUG_PAGEALLOC CMPC885
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.18.25-local-dirty #1673
      [    0.000000] task: c04d83d0 ti: c04f8000 task.ti: c04f8000
      [    0.000000] NIP: c045b970 LR: c045b970 CTR: 0000000a
      [    0.000000] REGS: c04f9ea0 TRAP: 0400   Not tainted  (3.18.25-local-dirty)
      [    0.000000] MSR: 08001032 <ME,IR,DR,RI>  CR: 39955d35  XER: a000ff40
      [    0.000000]
      GPR00: c045b970 c04f9f50 c04d83d0 00000000 ffffffff c04dcdf4 00000048 c04f6b10
      GPR08: c04f6ab0 00000001 c0563488 c04f6ab0 c04f8000 00000000 00000000 b6db6db7
      GPR16: 00003474 00000180 00002000 c7fec000 00000000 000003ff 00000176 c0415014
      GPR24: c0471018 c0414ee8 c05304e8 c03aeaac c0510000 c0471018 c0471010 00000000
      [    0.000000] NIP [c045b970] free_all_bootmem+0x164/0x228
      [    0.000000] LR [c045b970] free_all_bootmem+0x164/0x228
      [    0.000000] Call Trace:
      [    0.000000] [c04f9f50] [c045b970] free_all_bootmem+0x164/0x228 (unreliable)
      [    0.000000] [c04f9fa0] [c0454044] mem_init+0x3c/0xd0
      [    0.000000] [c04f9fb0] [c045080c] start_kernel+0x1f4/0x390
      [    0.000000] [c04f9ff0] [c0002214] start_here+0x38/0x98
      [    0.000000] Instruction dump:
      [    0.000000] 2f150000 7f968840 72a90001 3ad60001 56b5f87e 419a0028 419e0024 41a20018
      [    0.000000] 807cc20c 38800000 7c638214 4bffd2f5 <3a940001> 3a100024 4bffffc8 7e368b78
      [    0.000000] ---[ end trace dc8fa200cb88537f ]---
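
      A hedged sketch of the kind of change implied (hypothetical helper
      names, not the literal hunk): when choosing PTE flags for the linear
      mapping, treat init text like kernel text so it keeps execute
      permission.

      ktext = in_kernel_text(v) || in_init_text(v);   /* hypothetical helpers */
      map_kernel_page(v, p, ktext ? PAGE_KERNEL_TEXT : PAGE_KERNEL);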
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Scott Wood <oss@buserror.net>
  19. 11 Mar, 2016 2 commits
  20. 10 Apr, 2015 2 commits
  21. 07 Apr, 2015 1 commit
  22. 23 Mar, 2015 1 commit
  23. 30 Jan, 2015 3 commits
  24. 13 Dec, 2014 1 commit
    • mm/debug-pagealloc: make debug-pagealloc boottime configurable · 031bc574
      Joonsoo Kim authored
      Now, we have prepared to avoid using debug-pagealloc at boot time.  So
      introduce a new kernel parameter to disable debug-pagealloc at boot
      time, and make the related functions be disabled in this case.

      The only non-intuitive part is the change to the guard page functions.
      Because guard pages are effective only if debug-pagealloc is enabled,
      turning them off along with debug-pagealloc is a reasonable thing to do.
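
      The runtime switch is driven from the kernel command line; it looks
      roughly like this (the kernel must be built with CONFIG_DEBUG_PAGEALLOC;
      see Documentation/kernel-parameters.txt for the authoritative
      description):

      # append to the boot command line to turn the checks on at boot
      debug_pagealloc=on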
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Jungsoo Son <jungsoo.son@lge.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 09 Nov, 2014 1 commit
  26. 28 Jul, 2014 1 commit
  27. 09 Jan, 2014 1 commit
    • powerpc: add barrier after writing kernel PTE · 47ce8af4
      Scott Wood authored
      There is no barrier between something like ioremap() writing to
      a PTE, and returning the value to a caller that may then store the
      pointer in a place that is visible to other CPUs.  Such callers
      generally don't perform barriers of their own.
      
      Even if callers of ioremap() and similar things did use barriers,
      the most logical choice would be smp_wmb(), which is not
      architecturally sufficient when BookE hardware tablewalk is used.  A
      full sync is specified by the architecture.
      
      For userspace mappings, OTOH, we generally already have an lwsync due
      to locking, and if we occasionally take a spurious fault due to not
      having a full sync with hardware tablewalk, it will not be fatal
      because we will retry rather than oops.
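
      A minimal sketch of the resulting rule (simplified, not the exact diff):
      after installing a kernel PTE, issue a full barrier before the mapping
      can be handed out.

      /* Publish the new kernel PTE with a full "sync" so BookE hardware
       * tablewalk on other CPUs sees it; lwsync/smp_wmb() is not
       * architecturally sufficient here. */
      __set_pte_at(&init_mm, va, ptep, pte, 0);
      mb();   /* full sync before the ioremap() pointer is returned */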
      Signed-off-by: Scott Wood <scottwood@freescale.com>