  1. 13 Sep, 2019 3 commits
  2. 06 Sep, 2019 1 commit
  3. 28 Aug, 2019 2 commits
  4. 27 Aug, 2019 1 commit
  5. 21 Aug, 2019 2 commits
  6. 10 Aug, 2019 1 commit
    • dma-mapping: fix page attributes for dma_mmap_* · 33dcb37c
      Christoph Hellwig authored
      All the way back to introducing dma_common_mmap we've defaulted to
      marking the pages as uncached.  But this is wrong for DMA coherent
      devices.  Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment,
      as that flag is only treated specially on the alloc side for
      non-coherent devices.
      
      Introduce a new dma_pgprot helper that deals with the check for coherent
      devices so that only the remapping cases ever reach arch_dma_mmap_pgprot
      and we thus ensure no aliasing of page attributes happens, which makes
      the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the
      remaining ones.
      
      Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
      we'll phase it out soon.
      
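      A sketch of the shape the new helper takes, based on the description
      above (the exact coherent/attrs checks are in the patch itself):

        pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
        {
                /* Coherent devices: leave the attributes cacheable. */
                if (dev_is_dma_coherent(dev))
                        return prot;
                /* Only the remapping cases reach the arch hook. */
                if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_MMAP_PGPROT))
                        return arch_dma_mmap_pgprot(dev, prot, attrs);
                return pgprot_noncached(prot);
        }
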
      Fixes: 64ccc9c0 ("common: dma-mapping: add support for generic dma_mmap_* calls")
      Reported-by: Shawn Anastasio <shawn@anastas.io>
      Reported-by: Gavin Li <git@thegavinli.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
  7. 05 Aug, 2019 1 commit
  8. 19 Jul, 2019 1 commit
    • powerpc/dma: Fix invalid DMA mmap behavior · b4fc36e6
      Shawn Anastasio authored
      The refactor of powerpc DMA functions in commit 6666cc17
      ("powerpc/dma: remove dma_nommu_mmap_coherent") incorrectly
      changes the way DMA mappings are handled on powerpc.
      Since this change, all mapped pages are marked as cache-inhibited
      through the default implementation of arch_dma_mmap_pgprot.
      This differs from the previous behavior of only marking pages
      in noncoherent mappings as cache-inhibited and has resulted in
      sporadic system crashes in certain hardware configurations and
      workloads (see Bugzilla).
      
      This commit restores the previous correct behavior by providing
      an implementation of arch_dma_mmap_pgprot that only marks
      pages in noncoherent mappings as cache-inhibited. As this behavior
      should be universal for all powerpc platforms, a new file,
      dma-generic.c, was created to store it.
      
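      A sketch of the restored behavior (assuming the generic
      dev_is_dma_coherent() and pgprot_noncached() helpers):

        pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
                                      unsigned long attrs)
        {
                /* Only noncoherent mappings become cache-inhibited. */
                if (!dev_is_dma_coherent(dev))
                        return pgprot_noncached(prot);
                return prot;
        }
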
      Fixes: 6666cc17 ("powerpc/dma: remove dma_nommu_mmap_coherent")
      Cc: stable@vger.kernel.org # v5.1+
      Signed-off-by: Shawn Anastasio <shawn@anastas.io>
      Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20190717235437.12908-1-shawn@anastas.io
  9. 17 Jul, 2019 1 commit
  10. 12 Jul, 2019 2 commits
  11. 04 Jul, 2019 3 commits
  12. 03 Jul, 2019 1 commit
    • powerpc: Fix compile issue with force DAWR · a278e7ea
      Michael Neuling authored
      If you compile with KVM but without CONFIG_HAVE_HW_BREAKPOINT you fail
      at linking with:
        arch/powerpc/kvm/book3s_hv_rmhandlers.o:(.text+0x708): undefined reference to `dawr_force_enable'
      
      This was caused by commit c1fe190c ("powerpc: Add force enable of
      DAWR on P9 option").
      
      This moves a bunch of code around to fix this: much of the DAWR code
      moves into a new file, and a new CONFIG_PPC_DAWR option controls
      compiling it (see the sketch below).
      
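      The build wiring this implies, in sketch form (the dawr.o object name
      follows the description; the Kconfig default shown is an assumption):

        # arch/powerpc/Kconfig (sketch)
        config PPC_DAWR
                bool
                default y if PPC64    # assumed; see the patch for the real logic

        # arch/powerpc/kernel/Makefile (sketch)
        obj-$(CONFIG_PPC_DAWR) += dawr.o
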
      Fixes: c1fe190c ("powerpc: Add force enable of DAWR on P9 option")
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      [mpe: Minor formatting in set_dawr()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  13. 01 Jul, 2019 1 commit
  14. 19 Jun, 2019 1 commit
    • powerpc/64s/radix: Enable HAVE_ARCH_HUGE_VMAP · d909f910
      Nicholas Piggin authored
      This sets the HAVE_ARCH_HUGE_VMAP option, and defines the required
      page table functions.
      
      This enables huge (2MB and 1GB) ioremap mappings. I don't have a
      benchmark for this change, but huge vmap will be used by a later core
      kernel change to enable huge vmalloc memory mappings. This improves
      cached `git diff` performance by about 5% on a 2-node POWER9 with 32MB
      size dentry cache hash.
      
        Profiling git diff dTLB misses with a vanilla kernel:
      
        81.75%  git      [kernel.vmlinux]    [k] __d_lookup_rcu
         7.21%  git      [kernel.vmlinux]    [k] strncpy_from_user
         1.77%  git      [kernel.vmlinux]    [k] find_get_entry
         1.59%  git      [kernel.vmlinux]    [k] kmem_cache_free
      
                  40,168      dTLB-miss
             0.100342754 seconds time elapsed
      
        With powerpc huge vmalloc:
      
                   2,987      dTLB-miss
             0.095933138 seconds time elapsed
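
      A sketch of the kind of page-table helper HAVE_ARCH_HUGE_VMAP
      requires: install a leaf entry directly at the PMD (2MB) level (the
      radix implementation in the patch may differ in detail):

        int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
        {
                /* Build a huge leaf entry and install it in the PMD slot. */
                pte_t entry = pfn_pte(addr >> PAGE_SHIFT, prot);

                set_pte_at(&init_mm, 0 /* address unused by radix */,
                           (pte_t *)pmd, entry);
                return 1;
        }
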
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  15. 08 Jun, 2019 1 commit
  16. 14 May, 2019 1 commit
    • mm: memblock: make keeping memblock memory opt-in rather than opt-out · 350e88ba
      Mike Rapoport authored
      Most architectures do not need the memblock memory after the page
      allocator is initialized, but only a few enable ARCH_DISCARD_MEMBLOCK
      in the arch Kconfig.

      Replacing ARCH_DISCARD_MEMBLOCK with ARCH_KEEP_MEMBLOCK and inverting
      the logic makes it clear which architectures actually use memblock
      after system initialization, and removes the need to add
      ARCH_DISCARD_MEMBLOCK to the architectures that are still missing
      that option.
      
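      The inversion amounts to flipping the guard around the discard
      machinery, roughly (a sketch of include/linux/memblock.h; exact
      context may differ):

        /* Discarding after init is now the default. */
        #ifndef CONFIG_ARCH_KEEP_MEMBLOCK
        #define __init_memblock __meminit
        #define __initdata_memblock __meminitdata
        void memblock_discard(void);
        #else
        #define __init_memblock
        #define __initdata_memblock
        static inline void memblock_discard(void) {}
        #endif
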
      Link: http://lkml.kernel.org/r/1556102150-32517-1-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 03 May, 2019 1 commit
  18. 02 May, 2019 4 commits
  19. 08 Apr, 2019 1 commit
  20. 03 Apr, 2019 3 commits
    • locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs · 390a0c62
      Waiman Long authored
      Currently, we have two different implementations of rwsem:
      
       1) CONFIG_RWSEM_GENERIC_SPINLOCK (rwsem-spinlock.c)
       2) CONFIG_RWSEM_XCHGADD_ALGORITHM (rwsem-xadd.c)
      
      As we are moving to the single generic implementation in rwsem-xadd.c,
      and no architecture-specific code will be needed, there is no point
      in keeping two different implementations of rwsem. In most cases, the
      performance of rwsem-spinlock.c will be worse. It also doesn't get all
      the performance tuning and optimizations that have been implemented in
      rwsem-xadd.c over the years.

      For simplification, we are going to remove rwsem-spinlock.c and make all
      architectures use a single implementation of rwsem - rwsem-xadd.c.
      
      All references to RWSEM_GENERIC_SPINLOCK and RWSEM_XCHGADD_ALGORITHM
      in the code are removed.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-riscv@lists.infradead.org
      Cc: linux-um@lists.infradead.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: openrisc@lists.librecores.org
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Link: https://lkml.kernel.org/r/20190322143008.21313-3-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE · 96bc9567
      Peter Zijlstra authored
      Make issuing a TLB invalidate for page-table pages the normal case.
      
      The reason is twofold:
      
       - too many invalidates is safer than too few,
       - most architectures use the linux page-tables natively
         and would thus require this.
      
      Make it an opt-out, instead of an opt-in.
      
      No change in behavior intended.
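
      In sketch form, the generic table-free path now invalidates unless an
      architecture opts out (the opt-out symbol name here assumes the
      inverted convention):

        static void tlb_table_invalidate(struct mmu_gather *tlb)
        {
        #ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
                /* Default: flush walker caches for freed page-table pages. */
                tlb_flush_mmu_tlbonly(tlb);
        #endif
        }
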
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: default avatarIngo Molnar <mingo@kernel.org>
    • asm-generic/tlb, arch: Provide CONFIG_HAVE_MMU_GATHER_PAGE_SIZE · ed6a7935
      Peter Zijlstra authored
      Move the mmu_gather::page_size things into the generic code instead of
      PowerPC specific bits.
      
      No change in behavior intended.
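
      A sketch of the generic tracking this introduces (the helper name is
      illustrative; see the patch for the exact one):

        static inline void tlb_change_page_size(struct mmu_gather *tlb,
                                                unsigned int page_size)
        {
        #ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
                /* Flush if the batched page size changes mid-gather. */
                if (tlb->page_size && tlb->page_size != page_size)
                        tlb_flush_mmu(tlb);
                tlb->page_size = page_size;
        #endif
        }
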
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nick Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: default avatarIngo Molnar <mingo@kernel.org>
  21. 23 Feb, 2019 8 commits
    • powerpc: Activate CONFIG_THREAD_INFO_IN_TASK · ed1cd6de
      Christophe Leroy authored
      This patch activates CONFIG_THREAD_INFO_IN_TASK which
      moves the thread_info into task_struct.
      
      Moving thread_info into task_struct has the following advantages:
        - It protects thread_info from corruption in the case of stack
          overflows.
        - Its address is harder to determine if stack addresses are leaked,
          making a number of attacks more difficult.
      
      This has the following consequences:
        - thread_info is now located at the beginning of task_struct.
        - The 'cpu' field is now in task_struct, and only exists when
          CONFIG_SMP is active.
        - thread_info no longer has the 'task' field.
      
      This patch:
        - Removes all recopy of thread_info struct when the stack changes.
        - Changes the CURRENT_THREAD_INFO() macro to point to current.
        - Selects CONFIG_THREAD_INFO_IN_TASK.
        - Modifies raw_smp_processor_id() to get ->cpu from current without
          including linux/sched.h to avoid circular inclusion and without
          including asm/asm-offsets.h to avoid symbol name duplication
          between ASM constants and C constants (a sketch follows below).
        - Modifies klp_init_thread_info() to take a task_struct pointer
          argument.
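
      A sketch of the raw_smp_processor_id() trick mentioned above: read
      task_struct::cpu through a kbuild-generated byte offset (_TASK_CPU
      stands for that generated constant):

        /* current points at task_struct; _TASK_CPU is the offset of its
         * 'cpu' field, so neither linux/sched.h nor asm/asm-offsets.h is
         * needed here. */
        #define raw_smp_processor_id() \
                (*(unsigned int *)((void *)current + _TASK_CPU))
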
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Add task_stack.h to livepatch.h to fix build fails]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Enable kcov · fb0b0a73
      Andrew Donnellan authored
      kcov provides kernel coverage data that's useful for fuzzing tools like
      syzkaller.
      
      Wire up kcov support on powerpc. Disable kcov instrumentation on the same
      files where we currently disable gcov and UBSan instrumentation, plus some
      additional exclusions which appear necessary to boot on book3e machines.
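
      The wiring follows the usual kernel pattern, sketched below
      (ARCH_HAS_KCOV is the generic symbol; the excluded object is just an
      example of the per-file opt-outs):

        # arch/powerpc/Kconfig (sketch)
        select ARCH_HAS_KCOV

        # Makefile opt-out, mirroring the existing gcov/UBSan exclusions
        KCOV_INSTRUMENT_prom_init.o := n
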
      Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Tested-by: Daniel Axtens <dja@axtens.net> # e6500
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kconfig: make _etext and data areas alignment configurable on 8xx · 8f54a6f7
      Christophe Leroy authored
      On 8xx, large pages (512kb or 8M) are used to map kernel linear
      memory. Aligning to 8M reduces TLB misses as only 8M pages are used
      in that case. We make 8M the default for data.
      
      This patch allows the user to do it via Kconfig.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/8xx: don't disable large TLBs with CONFIG_STRICT_KERNEL_RWX · d5f17ee9
      Christophe Leroy authored
      This patch implements handling of STRICT_KERNEL_RWX with
      large TLBs directly in the TLB miss handlers.
      
      To do so, etext and sinittext are aligned on 512kB boundaries
      and the miss handlers use 512kB pages instead of 8Mb pages for
      addresses close to the boundaries.
      
      It sets RO PP flags for addresses under sinittext.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kconfig: make _etext and data areas alignment configurable on Book3s 32 · 0f4a9041
      Christophe Leroy authored
      Depending on the number of available BATs for mapping the different
      kernel areas, it might be needed to increase the alignment of _etext
      and/or of data areas.
      
      This patch allows the user to do it via Kconfig.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX · 63b2bc61
      Christophe Leroy authored
      Today, STRICT_KERNEL_RWX is based on the use of regular pages
      to map kernel pages.
      
      On Book3s 32, it has three consequences:
      - Using pages instead of BAT for mapping kernel linear memory severely
      impacts performance.
      - Exec protection is not effective because no-execute cannot be set at
      page level (except on 603 which doesn't have hash tables)
      - Write protection is not effective because PP bits do not provide RO
      mode for kernel-only pages (except on 603 which handles it in software
      via PAGE_DIRTY)
      
      On the 603+, we have:
      - Independent IBAT and DBAT allowing limitation of exec parts.
      - NX bit can be set in segment registers to forbid execution on memory
      mapped by pages.
      - RO mode on DBATs even for kernel-only blocks.
      
      On the 601, there is nothing much we can do other than warn the user
      about it, because:
      - BATs are common to instructions and data.
      - BAT do not provide RO mode for kernel-only blocks.
      - segment registers don't have the NX bit.
      
      In order to use IBATs for exec protection, this patch:
      - Aligns _etext to BAT block sizes (128kb)
      - Sets the NX bit in the kernel segment registers (except on the
      vmalloc area when CONFIG_MODULES is selected)
      - Maps kernel text with IBATs.

      In order to use DBATs for write protection, this patch:
      - Aligns RW DATA to BAT block sizes (4M)
      - Maps the kernel RO area with write-prohibited DBATs
      - Maps remaining memory with the remaining DBATs
      
      Here is what we get with this patch on a 832x when activating
      STRICT_KERNEL_RWX:
      
      Symbols:
      c0000000 T _stext
      c0680000 R __start_rodata
      c0680000 R _etext
      c0800000 T __init_begin
      c0800000 T _sinittext
      
      ~# cat /sys/kernel/debug/block_address_translation
      ---[ Instruction Block Address Translation ]---
      0: 0xc0000000-0xc03fffff 0x00000000 Kernel EXEC coherent
      1: 0xc0400000-0xc05fffff 0x00400000 Kernel EXEC coherent
      2: 0xc0600000-0xc067ffff 0x00600000 Kernel EXEC coherent
      3:         -
      4:         -
      5:         -
      6:         -
      7:         -
      
      ---[ Data Block Address Translation ]---
      0: 0xc0000000-0xc07fffff 0x00000000 Kernel RO coherent
      1: 0xc0800000-0xc0ffffff 0x00800000 Kernel RW coherent
      2: 0xc1000000-0xc1ffffff 0x01000000 Kernel RW coherent
      3: 0xc2000000-0xc3ffffff 0x02000000 Kernel RW coherent
      4: 0xc4000000-0xc7ffffff 0x04000000 Kernel RW coherent
      5: 0xc8000000-0xcfffffff 0x08000000 Kernel RW coherent
      6: 0xd0000000-0xdfffffff 0x10000000 Kernel RW coherent
      7:         -
      
      ~# cat /sys/kernel/debug/segment_registers
      ---[ User Segments ]---
      0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xa085d0
      0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xa086e1
      0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xa087f2
      0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xa08903
      0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xa08a14
      0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xa08b25
      0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xa08c36
      0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xa08d47
      0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xa08e58
      0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xa08f69
      0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xa0907a
      0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xa0918b
      
      ---[ Kernel Segments ]---
      0xc0000000-0xcfffffff Kern key 0 User key 1 No Exec VSID 0x000ccc
      0xd0000000-0xdfffffff Kern key 0 User key 1 No Exec VSID 0x000ddd
      0xe0000000-0xefffffff Kern key 0 User key 1 No Exec VSID 0x000eee
      0xf0000000-0xffffffff Kern key 0 User key 1 No Exec VSID 0x000fff
      
      Aligning _etext to 128kb allows mapping up to 32Mb of text with 8 IBATs:
      16Mb + 8Mb + 4Mb + 2Mb + 1Mb + 512kb + 256kb + 128kb (+ 128kb) = 32Mb
      (A 9th IBAT is unneeded as 32Mb would need only a single 32Mb block.)

      Aligning data to 4M allows mapping up to 512Mb of data with 8 DBATs:
      16Mb + 8Mb + 4Mb + 4Mb + 32Mb + 64Mb + 128Mb + 256Mb = 512Mb

      Because some processors only have 4 BATs and because some targets need
      DBATs for mapping other areas, the following patch will allow modifying
      the _etext and data alignment.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kconfig: define CONFIG_DATA_SHIFT and CONFIG_ETEXT_SHIFT · 166d97d9
      Christophe Leroy authored
      CONFIG_STRICT_KERNEL_RWX requires a special alignment for DATA on
      some subarches. Today it is just defined as an #ifdef in vmlinux.lds.S.

      In order to get more flexibility, this patch moves the definition of
      this alignment into Kconfig.

      On some subarches, CONFIG_STRICT_KERNEL_RWX will also require a
      special alignment of _etext.

      This patch also adds a configuration item for it in Kconfig.
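
      In sketch form, the new items look like this (the defaults shown are
      illustrative, matching the Book3s 32 figures above: 128kb text and 4M
      data alignment under STRICT_KERNEL_RWX):

        config ETEXT_SHIFT
                int
                default 17 if STRICT_KERNEL_RWX && PPC_BOOK3S_32   # 128kb
                default PPC_PAGE_SHIFT

        config DATA_SHIFT
                int
                default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32   # 4M
                default PPC_PAGE_SHIFT
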
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kconfig: define PAGE_SHIFT inside Kconfig · 555f4fdb
      Christophe Leroy authored
      This patch defines CONFIG_PPC_PAGE_SHIFT in order to be able to use
      the PAGE_SHIFT value inside Kconfig.
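
      A sketch of the new symbol, mirroring the existing page-size choices
      (the values assume the standard 4k/16k/64k/256k page options):

        config PPC_PAGE_SHIFT
                int
                default 18 if PPC_256K_PAGES
                default 16 if PPC_64K_PAGES
                default 14 if PPC_16K_PAGES
                default 12
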
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>