04 Feb, 2020 40 commits
    • asm-generic/tlb: provide MMU_GATHER_TABLE_FREE · 0d6e24d4
      Peter Zijlstra authored
      As described in the comment, the correct order for freeing pages is:
      
       1) unhook page
       2) TLB invalidate page
       3) free page
      
      This order equally applies to page directories.
      
      Currently there are two correct options:
      
       - use tlb_remove_page(), when all page directories are full pages and
         there are no further constraints placed by things like software
         walkers (HAVE_FAST_GUP).
      
       - use MMU_GATHER_RCU_TABLE_FREE and tlb_remove_table() when the
         architecture does not do IPI based TLB invalidate and has
         HAVE_FAST_GUP (or software TLB fill).
      
      This however leaves architectures that don't have page based directories
      but don't need RCU in a bind.  For those, provide MMU_GATHER_TABLE_FREE,
      which provides the independent batching for directories without the
      additional RCU freeing.
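
      As an illustration (not taken from this patch), an architecture hook
      might hand a page-table page to the gather code along the lines of the
      sketch below, so that the unhook -> invalidate -> free order above is
      preserved.  The helper name example_free_pte_table() is made up;
      tlb_remove_page() and tlb_remove_table() are the existing
      asm-generic/tlb.h interfaces.

        #include <asm/tlb.h>    /* struct mmu_gather, tlb_remove_table() */

        /*
         * Hypothetical arch hook, called after the caller has unhooked the
         * page-table page from its parent directory (step 1 above).
         */
        static inline void example_free_pte_table(struct mmu_gather *tlb,
                                                  void *table)
        {
                /*
                 * Plain full-page directories with no software walkers could
                 * use tlb_remove_page() here instead.  Batching through
                 * tlb_remove_table() makes the TLB invalidate (step 2) happen
                 * before the page is actually freed (step 3), with or without
                 * the RCU deferral.
                 */
                tlb_remove_table(tlb, table);
        }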
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-10-aneesh.kumar@linux.ibm.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0d6e24d4
    • asm-generic/tlb: rename HAVE_MMU_GATHER_NO_GATHER · 580a586c
      Peter Zijlstra authored
      Towards a more consistent naming scheme.
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-9-aneesh.kumar@linux.ibm.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      580a586c
    • asm-generic/tlb: rename HAVE_MMU_GATHER_PAGE_SIZE · 3af4bd03
      Peter Zijlstra authored
      Towards a more consistent naming scheme.
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-8-aneesh.kumar@linux.ibm.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3af4bd03
    • asm-generic/tlb: rename HAVE_RCU_TABLE_FREE · ff2e6d72
      Peter Zijlstra authored
      Towards a more consistent naming scheme.
      
      [akpm@linux-foundation.org: fix sparc64 Kconfig]
      Link: http://lkml.kernel.org/r/20200116064531.483522-7-aneesh.kumar@linux.ibm.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ff2e6d72
    • asm-generic/tlb: add missing CONFIG symbol · 27796d03
      Peter Zijlstra authored
      Without this the symbol will not actually end up in .config files.
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-6-aneesh.kumar@linux.ibm.com
      Fixes: a30e32bd ("asm-generic/tlb: Provide generic tlb_flush() based on flush_tlb_mm()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      27796d03
    • asm-gemeric/tlb: remove stray function declarations · 491a49ff
      Peter Zijlstra authored
      We removed the actual functions a while ago.
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-5-aneesh.kumar@linux.ibm.com
      Fixes: 1808d65b ("asm-generic/tlb: Remove arch_tlb*_mmu()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      491a49ff
    • asm-generic/tlb: avoid potential double flush · 0758cd83
      Peter Zijlstra authored
      Aneesh reported that:
      
      	tlb_flush_mmu()
      	  tlb_flush_mmu_tlbonly()
      	    tlb_flush()			<-- #1
      	  tlb_flush_mmu_free()
      	    tlb_table_flush()
      	      tlb_table_invalidate()
      		tlb_flush_mmu_tlbonly()
      		  tlb_flush()		<-- #2
      
      does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
      clear tlb->end in that case.
      
      Observe that any caller to __tlb_adjust_range() also sets at least one of
      the tlb->freed_tables || tlb->cleared_p* bits, and those are
      unconditionally cleared by __tlb_reset_range().
      
      Change the condition for actually issuing TLBI to having one of those bits
      set, as opposed to having tlb->end != 0.
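
      For reference, the resulting check looks roughly like the sketch below
      (paraphrased from mm/mmu_gather.c rather than quoted from the diff):

        static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
        {
                /*
                 * Anything calling __tlb_adjust_range() also sets at least
                 * one of these bits, and __tlb_reset_range() clears them all,
                 * so test them instead of tlb->end to decide whether a TLBI
                 * is actually needed.
                 */
                if (!(tlb->freed_tables || tlb->cleared_ptes ||
                      tlb->cleared_pmds || tlb->cleared_puds ||
                      tlb->cleared_p4ds))
                        return;

                tlb_flush(tlb);
                __tlb_reset_range(tlb);
        }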
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.kumar@linux.ibm.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Reported-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0758cd83
    • mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush · 0ed13259
      Peter Zijlstra authored
      Architectures that have hardware walkers of the Linux page tables should
      flush the TLB on mmu_gather batch allocation failure and on batch flush.
      Some architectures, like POWER, support multiple translation modes (hash
      and radix), and in the case of POWER only the radix translation mode
      needs the above TLBI.  This is because for the hash translation mode the
      kernel wants to avoid this extra flush, since there are no hardware
      walkers of the Linux page tables.  With radix translation, the hardware
      also walks the Linux page tables, and with that the kernel needs to make
      sure to TLB-invalidate the page-walk cache before page table pages are
      freed.
      
      More details in commit d86564a2 ("mm/tlb, x86/mm: Support invalidating
      TLB caches for RCU_TABLE_FREE")
      
      The changes to sparc are to make sure we keep the old behavior, since we
      are now removing HAVE_RCU_TABLE_NO_INVALIDATE.  The default for
      tlb_needs_table_invalidate() is to always force an invalidate, but sparc
      can avoid the table invalidate.  Hence we define
      tlb_needs_table_invalidate() to false for the sparc architecture.
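
      The mechanism boils down to an overridable predicate; roughly (a sketch,
      not the literal diff):

        /* asm-generic/tlb.h: the default is to always invalidate ... */
        #ifndef tlb_needs_table_invalidate
        #define tlb_needs_table_invalidate() (true)
        #endif

        /* ... while sparc, having no hardware walker of the Linux page
         * tables, keeps its old behavior by overriding it: */
        #define tlb_needs_table_invalidate() (false)

        /* mm/mmu_gather.c: consulted before freeing batched table pages */
        static void tlb_table_invalidate(struct mmu_gather *tlb)
        {
                if (tlb_needs_table_invalidate())
                        tlb_flush_mmu_tlbonly(tlb);
        }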
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-3-aneesh.kumar@linux.ibm.com
      Fixes: a46cc7a9 ("powerpc/mm/radix: Improve TLB/PWC flushes")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>	[powerpc]
      Cc: <stable@vger.kernel.org>	[4.14+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ed13259
    • powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case · 12e4d53f
      Aneesh Kumar K.V authored
      Patch series "Fixup page directory freeing", v4.
      
      This is a repost of Peter's patch series, with the arch-specific changes
      other than ppc64 dropped.  The ppc64 changes are added here because we
      are redoing the patch series on top of the ppc64 changes, which makes it
      easy to backport.  Only the first two patches need to be backported to
      stable.
      
      The thing is, on anything SMP, freeing page directories should observe the
      exact same order as normal page freeing:
      
       1) unhook page/directory
       2) TLB invalidate
       3) free page/directory
      
      Without this, any concurrent page-table walk could end up with a
      use-after-free.  This is especially easy to trigger for anything that
      has software page-table walkers (HAVE_FAST_GUP / software TLB fill) or
      for hardware that caches partial page walks (i.e. caches page
      directories).
      
      Even on UP this might give issues since mmu_gather is preemptible these
      days.  An interrupt or preempted task accessing user pages might stumble
      into the free page if the hardware caches page directories.
      
      This patch series fixes ppc64 and adds the generic MMU_GATHER changes
      needed to support converting other architectures.  I haven't added
      patches for the other architectures because they are yet to be acked.
      
      This patch (of 9):
      
      A followup patch is going to make sure we correctly invalidate the
      page-walk cache before we free page table pages.  In order to keep
      things simple, enable RCU_TABLE_FREE even for !SMP so that we don't have
      to fix up the !SMP case differently in that followup patch.
      
      The !SMP case is currently broken for radix translation w.r.t. the
      page-walk cache flush.  We can get interrupted in between the page table
      free and the flush, which would imply we have page-walk cache entries
      pointing to tables which have already been freed.  Michael said "both
      our platforms that run on Power9 force SMP on in Kconfig, so the !SMP
      case is unlikely to be a problem for anyone in practice, unless they've
      hacked their kernel to build it !SMP."
      
      Link: http://lkml.kernel.org/r/20200116064531.483522-2-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      12e4d53f
    • x86: mm: avoid allocating struct mm_struct on the stack · e47690d7
      Steven Price authored
      struct mm_struct is quite large (~1664 bytes) and so allocating on the
      stack may cause problems as the kernel stack size is small.
      
      Since ptdump_walk_pgd_level_core() was only allocating the structure so
      that it could modify the pgd argument we can instead introduce a pgd
      override in struct mm_walk and pass this down the call stack to where it
      is needed.
      
      Since the correct mm_struct is now being passed down, it is now also
      unnecessary to take the mmap_sem semaphore because ptdump_walk_pgd() will
      now take the semaphore on the real mm.
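
      The shape of the resulting walker state is roughly the following (a
      sketch of include/linux/pagewalk.h as it stands after this series; field
      order may differ):

        struct mm_walk {
                const struct mm_walk_ops *ops;
                struct mm_struct *mm;
                pgd_t *pgd;        /* override; mm->pgd is used when NULL */
                struct vm_area_struct *vma;
                enum page_walk_action action;
                bool no_vma;
                void *private;
        };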
      
      [steven.price@arm.com: restore missed arm64 changes]
        Link: http://lkml.kernel.org/r/20200108145710.34314-1-steven.price@arm.com
      Link: http://lkml.kernel.org/r/20200108145710.34314-1-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e47690d7
    • mm: ptdump: reduce level numbers by 1 in note_page() · f8f0d0b6
      Steven Price authored
      Rather than having to increment the 'depth' number by 1 in ptdump_hole(),
      let's change the meaning of 'level' in note_page() since that makes the
      code simpler.
      
      Note that for x86, the level numbers were previously increased by 1 in
      commit 45dcd209 ("x86/mm/dump_pagetables: Fix printout of p4d level")
      and the comment "Bit 7 has a different meaning" was not updated, so this
      change also makes the code match the comment again.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-24-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f8f0d0b6
    • arm64: mm: display non-present entries in ptdump · 9c7869c7
      Steven Price authored
      Previously the /sys/kernel/debug/kernel_page_tables file would only show
      lines for entries present in the page tables.  However it is useful to
      also show non-present entries as this makes the size and level of the
      holes more visible.  This aligns the behaviour with x86 which also shows
      holes.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-23-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c7869c7
    • arm64: mm: convert mm/dump.c to use walk_page_range() · 102f45fd
      Steven Price authored
      Now that walk_page_range() can walk kernel page tables, we can switch the
      arm64 ptdump code over to using it, simplifying the code.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-22-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      102f45fd
    • x86: mm: convert dump_pagetables to use walk_page_range · 2ae27137
      Steven Price authored
      Make use of the new functionality in walk_page_range to remove the arch
      page walking code and use the generic code to walk the page tables.
      
      The effective permissions are passed down the chain using new fields in
      struct pg_state.
      
      The KASAN optimisation is implemented by setting action=CONTINUE in the
      callbacks to skip an entire tree of entries.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-21-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2ae27137
    • mm: add generic ptdump · 30d621f6
      Steven Price authored
      Add a generic version of page table dumping that architectures can opt
      in to.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-20-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      30d621f6
    • x86: mm: convert ptdump_walk_pgd_level_debugfs() to take an mm_struct · c5cfae12
      Steven Price authored
      To enable x86 to use the generic walk_page_range() function, the callers
      of ptdump_walk_pgd_level_debugfs() need to pass in the mm_struct.
      
      This means that ptdump_walk_pgd_level_core() is now always passed a valid
      pgd, so drop the support for pgd==NULL.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-19-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c5cfae12
    • x86: mm+efi: convert ptdump_walk_pgd_level() to take a mm_struct · e455248d
      Steven Price authored
      To enable x86 to use the generic walk_page_range() function, the callers
      of ptdump_walk_pgd_level() need to pass an mm_struct rather than the raw
      pgd_t pointer.  Luckily since commit 7e904a91 ("efi: Use efi_mm in x86
      as well as ARM") we now have an mm_struct for EFI on x86.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-18-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e455248d
    • x86: mm: point to struct seq_file from struct pg_state · 74d2aaa1
      Steven Price authored
      mm/dump_pagetables.c passes both struct seq_file and struct pg_state down
      the chain of walk_*_level() functions to be passed to note_page().
      Instead, place a pointer to the struct seq_file in struct pg_state and
      access it from there (struct pg_state is private to this file) in
      note_page().
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-17-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      74d2aaa1
    • mm: pagewalk: add 'depth' parameter to pte_hole · b7a16c7a
      Steven Price authored
      The pte_hole() callback is called at multiple levels of the page tables.
      Code dumping the kernel page tables needs to know at what depth the
      missing entry is.  Add this as an extra parameter to pte_hole().  When
      the depth isn't known (e.g.  processing a vma), -1 is passed.
      
      The depth that is reported is the actual level where the entry is missing
      (ignoring any folding that is in place), i.e.  any levels where
      PTRS_PER_P?D is set to 1 are ignored.
      
      Note that depth starts at 0 for a PGD so that PUD/PMD/PTE retain their
      natural numbers as levels 2/3/4.
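
      A sketch of the resulting callback shape (the authoritative prototype is
      in include/linux/pagewalk.h; the callback and ops names below are made
      up for illustration):

        #include <linux/pagewalk.h>

        /* hypothetical user of the new 'depth' argument */
        static int my_pte_hole(unsigned long addr, unsigned long next,
                               int depth, struct mm_walk *walk)
        {
                /*
                 * depth: 0 = PGD, 1 = P4D, 2 = PUD, 3 = PMD, 4 = PTE;
                 * -1 when the depth isn't known (e.g. processing a vma).
                 */
                pr_debug("hole %#lx-%#lx at level %d\n", addr, next, depth);
                return 0;
        }

        static const struct mm_walk_ops my_ops = {
                .pte_hole = my_pte_hole,
        };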
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-16-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Tested-by: Zong Li <zong.li@sifive.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b7a16c7a
    • mm: pagewalk: fix termination condition in walk_pte_range() · c02a9875
      Steven Price authored
      If walk_pte_range() is called with an 'end' argument that is beyond the
      last page of memory (e.g.  ~0UL) then the comparison between 'addr' and
      'end' will always fail and the loop will be infinite.  Instead change the
      comparison to >= while accounting for overflow.
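
      The overflow-safe pattern looks roughly like this (a sketch of the loop
      in mm/pagewalk.c, assuming ops->pte_entry is set):

        static int walk_pte_range_sketch(pte_t *pte, unsigned long addr,
                                         unsigned long end,
                                         struct mm_walk *walk)
        {
                const struct mm_walk_ops *ops = walk->ops;
                int err = 0;

                for (;;) {
                        err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
                        if (err)
                                break;
                        /*
                         * When 'end' is beyond the last page (e.g. ~0UL),
                         * 'addr' wraps to 0 before it can ever reach 'end',
                         * so test before advancing and account for overflow.
                         */
                        if (addr >= end - PAGE_SIZE)
                                break;
                        addr += PAGE_SIZE;
                        pte++;
                }
                return err;
        }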
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-15-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c02a9875
    • mm: pagewalk: don't lock PTEs for walk_page_range_novma() · fbf56346
      Steven Price authored
      walk_page_range_novma() can be used to walk the page tables of the
      kernel or of firmware.  These page tables may contain entries that are
      not backed by a struct page, so it isn't (in general) possible to take
      the PTE lock for the pte_entry() callback.  So update walk_pte_range()
      to only take the lock when no_vma==false, by splitting out the inner
      loop into a separate function, and add a comment explaining the
      difference from walk_page_range_novma().
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-14-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fbf56346
    • mm: pagewalk: allow walking without vma · 488ae6a2
      Steven Price authored
      Since commit 48684a65: "mm: pagewalk: fix misbehavior of walk_page_range
      for vma(VM_PFNMAP)", walk_page_range() will report any kernel area as a
      hole, because it lacks a vma.
      
      This means each arch has re-implemented page table walking when needed,
      for example in the per-arch ptdump walker.
      
      Remove the requirement to have a vma in the generic code and add a new
      function walk_page_range_novma() which ignores the VMAs and simply walks
      the page tables.
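
      A usage sketch, under the assumptions that the signature shown is the
      one the helper ends up with after the rest of this series (which also
      adds a pgd override argument), and that the caller holds the mmap_sem of
      the mm being walked; the callback and function names are made up:

        #include <linux/pagewalk.h>

        /* hypothetical per-PTE callback for a kernel-address walk */
        static int note_kernel_pte(pte_t *pte, unsigned long addr,
                                   unsigned long next, struct mm_walk *walk)
        {
                /* no vma here: entries may not be backed by a struct page */
                return 0;
        }

        static const struct mm_walk_ops kernel_walk_ops = {
                .pte_entry = note_kernel_pte,
        };

        static void walk_kernel_range(unsigned long start, unsigned long end)
        {
                down_read(&init_mm.mmap_sem);   /* the walker asserts this */
                walk_page_range_novma(&init_mm, start, end, &kernel_walk_ops,
                                      NULL /* pgd: NULL => init_mm.pgd */,
                                      NULL);
                up_read(&init_mm.mmap_sem);
        }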
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-13-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      488ae6a2
    • mm: pagewalk: add p4d_entry() and pgd_entry() · 3afc4236
      Steven Price authored
      pgd_entry() and pud_entry() were removed by commit 0b1fbfe5
      ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were no
      users.  We're about to add users so reintroduce them, along with
      p4d_entry() as we now have 5 levels of tables.
      
      Note that commit a00cc7d9 ("mm, x86: add support for PUD-sized
      transparent hugepages") already re-added pud_entry() but with different
      semantics to the other callbacks.  This commit reverts the semantics back
      to match the other callbacks.
      
      To support hmm.c which now uses the new semantics of pud_entry() a new
      member ('action') of struct mm_walk is added which allows the callbacks to
      either descend (ACTION_SUBTREE, the default), skip (ACTION_CONTINUE) or
      repeat the callback (ACTION_AGAIN).  hmm.c is then updated to call
      pud_trans_huge_lock() itself and make use of the splitting/retry logic of
      the core code.
      
      After this change pud_entry() is called for all entries, not just
      transparent huge pages.
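
      A sketch of how a callback drives the new 'action' field (enum values as
      introduced here; pud_leaf() comes from the p?d_leaf() patches earlier in
      this series, and the callback name is made up):

        #include <linux/pagewalk.h>

        /* hypothetical pud_entry() callback */
        static int my_pud_entry(pud_t *pud, unsigned long addr,
                                unsigned long next, struct mm_walk *walk)
        {
                pud_t val = READ_ONCE(*pud);

                if (pud_leaf(val)) {
                        /* handled the huge entry here; don't descend */
                        walk->action = ACTION_CONTINUE;
                        return 0;
                }
                /*
                 * The default is ACTION_SUBTREE (walk the PMDs below);
                 * ACTION_AGAIN re-invokes this callback, e.g. after a split.
                 */
                return 0;
        }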
      
      [arnd@arndb.de: fix unused variable warning]
       Link: http://lkml.kernel.org/r/20200107204607.1533842-1-arnd@arndb.de
      Link: http://lkml.kernel.org/r/20191218162402.45610-12-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3afc4236
    • x86: mm: add p?d_leaf() definitions · 757b2a4a
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For x86 we already have p?d_large() functions, so simply add macros to
      provide the generic p?d_leaf() names for the generic code.
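
      Concretely, that is a handful of defines along these lines (a sketch;
      the exact set lives in arch/x86/include/asm/pgtable.h):

        /* reuse the existing x86 p?d_large() helpers under the generic names */
        #define p4d_leaf        p4d_large
        #define pud_leaf        pud_large
        #define pmd_leaf        pmd_large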
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-11-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      757b2a4a
    • sparc: mm: add p?d_leaf() definitions · 80942493
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For sparc 64 bit, pmd_large() and pud_large() are already provided, so add
      macros to provide the p?d_leaf names required by the generic code.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-10-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80942493
    • s390: mm: add p?d_leaf() definitions · 8d2109f2
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For s390, pud_large() and pmd_large() are already implemented as static
      inline functions.  Add a macro to provide the p?d_leaf names for the
      generic code to use.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-9-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8d2109f2
    • riscv: mm: add p?d_leaf() definitions · af6513ea
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For riscv a page is a leaf page when it has a read, write or execute bit
      set on it.
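
      In code that is roughly the following (a sketch, assuming the usual
      riscv _PAGE_READ/_PAGE_WRITE/_PAGE_EXEC bits; the real definitions are
      in arch/riscv/include/asm/pgtable.h):

        #define pmd_leaf        pmd_leaf
        static inline int pmd_leaf(pmd_t pmd)
        {
                return pmd_present(pmd) &&
                       (pmd_val(pmd) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
        }

        #define pud_leaf        pud_leaf
        static inline int pud_leaf(pud_t pud)
        {
                return pud_present(pud) &&
                       (pud_val(pud) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
        }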
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-8-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
      Reviewed-by: Zong Li <zong.li@sifive.com>
      Acked-by: Paul Walmsley <paul.walmsley@sifive.com>	[arch/riscv]
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af6513ea
    • powerpc: mm: add p?d_leaf() definitions · 070434b1
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For powerpc p?d_is_leaf() functions already exist.  Export them using the
      new p?d_leaf() name.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-7-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      070434b1
    • mips: mm: add p?d_leaf() definitions · 501b8104
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      If _PAGE_HUGE is defined we can simply look for it.  When not defined we
      can be confident that there are no leaf pages in existence and fall back
      on the generic implementation (added in a later patch) which returns 0.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-6-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Acked-by: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      501b8104
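      A hedged sketch of the mips case described above: when _PAGE_HUGE exists,
      a leaf entry is simply an entry with that bit set; pmd_val()/pud_val()
      are the usual accessors (exact placement in the mips headers is an
      assumption):

      #ifdef _PAGE_HUGE
      /* An entry with the huge bit set maps memory directly, i.e. is a leaf. */
      #define pmd_leaf(pmd)   ((pmd_val(pmd) & _PAGE_HUGE) != 0)
      #define pud_leaf(pud)   ((pud_val(pud) & _PAGE_HUGE) != 0)
      #endif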
    • Steven Price's avatar
      arm64: mm: add p?d_leaf() definitions · 8aa82df3
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information will be provided by the
      p?d_leaf() functions/macros.
      
      For arm64, we already have p?d_sect() macros which we can reuse for
      p?d_leaf().
      
      pud_sect() is defined as a dummy function when CONFIG_PGTABLE_LEVELS < 3
      or CONFIG_ARM64_64K_PAGES is defined.  However, when the kernel is
      configured this way the architecture doesn't allow a large page at this
      level, and any code using these page walking macros is implicitly
      relying on the page size/number of levels being the same as the
      kernel's.  So it is safe to reuse this for p?d_leaf() as it is an
      architectural restriction.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-5-steven.price@arm.com
      Signed-off-by: default avatarSteven Price <steven.price@arm.com>
      Acked-by: default avatarCatalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      8aa82df3
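      A minimal sketch of the reuse described above, assuming the existing
      arm64 p?d_sect() macros (not a quote of the patch):

      /* A section entry is a block mapping, i.e. a leaf of the page tables. */
      #define pmd_leaf(pmd)   pmd_sect(pmd)
      #define pud_leaf(pud)   pud_sect(pud)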
    • Steven Price's avatar
      arm: mm: add p?d_leaf() definitions · 8a0af66b
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information is provided by the
      p?d_leaf() functions/macros.
      
      For arm, pmd_large() already exists and does what we want.  So simply
      provide the generic pmd_leaf() name.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-4-steven.price@arm.com
      Signed-off-by: default avatarSteven Price <steven.price@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      8a0af66b
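      A one-line sketch of the arm case, assuming the existing pmd_large()
      helper (not a quote of the patch):

      /* pmd_large() already identifies a section/huge entry, so simply reuse
       * it under the generic name used by the page walker. */
      #define pmd_leaf(pmd)   pmd_large(pmd)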
    • Steven Price's avatar
      arc: mm: add p?d_leaf() definitions · 4f6b2c08
      Steven Price authored
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information will be provided by the
      p?d_leaf() functions/macros.
      
      For arc, we only have two levels, so only pmd_leaf() is needed.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-3-steven.price@arm.com
      Signed-off-by: default avatarSteven Price <steven.price@arm.com>
      Acked-by: default avatarVineet Gupta <vgupta@synopsys.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      4f6b2c08
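      A hedged sketch for arc along the same lines; the huge-page bit name
      used here (_PAGE_HW_SZ) is an assumption about the arc page-table bits,
      not a quote of the patch:

      /* Two-level page tables, so only the pmd level can hold a leaf entry. */
      #define pmd_leaf(pmd)   (pmd_val(pmd) & _PAGE_HW_SZ)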
    • Steven Price's avatar
      mm: add generic p?d_leaf() macros · 93fab1b2
      Steven Price authored
      Patch series "Generic page walk and ptdump", v17.
      
      Many architectures have a debugfs file for dumping the kernel page
      tables.  Currently each architecture has to implement custom functions for
      this because the details of walking the page tables used by the kernel are
      different between architectures.
      
      This series extends the capabilities of walk_page_range() so that it can
      deal with the page tables of the kernel (which have no VMAs and can
      contain larger huge pages than exist for user space).  A generic PTDUMP
      implementation is then added, making use of the new functionality of
      walk_page_range(), and finally arm64 and x86 are switched to using it,
      removing the custom table walkers.
      
      To enable a generic page table walker to walk the unusual mappings of the
      kernel we need to implement a set of functions which let us know when the
      walker has reached the leaf entry.  After a suggestion from Will Deacon
      I've chosen the name p?d_leaf() as this (hopefully) describes the purpose
      (and is a new name so has no historic baggage).  Some architectures have
      p?d_large() macros, but these are easily confused with "large pages".
      
      This series ends with a generic PTDUMP implementation for arm64 and x86.
      
      Mostly this is a clean up and there should be very little functional
      change.  The exceptions are:
      
      * arm64 PTDUMP debugfs now displays pages which aren't present (patch 22).
      
      * arm64 has the ability to efficiently process KASAN pages (which
        previously only x86 implemented).  This means that the combination of
        KASAN and DEBUG_WX is now usable.
      
      This patch (of 23):
      
      Exposing the pud/pgd levels of the page tables to walk_page_range() means
      we may come across the exotic large mappings that come with large areas of
      contiguous memory (such as the kernel's linear map).
      
      For architectures that don't provide all p?d_leaf() macros, provide
      generic do-nothing defaults that are suitable where there cannot be leaf
      pages at that level.  Further patches will add implementations for
      individual architectures.
      
      The name p?d_leaf() is chosen to minimize the confusion with existing uses
      of "large" pages and "huge" pages which do not necessary mean that the
      entry is a leaf (for example it may be a set of contiguous entries that
      only take 1 TLB slot).  For the purpose of walking the page tables we
      don't need to know how it will be represented in the TLB, but we do need
      to know for sure if it is a leaf of the tree.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-2-steven.price@arm.com
      Signed-off-by: default avatarSteven Price <steven.price@arm.com>
      Acked-by: default avatarMark Rutland <mark.rutland@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      93fab1b2
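      A minimal sketch of the generic do-nothing defaults described above; the
      #ifndef pattern mirrors how such overridable helpers are usually
      provided and is not a quote of the patch:

      /* Architectures that can have leaf entries at a given level override
       * these; everywhere else an entry is always a table, never a leaf. */
      #ifndef pmd_leaf
      #define pmd_leaf(x)     0
      #endif
      #ifndef pud_leaf
      #define pud_leaf(x)     0
      #endif
      #ifndef p4d_leaf
      #define p4d_leaf(x)     0
      #endif
      #ifndef pgd_leaf
      #define pgd_leaf(x)     0
      #endif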
    • Florian Westphal's avatar
      mm: remove __krealloc · 1c948715
      Florian Westphal authored
      Since 5.5-rc1 the last user of this function is gone, so remove the
      functionality.
      
      See commit
      2ad9d774 ("netfilter: conntrack: free extension area immediately")
      for details.
      
      Link: http://lkml.kernel.org/r/20191212223442.22141-1-fw@strlen.de
      Signed-off-by: default avatarFlorian Westphal <fw@strlen.de>
      Acked-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Acked-by: default avatarDavid Rientjes <rientjes@google.com>
      Reviewed-by: default avatarDavid Hildenbrand <david@redhat.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      1c948715
    • Randy Dunlap's avatar
      pinctrl: fix pxa2xx.c build warnings · 9a8c8b43
      Randy Dunlap authored
      Add #include of <linux/pinctrl/machine.h> to fix build
      warnings in pinctrl-pxa2xx.c.  Fixes these warnings:
      
      In file included from ../drivers/pinctrl/pxa/pinctrl-pxa2xx.c:24:0:
      ../drivers/pinctrl/pxa/../pinctrl-utils.h:36:8: warning: `enum pinctrl_map_type' declared inside parameter list [enabled by default]
         enum pinctrl_map_type type);
              ^
      ../drivers/pinctrl/pxa/../pinctrl-utils.h:36:8: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]
      
      Link: http://lkml.kernel.org/r/0024542e-cba9-8f13-6c18-32d0050a6007@infradead.org
      Signed-off-by: default avatarRandy Dunlap <rdunlap@infradead.org>
      Cc: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      9a8c8b43
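      A hedged sketch of the shape of the fix: declaring enum pinctrl_map_type
      (via <linux/pinctrl/machine.h>) before the pinctrl-utils.h prototypes
      use it in a parameter list removes the "declared inside parameter list"
      warning.  The exact include ordering in pinctrl-pxa2xx.c is an
      assumption:

      #include <linux/pinctrl/machine.h>   /* defines enum pinctrl_map_type */
      #include "../pinctrl-utils.h"        /* prototypes that take the enum */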
    • Andrew Morton's avatar
      drivers/block/null_blk_main.c: fix uninitialized var warnings · 046755a2
      Andrew Morton authored
      With gcc-7.2, many instances of
      
      drivers/block/null_blk_main.c: In function ‘nullb_device_zone_nr_conv_store’:
      drivers/block/null_blk_main.c:291:12: warning: ‘new_value’ may be used uninitialized in this function [-Wmaybe-uninitialized]
        dev->NAME = new_value;      \
                  ^
      drivers/block/null_blk_main.c:279:7: note: ‘new_value’ was declared here
        TYPE new_value;       \
             ^
      
      Presumably notabug, so use uninitialized_var() to suppress them.
      
      Cc: Shaohua Li <shli@fb.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      046755a2
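      A hedged sketch of the suppression pattern; store_sketch() is a
      hypothetical stand-in for the configfs store helpers the warning points
      at, and the macro definition shown is the historical one (a
      self-assignment that convinces gcc the variable has a value without
      emitting code):

      #define uninitialized_var(x)    x = x

      static int store_sketch(bool set, int v)
      {
              int uninitialized_var(new_value);  /* i.e. int new_value = new_value; */

              if (set)
                      new_value = v;
              return set ? new_value : -1;       /* no -Wmaybe-uninitialized warning */
      }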
    • Andrew Morton's avatar
      drivers/block/null_blk_main.c: fix layout · ca0a95a6
      Andrew Morton authored
      Each line here overflows 80 cols by exactly one character.  Delete one tab
      per line to fix.
      
      Cc: Shaohua Li <shli@fb.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      ca0a95a6
    • Lu Shuaibing's avatar
      ipc/msg.c: consolidate all xxxctl_down() functions · 889b3317
      Lu Shuaibing authored
      There is a use of uninitialized memory in msgctl_down() because msqid64 in
      ksys_msgctl() hasn't been initialized.  The local msqid64 is created in
      ksys_msgctl() and then passed into msgctl_down().  Along the way msqid64
      is never initialized before msgctl_down() checks msqid64->msg_qbytes.
      
      KUMSAN (KernelUninitializedMemorySanitizer, a new error detection tool)
      reports:
      
      ==================================================================
      BUG: KUMSAN: use of uninitialized memory in msgctl_down+0x94/0x300
      Read of size 8 at addr ffff88806bb97eb8 by task syz-executor707/2022
      
      CPU: 0 PID: 2022 Comm: syz-executor707 Not tainted 5.2.0-rc4+ #63
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
      Call Trace:
       dump_stack+0x75/0xae
       __kumsan_report+0x17c/0x3e6
       kumsan_report+0xe/0x20
       msgctl_down+0x94/0x300
       ksys_msgctl.constprop.14+0xef/0x260
       do_syscall_64+0x7e/0x1f0
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x4400e9
      Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 fb 13 fc ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007ffd869e0598 EFLAGS: 00000246 ORIG_RAX: 0000000000000047
      RAX: ffffffffffffffda RBX: 00000000004002c8 RCX: 00000000004400e9
      RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
      RBP: 00000000006ca018 R08: 0000000000000000 R09: 0000000000000000
      R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000401970
      R13: 0000000000401a00 R14: 0000000000000000 R15: 0000000000000000
      
      The buggy address belongs to the page:
      page:ffffea0001aee5c0 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
      flags: 0x100000000000000()
      raw: 0100000000000000 0000000000000000 ffffffff01ae0101 0000000000000000
      raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
      page dumped because: kumsan: bad access detected
      ==================================================================
      
      Syzkaller reproducer:
      msgctl$IPC_RMID(0x0, 0x0)
      
      C reproducer:
      // autogenerated by syzkaller (https://github.com/google/syzkaller)
      
      int main(void)
      {
        syscall(__NR_mmap, 0x20000000, 0x1000000, 3, 0x32, -1, 0);
        syscall(__NR_msgctl, 0, 0, 0);
        return 0;
      }
      
      [natechancellor@gmail.com: adjust indentation in ksys_msgctl]
        Link: https://github.com/ClangBuiltLinux/linux/issues/829
        Link: http://lkml.kernel.org/r/20191218032932.37479-1-natechancellor@gmail.com
      Link: http://lkml.kernel.org/r/20190613014044.24234-1-shuaibinglu@126.com
      Signed-off-by: default avatarLu Shuaibing <shuaibinglu@126.com>
      Signed-off-by: default avatarNathan Chancellor <natechancellor@gmail.com>
      Suggested-by: default avatarArnd Bergmann <arnd@arndb.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: NeilBrown <neilb@suse.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      889b3317
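      A simplified sketch of the ordering the consolidation enforces;
      msgctl_down_sketch() is a hypothetical stand-in, while struct msqid64_ds
      and copy_msqid_from_user() are the existing names used in ipc/msg.c:

      static long msgctl_down_sketch(int cmd, void __user *buf, int version)
      {
              struct msqid64_ds msqid64 = { };        /* never read uninitialized */

              if (cmd == IPC_SET &&
                  copy_msqid_from_user(&msqid64, buf, version))
                      return -EFAULT;

              /* only now is msqid64.msg_qbytes meaningful for IPC_SET */
              return 0;
      }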
    • Manfred Spraul's avatar
      ipc/sem.c: document and update memory barriers · 8116b54e
      Manfred Spraul authored
      Document and update the memory barriers in ipc/sem.c:
      
      - Add smp_store_release() to wake_up_sem_queue_prepare() and
        document why it is needed.
      
      - Read q->status using READ_ONCE+smp_acquire__after_ctrl_dep().
        as the pair for the barrier inside wake_up_sem_queue_prepare().
      
      - Add comments to all barriers, and mention the rules in the block
        regarding locking.
      
      - Switch to using wake_q_add_safe().
      
      Link: http://lkml.kernel.org/r/20191020123305.14715-6-manfred@colorfullife.com
      Signed-off-by: default avatarManfred Spraul <manfred@colorfullife.com>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: <1vier1@web.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      8116b54e
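      A simplified sketch of the release/acquire pairing described above; the
      function names here are hypothetical stand-ins, the barrier primitives
      are the ones named in the commit:

      /* waker side, as in wake_up_sem_queue_prepare(): publish the result
       * only after the operation's effects are visible */
      static void waker_sketch(int *status, int error)
      {
              smp_store_release(status, error);
      }

      /* sleeper side: the test is only a control dependency, so upgrade it
       * before touching data written by the waker */
      static int sleeper_sketch(int *status)
      {
              int error = READ_ONCE(*status);

              if (error != -EINTR)
                      smp_acquire__after_ctrl_dep();
              return error;
      }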
    • Manfred Spraul's avatar
      ipc/msg.c: update and document memory barriers · 0d97a82b
      Manfred Spraul authored
      Transfer findings from ipc/mqueue.c:
      
      - A control barrier was missing for the lockless receive case.  So in
        theory, not yet initialized data may have been copied to user space -
        obviously only for architectures where control barriers are not a NOP.
      
      - Use smp_store_release().  In theory, the refcount may have been
        decreased to 0 already when wake_q_add() tries to get a reference.
      
      Link: http://lkml.kernel.org/r/20191020123305.14715-5-manfred@colorfullife.com
      Signed-off-by: default avatarManfred Spraul <manfred@colorfullife.com>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: <1vier1@web.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      0d97a82b
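      A simplified sketch of the two points above in ipc/msg.c terms, with the
      surrounding code elided (msr->r_msg is the field the lockless receiver
      polls; treat the exact spellings as assumptions):

      /* sender: publish the fully initialized message with a release store,
       * so a lockless receiver can never observe half-written contents */
      smp_store_release(&msr->r_msg, msg);

      /* receiver: the test on r_msg is only a control dependency; add the
       * acquire barrier before reading the message contents */
      msg = READ_ONCE(msr->r_msg);
      if (msg != ERR_PTR(-EAGAIN))
              smp_acquire__after_ctrl_dep();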