1. 17 Feb, 2021 1 commit
  2. 13 Feb, 2021 1 commit
    • s390,alpha: switch to 64-bit ino_t · 96c0a6a7
      Heiko Carstens authored
      s390 and alpha are the only 64-bit architectures with a 32-bit ino_t.
      Since this is quite unusual, it causes bugs from time to time.
      
      See e.g. commit ebce3eb2 ("ceph: fix inode number handling on
      arches with 32-bit ino_t") for an example.
      
      This (obviously) also prevents s390 and alpha from using 64-bit ino_t for
      tmpfs. See commit b85a7a8b ("tmpfs: disallow CONFIG_TMPFS_INODE64
      on s390").
      
      Therefore switch both s390 and alpha to 64-bit ino_t. This should only
      have an effect on the ustat system call. To prevent ABI breakage,
      define struct ustat with a layout compatible with the old one and change
      sys_ustat() accordingly.
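
      A minimal sketch of the idea (hypothetical type name and field types;
      the actual kernel definitions differ): keep the traditional 32-bit
      widths at the syscall boundary even though ino_t is now 64-bit
      internally.

        /* sketch only: the ustat ABI keeps its old layout */
        struct ustat_abi {
                unsigned int    f_tfree;        /* free blocks, old width */
                unsigned int    f_tinode;       /* stays 32-bit for the ABI */
                char            f_fname[6];
                char            f_fpack[6];
        };
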
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
  3. 29 Jan, 2021 1 commit
  4. 21 Jan, 2021 1 commit
  5. 14 Jan, 2021 2 commits
  6. 06 Jan, 2021 1 commit
    • [amd64] clean PRSTATUS_SIZE/SET_PR_FPVALID up properly · 7facdc42
      Al Viro authored
      
      To get rid of the hardcoded size/offset in those macros we need a
      definition of the i386 variant of struct elf_prstatus.  However, we can't
      do that in asm/compat.h - the types needed for that are not there and
      adding an include of asm/user32.h into asm/compat.h would cause a lot
      of mess.
      
      That could be conveniently done in elfcore-compat.h, but currently there
      is nowhere to put arch-dependent parts of it - no asm/elfcore-compat.h.
      So we introduce a new file (asm/elfcore-compat.h, present on architectures
      that have CONFIG_ARCH_HAS_ELFCORE_COMPAT set, currently only on x86),
      have it pulled in by linux/elfcore-compat.h and move the definitions there.
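
      In outline, the generic header's arch hook then looks roughly like
      this (a sketch of the arrangement described above):

        /* linux/elfcore-compat.h, in outline */
        #ifdef CONFIG_ARCH_HAS_ELFCORE_COMPAT
        #include <asm/elfcore-compat.h>
        #endif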
      
      As a side benefit, we don't need to worry about accidental inclusion of
      that file into binfmt_elf.c itself, so we don't need the dance with
      COMPAT_PRSTATUS_SIZE, etc. - only fs/compat_binfmt_elf.c will see
      that header.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  7. 28 Dec, 2020 1 commit
  8. 22 Dec, 2020 1 commit
  9. 15 Dec, 2020 4 commits
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      Mike Rapoport authored
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail.  With this assumption it wouldn't be safe to allow general
      usage of this function.
      
      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.
      
      As all the users of __kernel_map_pages() were converted to use
      debug_pagealloc_map_pages(), it is safe to make it available only when
      DEBUG_PAGEALLOC is set.
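
      For reference, the wrapper the users were converted to follows this
      pattern (a sketch of the include/linux/mm.h helper):

        static inline void debug_pagealloc_map_pages(struct page *page,
                                                     int numpages)
        {
                if (debug_pagealloc_enabled_static())
                        __kernel_map_pages(page, numpages, 1);
        }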
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm, arm64: move free_unused_memmap() to generic mm · 4f5b0c17
      Mike Rapoport authored
      ARM and ARM64 free unused parts of the memory map just before the
      initialization of the page allocator. To allow holes in the memory map,
      both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.
      
      Allowing holes in the memory map for FLATMEM may be useful for small
      machines, such as ARC and m68k, and will enable those architectures to
      cease using DISCONTIGMEM while still supporting more than one memory bank.
      
      Move the functions that free the unused memory map to generic mm and
      enable them when HAVE_ARCH_PFN_VALID=y.
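
      The generic function can then bail out early on configurations where
      the memory map must stay complete, along these lines (a sketch close
      to the mm/memblock.c version):

        static void __init free_unused_memmap(void)
        {
                if (!IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) ||
                    IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
                        return;
                /* ... free the struct pages covering holes between
                   memblock regions ... */
        }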
      
      Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Meelis Roos <mroos@linux.ee>
      Cc: Michael Schmitz <schmitzmic@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: speedup mremap on 1GB or larger regions · c49dd340
      Kalesh Singh authored
      Android needs to move large memory regions for garbage collection.  The GC
      requires moving physical pages of a multi-gigabyte heap using mremap.
      During this move, the application threads have to be paused for
      correctness.  It is critical to keep this pause as short as possible to
      avoid jitters during user interaction.
      
      Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD level if
      the source and destination addresses are PUD-aligned.  For
      CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves PGD
      entries, since the PUD entry is “folded back” onto the PGD entry.  Add
      HAVE_MOVE_PUD so that architectures where moving at the PUD level isn't
      supported/tested can turn this off by not selecting the config.
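
      Conceptually, the fast path is taken only when everything lines up on
      PUD boundaries, roughly (simplified sketch of the check):

        /* sketch: one entry update replaces copying up to 262144
           PTEs per gigabyte (with 4K pages) */
        if (IS_ENABLED(CONFIG_HAVE_MOVE_PUD) &&
            !(old_addr & ~PUD_MASK) && !(new_addr & ~PUD_MASK) &&
            len >= PUD_SIZE) {
                /* move the PUD entry itself instead of the PTEs */
        }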
      
      Link: https://lkml.kernel.org/r/20201014005320.2233162-4-kaleshsingh@google.com
      
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Brauner <christian.brauner@ubuntu.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Hassan Naveed <hnaveed@wavecomp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Minchan Kim <minchan@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/Kconfig: fix spelling mistakes · a86ecfa6
      Colin Ian King authored
      There are a few spelling mistakes in the Kconfig comments and help text.
      Fix these.
      
      Link: https://lkml.kernel.org/r/20201207155004.171962-1-colin.king@canonical.com
      
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 14 Dec, 2020 1 commit
    • Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS" · adab66b7
      Steven Rostedt (VMware) authored
      It was believed that metag was the only architecture that required the ring
      buffer to keep 8-byte words aligned on 64-bit architectures, and with its
      removal, it was assumed that the ring buffer code no longer needed to handle
      this case. It appears that sparc64 also requires this.
      
      The following was reported on a sparc64 boot up:
      
         kernel: futex hash table entries: 65536 (order: 9, 4194304 bytes, linear)
         kernel: Running postponed tracer tests:
         kernel: Testing tracer function:
         kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
         kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
         kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
         kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
         kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
         kernel: PASSED
      
      The 64-bit aligned code for the ring buffer needs to be put back.
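
      For reference, the restored code forces event sizes to an 8-byte
      multiple on such architectures, roughly (sketch):

        /* when the architecture selects HAVE_64BIT_ALIGNED_ACCESS */
        #define RB_ARCH_ALIGNMENT       8U

        length = ALIGN(length, RB_ARCH_ALIGNMENT);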
      
      Link: https://lore.kernel.org/r/CADxRZqzXQRYgKc=y-KV=S_yHL+Y8Ay2mh5ezeZUnpRvg+syWKw@mail.gmail.com
      
      Cc: stable@vger.kernel.org
      Fixes: 86b3de60 ("ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS")
      Reported-by: Anatoly Pugachev <matorola@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  11. 02 Dec, 2020 1 commit
  12. 01 Dec, 2020 1 commit
    • kbuild: Hoist '--orphan-handling' into Kconfig · 59612b24
      Nathan Chancellor authored
      Currently, '--orphan-handling=warn' is spread out across four different
      architectures in their respective Makefiles, which makes it a little
      unruly to deal with in case it needs to be disabled for a specific
      linker version (in this case, ld.lld 10.0.1).
      
      To make it easier to control this, hoist this warning into Kconfig and
      the main Makefile so that disabling it is simpler, as the warning will
      only be enabled in a couple places (main Makefile and a couple of
      compressed boot folders that blow away LDFLAGS_vmlinux) and making it
      conditional is easier due to Kconfig syntax. One small additional
      benefit of this is saving a call to ld-option on incremental builds
      because we will have already evaluated it for CONFIG_LD_ORPHAN_WARN.
      
      To keep the list of supported architectures the same, introduce
      CONFIG_ARCH_WANT_LD_ORPHAN_WARN, which an architecture can select to
      gain this automatically after all of the sections are specified and size
      asserted. A special thanks to Kees Cook for the help text on this
      config.
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/1187
      
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Tested-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  13. 20 Nov, 2020 1 commit
  14. 19 Nov, 2020 1 commit
    • context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK · 83c2da2e
      Frederic Weisbecker authored
      
      Historically, context tracking had to deal with fragile entry code paths,
      i.e. before user_exit() is called and after user_enter() is called, in
      case some of those spots would call schedule() or use RCU. In such
      cases, the site had to be protected between exception_enter() and
      exception_exit(), which save the context tracking state in the task stack.
      
      Such sleepable fragile code paths had many different origins: tracing,
      exceptions, early or late calls to context tracking on syscalls...
      
      Aside from that not being pretty, saving the context tracking state on
      the task stack forces us to run context tracking on all CPUs, including
      housekeepers, and prevents us from completely shutting down nohz_full at
      runtime on a CPU in the future, as context tracking and its overhead
      would still need to run system wide.
      
      Now thanks to the extensive efforts to sanitize x86 entry code, those
      conditions have been removed and we can now get rid of these workarounds
      in this architecture.
      
      Create a Kconfig feature to express this achievement.
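
      For reference, this is the workaround pattern that can now go away on
      x86 (a sketch of the old save/restore around a sleepable entry spot):

        /* old workaround (sketch): stash context tracking state on
           the task stack across a spot that may schedule */
        enum ctx_state prev_state = exception_enter();
        schedule();
        exception_exit(prev_state);
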
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20201117151637.259084-2-frederic@kernel.org
  15. 08 Oct, 2020 1 commit
  16. 16 Sep, 2020 1 commit
    • mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race · d53c3dfb
      Nicholas Piggin authored
      Reading and modifying current->mm and current->active_mm and switching
      mm should be done with irqs off, to prevent races seeing an intermediate
      state.
      
      This is similar to commit 38cf307c ("mm: fix kthread_use_mm() vs TLB
      invalidate"). At exec-time when the new mm is activated, the old one
      should usually be single-threaded and no longer used, unless something
      else is holding an mm_users reference (which may be possible).
      
      Absent other mm_users, there is also a race with preemption and lazy tlb
      switching. Consider the kernel_execve case where the current thread is
      using a lazy tlb active mm:
      
        call_usermodehelper()
          kernel_execve()
            old_mm = current->mm;
            active_mm = current->active_mm;
            *** preempt *** -------------------->  schedule()
                                                     prev->active_mm = NULL;
                                                     mmdrop(prev active_mm);
                                                   ...
                            <--------------------  schedule()
            current->mm = mm;
            current->active_mm = mm;
            if (!old_mm)
                mmdrop(active_mm);
      
      If we switch back to the kernel thread from a different mm, there is a
      double free of the old active_mm, and a missing free of the new one.
      
      Closing this race only requires interrupts to be disabled while ->mm
      and ->active_mm are being switched, but the TLB problem requires also
      holding interrupts off over activate_mm. Unfortunately not all archs
      can do that yet, e.g., arm defers the switch if irqs are disabled and
      expects finish_arch_post_lock_switch() to be called to complete the
      flush; um takes a blocking lock in activate_mm().
      
      So as a first step, disable interrupts across the mm/active_mm updates
      to close the lazy tlb preempt race, and provide an arch option to
      extend that to activate_mm which allows architectures doing IPI based
      TLB shootdowns to close the second race.
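
      In outline, the mm switch in exec_mmap() then becomes (sketch; the new
      CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM option decides whether
      activate_mm() stays inside the irq-off region):

        local_irq_disable();
        active_mm = tsk->active_mm;
        tsk->active_mm = mm;
        tsk->mm = mm;
        if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
                local_irq_enable();
        activate_mm(active_mm, mm);
        if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
                local_irq_enable();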
      
      This is a bit ugly, but in the interest of fixing the bug and backporting
      before all architectures are converted this is a compromise.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200914045219.3736466-2-npiggin@gmail.com
  17. 09 Sep, 2020 1 commit
  18. 01 Sep, 2020 3 commits
  19. 06 Aug, 2020 1 commit
  20. 24 Jul, 2020 1 commit
    • entry: Provide generic syscall entry functionality · 142781e1
      Thomas Gleixner authored
      
      On syscall entry certain work needs to be done:
      
         - Establish state (lockdep, context tracking, tracing)
         - Conditional work (ptrace, seccomp, audit...)
      
      This code is needlessly duplicated and different in all architectures.
      
      Provide a generic version based on the x86 implementation which has all the
      RCU and instrumentation bits right.
      
      As interrupt/exception entry from user space needs parts of the same
      functionality, provide a function for this as well.
      
      syscall_enter_from_user_mode() and irqentry_enter_from_user_mode() must be
      called right after the low level ASM entry. The calling code must be
      non-instrumentable. After the functions return, state is correct and the
      subsequent functions can be instrumented.
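
      With this in place, arch entry code reduces to a thin wrapper; a
      sketch (do_syscall is a hypothetical name, modeled on the x86
      conversion):

        /* called from the low level ASM entry stub */
        __visible noinstr void do_syscall(struct pt_regs *regs, long nr)
        {
                nr = syscall_enter_from_user_mode(regs, nr);
                /* instrumentable work may run from here on:
                   dispatch through the syscall table, etc. */
        }
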
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Link: https://lkml.kernel.org/r/20200722220519.513463269@linutronix.de
  21. 07 Jul, 2020 1 commit
  22. 04 Jul, 2020 1 commit
  23. 26 Jun, 2020 1 commit
  24. 13 Jun, 2020 1 commit
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Masahiro Yamada authored
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' has been gradually
      decreasing, but there are still more than 2400 instances.
      
      This commit finishes the conversion. While I touched the lines,
      I also fixed the indentation.
      
      A variety of indentation styles were found:
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
      In order to convert all of them to 1 tab + 'help', I ran the
      following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  25. 18 May, 2020 1 commit
  26. 15 May, 2020 2 commits
    • scs: Disable when function graph tracing is enabled · ddc9863e
      Sami Tolvanen authored
      
      With SCS the return address is taken from the shadow stack and the
      value in the frame record has no effect. The mcount-based graph tracer
      hooks returns by modifying frame records on the (regular) stack, and
      thus is not compatible. The patchable-function-entry graph tracer
      used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved
      to the shadow stack, and is compatible.
      
      Modifying the mcount-based graph tracer to work with SCS would require
      a mechanism to determine the corresponding slot on the shadow stack
      (and to pass this through the ftrace infrastructure), and we expect
      that everyone will eventually move to the patchable-function-entry
      based graph tracer anyway, so for now let's disable SCS when the
      mcount-based graph tracer is enabled.
      
      SCS and patchable-function-entry are both supported from LLVM 10.x.
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • scs: Add support for Clang's Shadow Call Stack (SCS) · d08b9f0c
      Sami Tolvanen authored
      This change adds generic support for Clang's Shadow Call Stack,
      which uses a shadow stack to protect return addresses from being
      overwritten by an attacker. Details are available here:
      
      https://clang.llvm.org/docs/ShadowCallStack.html
      
      Note that security guarantees in the kernel differ from the ones
      documented for user space. The kernel must store addresses of
      shadow stacks in memory, which means an attacker capable of reading
      and writing arbitrary memory may be able to locate them and hijack
      control flow by modifying the stacks.
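
      The generic interface this change introduces is deliberately small;
      per-task shadow stacks are set up at fork and torn down at exit,
      roughly (a sketch of the API surface in include/linux/scs.h):

        int scs_prepare(struct task_struct *tsk, int node);   /* at fork */
        void scs_release(struct task_struct *tsk);            /* at exit */
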
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      [will: Numerous cosmetic changes]
      Signed-off-by: Will Deacon <will@kernel.org>
  27. 13 May, 2020 1 commit
  28. 16 Mar, 2020 3 commits
  29. 06 Mar, 2020 1 commit
  30. 14 Feb, 2020 1 commit
    • context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Frederic Weisbecker authored
      
      A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call
      into context tracking on user entry/exit but instead use static keys
      (or not) to optimize those calls. Ideally every arch should migrate to
      that behaviour in the long run.
      
      Add a config option to let those archs remove their TIF_NOHZ
      definitions.
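
      The static-key pattern those archs use instead looks roughly like
      this (sketch, cf. user_exit_irqoff() in include/linux/context_tracking.h):

        /* static branch: patched to a no-op when context
           tracking is disabled */
        if (context_tracking_enabled())
                __context_tracking_exit(CONTEXT_USER);
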
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
  31. 04 Feb, 2020 1 commit