  1. 09 Feb, 2024 1 commit
  2. 30 Jan, 2024 1 commit
  3. 05 Dec, 2023 1 commit
  4. 23 Nov, 2023 1 commit
    • arm64: add dependency between vmlinuz.efi and Image · c0a85742
      Masahiro Yamada authored
      A common issue in Makefiles is a race in parallel building.
      
      You need to be careful to prevent multiple threads from writing to the
      same file simultaneously.
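
      As a minimal illustration (hypothetical targets, not the kernel's actual
      rules), two top-level targets that each descend into the same sub-make
      will race under parallel make:

        # With 'make -j2 a b', two sub-makes run concurrently and both
        # may regenerate sub/shared at the same time.
        a b:
        	$(MAKE) -C sub shared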
      
      Commit 3939f334 ("ARM: 8418/1: add boot image dependencies to not
      generate invalid images") addressed such a bad scenario.
      
      A similar symptom occurs with the following command:
      
        $ make -j$(nproc) ARCH=arm64 Image vmlinuz.efi
          [ snip ]
          SORTTAB vmlinux
          OBJCOPY arch/arm64/boot/Image
          OBJCOPY arch/arm64/boot/Image
          AS      arch/arm64/boot/zboot-header.o
          PAD     arch/arm64/boot/vmlinux.bin
          GZIP    arch/arm64/boot/vmlinuz
          OBJCOPY arch/arm64/boot/vmlinuz.o
          LD      arch/arm64/boot/vmlinuz.efi.elf
          OBJCOPY arch/arm64/boot/vmlinuz.efi
      
      The log "OBJCOPY arch/arm64/boot/Image" is displayed twice.
      
      It indicates that two threads simultaneously enter arch/arm64/boot/
      and write to arch/arm64/boot/Image.
      
      It occasionally leads to a build failure:
      
        $ make -j$(nproc) ARCH=arm64 Image vmlinuz.efi
          [ snip ]
          SORTTAB vmlinux
          OBJCOPY arch/arm64/boot/Image
          PAD     arch/arm64/boot/vmlinux.bin
        truncate: Invalid number: 'arch/arm64/boot/vmlinux.bin'
        make[2]: *** [drivers/firmware/efi/libstub/Makefile.zboot:13:
        arch/arm64/boot/vmlinux.bin] Error 1
        make[2]: *** Deleting file 'arch/arm64/boot/vmlinux.bin'
        make[1]: *** [arch/arm64/Makefile:163: vmlinuz.efi] Error 2
        make[1]: *** Waiting for unfinished jobs....
        make: *** [Makefile:234: __sub-make] Error 2
      
      vmlinuz.efi depends on Image, but such a dependency is not specified
      in arch/arm64/Makefile.
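
      The fix is to state that dependency explicitly. A minimal sketch of the
      kind of rule this implies (the exact recipe in arch/arm64/Makefile may
      differ):

        # vmlinuz.efi must wait for Image instead of racing with it
        vmlinuz.efi: Image
        	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
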
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Simon Glass <sjg@chromium.org>
      Link: https://lore.kernel.org/r/20231119053234.2367621-1-masahiroy@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 28 Oct, 2023 1 commit
    • kbuild: unify vdso_install rules · 56769ba4
      Masahiro Yamada authored
      Currently, there is no standard implementation for vdso_install,
      leading to various issues:
      
       1. Code duplication
      
          Many architectures duplicate similar code just for copying files
          to the install destination.
      
          Some architectures (arm, sparc, x86) create build-id symlinks,
          introducing more code duplication.
      
       2. Unintended updates of in-tree build artifacts
      
          The vdso_install rule depends on the vdso files to install.
          It may update in-tree build artifacts. This can be problematic,
          as explained in commit 19514fc6 ("arm, kbuild: make
          "make install" not depend on vmlinux").
      
       3. Broken code in some architectures
      
          Makefile code is often copied from one architecture to another
          without proper adaptation.
      
          'make vdso_install' for parisc does not work.
      
          'make vdso_install' for s390 installs vdso64, but not vdso32.
      
      To address these problems, this commit introduces a generic vdso_install
      rule.
      
      Architectures that support vdso_install need to define vdso-install-y
      in arch/*/Makefile. vdso-install-y lists the files to install.
      
      For example, arch/x86/Makefile looks like this:
      
        vdso-install-$(CONFIG_X86_64)           += arch/x86/entry/vdso/vdso64.so.dbg
        vdso-install-$(CONFIG_X86_X32_ABI)      += arch/x86/entry/vdso/vdsox32.so.dbg
        vdso-install-$(CONFIG_X86_32)           += arch/x86/entry/vdso/vdso32.so.dbg
        vdso-install-$(CONFIG_IA32_EMULATION)   += arch/x86/entry/vdso/vdso32.so.dbg
      
      These files will be installed to $(MODLIB)/vdso/, with the .dbg suffix,
      if present, stripped away.
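
      A minimal sketch of what the generic rule amounts to (variable names and
      the recipe are assumptions here, not the actual kbuild implementation;
      the optional rename field described below is ignored for brevity):

        vdso_install:
        	$(Q)mkdir -p $(MODLIB)/vdso
        	$(Q)$(foreach f,$(vdso-install-y),cp $(f) $(MODLIB)/vdso/$(patsubst %.dbg,%,$(notdir $(f)));)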
      
      A vdso-install-y entry can optionally take a second field after a colon
      separator. This is needed because some architectures install a vdso
      file under a different base name.
      
      The following is a snippet from arch/arm64/Makefile.
      
        vdso-install-$(CONFIG_COMPAT_VDSO)      += arch/arm64/kernel/vdso32/vdso.so.dbg:vdso32.so
      
      This will rename vdso.so.dbg to vdso32.so during installation. If such
      architectures change their implementation so that the base names match,
      this workaround will go away.
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Acked-by: Sven Schnelle <svens@linux.ibm.com> # s390
      Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
      Reviewed-by: Guo Ren <guoren@kernel.org>
      Acked-by: Helge Deller <deller@gmx.de>  # parisc
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  6. 13 Feb, 2023 1 commit
  7. 31 Jan, 2023 2 commits
    • arm64: pauth: don't sign leaf functions · c68cf528
      Mark Rutland authored
      Currently, when CONFIG_ARM64_PTR_AUTH_KERNEL=y (and
      CONFIG_UNWIND_PATCH_PAC_INTO_SCS=n), we enable pointer authentication
      for all functions, including leaf functions. This isn't necessary, and
      is unfortunate for a few reasons:
      
      * Any PACIASP instruction is implicitly a `BTI C` landing pad, and
        forcing the addition of a PACIASP in every function introduces a
        larger set of BTI gadgets than is necessary.
      
      * The PACIASP and AUTIASP instructions make leaf functions larger than
        necessary, bloating the kernel Image. For a defconfig v6.2-rc3 kernel,
        this appears to add ~64KiB relative to not signing leaf functions,
        which is unfortunate but not entirely onerous.
      
      * The PACIASP and AUTIASP instructions potentially make leaf functions
        more expensive in terms of performance and/or power. For many trivial
        leaf functions, this is clearly unnecessary, e.g.
      
        | <arch_local_save_flags>:
        |        d503233f        paciasp
        |        d53b4220        mrs     x0, daif
        |        d50323bf        autiasp
        |        d65f03c0        ret
      
        | <calibration_delay_done>:
        |        d503233f        paciasp
        |        d50323bf        autiasp
        |        d65f03c0        ret
        |        d503201f        nop
      
      * When CONFIG_UNWIND_PATCH_PAC_INTO_SCS=y we disable pointer
        authentication for leaf functions, so clearly this is not functionally
        necessary; it indicates we have an inconsistent threat model and
        convolutes the Makefile logic.
      
      We've used pointer authentication in leaf functions since the
      introduction of in-kernel pointer authentication in commit:
      
        74afda40 ("arm64: compile the kernel with ptrauth return address signing")
      
      ... but at the time we had no rationale for signing leaf functions.
      
      Subsequently, we considered avoiding signing leaf functions:
      
        https://lore.kernel.org/linux-arm-kernel/1586856741-26839-1-git-send-email-amit.kachhap@arm.com/
        https://lore.kernel.org/linux-arm-kernel/1588149371-20310-1-git-send-email-amit.kachhap@arm.com/
      
      ... however at the time we didn't have an abundance of reasons to avoid
      signing leaf functions as above (e.g. the BTI case), we had no hardware
      to make performance measurements, and it was reasoned that this gave
      some level of protection against a limited set of code-reuse gadgets
      which would fall through to a RET. We documented this in commit:
      
        717b938e ("arm64: Document why we enable PAC support for leaf functions")
      
      Notably, this was before we supported any forward-edge CFI scheme (e.g.
      Arm BTI, or Clang CFI/kCFI), which would prevent jumping into the middle
      of a function.
      
      In addition, even with signing forced for leaf functions, AUTIASP may be
      placed before a number of instructions which might constitute such a
      gadget, e.g.
      
      | <user_regs_reset_single_step>:
      |        f9400022        ldr     x2, [x1]
      |        d503233f        paciasp
      |        d50323bf        autiasp
      |        f9408401        ldr     x1, [x0, #264]
      |        720b005f        tst     w2, #0x200000
      |        b26b0022        orr     x2, x1, #0x200000
      |        926af821        and     x1, x1, #0xffffffffffdfffff
      |        9a820021        csel    x1, x1, x2, eq  // eq = none
      |        f9008401        str     x1, [x0, #264]
      |        d65f03c0        ret
      
      | <fpsimd_cpu_dead>:
      |        2a0003e3        mov     w3, w0
      |        9000ff42        adrp    x2, ffff800009ffd000 <xen_dynamic_chip+0x48>
      |        9120e042        add     x2, x2, #0x838
      |        52800000        mov     w0, #0x0                        // #0
      |        d503233f        paciasp
      |        f000d041        adrp    x1, ffff800009a20000 <this_cpu_vector>
      |        d50323bf        autiasp
      |        9102c021        add     x1, x1, #0xb0
      |        f8635842        ldr     x2, [x2, w3, uxtw #3]
      |        f821685f        str     xzr, [x2, x1]
      |        d65f03c0        ret
      |        d503201f        nop
      
      So generally, trying to use AUTIASP to detect such gadgetization is not
      robust, and this is dealt with far better by forward-edge CFI (which is
      designed to prevent such cases). We should bite the bullet and stop
      pretending that AUTIASP is a mitigation for such forward-edge
      gadgetization.
      
      For the above reasons, this patch has the kernel consistently sign
      non-leaf functions and avoid signing leaf functions.
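
      In terms of compiler flags, this amounts to the following kind of change
      in arch/arm64/Makefile (a sketch; the surrounding conditionals are
      omitted):

        # before: sign all functions, including leaf functions
        KBUILD_CFLAGS += -mbranch-protection=pac-ret+leaf
        # after: sign only non-leaf functions
        KBUILD_CFLAGS += -mbranch-protection=pac-ret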
      
      Considering a defconfig v6.2-rc3 kernel built with LLVM 15.0.6:
      
      * The vmlinux is ~43KiB smaller:
      
        | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
        | -rwxr-xr-x 1 mark mark 338547808 Jan 25 17:17 vmlinux-after
        | -rwxr-xr-x 1 mark mark 338591472 Jan 25 17:22 vmlinux-before
      
      * The resulting Image is 64KiB smaller:
      
        | [mark@lakrids:~/src/linux]% ls -al Image-*
        | -rwxr-xr-x 1 mark mark 32702976 Jan 25 17:17 Image-after
        | -rwxr-xr-x 1 mark mark 32768512 Jan 25 17:22 Image-before
      
      * There are ~400 fewer BTI gadgets:
      
        | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
        |    1219 bti     c
        |   61982 paciasp
      
        | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
        |   10099 bti     c
        |   52699 paciasp
      
        Which is +8880 BTIs, and -9283 PACIASPs, for -403 unnecessary BTI
        gadgets. While this is small relative to the total, distinguishing the
        two cases will make it easier to analyse and reduce this set further
        in future.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230131105809.991288-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: unify asm-arch manipulation · 1e249c41
      Mark Rutland authored
      Assemblers will reject instructions not supported by a target
      architecture version, and so we must explicitly tell the assembler the
      latest architecture version for which we want to assemble
      instructions.
      
      We've added a few AS_HAS_ARMV8_<N> definitions for this, in addition to
      an inconsistently named AS_HAS_PAC definition, from which arm64's
      top-level Makefile determines the architecture version that we intend to
      target, and generates the `asm-arch` variable.
      
      To make this a bit clearer and easier to maintain, this patch reworks
      the Makefile to determine asm-arch in a single if-else-endif chain.
      AS_HAS_PAC, which is defined when the assembler supports
      `-march=armv8.3-a`, is renamed to AS_HAS_ARMV8_3.
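
      A sketch of the resulting chain, assuming the corresponding
      CONFIG_AS_HAS_ARMV8_<N> symbols (the real Makefile may cover more
      versions):

        ifeq ($(CONFIG_AS_HAS_ARMV8_5),y)
        asm-arch := armv8.5-a
        else ifeq ($(CONFIG_AS_HAS_ARMV8_4),y)
        asm-arch := armv8.4-a
        else ifeq ($(CONFIG_AS_HAS_ARMV8_3),y)
        asm-arch := armv8.3-a
        endif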
      
      As the logic for armv8.3-a is lifted out of the block handling pointer
      authentication, `asm-arch` may now be set to armv8.3-a regardless of
      whether support for pointer authentication is selected. This means that
      it will be possible to assemble armv8.3-a instructions even if we didn't
      intend to, but this is consistent with our handling of other
      architecture versions, and the compiler won't generate armv8.3-a
      instructions regardless.
      
      For the moment there's no need for a CONFIG_AS_HAS_ARMV8_1, as the code
      for LSE atomics and LDAPR uses individual `.arch_extension` entries and
      does not require the baseline asm arch to be bumped to armv8.1-a. The
      other armv8.1-a features (e.g. PAN) do not require assembler support.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230131105809.991288-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 24 Jan, 2023 1 commit
    • arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS · baaf553d
      Mark Rutland authored
      This patch enables support for DYNAMIC_FTRACE_WITH_CALL_OPS on arm64.
      This allows each ftrace callsite to provide an ftrace_ops to the common
      ftrace trampoline, allowing each callsite to invoke distinct tracer
      functions without the need to fall back to list processing or to
      allocate custom trampolines for each callsite. This significantly speeds
      up cases where multiple distinct trace functions are used and callsites
      are mostly traced by a single tracer.
      
      The main idea is to place a pointer to the ftrace_ops as a literal at a
      fixed offset from the function entry point, which can be recovered by
      the common ftrace trampoline. Using a 64-bit literal avoids branch range
      limitations, and permits the ops to be swapped atomically without
      special considerations that apply to code-patching. In future this will
      also allow for the implementation of DYNAMIC_FTRACE_WITH_DIRECT_CALLS
      without branch range limitations by using additional fields in struct
      ftrace_ops.
      
      As noted in the core patch adding support for
      DYNAMIC_FTRACE_WITH_CALL_OPS, this approach allows for directly invoking
      ftrace_ops::func even for ftrace_ops which are dynamically-allocated (or
      part of a module), without going via ftrace_ops_list_func.
      
      Currently, this approach is not compatible with CLANG_CFI, as the
      presence/absence of pre-function NOPs changes the offset of the
      pre-function type hash, and there's no existing mechanism to ensure a
      consistent offset for instrumented and uninstrumented functions. When
      CLANG_CFI is enabled, the existing scheme with a global ops->func
      pointer is used, and there should be no functional change. I am
      currently working with others to allow the two to work together in
      future (though this will likely require updated compiler support).
      
      I've benchmarked this with the ftrace_ops sample module [1], which is
      not currently upstream, but available at:
      
        https://lore.kernel.org/lkml/20230103124912.2948963-1-mark.rutland@arm.com
        git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git ftrace-ops-sample-20230109
      
      Using that module I measured the total time taken for 100,000 calls to a
      trivial instrumented function, with a number of tracers enabled with
      relevant filters (which would apply to the instrumented function) and a
      number of tracers enabled with irrelevant filters (which would not apply
      to the instrumented function). I tested on an M1 MacBook Pro, running
      under a HVF-accelerated QEMU VM (i.e. on real hardware).
      
      Before this patch:
      
        Number of tracers     || Total time  | Per-call average time (ns)
        Relevant | Irrelevant || (ns)        | Total        | Overhead
        =========+============++=============+==============+============
               0 |          0 ||      94,583 |         0.95 |           -
               0 |          1 ||      93,709 |         0.94 |           -
               0 |          2 ||      93,666 |         0.94 |           -
               0 |         10 ||      93,709 |         0.94 |           -
               0 |        100 ||      93,792 |         0.94 |           -
        ---------+------------++-------------+--------------+------------
               1 |          1 ||   6,467,833 |        64.68 |       63.73
               1 |          2 ||   7,509,708 |        75.10 |       74.15
               1 |         10 ||  23,786,792 |       237.87 |      236.92
               1 |        100 || 106,432,500 |     1,064.43 |     1063.38
        ---------+------------++-------------+--------------+------------
               1 |          0 ||   1,431,875 |        14.32 |       13.37
               2 |          0 ||   6,456,334 |        64.56 |       63.62
              10 |          0 ||  22,717,000 |       227.17 |      226.22
             100 |          0 || 103,293,667 |      1032.94 |     1031.99
        ---------+------------++-------------+--------------+------------
      
        Note: per-call overhead is estimated relative to the baseline case
        with 0 relevant tracers and 0 irrelevant tracers.
      
      After this patch:
      
        Number of tracers     || Total time  | Per-call average time (ns)
        Relevant | Irrelevant || (ns)        | Total        | Overhead
        =========+============++=============+==============+============
               0 |          0 ||      94,541 |         0.95 |           -
               0 |          1 ||      93,666 |         0.94 |           -
               0 |          2 ||      93,709 |         0.94 |           -
               0 |         10 ||      93,667 |         0.94 |           -
               0 |        100 ||      93,792 |         0.94 |           -
        ---------+------------++-------------+--------------+------------
               1 |          1 ||     281,000 |         2.81 |        1.86
               1 |          2 ||     281,042 |         2.81 |        1.87
               1 |         10 ||     280,958 |         2.81 |        1.86
               1 |        100 ||     281,250 |         2.81 |        1.87
        ---------+------------++-------------+--------------+------------
               1 |          0 ||     280,959 |         2.81 |        1.86
               2 |          0 ||   6,502,708 |        65.03 |       64.08
              10 |          0 ||  18,681,209 |       186.81 |      185.87
             100 |          0 || 103,550,458 |     1,035.50 |     1034.56
        ---------+------------++-------------+--------------+------------
      
        Note: per-call overhead is estimated relative to the baseline case
        with 0 relevant tracers and 0 irrelevant tracers.
      
      As can be seen from the above:
      
      a) Whenever there is a single relevant tracer function associated with a
         tracee, the overhead of invoking the tracer is constant, and does not
         scale with the number of tracers which are *not* associated with that
         tracee.
      
      b) The overhead for a single relevant tracer has dropped to ~1/7 of the
         overhead prior to this series (from 13.37ns to 1.86ns). This is
         largely due to permitting calls to dynamically-allocated ftrace_ops
         without going through ftrace_ops_list_func.
      
      I've run the ftrace selftests from v6.2-rc3, which reports:
      
      | # of passed:  110
      | # of failed:  0
      | # of unresolved:  3
      | # of untested:  0
      | # of unsupported:  0
      | # of xfailed:  1
      | # of undefined(test bug):  0
      
      ... where the unresolved entries were the tests for DIRECT functions
      (which are not supported), and the checkbashisms selftest (which is
      irrelevant here):
      
      | [8] Test ftrace direct functions against tracers        [UNRESOLVED]
      | [9] Test ftrace direct functions against kprobes        [UNRESOLVED]
      | [62] Meta-selftest: Checkbashisms       [UNRESOLVED]
      
      ... with all other tests passing (or failing as expected).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Florent Revest <revest@chromium.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230123134603.1064407-9-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 18 Nov, 2022 1 commit
    • ftrace: arm64: move from REGS to ARGS · 26299b3f
      Mark Rutland authored
      This commit replaces arm64's support for FTRACE_WITH_REGS with support
      for FTRACE_WITH_ARGS. This removes some overhead and complexity, and
      removes some latent issues with inconsistent presentation of struct
      pt_regs (which can only be reliably saved/restored at exception
      boundaries).
      
      FTRACE_WITH_REGS has been supported on arm64 since commit:
      
        3b23e499 ("arm64: implement ftrace with regs")
      
      As noted in the commit message, the major reasons for implementing
      FTRACE_WITH_REGS were:
      
      (1) To make it possible to use the ftrace graph tracer with pointer
          authentication, where it's necessary to snapshot/manipulate the LR
          before it is signed by the instrumented function.
      
      (2) To make it possible to implement LIVEPATCH in future, where we need
          to hook function entry before an instrumented function manipulates
          the stack or argument registers. Practically speaking, we need to
          preserve the argument/return registers, PC, LR, and SP.
      
      Neither of these needs a struct pt_regs; both only require the set of
      registers which are live at function call/return boundaries. Our calling
      convention is defined by "Procedure Call Standard for the Arm® 64-bit
      Architecture (AArch64)" (AKA "AAPCS64"), which can currently be found
      at:
      
        https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst
      
      Per AAPCS64, all function call argument and return values are held in
      the following GPRs:
      
      * X0 - X7 : parameter / result registers
      * X8      : indirect result location register
      * SP      : stack pointer
      
      Additionally, at function call boundaries, the following GPRs hold
      context/return information:
      
      * X29 : frame pointer (AKA FP)
      * X30 : link register (AKA LR)
      
      ... and for ftrace we need to capture the instrumented address:
      
       * PC  : program counter
      
      No other GPRs are relevant, as none of the other registers hold
      parameters or return values:
      
      * X9  - X17 : temporaries, may be clobbered
      * X18       : shadow call stack pointer (or temporary)
      * X19 - X28 : callee saved
      
      This patch implements FTRACE_WITH_ARGS for arm64, only saving/restoring
      the minimal set of registers necessary. This is always sufficient to
      manipulate control flow (e.g. for live-patching) or to manipulate
      function arguments and return values.
      
      This reduces the necessary stack usage from 336 bytes for pt_regs down
      to 112 bytes for ftrace_regs + 32 bytes for two frame records, freeing
      up 188 bytes. This could be reduced further with changes to the
      unwinder.
      
      As there is no longer a need to save different sets of registers for
      different features, we no longer need distinct `ftrace_caller` and
      `ftrace_regs_caller` trampolines. This allows the trampoline assembly to
      be simpler, and simplifies code which previously had to handle the two
      trampolines.
      
      I've tested this with the ftrace selftests, where there are no
      unexpected failures.
      Co-developed-by: Florent Revest <revest@chromium.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Florent Revest <revest@chromium.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
      Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
      Link: https://lore.kernel.org/r/20221103170520.931305-5-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  10. 09 Nov, 2022 2 commits
  11. 02 Oct, 2022 1 commit
    • kbuild: remove head-y syntax · ce697cce
      Masahiro Yamada authored
      Kbuild puts the objects listed in head-y at the head of vmlinux.
      Conventionally, we do this for head*.S, which contains the kernel entry
      point.
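
      For example, arm64 conventionally carried a line like the following in
      its Makefile (illustrative; the exact spelling varies by architecture):

        head-y := arch/arm64/kernel/head.o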
      
      An alternative approach is to control the section order via the linker
      script. Indeed, the code marked as __HEAD goes into the ".head.text"
      section, which is placed before the normal ".text" section.
      
      I do not know if both of them are needed. From the build system
      perspective, head-y is not mandatory. If you can achieve the proper code
      placement by the linker script only, it would be cleaner.
      
      I collected the current head-y objects into head-object-list.txt. It is
      a whitelist. My hope is it will be reduced in the long run.
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Tested-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
  12. 20 Sep, 2022 1 commit
  13. 11 May, 2022 1 commit
  14. 14 Dec, 2021 1 commit
  15. 24 Oct, 2021 1 commit
  16. 11 Aug, 2021 1 commit
    • arm64: clean vdso & vdso32 files · 017f5fb9
      Andrew Delgadillo authored
      Commit a5b8ca97 ("arm64: do not descend to vdso directories twice")
      changed the cleaning behavior of arm64's vdso files, so that vdso.lds,
      vdso.so, and vdso.so.dbg are no longer removed by 'make clean/mrproper':
      
      $ make defconfig ARCH=arm64
      $ make ARCH=arm64
      $ make mrproper ARCH=arm64
      $ git clean -nxdf
      Would remove arch/arm64/kernel/vdso/vdso.lds
      Would remove arch/arm64/kernel/vdso/vdso.so
      Would remove arch/arm64/kernel/vdso/vdso.so.dbg
      
      To remedy this, manually descend into arch/arm64/kernel/vdso upon
      cleaning.
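
      A sketch of the kind of archclean hookup this implies in
      arch/arm64/Makefile (the recipe details here are assumptions):

        archclean:
        	$(Q)$(MAKE) $(clean)=$(boot)
        	$(Q)$(MAKE) $(clean)=arch/arm64/kernel/vdso
        	$(Q)$(MAKE) $(clean)=arch/arm64/kernel/vdso32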
      
      After this commit:
      $ make defconfig ARCH=arm64
      $ make ARCH=arm64
      $ make mrproper ARCH=arm64
      $ git clean -nxdf
      <empty>
      
      Similar results are obtained for the vdso32 equivalent.
      Signed-off-by: Andrew Delgadillo <adelg@google.com>
      Cc: stable@vger.kernel.org
      Fixes: a5b8ca97 ("arm64: do not descend to vdso directories twice")
      Link: https://lore.kernel.org/r/20210810231755.1743524-1-adelg@google.com
      Signed-off-by: Will Deacon <will@kernel.org>
  17. 03 Aug, 2021 2 commits
  18. 15 Jun, 2021 1 commit
  19. 26 May, 2021 1 commit
  20. 10 May, 2021 1 commit
    • arm64: Generate cpucaps.h · 0c6c2d36
      Mark Brown authored
      The arm64 code allocates an internal constant to every CPU feature it can
      detect, distinct from the public hwcap numbers we use to expose some
      features to userspace. Currently this is maintained manually, which is an
      irritating source of conflicts when working on new features. To avoid
      this, replace the header with a simple text file listing the names we've
      assigned, sorted to minimise conflicts.
      
      As part of doing this we also do the Kbuild hookup required to hook up
      an arch tools directory and to generate header files in there.
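
      A hedged sketch of that generation step (the file names follow the
      commit's description; the recipe is illustrative, not the actual
      tooling, which also emits ARM64_NCAPS and similar):

        # number each capability name listed in the sorted text file
        arch/arm64/include/generated/asm/cpucaps.h: arch/arm64/tools/cpucaps
        	$(Q)awk '/^[A-Z0-9_]/ { printf "#define ARM64_%s %d\n", $$0, n++ }' $< > $@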
      
      This will result in a renumbering and reordering of the existing
      constants; since they are all internal-only, the values should not be
      important. The reordering will impact the order in which some steps in
      enumeration handle features, but the algorithm is not intended to depend
      on this and I haven't seen any issues in testing. Due to the UAO cpucap
      having been removed in the past, we end up with ARM64_NCAPS being 1
      smaller than it was before.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20210428121231.11219-1-broonie@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  21. 24 Apr, 2021 1 commit
  22. 20 Jan, 2021 1 commit
  23. 05 Jan, 2021 1 commit
  24. 22 Dec, 2020 2 commits
  25. 01 Dec, 2020 1 commit
    • kbuild: Hoist '--orphan-handling' into Kconfig · 59612b24
      Nathan Chancellor authored
      Currently, '--orphan-handling=warn' is spread out across four different
      architectures in their respective Makefiles, which makes it a little
      unruly to deal with in case it needs to be disabled for a specific
      linker version (in this case, ld.lld 10.0.1).
      
      To make it easier to control this, hoist this warning into Kconfig and
      the main Makefile so that disabling it is simpler, as the warning will
      only be enabled in a couple of places (the main Makefile and a couple of
      compressed boot folders that blow away LDFLAGS_vmlinux) and making it
      conditional is easier due to Kconfig syntax. One small additional
      benefit of this is saving a call to ld-option on incremental builds
      because we will have already evaluated it for CONFIG_LD_ORPHAN_WARN.
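
      A sketch of the hoisted Makefile side, using the config symbol named in
      the commit (the Kconfig half, where ld-option is evaluated, is omitted):

        ifdef CONFIG_LD_ORPHAN_WARN
        LDFLAGS_vmlinux += --orphan-handling=warn
        endif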
      
      To keep the list of supported architectures the same, introduce
      CONFIG_ARCH_WANT_LD_ORPHAN_WARN, which an architecture can select to
      gain this automatically after all of the sections are specified and size
      asserted. A special thanks to Kees Cook for the help text on this
      config.
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/1187
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Tested-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  26. 20 Oct, 2020 1 commit
  27. 24 Sep, 2020 1 commit
  28. 07 Sep, 2020 1 commit
  29. 03 Sep, 2020 1 commit
  30. 01 Sep, 2020 1 commit
  31. 28 Aug, 2020 2 commits
  32. 21 Aug, 2020 1 commit
  33. 15 Jul, 2020 1 commit
  34. 09 Jul, 2020 1 commit
  35. 15 Jun, 2020 1 commit