1. 14 Mar, 2022 8 commits
    • Merge branch 'for-next/mm' into for-next/core · 20fd2ed1
      Will Deacon authored
      * for-next/mm:
        Documentation: vmcoreinfo: Fix htmldocs warning
        arm64/mm: Drop use_1G_block()
        arm64: avoid flushing icache multiple times on contiguous HugeTLB
        arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges
        arm64/hugetlb: Define __hugetlb_valid_size()
        arm64/mm: avoid fixmap race condition when create pud mapping
        arm64/mm: Consolidate TCR_EL1 fields
    • Merge branch 'for-next/misc' into for-next/core · b3ea0eaf
      Will Deacon authored
      * for-next/misc:
        arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition
        arm64: clean up tools Makefile
        arm64: drop unused includes of <linux/personality.h>
        arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones
        arm64: prevent instrumentation of bp hardening callbacks
        arm64: cpufeature: Remove cpu_has_fwb() check
        arm64: atomics: remove redundant static branch
        arm64: entry: Save some nops when CONFIG_ARM64_PSEUDO_NMI is not set
    • Merge branch 'for-next/linkage' into for-next/core · 563c4635
      Will Deacon authored
      * for-next/linkage:
        arm64: module: remove (NOLOAD) from linker script
        linkage: remove SYM_FUNC_{START,END}_ALIAS()
        x86: clean up symbol aliasing
        arm64: clean up symbol aliasing
        linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()
    • Merge branch 'for-next/kselftest' into for-next/core · 839d0758
      Will Deacon authored
      * for-next/kselftest:
        kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace
        kselftest/arm64: signal: Allow tests to be incompatible with features
        kselftest/arm64: mte: user_mem: test a wider range of values
        kselftest/arm64: mte: user_mem: add more test types
        kselftest/arm64: mte: user_mem: add test type enum
        kselftest/arm64: mte: user_mem: check different offsets and sizes
        kselftest/arm64: mte: user_mem: rework error handling
        kselftest/arm64: mte: user_mem: introduce tag_offset and tag_len
        kselftest/arm64: Remove local definitions of MTE prctls
        kselftest/arm64: Remove local ARRAY_SIZE() definitions
    • Merge branch 'for-next/insn' into for-next/core · b7323ae6
      Will Deacon authored
      * for-next/insn:
        arm64: insn: add encoders for atomic operations
        arm64: move AARCH64_BREAK_FAULT into insn-def.h
        arm64: insn: Generate 64 bit mask immediates correctly
    • Merge branch 'for-next/errata' into for-next/core · cd92fdfc
      Will Deacon authored
      * for-next/errata:
        arm64: Add cavium_erratum_23154_cpus missing sentinel
        irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR
    • Merge branch 'for-next/docs' into for-next/core · b523d6b8
      Will Deacon authored
      * for-next/docs:
        arm64/mte: Clarify mode reported by PR_GET_TAGGED_ADDR_CTRL
        arm64: booting.rst: Clarify on requiring non-secure EL2
    • Merge branch 'for-next/coredump' into for-next/core · 0d3d0315
      Will Deacon authored
      * for-next/coredump:
        arm64: Change elfcore for_each_mte_vma() to use VMA iterator
        arm64: mte: Document the core dump file format
        arm64: mte: Dump the MTE tags in the core file
        arm64: mte: Define the number of bytes for storing the tags in a page
        elf: Introduce the ARM MTE ELF segment type
        elfcore: Replace CONFIG_{IA64, UML} checks with a new option
  2. 09 Mar, 2022 3 commits
  3. 08 Mar, 2022 3 commits
  4. 07 Mar, 2022 6 commits
    • kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace · e2dc49ef
      Mark Brown authored
      If the test triggers a problem it may well result in a log message from
      the kernel such as a WARN() or BUG(). If these include a PID, it can help
      with debugging to know whether it was the parent or the child process
      that triggered the issue; since the test is just creating a new thread,
      the process name will be the same either way. Print the PIDs of the
      parent and child on startup so users have this information to hand
      should it be needed.
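The idea can be sketched in plain C (a hypothetical sketch, not the actual sve-ptrace test code): fork and announce both PIDs up front so a PID later printed by a kernel WARN() or BUG() can be attributed to the right process.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical sketch: announce both PIDs at startup so a PID printed
 * by a kernel WARN()/BUG() can be matched to the parent or the child.
 * Returns the child PID to the parent, or -1 on fork() failure. */
static pid_t announce_pids(void)
{
	pid_t child = fork();

	if (child == 0) {
		printf("Child: %d (parent %d)\n", getpid(), getppid());
		_exit(0);
	}
	if (child > 0) {
		printf("Parent: %d, Child: %d\n", getpid(), child);
		waitpid(child, NULL, 0);	/* reap the child */
	}
	return child;
}
```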
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
      Link: https://lore.kernel.org/r/20220303192817.2732509-1-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
    • irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR · 24a147bc
      Linu Cherian authored
      When an IAR register read races with a GIC interrupt RELEASE event, the
      GIC CPU interface could wrongly return a valid INTID to the CPU for an
      interrupt that has already been released (deactivated), instead of 0x3ff.
      
      As a side effect, an interrupt handler could run twice: once with
      interrupt priority and then with idle priority.
      
      As a workaround, gic_read_iar() is updated so that, on all affected
      silicon, it returns a valid interrupt ID only if there is a change in
      the active priority list after the IAR read.
      
      Since there are silicon variants where both errata 23154 and 38545 are
      applicable, the workaround for erratum 23154 has been extended to
      address both of them.
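The workaround's decision logic can be sketched as a standalone approximation (the register values are passed in rather than read through the kernel's sysreg accessors, and the function name is illustrative):

```c
#include <stdint.h>

#define GICV3_SPURIOUS_INTID 0x3ffU

/* Illustrative sketch of the erratum 38545 workaround: after the IAR
 * read, trust the returned INTID only if the active-priorities state
 * changed, i.e. the acknowledge really activated an interrupt. If it
 * did not, the read raced with a RELEASE event and the INTID is stale,
 * so report a spurious interrupt instead. */
static uint32_t read_iar_workaround(uint32_t intid,
				    uint32_t apr_before, uint32_t apr_after)
{
	if (apr_before == apr_after)
		return GICV3_SPURIOUS_INTID;
	return intid;
}
```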
      Signed-off-by: Linu Cherian <lcherian@marvell.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/20220307143014.22758-1-lcherian@marvell.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64/mm: Drop use_1G_block() · 1310222c
      Anshuman Khandual authored
      pud_sect_supported() already checks for PUD-level block mapping support,
      i.e. on the ARM64_4K_PAGES config. Hence pud_sect_supported(), along
      with some other required alignment checks, can completely replace
      use_1G_block().
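A minimal sketch of the resulting check (the constants and the pud_sect_supported() stub are simplified stand-ins for the kernel's definitions, not the actual mmu.c code):

```c
#include <stdbool.h>
#include <stdint.h>

#define PUD_SIZE (1UL << 30)	/* 1GiB with 4K pages; illustrative */
#define PUD_MASK (~(PUD_SIZE - 1))

/* Stand-in: in the kernel this is true only for ARM64_4K_PAGES. */
static bool pud_sect_supported(void) { return true; }

/* Sketch of the open-coded condition that replaces use_1G_block():
 * PUD-level block mappings must be supported, and the virtual range
 * and physical address must all be PUD-aligned. */
static bool can_use_pud_block(uint64_t addr, uint64_t next, uint64_t phys)
{
	return pud_sect_supported() &&
	       (((addr | next | phys) & ~PUD_MASK) == 0);
}
```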
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/1644988012-25455-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: avoid flushing icache multiple times on contiguous HugeTLB · cf5a501d
      Muchun Song authored
      When a contiguous HugeTLB page is mapped, set_pte_at() will be called
      CONT_PTES/CONT_PMDS times.  Therefore, __sync_icache_dcache() will
      flush the cache multiple times if the page is executable (to ensure
      I-D cache coherency).  However, the first cache flush already covers
      the subsequent flush operations.  So flush the cache only for the head
      page of a HugeTLB page, to avoid redundant cache flushing.  The next
      patch also depends on this change, since the tail vmemmap pages of
      HugeTLB are mapped read-only, meaning only the head page struct can
      be modified.
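The gist can be illustrated with a standalone sketch (this is not the kernel's __sync_icache_dcache(); the page model is deliberately simplified to a head pointer):

```c
#include <stddef.h>

/* Simplified page model: head points to itself for a head page. */
struct page { struct page *head; };

static int flush_count;		/* counts cache maintenance calls */

/* Sketch: when set_pte_at() fires once per contiguous entry, only the
 * head page of the HugeTLB page triggers I/D cache maintenance; tail
 * pages are skipped because the first flush already covered them. */
static void sync_icache_dcache(struct page *page)
{
	if (page->head != page)
		return;		/* tail page: flush already done for head */
	flush_count++;
}
```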
      Signed-off-by: Muchun Song <songmuchun@bytedance.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/20220302084624.33340-1-songmuchun@bytedance.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: prevent instrumentation of bp hardening callbacks · 614c0b9f
      Mark Rutland authored
      We may call arm64_apply_bp_hardening() early during entry (e.g. in
      el0_ia()) before it is safe to run instrumented code. Unfortunately this
      may result in running instrumented code in two cases:
      
      * The hardening callbacks called by arm64_apply_bp_hardening() are not
        marked as `noinstr`, and have been observed to be instrumented when
        compiled with either GCC or LLVM.
      
      * Since arm64_apply_bp_hardening() itself is only marked as `inline`
        rather than `__always_inline`, it is possible that the compiler
        decides to place it out-of-line, whereupon it may be instrumented.
      
      For example, with defconfig built with clang 13.0.0,
      call_hvc_arch_workaround_1() is compiled as:
      
      | <call_hvc_arch_workaround_1>:
      |        d503233f        paciasp
      |        f81f0ffe        str     x30, [sp, #-16]!
      |        320183e0        mov     w0, #0x80008000
      |        d503201f        nop
      |        d4000002        hvc     #0x0
      |        f84107fe        ldr     x30, [sp], #16
      |        d50323bf        autiasp
      |        d65f03c0        ret
      
      ... but when CONFIG_FTRACE=y and CONFIG_KCOV=y this is compiled as:
      
      | <call_hvc_arch_workaround_1>:
      |        d503245f        bti     c
      |        d503201f        nop
      |        d503201f        nop
      |        d503233f        paciasp
      |        a9bf7bfd        stp     x29, x30, [sp, #-16]!
      |        910003fd        mov     x29, sp
      |        94000000        bl      0 <__sanitizer_cov_trace_pc>
      |        320183e0        mov     w0, #0x80008000
      |        d503201f        nop
      |        d4000002        hvc     #0x0
      |        a8c17bfd        ldp     x29, x30, [sp], #16
      |        d50323bf        autiasp
      |        d65f03c0        ret
      
      ... with a patchable function entry registered with ftrace, and a direct
      call to __sanitizer_cov_trace_pc(). Neither of these are safe early
      during entry sequences.
      
      This patch avoids the unsafe instrumentation by marking
      arm64_apply_bp_hardening() as `__always_inline` and by marking the
      hardening functions as `noinstr`. This avoids the potential for
      instrumentation, and causes clang to consistently generate the function
      as with the defconfig sample.
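The two annotations can be sketched in a standalone C file (the kernel's real noinstr macro also handles section placement and more; here it is approximated by the compiler attribute alone):

```c
/* Approximations of the kernel annotations for a standalone build. */
#define noinstr __attribute__((no_instrument_function))
#define __always_inline inline __attribute__((always_inline))

/* Hardening callback: must never be instrumented, so mark it noinstr. */
static noinstr int call_hvc_arch_workaround_1(void)
{
	return 0;	/* stands in for the SMCCC/HVC mitigation call */
}

/* Dispatcher: forced inline so the compiler can never emit it
 * out-of-line, where it could pick up instrumentation. */
static __always_inline int arm64_apply_bp_hardening(void)
{
	return call_hvc_arch_workaround_1();
}
```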
      
      Note: in the defconfig compilation, when CONFIG_SVE=y, x30 is spilled to
      the stack without being placed in a frame record, which will result in a
      missing entry if call_hvc_arch_workaround_1() is backtraced. Similar is
      true of qcom_link_stack_sanitisation(), where inline asm spills the LR
      to a GPR prior to corrupting it. This is not a significant issue
      presently as we will only backtrace here if an exception is taken, and
      in such cases we may omit entries for other reasons today.
      
      The relevant hardening functions were introduced in commits:
      
        ec82b567 ("arm64: Implement branch predictor hardening for Falkor")
        b092201e ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
      
      ... and these were subsequently moved in commit:
      
        d4647f0a ("arm64: Rewrite Spectre-v2 mitigation code")
      
      The arm64_apply_bp_hardening() function was introduced in commit:
      
        0f15adbb ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")
      
      ... and was subsequently moved and reworked in commit:
      
        6279017e ("KVM: arm64: Move BP hardening helpers into spectre.h")
      
      Fixes: ec82b567 ("arm64: Implement branch predictor hardening for Falkor")
      Fixes: b092201e ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
      Fixes: d4647f0a ("arm64: Rewrite Spectre-v2 mitigation code")
      Fixes: 0f15adbb ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")
      Fixes: 6279017e ("KVM: arm64: Move BP hardening helpers into spectre.h")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20220224181028.512873-1-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges · 2369f171
      Huang Shijie authored
      The following interrelated ranges are needed by the kdump crash tool:
      	MODULES_VADDR ~ MODULES_END,
      	VMALLOC_START ~ VMALLOC_END,
      	VMEMMAP_START ~ VMEMMAP_END
      
      Since these values change from time to time, it is preferable to export
      them via vmcoreinfo rather than to change the crash tool's code
      frequently.
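The shape of such an export can be sketched standalone (this mirrors the pattern of the kernel's vmcoreinfo_append_str() usage, but the buffer helper and the address values below are made up for illustration, not real arm64 layout constants):

```c
#include <stdio.h>

static char vmcoreinfo_buf[256];

/* Made-up values standing in for the real kernel VA-layout constants. */
#define MODULES_VADDR 0xffff800008000000UL
#define MODULES_END   0xffff800010000000UL

/* Sketch: emit "NUMBER(name)=value" records, the format the crash
 * tool parses out of /proc/vmcore's vmcoreinfo note. */
static int export_modules_range(void)
{
	return snprintf(vmcoreinfo_buf, sizeof(vmcoreinfo_buf),
			"NUMBER(MODULES_VADDR)=0x%lx\n"
			"NUMBER(MODULES_END)=0x%lx\n",
			MODULES_VADDR, MODULES_END);
}
```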
      Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
      Link: https://lore.kernel.org/r/20220209092642.9181-1-shijie@os.amperecomputing.com
      Signed-off-by: Will Deacon <will@kernel.org>
  5. 25 Feb, 2022 4 commits
  6. 22 Feb, 2022 7 commits
    • arm64/hugetlb: Define __hugetlb_valid_size() · a8a733b2
      Anshuman Khandual authored
      The size check in arch_hugetlb_valid_size() can be factored out into
      another helper so that arch_hugetlb_migration_supported() can use it as
      well. This patch defines __hugetlb_valid_size() for that purpose.
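The factoring pattern, sketched standalone (the size set is simplified; the real helper also covers the CONT_PMD/CONT_PTE sizes and config-dependent cases):

```c
#include <stdbool.h>

#define PMD_SIZE (1UL << 21)	/* illustrative values */
#define PUD_SIZE (1UL << 30)

/* The shared size check, factored into one helper... */
static bool __hugetlb_valid_size(unsigned long size)
{
	return size == PMD_SIZE || size == PUD_SIZE;	/* simplified */
}

/* ...so both entry points reuse it instead of duplicating the logic. */
static bool arch_hugetlb_valid_size(unsigned long size)
{
	return __hugetlb_valid_size(size);
}

static bool arch_hugetlb_migration_supported(unsigned long hpage_size)
{
	return __hugetlb_valid_size(hpage_size);
}
```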
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Link: https://lore.kernel.org/r/1645073557-6150-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: insn: add encoders for atomic operations · fa1114d9
      Hou Tao authored
      This is a preparation patch for eBPF atomics support on arm64. eBPF
      needs to support atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor}
      and atomic[64]_{xchg|cmpxchg}. The ordering semantics of eBPF atomics
      are the same as those of the implementations in the Linux kernel.
      
      Add three helpers to support the LDCLR/LDEOR/LDSET/SWP, CAS and DMB
      instructions. STADD/STCLR/STEOR/STSET are simply encoded as aliases for
      LDADD/LDCLR/LDEOR/LDSET with XZR as the destination register, so no
      extra helper is added. atomic_fetch_add() and the other atomic ops need
      support for the STLXR instruction, so extend enum aarch64_insn_ldst_type
      to cover that.
      
      The LDADD/LDEOR/LDSET/SWP and CAS instructions are only available when
      LSE atomics are enabled, so just return AARCH64_BREAK_FAULT directly
      from these newly-added helpers if CONFIG_ARM64_LSE_ATOMICS is disabled.
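The LSE gate in the new encoders can be sketched like this (the break value, knob name, and encoding argument are illustrative; the real encoders assemble full register and size fields):

```c
#include <stdint.h>

/* Illustrative poison value; the kernel uses a BRK-based encoding. */
#define AARCH64_BREAK_FAULT 0xd4202000U

/* Pretend Kconfig knob for a standalone build. */
#define HAVE_LSE_ATOMICS 0

/* Sketch of the guard in a newly-added encoder: without LSE atomics
 * there is no valid LDADD/CAS/SWP encoding to emit, so return the
 * break instruction so that any misuse faults loudly at runtime. */
static uint32_t insn_gen_lse_atomic(uint32_t encoding)
{
	if (!HAVE_LSE_ATOMICS)
		return AARCH64_BREAK_FAULT;
	return encoding;
}
```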
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Link: https://lore.kernel.org/r/20220217072232.1186625-3-houtao1@huawei.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: move AARCH64_BREAK_FAULT into insn-def.h · 97e58e39
      Hou Tao authored
      If CONFIG_ARM64_LSE_ATOMICS is off, encoders for LSE-related
      instructions can return AARCH64_BREAK_FAULT directly in insn.h. In
      order to access AARCH64_BREAK_FAULT in insn.h, we cannot include
      debug-monitors.h there, because debug-monitors.h already depends on
      insn.h; so just move AARCH64_BREAK_FAULT into insn-def.h.
      
      It will be used by the following patch to eliminate unnecessary LSE-related
      encoders when CONFIG_ARM64_LSE_ATOMICS is off.
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Link: https://lore.kernel.org/r/20220217072232.1186625-2-houtao1@huawei.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • linkage: remove SYM_FUNC_{START,END}_ALIAS() · be9aea74
      Mark Rutland authored
      Now that all aliases are defined using SYM_FUNC_ALIAS(), remove the old
      SYM_FUNC_{START,END}_ALIAS() macros.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Mark Brown <broonie@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20220216162229.1076788-5-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • x86: clean up symbol aliasing · 7be2e319
      Mark Rutland authored
      Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
      to simplify the definition of function aliases across arch/x86.
      
      For clarity, where there are multiple annotations such as
      EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
      example, where a function has a name and an alias which are both
      exported, this is organised as:
      
      	SYM_FUNC_START(func)
      	    ... asm insns ...
      	SYM_FUNC_END(func)
      	EXPORT_SYMBOL(func)
      
      	SYM_FUNC_ALIAS(alias, func)
      	EXPORT_SYMBOL(alias)
      
      Where there are only aliases and no exports or other annotations, I have
      not bothered with line spacing, e.g.
      
      	SYM_FUNC_START(func)
      	    ... asm insns ...
      	SYM_FUNC_END(func)
      	SYM_FUNC_ALIAS(alias, func)
      
      The tools/perf/ copies of memcpy_64.S and memset_64.S are updated
      likewise to avoid the build system complaining that these are
      mismatched:
      
      | Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
      | diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S
      | Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S'
      | diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Mark Brown <broonie@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20220216162229.1076788-4-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: clean up symbol aliasing · 0f61f6be
      Mark Rutland authored
      Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
      to simplify and more consistently define function aliases across
      arch/arm64.
      
      Aliases are now defined in terms of a canonical function name. For
      position-independent functions I've made the __pi_<func> name the
      canonical name, and defined other aliases in terms of this.
      
      The SYM_FUNC_{START,END}_PI(func) macros obscure the __pi_<func> name,
      and make this hard to search for. The SYM_FUNC_START_WEAK_PI() macro
      also obscures the fact that the __pi_<func> symbol is global and the
      <func> symbol is weak. For clarity, I have removed these macros and used
      SYM_FUNC_{START,END}() directly with the __pi_<func> name.
      
      For example:
      
      	SYM_FUNC_START_WEAK_PI(func)
      	... asm insns ...
      	SYM_FUNC_END_PI(func)
      	EXPORT_SYMBOL(func)
      
      ... becomes:
      
      	SYM_FUNC_START(__pi_func)
      	... asm insns ...
      	SYM_FUNC_END(__pi_func)
      
      	SYM_FUNC_ALIAS_WEAK(func, __pi_func)
      	EXPORT_SYMBOL(func)
      
      For clarity, where there are multiple annotations such as
      EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
      example, where a function has a name and an alias which are both
      exported, this is organised as:
      
      	SYM_FUNC_START(func)
      	... asm insns ...
      	SYM_FUNC_END(func)
      	EXPORT_SYMBOL(func)
      
      	SYM_FUNC_ALIAS(alias, func)
      	EXPORT_SYMBOL(alias)
      
      For consistency with the other string functions, I've defined strrchr as
      a position-independent function, as it can safely be used as such even
      though we have no users today.
      
      As we no longer use SYM_FUNC_{START,END}_ALIAS(), our local copies are
      removed. The common versions will be removed by a subsequent patch.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Mark Brown <broonie@kernel.org>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20220216162229.1076788-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}() · e0891269
      Mark Rutland authored
      Currently aliasing an asm function requires adding START and END
      annotations for each name, as per Documentation/asm-annotations.rst:
      
      	SYM_FUNC_START_ALIAS(__memset)
      	SYM_FUNC_START(memset)
      	    ... asm insns ...
      	SYM_FUNC_END(memset)
      	SYM_FUNC_END_ALIAS(__memset)
      
      This is more painful than necessary to maintain, especially where a
      function has many aliases, some of which we may wish to define
      conditionally. For example, arm64's memcpy/memmove implementation (which
      uses some arch-specific SYM_*() helpers) has:
      
      	SYM_FUNC_START_ALIAS(__memmove)
      	SYM_FUNC_START_ALIAS_WEAK_PI(memmove)
      	SYM_FUNC_START_ALIAS(__memcpy)
      	SYM_FUNC_START_WEAK_PI(memcpy)
      	    ... asm insns ...
      	SYM_FUNC_END_PI(memcpy)
      	EXPORT_SYMBOL(memcpy)
      	SYM_FUNC_END_ALIAS(__memcpy)
      	EXPORT_SYMBOL(__memcpy)
      	SYM_FUNC_END_ALIAS_PI(memmove)
      	EXPORT_SYMBOL(memmove)
      	SYM_FUNC_END_ALIAS(__memmove)
      	EXPORT_SYMBOL(__memmove)
      
      It would be much nicer if we could define the aliases *after* the
      standard function definition. This would avoid the need to specify each
      symbol name twice, and would make it easier to spot the canonical
      function definition.
      
      This patch adds new macros to allow us to do so, which allows the above
      example to be rewritten more succinctly as:
      
      	SYM_FUNC_START(__pi_memcpy)
      	    ... asm insns ...
      	SYM_FUNC_END(__pi_memcpy)
      
      	SYM_FUNC_ALIAS(__memcpy, __pi_memcpy)
      	EXPORT_SYMBOL(__memcpy)
      	SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
      	EXPORT_SYMBOL(memcpy)
      
      	SYM_FUNC_ALIAS(__pi_memmove, __pi_memcpy)
      	SYM_FUNC_ALIAS(__memmove, __pi_memmove)
      	EXPORT_SYMBOL(__memmove)
      	SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
      	EXPORT_SYMBOL(memmove)
      
      The reduction in duplication will also make it possible to replace some
      uses of WEAK with more accurate Kconfig guards, e.g.
      
      	#ifndef CONFIG_KASAN
      	SYM_FUNC_ALIAS(memmove, __memmove)
      	EXPORT_SYMBOL(memmove)
      	#endif
      
      ... which should make it easier to ensure that symbols are neither used
      nor overridden unexpectedly.
      
      The existing SYM_FUNC_START_ALIAS() and SYM_FUNC_START_LOCAL_ALIAS() are
      marked as deprecated, and will be removed once existing users are moved
      over to the new scheme.
      
      The tools/perf/ copy of linkage.h is updated to match. A subsequent
      patch will depend upon this when updating the x86 asm annotations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Mark Brown <broonie@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20220216162229.1076788-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  7. 15 Feb, 2022 9 commits