1. 20 Feb, 2023 1 commit
    • arm64: fix .idmap.text assertion for large kernels · d5417081
      Mark Rutland authored
      When building a kernel with many debug options enabled (which happens in
      test configurations used by myself and syzbot), the kernel can become
      large enough that portions of .text can be more than 128M away from
      .idmap.text (which is placed inside the .rodata section). Where idmap
      code branches into .text, the linker will place veneers in the
      .idmap.text section to make those branches possible.
      
      Unfortunately, as Ard reports, GNU LD has been observed to add 4K of
      padding when adding such veneers, e.g.
      
      | .idmap.text    0xffffffc01e48e5c0      0x32c arch/arm64/mm/proc.o
      |                0xffffffc01e48e5c0                idmap_cpu_replace_ttbr1
      |                0xffffffc01e48e600                idmap_kpti_install_ng_mappings
      |                0xffffffc01e48e800                __cpu_setup
      | *fill*         0xffffffc01e48e8ec        0x4
      | .idmap.text.stub
      |                0xffffffc01e48e8f0       0x18 linker stubs
      |                0xffffffc01e48f8f0                __idmap_text_end = .
      |                0xffffffc01e490000                . = ALIGN (0x1000)
      | *fill*         0xffffffc01e48f8f0      0x710
      |                0xffffffc01e490000                idmap_pg_dir = .
      
      This makes the __idmap_text_start .. __idmap_text_end region bigger than
      the 4K we require it to fit within, and triggers an assertion in arm64's
      vmlinux.lds.S, which breaks the build:
      
      | LD      .tmp_vmlinux.kallsyms1
      | aarch64-linux-gnu-ld: ID map text too big or misaligned
      | make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
      | make: *** [Makefile:1264: vmlinux] Error 2
      
      Avoid this by using an `ADRP+ADD+BLR` sequence for branches out of
      .idmap.text, which avoids the need for veneers. These branches are only
      executed once per boot, and only when the MMU is on, so there should be
      no noticeable performance penalty in replacing `BL` with `ADRP+ADD+BLR`.
      
      At the same time, remove the "x" and "w" attributes when placing code in
      .idmap.text, as these are not necessary, and this will prevent the
      linker from assuming that it is safe to place PLTs into .idmap.text,
      causing it to warn if and when there are out-of-range branches within
      .idmap.text, e.g.
      
      |   LD      .tmp_vmlinux.kallsyms1
      | arch/arm64/kernel/head.o: in function `primary_entry':
      | (.idmap.text+0x1c): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | arch/arm64/kernel/head.o: in function `init_el2':
      | (.idmap.text+0x88): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | make[1]: *** [scripts/Makefile.vmlinux:34: vmlinux] Error 1
      | make: *** [Makefile:1252: vmlinux] Error 2
      
      Thus, if future changes add out-of-range branches in .idmap.text, it
      should be easy enough to identify those from the resulting linker
      errors.
      
      Reported-by: syzbot+f8ac312e31226e23302b@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-arm-kernel/00000000000028ea4105f4e2ef54@google.com/
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20230220162317.1581208-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 10 Feb, 2023 3 commits
    • Merge branch 'for-next/signal' into for-next/core · ad4a4d3a
      Catalin Marinas authored
      * for-next/signal:
        : Signal handling cleanups
        arm64/signal: Only read new data when parsing the ZT context
        arm64/signal: Only read new data when parsing the ZA context
        arm64/signal: Only read new data when parsing the SVE context
        arm64/signal: Avoid rereading context frame sizes
        arm64/signal: Make interface for restore_fpsimd_context() consistent
        arm64/signal: Remove redundant size validation from parse_user_sigframe()
        arm64/signal: Don't redundantly verify FPSIMD magic
    • Merge branch 'for-next/sysreg-hwcaps' into for-next/core · 96004636
      Catalin Marinas authored
      * for-next/sysreg-hwcaps:
        : Make use of sysreg helpers for hwcaps
        arm64/cpufeature: Use helper macros to specify hwcaps
        arm64/cpufeature: Always use symbolic name for feature value in hwcaps
        arm64/sysreg: Initial unsigned annotations for ID registers
        arm64/sysreg: Initial annotation of signed ID registers
        arm64/sysreg: Allow enumerations to be declared as signed or unsigned
    • Merge branches 'for-next/sysreg', 'for-next/sme', 'for-next/kselftest',... · 156010ed
      Catalin Marinas authored
      Merge branches 'for-next/sysreg', 'for-next/sme', 'for-next/kselftest', 'for-next/misc', 'for-next/sme2', 'for-next/tpidr2', 'for-next/scs', 'for-next/compat-hwcap', 'for-next/ftrace', 'for-next/efi-boot-mmu-on', 'for-next/ptrauth' and 'for-next/pseudo-nmi', remote-tracking branch 'arm64/for-next/perf' into for-next/core
      
      * arm64/for-next/perf:
        perf: arm_spe: Print the version of SPE detected
        perf: arm_spe: Add support for SPEv1.2 inverted event filtering
        perf: Add perf_event_attr::config3
        drivers/perf: fsl_imx8_ddr_perf: Remove set-but-not-used variable
        perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event
        perf: arm_spe: Use new PMSIDR_EL1 register enums
        perf: arm_spe: Drop BIT() and use FIELD_GET/PREP accessors
        arm64/sysreg: Convert SPE registers to automatic generation
        arm64: Drop SYS_ from SPE register defines
        perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines
        perf/marvell: Add ACPI support to TAD uncore driver
        perf/marvell: Add ACPI support to DDR uncore driver
        perf/arm-cmn: Reset DTM_PMU_CONFIG at probe
        drivers/perf: hisi: Extract initialization of "cpa_pmu->pmu"
        drivers/perf: hisi: Simplify the parameters of hisi_pmu_init()
        drivers/perf: hisi: Advertise the PERF_PMU_CAP_NO_EXCLUDE capability
      
      * for-next/sysreg:
        : arm64 sysreg and cpufeature fixes/updates
        KVM: arm64: Use symbolic definition for ISR_EL1.A
        arm64/sysreg: Add definition of ISR_EL1
        arm64/sysreg: Add definition for ICC_NMIAR1_EL1
        arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()
        arm64/sysreg: Fix errors in 32 bit enumeration values
        arm64/cpufeature: Fix field sign for DIT hwcap detection
      
      * for-next/sme:
        : SME-related updates
        arm64/sme: Optimise SME exit on syscall entry
        arm64/sme: Don't use streaming mode to probe the maximum SME VL
        arm64/ptrace: Use system_supports_tpidr2() to check for TPIDR2 support
      
      * for-next/kselftest: (23 commits)
        : arm64 kselftest fixes and improvements
        kselftest/arm64: Don't require FA64 for streaming SVE+ZA tests
        kselftest/arm64: Copy whole EXTRA context
        kselftest/arm64: Fix enumeration of systems without 128 bit SME for SSVE+ZA
        kselftest/arm64: Fix enumeration of systems without 128 bit SME
        kselftest/arm64: Don't require FA64 for streaming SVE tests
        kselftest/arm64: Limit the maximum VL we try to set via ptrace
        kselftest/arm64: Correct buffer size for SME ZA storage
        kselftest/arm64: Remove the local NUM_VL definition
        kselftest/arm64: Verify simultaneous SSVE and ZA context generation
        kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set
        kselftest/arm64: Remove spurious comment from MTE test Makefile
        kselftest/arm64: Support build of MTE tests with clang
        kselftest/arm64: Initialise current at build time in signal tests
        kselftest/arm64: Don't pass headers to the compiler as source
        kselftest/arm64: Remove redundant _start labels from FP tests
        kselftest/arm64: Fix .pushsection for strings in FP tests
        kselftest/arm64: Run BTI selftests on systems without BTI
        kselftest/arm64: Fix test numbering when skipping tests
        kselftest/arm64: Skip non-power of 2 SVE vector lengths in fp-stress
        kselftest/arm64: Only enumerate power of two VLs in syscall-abi
        ...
      
      * for-next/misc:
        : Miscellaneous arm64 updates
        arm64/mm: Intercept pfn changes in set_pte_at()
        Documentation: arm64: correct spelling
        arm64: traps: attempt to dump all instructions
        arm64: Apply dynamic shadow call stack patching in two passes
        arm64: el2_setup.h: fix spelling typo in comments
        arm64: Kconfig: fix spelling
        arm64: cpufeature: Use kstrtobool() instead of strtobool()
        arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path
        arm64: make ARCH_FORCE_MAX_ORDER selectable
      
      * for-next/sme2: (23 commits)
        : Support for arm64 SME 2 and 2.1
        arm64/sme: Fix __finalise_el2 SMEver check
        kselftest/arm64: Remove redundant _start labels from zt-test
        kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps
        kselftest/arm64: Add coverage of the ZT ptrace regset
        kselftest/arm64: Add SME2 coverage to syscall-abi
        kselftest/arm64: Add test coverage for ZT register signal frames
        kselftest/arm64: Teach the generic signal context validation about ZT
        kselftest/arm64: Enumerate SME2 in the signal test utility code
        kselftest/arm64: Cover ZT in the FP stress test
        kselftest/arm64: Add a stress test program for ZT0
        arm64/sme: Add hwcaps for SME 2 and 2.1 features
        arm64/sme: Implement ZT0 ptrace support
        arm64/sme: Implement signal handling for ZT
        arm64/sme: Implement context switching for ZT0
        arm64/sme: Provide storage for ZT0
        arm64/sme: Add basic enumeration for SME2
        arm64/sme: Enable host kernel to access ZT0
        arm64/sme: Manually encode ZT0 load and store instructions
        arm64/esr: Document ISS for ZT0 being disabled
        arm64/sme: Document SME 2 and SME 2.1 ABI
        ...
      
      * for-next/tpidr2:
        : Include TPIDR2 in the signal context
        kselftest/arm64: Add test case for TPIDR2 signal frame records
        kselftest/arm64: Add TPIDR2 to the set of known signal context records
        arm64/signal: Include TPIDR2 in the signal context
        arm64/sme: Document ABI for TPIDR2 signal information
      
      * for-next/scs:
        : arm64: harden shadow call stack pointer handling
        arm64: Stash shadow stack pointer in the task struct on interrupt
        arm64: Always load shadow stack pointer directly from the task struct
      
      * for-next/compat-hwcap:
        : arm64: Expose compat ARMv8 AArch32 features (HWCAPs)
        arm64: Add compat hwcap SSBS
        arm64: Add compat hwcap SB
        arm64: Add compat hwcap I8MM
        arm64: Add compat hwcap ASIMDBF16
        arm64: Add compat hwcap ASIMDFHM
        arm64: Add compat hwcap ASIMDDP
        arm64: Add compat hwcap FPHP and ASIMDHP
      
      * for-next/ftrace:
        : Add arm64 support for DYNAMIC_FTRACE_WITH_CALL_OPS
        arm64: avoid executing padding bytes during kexec / hibernation
        arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
        arm64: ftrace: Update stale comment
        arm64: patching: Add aarch64_insn_write_literal_u64()
        arm64: insn: Add helpers for BTI
        arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
        ACPI: Don't build ACPICA with '-Os'
        Compiler attributes: GCC cold function alignment workarounds
        ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS
      
      * for-next/efi-boot-mmu-on:
        : Permit arm64 EFI boot with MMU and caches on
        arm64: kprobes: Drop ID map text from kprobes blacklist
        arm64: head: Switch endianness before populating the ID map
        efi: arm64: enter with MMU and caches enabled
        arm64: head: Clean the ID map and the HYP text to the PoC if needed
        arm64: head: avoid cache invalidation when entering with the MMU on
        arm64: head: record the MMU state at primary entry
        arm64: kernel: move identity map out of .text mapping
        arm64: head: Move all finalise_el2 calls to after __enable_mmu
      
      * for-next/ptrauth:
        : arm64 pointer authentication cleanup
        arm64: pauth: don't sign leaf functions
        arm64: unify asm-arch manipulation
      
      * for-next/pseudo-nmi:
        : Pseudo-NMI code generation optimisations
        arm64: irqflags: use alternative branches for pseudo-NMI logic
        arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
        arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
        arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
        arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
  3. 07 Feb, 2023 6 commits
  4. 06 Feb, 2023 1 commit
  5. 03 Feb, 2023 1 commit
  6. 01 Feb, 2023 17 commits
  7. 31 Jan, 2023 11 commits
    • arm64: irqflags: use alternative branches for pseudo-NMI logic · a5f61cc6
      Mark Rutland authored
      Due to the way we use alternatives in the irqflags code, even when
      CONFIG_ARM64_PSEUDO_NMI=n, we generate unused alternative code for
      pseudo-NMI management. This patch reworks the irqflags code to remove
      the redundant code when CONFIG_ARM64_PSEUDO_NMI=n, which benefits the
      more common case, and will permit further rework of our DAIF management
      (e.g. in preparation for ARMv8.8-A's NMI feature).
      
      Prior to this patch a defconfig kernel has hundreds of redundant
      instructions to access ICC_PMR_EL1 (which should only need to be
      manipulated in setup code), which this patch removes:
      
      | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before-defconfig | grep icc_pmr_el1 | wc -l
      | 885
      | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after-defconfig | grep icc_pmr_el1 | wc -l
      | 5
      
      Those instructions alone account for more than 3KiB of kernel text, and
      will be associated with additional alt_instr entries, padding and
      branches, etc.
      
      These redundant instructions exist because we use alternative sequences
      to choose between DAIF / PMR management in irqflags.h, and even when
      CONFIG_ARM64_PSEUDO_NMI=n, those alternative sequences will generate the
      code for PMR management, along with alt_instr entries. We use
      alternatives here as this was necessary to ensure that we never
      encounter a mismatched local_irq_save() ... local_irq_restore() sequence
      in the middle of patching, which was possible to see if we used static
      keys to choose between DAIF and PMR management.
      
      Since commit:
      
        21fb26bf ("arm64: alternatives: add alternative_has_feature_*()")
      
      ... we have a mechanism to use alternatives similarly to static keys,
      allowing us to write the bulk of the logic in C code while also being
      able to rely on all sites being patched in one go, and avoiding a
      mismatched local_irq_save() ... local_irq_restore() sequence
      during patching.
      
      This patch rewrites arm64's local_irq_*() functions to use alternative
      branches. This allows for the pseudo-NMI code to be entirely elided when
      CONFIG_ARM64_PSEUDO_NMI=n, making a defconfig Image 64KiB smaller, and
      not affecting the size of an Image with CONFIG_ARM64_PSEUDO_NMI=y:
      
      | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
      | -rwxr-xr-x 1 mark mark 137473432 Jan 18 11:11 vmlinux-after-defconfig
      | -rwxr-xr-x 1 mark mark 137918776 Jan 18 11:15 vmlinux-after-pnmi
      | -rwxr-xr-x 1 mark mark 137380152 Jan 18 11:03 vmlinux-before-defconfig
      | -rwxr-xr-x 1 mark mark 137523704 Jan 18 11:08 vmlinux-before-pnmi
      | [mark@lakrids:~/src/linux]% ls -al Image-*
      | -rw-r--r-- 1 mark mark 38646272 Jan 18 11:11 Image-after-defconfig
      | -rw-r--r-- 1 mark mark 38777344 Jan 18 11:14 Image-after-pnmi
      | -rw-r--r-- 1 mark mark 38711808 Jan 18 11:03 Image-before-defconfig
      | -rw-r--r-- 1 mark mark 38777344 Jan 18 11:08 Image-before-pnmi
      
      Some sensitive code depends on being run with interrupts enabled or with
      interrupts disabled, and so when enabling or disabling interrupts we
      must ensure that the compiler does not move such code around the actual
      enable/disable. Before this patch, that was ensured by the combined asm
      volatile blocks having memory clobbers (and any sensitive code either
      being asm volatile, or touching memory). This patch consistently uses
      explicit barrier() operations before and after the enable/disable, which
      allows us to use the usual sysreg accessors (which are asm volatile) to
      manipulate the interrupt masks. The use of pmr_sync() is pulled within
      this critical section for consistency.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap · 8bf0a804
      Mark Rutland authored
      When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR
      value to determine whether to signal an IRQ to a PE, and consequently
      after a change to the PMR value, a DSB SY may be required to ensure that
      interrupts are signalled to a CPU in finite time. When PMHE == 0b0,
      interrupts are always signalled to the relevant PE, and all masking
      occurs locally, without requiring a DSB SY.
      
      Since commit:
      
        f2266504 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")
      
      ... we handle this dynamically: in most cases a static key is used to
      determine whether to issue a DSB SY, but the entry code must read from
      ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.
      
      It would be much nicer to use an alternative instruction sequence for
      the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the
      entry code, and for most other code this will result in simpler code
      generation with fewer instructions and fewer branches.
      
      This patch adds a new ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap which is
      only set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in
      use). This allows us to replace the existing users of the
      `gic_pmr_sync` static key with alternative sequences which default to a
      DSB SY and are relaxed to a NOP when PMHE is not in use.
      
      The entry assembly management of the PMR is slightly restructured to use
      a branch (rather than multiple NOPs) when priority masking is not in
      use. This is more in keeping with other alternatives in the entry
      assembly, and permits the use of a separate alternative for the
      PMHE-dependent DSB SY (and removal of the conditional branch this
      currently requires). For consistency I've adjusted both the save and
      restore paths.
      
      According to bloat-o-meter, when building defconfig +
      CONFIG_ARM64_PSEUDO_NMI=y this shrinks the kernel text by ~4KiB:
      
      | add/remove: 4/2 grow/shrink: 42/310 up/down: 332/-5032 (-4700)
      
      The resulting vmlinux is ~66KiB smaller, though the resulting Image size
      is unchanged due to padding and alignment:
      
      | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
      | -rwxr-xr-x 1 mark mark 137508344 Jan 17 14:11 vmlinux-after
      | -rwxr-xr-x 1 mark mark 137575440 Jan 17 13:49 vmlinux-before
      | [mark@lakrids:~/src/linux]% ls -al Image-*
      | -rw-r--r-- 1 mark mark 38777344 Jan 17 14:11 Image-after
      | -rw-r--r-- 1 mark mark 38777344 Jan 17 13:49 Image-before
      
      Prior to this patch we did not verify the state of ICC_CTLR_EL1.PMHE on
      secondary CPUs. As of this patch this is verified by the cpufeature code
      when using GIC priority masking (i.e. when using pseudo-NMIs).
      
      Note that since commit:
      
        7e3a57fa ("arm64: Document ICC_CTLR_EL3.PMHE setting requirements")
      
      ... Documentation/arm64/booting.rst specifies:
      
      |      - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
      |        all CPUs the kernel is executing on, and must stay constant
      |        for the lifetime of the kernel.
      
      ... so that should not adversely affect any compliant systems, and as
      we'll only check for the absence of PMHE when using pseudo-NMIs, this
      will only fire when such a mismatch would adversely affect the system.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS · 4b43f1cd
      Mark Rutland authored
      Currently the arm64_cpu_capabilities structure for
      ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as
      the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so
      that can_use_gic_priorities() can use has_useable_gicv3_cpuif().
      
      This duplication isn't ideal for the legibility of the code, and sets a
      bad example for any ARM64_HAS_GIC_* definitions added by subsequent
      patches.
      
      Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the
      ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why
      this is safe. Subsequent patches will use the same pattern where one
      cpucap depends upon another.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-4-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING · c888b7bd
      Mark Rutland authored
      Subsequent patches will add more GIC-related cpucaps. When we do so, it
      would be nice to give them a consistent HAS_GIC_* prefix.
      
      In preparation for doing so, this patch renames the existing
      ARM64_HAS_IRQ_PRIO_MASKING cap to ARM64_HAS_GIC_PRIO_MASKING.
      
      The cpucaps file was hand-modified; all other changes were scripted
      with:
      
        find . -type f -name '*.[chS]' -print0 | \
          xargs -0 sed -i 's/ARM64_HAS_IRQ_PRIO_MASKING/ARM64_HAS_GIC_PRIO_MASKING/'
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS · 0e62ccb9
      Mark Rutland authored
      Subsequent patches will add more GIC-related cpucaps. When we do so, it
      would be nice to give them a consistent HAS_GIC_* prefix.
      
      In preparation for doing so, this patch renames the existing
      ARM64_HAS_SYSREG_GIC_CPUIF cap to ARM64_HAS_GIC_CPUIF_SYSREGS.
      
      The 'CPUIF_SYSREGS' suffix is chosen so that this will be ordered ahead
      of other ARM64_HAS_GIC_* definitions in subsequent patches.
      
      The cpucaps file was hand-modified; all other changes were scripted
      with:
      
        find . -type f -name '*.[chS]' -print0 | \
          xargs -0 sed -i 's/ARM64_HAS_SYSREG_GIC_CPUIF/ARM64_HAS_GIC_CPUIF_SYSREGS/'
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: pauth: don't sign leaf functions · c68cf528
      Mark Rutland authored
      Currently, when CONFIG_ARM64_PTR_AUTH_KERNEL=y (and
      CONFIG_UNWIND_PATCH_PAC_INTO_SCS=n), we enable pointer authentication
      for all functions, including leaf functions. This isn't necessary, and
      is unfortunate for a few reasons:
      
      * Any PACIASP instruction is implicitly a `BTI C` landing pad, and
        forcing the addition of a PACIASP in every function introduces a
        larger set of BTI gadgets than is necessary.
      
      * The PACIASP and AUTIASP instructions make leaf functions larger than
        necessary, bloating the kernel Image. For a defconfig v6.2-rc3 kernel,
        this appears to add ~64KiB relative to not signing leaf functions,
        which is unfortunate but not entirely onerous.
      
      * The PACIASP and AUTIASP instructions potentially make leaf functions
        more expensive in terms of performance and/or power. For many trivial
        leaf functions, this is clearly unnecessary, e.g.
      
        | <arch_local_save_flags>:
        |        d503233f        paciasp
        |        d53b4220        mrs     x0, daif
        |        d50323bf        autiasp
        |        d65f03c0        ret
      
        | <calibration_delay_done>:
        |        d503233f        paciasp
        |        d50323bf        autiasp
        |        d65f03c0        ret
        |        d503201f        nop
      
      * When CONFIG_UNWIND_PATCH_PAC_INTO_SCS=y we disable pointer
        authentication for leaf functions, so clearly this is not functionally
        necessary, indicates we have an inconsistent threat model, and
        convolutes the Makefile logic.
      
      We've used pointer authentication in leaf functions since the
      introduction of in-kernel pointer authentication in commit:
      
        74afda40 ("arm64: compile the kernel with ptrauth return address signing")
      
      ... but at the time we had no rationale for signing leaf functions.
      
      Subsequently, we considered avoiding signing leaf functions:
      
        https://lore.kernel.org/linux-arm-kernel/1586856741-26839-1-git-send-email-amit.kachhap@arm.com/
        https://lore.kernel.org/linux-arm-kernel/1588149371-20310-1-git-send-email-amit.kachhap@arm.com/
      
      ... however at the time we didn't have an abundance of reasons to avoid
      signing leaf functions as above (e.g. the BTI case), we had no hardware
      to make performance measurements, and it was reasoned that this gave
      some level of protection against a limited set of code-reuse gadgets
      which would fall through to a RET. We documented this in commit:
      
        717b938e ("arm64: Document why we enable PAC support for leaf functions")
      
      Notably, this was before we supported any forward-edge CFI scheme (e.g.
      Arm BTI, or Clang CFI/kCFI), which would prevent jumping into the middle
      of a function.
      
      In addition, even with signing forced for leaf functions, AUTIASP may be
      placed before a number of instructions which might constitute such a
      gadget, e.g.
      
      | <user_regs_reset_single_step>:
      |        f9400022        ldr     x2, [x1]
      |        d503233f        paciasp
      |        d50323bf        autiasp
      |        f9408401        ldr     x1, [x0, #264]
      |        720b005f        tst     w2, #0x200000
      |        b26b0022        orr     x2, x1, #0x200000
      |        926af821        and     x1, x1, #0xffffffffffdfffff
      |        9a820021        csel    x1, x1, x2, eq  // eq = none
      |        f9008401        str     x1, [x0, #264]
      |        d65f03c0        ret
      
      | <fpsimd_cpu_dead>:
      |        2a0003e3        mov     w3, w0
      |        9000ff42        adrp    x2, ffff800009ffd000 <xen_dynamic_chip+0x48>
      |        9120e042        add     x2, x2, #0x838
      |        52800000        mov     w0, #0x0                        // #0
      |        d503233f        paciasp
      |        f000d041        adrp    x1, ffff800009a20000 <this_cpu_vector>
      |        d50323bf        autiasp
      |        9102c021        add     x1, x1, #0xb0
      |        f8635842        ldr     x2, [x2, w3, uxtw #3]
      |        f821685f        str     xzr, [x2, x1]
      |        d65f03c0        ret
      |        d503201f        nop
      
      So generally, trying to use AUTIASP to detect such gadgetization is not
      robust, and this is dealt with far better by forward-edge CFI (which is
      designed to prevent such cases). We should bite the bullet and stop
      pretending that AUTIASP is a mitigation for such forward-edge
      gadgetization.
      
      For the above reasons, this patch has the kernel consistently sign
      non-leaf functions and avoid signing leaf functions.
      
      Considering a defconfig v6.2-rc3 kernel built with LLVM 15.0.6:
      
      * The vmlinux is ~43KiB smaller:
      
        | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
        | -rwxr-xr-x 1 mark mark 338547808 Jan 25 17:17 vmlinux-after
        | -rwxr-xr-x 1 mark mark 338591472 Jan 25 17:22 vmlinux-before
      
      * The resulting Image is 64KiB smaller:
      
        | [mark@lakrids:~/src/linux]% ls -al Image-*
        | -rwxr-xr-x 1 mark mark 32702976 Jan 25 17:17 Image-after
        | -rwxr-xr-x 1 mark mark 32768512 Jan 25 17:22 Image-before
      
      * There are ~400 fewer BTI gadgets:
      
        | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
        |    1219 bti     c
        |   61982 paciasp
      
        | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
        |   10099 bti     c
        |   52699 paciasp
      
        Which is +8880 BTIs, and -9283 PACIASPs, for -403 unnecessary BTI
        gadgets. While this is small relative to the total, distinguishing the
        two cases will make it easier to analyse and reduce this set further
        in future.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230131105809.991288-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c68cf528
    • Mark Rutland's avatar
      arm64: unify asm-arch manipulation · 1e249c41
      Mark Rutland authored
      Assemblers will reject instructions not supported by a target
      architecture version, and so we must explicitly tell the assembler the
      latest architecture version whose instructions we want to assemble.
      
      We've added a few AS_HAS_ARMV8_<N> definitions for this, in addition to
      an inconsistently named AS_HAS_PAC definition, from which arm64's
      top-level Makefile determines the architecture version that we intend to
      target, and generates the `asm-arch` variable.
      
      To make this a bit clearer and easier to maintain, this patch reworks
      the Makefile to determine asm-arch in a single if-else-endif chain.
      AS_HAS_PAC, which is defined when the assembler supports
      `-march=armv8.3-a`, is renamed to AS_HAS_ARMV8_3.
      
      As the logic for armv8.3-a is lifted out of the block handling pointer
      authentication, `asm-arch` may now be set to armv8.3-a regardless of
      whether support for pointer authentication is selected. This means that
      it will be possible to assemble armv8.3-a instructions even if we didn't
      intend to, but this is consistent with our handling of other
      architecture versions, and the compiler won't generate armv8.3-a
      instructions regardless.
      
      For the moment there's no need for a CONFIG_AS_HAS_ARMV8_1, as the code
      for LSE atomics and LDAPR uses individual `.arch_extension` entries and
      does not require the baseline asm arch to be bumped to armv8.1-a. The
      other armv8.1-a features (e.g. PAN) do not require assembler support.
      
      There should be no functional change as a result of this patch.
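      As a rough sketch (the exact set of versions handled may differ from the patch), the single if-else-endif chain has this shape:

```make
# Config fragment sketch: pick the newest architecture version the
# assembler supports, falling through to older ones.
ifeq ($(CONFIG_AS_HAS_ARMV8_5),y)
asm-arch := armv8.5-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_4),y)
asm-arch := armv8.4-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_3),y)
asm-arch := armv8.3-a
endif

ifdef asm-arch
KBUILD_CFLAGS += -Wa,-march=$(asm-arch)
endif
```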
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230131105809.991288-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1e249c41
    • Mark Brown's avatar
      kselftest/arm64: Remove redundant _start labels from zt-test · b2ab432b
      Mark Brown authored
      The newly added zt-test program copied the pattern from the other FP
      stress test programs of having a redundant _start label, which is
      rejected by clang. As we did in a parallel series for the other tests,
      remove the label so we can build with clang.
      
      No functional change.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20230130-arm64-fix-sme2-clang-v1-1-3ce81d99ea8f@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b2ab432b
    • Anshuman Khandual's avatar
      arm64/mm: Intercept pfn changes in set_pte_at() · 004fc58f
      Anshuman Khandual authored
      Changing the pfn of a user page table mapped entry without first going
      through the break-before-make (BBM) procedure is unsafe. This updates
      set_pte_at() to intercept such changes, via an updated
      pgattr_change_is_safe(). This new check happens in
      __check_racy_pte_update(), which has now been renamed to
      __check_safe_pte_update().
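      A minimal C sketch of the kind of check being described (the names, bit masks, and layout here are illustrative assumptions, not the kernel's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative values only; not the kernel's actual definitions. */
#define PTE_VALID     (1ULL << 0)
#define PTE_ADDR_MASK 0x0000fffffffff000ULL /* output address (pfn) bits */

/*
 * True if replacing old_pte with new_pte is safe without
 * break-before-make: either entry is invalid (i.e. the BBM path was
 * taken), or the output address (pfn) is unchanged.
 */
static bool pte_change_is_safe(uint64_t old_pte, uint64_t new_pte)
{
	if (!(old_pte & PTE_VALID) || !(new_pte & PTE_VALID))
		return true;

	/* Changing the pfn of a live entry without BBM is unsafe. */
	return (old_pte & PTE_ADDR_MASK) == (new_pte & PTE_ADDR_MASK);
}
```

      Under this sketch, attribute-only changes to a live entry pass, while a pfn change on a live entry is flagged.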
      
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Link: https://lore.kernel.org/r/20230130121457.1607675-1-anshuman.khandual@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      004fc58f
    • Randy Dunlap's avatar
      Documentation: arm64: correct spelling · a70f00e7
      Randy Dunlap authored
      Correct spelling problems for Documentation/arm64/ as reported
      by codespell.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: linux-doc@vger.kernel.org
      Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>
      Link: https://lore.kernel.org/r/20230127064005.1558-3-rdunlap@infradead.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a70f00e7
    • Mark Brown's avatar
      kselftest/arm64: Limit the maximum VL we try to set via ptrace · 89ff30b9
      Mark Brown authored
      When SVE was initially merged we chose to export the maximum VQ in the ABI
      as being 512, rather more than the architecturally supported maximum of 16.
      For the ptrace tests this results in us generating a lot of test cases,
      and hence log output, which are redundant since no system could possibly
      support them. Instead, only check values up to the current architectural
      limit, plus one more, so that we're covering the constraining of higher
      vector lengths.
      
      This makes no practical difference to our test coverage, speeds things up
      on slower consoles and makes the output much more manageable.
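      Illustratively (the constant names and helpers here are assumptions for the sketch, not the selftest's actual code), the change bounds the loop at the architectural maximum VQ plus one, rather than the ABI maximum:

```c
/* Illustrative constants; VQ counts 128-bit quadwords. */
#define SVE_VQ_MIN   1
#define SVE_VQ_MAX   512 /* maximum VQ exported in the ABI */
#define ARCH_VQ_MAX  16  /* architecturally supported maximum VQ */

/* Vector length in bytes for a given VQ: one quadword is 16 bytes. */
static int vq_to_vl(int vq)
{
	return vq * 16;
}

/* Number of vector lengths a run now tries: every VQ up to the
 * architectural maximum, plus one more to check that higher vector
 * lengths are constrained. */
static int num_vls_tried(void)
{
	int count = 0;

	for (int vq = SVE_VQ_MIN; vq <= ARCH_VQ_MAX + 1; vq++)
		count++;
	return count;
}
```

      This shrinks the sweep from 512 candidate VQs to 17, which is where the speed-up and reduced log output come from.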
      Signed-off-by: default avatarMark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20230111-arm64-kselftest-ptrace-max-vl-v1-1-8167f41d1ad8@kernel.orgSigned-off-by: default avatarCatalin Marinas <catalin.marinas@arm.com>
      89ff30b9