- 24 Feb, 2023 1 commit
-
-
Catalin Marinas authored
Revert the HUGETLB_PAGE_FREE_VMEMMAP selection from commit 1e63ac08 ("arm64: mm: hugetlb: enable HUGETLB_PAGE_FREE_VMEMMAP for arm64") but keep the flush_dcache_page() compound_head() change as it aligns with the corresponding check in the __sync_icache_dcache() function. The original config option was renamed in commit 47010c04 ("mm: hugetlb_vmemmap: cleanup CONFIG_HUGETLB_PAGE_FREE_VMEMMAP*") to HUGETLB_PAGE_OPTIMIZE_VMEMMAP and the flush_dcache_page() check was further simplified by commit 2da1c309 ("mm: hugetlb_vmemmap: delete hugetlb_optimize_vmemmap_enabled()"). The reason for the revert is that the generic vmemmap_remap_pte() function changes both the permissions (writeable to read-only) and the output address (pfn) of the vmemmap ptes. This is deemed UNPREDICTABLE by the Arm architecture without a break-before-make sequence (make the PTE invalid, TLBI, write the new valid PTE). However, such sequence is not possible since the vmemmap may be concurrently accessed by the kernel. Disable the optimisation until a better solution is found. Fixes: 1e63ac08 ("arm64: mm: hugetlb: enable HUGETLB_PAGE_FREE_VMEMMAP for arm64") Cc: <stable@vger.kernel.org> # 5.19.x Cc: Muchun Song <muchun.song@linux.dev> Cc: Will Deacon <will@kernel.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/Y9pZALdn3pKiJUeQ@arm.comReviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20230222175232.540851-1-catalin.marinas@arm.comSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 Feb, 2023 3 commits
-
-
Sangmoon Kim authored
Commit 0f2cb928 ("arm64: consistently pass ESR_ELx to die()") caused all callers to pass the ESR_ELx value to die(). For consistency, this patch also passes esr to the die() call in cfi_handler, so that die handlers can use the ESR_ELx value when a CFI error occurs. Signed-off-by: Sangmoon Kim <sangmoon.kim@samsung.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230220073441.2753-1-sangmoon.kim@samsung.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
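The shape of the change, sketched (a hedged illustration; see cfi_handler in arch/arm64/kernel/traps.c for the real context):

	/* before (sketch): the ESR was not forwarded to die handlers */
	die("Oops - CFI", regs, 0);

	/* after (sketch): forward the ESR so die handlers can inspect it */
	die("Oops - CFI", regs, esr);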
-
Mark Brown authored
Support for SME without SVE is architecturally valid and has now been tested well enough so let's remove the warning message that is displayed at boot. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230209-arm64-sme-no-sve-v1-1-74eb3df2f878@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Peter Collingbourne authored
During page migration, the copy_highpage function is used to copy the page data to the target page. If the source page is a userspace page with MTE tags, the KASAN tag of the target page must have the match-all tag in order to avoid tag check faults during subsequent accesses to the page by the kernel. However, the target page may have been allocated in a number of ways, some of which will use the KASAN allocator and will therefore end up setting the KASAN tag to a non-match-all tag. Therefore, update the target page's KASAN tag to match the source page.

We ended up unintentionally fixing this issue as a result of a bad merge conflict resolution between commit e059853d ("arm64: mte: Fix/clarify the PG_mte_tagged semantics") and commit 20794545 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags""), which preserved a tag reset for PG_mte_tagged pages which was considered to be unnecessary at the time. Because SW tags KASAN uses separate tag storage, update the code to only reset the tags when HW tags KASAN is enabled.

Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/If303d8a709438d3ff5af5fd85706505830f52e0c Reported-by: "Kuan-Ying Lee (李冠穎)" <Kuan-Ying.Lee@mediatek.com> Cc: <stable@vger.kernel.org> # 6.1 Fixes: 20794545 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"") Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20230215050911.1433132-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
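A hedged sketch of the resulting logic (simplified from arm64's copy_highpage() in arch/arm64/mm/copypage.c):

	void copy_highpage(struct page *to, struct page *from)
	{
		copy_page(page_address(to), page_address(from));

		/*
		 * SW tags KASAN keeps tags in separate storage, so the
		 * destination's KASAN tag only needs resetting for HW tags.
		 */
		if (kasan_hw_tags_enabled())
			page_kasan_tag_reset(to);

		if (system_supports_mte() && page_mte_tagged(from)) {
			mte_copy_page_tags(page_address(to), page_address(from));
			set_page_mte_tagged(to);
		}
	}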
-
- 20 Feb, 2023 1 commit
-
-
Mark Rutland authored
When building a kernel with many debug options enabled (which happens in test configurations used by myself and syzbot), the kernel can become large enough that portions of .text can be more than 128M away from .idmap.text (which is placed inside the .rodata section). Where idmap code branches into .text, the linker will place veneers in the .idmap.text section to make those branches possible. Unfortunately, as Ard reports, GNU LD has been observed to add 4K of padding when adding such veneers, e.g.

| .idmap.text    0xffffffc01e48e5c0      0x32c arch/arm64/mm/proc.o
|                0xffffffc01e48e5c0                idmap_cpu_replace_ttbr1
|                0xffffffc01e48e600                idmap_kpti_install_ng_mappings
|                0xffffffc01e48e800                __cpu_setup
| *fill*         0xffffffc01e48e8ec        0x4
| .idmap.text.stub
|                0xffffffc01e48e8f0       0x18 linker stubs
|                0xffffffc01e48f8f0                __idmap_text_end = .
|                0xffffffc01e48f000                . = ALIGN (0x1000)
| *fill*         0xffffffc01e48f8f0      0x710
|                0xffffffc01e490000                idmap_pg_dir = .

This makes the __idmap_text_start .. __idmap_text_end region bigger than the 4K we require it to fit within, and triggers an assertion in arm64's vmlinux.lds.S, which breaks the build:

| LD .tmp_vmlinux.kallsyms1
| aarch64-linux-gnu-ld: ID map text too big or misaligned
| make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
| make: *** [Makefile:1264: vmlinux] Error 2

Avoid this by using an `ADRP+ADD+BLR` sequence for branches out of .idmap.text, which avoids the need for veneers. These branches are only executed once per boot, and only when the MMU is on, so there should be no noticeable performance penalty in replacing `BL` with `ADRP+ADD+BLR`.

At the same time, remove the "x" and "w" attributes when placing code in .idmap.text, as these are not necessary, and this will prevent the linker from assuming that it is safe to place PLTs into .idmap.text, causing it to warn if and when there are out-of-range branches within .idmap.text, e.g.

| LD .tmp_vmlinux.kallsyms1
| arch/arm64/kernel/head.o: in function `primary_entry':
| (.idmap.text+0x1c): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
| arch/arm64/kernel/head.o: in function `init_el2':
| (.idmap.text+0x88): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
| make[1]: *** [scripts/Makefile.vmlinux:34: vmlinux] Error 1
| make: *** [Makefile:1252: vmlinux] Error 2

Thus, if future changes add out-of-range branches in .idmap.text, it should be easy enough to identify those from the resulting linker errors.

Reported-by: syzbot+f8ac312e31226e23302b@syzkaller.appspotmail.com Link: https://lore.kernel.org/linux-arm-kernel/00000000000028ea4105f4e2ef54@google.com/ Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Will Deacon <will@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230220162317.1581208-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
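Sketched in the same style as the listings above, the substitution amounts to the following (symbol and scratch register choices illustrative):

|	/* before: direct branch, range-limited to +/-128M, may need a veneer */
|	bl	dcache_clean_poc
|
|	/* after: materialise the address, then branch via register */
|	adrp	x0, dcache_clean_poc
|	add	x0, x0, :lo12:dcache_clean_poc
|	blr	x0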
-
- 10 Feb, 2023 3 commits
-
-
Catalin Marinas authored
* for-next/signal:
  : Signal handling cleanups
  arm64/signal: Only read new data when parsing the ZT context
  arm64/signal: Only read new data when parsing the ZA context
  arm64/signal: Only read new data when parsing the SVE context
  arm64/signal: Avoid rereading context frame sizes
  arm64/signal: Make interface for restore_fpsimd_context() consistent
  arm64/signal: Remove redundant size validation from parse_user_sigframe()
  arm64/signal: Don't redundantly verify FPSIMD magic
-
Catalin Marinas authored
* for-next/sysreg-hwcaps:
  : Make use of sysreg helpers for hwcaps
  arm64/cpufeature: Use helper macros to specify hwcaps
  arm64/cpufeature: Always use symbolic name for feature value in hwcaps
  arm64/sysreg: Initial unsigned annotations for ID registers
  arm64/sysreg: Initial annotation of signed ID registers
  arm64/sysreg: Allow enumerations to be declared as signed or unsigned
-
Catalin Marinas authored
Merge branches 'for-next/sysreg', 'for-next/sme', 'for-next/kselftest', 'for-next/misc', 'for-next/sme2', 'for-next/tpidr2', 'for-next/scs', 'for-next/compat-hwcap', 'for-next/ftrace', 'for-next/efi-boot-mmu-on', 'for-next/ptrauth' and 'for-next/pseudo-nmi', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
  perf: arm_spe: Print the version of SPE detected
  perf: arm_spe: Add support for SPEv1.2 inverted event filtering
  perf: Add perf_event_attr::config3
  drivers/perf: fsl_imx8_ddr_perf: Remove set-but-not-used variable
  perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event
  perf: arm_spe: Use new PMSIDR_EL1 register enums
  perf: arm_spe: Drop BIT() and use FIELD_GET/PREP accessors
  arm64/sysreg: Convert SPE registers to automatic generation
  arm64: Drop SYS_ from SPE register defines
  perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines
  perf/marvell: Add ACPI support to TAD uncore driver
  perf/marvell: Add ACPI support to DDR uncore driver
  perf/arm-cmn: Reset DTM_PMU_CONFIG at probe
  drivers/perf: hisi: Extract initialization of "cpa_pmu->pmu"
  drivers/perf: hisi: Simplify the parameters of hisi_pmu_init()
  drivers/perf: hisi: Advertise the PERF_PMU_CAP_NO_EXCLUDE capability

* for-next/sysreg:
  : arm64 sysreg and cpufeature fixes/updates
  KVM: arm64: Use symbolic definition for ISR_EL1.A
  arm64/sysreg: Add definition of ISR_EL1
  arm64/sysreg: Add definition for ICC_NMIAR1_EL1
  arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()
  arm64/sysreg: Fix errors in 32 bit enumeration values
  arm64/cpufeature: Fix field sign for DIT hwcap detection

* for-next/sme:
  : SME-related updates
  arm64/sme: Optimise SME exit on syscall entry
  arm64/sme: Don't use streaming mode to probe the maximum SME VL
  arm64/ptrace: Use system_supports_tpidr2() to check for TPIDR2 support

* for-next/kselftest: (23 commits)
  : arm64 kselftest fixes and improvements
  kselftest/arm64: Don't require FA64 for streaming SVE+ZA tests
  kselftest/arm64: Copy whole EXTRA context
  kselftest/arm64: Fix enumeration of systems without 128 bit SME for SSVE+ZA
  kselftest/arm64: Fix enumeration of systems without 128 bit SME
  kselftest/arm64: Don't require FA64 for streaming SVE tests
  kselftest/arm64: Limit the maximum VL we try to set via ptrace
  kselftest/arm64: Correct buffer size for SME ZA storage
  kselftest/arm64: Remove the local NUM_VL definition
  kselftest/arm64: Verify simultaneous SSVE and ZA context generation
  kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set
  kselftest/arm64: Remove spurious comment from MTE test Makefile
  kselftest/arm64: Support build of MTE tests with clang
  kselftest/arm64: Initialise current at build time in signal tests
  kselftest/arm64: Don't pass headers to the compiler as source
  kselftest/arm64: Remove redundant _start labels from FP tests
  kselftest/arm64: Fix .pushsection for strings in FP tests
  kselftest/arm64: Run BTI selftests on systems without BTI
  kselftest/arm64: Fix test numbering when skipping tests
  kselftest/arm64: Skip non-power of 2 SVE vector lengths in fp-stress
  kselftest/arm64: Only enumerate power of two VLs in syscall-abi
  ...

* for-next/misc:
  : Miscellaneous arm64 updates
  arm64/mm: Intercept pfn changes in set_pte_at()
  Documentation: arm64: correct spelling
  arm64: traps: attempt to dump all instructions
  arm64: Apply dynamic shadow call stack patching in two passes
  arm64: el2_setup.h: fix spelling typo in comments
  arm64: Kconfig: fix spelling
  arm64: cpufeature: Use kstrtobool() instead of strtobool()
  arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path
  arm64: make ARCH_FORCE_MAX_ORDER selectable

* for-next/sme2: (23 commits)
  : Support for arm64 SME 2 and 2.1
  arm64/sme: Fix __finalise_el2 SMEver check
  kselftest/arm64: Remove redundant _start labels from zt-test
  kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps
  kselftest/arm64: Add coverage of the ZT ptrace regset
  kselftest/arm64: Add SME2 coverage to syscall-abi
  kselftest/arm64: Add test coverage for ZT register signal frames
  kselftest/arm64: Teach the generic signal context validation about ZT
  kselftest/arm64: Enumerate SME2 in the signal test utility code
  kselftest/arm64: Cover ZT in the FP stress test
  kselftest/arm64: Add a stress test program for ZT0
  arm64/sme: Add hwcaps for SME 2 and 2.1 features
  arm64/sme: Implement ZT0 ptrace support
  arm64/sme: Implement signal handling for ZT
  arm64/sme: Implement context switching for ZT0
  arm64/sme: Provide storage for ZT0
  arm64/sme: Add basic enumeration for SME2
  arm64/sme: Enable host kernel to access ZT0
  arm64/sme: Manually encode ZT0 load and store instructions
  arm64/esr: Document ISS for ZT0 being disabled
  arm64/sme: Document SME 2 and SME 2.1 ABI
  ...

* for-next/tpidr2:
  : Include TPIDR2 in the signal context
  kselftest/arm64: Add test case for TPIDR2 signal frame records
  kselftest/arm64: Add TPIDR2 to the set of known signal context records
  arm64/signal: Include TPIDR2 in the signal context
  arm64/sme: Document ABI for TPIDR2 signal information

* for-next/scs:
  : arm64: harden shadow call stack pointer handling
  arm64: Stash shadow stack pointer in the task struct on interrupt
  arm64: Always load shadow stack pointer directly from the task struct

* for-next/compat-hwcap:
  : arm64: Expose compat ARMv8 AArch32 features (HWCAPs)
  arm64: Add compat hwcap SSBS
  arm64: Add compat hwcap SB
  arm64: Add compat hwcap I8MM
  arm64: Add compat hwcap ASIMDBF16
  arm64: Add compat hwcap ASIMDFHM
  arm64: Add compat hwcap ASIMDDP
  arm64: Add compat hwcap FPHP and ASIMDHP

* for-next/ftrace:
  : Add arm64 support for DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: avoid executing padding bytes during kexec / hibernation
  arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: ftrace: Update stale comment
  arm64: patching: Add aarch64_insn_write_literal_u64()
  arm64: insn: Add helpers for BTI
  arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
  ACPI: Don't build ACPICA with '-Os'
  Compiler attributes: GCC cold function alignment workarounds
  ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS

* for-next/efi-boot-mmu-on:
  : Permit arm64 EFI boot with MMU and caches on
  arm64: kprobes: Drop ID map text from kprobes blacklist
  arm64: head: Switch endianness before populating the ID map
  efi: arm64: enter with MMU and caches enabled
  arm64: head: Clean the ID map and the HYP text to the PoC if needed
  arm64: head: avoid cache invalidation when entering with the MMU on
  arm64: head: record the MMU state at primary entry
  arm64: kernel: move identity map out of .text mapping
  arm64: head: Move all finalise_el2 calls to after __enable_mmu

* for-next/ptrauth:
  : arm64 pointer authentication cleanup
  arm64: pauth: don't sign leaf functions
  arm64: unify asm-arch manipulation

* for-next/pseudo-nmi:
  : Pseudo-NMI code generation optimisations
  arm64: irqflags: use alternative branches for pseudo-NMI logic
  arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
  arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
  arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
-
- 07 Feb, 2023 6 commits
-
-
Mark Brown authored
During early development a dependency was added on having FA64 available so we could use the full FPSIMD register set in the signal handler, which got copied over into the SSVE+ZA registers test case. Subsequently the ABI was finalised so the handler is run with streaming mode disabled, meaning this is redundant, but the dependency was never removed; do so now. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230202-arm64-kselftest-sve-za-fa64-v1-1-5c5f3dabe441@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
When copying the EXTRA context our calculation of the amount of data we need to copy is incorrect; we only calculate the amount of data needed within uc_mcontext.__reserved, not taking account of the fixed portion of the context. Add in the offset of the reserved data so that we copy everything we should.

This will only cause test failures in cases where the last context in the EXTRA context is smaller than the missing data, since we don't currently validate any of the register data, and all the buffers we copy into are statically allocated so default to zero, meaning that if we walk beyond the end of what we copied we'll encounter what looks like a context with magic and length both 0, which is a valid terminator record.

Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230201-arm64-kselftest-full-extra-v1-1-93741f32dd29@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
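A hedged sketch of the corrected calculation (variable names illustrative; the real test lives under tools/testing/selftests/arm64/signal/):

	/*
	 * Copy the fixed portion of the frame as well as __reserved[]:
	 * counting only the reserved data under-counts by the offset of
	 * __reserved within the frame.
	 */
	size_t copy_size = offsetof(ucontext_t, uc_mcontext.__reserved) +
			   reserved_size;
	memcpy(dest, sigframe, copy_size);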
-
Ard Biesheuvel authored
The ID mapped text region is never accessed via the normal kernel mapping of text, and so it was moved into .rodata instead. This means it is no longer considered a suitable place for kprobes by default, so the explicit blacklist is unnecessary, and actually results in an error message at boot:

  kprobes: Failed to populate blacklist (error -22), kprobes not restricted, be careful using them!

So stop blacklisting the ID map text explicitly.

Fixes: af7249b3 ("arm64: kernel: move identity map out of .text mapping") Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230204101807.2862321-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Rob Herring authored
There are now up to four versions of SPE. Let's add the version that's been detected to the driver's informational printout. Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20230206204746.1452942-1-robh@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
Rob Herring authored
Arm SPEv1.2 (Arm v8.7/v9.2) adds a new feature called Inverted Event Filter, which excludes samples matching the event filter. The feature mirrors the existing event filter in PMSEVFR_EL1, adding a new register, PMSNEVFR_EL1, which has the same event bit assignments. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-8-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
Rob Herring authored
Arm SPEv1.2 adds another 64 bits of event filtering control. As the existing perf_event_attr::configN fields are all used up for the SPE PMU, an additional field is needed. Add a new 'config3' field. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-7-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
- 06 Feb, 2023 1 commit
-
-
Marc Zyngier authored
When checking for ID_AA64SMFR0_EL1.SMEver, __check_override assumes that the ID_AA64SMFR0_EL1 value is in x1, and the intent of the code is to reuse the value read a few lines above. However, as the comment at the beginning of the macro says, x1 will be clobbered, and the check always fails. The easiest fix is just to reload the ID register before checking it. Fixes: f122576f ("arm64/sme: Enable host kernel to access ZT0") Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
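A loose, hand-expanded sketch of the field check (hedged: the real __check_override macro in arch/arm64/kernel/hyp-stub.S also consults the override values, elided here; the actual fix just adds the reload):

|	// x1 was clobbered by the preceding checks, so reload it
|	mrs	x1, id_aa64smfr0_el1
|	// __check_override then extracts and tests the SMEver field
|	ubfx	x1, x1, #ID_AA64SMFR0_EL1_SMEver_SHIFT, #4
|	cbz	x1, .Lskip	// field absent: skip the feature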
-
- 03 Feb, 2023 1 commit
-
-
Sascha Hauer authored
active_events is set but not used; remove it. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Link: https://lore.kernel.org/r/20230203121509.3580245-1-s.hauer@pengutronix.de Signed-off-by: Will Deacon <will@kernel.org>
-
- 01 Feb, 2023 17 commits
-
-
Mark Brown authored
When we parse the ZT signal context we read the entire context from userspace, including the generic signal context header which was already read by parse_user_sigframe() and padding bytes that we ignore. Avoid any reliance on the data remaining the same between the two reads by reading only the data which we are actually going to use. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-7-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
When we parse the ZA signal context we read the entire context from userspace, including the generic signal context header which was already read by parse_user_sigframe() and padding bytes that we ignore. Avoid any reliance on the data remaining the same between the two reads by reading only the data which we are actually going to use. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-6-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
When we parse the SVE signal context we read the entire context from userspace, including the generic signal context header which was already read by parse_user_sigframe() and padding bytes that we ignore. Avoid any reliance on the data remaining the same between the two reads by reading only the data which we are actually going to use. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-5-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
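A hedged sketch of the pattern shared by these three patches (offsets illustrative; the real code in arch/arm64/kernel/signal.c skips the already-parsed header):

	/*
	 * The generic header was already read and validated by
	 * parse_user_sigframe(); copy only the payload after it.
	 */
	if (__copy_from_user((char *)&sve + sizeof(sve.head),
			     (char const __user *)user->sve + sizeof(sve.head),
			     sizeof(sve) - sizeof(sve.head)))
		return -EFAULT;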
-
Mark Brown authored
We need to read the sizes of the signal context frames as part of parsing the overall signal context in parse_user_sigframe(). In the cases where we defer frame specific parsing to other functions, those functions (other than the recently added TPIDR2 parser) reread the size and validate the version they read, opening the possibility that the value may change. Avoid this possibility by passing the size read in parse_user_sigframe() through user_ctxs and referring to that. For consistency we move the size check for the TPIDR2 context into the TPIDR2 parsing function. Note that for the SVE, ZA and ZT contexts we still read the size again, but after this change we no longer use the value; further changes will avoid the read. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-4-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
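A hedged sketch of the idea (field list abridged; see struct user_ctxs in arch/arm64/kernel/signal.c):

	struct user_ctxs {
		struct fpsimd_context __user *fpsimd;
		u32 fpsimd_size;	/* recorded once by parse_user_sigframe() */
		struct sve_context __user *sve;
		u32 sve_size;
		/* ... ZA, ZT and TPIDR2 pointers and sizes likewise ... */
	};

The individual parsers then trust only the size captured during the initial walk rather than rereading it from userspace.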
-
Mark Brown authored
Instead of taking a pointer to struct user_ctxs like the other two restore_blah_context() functions, the FPSIMD function takes a pointer to the user struct it should read. Change it to take a struct user_ctxs pointer like the rest, both for consistency and to prepare for changes which avoid rereading data that has already been read by the core parsing code. There should be no functional change from this patch. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-3-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
There is some minimal size validation in parse_user_sigframe(); however, all of the individual parsing functions perform frame specific validation of the sizing information. Remove the frame specific size checks in the core so that there isn't any confusion about what we validate for size. Since the checks in the SVE and ZA parsing are after we have read the relevant context, and since they won't report an error if the frame is undersized, they are adjusted to check for this before doing anything else. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-2-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
We validate that the magic in the struct fpsimd_context is correct in restore_fpsimd_context(), but this is redundant since parse_user_sigframe() uses this magic to decide to call the function in the first place. Remove the extra validation. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221212-arm64-signal-cleanup-v3-1-4545c94b20ff@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
Patches on this branch depend on the branches merged above.
-
Mark Brown authored
At present the hwcaps are hard to read and a bit error prone since the macros used to specify matches require us to write out the register name multiple times and explicitly specify the width of the field, hopefully using the correct constant. Now that all the ID registers are generated we can improve this somewhat by redoing the macros so that we specify the register, field and minimum value symbolically and use token pasting to initialise the capability struct with the appropriate values.

We move from specifying like this:

	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_BT_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_BT_IMP, CAP_HWCAP, KERNEL_HWCAP_BTI),

to this:

	HWCAP_CAP(ID_AA64PFR1_EL1, BT, IMP, CAP_HWCAP, KERNEL_HWCAP_BTI),

which is shorter due to having less duplicate information, and makes it much harder to make an error like specifying the wrong field width or an invalid enumeration value, since everything must be a constant defined for the sysreg and names are only typed once.

There should be no functional effect from this change; a check of the generated .rodata showed no differences.

Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221207-arm64-sysreg-helpers-v4-5-25b6b3fb9d18@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
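A hedged sketch of the token pasting behind the new form (simplified; the real macros live in arch/arm64/kernel/cpufeature.c):

	/*
	 * HWCAP_CAP(ID_AA64PFR1_EL1, BT, IMP, ...) pastes its arguments
	 * into the generated sysreg constants, e.g. ID_AA64PFR1_EL1_BT_SHIFT.
	 */
	#define HWCAP_CPUID_MATCH(reg, field, min_value)		\
		.matches = has_cpuid_feature,				\
		.sys_reg = SYS_##reg,					\
		.field_pos = reg##_##field##_SHIFT,			\
		.field_width = reg##_##field##_WIDTH,			\
		.sign = reg##_##field##_SIGNED,				\
		.min_field_value = reg##_##field##_##min_value,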
-
Mark Brown authored
Our table of hwcaps sometimes uses the defined constant to specify the enumeration value they are attempting to match, but in some cases an unadorned number is used. In preparation for using helper macros to specify the hwcaps less verbosely, replace the magic numbers with their constants; this will hopefully make the conversion to helper macros easier to review. There should be no functional effect from this change; a check of the generated .rodata showed no differences. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221207-arm64-sysreg-helpers-v4-4-25b6b3fb9d18@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
In order to allow the simplification of the way we declare hwcaps, annotate most of the unsigned fields in the identification registers as such. This is not a complete annotation: it does cover all the cases where we already annotate signedness of the field in the hwcaps, and some others which I happened to look at and seemed clear, but there will be more, and nothing outside the identification registers was even looked at. Other fields can be annotated incrementally as people have the time and need to do so. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221207-arm64-sysreg-helpers-v4-3-25b6b3fb9d18@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
We currently annotate a few bitfields as signed in hwcaps; update all of these to be SignedEnum in the sysreg generation. Further signed bitfields can be done incrementally; this is the minimum required for the conversion of the hwcaps to use token pasting to simplify their declaration. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221207-arm64-sysreg-helpers-v4-2-25b6b3fb9d18@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Many of our enumerations follow a standard scheme where the values can be treated as unsigned; however, there are some where the value must be treated as signed, and others that are simple enumerations where there is no clear ordering to the values. Provide new field types SignedEnum and UnsignedEnum which allow the signedness to be specified in the sysreg definition, and emit a REG_FIELD_SIGNED define for these which is a boolean corresponding to our current FTR_UNSIGNED and FTR_SIGNED macros. Existing Enums will need to be converted; since these do not have a define generated, anyone wishing to use the sign of one of these will need to explicitly annotate that field, so nothing should start going wrong by default. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221207-arm64-sysreg-helpers-v4-1-25b6b3fb9d18@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
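For illustration, an annotated field in arch/arm64/tools/sysreg then looks like this (excerpt reconstructed from the file format; values per the ID_AA64PFR1_EL1.BT definition):

	UnsignedEnum	3:0	BT
		0b0000	NI
		0b0001	IMP
	EndEnum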
-
Catalin Marinas authored
Merge branches 'for-next/sysreg', 'for-next/compat-hwcap' and 'for-next/sme2' into for-next/sysreg-hwcaps Patches on this branch depend on the branches merged above.
-
Mark Brown authored
The current signal handling tests for SME do not account for the fact that, unlike SVE, all SME vector lengths are optional, so we can't guarantee that we will encounter the minimum possible VL; the tests will hang enumerating VLs on such systems. Abort enumeration when we find the lowest VL in the newly added ssve_za_regs test. Fixes: bc69da5f ("kselftest/arm64: Verify simultaneous SSVE and ZA context generation") Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230131-arm64-kselftest-sig-sme-no-128-v1-2-d47c13dc8e1e@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
The current signal handling tests for SME do not account for the fact that, unlike SVE, all SME vector lengths are optional, so we can't guarantee that we will encounter the minimum possible VL; the tests will hang enumerating VLs on such systems. Abort enumeration when we find the lowest VL. Fixes: 4963aeb3 ("kselftest/arm64: signal: Add SME signal handling tests") Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230131-arm64-kselftest-sig-sme-no-128-v1-1-d47c13dc8e1e@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
During early development a dependency was added on having FA64 available so we could use the full FPSIMD register set in the signal handler. Subsequently the ABI was finalised so the handler is run with streaming mode disabled, meaning this is redundant, but the dependency was never removed; do so now. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230131-arm64-kselfetest-ssve-fa64-v1-1-f418efcc2b60@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 31 Jan, 2023 7 commits
-
-
Mark Rutland authored
Due to the way we use alternatives in the irqflags code, even when CONFIG_ARM64_PSEUDO_NMI=n, we generate unused alternative code for pseudo-NMI management. This patch reworks the irqflags code to remove the redundant code when CONFIG_ARM64_PSEUDO_NMI=n, which benefits the more common case, and will permit further rework of our DAIF management (e.g. in preparation for ARMv8.8-A's NMI feature).

Prior to this patch a defconfig kernel has hundreds of redundant instructions to access ICC_PMR_EL1 (which should only need to be manipulated in setup code), which this patch removes:

| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before-defconfig | grep icc_pmr_el1 | wc -l
| 885
| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after-defconfig | grep icc_pmr_el1 | wc -l
| 5

Those instructions alone account for more than 3KiB of kernel text, and will be associated with additional alt_instr entries, padding and branches, etc.

These redundant instructions exist because we use alternative sequences to choose between DAIF / PMR management in irqflags.h, and even when CONFIG_ARM64_PSEUDO_NMI=n, those alternative sequences will generate the code for PMR management, along with alt_instr entries. We use alternatives here as this was necessary to ensure that we never encounter a mismatched local_irq_save() ... local_irq_restore() sequence in the middle of patching, which was possible to see if we used static keys to choose between DAIF and PMR management.

Since commit:

  21fb26bf ("arm64: alternatives: add alternative_has_feature_*()")

... we have a mechanism to use alternatives similarly to static keys, allowing us to write the bulk of the logic in C code while also being able to rely on all sites being patched in one go, and avoiding a mismatched local_irq_save() ... local_irq_restore() sequence during patching.

This patch rewrites arm64's local_irq_*() functions to use alternative branches. This allows for the pseudo-NMI code to be entirely elided when CONFIG_ARM64_PSEUDO_NMI=n, making a defconfig Image 64KiB smaller, and not affecting the size of an Image with CONFIG_ARM64_PSEUDO_NMI=y:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 137473432 Jan 18 11:11 vmlinux-after-defconfig
| -rwxr-xr-x 1 mark mark 137918776 Jan 18 11:15 vmlinux-after-pnmi
| -rwxr-xr-x 1 mark mark 137380152 Jan 18 11:03 vmlinux-before-defconfig
| -rwxr-xr-x 1 mark mark 137523704 Jan 18 11:08 vmlinux-before-pnmi
| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 38646272 Jan 18 11:11 Image-after-defconfig
| -rw-r--r-- 1 mark mark 38777344 Jan 18 11:14 Image-after-pnmi
| -rw-r--r-- 1 mark mark 38711808 Jan 18 11:03 Image-before-defconfig
| -rw-r--r-- 1 mark mark 38777344 Jan 18 11:08 Image-before-pnmi

Some sensitive code depends on being run with interrupts enabled or with interrupts disabled, and so when enabling or disabling interrupts we must ensure that the compiler does not move such code around the actual enable/disable. Before this patch, that was ensured by the combined asm volatile blocks having memory clobbers (and any sensitive code either being asm volatile, or touching memory). This patch consistently uses explicit barrier() operations before and after the enable/disable, which allows us to use the usual sysreg accessors (which are asm volatile) to manipulate the interrupt masks. The use of pmr_sync() is pulled within this critical section for consistency.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230130145429.903791-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
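A hedged, heavily simplified sketch of the resulting shape (the real code in arch/arm64/include/asm/irqflags.h splits this into __pmr_/__daif_ helpers selected by an alternative-patched branch):

	static inline void arch_local_irq_enable(void)
	{
		if (system_uses_irq_prio_masking()) {	/* alternative branch */
			barrier();
			write_sysreg_s(GIC_PRIO_IRQON, SYS_ICC_PMR_EL1);
			pmr_sync();
			barrier();
		} else {
			barrier();
			asm volatile("msr daifclr, #3");
			barrier();
		}
	}

With CONFIG_ARM64_PSEUDO_NMI=n the PMR branch compiles away entirely, which is where the text savings above come from.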
-
Mark Rutland authored
When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR value to determine whether to signal an IRQ to a PE, and consequently after a change to the PMR value, a DSB SY may be required to ensure that interrupts are signalled to a CPU in finite time. When PMHE == 0b0, interrupts are always signalled to the relevant PE, and all masking occurs locally, without requiring a DSB SY.

Since commit:

  f2266504 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")

... we handle this dynamically: in most cases a static key is used to determine whether to issue a DSB SY, but the entry code must read from ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.

It would be much nicer to use an alternative instruction sequence for the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the entry code, and for most other code this will result in simpler code generation with fewer instructions and fewer branches.

This patch adds a new ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap which is only set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in use). This allows us to replace the existing users of the `gic_pmr_sync` static key with alternative sequences which default to a DSB SY and are relaxed to a NOP when PMHE is not in use.

The entry assembly management of the PMR is slightly restructured to use a branch (rather than multiple NOPs) when priority masking is not in use. This is more in keeping with other alternatives in the entry assembly, and permits the use of a separate alternative for the PMHE-dependent DSB SY (and removal of the conditional branch this currently requires). For consistency I've adjusted both the save and restore paths.

According to bloat-o-meter, when building defconfig + CONFIG_ARM64_PSEUDO_NMI=y this shrinks the kernel text by ~4KiB:

| add/remove: 4/2 grow/shrink: 42/310 up/down: 332/-5032 (-4700)

The resulting vmlinux is ~66KiB smaller, though the resulting Image size is unchanged due to padding and alignment:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 137508344 Jan 17 14:11 vmlinux-after
| -rwxr-xr-x 1 mark mark 137575440 Jan 17 13:49 vmlinux-before
| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 38777344 Jan 17 14:11 Image-after
| -rw-r--r-- 1 mark mark 38777344 Jan 17 13:49 Image-before

Prior to this patch we did not verify the state of ICC_CTLR_EL1.PMHE on secondary CPUs. As of this patch this is verified by the cpufeature code when using GIC priority masking (i.e. when using pseudo-NMIs).

Note that since commit:

  7e3a57fa ("arm64: Document ICC_CTLR_EL3.PMHE setting requirements")

... Documentation/arm64/booting.rst specifies:

| - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
|   all CPUs the kernel is executing on, and must stay constant
|   for the lifetime of the kernel.

... so that should not adversely affect any compliant systems, and as we'll only check for the absence of PMHE when using pseudo-NMIs, this will only fire when such mismatch will adversely affect the system.

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230130145429.903791-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
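A hedged sketch of the relaxed pmr_sync() this enables (simplified; the helper name follows the alternative_has_feature_*() API mentioned in the irqflags patch above):

	static __always_inline void pmr_sync(void)
	{
	#ifdef CONFIG_ARM64_PSEUDO_NMI
		/* DSB SY by default, patched out when PMHE is not in use */
		if (!alternative_has_feature_unlikely(ARM64_HAS_GIC_PRIO_RELAXED_SYNC))
			dsb(sy);
	#endif
	}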
-
Mark Rutland authored
Currently the arm64_cpu_capabilities structure for ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so that can_use_gic_priorities() can use has_useable_gicv3_cpuif(). This duplication isn't ideal for the legibility of the code, and sets a bad example for any ARM64_HAS_GIC_* definitions added by subsequent patches. Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why this is safe. Subsequent patches will use the same pattern where one cpucap depends upon another. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230130145429.903791-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Subsequent patches will add more GIC-related cpucaps. When we do so, it would be nice to give them a consistent HAS_GIC_* prefix. In preparation for doing so, this patch renames the existing ARM64_HAS_IRQ_PRIO_MASKING cap to ARM64_HAS_GIC_PRIO_MASKING.

The cpucaps file was hand-modified; all other changes were scripted with:

  find . -type f -name '*.[chS]' -print0 | \
    xargs -0 sed -i 's/ARM64_HAS_IRQ_PRIO_MASKING/ARM64_HAS_GIC_PRIO_MASKING/'

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230130145429.903791-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Subsequent patches will add more GIC-related cpucaps. When we do so, it would be nice to give them a consistent HAS_GIC_* prefix. In preparation for doing so, this patch renames the existing ARM64_HAS_SYSREG_GIC_CPUIF cap to ARM64_HAS_GIC_CPUIF_SYSREGS. The 'CPUIF_SYSREGS' suffix is chosen so that this will be ordered ahead of other ARM64_HAS_GIC_* definitions in subsequent patches.

The cpucaps file was hand-modified; all other changes were scripted with:

  find . -type f -name '*.[chS]' -print0 | \
    xargs -0 sed -i 's/ARM64_HAS_SYSREG_GIC_CPUIF/ARM64_HAS_GIC_CPUIF_SYSREGS/'

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230130145429.903791-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Currently, when CONFIG_ARM64_PTR_AUTH_KERNEL=y (and CONFIG_UNWIND_PATCH_PAC_INTO_SCS=n), we enable pointer authentication for all functions, including leaf functions. This isn't necessary, and is unfortunate for a few reasons:

* Any PACIASP instruction is implicitly a `BTI C` landing pad, and forcing the addition of a PACIASP in every function introduces a larger set of BTI gadgets than is necessary.

* The PACIASP and AUTIASP instructions make leaf functions larger than necessary, bloating the kernel Image. For a defconfig v6.2-rc3 kernel, this appears to add ~64KiB relative to not signing leaf functions, which is unfortunate but not entirely onerous.

* The PACIASP and AUTIASP instructions potentially make leaf functions more expensive in terms of performance and/or power. For many trivial leaf functions, this is clearly unnecessary, e.g.

| <arch_local_save_flags>:
|        d503233f        paciasp
|        d53b4220        mrs     x0, daif
|        d50323bf        autiasp
|        d65f03c0        ret

| <calibration_delay_done>:
|        d503233f        paciasp
|        d50323bf        autiasp
|        d65f03c0        ret
|        d503201f        nop

* When CONFIG_UNWIND_PATCH_PAC_INTO_SCS=y we disable pointer authentication for leaf functions, so clearly this is not functionally necessary, indicates we have an inconsistent threat model, and convolutes the Makefile logic.

We've used pointer authentication in leaf functions since the introduction of in-kernel pointer authentication in commit:

  74afda40 ("arm64: compile the kernel with ptrauth return address signing")

... but at the time we had no rationale for signing leaf functions.

Subsequently, we considered avoiding signing leaf functions:

  https://lore.kernel.org/linux-arm-kernel/1586856741-26839-1-git-send-email-amit.kachhap@arm.com/
  https://lore.kernel.org/linux-arm-kernel/1588149371-20310-1-git-send-email-amit.kachhap@arm.com/

... however at the time we didn't have an abundance of reasons to avoid signing leaf functions as above (e.g. the BTI case), we had no hardware to make performance measurements, and it was reasoned that this gave some level of protection against a limited set of code-reuse gadgets which would fall through to a RET. We documented this in commit:

  717b938e ("arm64: Document why we enable PAC support for leaf functions")

Notably, this was before we supported any forward-edge CFI scheme (e.g. Arm BTI, or Clang CFI/kCFI), which would prevent jumping into the middle of a function.

In addition, even with signing forced for leaf functions, AUTIASP may be placed before a number of instructions which might constitute such a gadget, e.g.

| <user_regs_reset_single_step>:
|        f9400022        ldr     x2, [x1]
|        d503233f        paciasp
|        d50323bf        autiasp
|        f9408401        ldr     x1, [x0, #264]
|        720b005f        tst     w2, #0x200000
|        b26b0022        orr     x2, x1, #0x200000
|        926af821        and     x1, x1, #0xffffffffffdfffff
|        9a820021        csel    x1, x1, x2, eq  // eq = none
|        f9008401        str     x1, [x0, #264]
|        d65f03c0        ret

| <fpsimd_cpu_dead>:
|        2a0003e3        mov     w3, w0
|        9000ff42        adrp    x2, ffff800009ffd000 <xen_dynamic_chip+0x48>
|        9120e042        add     x2, x2, #0x838
|        52800000        mov     w0, #0x0                        // #0
|        d503233f        paciasp
|        f000d041        adrp    x1, ffff800009a20000 <this_cpu_vector>
|        d50323bf        autiasp
|        9102c021        add     x1, x1, #0xb0
|        f8635842        ldr     x2, [x2, w3, uxtw #3]
|        f821685f        str     xzr, [x2, x1]
|        d65f03c0        ret
|        d503201f        nop

So generally, trying to use AUTIASP to detect such gadgetization is not robust, and this is dealt with far better by forward-edge CFI (which is designed to prevent such cases). We should bite the bullet and stop pretending that AUTIASP is a mitigation for such forward-edge gadgetization.
For the above reasons, this patch has the kernel consistently sign non-leaf functions and avoid signing leaf functions. Considering a defconfig v6.2-rc3 kernel built with LLVM 15.0.6:

* The vmlinux is ~43KiB smaller:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 338547808 Jan 25 17:17 vmlinux-after
| -rwxr-xr-x 1 mark mark 338591472 Jan 25 17:22 vmlinux-before

* The resulting Image is 64KiB smaller:

| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rwxr-xr-x 1 mark mark 32702976 Jan 25 17:17 Image-after
| -rwxr-xr-x 1 mark mark 32768512 Jan 25 17:22 Image-before

* There are ~400 fewer BTI gadgets:

| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
|    1219 bti c
|   61982 paciasp
| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after 2> /dev/null | grep -ow 'paciasp\|bti\sc\?' | sort | uniq -c
|   10099 bti c
|   52699 paciasp

Which is +8880 BTIs, and -9283 PACIASPs, for -403 unnecessary BTI gadgets. While this is small relative to the total, distinguishing the two cases will make it easier to analyse and reduce this set further in future.

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230131105809.991288-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Assemblers will reject instructions not supported by a target architecture version, and so we must explicitly tell the assembler the latest architecture version whose instructions we want to assemble. We've added a few AS_HAS_ARMV8_<N> definitions for this, in addition to an inconsistently named AS_HAS_PAC definition, from which arm64's top-level Makefile determines the architecture version that we intend to target, and generates the `asm-arch` variable.

To make this a bit clearer and easier to maintain, this patch reworks the Makefile to determine asm-arch in a single if-else-endif chain. AS_HAS_PAC, which is defined when the assembler supports `-march=armv8.3-a`, is renamed to AS_HAS_ARMV8_3.

As the logic for armv8.3-a is lifted out of the block handling pointer authentication, `asm-arch` may now be set to armv8.3-a regardless of whether support for pointer authentication is selected. This means that it will be possible to assemble armv8.3-a instructions even if we didn't intend to, but this is consistent with our handling of other architecture versions, and the compiler won't generate armv8.3-a instructions regardless.

For the moment there's no need for a CONFIG_AS_HAS_ARMV8_1, as the code for LSE atomics and LDAPR uses individual `.arch_extension` entries and does not require the baseline asm arch to be bumped to armv8.1-a. The other armv8.1-a features (e.g. PAN) do not require assembler support.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230131105809.991288-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-