- 22 Jan, 2020 8 commits
-
-
Mark Rutland authored
Since commit: d44f1b8d ("arm64: KVM/mm: Move SEA handling behind a single 'claim' interface") ... the top-level APEI SEA handler has the shape: 1. current_flags = arch_local_save_flags() 2. local_daif_restore(DAIF_ERRCTX) 3. <GHES handler> 4. local_daif_restore(current_flags) However, since commit: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking") ... when pseudo-NMIs (pNMIs) are in use, arch_local_save_flags() will save the PMR value rather than the DAIF flags. The combination of these two commits means that the APEI SEA handler will erroneously attempt to restore the PMR value into DAIF. Fix this by factoring local_daif_save_flags() out of local_daif_save(), so that we can consistently save DAIF in step #1, regardless of whether pNMIs are in use. Both commits were introduced concurrently in v5.0. Cc: <stable@vger.kernel.org> Fixes: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking") Fixes: d44f1b8d ("arm64: KVM/mm: Move SEA handling behind a single 'claim' interface") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
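The fixed handler shape, as a minimal C sketch (names taken from the commit; the GHES call itself is elided):

    /*
     * Sketch of the corrected APEI SEA handler, assuming the new
     * local_daif_save_flags() helper: DAIF is saved explicitly so a
     * PMR value is never restored into DAIF when pNMIs are in use.
     */
    unsigned long current_flags;

    current_flags = local_daif_save_flags();  /* always DAIF, never PMR */
    local_daif_restore(DAIF_ERRCTX);
    /* ... GHES/SEA handling runs here ... */
    local_daif_restore(current_flags);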
-
Will Deacon authored
* for-next/rng: (2 commits) arm64: Use v8.5-RNG entropy for KASLR seed ...
-
Will Deacon authored
* for-next/errata: (3 commits) arm64: Workaround for Cortex-A55 erratum 1530923 ...
-
Will Deacon authored
* for-next/asm-annotations: (6 commits) arm64: kernel: Correct annotation of end of el0_sync ...
-
Will Deacon authored
Merge branches 'for-next/acpi', 'for-next/cpufeatures', 'for-next/csum', 'for-next/e0pd', 'for-next/entry', 'for-next/kbuild', 'for-next/kexec/cleanup', 'for-next/kexec/file-kdump', 'for-next/misc', 'for-next/nofpsimd', 'for-next/perf' and 'for-next/scs' into for-next/core * for-next/acpi: ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map() * for-next/cpufeatures: (2 commits) arm64: Introduce ID_ISAR6 CPU register ... * for-next/csum: (2 commits) arm64: csum: Fix pathological zero-length calls ... * for-next/e0pd: (7 commits) arm64: kconfig: Fix alignment of E0PD help text ... * for-next/entry: (5 commits) arm64: entry: cleanup sp_el0 manipulation ... * for-next/kbuild: (4 commits) arm64: kbuild: remove compressed images on 'make ARCH=arm64 (dist)clean' ... * for-next/kexec/cleanup: (11 commits) Revert "arm64: kexec: make dtb_mem always enabled" ... * for-next/kexec/file-kdump: (2 commits) arm64: kexec_file: add crash dump support ... * for-next/misc: (12 commits) arm64: entry: Avoid empty alternatives entries ... * for-next/nofpsimd: (7 commits) arm64: nofpsmid: Handle TIF_FOREIGN_FPSTATE flag cleanly ... * for-next/perf: (2 commits) perf/imx_ddr: Fix cpu hotplug state cleanup ... * for-next/scs: (6 commits) arm64: kernel: avoid x18 in __cpu_soft_restart ...
-
Will Deacon authored
Remove the additional space. Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Brown authored
When seeding KASLR on a system where we have architecture-level random number generation, make use of that entropy, mixing it in with the seed passed by the bootloader. Since this runs very early in init, before feature detection is complete, we open-code the register access rather than use archrandom.h. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
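A hedged sketch of the mixing step (the early RNDR helper and the seed plumbing below are illustrative, not the exact code):

    /*
     * Illustrative only: fold architecture-provided entropy into the
     * bootloader-supplied KASLR seed. __early_rndr() stands in for the
     * open-coded RNDR read used before feature detection is complete.
     */
    u64 seed = get_kaslr_seed(fdt);   /* seed passed via the device tree */
    u64 raw;

    if (__early_rndr(&raw))           /* hypothetical early helper */
        seed ^= raw;                  /* mix the extra entropy in */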
-
Richard Henderson authored
Expose the ID_AA64ISAR0.RNDR field to userspace, as the RNG system registers are always available at EL0. Implement arch_get_random_seed_long using RNDR. Given that the TRNG is likely to be a shared resource between cores and VMs, do not explicitly force re-seeding with RNDRRS. In order to avoid code complexity and potential issues with heterogeneous systems, only provide values after cpufeature has finalized the system capabilities. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [Modified to only function after cpufeature has finalized the system capabilities and move all the code into the header -- broonie] Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> [will: Advertise HWCAP via /proc/cpuinfo] Signed-off-by: Will Deacon <will@kernel.org>
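A sketch of what an RNDR-backed read can look like (the system-register encoding follows the Armv8.5 RNG definition; treat the surrounding kernel plumbing as assumed):

    /*
     * Sketch: read RNDR (op0=3, op1=3, CRn=2, CRm=4, op2=0). On failure
     * the CPU sets PSTATE.Z, so success corresponds to the NE condition.
     */
    static inline bool rndr_read(unsigned long *v)
    {
        bool ok;

        asm volatile("mrs %0, s3_3_c2_c4_0\n"
                     "cset %w1, ne\n"
                     : "=r" (*v), "=r" (ok) : : "cc");
        return ok;
    }

arch_get_random_seed_long() would only attempt such a read once the relevant cpufeature has been finalized, per the constraint described above.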
-
- 21 Jan, 2020 3 commits
-
-
Dirk Behme authored
Since v4.3-rc1 commit 0723c05f ("arm64: enable more compressed Image formats"), it is possible to build Image.{bz2,lz4,lzma,lzo} AArch64 images. However, the commit missed adding support for removing those images on 'make ARCH=arm64 (dist)clean'. Fix this by adding them to the target list. Make sure to match the order of the recipes in the makefile. Cc: stable@vger.kernel.org # v4.3+ Fixes: 0723c05f ("arm64: enable more compressed Image formats") Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com> Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com> Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Julien Thierry authored
kernel_ventry will create alternative entries to potentially replace 0 instructions with 0 instructions for EL1 vectors. While this does not cause an issue, it pointlessly takes up some bytes in the alternatives section. Do not generate such entries. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <jthierry@redhat.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Vladimir Murzin authored
arm64 provides an always-working implementation of futex_atomic_cmpxchg_inatomic(), so there is no need to check for it at runtime. Reported-by: Piyush swami <Piyush.swami@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
- 17 Jan, 2020 10 commits
-
-
Robin Murphy authored
In validating the checksumming results of the new routine, I sadly neglected to test its not-checksumming results. Thus it slipped through that the one case where @buff is already dword-aligned and @len = 0 manages to defeat the tail-masking logic and behave as if @len = 8. For a zero length it doesn't make much sense to dereference @buff anyway, so just add an early return (which has essentially zero impact on performance). Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
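The fix amounts to bailing out before the tail-masking logic ever runs; a sketch (the exact guard in the real routine may differ slightly):

    /* Sketch of the early return added to do_csum(): a zero-length
     * buffer contributes nothing, so never dereference @buff for it. */
    unsigned int do_csum(const unsigned char *buff, int len)
    {
        unsigned int sum = 0;

        if (unlikely(len <= 0))
            return 0;

        /* ... existing aligned-load and tail-masking logic ... */
        return sum;
    }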
-
Mark Rutland authored
The kernel stashes the current task struct in sp_el0 so that this can be acquired consistently/cheaply when required. When we take an exception from EL0 we have to: 1) stash the original sp_el0 value 2) find the current task 3) update sp_el0 with the current task pointer Currently steps #1 and #2 occur in one place, and step #3 a while later. As the value of sp_el0 is immaterial between these points, let's move them together to make the code clearer and minimize ifdeffery. This necessitates moving the comment for MDSCR_EL1.SS. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
For most of the exception entry code, <foo>_handler() is the first C function called from the entry assembly in entry-common.c, and external functions handling the bulk of the logic are called do_<foo>(). For consistency, apply this scheme to el0_svc_handler and el0_svc_compat_handler, renaming them to do_el0_svc and do_el0_svc_compat respectively. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Almost all functions in entry-common.c are marked notrace, with el1_undef and el1_inv being the only exceptions. We appear to have done this on the assumption that there were no exception registers that we needed to snapshot, and thus it was safe to run trace code that might result in further exceptions and clobber those registers. However, until we inherit the DAIF flags, our irq flag tracing is stale, and this discrepancy could set off warnings in some configurations, for example if CONFIG_DEBUG_LOCKDEP is selected and a trace function calls into any flag-checking locking routines. Given we don't expect to trigger el1_undef or el1_inv unless something is already wrong, any irqflag warnings are liable to mask the information we'd actually care about. Let's keep things simple and mark el1_undef and el1_inv as notrace. Developers can trace do_undefinstr and bad_mode if they really want to monitor these cases. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
These days arm64 kernels are always SMP, and thus smp_dmb is an overly-long way of writing dmb. Naturally, no-one uses it. Remove the unused macro. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
We haven't needed the inherit_daif macro since commit: ed3768db ("arm64: entry: convert el1_sync to C") ... which converted all callers to C and the local_daif_inherit function. Remove the unused macro. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Hanjun Guo authored
The IORT specification [0] (Section 3, table 4, page 9) defines the 'Number of IDs' as 'The number of IDs in the range minus one'. However, the IORT ID mapping function iort_id_map() treats the 'Number of IDs' field as if it were the full IDs mapping count, with the following check in place to detect out-of-bounds input IDs: InputID >= Input base + Number of IDs. This check is flawed in that it considers the 'Number of IDs' field as the full number of IDs mapping and disregards the 'minus one' from the IDs count. The correct check in iort_id_map() should be implemented as: InputID > Input base + Number of IDs. This implements the specification correctly, but unfortunately it breaks existing firmwares that erroneously set the 'Number of IDs' as the full IDs mapping count rather than the IDs mapping count minus one. e.g. PCI hostbridge mapping entry 1: Input base: 0x1000 ID Count: 0x100 Output base: 0x1000 Output reference: 0xC4 //ITS reference PCI hostbridge mapping entry 2: Input base: 0x1100 ID Count: 0x100 Output base: 0x2000 Output reference: 0xD4 //ITS reference These are two mapping entries where the second entry's Input base equals the first entry's Input base + ID count, so for InputID 0x1100, with the correct InputID check in place in iort_id_map(), the kernel would map the InputID to ITS 0xC4, not 0xD4 as would be expected. Therefore, to keep supporting existing flawed firmwares, introduce a workaround that instructs the kernel to use the old InputID range check logic in iort_id_map(), so that we can support both firmwares written with the flawed 'Number of IDs' logic and the correct one as defined in the specifications. [0]: http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf Reported-by: Pankaj Bansal <pankaj.bansal@nxp.com> Link: https://lore.kernel.org/linux-acpi/20191215203303.29811-1-pankaj.bansal@nxp.com/ Signed-off-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Pankaj Bansal <pankaj.bansal@nxp.com> Cc: Will Deacon <will@kernel.org> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
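In C terms, the two range checks differ only in the comparison operator; a sketch (field names follow struct acpi_iort_id_mapping; the error handling is illustrative):

    /* Spec-correct bounds test: 'Number of IDs' is the count minus one,
     * so the last valid InputID is input_base + id_count. */
    if (in_id < map->input_base ||
        in_id > map->input_base + map->id_count)
        return -ENXIO;    /* InputID outside this mapping entry */

    /* Legacy test kept as a workaround for firmwares that set the full
     * count; selected by a quirk flag (name not shown here). */
    if (in_id < map->input_base ||
        in_id >= map->input_base + map->id_count)
        return -ENXIO;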
-
Dave Martin authored
The asm-generic/mman.h definitions are used by a few architectures that also define arch-specific PROT flags with value 0x10 and 0x20. This currently applies to sparc and powerpc for 0x10, while arm64 will soon join with 0x10 and 0x20. To help future maintainers, document the use of this flag in the asm-generic header too. Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Dave Martin <Dave.Martin@arm.com> [catalin.marinas@arm.com: reserve 0x20 as well] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Catalin Marinas authored
Currently, the arm64 __cpu_setup has hard-coded constants for the memory attributes that go into the MAIR_EL1 register. Define proper macros in asm/sysreg.h and make use of them in proc.S. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
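A sketch of the kind of macros this adds (names and attribute encodings shown here are indicative of the asm/sysreg.h definitions, not quoted from them):

    /* Each MAIR_EL1 attribute occupies one byte, selected by index. */
    #define MAIR_ATTR_DEVICE_nGnRnE    0x00UL
    #define MAIR_ATTR_NORMAL_NC        0x44UL
    #define MAIR_ATTR_NORMAL           0xffUL
    #define MAIR_ATTRIDX(attr, idx)    ((attr) << ((idx) * 8))

    /* __cpu_setup can then build MAIR_EL1 from named pieces, e.g.:
     *   MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRnE, MT_DEVICE_nGnRnE) |
     *   MAIR_ATTRIDX(MAIR_ATTR_NORMAL, MT_NORMAL) | ...
     * instead of one opaque hard-coded constant. */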
-
Sai Prakash Ranjan authored
The "silver" KRYO3XX and KRYO4XX CPU cores are not affected by Spectre variant 2. Add them to spectre_v2 safe list to correct the spurious ARM_SMCCC_ARCH_WORKAROUND_1 warning and vulnerability status reported under sysfs. Reviewed-by: Stephen Boyd <swboyd@chromium.org> Tested-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> [will: tweaked commit message to remove stale mention of "gold" cores] Signed-off-by: Will Deacon <will@kernel.org>
-
- 16 Jan, 2020 11 commits
-
-
Ard Biesheuvel authored
The code in __cpu_soft_restart() uses x18 as an arbitrary temp register, which will shortly be disallowed. So use x8 instead. Link: https://patchwork.kernel.org/patch/9836877/ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [Sami: updated commit message] Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Ard Biesheuvel authored
In preparation for reserving x18, stop treating it as caller save in the KVM guest entry/exit code. Currently, the code assumes there is no need to preserve it for the host, given that it would have been assumed clobbered anyway by the function call to __guest_enter(). Instead, preserve its value and restore it upon return. Link: https://patchwork.kernel.org/patch/9836891/ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [Sami: updated commit message, switched from x18 to x29 for the guest context] Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Ard Biesheuvel authored
Register x18 will no longer be used as a caller save register in the future, so stop using it in the copy_page() code. Link: https://patchwork.kernel.org/patch/9836869/ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [Sami: changed the offset and bias to be explicit] Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Sami Tolvanen authored
idmap_kpti_install_ng_mappings uses x18 as a temporary register, which will result in a conflict when x18 is reserved. Use x16 and x17 instead where needed. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Sami Tolvanen authored
LLVM's integrated assembler fails with the following error when building KVM: <inline asm>:12:6: error: expected absolute expression .if kvm_update_va_mask == 0 ^ <inline asm>:21:6: error: expected absolute expression .if kvm_update_va_mask == 0 ^ <inline asm>:24:2: error: unrecognized instruction mnemonic NOT_AN_INSTRUCTION ^ LLVM ERROR: Error parsing inline asm These errors come from ALTERNATIVE_CB and __ALTERNATIVE_CFG, which test for the existence of the callback parameter in inline assembly using the following expression: " .if " __stringify(cb) " == 0\n" This works with GNU as, but isn't supported by LLVM. This change splits __ALTERNATIVE_CFG and ALTINSTR_ENTRY into separate macros to fix the LLVM build. Link: https://github.com/ClangBuiltLinux/linux/issues/472 Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Sami Tolvanen authored
Unlike gcc, clang considers each inline assembly block to be independent and therefore, when using the integrated assembler for inline assembly, any preambles that enable features must be repeated in each block. This change defines __LSE_PREAMBLE and adds it to each inline assembly block that has LSE instructions, which allows them to be compiled also with clang's assembler. Link: https://github.com/ClangBuiltLinux/linux/issues/671 Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Andrew Murray <andrew.murray@arm.com> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Will Deacon <will@kernel.org>
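A sketch of the resulting pattern (the exact preamble directive is an assumption; only the shape matters):

    /* Repeat an LSE-enabling preamble in every inline asm block so that
     * clang's integrated assembler accepts the LSE mnemonics. */
    #define __LSE_PREAMBLE    ".arch_extension lse\n"

    static inline void atomic_add_lse(int i, atomic_t *v)
    {
        asm volatile(
        __LSE_PREAMBLE
        "    stadd    %w[i], %[v]\n"
        : [v] "+Q" (v->counter)
        : [i] "r" (i));
    }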
-
Robin Murphy authored
Apparently there exist certain workloads which rely heavily on software checksumming, for which the generic do_csum() implementation becomes a significant bottleneck. Therefore let's give arm64 its own optimised version - for ease of maintenance this foregoes assembly or intrinsics, and is thus not actually arm64-specific, but does rely heavily on C idioms that translate well to the A64 ISA and the typical load/store capabilities of most ARMv8 CPU cores. The resulting increase in checksum throughput scales nicely with buffer size, tending towards 4x for a small in-order core (Cortex-A53), and up to 6x or more for an aggressive big core (Ampere eMAG). Reported-by: Lingyan Huang <huanglingyan2@huawei.com> Tested-by: Lingyan Huang <huanglingyan2@huawei.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
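For illustration, the heart of such a C-idiom approach is wide accumulation with explicit carry propagation followed by a ones'-complement fold; a simplified sketch, not the actual routine (which additionally handles alignment, tails and unrolling):

    /* Accumulate 64-bit words, feeding each carry back into the sum. */
    static u64 csum_accumulate(const u64 *p, size_t nwords)
    {
        u64 sum = 0;

        while (nwords--) {
            u64 prev = sum;

            sum += *p++;
            sum += (sum < prev);    /* end-around carry */
        }
        return sum;
    }

    /* Fold the 64-bit sum down to the final 16-bit checksum value. */
    static u16 csum_fold64(u64 sum)
    {
        sum = (sum & 0xffffffff) + (sum >> 32);
        sum = (sum & 0xffffffff) + (sum >> 32);
        sum = (sum & 0xffff) + (sum >> 16);
        sum = (sum & 0xffff) + (sum >> 16);
        return (u16)sum;
    }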
-
Vladimir Murzin authored
We can extend the user ASID space if it turns out that the system does not require KPTI. We start with kernel ASIDs reserved because CPU caps are not finalized yet, and free them up lazily on the next rollover if we confirm that KPTI is not in use. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Steven Price authored
Cortex-A55 erratum 1530923 allows TLB entries to be allocated as a result of a speculative AT instruction. This may happen in the middle of a guest world switch while the relevant VMSA configuration is in an inconsistent state, leading to erroneous content being allocated into TLBs. The same workaround as is used for Cortex-A76 erratum 1165522 (WORKAROUND_SPECULATIVE_AT_VHE) can be used here. Note that this mandates the use of VHE on affected parts. Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Steven Price authored
To match SPECULATIVE_AT_VHE let's also have a generic name for the NVHE variant. Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Steven Price authored
Cortex-A55 is affected by a similar erratum, so rename the existing workaround for erratum 1165522 so it can be used for both errata. Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
- 15 Jan, 2020 8 commits
-
-
Will Deacon authored
Rather than open-code the extraction of the E0PD field from the MMFR2 register, we can use the cpuid_feature_extract_unsigned_field() helper instead. Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
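Roughly, the change swaps an open-coded shift-and-mask for the generic helper; a sketch (the E0PD shift macro name is assumed):

    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
    u64 e0pd;

    /* Before: open-coded extraction of the 4-bit E0PD field */
    e0pd = (mmfr2 >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf;

    /* After: let the generic helper do the extraction */
    e0pd = cpuid_feature_extract_unsigned_field(mmfr2, ID_AA64MMFR2_E0PD_SHIFT);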
-
Will Deacon authored
Now that the decision to use non-global mappings is stored in a variable, the check to avoid enabling them for the terminally broken ThunderX1 platform can be simplified so that it is only keyed off the MIDR value. Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Vladimir Murzin authored
Use the new 'as-instr' Kconfig macro to define CONFIG_BROKEN_GAS_INST directly, making it available everywhere. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> [will: Drop redundant 'y if' logic] Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Brown authored
Refactor the code which checks to see if we need to use non-global mappings to use a variable instead of checking with the CPU capabilities each time, doing the initial check for KPTI early in boot before we start allocating memory so we still avoid transitioning to non-global mappings in common cases. Since this variable always matches our decision about non-global mappings, we can also combine arm64_kernel_use_ng_mappings() and arm64_unmap_kernel_at_el0() into a single function; the variable simply stores the result and the decision code is elsewhere. We could just have the users check the variable directly, but having a function makes it clear that these uses are read-only. The result is that we simplify the code a bit and reduce the amount of code executed at runtime. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
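Conceptually the refactoring looks like this (variable and accessor names are a sketch of what the commit describes, not quoted from it):

    /* Decision is made once, early in boot, then only ever read. */
    bool arm64_use_ng_mappings;    /* written before memory allocation starts */

    static inline bool arm64_kernel_unmapped_at_el0(void)
    {
        /* read-only accessor; both old helpers collapse onto this */
        return arm64_use_ng_mappings;
    }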
-
Mark Brown authored
Since E0PD is intended to fulfil the same role as KPTI, we don't need to use KPTI on CPUs where E0PD is available; we can rely on E0PD instead. Change the check that forces KPTI on when KASLR is enabled to check for E0PD before doing so; CPUs with E0PD are not expected to be affected by Meltdown, so they should not need to enable KPTI for other reasons. Since E0PD is a system capability we will still enable KPTI if any of the CPUs in the system lacks E0PD; this will rewrite any global mappings that were established in systems where some but not all CPUs support E0PD. We may transiently have a mix of global and non-global mappings while booting, since we use the local CPU when deciding if KPTI will be required prior to completing CPU enumeration, but any global mappings will be converted to non-global ones when KPTI is applied. KPTI can still be forced on from the command line if required. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Brown authored
In preparation for integrating E0PD support with KASLR factor out the checks for interaction between KASLR and KPTI done in boot context into a new function kaslr_requires_kpti(), in the process clarifying the distinction between what we do in boot context and what we do at runtime. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
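The factored-out boot-context check might be sketched as below (the body is illustrative; the E0PD short-circuit added by the commit above is what later hooks in here):

    /*
     * Sketch: "does KASLR force KPTI on this CPU?", asked in boot
     * context, kept separate from the runtime "is KPTI in use?" query.
     */
    static inline bool kaslr_requires_kpti(void)
    {
        if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
            return false;

        /* KASLR only hides the kernel if EL0 cannot probe the kernel
         * half of the address space, which traditionally needs KPTI. */
        return kaslr_offset() > 0;
    }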
-
Mark Brown authored
Kernel Page Table Isolation (KPTI) is used to mitigate some speculation-based security issues by ensuring that the kernel is not mapped when userspace is running, but this approach is expensive and is incompatible with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an alternative which ensures that accesses from userspace to the kernel's half of the memory map always fault in constant time, preventing timing attacks without requiring constant unmapping and remapping or preventing legitimate accesses. Currently this feature will only be enabled if all CPUs in the system support E0PD; if some CPUs do not support the feature at boot time then the feature will not be enabled, and in the unlikely event that a late CPU is the first CPU to lack the feature then we will reject that CPU. This initial patch does not yet integrate with KPTI; this will be dealt with in follow-up patches. Ideally we could ensure that by default we don't use KPTI on CPUs where E0PD is present. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> [will: Fixed typo in Kconfig text] Signed-off-by: Will Deacon <will@kernel.org>
-
Catalin Marinas authored
As the Kconfig syntax gained support for $(as-instr) tests, move the LSE gas support detection from Makefile to the main arm64 Kconfig and remove the additional CONFIG_AS_LSE definition and check. Cc: Will Deacon <will@kernel.org> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-