- 09 Oct, 2020 1 commit
-
-
Will Deacon authored
This reverts commit 353e228e. Qian Cai reports that TX2 no longer boots with his .config as it appears that task_cpu() gets instrumented and used before KASAN has been initialised. Although Mark has a proposed fix, let's take the safe option of reverting this for now and sorting it out properly later.

Link: https://lore.kernel.org/r/711bc57a314d8d646b41307008db2845b7537b3d.camel@redhat.com
Reported-by: Qian Cai <cai@redhat.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 07 Oct, 2020 2 commits
-
-
Will Deacon authored
Late patches for 5.10: MTE selftests, minor KCSAN preparation and removal of some unused prototypes. (Amit Daniel Kachhap and others)

* for-next/late-arrivals:
  arm64: random: Remove no longer needed prototypes
  arm64: initialize per-cpu offsets earlier
  kselftest/arm64: Check mte tagged user address in kernel
  kselftest/arm64: Verify KSM page merge for MTE pages
  kselftest/arm64: Verify all different mmap MTE options
  kselftest/arm64: Check forked child mte memory accessibility
  kselftest/arm64: Verify mte tag inclusion via prctl
  kselftest/arm64: Add utilities and a test to validate mte memory
-
Andre Przywara authored
Commit 9bceb80b ("arm64: kaslr: Use standard early random function") removed the direct calls of the __arm64_rndr() and __early_cpu_has_rndr() functions, but left the dummy prototypes in the #else branch of the #ifdef CONFIG_ARCH_RANDOM guard. Remove the redundant prototypes, as they have no users outside of this header file.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20201006194453.36519-1-andre.przywara@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 05 Oct, 2020 7 commits
-
-
Mark Rutland authored
The current initialization of the per-cpu offset register is difficult to follow, and this initialization is not always early enough for upcoming instrumentation with KCSAN, where the instrumentation callbacks use the per-cpu offset.

To make it possible to support KCSAN, and to simplify reasoning about early bringup code, let's initialize the per-cpu offset earlier, before we run any C code that may consume it. To do so, this patch adds a new init_this_cpu_offset() helper that's called before the usual primary/secondary start functions. For consistency, this is also used to re-initialize the per-cpu offset after the runtime per-cpu areas have been allocated (which can change CPU0's offset).

So that init_this_cpu_offset() isn't subject to any instrumentation that might consume the per-cpu offset, it is marked with noinstr, preventing instrumentation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201005164303.21389-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
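A minimal sketch of the helper described above (kernel context; the body is an assumption based on the commit text, not a quote of the patch):

	/* Marked noinstr so the helper itself cannot be instrumented before
	 * the per-cpu offset it installs is valid. */
	void noinstr init_this_cpu_offset(void)
	{
		unsigned int cpu = task_cpu(current);

		set_my_cpu_offset(per_cpu_offset(cpu));
	}

Per the description, it would be called once from the early primary/secondary boot paths and again after the runtime per-cpu areas have been set up.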
-
Amit Daniel Kachhap authored
Add a testcase to check that a user address with a valid/invalid MTE tag works in kernel mode. This test verifies that the kernel APIs __arch_copy_from_user/__arch_copy_to_user work correctly depending on whether the user pointer has valid or invalid allocation tags.

In MTE sync mode, file memory read/write and other similar interfaces fail if user memory with an invalid tag is accessed in the kernel. In async mode no such failure occurs.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-7-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
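A hedged userspace sketch of the idea (not the selftest itself): make the kernel perform a uaccess through a deliberately mistagged pointer and observe the failure in sync mode. The PROT_MTE/PR_* fallback values below follow the MTE uapi and are assumptions if the system headers do not provide them.

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/prctl.h>
	#include <unistd.h>

	#ifndef PROT_MTE
	#define PROT_MTE		0x20
	#endif
	#ifndef PR_SET_TAGGED_ADDR_CTRL
	#define PR_SET_TAGGED_ADDR_CTRL	55
	#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
	#endif
	#ifndef PR_MTE_TCF_SYNC
	#define PR_MTE_TCF_SYNC		(1UL << 1)
	#endif

	int main(void)
	{
		int fd = open("/proc/self/cmdline", O_RDONLY);

		/* Opt in to tagged addresses with synchronous tag checking. */
		prctl(PR_SET_TAGGED_ADDR_CTRL,
		      PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0);

		char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;

		/* Logical tag 8 does not match the page's allocation tag (0),
		 * so the kernel's uaccess is expected to fault here in sync mode. */
		char *bad = (char *)((unsigned long)buf | (8UL << 56));
		printf("read() via mistagged pointer: %zd\n", read(fd, bad, 16));
		return 0;
	}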
-
Amit Daniel Kachhap authored
Add a testcase to check that KSM should not merge pages containing the same data with same/different MTE tag values. This testcase has one positive test and passes if page merging happens according to the above rule. It also saves and restores any modified KSM sysfs entries.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-6-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Amit Daniel Kachhap authored
This testcase checks the different unsupported/supported options for mmap if used with the PROT_MTE memory protection flag. The checks are:

* Neither enabling PSTATE.TCO nor the prctl PR_MTE_TCF_NONE option should cause any tag mismatch faults.
* Different combinations of anonymous/file memory mmap, mprotect, sync/async error mode and private/shared mappings should work.
* mprotect should not be able to clear the PROT_MTE page property.

Co-developed-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-5-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
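A minimal sketch of the last check above (mprotect must not clear PROT_MTE); the PROT_MTE value is the arm64 uapi constant and is assumed if absent from the headers:

	#include <sys/mman.h>

	#ifndef PROT_MTE
	#define PROT_MTE 0x20
	#endif

	int main(void)
	{
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;

		/* Dropping PROT_MTE here must not disable tag checking on the
		 * mapping; the selftest verifies that tag-mismatch faults can
		 * still be raised afterwards. */
		return mprotect(p, 4096, PROT_READ | PROT_WRITE);
	}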
-
Amit Daniel Kachhap authored
This test covers the MTE memory behaviour of the forked process with different mapping properties and flags. It checks that all bytes of the forked child's memory are accessible with the same tag as that of the parent, and that memory accessed outside the tag range causes a fault.

Co-developed-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-4-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Amit Daniel Kachhap authored
This testcase verifies that the tag generated with the "irg" instruction contains only included tags. This is done via a prctl() call.

This test covers 4 scenarios:

* At least one included tag.
* More than one included tag.
* All included.
* None included.

Co-developed-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-3-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
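A hedged sketch of the prctl() interface being exercised: the include mask selects which tags "irg" may generate. The PR_MTE_* fallback values follow the MTE uapi and are assumptions if the headers lack them.

	#include <sys/prctl.h>

	#ifndef PR_SET_TAGGED_ADDR_CTRL
	#define PR_SET_TAGGED_ADDR_CTRL	55
	#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
	#endif
	#ifndef PR_MTE_TCF_SYNC
	#define PR_MTE_TCF_SYNC		(1UL << 1)
	#define PR_MTE_TAG_SHIFT	3
	#endif

	int main(void)
	{
		unsigned long include_mask = 0x6;	/* only tags 1 and 2 may be generated */

		/* After this, every tag produced by "irg" should be 1 or 2. */
		return prctl(PR_SET_TAGGED_ADDR_CTRL,
			     PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     (include_mask << PR_MTE_TAG_SHIFT), 0, 0, 0);
	}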
-
Amit Daniel Kachhap authored
This test checks that the memory tag is present after MTE allocation and that the memory is accessible with those tags. This testcase verifies the sync, async and none MTE error-reporting modes. The allocated MTE buffers are verified for the allocated range (no error expected while accessing the buffer), the underflow range and the overflow range.

The different test scenarios covered here are:

* Verify that MTE memory is accessible at byte/block level.
* Force underflow and overflow to occur and check the data consistency.
* Check copies to/from between tagged and untagged memory.
* Check that initially allocated memory has tag 0.

This change also creates the necessary infrastructure to add MTE test cases. MTE kselftests can use the several utility functions provided here to add a wide variety of MTE test scenarios.

GCC needs the '-march=armv8.5-a+memtag' flag, so this is verified before compilation.

The MTE testcases can be launched with the kselftest framework as:

  make TARGETS=arm64 ARM64_SUBTARGETS=mte kselftest

or compiled as:

  make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS=mte CC='compiler'

Co-developed-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Gabor Kertesz <gabor.kertesz@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201002115630.24683-2-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
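A hedged sketch of the kind of utility such an infrastructure provides: tag a buffer with "irg"/"stg" so that subsequent byte/block accesses are tag-checked. This is illustrative only and, as noted above, needs '-march=armv8.5-a+memtag' to build.

	#include <stddef.h>

	static void *mte_tag_buffer(void *ptr, size_t size)
	{
		char *p;

		asm volatile("irg %0, %0" : "+r"(ptr));		/* insert a random logical tag */

		for (p = ptr; p < (char *)ptr + size; p += 16)	/* one MTE granule is 16 bytes */
			asm volatile("stg %0, [%0]" : : "r"(p) : "memory");	/* set the allocation tag */

		return ptr;	/* only accesses via this tagged pointer pass the tag check */
	}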
-
- 02 Oct, 2020 3 commits
-
-
Will Deacon authored
Add userspace support for the Memory Tagging Extension introduced by Armv8.5. (Catalin Marinas and others)

* for-next/mte: (30 commits)
  arm64: mte: Fix typo in memory tagging ABI documentation
  arm64: mte: Add Memory Tagging Extension documentation
  arm64: mte: Kconfig entry
  arm64: mte: Save tags when hibernating
  arm64: mte: Enable swap of tagged pages
  mm: Add arch hooks for saving/restoring tags
  fs: Handle intra-page faults in copy_mount_options()
  arm64: mte: ptrace: Add NT_ARM_TAGGED_ADDR_CTRL regset
  arm64: mte: ptrace: Add PTRACE_{PEEK,POKE}MTETAGS support
  arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks
  arm64: mte: Restore the GCR_EL1 register after a suspend
  arm64: mte: Allow user control of the generated random tags via prctl()
  arm64: mte: Allow user control of the tag check mode via prctl()
  mm: Allow arm64 mmap(PROT_MTE) on RAM-based files
  arm64: mte: Validate the PROT_MTE request via arch_validate_flags()
  mm: Introduce arch_validate_flags()
  arm64: mte: Add PROT_MTE support to mmap() and mprotect()
  mm: Introduce arch_calc_vm_flag_bits()
  arm64: mte: Tags-aware aware memcmp_pages() implementation
  arm64: Avoid unnecessary clear_user_page() indirection
  ...
-
Will Deacon authored
Fix and subsequently rewrite Spectre mitigations, including the addition of support for PR_SPEC_DISABLE_NOEXEC. (Will Deacon and Marc Zyngier)

* for-next/ghostbusters: (22 commits)
  arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() option
  arm64: Pull in task_stack_page() to Spectre-v4 mitigation code
  KVM: arm64: Allow patching EL2 vectors even with KASLR is not enabled
  arm64: Get rid of arm64_ssbd_state
  KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()
  KVM: arm64: Get rid of kvm_arm_have_ssbd()
  KVM: arm64: Simplify handling of ARCH_WORKAROUND_2
  arm64: Rewrite Spectre-v4 mitigation code
  arm64: Move SSBD prctl() handler alongside other spectre mitigation code
  arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4
  arm64: Treat SSBS as a non-strict system feature
  arm64: Group start_thread() functions together
  KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2
  arm64: Rewrite Spectre-v2 mitigation code
  arm64: Introduce separate file for spectre mitigations and reporting
  arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2
  KVM: arm64: Simplify install_bp_hardening_cb()
  KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE
  arm64: Remove Spectre-related CONFIG_* options
  arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs
  ...
-
Will Deacon authored
Merge branches 'for-next/acpi', 'for-next/boot', 'for-next/bpf', 'for-next/cpuinfo', 'for-next/fpsimd', 'for-next/misc', 'for-next/mm', 'for-next/pci', 'for-next/perf', 'for-next/ptrauth', 'for-next/sdei', 'for-next/selftests', 'for-next/stacktrace', 'for-next/svm', 'for-next/topology', 'for-next/tpyos' and 'for-next/vdso' into for-next/core

Remove unused functions and parameters from ACPI IORT code. (Zenghui Yu via Lorenzo Pieralisi)
* for-next/acpi:
  ACPI/IORT: Remove the unused inline functions
  ACPI/IORT: Drop the unused @ops of iort_add_device_replay()

Remove redundant code and fix documentation of caching behaviour for the HVC_SOFT_RESTART hypercall. (Pingfan Liu)
* for-next/boot:
  Documentation/kvm/arm: improve description of HVC_SOFT_RESTART
  arm64/relocate_kernel: remove redundant code

Improve reporting of unexpected kernel traps due to BPF JIT failure. (Will Deacon)
* for-next/bpf:
  arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM

Improve robustness of user-visible HWCAP strings and their corresponding numerical constants. (Anshuman Khandual)
* for-next/cpuinfo:
  arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions

Cleanups to handling of SVE and FPSIMD register state in preparation for potential future optimisation of handling across syscalls. (Julien Grall)
* for-next/fpsimd:
  arm64/sve: Implement a helper to load SVE registers from FPSIMD state
  arm64/sve: Implement a helper to flush SVE registers
  arm64/fpsimdmacros: Allow the macro "for" to be used in more cases
  arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN
  arm64/signal: Update the comment in preserve_sve_context
  arm64/fpsimd: Update documentation of do_sve_acc

Miscellaneous changes. (Tian Tao and others)
* for-next/misc:
  arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
  arm64: mm: Fix missing-prototypes in pageattr.c
  arm64/fpsimd: Fix missing-prototypes in fpsimd.c
  arm64: hibernate: Remove unused including <linux/version.h>
  arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR()
  arm64: Remove the unused include statements
  arm64: get rid of TEXT_OFFSET
  arm64: traps: Add str of description to panic() in die()

Memory management updates and cleanups. (Anshuman Khandual and others)
* for-next/mm:
  arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
  arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
  arm64/mm: Unify CONT_PMD_SHIFT
  arm64/mm: Unify CONT_PTE_SHIFT
  arm64/mm: Remove CONT_RANGE_OFFSET
  arm64/mm: Enable THP migration
  arm64/mm: Change THP helpers to comply with generic MM semantics
  arm64/mm/ptdump: Add address markers for BPF regions

Allow prefetchable PCI BARs to be exposed to userspace using normal non-cacheable mappings. (Clint Sbisa)
* for-next/pci:
  arm64: Enable PCI write-combine resources under sysfs

Perf/PMU driver updates. (Julien Thierry and others)
* for-next/perf:
  perf: arm-cmn: Fix conversion specifiers for node type
  perf: arm-cmn: Fix unsigned comparison to less than zero
  arm_pmu: arm64: Use NMIs for PMU
  arm_pmu: Introduce pmu_irq_ops
  KVM: arm64: pmu: Make overflow handler NMI safe
  arm64: perf: Defer irq_work to IPI_IRQ_WORK
  arm64: perf: Remove PMU locking
  arm64: perf: Avoid PMXEV* indirection
  arm64: perf: Add missing ISB in armv8pmu_enable_counter()
  perf: Add Arm CMN-600 PMU driver
  perf: Add Arm CMN-600 DT binding
  arm64: perf: Add support caps under sysfs
  drivers/perf: thunderx2_pmu: Fix memory resource error handling
  drivers/perf: xgene_pmu: Fix uninitialized resource struct
  perf: arm_dsu: Support DSU ACPI devices
  arm64: perf: Remove unnecessary event_idx check
  drivers/perf: hisi: Add missing include of linux/module.h
  arm64: perf: Add general hardware LLC events for PMUv3

Support for the Armv8.3 Pointer Authentication enhancements. (By Amit Daniel Kachhap)
* for-next/ptrauth:
  arm64: kprobe: clarify the comment of steppable hint instructions
  arm64: kprobe: disable probe of fault prone ptrauth instruction
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
  arm64: traps: Allow force_signal_inject to pass esr error code
  arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions

Tonnes of cleanup to the SDEI driver. (Gavin Shan)
* for-next/sdei:
  firmware: arm_sdei: Remove _sdei_event_unregister()
  firmware: arm_sdei: Remove _sdei_event_register()
  firmware: arm_sdei: Introduce sdei_do_local_call()
  firmware: arm_sdei: Cleanup on cross call function
  firmware: arm_sdei: Remove while loop in sdei_event_unregister()
  firmware: arm_sdei: Remove while loop in sdei_event_register()
  firmware: arm_sdei: Remove redundant error message in sdei_probe()
  firmware: arm_sdei: Remove duplicate check in sdei_get_conduit()
  firmware: arm_sdei: Unregister driver on error in sdei_init()
  firmware: arm_sdei: Avoid nested statements in sdei_init()
  firmware: arm_sdei: Retrieve event number from event instance
  firmware: arm_sdei: Common block for failing path in sdei_event_create()
  firmware: arm_sdei: Remove sdei_is_err()

Selftests for Pointer Authentication and FPSIMD/SVE context-switching. (Mark Brown and Boyan Karatotev)
* for-next/selftests:
  selftests: arm64: Add build and documentation for FP tests
  selftests: arm64: Add wrapper scripts for stress tests
  selftests: arm64: Add utility to set SVE vector lengths
  selftests: arm64: Add stress tests for FPSMID and SVE context switching
  selftests: arm64: Add test for the SVE ptrace interface
  selftests: arm64: Test case for enumeration of SVE vector lengths
  kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys
  kselftests/arm64: add PAuth test for whether exec() changes keys
  kselftests/arm64: add nop checks for PAuth tests
  kselftests/arm64: add a basic Pointer Authentication test

Implementation of ARCH_STACKWALK for unwinding. (Mark Brown)
* for-next/stacktrace:
  arm64: Move console stack display code to stacktrace.c
  arm64: stacktrace: Convert to ARCH_STACKWALK
  arm64: stacktrace: Make stack walk callback consistent with generic code
  stacktrace: Remove reliable argument from arch_stack_walk() callback

Support for ASID pinning, which is required when sharing page-tables with the SMMU. (Jean-Philippe Brucker)
* for-next/svm:
  arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
  arm64: mm: Pin down ASIDs for sharing mm with devices

Rely on firmware tables for establishing CPU topology. (Valentin Schneider)
* for-next/topology:
  arm64: topology: Stop using MPIDR for topology information

Spelling fixes. (Xiaoming Ni and Yanfei Xu)
* for-next/tpyos:
  arm64/numa: Fix a typo in comment of arm64_numa_init
  arm64: fix some spelling mistakes in the comments by codespell

vDSO cleanups. (Will Deacon)
* for-next/vdso:
  arm64: vdso: Fix unusual formatting in *setup_additional_pages()
  arm64: vdso32: Remove a bunch of #ifdef CONFIG_COMPAT_VDSO guards
-
- 01 Oct, 2020 4 commits
-
-
Will Deacon authored
The node type field is an enum type, so print it as a 32-bit quantity rather than as an unsigned short.

Link: https://lore.kernel.org/r/202009302350.QIzfkx62-lkp@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
Ensure that the 'irq' field of 'struct arm_cmn_dtc' is a signed int so that it can be compared '< 0'.

Link: https://lore.kernel.org/r/20200929170835.GA15956@embeddedor
Addresses-Coverity-ID: 1497488 ("Unsigned compared against 0")
Fixes: 0ba64770 ("perf: Add Arm CMN-600 PMU driver")
Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
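A generic illustration of the bug class being fixed (not the driver code itself): an unsigned field silently defeats the '< 0' error check.

	#include <stdio.h>

	static int get_irq_stub(void) { return -6; }	/* stands in for a negative errno return */

	int main(void)
	{
		unsigned int irq_bad = get_irq_stub();
		int irq_good = get_irq_stub();

		if (irq_bad < 0)	/* never true: the value wrapped to a huge unsigned number */
			printf("unreachable\n");
		if (irq_good < 0)	/* works once the field is a signed int */
			printf("error detected: %d\n", irq_good);
		return 0;
	}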
-
Will Deacon authored
TCR_EL1.HD is permitted to be cached in a TLB, so invalidate the local TLB after setting the bit when support for the feature is detected. Although this isn't strictly necessary, since we can happily operate with the bit effectively clear, the current code uses an ISB in a half-hearted attempt to make the change effective, so let's just fix that up.

Link: https://lore.kernel.org/r/20201001110405.18617-1-will@kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
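A hedged sketch of the enable path (kernel context; the helper names are the usual arm64 ones, but the exact call site is an assumption, not a quote of the patch):

	static inline void enable_hw_dbm_sketch(void)
	{
		sysreg_clear_set(tcr_el1, 0, TCR_HD);	/* enable hardware dirty-bit management */
		isb();
		local_flush_tlb_all();	/* TCR_EL1.HD may be cached in the TLB, so shoot it down locally */
	}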
-
Will Deacon authored
Our use of broadcast TLB maintenance means that spurious page-faults that have been handled already by another CPU do not require additional TLB maintenance. Make flush_tlb_fix_spurious_fault() a no-op and rely on the existing TLB invalidation instead. Add an explicit flush_tlb_page() when making a page dirty, as the TLB is permitted to cache the old read-only entry.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200728092220.GA21800@willie-the-truck
Signed-off-by: Will Deacon <will@kernel.org>
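The resulting definition is essentially an empty statement; a sketch of how such a no-op typically looks in the arm64 headers (the exact form chosen by the patch may differ):

	#define flush_tlb_fix_spurious_fault(vma, address)	do { } while (0)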
-
- 29 Sep, 2020 20 commits
-
-
Will Deacon authored
The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl() allows the SSB mitigation to be enabled only until the next execve(), at which point the state will revert back to PR_SPEC_ENABLE and the mitigation will be disabled. Add support for PR_SPEC_DISABLE_NOEXEC on arm64.

Reported-by: Anthony Steinhauser <asteinhauser@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
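From userspace, the option is requested through the speculation-control prctl() documented in prctl(2); the constants come from <linux/prctl.h>:

	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		/* Disable speculative store bypass for this task, but let the
		 * mitigation be dropped automatically at the next execve(). */
		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			  PR_SPEC_DISABLE_NOEXEC, 0, 0))
			perror("PR_SPEC_DISABLE_NOEXEC");
		return 0;
	}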
-
Will Deacon authored
The kbuild robot reports that we're relying on an implicit inclusion to get a definition of task_stack_page() in the Spectre-v4 mitigation code, which is not always in place for some configurations:

 | arch/arm64/kernel/proton-pack.c:329:2: error: implicit declaration of function 'task_stack_page' [-Werror,-Wimplicit-function-declaration]
 |         task_pt_regs(task)->pstate |= val;
 |         ^
 | arch/arm64/include/asm/processor.h:268:36: note: expanded from macro 'task_pt_regs'
 |         ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
 |                                           ^
 | arch/arm64/kernel/proton-pack.c:329:2: note: did you mean 'task_spread_page'?

Add the missing include to fix the build error.

Fixes: a44acf477220 ("arm64: Move SSBD prctl() handler alongside other spectre mitigation code")
Reported-by: Anthony Steinhauser <asteinhauser@google.com>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/202009260013.Ul7AD29w%lkp@intel.com
Signed-off-by: Will Deacon <will@kernel.org>
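The likely shape of the fix (the exact header chosen by the patch is an assumption): pull in the declaration explicitly instead of relying on indirect inclusion.

	/* arch/arm64/kernel/proton-pack.c */
	#include <linux/sched/task_stack.h>	/* for task_stack_page(), used by task_pt_regs() */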
-
Will Deacon authored
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled.

Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE")
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%lkp@intel.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Out with the old ghost, in with the new... Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Convert the KVM WA2 code to using the Spectre infrastructure, making the code much more readable. It also allows us to take SSBS into account for the mitigation. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
kvm_arm_have_ssbd() is now completely unused, get rid of it. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or not mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The is_ttbrX_addr() functions have somehow ended up in the middle of the start_thread() functions, so move them out of the way to keep the code readable. Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
If the system is not affected by Spectre-v2, then advertise to the KVM guest that it is not affected, without the need for a safelist in the guest. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The spectre mitigation code is spread over a few different files, which makes it both hard to follow, but also hard to remove it should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
Use is_hyp_mode_available() to detect whether or not we need to patch the KVM vectors for branch hardening, which avoids the need to take the vector pointers as parameters. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Commit 606f8e7b ("arm64: capabilities: Use linear array for detection and verification") changed the way we deal with per-CPU errata by only calling the .matches() callback until one CPU is found to be affected. At this point, .matches() stops being called, and .cpu_enable() will be called on all CPUs.

This breaks the ARCH_WORKAROUND_2 handling, as only a single CPU will be mitigated. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour.

Fixes: 606f8e7b ("arm64: capabilities: Use linear array for detection and verification")
Cc: <stable@vger.kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 28 Sep, 2020 3 commits
-
-
Will Deacon authored
We offer both PTRACE_PEEKMTETAGS and PTRACE_POKEMTETAGS requests via ptrace(). Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Will Deacon <will@kernel.org>
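A hedged tracer-side sketch of the interface named above: 'data' points to a struct iovec describing a buffer holding one tag byte per 16-byte granule, starting at 'addr' in the tracee. The PTRACE_PEEKMTETAGS fallback value is the arm64 uapi number and is an assumption if the headers lack it.

	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>

	#ifndef PTRACE_PEEKMTETAGS
	#define PTRACE_PEEKMTETAGS 33
	#endif

	int main(void)
	{
		pid_t pid = 0;			/* pid of an already-attached tracee */
		void *addr = NULL;		/* tagged address inside the tracee */
		unsigned char tags[16];
		struct iovec iov = { .iov_base = tags, .iov_len = sizeof(tags) };

		if (ptrace(PTRACE_PEEKMTETAGS, pid, addr, &iov) == 0)
			printf("read %zu tag bytes\n", iov.iov_len);
		return 0;
	}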
-
Jean-Philippe Brucker authored
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to share CPU page tables with devices. Allow the driver to be built as a module by exporting the read_sanitised_ftr_reg() cpufeature symbol.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200918101852.582559-7-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Jean-Philippe Brucker authored
To enable address space sharing with the IOMMU, introduce arm64_mm_context_get() and arm64_mm_context_put(), that pin down a context and ensure that it will keep its ASID after a rollover. Export the symbols to let the modular SMMUv3 driver use them.

Pinning is necessary because a device constantly needs a valid ASID, unlike tasks that only require one when running. Without pinning, we would need to notify the IOMMU when we're about to use a new ASID for a task, and it would get complicated when a new task is assigned a shared ASID. Consider the following scenario with no ASID pinned:

1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)

We would now have to immediately generate a new ASID for t1, notify the IOMMU, and finally enable task tn. We are holding the lock during all that time, since we can't afford having another CPU trigger a rollover. The IOMMU issues invalidation commands that can take tens of milliseconds.

It gets needlessly complicated. All we wanted to do was schedule task tn, that has no business with the IOMMU. By letting the IOMMU pin tasks when needed, we avoid stalling the slow path, and let the pinning fail when we're out of shareable ASIDs.

After a rollover, the allocator expects at least one ASID to be available in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS - 1) is the maximum number of ASIDs that can be shared with the IOMMU.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-5-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
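A hedged sketch of how a driver such as the SMMUv3 SVA code might use the new interface; the two function names are those introduced above, while the surrounding code is illustrative only (kernel context):

	static int share_mm_with_device(struct mm_struct *mm)
	{
		unsigned long asid;

		asid = arm64_mm_context_get(mm);	/* pin: the ASID now survives rollover */
		if (!asid)
			return -ENOSPC;			/* out of shareable ASIDs */

		/* ... program the device's context descriptor with 'asid' ... */
		return 0;
	}

	static void unshare_mm_with_device(struct mm_struct *mm)
	{
		/* ... quiesce DMA and invalidate the device TLBs for this ASID ... */
		arm64_mm_context_put(mm);		/* unpin once the device no longer uses it */
	}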
-