Commit 4afe8e79 authored by Suzuki K Poulose, committed by Catalin Marinas

arm64: cpufeature: Trap CTR_EL0 access only where it is necessary

When there is a mismatch in a CTR_EL0 field, we trap
CTR_EL0 accesses from EL0 on all CPUs to expose the
system-wide safe value. However, we can skip the trap on
any CPU whose CTR_EL0 already matches the safe value.
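
For context, an illustrative sketch (not part of this patch): the snippet
below shows how EL0 software typically reads CTR_EL0. On a CPU where
SCTLR_EL1.UCT has been cleared, the MRS instruction traps to EL1 and the
kernel emulates the read, returning the system-wide safe value; with this
change, CPUs whose CTR_EL0 already matches the safe value avoid that trap.

  /*
   * Illustrative userspace program, assumed for demonstration only.
   * Reads the Cache Type Register directly from EL0.
   */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint64_t ctr;

  	/* Direct MRS read of CTR_EL0; may be trapped and emulated */
  	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
  	printf("CTR_EL0 = 0x%016llx\n", (unsigned long long)ctr);
  	return 0;
  }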

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent 1602df02
@@ -99,7 +99,12 @@ has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
 static void
 cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 {
-	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
+	u64 mask = arm64_ftr_reg_ctrel0.strict_mask;
+
+	/* Trap CTR_EL0 access on this CPU, only if it has a mismatch */
+	if ((read_cpuid_cachetype() & mask) !=
+	    (arm64_ftr_reg_ctrel0.sys_val & mask))
+		sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
 }
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);