Commit da12d273 authored by Catalin Marinas

Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core

* for-next/memory-hotremove:
  : Memory hot-remove support for arm64
  arm64/mm: Enable memory hot remove
  arm64/mm: Hold memory hotplug lock while walking for kernel page table dump

* for-next/arm_sdei:
  : SDEI: fix double locking on return from hibernate and clean-up
  firmware: arm_sdei: clean up sdei_event_create()
  firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
  firmware: arm_sdei: fix possible double-lock on hibernate error path
  firmware: arm_sdei: fix double-lock on hibernate with shared events

* for-next/amu:
  : ARMv8.4 Activity Monitors support
  clocksource/drivers/arm_arch_timer: validate arch_timer_rate
  arm64: use activity monitors for frequency invariance
  cpufreq: add function to get the hardware max frequency
  Documentation: arm64: document support for the AMU extension
  arm64/kvm: disable access to AMU registers from kvm guests
  arm64: trap to EL1 accesses to AMU counters from EL0
  arm64: add support for the AMU extension v1

* for-next/final-cap-helper:
  : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
  arm64: kvm: hyp: use cpus_have_final_cap()
  arm64: cpufeature: add cpus_have_final_cap()

* for-next/cpu_ops-cleanup:
  : cpu_ops[] access code clean-up
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed

* for-next/misc:
  : Various fixes and clean-ups
  arm64: define __alloc_zeroed_user_highpage
  arm64/kernel: Simplify __cpu_up() by bailing out early
  arm64: remove redundant blank for '=' operator
  arm64: kexec_file: Fixed code style.
  arm64: add blank after 'if'
  arm64: fix spelling mistake "ca not" -> "cannot"
  arm64: entry: unmask IRQ in el0_sp()
  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
  arm64: csum: Optimise IPv6 header checksum
  arch/arm64: fix typo in a comment
  arm64: remove gratuitious/stray .ltorg stanzas
  arm64: Update comment for ASID() macro
  arm64: mm: convert cpu_do_switch_mm() to C
  arm64: fix NUMA Kconfig typos

* for-next/perf:
  : arm64 perf updates
  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  arm64: perf: Clean up enable/disable calls
  perf: arm-ccn: Use scnprintf() for robustness
  arm64: perf: Support new DT compatibles
  arm64: perf: Refactor PMU init callbacks
  perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
=======================================================
Activity Monitors Unit (AMU) extension in AArch64 Linux
=======================================================
:Author: Ionela Voinescu <ionela.voinescu@arm.com>
:Date: 2019-09-10

This document briefly describes the provision of Activity Monitors Unit
support in AArch64 Linux.

Architecture overview
---------------------

The activity monitors extension is an optional extension introduced by the
ARMv8.4 CPU architecture.
The activity monitors unit, implemented in each CPU, provides performance
counters intended for system management use. The AMU extension provides a
system register interface to the counter registers and also supports an
optional external memory-mapped interface.
Version 1 of the Activity Monitors architecture implements a counter group
of four fixed and architecturally defined 64-bit event counters.
- CPU cycle counter: increments at the frequency of the CPU.
- Constant counter: increments at the fixed frequency of the system
clock.
- Instructions retired: increments with every architecturally executed
instruction.
- Memory stall cycles: counts instruction dispatch stall cycles caused by
misses in the last level cache within the clock domain.
These counters do not increment while the CPU is in WFI or WFE.
The Activity Monitors architecture provides space for up to 16 architected
event counters. Future versions of the architecture may use this space to
implement additional architected event counters.
Additionally, version 1 implements a counter group of up to 16 auxiliary
64-bit event counters.
On cold reset all counters reset to 0.

Basic support
-------------

The kernel can safely run a mix of CPUs with and without support for the
activity monitors extension. Therefore, when CONFIG_ARM64_AMU_EXTN is
selected we unconditionally enable the capability to allow any late CPU
(secondary or hotplugged) to detect and use the feature.
When the feature is detected on a CPU we flag its availability, but this
only guarantees the presence of the extension, not the correct
functionality of the counters.
Firmware (code running at higher exception levels, e.g. arm-tf) support is
needed to:
- Enable access for lower exception levels (EL2 and EL1) to the AMU
registers.
- Enable the counters. If not enabled these will read as 0.
- Save/restore the counters before the CPU is put into, and after it is
  brought up from, the 'off' power state.
When using kernels that have this feature enabled but boot with broken
firmware the user may experience panics or lockups when accessing the
counter registers. Even if these symptoms are not observed, the values
returned by the register reads might not correctly reflect reality. Most
commonly, the counters will read as 0, indicating that they are not
enabled.
If proper support is not provided in firmware it's best to disable
CONFIG_ARM64_AMU_EXTN. Note that for security reasons this does not
bypass the setting of AMUSERENR_EL0, which traps accesses from EL0
(userspace) to EL1 (kernel). Therefore, firmware should still ensure
that accesses to the AMU registers are not trapped in EL2/EL3.
The fixed counters of AMUv1 are accessible through the following system
register definitions:
- SYS_AMEVCNTR0_CORE_EL0
- SYS_AMEVCNTR0_CONST_EL0
- SYS_AMEVCNTR0_INST_RET_EL0
- SYS_AMEVCNTR0_MEM_STALL_EL0
Auxiliary platform specific counters can be accessed using
SYS_AMEVCNTR1_EL0(n), where n is a value between 0 and 15.
Details can be found in: arch/arm64/include/asm/sysreg.h.
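
As an illustration (not part of this patch set), a CPU flagged by
cpu_has_amu_feat() could have its fixed counters read as follows; the
function below is hypothetical and must run on the CPU in question with
preemption disabled::

    static void example_read_amu_counters(void)
    {
            int cpu = smp_processor_id();

            if (!cpu_has_amu_feat(cpu))
                    return;

            /* counts at the current CPU frequency */
            pr_info("core: %llu\n", read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0));
            /* counts at the fixed system counter frequency */
            pr_info("const: %llu\n", read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0));
    }
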
Userspace access
----------------
Currently, access from userspace to the AMU registers is disabled due to:
- Security reasons: they might expose information about code executed in
secure mode.
- Purpose: AMU counters are intended for system management use.
Also, the presence of the feature is not visible to userspace.

Virtualization
--------------

Currently, access from userspace (EL0) and kernelspace (EL1) on the KVM
guest side is disabled due to:
- Security reasons: they might expose information about code executed
by other guests or the host.
Any attempt to access the AMU registers will result in an UNDEFINED
exception being injected into the guest.
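
For reference, the guest-side behaviour described above boils down to a
sys_reg access handler of the following shape, modelled on the
"arm64/kvm: disable access to AMU registers from kvm guests" patch in
this series::

    static bool access_amu(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                           const struct sys_reg_desc *r)
    {
            /* refuse all guest accesses to AMU registers */
            kvm_inject_undefined(vcpu);
            return true;
    }
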
@@ -248,6 +248,20 @@ Before jumping into the kernel, the following conditions must be met:

    - HCR_EL2.APK (bit 40) must be initialised to 0b1
    - HCR_EL2.API (bit 41) must be initialised to 0b1

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

  - If EL3 is present:
    CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.
  - If the kernel is entered at EL1:
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
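
A minimal kernel-side sanity check of the requirements above might look
like this (hypothetical, not part of this series; AMCNTENSET0_EL0 is
readable once the EL2/EL3 traps are configured as described)::

    /* all four fixed AMUv1 counters should read back as enabled */
    if ((read_sysreg_s(SYS_AMCNTENSET0_EL0) & 0xf) != 0xf)
            pr_warn("AMU fixed counters not enabled by firmware\n");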
...
@@ -6,6 +6,7 @@ ARM64 Architecture

   :maxdepth: 1

   acpi_object_usage
   amu
   arm-acpi
   booting
   cpu-feature-registers
...
@@ -955,11 +955,11 @@ config HOTPLUG_CPU

# Common NUMA Features
config NUMA
	bool "NUMA Memory Allocation and Scheduler Support"
	select ACPI_NUMA if ACPI
	select OF_NUMA
	help
	  Enable NUMA (Non-Uniform Memory Access) support.

	  The kernel will try to allocate memory used by a CPU on the
	  local memory of the CPU and add some more

@@ -1520,6 +1520,33 @@ config ARM64_PTR_AUTH

endmenu

menu "ARMv8.4 architectural features"

config ARM64_AMU_EXTN
	bool "Enable support for the Activity Monitors Unit CPU extension"
	default y
	help
	  The activity monitors extension is an optional extension introduced
	  by the ARMv8.4 CPU architecture. This enables support for version 1
	  of the activity monitors architecture, AMUv1.

	  To enable the use of this extension on CPUs that implement it, say Y.

	  Note that for architectural reasons, firmware _must_ implement AMU
	  support when running on CPUs that present the activity monitors
	  extension. The required support is present in:
	    * Version 1.5 and later of the ARM Trusted Firmware

	  For kernels that have this configuration enabled but boot with broken
	  firmware, you may need to say N here until the firmware is fixed.
	  Otherwise you may experience firmware panics or lockups when
	  accessing the counter registers. Even if you are not observing these
	  symptoms, the values returned by the register reads might not
	  correctly reflect reality. Most commonly, the value read will be 0,
	  indicating that the counter is not enabled.

endmenu

menu "ARMv8.5 architectural features"

config ARM64_E0PD
...
@@ -256,12 +256,6 @@ alternative_endif
	ldr	\rd, [\rn, #VMA_VM_MM]
	.endm

-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
-	.macro	mmid, rd, rn
-	ldr	\rd, [\rn, #MM_CONTEXT_ID]
-	.endm

/*
 * read_ctr - read CTR_EL0. If the system has mismatched register fields,
 * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val

@@ -430,6 +424,16 @@ USER(\label, ic ivau, \tmp2)		// invalidate I line PoU
9000:
	.endm

/*
 * reset_amuserenr_el0 - reset AMUSERENR_EL0 if AMUv1 present
 */
	.macro	reset_amuserenr_el0, tmpreg
	mrs	\tmpreg, id_aa64pfr0_el1	// Check ID_AA64PFR0_EL1
	ubfx	\tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
	cbz	\tmpreg, .Lskip_\@		// Skip if no AMU present
	msr_s	SYS_AMUSERENR_EL0, xzr	// Disable AMU access from EL0
.Lskip_\@:
	.endm

/*
 * copy_page - copy src to dest using temp registers t1-t8
 */
...
@@ -5,7 +5,12 @@
#ifndef __ASM_CHECKSUM_H
#define __ASM_CHECKSUM_H

#include <linux/in6.h>

#define _HAVE_ARCH_IPV6_CSUM
__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
			const struct in6_addr *daddr,
			__u32 len, __u8 proto, __wsum sum);

static inline __sum16 csum_fold(__wsum csum)
{
...
@@ -55,12 +55,12 @@ struct cpu_operations {
#endif
};

int __init init_cpu_ops(int cpu);
extern const struct cpu_operations *get_cpu_ops(int cpu);

static inline void __init init_bootcpu_ops(void)
{
	init_cpu_ops(0);
}

#endif /* ifndef __ASM_CPU_OPS_H */

@@ -58,7 +58,8 @@
#define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE	48
#define ARM64_HAS_E0PD				49
#define ARM64_HAS_RNG				50
#define ARM64_HAS_AMU_EXTN			51

#define ARM64_NCAPS				52

#endif /* __ASM_CPUCAPS_H */
@@ -390,14 +390,16 @@ unsigned long cpu_get_elf_hwcap2(void);
#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))

static __always_inline bool system_capabilities_finalized(void)
{
	return static_branch_likely(&arm64_const_caps_ready);
}

/*
 * Test for a capability with a runtime check.
 *
 * Before the capability is detected, this returns false.
 */
static inline bool cpus_have_cap(unsigned int num)
{
	if (num >= ARM64_NCAPS)

@@ -405,14 +407,53 @@ static inline bool cpus_have_cap(unsigned int num)
		return false;
	return test_bit(num, cpu_hwcaps);
}

/*
 * Test for a capability without a runtime check.
 *
 * Before capabilities are finalized, this returns false.
 * After capabilities are finalized, this is patched to avoid a runtime check.
 *
 * @num must be a compile-time constant.
 */
static __always_inline bool __cpus_have_const_cap(int num)
{
	if (num >= ARM64_NCAPS)
		return false;
	return static_branch_unlikely(&cpu_hwcap_keys[num]);
}

/*
 * Test for a capability, possibly with a runtime check.
 *
 * Before capabilities are finalized, this behaves as cpus_have_cap().
 * After capabilities are finalized, this is patched to avoid a runtime check.
 *
 * @num must be a compile-time constant.
 */
static __always_inline bool cpus_have_const_cap(int num)
{
	if (system_capabilities_finalized())
		return __cpus_have_const_cap(num);
	else
		return cpus_have_cap(num);
}

/*
 * Test for a capability without a runtime check.
 *
 * Before capabilities are finalized, this will BUG().
 * After capabilities are finalized, this is patched to avoid a runtime check.
 *
 * @num must be a compile-time constant.
 */
static __always_inline bool cpus_have_final_cap(int num)
{
	if (system_capabilities_finalized())
		return __cpus_have_const_cap(num);
	else
		BUG();
}
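
An illustrative (hypothetical) call site: in code that can only run after
capabilities are finalized, such as the KVM hyp code migrated by the
'for-next/final-cap-helper' branch, the check compiles down to a patched
static branch with no runtime fallback:

	if (cpus_have_final_cap(ARM64_HAS_RNG))
		fast_path();	/* hypothetical fast path */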
static inline void cpus_set_cap(unsigned int num)
{
	if (num >= ARM64_NCAPS) {

@@ -447,6 +488,29 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
	return cpuid_feature_extract_unsigned_field_width(features, field, 4);
}

/*
 * Fields that identify the version of the Performance Monitors Extension do
 * not follow the standard ID scheme. See ARM DDI 0487E.a page D13-2825,
 * "Alternative ID scheme used for the Performance Monitors Extension version".
 */
static inline u64 __attribute_const__
cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
{
	u64 val = cpuid_feature_extract_unsigned_field(features, field);
	u64 mask = GENMASK_ULL(field + 3, field);

	/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
	if (val == 0xf)
		val = 0;

	if (val > cap) {
		features &= ~mask;
		features |= (cap << field) & mask;
	}

	return features;
}
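
A usage sketch, along the lines of the "KVM: arm64: limit PMU version to
PMUv3 for ARMv8.1" patch in this series (ID_AA64DFR0_PMUVER_SHIFT is
defined elsewhere in sysreg.h):

	/* cap the PMU version a guest sees to ARMv8.1 PMUv3 */
	val = cpuid_feature_cap_perfmon_field(val, ID_AA64DFR0_PMUVER_SHIFT,
					      ID_AA64DFR0_PMUVER_8_1);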
static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
{
	return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);

@@ -613,11 +677,6 @@ static inline bool system_has_prio_mask_debugging(void)
	       system_uses_irq_prio_masking();
}

-static inline bool system_capabilities_finalized(void)
-{
-	return static_branch_likely(&arm64_const_caps_ready);
-}

#define ARM64_BP_HARDEN_UNKNOWN		-1
#define ARM64_BP_HARDEN_WA_NEEDED	0
#define ARM64_BP_HARDEN_NOT_REQUIRED	1

@@ -678,6 +737,11 @@ static inline bool cpu_has_hw_af(void)
				  ID_AA64MMFR1_HADBS_SHIFT);
}

#ifdef CONFIG_ARM64_AMU_EXTN
/* Check whether the cpu supports the Activity Monitors Unit (AMU) */
extern bool cpu_has_amu_feat(int cpu);
#endif

#endif /* __ASSEMBLY__ */
#endif
@@ -60,7 +60,7 @@
#define ESR_ELx_EC_BKPT32	(0x38)
/* Unallocated EC: 0x39 */
#define ESR_ELx_EC_VECTOR32	(0x3A)	/* EL2 only */
/* Unallocated EC: 0x3B */
#define ESR_ELx_EC_BRK64	(0x3C)
/* Unallocated EC: 0x3D - 0x3F */
#define ESR_ELx_EC_MAX		(0x3F)
...
@@ -267,6 +267,7 @@
/* Hyp Coprocessor Trap Register */
#define CPTR_EL2_TCPAC	(1 << 31)
#define CPTR_EL2_TAM	(1 << 30)
#define CPTR_EL2_TTA	(1 << 20)
#define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
#define CPTR_EL2_TZ	(1 << 8)
...
@@ -23,9 +23,9 @@ typedef struct {
} mm_context_t;

/*
 * This macro is only used by the TLBI and low-level switch_mm() code,
 * neither of which can race with an ASID change. We therefore don't
 * need to reload the counter using atomic64_read().
 */
#define ASID(mm)	((mm)->context.id.counter & 0xffff)
...
@@ -46,6 +46,8 @@ static inline void cpu_set_reserved_ttbr0(void)
	isb();
}

void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);

static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
{
	BUG_ON(pgd == swapper_pg_dir);
...
@@ -21,6 +21,10 @@ extern void __cpu_copy_user_page(void *to, const void *from,
extern void copy_page(void *to, const void *from);
extern void clear_page(void *to);

#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE

#define clear_user_page(addr,vaddr,pg)  __cpu_clear_user_page(addr, vaddr)
#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
...
@@ -176,9 +176,10 @@
#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
#define ARMV8_PMU_PMCR_N_SHIFT	11  /* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK	0x1f
#define ARMV8_PMU_PMCR_MASK	0xff    /* Mask for writable bits */

/*
 * PMOVSR: counters overflow flag status reg
...
@@ -13,11 +13,9 @@
#include <asm/page.h>

-struct mm_struct;
struct cpu_suspend_ctx;

extern void cpu_do_idle(void);
-extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
...
@@ -386,6 +386,42 @@
#define SYS_TPIDR_EL0			sys_reg(3, 3, 13, 0, 2)
#define SYS_TPIDRRO_EL0			sys_reg(3, 3, 13, 0, 3)

/* Definitions for system register interface to AMU for ARMv8.4 onwards */
#define SYS_AM_EL0(crm, op2)		sys_reg(3, 3, 13, (crm), (op2))
#define SYS_AMCR_EL0			SYS_AM_EL0(2, 0)
#define SYS_AMCFGR_EL0			SYS_AM_EL0(2, 1)
#define SYS_AMCGCR_EL0			SYS_AM_EL0(2, 2)
#define SYS_AMUSERENR_EL0		SYS_AM_EL0(2, 3)
#define SYS_AMCNTENCLR0_EL0		SYS_AM_EL0(2, 4)
#define SYS_AMCNTENSET0_EL0		SYS_AM_EL0(2, 5)
#define SYS_AMCNTENCLR1_EL0		SYS_AM_EL0(3, 0)
#define SYS_AMCNTENSET1_EL0		SYS_AM_EL0(3, 1)

/*
 * Group 0 of activity monitors (architected):
 *                op0  op1  CRn   CRm       op2
 * Counter:       11   011  1101  010:n<3>  n<2:0>
 * Type:          11   011  1101  011:n<3>  n<2:0>
 * n: 0-15
 *
 * Group 1 of activity monitors (auxiliary):
 *                op0  op1  CRn   CRm       op2
 * Counter:       11   011  1101  110:n<3>  n<2:0>
 * Type:          11   011  1101  111:n<3>  n<2:0>
 * n: 0-15
 */

#define SYS_AMEVCNTR0_EL0(n)		SYS_AM_EL0(4 + ((n) >> 3), (n) & 7)
#define SYS_AMEVTYPE0_EL0(n)		SYS_AM_EL0(6 + ((n) >> 3), (n) & 7)
#define SYS_AMEVCNTR1_EL0(n)		SYS_AM_EL0(12 + ((n) >> 3), (n) & 7)
#define SYS_AMEVTYPE1_EL0(n)		SYS_AM_EL0(14 + ((n) >> 3), (n) & 7)

/* AMU v1: Fixed (architecturally defined) activity monitors */
#define SYS_AMEVCNTR0_CORE_EL0		SYS_AMEVCNTR0_EL0(0)
#define SYS_AMEVCNTR0_CONST_EL0		SYS_AMEVCNTR0_EL0(1)
#define SYS_AMEVCNTR0_INST_RET_EL0	SYS_AMEVCNTR0_EL0(2)
#define SYS_AMEVCNTR0_MEM_STALL		SYS_AMEVCNTR0_EL0(3)
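
/*
 * Worked example of the encoding above (illustrative): the memory stall
 * counter is fixed counter n = 3, so CRm = 4 + (3 >> 3) = 4 (0b0100,
 * matching "010:n<3>" with n<3> = 0) and op2 = 3 & 7 = 3, giving
 * SYS_AMEVCNTR0_MEM_STALL == sys_reg(3, 3, 13, 4, 3), i.e. the register
 * asm can name as s3_3_c13_c4_3.
 */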
#define SYS_CNTFRQ_EL0			sys_reg(3, 3, 14, 0, 0)

#define SYS_CNTP_TVAL_EL0		sys_reg(3, 3, 14, 2, 0)

@@ -598,6 +634,7 @@
#define ID_AA64PFR0_CSV3_SHIFT		60
#define ID_AA64PFR0_CSV2_SHIFT		56
#define ID_AA64PFR0_DIT_SHIFT		48
#define ID_AA64PFR0_AMU_SHIFT		44
#define ID_AA64PFR0_SVE_SHIFT		32
#define ID_AA64PFR0_RAS_SHIFT		28
#define ID_AA64PFR0_GIC_SHIFT		24

@@ -608,6 +645,7 @@
#define ID_AA64PFR0_EL1_SHIFT		4
#define ID_AA64PFR0_EL0_SHIFT		0

#define ID_AA64PFR0_AMU			0x1
#define ID_AA64PFR0_SVE			0x1
#define ID_AA64PFR0_RAS_V1		0x1
#define ID_AA64PFR0_FP_NI		0xf

@@ -702,6 +740,16 @@
#define ID_AA64DFR0_TRACEVER_SHIFT	4
#define ID_AA64DFR0_DEBUGVER_SHIFT	0

#define ID_AA64DFR0_PMUVER_8_0		0x1
#define ID_AA64DFR0_PMUVER_8_1		0x4
#define ID_AA64DFR0_PMUVER_8_4		0x5
#define ID_AA64DFR0_PMUVER_8_5		0x6
#define ID_AA64DFR0_PMUVER_IMP_DEF	0xf

#define ID_DFR0_PERFMON_SHIFT		24

#define ID_DFR0_PERFMON_8_1		0x4

#define ID_ISAR5_RDM_SHIFT		24
#define ID_ISAR5_CRC32_SHIFT		16
#define ID_ISAR5_SHA2_SHIFT		12
...
@@ -16,6 +16,15 @@ int pcibus_to_node(struct pci_bus *bus);

#include <linux/arch_topology.h>
#ifdef CONFIG_ARM64_AMU_EXTN
/*
* Replace task scheduler's default counter-based
* frequency-invariance scale factor setting.
*/
void topology_scale_freq_tick(void);
#define arch_scale_freq_tick topology_scale_freq_tick
#endif /* CONFIG_ARM64_AMU_EXTN */
/* Replace task scheduler's default frequency-invariant accounting */
#define arch_scale_freq_capacity topology_get_freq_scale
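
A simplified sketch of what such a tick hook does with the AMU counters
(hypothetical; the real topology_scale_freq_tick() added by this series
lives in arch/arm64/kernel/topology.c and is more careful about units,
overflow and initialisation - max_freq_ratio and example_scale_freq_tick
are assumed names):

static DEFINE_PER_CPU(u64, prev_core_cnt);
static DEFINE_PER_CPU(u64, prev_const_cnt);
/* assumed precomputed: (const counter rate << SCHED_CAPACITY_SHIFT) / max CPU freq */
static DEFINE_PER_CPU(u64, max_freq_ratio);

void example_scale_freq_tick(void)
{
	u64 core = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
	u64 cnst = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
	u64 dcore = core - this_cpu_read(prev_core_cnt);
	u64 dconst = cnst - this_cpu_read(prev_const_cnt);
	u64 scale;

	this_cpu_write(prev_core_cnt, core);
	this_cpu_write(prev_const_cnt, cnst);

	if (unlikely(!dconst))
		return;

	/* scale ~= (dcore / dconst) * (const_rate / max_freq), capped at 1 */
	scale = div64_u64(dcore * this_cpu_read(max_freq_ratio), dconst);
	this_cpu_write(freq_scale, min_t(unsigned long, scale, SCHED_CAPACITY_SCALE));
}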
...
@@ -21,7 +21,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o	\
			   smp.o smp_spin_table.o topology.o smccc-call.o	\
			   syscall.o

targets += efi-entry.o

OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE
...
@@ -630,7 +630,7 @@ static int __init armv8_deprecated_init(void)
		register_insn_emulation(&cp15_barrier_ops);

	if (IS_ENABLED(CONFIG_SETEND_EMULATION)) {
		if (system_supports_mixed_endian_el0())
			register_insn_emulation(&setend_ops);
		else
			pr_info("setend instruction emulation is not supported on this system\n");
...
@@ -15,10 +15,12 @@
#include <asm/smp_plat.h>

extern const struct cpu_operations smp_spin_table_ops;
#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
extern const struct cpu_operations acpi_parking_protocol_ops;
#endif
extern const struct cpu_operations cpu_psci_ops;

static const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;

static const struct cpu_operations *const dt_supported_cpu_ops[] __initconst = {
	&smp_spin_table_ops,

@@ -94,7 +96,7 @@ static const char *__init cpu_read_enable_method(int cpu)
/*
 * Read a cpu's enable method and record it in cpu_ops.
 */
int __init init_cpu_ops(int cpu)
{
	const char *enable_method = cpu_read_enable_method(cpu);

@@ -109,3 +111,8 @@ int __init init_cpu_ops(int cpu)
	return 0;
}

const struct cpu_operations *get_cpu_ops(int cpu)
{
	return cpu_ops[cpu];
}
@@ -163,6 +163,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),

@@ -551,7 +552,7 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
	BUG_ON(!reg);

	for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
		u64 ftr_mask = arm64_ftr_mask(ftrp);
		s64 ftr_new = arm64_ftr_value(ftrp, new);

@@ -1222,6 +1223,57 @@ static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap,
#endif

#ifdef CONFIG_ARM64_AMU_EXTN

/*
 * The "amu_cpus" cpumask only signals that the CPU implementation for the
 * flagged CPUs supports the Activity Monitors Unit (AMU) but does not provide
 * information regarding all the events that it supports. When a CPU bit is
 * set in the cpumask, the user of this feature can only rely on the presence
 * of the 4 fixed counters for that CPU. But this does not guarantee that the
 * counters are enabled or access to these counters is enabled by code
 * executed at higher exception levels (firmware).
 */
static struct cpumask amu_cpus __read_mostly;

bool cpu_has_amu_feat(int cpu)
{
	return cpumask_test_cpu(cpu, &amu_cpus);
}

/* Initialize the use of AMU counters for frequency invariance */
extern void init_cpu_freq_invariance_counters(void);

static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
{
	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
		pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
			smp_processor_id());
		cpumask_set_cpu(smp_processor_id(), &amu_cpus);
		init_cpu_freq_invariance_counters();
	}
}

static bool has_amu(const struct arm64_cpu_capabilities *cap,
		    int __unused)
{
	/*
	 * The AMU extension is a non-conflicting feature: the kernel can
	 * safely run a mix of CPUs with and without support for the
	 * activity monitors extension. Therefore, unconditionally enable
	 * the capability to allow any late CPU to use the feature.
	 *
	 * With this feature unconditionally enabled, the cpu_enable
	 * function will be called for all CPUs that match the criteria,
	 * including secondary and hotplugged, marking this feature as
	 * present on that respective CPU. The enable function will also
	 * print a detection message.
	 */
	return true;
}
#endif

#ifdef CONFIG_ARM64_VHE
static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
{

@@ -1499,6 +1551,24 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
		.cpu_enable = cpu_clear_disr,
	},
#endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_AMU_EXTN
	{
		/*
		 * The feature is enabled by default if CONFIG_ARM64_AMU_EXTN=y.
		 * Therefore, don't provide .desc as we don't want the detection
		 * message to be shown until at least one CPU is detected to
		 * support the feature.
		 */
		.capability = ARM64_HAS_AMU_EXTN,
		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
		.matches = has_amu,
		.sys_reg = SYS_ID_AA64PFR0_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64PFR0_AMU_SHIFT,
		.min_field_value = ID_AA64PFR0_AMU,
		.cpu_enable = cpu_amu_enable,
	},
#endif /* CONFIG_ARM64_AMU_EXTN */
	{
		.desc = "Data cache clean to the PoU not required for I/D coherence",
		.capability = ARM64_HAS_CACHE_IDC,
...
@@ -18,11 +18,11 @@
int arm_cpuidle_init(unsigned int cpu)
{
	const struct cpu_operations *ops = get_cpu_ops(cpu);
	int ret = -EOPNOTSUPP;

	if (ops && ops->cpu_suspend && ops->cpu_init_idle)
		ret = ops->cpu_init_idle(cpu);

	return ret;
}

@@ -37,8 +37,9 @@ int arm_cpuidle_init(unsigned int cpu)
int arm_cpuidle_suspend(int index)
{
	int cpu = smp_processor_id();
	const struct cpu_operations *ops = get_cpu_ops(cpu);

	return ops->cpu_suspend(index);
}

#ifdef CONFIG_ACPI
...
@@ -175,7 +175,7 @@ NOKPROBE_SYMBOL(el0_pc);
static void notrace el0_sp(struct pt_regs *regs, unsigned long esr)
{
	user_exit_irqoff();
	local_daif_restore(DAIF_PROCCTX);
	do_sp_pc_abort(regs->sp, esr, regs);
}
NOKPROBE_SYMBOL(el0_sp);
...
@@ -404,7 +404,6 @@ __create_page_tables:
	ret	x28
ENDPROC(__create_page_tables)

-	.ltorg

/*
 * The following fragment of code is executed with the MMU enabled.
...
@@ -110,8 +110,6 @@ ENTRY(swsusp_arch_suspend_exit)
	cbz	x24, 3f		/* Do we need to re-initialise EL2? */
	hvc	#0
3:	ret

-	.ltorg
ENDPROC(swsusp_arch_suspend_exit)

/*
...
@@ -121,7 +121,7 @@ static int setup_dtb(struct kimage *image,
	/* add kaslr-seed */
	ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED);
	if (ret == -FDT_ERR_NOTFOUND)
		ret = 0;
	else if (ret)
		goto out;
...
@@ -285,6 +285,17 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
#define	ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)

/*
 * We unconditionally enable ARMv8.5-PMU long event counter support
 * (64-bit events) where supported. Indicate if this arm_pmu has long
 * event counter support.
 */
static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
{
	return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5);
}

/*
 * We must chain two programmable counters for 64 bit events,
 * except when we have allocated the 64bit cycle counter (for CPU

@@ -294,9 +305,11 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
static inline bool armv8pmu_event_is_chained(struct perf_event *event)
{
	int idx = event->hw.idx;
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);

	return !WARN_ON(idx < 0) &&
	       armv8pmu_event_is_64bit(event) &&
	       !armv8pmu_has_long_event(cpu_pmu) &&
	       (idx != ARMV8_IDX_CYCLE_COUNTER);
}

@@ -345,7 +358,7 @@ static inline void armv8pmu_select_counter(int idx)
	isb();
}

static inline u64 armv8pmu_read_evcntr(int idx)
{
	armv8pmu_select_counter(idx);
	return read_sysreg(pmxevcntr_el0);

@@ -362,6 +375,44 @@ static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
	return val;
}

/*
 * The cycle counter is always a 64-bit counter. When ARMV8_PMU_PMCR_LP
 * is set the event counters also become 64-bit counters. Unless the
 * user has requested a long counter (attr.config1) then we want to
 * interrupt upon 32-bit overflow - we achieve this by applying a bias.
 */
static bool armv8pmu_event_needs_bias(struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	if (armv8pmu_event_is_64bit(event))
		return false;

	if (armv8pmu_has_long_event(cpu_pmu) ||
	    idx == ARMV8_IDX_CYCLE_COUNTER)
		return true;

	return false;
}

static u64 armv8pmu_bias_long_counter(struct perf_event *event, u64 value)
{
	if (armv8pmu_event_needs_bias(event))
		value |= GENMASK(63, 32);

	return value;
}

static u64 armv8pmu_unbias_long_counter(struct perf_event *event, u64 value)
{
	if (armv8pmu_event_needs_bias(event))
		value &= ~GENMASK(63, 32);

	return value;
}
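
/*
 * Worked example of the bias (illustrative): a 32-bit event placed on a
 * 64-bit counter is written with bits [63:32] set, so the 64-bit counter
 * overflows - and raises its interrupt - exactly when the low 32 bits
 * wrap:
 *
 *	armv8pmu_bias_long_counter(event, 0x10)   == 0xffffffff00000010
 *	armv8pmu_unbias_long_counter(event, read) == read & ~GENMASK(63, 32)
 */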
static u64 armv8pmu_read_counter(struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);

@@ -377,10 +428,10 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
	else
		value = armv8pmu_read_hw_counter(event);

	return armv8pmu_unbias_long_counter(event, value);
}

static inline void armv8pmu_write_evcntr(int idx, u64 value)
{
	armv8pmu_select_counter(idx);
	write_sysreg(value, pmxevcntr_el0);

@@ -405,20 +456,14 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	value = armv8pmu_bias_long_counter(event, value);

	if (!armv8pmu_counter_valid(cpu_pmu, idx))
		pr_err("CPU%u writing wrong counter %d\n",
			smp_processor_id(), idx);
	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
		write_sysreg(value, pmccntr_el0);
	else
		armv8pmu_write_hw_counter(event, value);
}

@@ -450,86 +495,74 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
	}
}

static u32 armv8pmu_event_cnten_mask(struct perf_event *event)
{
	int counter = ARMV8_IDX_TO_COUNTER(event->hw.idx);
	u32 mask = BIT(counter);

	if (armv8pmu_event_is_chained(event))
		mask |= BIT(counter - 1);

	return mask;
}

static inline void armv8pmu_enable_counter(u32 mask)
{
	write_sysreg(mask, pmcntenset_el0);
}

static inline void armv8pmu_enable_event_counter(struct perf_event *event)
{
	struct perf_event_attr *attr = &event->attr;
	u32 mask = armv8pmu_event_cnten_mask(event);

	kvm_set_pmu_events(mask, attr);

	/* We rely on the hypervisor switch code to enable guest counters */
	if (!kvm_pmu_counter_deferred(attr))
		armv8pmu_enable_counter(mask);
}

static inline void armv8pmu_disable_counter(u32 mask)
{
	write_sysreg(mask, pmcntenclr_el0);
}

static inline void armv8pmu_disable_event_counter(struct perf_event *event)
{
	struct perf_event_attr *attr = &event->attr;
	u32 mask = armv8pmu_event_cnten_mask(event);

	kvm_clr_pmu_events(mask);

	/* We rely on the hypervisor switch code to disable guest counters */
	if (!kvm_pmu_counter_deferred(attr))
		armv8pmu_disable_counter(mask);
}

static inline void armv8pmu_enable_intens(u32 mask)
{
	write_sysreg(mask, pmintenset_el1);
}

static inline void armv8pmu_enable_event_irq(struct perf_event *event)
{
	u32 counter = ARMV8_IDX_TO_COUNTER(event->hw.idx);

	armv8pmu_enable_intens(BIT(counter));
}

static inline void armv8pmu_disable_intens(u32 mask)
{
	write_sysreg(mask, pmintenclr_el1);
	isb();
	/* Clear the overflow flag in case an interrupt is pending. */
	write_sysreg(mask, pmovsclr_el0);
	isb();
}

static inline void armv8pmu_disable_event_irq(struct perf_event *event)
{
	u32 counter = ARMV8_IDX_TO_COUNTER(event->hw.idx);

	armv8pmu_disable_intens(BIT(counter));
}

static inline u32 armv8pmu_getreset_flags(void)

@@ -743,7 +776,8 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
	/*
	 * Otherwise use events counters
	 */
	if (armv8pmu_event_is_64bit(event) &&
	    !armv8pmu_has_long_event(cpu_pmu))
		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
	else
		return armv8pmu_get_single_idx(cpuc, cpu_pmu);

@@ -815,13 +849,11 @@ static int armv8pmu_filter_match(struct perf_event *event)
static void armv8pmu_reset(void *info)
{
	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
	u32 pmcr;

	/* The counter and interrupt enable registers are unknown at reset. */
	armv8pmu_disable_counter(U32_MAX);
	armv8pmu_disable_intens(U32_MAX);

	/* Clear the counters we flip at guest entry/exit */
	kvm_clr_pmu_events(U32_MAX);

@@ -830,8 +862,13 @@ static void armv8pmu_reset(void *info)
	 * Initialize & Reset PMNC. Request overflow interrupt for
	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
	 */
	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;

	/* Enable long event counter support where available */
	if (armv8pmu_has_long_event(cpu_pmu))
		pmcr |= ARMV8_PMU_PMCR_LP;

	armv8pmu_pmcr_write(pmcr);
}

static int __armv8_pmuv3_map_event(struct perf_event *event,

@@ -914,6 +951,7 @@ static void __armv8pmu_probe_pmu(void *info)
	if (pmuver == 0xf || pmuver == 0)
		return;

	cpu_pmu->pmuver = pmuver;
	probe->present = true;

	/* Read the nb of CNTx counters supported from PMNC */

@@ -953,7 +991,10 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
	return probe.present ? 0 : -ENODEV;
}

static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
			  int (*map_event)(struct perf_event *event),
			  const struct attribute_group *events,
			  const struct attribute_group *format)
{
	int ret = armv8pmu_probe_pmu(cpu_pmu);
	if (ret)

@@ -972,144 +1013,127 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
	cpu_pmu->filter_match		= armv8pmu_filter_match;

	cpu_pmu->name			= name;
	cpu_pmu->map_event		= map_event;
	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
			events : &armv8_pmuv3_events_attr_group;
	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] = format ?
			format : &armv8_pmuv3_format_attr_group;

	return 0;
}

static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_pmuv3",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a34_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a34",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a35_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a35",
			      armv8_a53_map_event, NULL, NULL);
}

static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a53",
			      armv8_a53_map_event, NULL, NULL);
}

static int armv8_a55_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a55",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a57",
			      armv8_a57_map_event, NULL, NULL);
}

static int armv8_a65_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a65",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a72",
			      armv8_a57_map_event, NULL, NULL);
}

static int armv8_a73_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a73",
			      armv8_a73_map_event, NULL, NULL);
}

static int armv8_a75_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a75",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a76_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a76",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_a77_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a77",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_e1_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_neoverse_e1",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_n1_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_neoverse_n1",
			      armv8_pmuv3_map_event, NULL, NULL);
}

static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cavium_thunder",
			      armv8_thunder_map_event, NULL, NULL);
}

static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_brcm_vulcan",
			      armv8_vulcan_map_event, NULL, NULL);
}

static const struct of_device_id armv8_pmu_of_device_ids[] = {
	{.compatible = "arm,armv8-pmuv3",	.data = armv8_pmuv3_init},
	{.compatible = "arm,cortex-a34-pmu",	.data = armv8_a34_pmu_init},
	{.compatible = "arm,cortex-a35-pmu",	.data = armv8_a35_pmu_init},
	{.compatible = "arm,cortex-a53-pmu",	.data = armv8_a53_pmu_init},
	{.compatible = "arm,cortex-a55-pmu",	.data = armv8_a55_pmu_init},
	{.compatible = "arm,cortex-a57-pmu",	.data = armv8_a57_pmu_init},
	{.compatible = "arm,cortex-a65-pmu",	.data = armv8_a65_pmu_init},
	{.compatible = "arm,cortex-a72-pmu",	.data = armv8_a72_pmu_init},
	{.compatible = "arm,cortex-a73-pmu",	.data = armv8_a73_pmu_init},
	{.compatible = "arm,cortex-a75-pmu",	.data = armv8_a75_pmu_init},
	{.compatible = "arm,cortex-a76-pmu",	.data = armv8_a76_pmu_init},
	{.compatible = "arm,cortex-a77-pmu",	.data = armv8_a77_pmu_init},
	{.compatible = "arm,neoverse-e1-pmu",	.data = armv8_e1_pmu_init},
	{.compatible = "arm,neoverse-n1-pmu",	.data = armv8_n1_pmu_init},
	{.compatible = "cavium,thunder-pmu",	.data = armv8_thunder_pmu_init},
	{.compatible = "brcm,vulcan-pmu",	.data = armv8_vulcan_pmu_init},
	{},
...
@@ -344,7 +344,7 @@ void __init setup_arch(char **cmdline_p)
	else
		psci_acpi_init();

	init_bootcpu_ops();
	smp_init_cpus();
	smp_build_mpidr_hash();

@@ -371,8 +371,10 @@ void __init setup_arch(char **cmdline_p)
static inline bool cpu_can_disable(unsigned int cpu)
{
#ifdef CONFIG_HOTPLUG_CPU
	const struct cpu_operations *ops = get_cpu_ops(cpu);

	if (ops && ops->cpu_can_disable)
		return ops->cpu_can_disable(cpu);
#endif
	return false;
}
...
@@ -93,8 +93,10 @@ static inline int op_cpu_kill(unsigned int cpu)
  */
 static int boot_secondary(unsigned int cpu, struct task_struct *idle)
 {
-	if (cpu_ops[cpu]->cpu_boot)
-		return cpu_ops[cpu]->cpu_boot(cpu);
+	const struct cpu_operations *ops = get_cpu_ops(cpu);
+
+	if (ops->cpu_boot)
+		return ops->cpu_boot(cpu);
 
 	return -EOPNOTSUPP;
 }
@@ -115,60 +117,55 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	update_cpu_boot_status(CPU_MMU_OFF);
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 
-	/*
-	 * Now bring the CPU into our world.
-	 */
+	/* Now bring the CPU into our world */
 	ret = boot_secondary(cpu, idle);
-	if (ret == 0) {
-		/*
-		 * CPU was successfully started, wait for it to come online or
-		 * time out.
-		 */
-		wait_for_completion_timeout(&cpu_running,
-					    msecs_to_jiffies(5000));
-		if (!cpu_online(cpu)) {
-			pr_crit("CPU%u: failed to come online\n", cpu);
-			ret = -EIO;
-		}
-	} else {
+	if (ret) {
 		pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
 		return ret;
 	}
 
+	/*
+	 * CPU was successfully started, wait for it to come online or
+	 * time out.
+	 */
+	wait_for_completion_timeout(&cpu_running,
+				    msecs_to_jiffies(5000));
+	if (cpu_online(cpu))
+		return 0;
+
+	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
-	if (ret && status) {
-
-		if (status == CPU_MMU_OFF)
-			status = READ_ONCE(__early_cpu_boot_status);
+	if (status == CPU_MMU_OFF)
+		status = READ_ONCE(__early_cpu_boot_status);
 
-		switch (status & CPU_BOOT_STATUS_MASK) {
-		default:
-			pr_err("CPU%u: failed in unknown state : 0x%lx\n",
-					cpu, status);
-			cpus_stuck_in_kernel++;
-			break;
-		case CPU_KILL_ME:
-			if (!op_cpu_kill(cpu)) {
-				pr_crit("CPU%u: died during early boot\n", cpu);
-				break;
-			}
-			pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
-			/* Fall through */
-		case CPU_STUCK_IN_KERNEL:
-			pr_crit("CPU%u: is stuck in kernel\n", cpu);
-			if (status & CPU_STUCK_REASON_52_BIT_VA)
-				pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
-			if (status & CPU_STUCK_REASON_NO_GRAN)
-				pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
-			cpus_stuck_in_kernel++;
-			break;
-		case CPU_PANIC_KERNEL:
-			panic("CPU%u detected unsupported configuration\n", cpu);
-		}
+	switch (status & CPU_BOOT_STATUS_MASK) {
+	default:
+		pr_err("CPU%u: failed in unknown state : 0x%lx\n",
+		       cpu, status);
+		cpus_stuck_in_kernel++;
+		break;
+	case CPU_KILL_ME:
+		if (!op_cpu_kill(cpu)) {
+			pr_crit("CPU%u: died during early boot\n", cpu);
+			break;
+		}
+		pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
+		/* Fall through */
+	case CPU_STUCK_IN_KERNEL:
+		pr_crit("CPU%u: is stuck in kernel\n", cpu);
+		if (status & CPU_STUCK_REASON_52_BIT_VA)
+			pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
+		if (status & CPU_STUCK_REASON_NO_GRAN) {
+			pr_crit("CPU%u: does not support %luK granule\n",
+				cpu, PAGE_SIZE / SZ_1K);
+		}
+		cpus_stuck_in_kernel++;
+		break;
+	case CPU_PANIC_KERNEL:
+		panic("CPU%u detected unsupported configuration\n", cpu);
 	}
 
 	return ret;
@@ -196,6 +193,7 @@ asmlinkage notrace void secondary_start_kernel(void)
 {
 	u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
 	struct mm_struct *mm = &init_mm;
+	const struct cpu_operations *ops;
 	unsigned int cpu;
 
 	cpu = task_cpu(current);
 
@@ -227,8 +225,9 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();
 
-	if (cpu_ops[cpu]->cpu_postboot)
-		cpu_ops[cpu]->cpu_postboot();
+	ops = get_cpu_ops(cpu);
+	if (ops->cpu_postboot)
+		ops->cpu_postboot();
 
 	/*
 	 * Log the CPU info before it is marked online and might get read.
@@ -266,19 +265,21 @@ asmlinkage notrace void secondary_start_kernel(void)
 #ifdef CONFIG_HOTPLUG_CPU
 static int op_cpu_disable(unsigned int cpu)
 {
+	const struct cpu_operations *ops = get_cpu_ops(cpu);
+
 	/*
 	 * If we don't have a cpu_die method, abort before we reach the point
 	 * of no return. CPU0 may not have an cpu_ops, so test for it.
 	 */
-	if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_die)
+	if (!ops || !ops->cpu_die)
 		return -EOPNOTSUPP;
 
 	/*
 	 * We may need to abort a hot unplug for some other mechanism-specific
 	 * reason.
 	 */
-	if (cpu_ops[cpu]->cpu_disable)
-		return cpu_ops[cpu]->cpu_disable(cpu);
+	if (ops->cpu_disable)
+		return ops->cpu_disable(cpu);
 
 	return 0;
 }
@@ -314,15 +315,17 @@ int __cpu_disable(void)
 static int op_cpu_kill(unsigned int cpu)
 {
+	const struct cpu_operations *ops = get_cpu_ops(cpu);
+
 	/*
 	 * If we have no means of synchronising with the dying CPU, then assume
 	 * that it is really dead. We can only wait for an arbitrary length of
 	 * time and hope that it's dead, so let's skip the wait and just hope.
 	 */
-	if (!cpu_ops[cpu]->cpu_kill)
+	if (!ops->cpu_kill)
 		return 0;
 
-	return cpu_ops[cpu]->cpu_kill(cpu);
+	return ops->cpu_kill(cpu);
 }
 
 /*
@@ -357,6 +360,7 @@ void __cpu_die(unsigned int cpu)
 void cpu_die(void)
 {
 	unsigned int cpu = smp_processor_id();
+	const struct cpu_operations *ops = get_cpu_ops(cpu);
 
 	idle_task_exit();
 
@@ -370,12 +374,22 @@ void cpu_die(void)
 	 * mechanism must perform all required cache maintenance to ensure that
 	 * no dirty lines are lost in the process of shutting down the CPU.
 	 */
-	cpu_ops[cpu]->cpu_die(cpu);
+	ops->cpu_die(cpu);
 
 	BUG();
 }
 #endif
 
+static void __cpu_try_die(int cpu)
+{
+#ifdef CONFIG_HOTPLUG_CPU
+	const struct cpu_operations *ops = get_cpu_ops(cpu);
+
+	if (ops && ops->cpu_die)
+		ops->cpu_die(cpu);
+#endif
+}
+
 /*
  * Kill the calling secondary CPU, early in bringup before it is turned
  * online.
@@ -389,12 +403,11 @@ void cpu_die_early(void)
 	/* Mark this CPU absent */
 	set_cpu_present(cpu, 0);
 
-#ifdef CONFIG_HOTPLUG_CPU
-	update_cpu_boot_status(CPU_KILL_ME);
-	/* Check if we can park ourselves */
-	if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_die)
-		cpu_ops[cpu]->cpu_die(cpu);
-#endif
+	if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
+		update_cpu_boot_status(CPU_KILL_ME);
+		__cpu_try_die(cpu);
+	}
 
 	update_cpu_boot_status(CPU_STUCK_IN_KERNEL);
 
 	cpu_park_loop();
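[Editor's note] The hunks above replace #ifdef CONFIG_HOTPLUG_CPU blocks with IS_ENABLED() checks around the new __cpu_try_die() helper. A minimal stand-alone sketch of why that pattern is preferred (the macro and option below are simplified stand-ins for the kernel's kconfig machinery, not kernel code): the disabled branch stays visible to the compiler, so it is still parsed and type-checked before being folded away.

	#include <stdio.h>

	/* Stand-in for IS_ENABLED(); kconfig expands =y options to 1 */
	#define MY_CONFIG_HOTPLUG_CPU 1
	#define IS_ENABLED(opt) (opt)

	static void try_die(int cpu)
	{
		/* Compiled either way; becomes a no-op when the option is 0 */
		if (IS_ENABLED(MY_CONFIG_HOTPLUG_CPU))
			printf("CPU%d: parking via cpu_die()\n", cpu);
	}

	int main(void)
	{
		try_die(1);
		return 0;
	}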
@@ -488,10 +501,13 @@ static bool __init is_mpidr_duplicate(unsigned int cpu, u64 hwid)
  */
 static int __init smp_cpu_setup(int cpu)
 {
-	if (cpu_read_ops(cpu))
+	const struct cpu_operations *ops;
+
+	if (init_cpu_ops(cpu))
 		return -ENODEV;
 
-	if (cpu_ops[cpu]->cpu_init(cpu))
+	ops = get_cpu_ops(cpu);
+	if (ops->cpu_init(cpu))
 		return -ENODEV;
 
 	set_cpu_possible(cpu, true);
@@ -714,6 +730,7 @@ void __init smp_init_cpus(void)
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
+	const struct cpu_operations *ops;
 	int err;
 	unsigned int cpu;
 	unsigned int this_cpu;
 
@@ -744,10 +761,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 		if (cpu == smp_processor_id())
 			continue;
 
-		if (!cpu_ops[cpu])
+		ops = get_cpu_ops(cpu);
+		if (!ops)
 			continue;
 
-		err = cpu_ops[cpu]->cpu_prepare(cpu);
+		err = ops->cpu_prepare(cpu);
 		if (err)
 			continue;
@@ -863,10 +881,8 @@ static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
 	local_irq_disable();
 	sdei_mask_local_cpu();
 
-#ifdef CONFIG_HOTPLUG_CPU
-	if (cpu_ops[cpu]->cpu_die)
-		cpu_ops[cpu]->cpu_die(cpu);
-#endif
+	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
+		__cpu_try_die(cpu);
 
 	/* just in case */
 	cpu_park_loop();
@@ -1044,8 +1060,9 @@ static bool have_cpu_die(void)
 {
 #ifdef CONFIG_HOTPLUG_CPU
 	int any_cpu = raw_smp_processor_id();
+	const struct cpu_operations *ops = get_cpu_ops(any_cpu);
 
-	if (cpu_ops[any_cpu] && cpu_ops[any_cpu]->cpu_die)
+	if (ops && ops->cpu_die)
 		return true;
 #endif
 	return false;
......
@@ -14,6 +14,7 @@
 #include <linux/acpi.h>
 #include <linux/arch_topology.h>
 #include <linux/cacheinfo.h>
+#include <linux/cpufreq.h>
 #include <linux/init.h>
 #include <linux/percpu.h>
 
@@ -120,4 +121,183 @@ int __init parse_acpi_topology(void)
 }
 #endif
#ifdef CONFIG_ARM64_AMU_EXTN
#undef pr_fmt
#define pr_fmt(fmt) "AMU: " fmt
static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, arch_max_freq_scale);
static DEFINE_PER_CPU(u64, arch_const_cycles_prev);
static DEFINE_PER_CPU(u64, arch_core_cycles_prev);
static cpumask_var_t amu_fie_cpus;
/* Initialize counter reference per-cpu variables for the current CPU */
void init_cpu_freq_invariance_counters(void)
{
this_cpu_write(arch_core_cycles_prev,
read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0));
this_cpu_write(arch_const_cycles_prev,
read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0));
}
static int validate_cpu_freq_invariance_counters(int cpu)
{
u64 max_freq_hz, ratio;
if (!cpu_has_amu_feat(cpu)) {
pr_debug("CPU%d: counters are not supported.\n", cpu);
return -EINVAL;
}
if (unlikely(!per_cpu(arch_const_cycles_prev, cpu) ||
!per_cpu(arch_core_cycles_prev, cpu))) {
pr_debug("CPU%d: cycle counters are not enabled.\n", cpu);
return -EINVAL;
}
/* Convert maximum frequency from KHz to Hz and validate */
max_freq_hz = cpufreq_get_hw_max_freq(cpu) * 1000;
if (unlikely(!max_freq_hz)) {
pr_debug("CPU%d: invalid maximum frequency.\n", cpu);
return -EINVAL;
}
/*
 * Pre-compute the fixed ratio between the frequency of the constant
 * counter and the maximum frequency of the CPU.
 *
 *                         const_freq
 * arch_max_freq_scale = ---------------- * SCHED_CAPACITY_SCALE²
 *                       cpuinfo_max_freq
 *
 * We use a factor of 2 * SCHED_CAPACITY_SHIFT -> SCHED_CAPACITY_SCALE²
 * in order to ensure a good resolution for arch_max_freq_scale for
 * very low arch timer frequencies (down to the KHz range which should
 * be unlikely).
 */
ratio = (u64)arch_timer_get_rate() << (2 * SCHED_CAPACITY_SHIFT);
ratio = div64_u64(ratio, max_freq_hz);
if (!ratio) {
WARN_ONCE(1, "System timer frequency too low.\n");
return -EINVAL;
}
per_cpu(arch_max_freq_scale, cpu) = (unsigned long)ratio;
return 0;
}
static inline bool
enable_policy_freq_counters(int cpu, cpumask_var_t valid_cpus)
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
if (!policy) {
pr_debug("CPU%d: No cpufreq policy found.\n", cpu);
return false;
}
if (cpumask_subset(policy->related_cpus, valid_cpus))
cpumask_or(amu_fie_cpus, policy->related_cpus,
amu_fie_cpus);
cpufreq_cpu_put(policy);
return true;
}
static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
#define amu_freq_invariant() static_branch_unlikely(&amu_fie_key)
static int __init init_amu_fie(void)
{
cpumask_var_t valid_cpus;
bool have_policy = false;
int ret = 0;
int cpu;
if (!zalloc_cpumask_var(&valid_cpus, GFP_KERNEL))
return -ENOMEM;
if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) {
ret = -ENOMEM;
goto free_valid_mask;
}
for_each_present_cpu(cpu) {
if (validate_cpu_freq_invariance_counters(cpu))
continue;
cpumask_set_cpu(cpu, valid_cpus);
have_policy |= enable_policy_freq_counters(cpu, valid_cpus);
}
/*
* If we are not restricted by cpufreq policies, we only enable
* the use of the AMU feature for FIE if all CPUs support AMU.
* Otherwise, enable_policy_freq_counters has already enabled
* policy cpus.
*/
if (!have_policy && cpumask_equal(valid_cpus, cpu_present_mask))
cpumask_or(amu_fie_cpus, amu_fie_cpus, valid_cpus);
if (!cpumask_empty(amu_fie_cpus)) {
pr_info("CPUs[%*pbl]: counters will be used for FIE.",
cpumask_pr_args(amu_fie_cpus));
static_branch_enable(&amu_fie_key);
}
free_valid_mask:
free_cpumask_var(valid_cpus);
return ret;
}
late_initcall_sync(init_amu_fie);
bool arch_freq_counters_available(struct cpumask *cpus)
{
return amu_freq_invariant() &&
cpumask_subset(cpus, amu_fie_cpus);
}
void topology_scale_freq_tick(void)
{
u64 prev_core_cnt, prev_const_cnt;
u64 core_cnt, const_cnt, scale;
int cpu = smp_processor_id();
if (!amu_freq_invariant())
return;
if (!cpumask_test_cpu(cpu, amu_fie_cpus))
return;
const_cnt = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
core_cnt = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
if (unlikely(core_cnt <= prev_core_cnt ||
const_cnt <= prev_const_cnt))
goto store_and_exit;
/*
 *          /\core    arch_max_freq_scale
 * scale = ------- * --------------------
 *          /\const  SCHED_CAPACITY_SCALE
 *
 * See validate_cpu_freq_invariance_counters() for details on
 * arch_max_freq_scale and the use of SCHED_CAPACITY_SHIFT.
 */
scale = core_cnt - prev_core_cnt;
scale *= this_cpu_read(arch_max_freq_scale);
scale = div64_u64(scale >> SCHED_CAPACITY_SHIFT,
const_cnt - prev_const_cnt);
scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE);
this_cpu_write(freq_scale, (unsigned long)scale);
store_and_exit:
this_cpu_write(arch_core_cycles_prev, core_cnt);
this_cpu_write(arch_const_cycles_prev, const_cnt);
}
#endif /* CONFIG_ARM64_AMU_EXTN */
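[Editor's note] As a sanity check on the fixed-point arithmetic above, here is a stand-alone model of the pre-computed ratio and the per-tick scale (a sketch: the 25 MHz timer rate, 2 GHz maximum frequency, and one-second counter deltas are illustrative values, not kernel defaults):

	#include <stdint.h>
	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

	int main(void)
	{
		/* Assumed: 25 MHz constant (arch timer) rate, 2 GHz max freq */
		uint64_t const_freq_hz = 25000000;
		uint64_t max_freq_hz = 2000000000;

		/* ratio = const_freq / cpuinfo_max_freq * SCHED_CAPACITY_SCALE^2 */
		uint64_t ratio = (const_freq_hz << (2 * SCHED_CAPACITY_SHIFT)) / max_freq_hz;

		/* One second of counting with the core running at 1 GHz (half of max) */
		uint64_t delta_const = 25000000;
		uint64_t delta_core = 1000000000;

		uint64_t scale = ((delta_core * ratio) >> SCHED_CAPACITY_SHIFT) / delta_const;
		if (scale > SCHED_CAPACITY_SCALE)
			scale = SCHED_CAPACITY_SCALE;

		/* Prints 511, i.e. ~0.5 * SCHED_CAPACITY_SCALE after truncation */
		printf("freq_scale = %llu\n", (unsigned long long)scale);
		return 0;
	}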
@@ -98,6 +98,18 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_EL1_TTA;
 	val &= ~CPACR_EL1_ZEN;
+
+	/*
+	 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
+	 * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
+	 * except for some missing controls, such as TAM.
+	 * In this case, CPTR_EL2.TAM has the same position with or without
+	 * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
+	 * shift value for trapping the AMU accesses.
+	 */
+	val |= CPTR_EL2_TAM;
+
 	if (update_fp_enabled(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
 
@@ -119,7 +131,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 
 	val = CPTR_EL2_DEFAULT;
-	val |= CPTR_EL2_TTA | CPTR_EL2_TZ;
+	val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
 	if (!update_fp_enabled(vcpu)) {
 		val |= CPTR_EL2_TFP;
 		__activate_traps_fpsimd32(vcpu);
 
@@ -127,7 +139,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 	write_sysreg(val, cptr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
 
 		isb();
 
@@ -146,12 +158,12 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 hcr = vcpu->arch.hcr_el2;
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
+	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
 
 	write_sysreg(hcr, hcr_el2);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
 
 	if (has_vhe())
 
@@ -181,7 +193,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		u64 val;
 
 		/*
 
@@ -328,7 +340,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	 * resolve the IPA using the AT instruction.
 	 */
 	if (!(esr & ESR_ELx_S1PTW) &&
-	    (cpus_have_const_cap(ARM64_WORKAROUND_834220) ||
+	    (cpus_have_final_cap(ARM64_WORKAROUND_834220) ||
 	     (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
 		if (!__translate_far_to_hpfar(far, &hpfar))
 			return false;
 
@@ -498,7 +510,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (*exit_code != ARM_EXCEPTION_TRAP)
 		goto exit;
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
+	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
 	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
 	    handle_tx2_tvm(vcpu))
 		return true;
 
@@ -555,7 +567,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
 {
-	if (!cpus_have_const_cap(ARM64_SSBD))
+	if (!cpus_have_final_cap(ARM64_SSBD))
 		return false;
 
 	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
......
@@ -71,7 +71,7 @@ static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ct
 	ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR);
 	ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 		ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2);
 }
 
@@ -118,7 +118,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1);
 
-	if (!cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
 		write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
 	} else if (!ctxt->__hyp_running_vcpu) {
 
@@ -149,7 +149,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1);
 	write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
 	    ctxt->__hyp_running_vcpu) {
 		/*
 		 * Must only be done for host registers, hence the context
 
@@ -194,7 +194,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
 	write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR);
 	write_sysreg_el2(pstate, SYS_SPSR);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 		write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2);
 }
......
@@ -23,7 +23,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 	local_irq_save(cxt->flags);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
 		/*
 		 * For CPUs that are affected by ARM errata 1165522 or 1530923,
 		 * we cannot trust stage-1 to be in a correct state at that
 
@@ -63,7 +63,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 						  struct tlb_inv_context *cxt)
 {
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		u64 val;
 
 		/*
 
@@ -103,7 +103,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 	isb();
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
 		/* Restore the registers to what they were */
 		write_sysreg_el1(cxt->tcr, SYS_TCR);
 		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
 
@@ -117,7 +117,7 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 {
 	write_sysreg(0, vttbr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		/* Ensure write of the host VMID */
 		isb();
 		/* Restore the host's TCR_EL1 */
......
@@ -1003,6 +1003,20 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+static bool access_amu(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+		       const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+
+	return false;
+}
+
+/* Macro to expand the AMU counter and type registers*/
+#define AMU_AMEVCNTR0_EL0(n) { SYS_DESC(SYS_AMEVCNTR0_EL0(n)), access_amu }
+#define AMU_AMEVTYPE0_EL0(n) { SYS_DESC(SYS_AMEVTYPE0_EL0(n)), access_amu }
+#define AMU_AMEVCNTR1_EL0(n) { SYS_DESC(SYS_AMEVCNTR1_EL0(n)), access_amu }
+#define AMU_AMEVTYPE1_EL0(n) { SYS_DESC(SYS_AMEVTYPE1_EL0(n)), access_amu }
+
 static bool trap_ptrauth(struct kvm_vcpu *vcpu,
 			 struct sys_reg_params *p,
 			 const struct sys_reg_desc *rd)
 
@@ -1078,13 +1092,25 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
 	u64 val = raz ? 0 : read_sanitised_ftr_reg(id);
 
-	if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) {
-		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+	if (id == SYS_ID_AA64PFR0_EL1) {
+		if (!vcpu_has_sve(vcpu))
+			val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+		val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT);
 	} else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
 		val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
 			 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 			 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 			 (0xfUL << ID_AA64ISAR1_GPI_SHIFT));
+	} else if (id == SYS_ID_AA64DFR0_EL1) {
+		/* Limit guests to PMUv3 for ARMv8.1 */
+		val = cpuid_feature_cap_perfmon_field(val,
+						ID_AA64DFR0_PMUVER_SHIFT,
+						ID_AA64DFR0_PMUVER_8_1);
+	} else if (id == SYS_ID_DFR0_EL1) {
+		/* Limit guests to PMUv3 for ARMv8.1 */
+		val = cpuid_feature_cap_perfmon_field(val,
+						ID_DFR0_PERFMON_SHIFT,
+						ID_DFR0_PERFMON_8_1);
 	}
 
 	return val;
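[Editor's note] cpuid_feature_cap_perfmon_field() is introduced elsewhere in this series ("arm64: cpufeature: Extract capped perfmon fields"). A rough stand-alone model of its behaviour, assuming the upstream semantics of treating 0xf (IMPLEMENTATION DEFINED) as unimplemented and clamping anything above the cap; this is a sketch, not the kernel implementation:

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t cap_perfmon_field(uint64_t reg, unsigned int shift, uint64_t cap)
	{
		uint64_t field = (reg >> shift) & 0xf;

		/* 0xf means IMPLEMENTATION DEFINED PMU: hide it entirely */
		if (field == 0xf)
			field = 0;
		else if (field > cap)
			field = cap;

		reg &= ~(0xfULL << shift);
		reg |= field << shift;
		return reg;
	}

	int main(void)
	{
		/* PMUVer field (bits 11:8 of ID_AA64DFR0_EL1) reading 0x5 */
		uint64_t dfr0 = 0x5ULL << 8;

		/* Capped to 0x4 (PMUv3 for ARMv8.1) before the guest sees it */
		printf("0x%llx\n", (unsigned long long)cap_perfmon_field(dfr0, 8, 4));
		return 0;
	}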
@@ -1565,6 +1591,79 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
 	{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
{ SYS_DESC(SYS_AMCR_EL0), access_amu },
{ SYS_DESC(SYS_AMCFGR_EL0), access_amu },
{ SYS_DESC(SYS_AMCGCR_EL0), access_amu },
{ SYS_DESC(SYS_AMUSERENR_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENCLR0_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENSET0_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENCLR1_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENSET1_EL0), access_amu },
AMU_AMEVCNTR0_EL0(0),
AMU_AMEVCNTR0_EL0(1),
AMU_AMEVCNTR0_EL0(2),
AMU_AMEVCNTR0_EL0(3),
AMU_AMEVCNTR0_EL0(4),
AMU_AMEVCNTR0_EL0(5),
AMU_AMEVCNTR0_EL0(6),
AMU_AMEVCNTR0_EL0(7),
AMU_AMEVCNTR0_EL0(8),
AMU_AMEVCNTR0_EL0(9),
AMU_AMEVCNTR0_EL0(10),
AMU_AMEVCNTR0_EL0(11),
AMU_AMEVCNTR0_EL0(12),
AMU_AMEVCNTR0_EL0(13),
AMU_AMEVCNTR0_EL0(14),
AMU_AMEVCNTR0_EL0(15),
AMU_AMEVTYPE0_EL0(0),
AMU_AMEVTYPE0_EL0(1),
AMU_AMEVTYPE0_EL0(2),
AMU_AMEVTYPE0_EL0(3),
AMU_AMEVTYPE0_EL0(4),
AMU_AMEVTYPE0_EL0(5),
AMU_AMEVTYPE0_EL0(6),
AMU_AMEVTYPE0_EL0(7),
AMU_AMEVTYPE0_EL0(8),
AMU_AMEVTYPE0_EL0(9),
AMU_AMEVTYPE0_EL0(10),
AMU_AMEVTYPE0_EL0(11),
AMU_AMEVTYPE0_EL0(12),
AMU_AMEVTYPE0_EL0(13),
AMU_AMEVTYPE0_EL0(14),
AMU_AMEVTYPE0_EL0(15),
AMU_AMEVCNTR1_EL0(0),
AMU_AMEVCNTR1_EL0(1),
AMU_AMEVCNTR1_EL0(2),
AMU_AMEVCNTR1_EL0(3),
AMU_AMEVCNTR1_EL0(4),
AMU_AMEVCNTR1_EL0(5),
AMU_AMEVCNTR1_EL0(6),
AMU_AMEVCNTR1_EL0(7),
AMU_AMEVCNTR1_EL0(8),
AMU_AMEVCNTR1_EL0(9),
AMU_AMEVCNTR1_EL0(10),
AMU_AMEVCNTR1_EL0(11),
AMU_AMEVCNTR1_EL0(12),
AMU_AMEVCNTR1_EL0(13),
AMU_AMEVCNTR1_EL0(14),
AMU_AMEVCNTR1_EL0(15),
AMU_AMEVTYPE1_EL0(0),
AMU_AMEVTYPE1_EL0(1),
AMU_AMEVTYPE1_EL0(2),
AMU_AMEVTYPE1_EL0(3),
AMU_AMEVTYPE1_EL0(4),
AMU_AMEVTYPE1_EL0(5),
AMU_AMEVTYPE1_EL0(6),
AMU_AMEVTYPE1_EL0(7),
AMU_AMEVTYPE1_EL0(8),
AMU_AMEVTYPE1_EL0(9),
AMU_AMEVTYPE1_EL0(10),
AMU_AMEVTYPE1_EL0(11),
AMU_AMEVTYPE1_EL0(12),
AMU_AMEVTYPE1_EL0(13),
AMU_AMEVTYPE1_EL0(14),
AMU_AMEVTYPE1_EL0(15),
 	{ SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer },
 	{ SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer },
 	{ SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer },
......
@@ -124,3 +124,30 @@ unsigned int do_csum(const unsigned char *buff, int len)
 
 	return sum >> 16;
 }
__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum csum)
{
__uint128_t src, dst;
u64 sum = (__force u64)csum;
src = *(const __uint128_t *)saddr->s6_addr;
dst = *(const __uint128_t *)daddr->s6_addr;
sum += (__force u32)htonl(len);
#ifdef __LITTLE_ENDIAN
sum += (u32)proto << 24;
#else
sum += proto;
#endif
src += (src >> 64) | (src << 64);
dst += (dst >> 64) | (dst << 64);
sum = accumulate(sum, src >> 64);
sum = accumulate(sum, dst >> 64);
sum += ((sum >> 32) | (sum << 32));
return csum_fold((__force __wsum)(sum >> 32));
}
EXPORT_SYMBOL(csum_ipv6_magic);
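[Editor's note] A userspace model of the folding scheme used by csum_ipv6_magic() above (a sketch: accumulate() mirrors the 64-bit add-with-carry helper defined earlier in csum.c, csum_fold() mirrors the arm64 rotate-based fold, and the sample address is an arbitrary test value):

	#include <stdint.h>
	#include <stdio.h>

	/* Add with end-around carry, as in the kernel's accumulate() */
	static uint64_t accumulate(uint64_t sum, uint64_t data)
	{
		__uint128_t tmp = (__uint128_t)sum + data;

		return (uint64_t)tmp + (uint64_t)(tmp >> 64);
	}

	/* Fold a 32-bit partial checksum to 16 bits, rotating the carry in */
	static uint16_t csum_fold(uint32_t sum)
	{
		sum += (sum >> 16) | (sum << 16);
		return (uint16_t)~(sum >> 16);
	}

	int main(void)
	{
		/* Arbitrary 128-bit "address": 2001:db8::1 */
		__uint128_t src = ((__uint128_t)0x20010db800000000ULL << 64) | 1;
		uint64_t sum = 0;

		src += (src >> 64) | (src << 64);	/* fold 128 -> 64 bits */
		sum = accumulate(sum, (uint64_t)(src >> 64));
		sum += (sum >> 32) | (sum << 32);	/* fold 64 -> 32 bits */
		printf("partial sum: 0x%04x\n",
		       (unsigned)csum_fold((uint32_t)(sum >> 32)));
		return 0;
	}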
@@ -186,7 +186,7 @@ CPU_LE( rev data2, data2 )
 	 * as carry-propagation can corrupt the upper bits if the trailing
 	 * bytes in the string contain 0x01.
 	 * However, if there is no NUL byte in the dword, we can generate
-	 * the result directly. We ca not just subtract the bytes as the
+	 * the result directly. We cannot just subtract the bytes as the
 	 * MSB might be significant.
 	 */
 CPU_BE( cbnz has_nul, 1f )
......
@@ -6,6 +6,7 @@
  * Copyright (C) 2012 ARM Ltd.
  */
 
+#include <linux/bitfield.h>
 #include <linux/bitops.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -254,10 +255,37 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 /* Errata workaround post TTBRx_EL1 update. */
 asmlinkage void post_ttbr_update_workaround(void)
 {
+	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456))
+		return;
+
 	asm(ALTERNATIVE("nop; nop; nop",
 			"ic iallu; dsb nsh; isb",
-			ARM64_WORKAROUND_CAVIUM_27456,
-			CONFIG_CAVIUM_ERRATUM_27456));
+			ARM64_WORKAROUND_CAVIUM_27456));
+}
+
+void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
+{
+	unsigned long ttbr1 = read_sysreg(ttbr1_el1);
+	unsigned long asid = ASID(mm);
+	unsigned long ttbr0 = phys_to_ttbr(pgd_phys);
+
+	/* Skip CNP for the reserved ASID */
+	if (system_supports_cnp() && asid)
+		ttbr0 |= TTBR_CNP_BIT;
+
+	/* SW PAN needs a copy of the ASID in TTBR0 for entry */
+	if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN))
+		ttbr0 |= FIELD_PREP(TTBR_ASID_MASK, asid);
+
+	/* Set ASID in TTBR1 since TCR.A1 is set */
+	ttbr1 &= ~TTBR_ASID_MASK;
+	ttbr1 |= FIELD_PREP(TTBR_ASID_MASK, asid);
+
+	write_sysreg(ttbr1, ttbr1_el1);
+	isb();
+	write_sysreg(ttbr0, ttbr0_el1);
+	isb();
+	post_ttbr_update_workaround();
 }
 
 static int asids_init(void)
......
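[Editor's note] The C rewrite of cpu_do_switch_mm() replaces the assembly bitfield insert (`bfi x2, x1, #48, #16`) with FIELD_PREP() on TTBR_ASID_MASK (bits 63:48). A stand-alone model of that field insertion; field_prep() below is a simplified stand-in for the kernel macro, not its implementation:

	#include <stdint.h>
	#include <stdio.h>

	#define TTBR_ASID_MASK 0xffff000000000000ULL

	static uint64_t field_prep(uint64_t mask, uint64_t val)
	{
		/* __builtin_ctzll(mask) = index of the field's lowest bit (48) */
		return (val << __builtin_ctzll(mask)) & mask;
	}

	int main(void)
	{
		uint64_t ttbr1 = 0;
		uint64_t asid = 0x1234;

		ttbr1 &= ~TTBR_ASID_MASK;
		ttbr1 |= field_prep(TTBR_ASID_MASK, asid);

		/* Prints 0x1234000000000000: the ASID lands in bits 63:48 */
		printf("ttbr1 = 0x%016llx\n", (unsigned long long)ttbr1);
		return 0;
	}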
@@ -131,6 +131,7 @@ alternative_endif
 	ubfx	x11, x11, #1, #1
 	msr	oslar_el1, x11
 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_amuserenr_el0 x0			// Disable AMU access from EL0
 
 alternative_if ARM64_HAS_RAS_EXTN
 	msr_s	SYS_DISR_EL1, xzr
 
@@ -142,34 +143,6 @@ SYM_FUNC_END(cpu_do_resume)
 	.popsection
 #endif
-/*
- *	cpu_do_switch_mm(pgd_phys, tsk)
- *
- *	Set the translation table base pointer to be pgd_phys.
- *
- *	- pgd_phys - physical address of new TTB
- */
-SYM_FUNC_START(cpu_do_switch_mm)
-	mrs	x2, ttbr1_el1
-	mmid	x1, x1				// get mm->context.id
-	phys_to_ttbr	x3, x0
-alternative_if ARM64_HAS_CNP
-	cbz	x1, 1f				// skip CNP for reserved ASID
-	orr	x3, x3, #TTBR_CNP_BIT
-1:
-alternative_else_nop_endif
-#ifdef CONFIG_ARM64_SW_TTBR0_PAN
-	bfi	x3, x1, #48, #16		// set the ASID field in TTBR0
-#endif
-	bfi	x2, x1, #48, #16		// set the ASID
-	msr	ttbr1_el1, x2			// in TTBR1 (since TCR.A1 is set)
-	isb
-	msr	ttbr0_el1, x3			// now update TTBR0
-	isb
-	b	post_ttbr_update_workaround	// Back to C code...
-SYM_FUNC_END(cpu_do_switch_mm)
-
 	.pushsection ".idmap.text", "awx"
 
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
@@ -423,6 +396,8 @@ SYM_FUNC_START(__cpu_setup)
 	isb					// Unmask debug exceptions now,
 	enable_dbg				// since this is per-cpu
 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_amuserenr_el0 x0			// Disable AMU access from EL0
+
 	/*
 	 * Memory region attributes
 	 */
......
@@ -21,6 +21,10 @@
 #include <linux/sched.h>
 #include <linux/smp.h>
 
+__weak bool arch_freq_counters_available(struct cpumask *cpus)
+{
+	return false;
+}
 DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
 void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
 
@@ -29,6 +33,14 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
 	unsigned long scale;
 	int i;
 
+	/*
+	 * If the use of counters for FIE is enabled, just return as we don't
+	 * want to update the scale factor with information from CPUFREQ.
+	 * Instead the scale factor will be updated from arch_scale_freq_tick.
+	 */
+	if (arch_freq_counters_available(cpus))
+		return;
+
 	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
 
 	for_each_cpu(i, cpus)
......
@@ -885,6 +885,17 @@ static int arch_timer_starting_cpu(unsigned int cpu)
 	return 0;
 }
 
+static int validate_timer_rate(void)
+{
+	if (!arch_timer_rate)
+		return -EINVAL;
+
+	/* Arch timer frequency < 1MHz can cause trouble */
+	WARN_ON(arch_timer_rate < 1000000);
+
+	return 0;
+}
+
 /*
  * For historical reasons, when probing with DT we use whichever (non-zero)
  * rate was probed first, and don't verify that others match. If the first node
 
@@ -900,7 +911,7 @@ static void arch_timer_of_configure_rate(u32 rate, struct device_node *np)
 		arch_timer_rate = rate;
 
 	/* Check the timer frequency. */
-	if (arch_timer_rate == 0)
+	if (validate_timer_rate())
 		pr_warn("frequency not available\n");
 }
 
@@ -1594,9 +1605,10 @@ static int __init arch_timer_acpi_init(struct acpi_table_header *table)
 	 * CNTFRQ value. This *must* be correct.
 	 */
 	arch_timer_rate = arch_timer_get_cntfrq();
-	if (!arch_timer_rate) {
+	ret = validate_timer_rate();
+	if (ret) {
 		pr_err(FW_BUG "frequency not available.\n");
-		return -EINVAL;
+		return ret;
 	}
 
 	arch_timer_uses_ppi = arch_timer_select_ppi();
......
@@ -1725,6 +1725,26 @@ unsigned int cpufreq_quick_get_max(unsigned int cpu)
 }
 EXPORT_SYMBOL(cpufreq_quick_get_max);
 
+/**
+ * cpufreq_get_hw_max_freq - get the max hardware frequency of the CPU
+ * @cpu: CPU number
+ *
+ * The default return value is the max_freq field of cpuinfo.
+ */
+__weak unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
+{
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	unsigned int ret_freq = 0;
+
+	if (policy) {
+		ret_freq = policy->cpuinfo.max_freq;
+		cpufreq_cpu_put(policy);
+	}
+
+	return ret_freq;
+}
+EXPORT_SYMBOL(cpufreq_get_hw_max_freq);
+
 static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
 {
 	if (unlikely(policy_is_inactive(policy)))
......
@@ -267,26 +267,19 @@ static struct sdei_event *sdei_event_create(u32 event_num,
 		event->private_registered = regs;
 	}
 
-	if (sdei_event_find(event_num)) {
-		kfree(event->registered);
-		kfree(event);
-		event = ERR_PTR(-EBUSY);
-	} else {
-		spin_lock(&sdei_list_lock);
-		list_add(&event->list, &sdei_list);
-		spin_unlock(&sdei_list_lock);
-	}
+	spin_lock(&sdei_list_lock);
+	list_add(&event->list, &sdei_list);
+	spin_unlock(&sdei_list_lock);
 
 	return event;
 }
 
-static void sdei_event_destroy(struct sdei_event *event)
+static void sdei_event_destroy_llocked(struct sdei_event *event)
 {
 	lockdep_assert_held(&sdei_events_lock);
+	lockdep_assert_held(&sdei_list_lock);
 
-	spin_lock(&sdei_list_lock);
 	list_del(&event->list);
-	spin_unlock(&sdei_list_lock);
 
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		kfree(event->registered);
 
@@ -296,6 +289,13 @@ static void sdei_event_destroy(struct sdei_event *event)
 	kfree(event);
 }
 
+static void sdei_event_destroy(struct sdei_event *event)
+{
+	spin_lock(&sdei_list_lock);
+	sdei_event_destroy_llocked(event);
+	spin_unlock(&sdei_list_lock);
+}
+
 static int sdei_api_get_version(u64 *version)
 {
 	return invoke_sdei_fn(SDEI_1_0_FN_SDEI_VERSION, 0, 0, 0, 0, 0, version);
 
@@ -412,14 +412,19 @@ int sdei_event_enable(u32 event_num)
 		return -ENOENT;
 	}
 
-	spin_lock(&sdei_list_lock);
-	event->reenable = true;
-	spin_unlock(&sdei_list_lock);
 
+	cpus_read_lock();
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		err = sdei_api_event_enable(event->event_num);
 	else
 		err = sdei_do_cross_call(_local_event_enable, event);
+
+	if (!err) {
+		spin_lock(&sdei_list_lock);
+		event->reenable = true;
+		spin_unlock(&sdei_list_lock);
+	}
+	cpus_read_unlock();
 	mutex_unlock(&sdei_events_lock);
 
 	return err;
 
@@ -491,11 +496,6 @@ static int _sdei_event_unregister(struct sdei_event *event)
 {
 	lockdep_assert_held(&sdei_events_lock);
 
-	spin_lock(&sdei_list_lock);
-	event->reregister = false;
-	event->reenable = false;
-	spin_unlock(&sdei_list_lock);
-
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		return sdei_api_event_unregister(event->event_num);
 
@@ -518,6 +518,11 @@ int sdei_event_unregister(u32 event_num)
 			break;
 		}
 
+		spin_lock(&sdei_list_lock);
+		event->reregister = false;
+		event->reenable = false;
+		spin_unlock(&sdei_list_lock);
+
 		err = _sdei_event_unregister(event);
 		if (err)
 			break;
 
@@ -585,26 +590,15 @@ static int _sdei_event_register(struct sdei_event *event)
 	lockdep_assert_held(&sdei_events_lock);
 
-	spin_lock(&sdei_list_lock);
-	event->reregister = true;
-	spin_unlock(&sdei_list_lock);
-
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		return sdei_api_event_register(event->event_num,
 					       sdei_entry_point,
 					       event->registered,
 					       SDEI_EVENT_REGISTER_RM_ANY, 0);
 
 	err = sdei_do_cross_call(_local_event_register, event);
-	if (err) {
-		spin_lock(&sdei_list_lock);
-		event->reregister = false;
-		event->reenable = false;
-		spin_unlock(&sdei_list_lock);
+	if (err)
 		sdei_do_cross_call(_local_event_unregister, event);
-	}
 
 	return err;
 }
 
@@ -632,12 +626,18 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
 			break;
 		}
 
+		cpus_read_lock();
 		err = _sdei_event_register(event);
 		if (err) {
 			sdei_event_destroy(event);
 			pr_warn("Failed to register event %u: %d\n", event_num,
 				err);
+		} else {
+			spin_lock(&sdei_list_lock);
+			event->reregister = true;
+			spin_unlock(&sdei_list_lock);
 		}
+		cpus_read_unlock();
 	} while (0);
 
 	mutex_unlock(&sdei_events_lock);
 
@@ -645,16 +645,17 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
 }
 EXPORT_SYMBOL(sdei_event_register);
 
-static int sdei_reregister_event(struct sdei_event *event)
+static int sdei_reregister_event_llocked(struct sdei_event *event)
 {
 	int err;
 
 	lockdep_assert_held(&sdei_events_lock);
+	lockdep_assert_held(&sdei_list_lock);
 
 	err = _sdei_event_register(event);
 	if (err) {
 		pr_err("Failed to re-register event %u\n", event->event_num);
-		sdei_event_destroy(event);
+		sdei_event_destroy_llocked(event);
 		return err;
 	}
 
@@ -683,7 +684,7 @@ static int sdei_reregister_shared(void)
 			continue;
 
 		if (event->reregister) {
-			err = sdei_reregister_event(event);
+			err = sdei_reregister_event_llocked(event);
 			if (err)
 				break;
 		}
......
@@ -328,15 +328,15 @@ static ssize_t arm_ccn_pmu_event_show(struct device *dev,
 			struct arm_ccn_pmu_event, attr);
 	ssize_t res;
 
-	res = snprintf(buf, PAGE_SIZE, "type=0x%x", event->type);
+	res = scnprintf(buf, PAGE_SIZE, "type=0x%x", event->type);
 	if (event->event)
-		res += snprintf(buf + res, PAGE_SIZE - res, ",event=0x%x",
+		res += scnprintf(buf + res, PAGE_SIZE - res, ",event=0x%x",
 				event->event);
 	if (event->def)
-		res += snprintf(buf + res, PAGE_SIZE - res, ",%s",
+		res += scnprintf(buf + res, PAGE_SIZE - res, ",%s",
 				event->def);
 	if (event->mask)
-		res += snprintf(buf + res, PAGE_SIZE - res, ",mask=0x%x",
+		res += scnprintf(buf + res, PAGE_SIZE - res, ",mask=0x%x",
 				event->mask);
 
 	/* Arguments required by an event */
 
@@ -344,25 +344,25 @@ static ssize_t arm_ccn_pmu_event_show(struct device *dev,
 	case CCN_TYPE_CYCLES:
 		break;
 	case CCN_TYPE_XP:
-		res += snprintf(buf + res, PAGE_SIZE - res,
+		res += scnprintf(buf + res, PAGE_SIZE - res,
 				",xp=?,vc=?");
 		if (event->event == CCN_EVENT_WATCHPOINT)
-			res += snprintf(buf + res, PAGE_SIZE - res,
+			res += scnprintf(buf + res, PAGE_SIZE - res,
 				",port=?,dir=?,cmp_l=?,cmp_h=?,mask=?");
 		else
-			res += snprintf(buf + res, PAGE_SIZE - res,
+			res += scnprintf(buf + res, PAGE_SIZE - res,
 				",bus=?");
 
 		break;
 	case CCN_TYPE_MN:
-		res += snprintf(buf + res, PAGE_SIZE - res, ",node=%d", ccn->mn_id);
+		res += scnprintf(buf + res, PAGE_SIZE - res, ",node=%d", ccn->mn_id);
 		break;
 	default:
-		res += snprintf(buf + res, PAGE_SIZE - res, ",node=?");
+		res += scnprintf(buf + res, PAGE_SIZE - res, ",node=?");
 		break;
 	}
 
-	res += snprintf(buf + res, PAGE_SIZE - res, "\n");
+	res += scnprintf(buf + res, PAGE_SIZE - res, "\n");
 
 	return res;
 }
......
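[Editor's note] The switch matters because snprintf() returns the length the output would have taken, not what was actually written, so the chained "buf + res" offsets above can walk past the buffer once it fills; the kernel's scnprintf() returns the number of bytes actually stored. A tiny userspace illustration of the snprintf() half (values are arbitrary):

	#include <stdio.h>

	int main(void)
	{
		char buf[8];

		/* "type=0x12345" needs 12 chars, but only 7 + NUL fit in buf */
		int res = snprintf(buf, sizeof(buf), "type=0x%x", 0x12345);

		/* res is 12: using buf + res as the next write offset would
		 * point past the end of buf, which scnprintf() avoids. */
		printf("wrote \"%s\", snprintf returned %d\n", buf, res);
		return 0;
	}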
@@ -831,7 +831,7 @@ static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
 	 * parts and give userspace a fighting chance of getting some
 	 * useful data out of it.
 	 */
-	if (!nr_pages || (snapshot && (nr_pages & 1)))
+	if (snapshot && (nr_pages & 1))
 		return NULL;
 
 	if (cpu == -1)
@@ -33,6 +33,8 @@ unsigned long topology_get_freq_scale(int cpu)
 	return per_cpu(freq_scale, cpu);
 }
 
+bool arch_freq_counters_available(struct cpumask *cpus);
+
 struct cpu_topology {
 	int thread_id;
 	int core_id;
......
@@ -205,6 +205,7 @@ static inline bool policy_is_shared(struct cpufreq_policy *policy)
 unsigned int cpufreq_get(unsigned int cpu);
 unsigned int cpufreq_quick_get(unsigned int cpu);
 unsigned int cpufreq_quick_get_max(unsigned int cpu);
+unsigned int cpufreq_get_hw_max_freq(unsigned int cpu);
 void disable_cpufreq(void);
 
 u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy);
 
@@ -232,6 +233,10 @@ static inline unsigned int cpufreq_quick_get_max(unsigned int cpu)
 {
 	return 0;
 }
+static inline unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
+{
+	return 0;
+}
 static inline void disable_cpufreq(void) { }
 #endif
......
@@ -80,6 +80,7 @@ struct arm_pmu {
 	struct pmu	pmu;
 	cpumask_t	supported_cpus;
 	char		*name;
+	int		pmuver;
 	irqreturn_t	(*handle_irq)(struct arm_pmu *pmu);
 	void		(*enable)(struct perf_event *event);
 	void		(*disable)(struct perf_event *event);
......