Commit 27fd8085 authored by Linus Torvalds

Merge tag 'x86-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull misc x86 fixes from Ingo Molnar:

 - Follow up fixes for the BHI mitigations code

 - Fix !SPECULATION_MITIGATIONS bug not turning off mitigations as
   expected

 - Work around an APIC emulation bug when the kernel is built with Clang
   and run as a SEV guest

 - Follow up x86 topology fixes

* tag 'x86-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu/amd: Move TOPOEXT enablement into the topology parser
  x86/cpu/amd: Make the NODEID_MSR union actually work
  x86/cpu/amd: Make the CPUID 0x80000008 parser correct
  x86/bugs: Replace CONFIG_SPECTRE_BHI_{ON,OFF} with CONFIG_MITIGATION_SPECTRE_BHI
  x86/bugs: Remove CONFIG_BHI_MITIGATION_AUTO and spectre_bhi=auto
  x86/bugs: Clarify that syscall hardening isn't a BHI mitigation
  x86/bugs: Fix BHI handling of RRSBA
  x86/bugs: Rename various 'ia32_cap' variables to 'x86_arch_cap_msr'
  x86/bugs: Cache the value of MSR_IA32_ARCH_CAPABILITIES
  x86/bugs: Fix BHI documentation
  x86/cpu: Actually turn off mitigations by default for SPECULATION_MITIGATIONS=n
  x86/topology: Don't update cpu_possible_map in topo_set_cpuids()
  x86/bugs: Fix return type of spectre_bhi_state()
  x86/apic: Force native_apic_mem_read() to use the MOV instruction
parents c748fc3b 7211274f
@@ -439,12 +439,12 @@ The possible values in this file are:
       - System is protected by retpoline
     * - BHI: BHI_DIS_S
       - System is protected by BHI_DIS_S
-    * - BHI: SW loop; KVM SW loop
+    * - BHI: SW loop, KVM SW loop
       - System is protected by software clearing sequence
-    * - BHI: Syscall hardening
-      - Syscalls are hardened against BHI
-    * - BHI: Syscall hardening; KVM: SW loop
-      - System is protected from userspace attacks by syscall hardening; KVM is protected by software clearing sequence
+    * - BHI: Vulnerable
+      - System is vulnerable to BHI
+    * - BHI: Vulnerable, KVM: SW loop
+      - System is vulnerable; KVM is protected by software clearing sequence
 
 Full mitigation might require a microcode update from the CPU
 vendor. When the necessary microcode is not available, the kernel will
@@ -661,18 +661,14 @@ kernel command line.
 	spectre_bhi=
 			[X86] Control mitigation of Branch History Injection
-			(BHI) vulnerability. Syscalls are hardened against BHI
-			regardless of this setting. This setting affects the deployment
+			(BHI) vulnerability. This setting affects the deployment
 			of the HW BHI control and the SW BHB clearing sequence.
 
 			on
-				unconditionally enable.
+				(default) Enable the HW or SW mitigation as
+				needed.
 
 			off
-				unconditionally disable.
-
-			auto
-				enable if hardware mitigation
-				control(BHI_DIS_S) is available, otherwise
-				enable alternate mitigation in KVM.
+				Disable the mitigation.
 
 For spectre_v2_user see Documentation/admin-guide/kernel-parameters.txt
......
@@ -3444,6 +3444,7 @@
 				retbleed=off [X86]
 				spec_rstack_overflow=off [X86]
 				spec_store_bypass_disable=off [X86,PPC]
+				spectre_bhi=off [X86]
 				spectre_v2_user=off [X86]
 				srbds=off [X86,INTEL]
 				ssbd=force-off [ARM64]
@@ -6064,16 +6065,13 @@
 			See Documentation/admin-guide/laptops/sonypi.rst
 
 	spectre_bhi=	[X86] Control mitigation of Branch History Injection
-			(BHI) vulnerability. Syscalls are hardened against BHI
-			reglardless of this setting. This setting affects the
+			(BHI) vulnerability. This setting affects the
 			deployment of the HW BHI control and the SW BHB
 			clearing sequence.
 
-			on   - unconditionally enable.
-			off  - unconditionally disable.
-			auto - (default) enable hardware mitigation
-			       (BHI_DIS_S) if available, otherwise enable
-			       alternate mitigation in KVM.
+			on   - (default) Enable the HW or SW mitigation
+			       as needed.
+			off  - Disable the mitigation.
 
 	spectre_v2=	[X86,EARLY] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
......
@@ -2633,32 +2633,16 @@ config MITIGATION_RFDS
 	  stored in floating point, vector and integer registers.
 	  See also <file:Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst>
 
-choice
-	prompt "Clear branch history"
+config MITIGATION_SPECTRE_BHI
+	bool "Mitigate Spectre-BHB (Branch History Injection)"
 	depends on CPU_SUP_INTEL
-	default SPECTRE_BHI_ON
+	default y
 	help
 	  Enable BHI mitigations. BHI attacks are a form of Spectre V2 attacks
 	  where the branch history buffer is poisoned to speculatively steer
 	  indirect branches.
 	  See <file:Documentation/admin-guide/hw-vuln/spectre.rst>
 
-config SPECTRE_BHI_ON
-	bool "on"
-	help
-	  Equivalent to setting spectre_bhi=on command line parameter.
-
-config SPECTRE_BHI_OFF
-	bool "off"
-	help
-	  Equivalent to setting spectre_bhi=off command line parameter.
-
-config SPECTRE_BHI_AUTO
-	bool "auto"
-	depends on BROKEN
-	help
-	  Equivalent to setting spectre_bhi=auto command line parameter.
-
-endchoice
-
 endif
 
 config ARCH_HAS_ADD_PAGES
......
@@ -13,6 +13,7 @@
 #include <asm/mpspec.h>
 #include <asm/msr.h>
 #include <asm/hardirq.h>
+#include <asm/io.h>
 
 #define ARCH_APICTIMER_STOPS_ON_C3 1
@@ -98,7 +99,7 @@ static inline void native_apic_mem_write(u32 reg, u32 v)
 static inline u32 native_apic_mem_read(u32 reg)
 {
-	return *((volatile u32 *)(APIC_BASE + reg));
+	return readl((void __iomem *)(APIC_BASE + reg));
 }
 
 static inline void native_apic_mem_eoi(void)
......
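The native_apic_mem_read() change above is the workaround for the Clang/SEV guest issue called out in the merge message: with a plain volatile dereference the compiler is free to fold the APIC MMIO load into a later instruction (Clang emits e.g. a TEST with a memory operand), while the SEV-ES/SEV-SNP #VC handler's MMIO emulation only decodes a small set of access instructions, essentially MOV forms, so the folded access cannot be emulated. readl() is built on inline assembly that always emits a discrete MOV. A stand-alone sketch of the difference, using hypothetical helper names rather than the kernel's asm/io.h:

#include <stdint.h>

/* Illustration only: force a single "movl (mem), reg", the one access
 * form a MOV-only MMIO instruction decoder can always emulate. */
static inline uint32_t mmio_read32_mov(const volatile void *addr)
{
    uint32_t val;

    asm volatile("movl %1, %0"
                 : "=r" (val)
                 : "m" (*(const volatile uint32_t *)addr)
                 : "memory");
    return val;
}

/* A plain volatile load: still exactly one 32-bit access, but the
 * compiler may merge it into a following TEST/CMP memory operand. */
static inline uint32_t mmio_read32_deref(const volatile void *addr)
{
    return *(const volatile uint32_t *)addr;
}

Either helper performs one 32-bit access; the only difference is which instruction the compiler is allowed to pick, and that is exactly what the #VC emulation path cares about.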
@@ -1687,11 +1687,11 @@ static int x2apic_state;
 
 static bool x2apic_hw_locked(void)
 {
-	u64 ia32_cap;
+	u64 x86_arch_cap_msr;
 	u64 msr;
 
-	ia32_cap = x86_read_arch_cap_msr();
-	if (ia32_cap & ARCH_CAP_XAPIC_DISABLE) {
+	x86_arch_cap_msr = x86_read_arch_cap_msr();
+	if (x86_arch_cap_msr & ARCH_CAP_XAPIC_DISABLE) {
 		rdmsrl(MSR_IA32_XAPIC_DISABLE_STATUS, msr);
 		return (msr & LEGACY_XAPIC_DISABLED);
 	}
......
@@ -535,7 +535,6 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
 
 static void early_init_amd(struct cpuinfo_x86 *c)
 {
-	u64 value;
 	u32 dummy;
 
 	if (c->x86 >= 0xf)
@@ -603,20 +602,6 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 
 	early_detect_mem_encrypt(c);
 
-	/* Re-enable TopologyExtensions if switched off by BIOS */
-	if (c->x86 == 0x15 &&
-	    (c->x86_model >= 0x10 && c->x86_model <= 0x6f) &&
-	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
-
-		if (msr_set_bit(0xc0011005, 54) > 0) {
-			rdmsrl(0xc0011005, value);
-			if (value & BIT_64(54)) {
-				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
-				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
-			}
-		}
-	}
-
 	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
 		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
......
@@ -61,6 +61,8 @@ EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
 EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
+static u64 __ro_after_init x86_arch_cap_msr;
+
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
 void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
@@ -144,6 +146,8 @@ void __init cpu_select_mitigations(void)
 		x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
 	}
 
+	x86_arch_cap_msr = x86_read_arch_cap_msr();
+
 	/* Select the proper CPU mitigations before patching alternatives: */
 	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
@@ -301,8 +305,6 @@ static const char * const taa_strings[] = {
 
 static void __init taa_select_mitigation(void)
 {
-	u64 ia32_cap;
-
 	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
 		taa_mitigation = TAA_MITIGATION_OFF;
 		return;
@@ -341,9 +343,8 @@ static void __init taa_select_mitigation(void)
 	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
 	 * update is required.
 	 */
-	ia32_cap = x86_read_arch_cap_msr();
-	if ( (ia32_cap & ARCH_CAP_MDS_NO) &&
-	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+	if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
+	    !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
 		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
 
 	/*
@@ -401,8 +402,6 @@ static const char * const mmio_strings[] = {
 
 static void __init mmio_select_mitigation(void)
 {
-	u64 ia32_cap;
-
 	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
 	     boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN) ||
 	     cpu_mitigations_off()) {
@@ -413,8 +412,6 @@ static void __init mmio_select_mitigation(void)
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
 
-	ia32_cap = x86_read_arch_cap_msr();
-
 	/*
 	 * Enable CPU buffer clear mitigation for host and VMM, if also affected
 	 * by MDS or TAA. Otherwise, enable mitigation for VMM only.
@@ -437,7 +434,7 @@ static void __init mmio_select_mitigation(void)
 	 * be propagated to uncore buffers, clearing the Fill buffers on idle
 	 * is required irrespective of SMT state.
 	 */
-	if (!(ia32_cap & ARCH_CAP_FBSDP_NO))
+	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&mds_idle_clear);
 
 	/*
@@ -447,10 +444,10 @@ static void __init mmio_select_mitigation(void)
 	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
 	 * affected systems.
 	 */
-	if ((ia32_cap & ARCH_CAP_FB_CLEAR) ||
+	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
 	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
 	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
-	     !(ia32_cap & ARCH_CAP_MDS_NO)))
+	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
 		mmio_mitigation = MMIO_MITIGATION_VERW;
 	else
 		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
@@ -508,7 +505,7 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
 
-	if (x86_read_arch_cap_msr() & ARCH_CAP_RFDS_CLEAR)
+	if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
@@ -659,8 +656,6 @@ void update_srbds_msr(void)
 
 static void __init srbds_select_mitigation(void)
 {
-	u64 ia32_cap;
-
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
@@ -669,8 +664,7 @@ static void __init srbds_select_mitigation(void)
 	 * are only exposed to SRBDS when TSX is enabled or when CPU is affected
 	 * by Processor MMIO Stale Data vulnerability.
 	 */
-	ia32_cap = x86_read_arch_cap_msr();
-	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM) &&
+	if ((x86_arch_cap_msr & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM) &&
 	    !boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
 		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
 	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
@@ -813,7 +807,7 @@ static void __init gds_select_mitigation(void)
 	/* Will verify below that mitigation _can_ be disabled */
 
 	/* No microcode */
-	if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL)) {
+	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
 		if (gds_mitigation == GDS_MITIGATION_FORCE) {
 			/*
 			 * This only needs to be done on the boot CPU so do it
@@ -1544,20 +1538,25 @@ static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
 	return SPECTRE_V2_RETPOLINE;
 }
 
+static bool __ro_after_init rrsba_disabled;
+
 /* Disable in-kernel use of non-RSB RET predictors */
 static void __init spec_ctrl_disable_kernel_rrsba(void)
 {
-	u64 ia32_cap;
+	if (rrsba_disabled)
+		return;
 
-	if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL))
+	if (!(x86_arch_cap_msr & ARCH_CAP_RRSBA)) {
+		rrsba_disabled = true;
 		return;
+	}
 
-	ia32_cap = x86_read_arch_cap_msr();
+	if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL))
+		return;
 
-	if (ia32_cap & ARCH_CAP_RRSBA) {
-		x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
-		update_spec_ctrl(x86_spec_ctrl_base);
-	}
+	x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
+	update_spec_ctrl(x86_spec_ctrl_base);
+	rrsba_disabled = true;
 }
 
 static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
@@ -1626,13 +1625,10 @@ static bool __init spec_ctrl_bhi_dis(void)
 enum bhi_mitigations {
 	BHI_MITIGATION_OFF,
 	BHI_MITIGATION_ON,
-	BHI_MITIGATION_AUTO,
 };
 
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_SPECTRE_BHI_ON)  ? BHI_MITIGATION_ON  :
-	IS_ENABLED(CONFIG_SPECTRE_BHI_OFF) ? BHI_MITIGATION_OFF :
-					     BHI_MITIGATION_AUTO;
+	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
 
 static int __init spectre_bhi_parse_cmdline(char *str)
 {
@@ -1643,8 +1639,6 @@ static int __init spectre_bhi_parse_cmdline(char *str)
 		bhi_mitigation = BHI_MITIGATION_OFF;
 	else if (!strcmp(str, "on"))
 		bhi_mitigation = BHI_MITIGATION_ON;
-	else if (!strcmp(str, "auto"))
-		bhi_mitigation = BHI_MITIGATION_AUTO;
 	else
 		pr_err("Ignoring unknown spectre_bhi option (%s)", str);
@@ -1658,9 +1652,11 @@ static void __init bhi_select_mitigation(void)
 		return;
 
 	/* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */
-	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) &&
-	    !(x86_read_arch_cap_msr() & ARCH_CAP_RRSBA))
-		return;
+	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+		spec_ctrl_disable_kernel_rrsba();
+		if (rrsba_disabled)
+			return;
+	}
 
 	if (spec_ctrl_bhi_dis())
 		return;
@@ -1672,9 +1668,6 @@ static void __init bhi_select_mitigation(void)
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
 	pr_info("Spectre BHI mitigation: SW BHB clearing on vm exit\n");
 
-	if (bhi_mitigation == BHI_MITIGATION_AUTO)
-		return;
-
 	/* Mitigate syscalls when the mitigation is forced =on */
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
 	pr_info("Spectre BHI mitigation: SW BHB clearing on syscall\n");
@@ -1908,8 +1901,6 @@ static void update_indir_branch_cond(void)
 /* Update the static key controlling the MDS CPU buffer clear in idle */
 static void update_mds_branch_idle(void)
 {
-	u64 ia32_cap = x86_read_arch_cap_msr();
-
 	/*
 	 * Enable the idle clearing if SMT is active on CPUs which are
 	 * affected only by MSBDS and not any other MDS variant.
@@ -1924,7 +1915,7 @@ static void update_mds_branch_idle(void)
 	if (sched_smt_active()) {
 		static_branch_enable(&mds_idle_clear);
 	} else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
-		   (ia32_cap & ARCH_CAP_FBSDP_NO)) {
+		   (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) {
 		static_branch_disable(&mds_idle_clear);
 	}
 }
@@ -2809,7 +2800,7 @@ static char *pbrsb_eibrs_state(void)
 	}
 }
 
-static const char * const spectre_bhi_state(void)
+static const char *spectre_bhi_state(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_BHI))
 		return "; BHI: Not affected";
@@ -2817,13 +2808,12 @@ static const char * const spectre_bhi_state(void)
 		return "; BHI: BHI_DIS_S";
 	else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
 		return "; BHI: SW loop, KVM: SW loop";
-	else if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
-		 !(x86_read_arch_cap_msr() & ARCH_CAP_RRSBA))
+	else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled)
 		return "; BHI: Retpoline";
 	else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT))
-		return "; BHI: Syscall hardening, KVM: SW loop";
+		return "; BHI: Vulnerable, KVM: SW loop";
 
-	return "; BHI: Vulnerable (Syscall hardening enabled)";
+	return "; BHI: Vulnerable";
 }
 
 static ssize_t spectre_v2_show_state(char *buf)
......
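The spectre_bhi_state() hunk above is purely a return-type cleanup: a top-level const on a value that is returned by copy has no effect, which is what -Wignored-qualifiers complains about. A minimal stand-alone illustration with hypothetical function names (not kernel code):

/* The second 'const' qualifies the returned pointer value itself; the
 * caller receives a copy, so the qualifier is meaningless and may be
 * flagged by -Wignored-qualifiers. */
const char * const bhi_state_before(void)
{
    return "; BHI: Not affected";
}

/* Same behaviour, no ignored qualifier: only the pointed-to const
 * (callers must not modify the string) carries meaning. */
const char *bhi_state_after(void)
{
    return "; BHI: Not affected";
}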
@@ -1284,25 +1284,25 @@ static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
 
 u64 x86_read_arch_cap_msr(void)
 {
-	u64 ia32_cap = 0;
+	u64 x86_arch_cap_msr = 0;
 
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, x86_arch_cap_msr);
 
-	return ia32_cap;
+	return x86_arch_cap_msr;
 }
 
-static bool arch_cap_mmio_immune(u64 ia32_cap)
+static bool arch_cap_mmio_immune(u64 x86_arch_cap_msr)
 {
-	return (ia32_cap & ARCH_CAP_FBSDP_NO &&
-		ia32_cap & ARCH_CAP_PSDP_NO &&
-		ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
+	return (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO &&
+		x86_arch_cap_msr & ARCH_CAP_PSDP_NO &&
+		x86_arch_cap_msr & ARCH_CAP_SBDR_SSDP_NO);
 }
 
-static bool __init vulnerable_to_rfds(u64 ia32_cap)
+static bool __init vulnerable_to_rfds(u64 x86_arch_cap_msr)
 {
 	/* The "immunity" bit trumps everything else: */
-	if (ia32_cap & ARCH_CAP_RFDS_NO)
+	if (x86_arch_cap_msr & ARCH_CAP_RFDS_NO)
 		return false;
 
 	/*
@@ -1310,7 +1310,7 @@ static bool __init vulnerable_to_rfds(u64 ia32_cap)
 	 * indicate that mitigation is needed because guest is running on a
 	 * vulnerable hardware or may migrate to such hardware:
 	 */
-	if (ia32_cap & ARCH_CAP_RFDS_CLEAR)
+	if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
 		return true;
 
 	/* Only consult the blacklist when there is no enumeration: */
@@ -1319,11 +1319,11 @@ static bool __init vulnerable_to_rfds(u64 ia32_cap)
 
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
-	u64 ia32_cap = x86_read_arch_cap_msr();
+	u64 x86_arch_cap_msr = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
 	if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
-	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	    !(x86_arch_cap_msr & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
 	if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
@@ -1335,7 +1335,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
 	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
-	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	    !(x86_arch_cap_msr & ARCH_CAP_SSB_NO) &&
 	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
@@ -1346,17 +1346,17 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	 * Don't use AutoIBRS when SNP is enabled because it degrades host
 	 * userspace indirect branch performance.
 	 */
-	if ((ia32_cap & ARCH_CAP_IBRS_ALL) ||
+	if ((x86_arch_cap_msr & ARCH_CAP_IBRS_ALL) ||
 	    (cpu_has(c, X86_FEATURE_AUTOIBRS) &&
 	     !cpu_feature_enabled(X86_FEATURE_SEV_SNP))) {
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 		if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
-		    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+		    !(x86_arch_cap_msr & ARCH_CAP_PBRSB_NO))
 			setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
 	}
 
 	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
-	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	    !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
 		if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
@@ -1375,9 +1375,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	 * TSX_CTRL check alone is not sufficient for cases when the microcode
 	 * update is not present or running as guest that don't get TSX_CTRL.
 	 */
-	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	if (!(x86_arch_cap_msr & ARCH_CAP_TAA_NO) &&
 	    (cpu_has(c, X86_FEATURE_RTM) ||
-	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+	     (x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
 	/*
@@ -1403,7 +1403,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	 * Set X86_BUG_MMIO_UNKNOWN for CPUs that are neither in the blacklist,
 	 * nor in the whitelist and also don't enumerate MSR ARCH_CAP MMIO bits.
 	 */
-	if (!arch_cap_mmio_immune(ia32_cap)) {
+	if (!arch_cap_mmio_immune(x86_arch_cap_msr)) {
 		if (cpu_matches(cpu_vuln_blacklist, MMIO))
 			setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
 		else if (!cpu_matches(cpu_vuln_whitelist, NO_MMIO))
@@ -1411,7 +1411,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	}
 
 	if (!cpu_has(c, X86_FEATURE_BTC_NO)) {
-		if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))
+		if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (x86_arch_cap_msr & ARCH_CAP_RSBA))
 			setup_force_cpu_bug(X86_BUG_RETBLEED);
 	}
@@ -1429,15 +1429,15 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	 * disabling AVX2. The only way to do this in HW is to clear XCR0[2],
 	 * which means that AVX will be disabled.
 	 */
-	if (cpu_matches(cpu_vuln_blacklist, GDS) && !(ia32_cap & ARCH_CAP_GDS_NO) &&
+	if (cpu_matches(cpu_vuln_blacklist, GDS) && !(x86_arch_cap_msr & ARCH_CAP_GDS_NO) &&
 	    boot_cpu_has(X86_FEATURE_AVX))
 		setup_force_cpu_bug(X86_BUG_GDS);
 
-	if (vulnerable_to_rfds(ia32_cap))
+	if (vulnerable_to_rfds(x86_arch_cap_msr))
 		setup_force_cpu_bug(X86_BUG_RFDS);
 
 	/* When virtualized, eIBRS could be hidden, assume vulnerable */
-	if (!(ia32_cap & ARCH_CAP_BHI_NO) &&
+	if (!(x86_arch_cap_msr & ARCH_CAP_BHI_NO) &&
 	    !cpu_matches(cpu_vuln_whitelist, NO_BHI) &&
 	    (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED) ||
 	     boot_cpu_has(X86_FEATURE_HYPERVISOR)))
@@ -1447,7 +1447,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		return;
 
 	/* Rogue Data Cache Load? No! */
-	if (ia32_cap & ARCH_CAP_RDCL_NO)
+	if (x86_arch_cap_msr & ARCH_CAP_RDCL_NO)
 		return;
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
......
@@ -123,7 +123,6 @@ static void topo_set_cpuids(unsigned int cpu, u32 apic_id, u32 acpi_id)
 	early_per_cpu(x86_cpu_to_apicid, cpu) = apic_id;
 	early_per_cpu(x86_cpu_to_acpiid, cpu) = acpi_id;
 #endif
-	set_cpu_possible(cpu, true);
 	set_cpu_present(cpu, true);
 }
@@ -210,7 +209,11 @@ static __init void topo_register_apic(u32 apic_id, u32 acpi_id, bool present)
 		topo_info.nr_disabled_cpus++;
 	}
 
-	/* Register present and possible CPUs in the domain maps */
+	/*
+	 * Register present and possible CPUs in the domain
+	 * maps. cpu_possible_map will be updated in
+	 * topology_init_possible_cpus() after enumeration is done.
+	 */
 	for (dom = TOPO_SMT_DOMAIN; dom < TOPO_MAX_DOMAIN; dom++)
 		set_bit(topo_apicid(apic_id, dom), apic_maps[dom].map);
 }
......
@@ -29,11 +29,21 @@ static bool parse_8000_0008(struct topo_scan *tscan)
 	if (!sft)
 		sft = get_count_order(ecx.cpu_nthreads + 1);
 
-	topology_set_dom(tscan, TOPO_SMT_DOMAIN, sft, ecx.cpu_nthreads + 1);
+	/*
+	 * cpu_nthreads describes the number of threads in the package
+	 * sft is the number of APIC ID bits per package
+	 *
+	 * As the number of actual threads per core is not described in
+	 * this leaf, just set the CORE domain shift and let the later
+	 * parsers set SMT shift. Assume one thread per core by default
+	 * which is correct if there are no other CPUID leafs to parse.
+	 */
+	topology_update_dom(tscan, TOPO_SMT_DOMAIN, 0, 1);
+	topology_set_dom(tscan, TOPO_CORE_DOMAIN, sft, ecx.cpu_nthreads + 1);
 	return true;
 }
 
-static void store_node(struct topo_scan *tscan, unsigned int nr_nodes, u16 node_id)
+static void store_node(struct topo_scan *tscan, u16 nr_nodes, u16 node_id)
 {
 	/*
 	 * Starting with Fam 17h the DIE domain could probably be used to
@@ -73,12 +83,14 @@ static bool parse_8000_001e(struct topo_scan *tscan, bool has_0xb)
 	tscan->c->topo.initial_apicid = leaf.ext_apic_id;
 
 	/*
-	 * If leaf 0xb is available, then SMT shift is set already. If not
-	 * take it from ecx.threads_per_core and use topo_update_dom() -
-	 * topology_set_dom() would propagate and overwrite the already
-	 * propagated CORE level.
+	 * If leaf 0xb is available, then the domain shifts are set
+	 * already and nothing to do here.
 	 */
 	if (!has_0xb) {
+		/*
+		 * Leaf 0x80000008 set the CORE domain shift already.
+		 * Update the SMT domain, but do not propagate it.
+		 */
 		unsigned int nthreads = leaf.core_nthreads + 1;
 
 		topology_update_dom(tscan, TOPO_SMT_DOMAIN, get_count_order(nthreads), nthreads);
@@ -109,13 +121,13 @@ static bool parse_8000_001e(struct topo_scan *tscan, bool has_0xb)
 
 static bool parse_fam10h_node_id(struct topo_scan *tscan)
 {
-	struct {
-		union {
+	union {
+		struct {
 			u64	node_id		:  3,
 				nodes_per_pkg	:  3,
 				unused		: 58;
-			u64	msr;
 		};
+		u64	msr;
 	} nid;
 
 	if (!boot_cpu_has(X86_FEATURE_NODEID_MSR))
@@ -135,6 +147,26 @@ static void legacy_set_llc(struct topo_scan *tscan)
 	tscan->c->topo.llc_id = apicid >> tscan->dom_shifts[TOPO_CORE_DOMAIN];
 }
 
+static void topoext_fixup(struct topo_scan *tscan)
+{
+	struct cpuinfo_x86 *c = tscan->c;
+	u64 msrval;
+
+	/* Try to re-enable TopologyExtensions if switched off by BIOS */
+	if (cpu_has(c, X86_FEATURE_TOPOEXT) || c->x86_vendor != X86_VENDOR_AMD ||
+	    c->x86 != 0x15 || c->x86_model < 0x10 || c->x86_model > 0x6f)
+		return;
+
+	if (msr_set_bit(0xc0011005, 54) <= 0)
+		return;
+
+	rdmsrl(0xc0011005, msrval);
+	if (msrval & BIT_64(54)) {
+		set_cpu_cap(c, X86_FEATURE_TOPOEXT);
+		pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
+	}
+}
+
 static void parse_topology_amd(struct topo_scan *tscan)
 {
 	bool has_0xb = false;
@@ -164,6 +196,7 @@ static void parse_topology_amd(struct topo_scan *tscan)
 void cpu_parse_topology_amd(struct topo_scan *tscan)
 {
 	tscan->amd_nodes_per_pkg = 1;
+	topoext_fixup(tscan);
 	parse_topology_amd(tscan);
 
 	if (tscan->amd_nodes_per_pkg > 1)
......
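The parse_fam10h_node_id() hunk a few hunks up ("Make the NODEID_MSR union actually work") deserves a note: in a union, every comma-separated bitfield declarator is a separate member and therefore starts at bit 0, so in the old layout node_id, nodes_per_pkg and msr all aliased the low bits and nodes_per_pkg never saw bits 3-5. Moving the bitfields into an anonymous struct packs them consecutively, and the union overlays that struct with the raw MSR value. A stand-alone sketch of the two layouts, assuming C11 anonymous structs, GCC/Clang 64-bit bitfields and x86-64 little-endian layout (hypothetical type names, not kernel code):

#include <stdint.h>
#include <stdio.h>

union broken_nid {            /* each bitfield is its own member at bit 0 */
    uint64_t node_id       : 3,
             nodes_per_pkg : 3,
             unused        : 58;
    uint64_t msr;
};

union fixed_nid {             /* bitfields packed consecutively in one struct */
    struct {
        uint64_t node_id       : 3,
                 nodes_per_pkg : 3,
                 unused        : 58;
    };
    uint64_t msr;
};

int main(void)
{
    /* 0x2A = 0b101010: node_id should read 2, nodes_per_pkg should read 5 */
    union broken_nid b = { .msr = 0x2A };
    union fixed_nid  f = { .msr = 0x2A };

    printf("broken nodes_per_pkg=%llu, fixed nodes_per_pkg=%llu\n",
           (unsigned long long)b.nodes_per_pkg,    /* prints 2 (aliases bits 0-2) */
           (unsigned long long)f.nodes_per_pkg);   /* prints 5 (bits 3-5) */
    return 0;
}

The same aliasing is why the kernel fix swaps the struct/union nesting rather than merely reordering the members.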
@@ -3207,7 +3207,8 @@ enum cpu_mitigations {
 };
 
 static enum cpu_mitigations cpu_mitigations __ro_after_init =
-	CPU_MITIGATIONS_AUTO;
+	IS_ENABLED(CONFIG_SPECULATION_MITIGATIONS) ? CPU_MITIGATIONS_AUTO :
+						     CPU_MITIGATIONS_OFF;
 
 static int __init mitigations_parse_cmdline(char *arg)
 {
......
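Both the cpu_mitigations default above and the bhi_mitigation default earlier in the diff lean on IS_ENABLED() collapsing to a constant 0 or 1 during preprocessing, so SPECULATION_MITIGATIONS=n now selects the "off" value with no runtime check. A simplified stand-alone reimplementation of the builtin-only half of that macro (the real one lives in include/linux/kconfig.h and also handles =m); the CONFIG_ define below merely simulates what Kconfig generates for =y:

/* Kconfig defines CONFIG_<FOO> to 1 for =y and leaves it undefined
 * otherwise. The placeholder trick maps "defined to 1" to 1 and
 * anything else (including "not defined at all") to 0. */
#define __ARG_PLACEHOLDER_1                     0,
#define __take_second_arg(__ignored, val, ...)  val
#define ____is_defined(arg1_or_junk)            __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val)                      ____is_defined(__ARG_PLACEHOLDER_##val)
#define IS_ENABLED(option)                      ___is_defined(option)

#define CONFIG_SPECULATION_MITIGATIONS 1        /* pretend the option was set to =y */

/* Evaluates to 1 here; with the #define above removed it evaluates to 0,
 * entirely resolved before the compiler proper ever runs. */
static const int mitigations_auto_default =
    IS_ENABLED(CONFIG_SPECULATION_MITIGATIONS) ? 1 : 0;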