- 16 May, 2018 40 commits
-
Mauricio Faria de Oliveira authored
CVE-2018-3639 (powerpc)

Currently the rfi-flush messages print 'Using <type> flush' for all enabled_flush_types, but that is not necessarily true: the fallback flush is now always enabled on pseries, but the fixup function overwrites its nop/branch slot with other flush types, if available.

So, replace the 'Using <type> flush' messages with '<type> flush is available'.

Also, print the patched flush types in the fixup function, so users can know what is (not) being used (e.g., the slower fallback flush, or no flush type at all if flushing is disabled via the debugfs switch).

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
(cherry picked from commit 0063d61c)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Michael Ellerman authored
CVE-2018-3639 (powerpc)

This ensures the fallback flush area is always allocated on pseries, so in case an LPAR is migrated from a patched to an unpatched system, it is possible to enable the fallback flush in the target system.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
(cherry picked from commit 84749a58)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Michael Ellerman authored
CVE-2018-3639 (powerpc)

For PowerVM migration we want to be able to call setup_rfi_flush() again after we've migrated the partition. To support that we need to check that we're not trying to allocate the fallback flush area after memblock has gone away (i.e., boot-time only).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
(cherry picked from commit abf110f3)
[mauricio: backport: setup_64.c hunk2: update 'limit'/context lines]
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
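A minimal sketch of the boot-time-only guard described above, assuming the Xenial function and variable names match upstream's init_fallback_flush():

    static void init_fallback_flush(void)
    {
            /*
             * Only allocate the fallback flush area once, at boot, while
             * memblock is still available; a later (post-migration) call
             * reuses the area that was already set aside.
             */
            if (l1d_flush_fallback_area)
                    return;

            /* ... memblock allocation of the flush area, as before ... */
    }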
-
Michael Ellerman authored
CVE-2018-3639 (powerpc)

rfi_flush_enable() includes a check to see if we're already enabled (or disabled), and in that case does nothing. But that means calling setup_rfi_flush() a 2nd time doesn't actually work, which is a bit confusing. Move that check into the debugfs code, where it really belongs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
(cherry picked from commit 1e2a9fc7)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
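After the move, the state check lives in the debugfs setter, roughly like this (a sketch based on the description, not the exact hunk):

    static int rfi_flush_set(void *data, u64 val)
    {
            bool enable;

            if (val == 1)
                    enable = true;
            else if (val == 0)
                    enable = false;
            else
                    return -EINVAL;

            /* Only do anything if we're changing state. */
            if (enable != rfi_flush)
                    rfi_flush_enable(enable);

            return 0;
    }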
-
Michael Ellerman authored
CVE-2018-3639 (powerpc)

Some versions of firmware have a setting that can be configured to disable the RFI flush; add support for it.

Fixes: 6e032b35 ("powerpc/powernv: Check device-tree for RFI flush settings")
(cherry picked from commit eb0a2d26)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Michael Ellerman authored
CVE-2018-3639 (powerpc)

Some versions of firmware have a setting that can be configured to disable the RFI flush; add support for it.

Fixes: 8989d568 ("powerpc/pseries: Query hypervisor for RFI flush settings")
(cherry picked from commit 582605a4)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Jiri Kosina authored
CVE-2018-3639 (x86)

cpu_show_common() is not used outside of arch/x86/kernel/cpu/bugs.c, so make it static.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Jiri Kosina authored
CVE-2018-3639 (x86)

__ssb_select_mitigation() returns one of the members of enum ssb_mitigation, not ssb_mitigation_cmd; fix the prototype to reflect that.

Fixes: 24f7fc83 ("x86/bugs: Provide boot parameters for the spec_store_bypass_disable mitigation")
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
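The corrected declaration implied by the message:

    /* return type fixed from enum ssb_mitigation_cmd */
    static enum ssb_mitigation __ssb_select_mitigation(void);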
-
Borislav Petkov authored
CVE-2018-3639 (x86)

Fix some typos, improve formulations, end sentences with a full stop.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

The naming style for fields in the 'status' file is CamelCase or this_style (words joined with underscores); adjust the speculation-flaw field introduced by the commit below to match.

Fixes: fae1fa0f ("proc: Provide details on speculation flaw mitigations")
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Intel collateral will reference the SSB mitigation bit in IA32_SPEC_CTL[2] as SSBD (Speculative Store Bypass Disable). Hence changing it.

It is unclear yet what the MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4) name is going to be. Following the rename it would be SSBD_NO, but that rolls out to Speculative Store Bypass Disable No.

Also fixed the missing space in X86_FEATURE_AMD_SSBD.

[ tglx: Fixup x86_amd_rds_enable() and rds_tif_to_amd_ls_cfg() as well ]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
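In rough terms the rename maps the RDS-based identifiers onto SSBD-based ones; for example (the right-hand spellings follow upstream and are assumptions for this backport):

    /* before */                   /* after */
    X86_FEATURE_RDS          ->    X86_FEATURE_SSBD
    SPEC_CTRL_RDS            ->    SPEC_CTRL_SSBD
    TIF_RDS                  ->    TIF_SSBD
    rds_tif_to_amd_ls_cfg()  ->    ssbd_tif_to_amd_ls_cfg()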
-
Kees Cook authored
CVE-2018-3639 (x86)

Unless explicitly opted out of, anything running under seccomp will have SSB mitigations enabled. Choosing the "prctl" mode will disable this.

[ tglx: Adjusted it to the new arch_seccomp_spec_mitigate() mechanism ]

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Modify Documentation/kernel-parameters.txt instead.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

The mitigation control is simpler to implement in architecture code as it avoids the extra function call to check the mode. Aside from that, having an explicit seccomp-enabled mode in the architecture mitigations would require even more workarounds.

Move it into architecture code and provide a weak function in the seccomp code. Remove the 'which' argument as this allows the architecture to decide which mitigations are relevant for seccomp.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
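The seccomp side reduces to a weak no-op that the architecture overrides, roughly (a sketch following the upstream shape; ssb_mode and ssb_prctl_set() are bugs.c internals assumed to match upstream):

    /* kernel/seccomp.c: default, for architectures without a mitigation */
    void __weak arch_seccomp_spec_mitigate(struct task_struct *task) { }

    /* arch/x86/kernel/cpu/bugs.c: x86 override */
    void arch_seccomp_spec_mitigate(struct task_struct *task)
    {
            if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
                    ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
    }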
-
Kees Cook authored
CVE-2018-3639 (x86)

If a seccomp user is not interested in Speculative Store Bypass mitigation by default, it can set the new SECCOMP_FILTER_FLAG_SPEC_ALLOW flag when adding filters.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
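From userspace, opting out might look like this (a hedged example; error handling trimmed and the BPF filter program assumed to be prepared elsewhere):

    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Install 'prog' without the implicit SSB force-disable. */
    int install_filter_keep_ssb(struct sock_fprog *prog)
    {
            return syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
                           SECCOMP_FILTER_FLAG_SPEC_ALLOW, prog);
    }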
-
Mickaël Salaün authored
CVE-2018-3639 (x86)

Rename SECCOMP_FLAG_FILTER_TSYNC to SECCOMP_FILTER_FLAG_TSYNC to match the UAPI.

Signed-off-by: Mickaël Salaün <mic@digikod.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Will Drewry <wad@chromium.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
(backported from commit 6c045d07)
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

Use PR_SPEC_FORCE_DISABLE in seccomp() because seccomp does not allow widening restrictions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

For certain use cases it is desired to enforce mitigations so they cannot be undone afterwards. That's important for loader stubs which want to prevent a child from disabling the mitigation again. It will also be used for seccomp(). The extra preservation of the prctl state for SSB is a preparatory step for eBPF dynamic speculation control.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[smb: minor context adaption in prctl.h]
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
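In the prctl handler this shows up as a sticky force-disable state that a later PR_SPEC_ENABLE cannot clear, roughly (a sketch following the upstream shape; the task_*_spec_ssb_* helpers are assumed to match upstream):

    static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
    {
            switch (ctrl) {
            case PR_SPEC_ENABLE:
                    /* A prior force-disable is permanent. */
                    if (task_spec_ssb_force_disable(task))
                            return -EPERM;
                    task_clear_spec_ssb_disable(task);
                    break;
            case PR_SPEC_FORCE_DISABLE:
                    task_set_spec_ssb_disable(task);
                    task_set_spec_ssb_force_disable(task);
                    break;
            /* ... PR_SPEC_DISABLE and the TIF update elided ... */
            }
            return 0;
    }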
-
Kees Cook authored
CVE-2018-3639 (x86)

There's no reason for these to be changed after boot.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Stefan Bader authored
CVE-2018-3639 (x86)

The upstream implementation reads the content of the SPEC_CTRL MSR once during boot to record the state of reserved bits. Any access to this MSR (to enable/disable IBRS) needs to preserve those reserved bits. This tries to catch and convert all occurrences of the Intel-based IBRS changes we carry.

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
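The conversion pattern is roughly: rather than writing a literal value, OR the requested bits into the boot-time snapshot so reserved bits survive (a sketch, with x86_spec_ctrl_base as that snapshot):

    /* before: clobbers any reserved bits */
    wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);

    /* after: preserves whatever was set/reserved at boot */
    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | SPEC_CTRL_IBRS);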
-
Kees Cook authored
CVE-2018-3639 (x86)

When speculation flaw mitigations are opt-in (via prctl), using seccomp will automatically opt in to these protections, since using seccomp indicates at least some level of sandboxing is desired.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Kees Cook authored
CVE-2018-3639 (x86)

As done with seccomp and no_new_privs, also show speculation flaw mitigation state in /proc/$pid/status.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh:
 - Context adjustments.
 - Include linux/nospec.h to prevent compilation error.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
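The reporting side is a small helper in fs/proc/array.c along these lines (a sketch; the field name is later switched to underscores by the 'status' style fix above):

    static inline void task_seccomp(struct seq_file *m, struct task_struct *p)
    {
            /* ... existing Seccomp/NoNewPrivs reporting ... */
            seq_printf(m, "Speculation Store Bypass:\t");
            switch (arch_prctl_spec_ctrl_get(p, PR_SPEC_STORE_BYPASS)) {
            case PR_SPEC_PRCTL | PR_SPEC_DISABLE:
                    seq_printf(m, "thread mitigated");
                    break;
            case PR_SPEC_PRCTL | PR_SPEC_ENABLE:
                    seq_printf(m, "thread vulnerable");
                    break;
            case PR_SPEC_DISABLE:
                    seq_printf(m, "globally mitigated");
                    break;
            default:
                    seq_printf(m, "vulnerable");
            }
            seq_putc(m, '\n');
    }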
-
Kees Cook authored
CVE-2018-3639 (x86)

Adjust arch_prctl_get/set_spec_ctrl() to operate on tasks other than current. This is needed both for /proc/$pid/status queries and for seccomp (since thread-syncing can trigger seccomp in non-current threads).

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
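The implied prototype change threads a task pointer through (signatures per the upstream version of this patch):

    int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which);
    int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
                                 unsigned long ctrl);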
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

Add prctl based control for Speculative Store Bypass mitigation and make it the default mitigation for Intel and AMD.

Andi Kleen provided the following rationale (slightly redacted):

There are multiple levels of impact of Speculative Store Bypass:

1) JITed sandbox. It cannot invoke system calls, but can do PRIME+PROBE and may have call interfaces to other code.

2) Native code process. No protection inside the process at this level.

3) Kernel.

4) Between processes.

The prctl tries to protect against case (1) doing attacks. If the untrusted code can do random system calls then control is already lost in a much worse way. So there needs to be system call protection in some way (using a JIT not allowing them or seccomp). Or rather, if the process can subvert its environment somehow to do the prctl it can already execute arbitrary code, which is much worse than SSB.

To put it differently, the point of the prctl is to not allow JITed code to read data it shouldn't read from its JITed sandbox. If it already has escaped its sandbox then it can already read everything it wants in its address space, and do much worse.

The ability to control Speculative Store Bypass allows enabling the protection selectively without affecting overall system performance.

Based on an initial patch from Tim Chen. Completely rewritten.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Modify Documentation/kernel-parameters.txt instead.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
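A hedged userspace example of opting a task into the mitigation under the new prctl-based default, using the constants this series introduces (requires a kernel with the series applied):

    #include <sys/prctl.h>
    #include <linux/prctl.h>
    #include <stdio.h>

    int main(void)
    {
            /*
             * Disable speculative store bypass (i.e. enable the
             * mitigation) for this task and its future children.
             */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                      PR_SPEC_DISABLE, 0, 0))
                    perror("PR_SET_SPECULATION_CTRL");
            return 0;
    }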
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

The Speculative Store Bypass vulnerability can be mitigated with the Reduced Data Speculation (RDS) feature. To allow finer grained control of this potentially expensive mitigation, a per-task mitigation control is required.

Add a new TIF_RDS flag and put it into the group of TIF flags which are evaluated for mismatch in switch_to(). If these bits differ in the previous and the next task, then the slow path function __switch_to_xtra() is invoked. Implement the TIF_RDS dependent mitigation control in the slow path.

If the prctl for controlling Speculative Store Bypass is disabled or no task uses the prctl then there is no overhead in the switch_to() fast path.

Update the KVM related speculation control functions to take TIF_RDS into account as well.

Based on a patch from Tim Chen. Completely rewritten.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
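The slow-path handling in __switch_to_xtra() then looks roughly like this (a sketch following the upstream hunk; the Xenial context may differ):

    void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
                          struct tss_struct *tss)
    {
            unsigned long tifp, tifn;

            tifn = READ_ONCE(task_thread_info(next_p)->flags);
            tifp = READ_ONCE(task_thread_info(prev_p)->flags);

            /* ... other TIF handling elided ... */

            /* Only touch the MSR when the TIF_RDS state changes. */
            if ((tifp ^ tifn) & _TIF_RDS)
                    wrmsrl(MSR_IA32_SPEC_CTRL,
                           x86_spec_ctrl_base | rds_tif_to_spec_ctrl(tifn));
    }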
-
Kyle Huey authored
CVE-2018-3639 (x86)

Help the compiler to avoid reevaluating the thread flags for each checked bit by reordering the bit checks and providing an explicit xor for evaluation.

With default defconfigs for each arch:

x86_64: arch/x86/kernel/process.o
  text    data    bss    dec      hex
  3056    8577    16     11649    2d81    Before
  3024    8577    16     11617    2d61    After

i386: arch/x86/kernel/process.o
  text    data    bss    dec      hex
  2957    8673    8      11638    2d76    Before
  2925    8673    8      11606    2d56    After

Originally-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Kyle Huey <khuey@kylehuey.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Link: http://lkml.kernel.org/r/20170214081104.9244-2-khuey@kylehuey.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
(backported from commit af8b3cd3)
[juergh: Dropped the call of refresh_tss_limit() (not available in Xenial).]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
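The xor trick computes the changed bits once, so each subsequent check tests the precomputed difference (a generic sketch of the pattern, not the exact hunk):

    unsigned long tifp, tifn, changed;

    tifn = READ_ONCE(task_thread_info(next_p)->flags);
    tifp = READ_ONCE(task_thread_info(prev_p)->flags);
    changed = tifp ^ tifn;              /* evaluated exactly once */

    if (changed & _TIF_BLOCKSTEP) {
            /* ... toggle DEBUGCTLMSR BTF ... */
    }
    if (changed & _TIF_NOTSC) {
            /* ... toggle CR4.TSD ... */
    }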
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

Add two new prctls to control aspects of speculation related vulnerabilities and their mitigations to provide finer grained control over performance impacting mitigations.

PR_GET_SPECULATION_CTRL returns the state of the speculation misfeature which is selected with arg2 of prctl(2). The return value uses bits 0-2 with the following meaning:

  Bit  Define           Description
  0    PR_SPEC_PRCTL    Mitigation can be controlled per task by PR_SET_SPECULATION_CTRL
  1    PR_SPEC_ENABLE   The speculation feature is enabled, mitigation is disabled
  2    PR_SPEC_DISABLE  The speculation feature is disabled, mitigation is enabled

If all bits are 0 the CPU is not affected by the speculation misfeature.

If PR_SPEC_PRCTL is set, then per-task control of the mitigation is available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation misfeature will fail.

PR_SET_SPECULATION_CTRL allows controlling the speculation misfeature, which is selected by arg2 of prctl(2), per task. arg3 is used to hand in the control value, i.e. either PR_SPEC_ENABLE or PR_SPEC_DISABLE.

The common return values are:

  EINVAL  prctl is not implemented by the architecture or the unused prctl() arguments are not 0
  ENODEV  arg2 is selecting a not supported speculation misfeature

PR_SET_SPECULATION_CTRL has these additional return values:

  ERANGE  arg3 is incorrect, i.e. it's neither PR_SPEC_ENABLE nor PR_SPEC_DISABLE
  ENXIO   prctl control of the selected speculation misfeature is disabled

The first supported controllable speculation misfeature is PR_SPEC_STORE_BYPASS. Add the define so this can be shared between architectures.

Based on an initial patch from Tim Chen and mostly rewritten.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[tyhicks: Minor backport for SAUCE patch context]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh:
 - Context adjustments.
 - Create new file include/linux/nospec.h.
 - Create Documentation/spec-ctrl.txt instead of Documentation/userspace-api/spec-ctrl.rst.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
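Decoding the PR_GET_SPECULATION_CTRL result from userspace might look like this (a hedged example using the constants defined by this patch):

    #include <sys/prctl.h>
    #include <linux/prctl.h>
    #include <stdio.h>

    int main(void)
    {
            int state = prctl(PR_GET_SPECULATION_CTRL,
                              PR_SPEC_STORE_BYPASS, 0, 0, 0);

            if (state < 0)
                    perror("PR_GET_SPECULATION_CTRL");
            else if (state == 0)
                    puts("CPU not affected");
            else if (state & PR_SPEC_PRCTL)
                    printf("per-task control available, speculation %s\n",
                           (state & PR_SPEC_DISABLE) ? "disabled" : "enabled");
            else
                    puts("no per-task control");
            return 0;
    }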
-
Thomas Gleixner authored
CVE-2018-3639 (x86)

Having everything in nospec-branch.h creates a hell of dependencies when adding the prctl based switching mechanism. Move everything which is not required in nospec-branch.h to spec-ctrl.h and fix up the includes in the relevant files.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
[tyhicks: Minor backport for context]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Expose the CPUID.7.EDX[31] bit to the guest, and also guard against various combinations of SPEC_CTRL MSR values.

The handling of the MSR (to take into account the host value of SPEC_CTRL Bit(2)) is taken care of in patch:

  KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

AMD does not need the Speculative Store Bypass mitigation to be enabled. The parameters for this are already available and can be done via MSR C001_1020. Each family uses a different bit in that MSR for this.

[ tglx: Expose the bit mask via a variable and move the actual MSR fiddling into the bugs code as that's the right thing to do and also required to prepare for dynamic enable/disable ]

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
[tyhicks: Minor backport for context]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
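The per-family bit selection reads roughly like this (a sketch; the bit positions and variable name follow upstream and are assumptions for this backport):

    /* MSR C001_1020 (LS_CFG): pick the SSB-disable bit per family */
    int bit;

    switch (c->x86) {
    case 0x15: bit = 54; break;
    case 0x16: bit = 33; break;
    case 0x17: bit = 10; break;
    default:   bit = -1;
    }
    if (bit >= 0)
            x86_amd_ls_cfg_rds_mask = 1ULL << bit;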
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Intel and AMD SPEC_CTRL (0x48) MSR semantics may differ in the future (or in fact use different MSRs for the same functionality).

As such a run-time mechanism is required to whitelist the appropriate MSR values.

[ tglx: Made the variable __ro_after_init ]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
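A sketch of the whitelist mechanism (shape per the upstream patch; Xenial context may differ):

    /* Bits the host allows to be toggled at runtime. */
    static u64 __ro_after_init x86_spec_ctrl_mask = ~SPEC_CTRL_IBRS;

    void x86_spec_ctrl_set(u64 val)
    {
            if (val & x86_spec_ctrl_mask)
                    WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
            else
                    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
    }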
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Intel CPUs expose methods to:

- Detect whether the RDS capability is available via CPUID.7.0.EDX[31],
- Enable RDS via bit 2 of the SPEC_CTRL MSR (0x48),
- Indicate via MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4) that there is no need to enable RDS.

With that in mind, if spec_store_bypass_disable=[auto,on] is selected, set the SPEC_CTRL MSR at boot time to enable RDS if the platform requires it.

Note that this does not fix the KVM case where the SPEC_CTRL is exposed to guests which can muck with it, see patch titled:

  KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS

And for the firmware (IBRS to be set), see patch titled:

  x86/spectre_v2: Read SPEC_CTRL MSR during boot and re-use reserved bits

[ tglx: Disentangled it from the intel implementation and kept the call order ]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Evaluate ibrs_inuse instead of boot_cpu_has(X86_FEATURE_IBRS).]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
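Boot-time enabling then reduces to something like this (a sketch of the Intel branch following the upstream shape; per the backport note above, Xenial gates on ibrs_inuse):

    if (mode == SPEC_STORE_BYPASS_DISABLE) {
            setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
            /*
             * Intel uses SPEC_CTRL MSR Bit(2); the base value was
             * captured at boot so reserved bits are preserved.
             */
            x86_spec_ctrl_base |= SPEC_CTRL_RDS;
            x86_spec_ctrl_set(SPEC_CTRL_RDS);
    }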
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Contemporary high performance processors use a common industry-wide optimization known as "Speculative Store Bypass" in which loads from addresses to which a recent store has occurred may (speculatively) see an older value. Intel refers to this feature as "Memory Disambiguation", which is part of their "Smart Memory Access" capability.

Memory Disambiguation can expose a cache side-channel attack against such speculatively read values. An attacker can create exploit code that allows them to read memory outside of a sandbox environment (for example, malicious JavaScript in a web page), or to perform more complex attacks against code running within the same privilege level, e.g. via the stack.

As a first step to mitigate against such attacks, provide two boot command line control knobs:

  nospec_store_bypass_disable
  spec_store_bypass_disable=[off,auto,on]

By default affected x86 processors will power on with Speculative Store Bypass enabled. Hence the provided kernel parameters are written from the point of view of whether to enable a mitigation or not. The parameters are as follows:

- auto - Kernel detects whether your CPU model contains an implementation of Speculative Store Bypass and picks the most appropriate mitigation.
- on   - disable Speculative Store Bypass
- off  - enable Speculative Store Bypass

[ tglx: Reordered the checks so that the whole evaluation is not done when the CPU does not support RDS ]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Context adjustments.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Add the CPU feature bit CPUID.7.0.EDX[31] which indicates whether the CPU supports Reduced Data Speculation.

[ tglx: Split it out from a later patch ]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
[juergh: Use word 16 instead of 18.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
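Given the backport note (word 16 rather than upstream's word 18), the new define presumably looks like:

    /* arch/x86/include/asm/cpufeatures.h (Xenial word layout assumed) */
    #define X86_FEATURE_RDS (16*32+31) /* Reduced Data Speculation */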
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Add the sysfs file for the new vulnerability. It does not do much except show the words 'Vulnerable' for recent x86 cores.

Intel cores prior to family 6 are known not to be vulnerable, and so are some Atoms and some Xeon Phi. It assumes that older Cyrix, Centaur, etc. cores are immune.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Drop unknown CPU INTEL_FAM6_XEON_PHI_KNM.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
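The sysfs entry itself is a thin wrapper over the shared cpu_show_common() helper from this series, roughly:

    ssize_t cpu_show_spec_store_bypass(struct device *dev,
                                       struct device_attribute *attr, char *buf)
    {
            return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
    }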
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

A guest may modify the SPEC_CTRL MSR from the value used by the kernel. Since the kernel doesn't use IBRS, this means a value of zero is what is needed in the host.

But the 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to the other bits as reserved, so the kernel should respect the boot time SPEC_CTRL value and use that. This makes it possible to deal with future extensions to the SPEC_CTRL interface, if any at all.

Note: This uses wrmsrl() instead of native_wrmsrl(). It does not make any difference as paravirt will over-write the callq *0xfff.. with the wrmsrl assembler code.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
[juergh:
 - Context adjustments.
 - Evaluate ibrs_inuse (which exists in Xenial but not in upstream) instead of boot_cpu_has(X86_FEATURE_IBRS).]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
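Around VM entry/exit this amounts to swapping the guest and host values only when they differ (a sketch of the upstream helpers; per the note above, the Xenial backport gates on ibrs_inuse):

    void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl)
    {
            if (x86_spec_ctrl_base != guest_spec_ctrl)
                    wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
    }

    void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
    {
            if (x86_spec_ctrl_base != guest_spec_ctrl)
                    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }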
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

The 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to all the other bits as reserved. The Intel SDM glossary defines reserved as implementation specific - aka unknown. As such, at bootup this must be taken into account and proper masking for the bits in use applied.

A copy of this document is available at https://bugzilla.kernel.org/show_bug.cgi?id=199511

[ tglx: Made x86_spec_ctrl_base __ro_after_init ]

Suggested-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
[juergh: Dropped some calls of newly introduced functions (calling code not present in Xenial).]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
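The boot-time snapshot is a one-time read of the MSR on CPUs that have it (a sketch following the upstream hunk; the exact gating in the Xenial backport may differ):

    u64 __ro_after_init x86_spec_ctrl_base;

    void __init check_bugs(void)
    {
            /* Record the reserved bits so later writes can preserve them. */
            if (boot_cpu_has(X86_FEATURE_IBRS))
                    rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
            /* ... */
    }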
-
Kees Cook authored
CVE-2018-3639 (x86)

One of the easiest ways to protect the kernel from attack is to reduce the internal attack surface exposed when a "write" flaw is available. By making as much of the kernel read-only as possible, we reduce the attack surface.

Many things are written to only during __init, and never changed again. These cannot be made "const" since the compiler will do the wrong thing (we do actually need to write to them). Instead, move these items into a memory region that will be made read-only during mark_rodata_ro() which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds some documentation about the existing __read_mostly marking.

This improves the security of the Linux kernel by marking formerly read-write memory regions as read-only on a fully booted up system.

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit c74ba8b3)
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
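Usage is a single annotation on a definition; the attribute places the object in a section that mark_rodata_ro() later write-protects (the macro matches the upstream patch; the example variable is hypothetical):

    /* include/linux/cache.h */
    #define __ro_after_init __attribute__((__section__(".data..ro_after_init")))

    /* Example: written once during __init, read-only afterwards. */
    static struct cpu_ops *active_ops __ro_after_init;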
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Those sysfs functions have a similar preamble; as such, make common code to handle them.

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
[juergh:
 - Context adjustments.
 - X86_FEATURE_PTI -> X86_FEATURE_KAISER.
 - Drop eval of X86_FEATURE_USE_IBRS_FW for Spectre v2 (not implemented in Xenial).
 - Report state of OSB for Spectre v1.]
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
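The shape of the shared helper (a sketch; the per-bug reporting strings differ in Xenial per the backport notes above):

    static ssize_t cpu_show_common(struct device *dev,
                                   struct device_attribute *attr,
                                   char *buf, unsigned int bug)
    {
            if (!boot_cpu_has_bug(bug))
                    return sprintf(buf, "Not affected\n");

            switch (bug) {
            case X86_BUG_CPU_MELTDOWN:
                    if (boot_cpu_has(X86_FEATURE_KAISER))
                            return sprintf(buf, "Mitigation: PTI\n");
                    break;
            case X86_BUG_SPECTRE_V1:
            case X86_BUG_SPECTRE_V2:
                    /* ... per-bug mitigation strings elided ... */
                    break;
            }
            return sprintf(buf, "Vulnerable\n");
    }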
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86)

Combine the various logic which goes through all those x86_cpu_id matching structures in one function.

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Linus Torvalds authored
CVE-2018-3639 (x86)

The macro is not type safe and I did look for why that "g" constraint for the asm doesn't work: it's because the asm is more fundamentally wrong. It does

  movl %[val], %%eax

but "val" isn't a 32-bit value, so then gcc will pass it in a register, and generate code like

  movl %rsi, %eax

and gas will complain about a nonsensical 'mov' instruction (it's moving a 64-bit register to a 32-bit one).

Passing it through memory will just hide the real bug - gcc still thinks the memory location is 64-bit, but the "movl" will only load the first 32 bits and it all happens to work because x86 is little-endian.

Convert it to a type safe inline function with a little trick which hands the feature into the ALTERNATIVE macro.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
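For reference, the resulting type-safe inline per the upstream patch (the trick is passing the feature bit into ALTERNATIVE as an immediate):

    static inline void alternative_msr_write(unsigned int msr, u64 val,
                                             unsigned int feature)
    {
            asm volatile(ALTERNATIVE("", "wrmsr", %c[feature])
                         : : "c" (msr),
                             "a" ((u32)val),
                             "d" ((u32)(val >> 32)),
                             [feature] "i" (feature)
                         : "memory");
    }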
-