- 16 May, 2018 29 commits
-
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) Contemporary high performance processors use a common industry-wide optimization known as "Speculative Store Bypass" in which loads from addresses to which a recent store has occurred may (speculatively) see an older value. Intel refers to this feature as "Memory Disambiguation" which is part of their "Smart Memory Access" capability. Memory Disambiguation can expose a cache side-channel attack against such speculatively read values. An attacker can create exploit code that allows them to read memory outside of a sandbox environment (for example, malicious JavaScript in a web page), or to perform more complex attacks against code running within the same privilege level, e.g. via the stack. As a first step to mitigate against such attacks, provide two boot command line control knobs: nospec_store_bypass_disable spec_store_bypass_disable=[off,auto,on] By default affected x86 processors will power on with Speculative Store Bypass enabled. Hence the provided kernel parameters are written from the point of view of whether to enable a mitigation or not. The parameters are as follows: - auto - Kernel detects whether your CPU model contains an implementation of Speculative Store Bypass and picks the most appropriate mitigation. - on - disable Speculative Store Bypass - off - enable Speculative Store Bypass [ tglx: Reordered the checks so that the whole evaluation is not done when the CPU does not support RDS ] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com> [juergh: Context adjustments.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
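As an illustration of how such a boot knob is typically wired up, here is a minimal sketch (not the actual Xenial hunk; the enum and function names are invented for the example):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/string.h>

    /* Illustrative only: how spec_store_bypass_disable= could be parsed. */
    enum ssb_mitigation_cmd { SSB_CMD_AUTO, SSB_CMD_ON, SSB_CMD_OFF };

    static enum ssb_mitigation_cmd ssb_cmd __initdata = SSB_CMD_AUTO;

    static int __init ssb_parse_cmdline(char *str)
    {
            if (!str)
                    return -EINVAL;

            if (!strcmp(str, "off"))
                    ssb_cmd = SSB_CMD_OFF;  /* leave Speculative Store Bypass enabled */
            else if (!strcmp(str, "on"))
                    ssb_cmd = SSB_CMD_ON;   /* disable Speculative Store Bypass */
            else if (!strcmp(str, "auto"))
                    ssb_cmd = SSB_CMD_AUTO; /* let the kernel pick based on CPU support */

            return 0;
    }
    early_param("spec_store_bypass_disable", ssb_parse_cmdline);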
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) Add the CPU feature bit CPUID.7.0.EDX[31] which indicates whether the CPU supports Reduced Data Speculation. [ tglx: Split it out from a later patch ] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> [juergh: Use word 16 instead of 18.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
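For reference, the bit can be probed from userspace with a toolchain whose <cpuid.h> provides __get_cpuid_count (gcc 7+ or a recent clang); this is just a quick check, not part of the patch:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID leaf 7, subleaf 0; returns 0 if the leaf is unsupported. */
            if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                    return 1;

            printf("CPUID.7.0.EDX[31] (speculative store bypass control): %s\n",
                   (edx & (1u << 31)) ? "present" : "not present");
            return 0;
    }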
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) Add the sysfs file for the new vulnerability. It does not do much except show the word 'Vulnerable' for recent x86 cores. Intel cores prior to family 6 are known not to be vulnerable, and so are some Atoms and some Xeon Phi. It assumes that older Cyrix, Centaur, etc. cores are immune. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com> [juergh: Drop unknown CPU INTEL_FAM6_XEON_PHI_KNM.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
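The show handler has roughly this shape (a rough sketch; the real function lives in arch/x86/kernel/cpu/bugs.c and carries more cases):

    ssize_t cpu_show_spec_store_bypass(struct device *dev,
                                       struct device_attribute *attr, char *buf)
    {
            if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
                    return sprintf(buf, "Not affected\n");
            return sprintf(buf, "Vulnerable\n");
    }

Once wired up, the state is visible under /sys/devices/system/cpu/vulnerabilities/spec_store_bypass.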
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) A guest may modify the SPEC_CTRL MSR from the value used by the kernel. Since the kernel doesn't use IBRS, this means a value of zero is what is needed in the host. But the 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to the other bits as reserved, so the kernel should respect the boot-time SPEC_CTRL value and use that. This allows dealing with future extensions to the SPEC_CTRL interface, if any. Note: This uses wrmsrl() instead of native_wrmsrl(). It does not make any difference, as paravirt will overwrite the callq *0xfff.. with the wrmsrl assembler code. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> [juergh: - Context adjustments. - Evaluate ibrs_inuse (which exists in Xenial but not in upstream) instead of boot_cpu_has(X86_FEATURE_IBRS).] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
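Conceptually, the guest entry/exit handling ends up looking like the sketch below (illustrative helper names; x86_spec_ctrl_base is the boot-time value captured in the next entry, and the Xenial code additionally consults ibrs_inuse as noted above):

    /* Write the guest's SPEC_CTRL value before entering the guest ... */
    static inline void spec_ctrl_set_guest(u64 guest_spec_ctrl)
    {
            if (guest_spec_ctrl != x86_spec_ctrl_base)
                    wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
    }

    /* ... and restore the host's boot-time value (not zero) after exiting. */
    static inline void spec_ctrl_restore_host(u64 guest_spec_ctrl)
    {
            if (guest_spec_ctrl != x86_spec_ctrl_base)
                    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }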
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) The 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to all the other bits as reserved. The Intel SDM glossary defines reserved as implementation specific - aka unknown. As such, at bootup this must be taken into account and proper masking applied for the bits in use. A copy of this document is available at https://bugzilla.kernel.org/show_bug.cgi?id=199511 [ tglx: Made x86_spec_ctrl_base __ro_after_init ] Suggested-by: Jon Masters <jcm@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com> [juergh: Dropped some calls of newly introduced functions (calling code not present in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
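A minimal sketch of the idea (not the exact hunk): capture the MSR once at boot and only ever set bits the kernel knows about on top of that base, so reserved bits stay as found:

    static u64 __ro_after_init x86_spec_ctrl_base;

    static void __init spec_ctrl_save_msr(void)
    {
            if (boot_cpu_has(X86_FEATURE_IBRS))
                    rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }

    static void x86_spec_ctrl_set(u64 val)
    {
            /* Only bits the kernel knows about may be ORed onto the base. */
            if (val & ~SPEC_CTRL_IBRS)
                    WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
            else
                    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
    }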
-
Kees Cook authored
CVE-2018-3639 (x86) One of the easiest ways to protect the kernel from attack is to reduce the internal attack surface exposed when a "write" flaw is available. By making as much of the kernel read-only as possible, we reduce the attack surface. Many things are written to only during __init, and never changed again. These cannot be made "const" since the compiler will do the wrong thing (we do actually need to write to them). Instead, move these items into a memory region that will be made read-only during mark_rodata_ro() which happens after all kernel __init code has finished. This introduces __ro_after_init as a way to mark such memory, and adds some documentation about the existing __read_mostly marking. This improves the security of the Linux kernel by marking formerly read-write memory regions as read-only on a fully booted up system. Based on work by PaX Team and Brad Spengler. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brad Spengler <spender@grsecurity.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit c74ba8b3) Signed-off-by: Juerg Haefliger <juergh@canonical.com>
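A minimal usage illustration (hypothetical variable, not from the patch): the annotated object is writable during boot and becomes read-only once mark_rodata_ro() has run:

    #include <linux/cache.h>
    #include <linux/init.h>

    static unsigned int example_policy __ro_after_init;

    static int __init example_policy_setup(void)
    {
            example_policy = 1;     /* fine: still before mark_rodata_ro() */
            return 0;
    }
    early_initcall(example_policy_setup);

    /* Any write to example_policy after boot completes will fault. */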
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) Those SysFS functions have a similar preamble, as such make common code to handle them. Suggested-by: Borislav Petkov <bp@suse.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> [juergh: - Context adjustments. - X86_FEATURE_PTI -> X86_FEATURE_KAISER. - Drop eval of X86_FEATURE_USE_IBRS_FW for Spectre v2 (not implemented in Xenial). - Report state of OSB for Spectre v1.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
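The refactoring boils down to something like this sketch (abridged; the real helper also reports the mitigation details mentioned in the backport notes):

    static ssize_t cpu_show_common(struct device *dev,
                                   struct device_attribute *attr,
                                   char *buf, unsigned int bug)
    {
            if (!boot_cpu_has_bug(bug))
                    return sprintf(buf, "Not affected\n");

            switch (bug) {
            case X86_BUG_CPU_MELTDOWN:
                    if (boot_cpu_has(X86_FEATURE_KAISER))
                            return sprintf(buf, "Mitigation: PTI\n");
                    break;
            default:
                    break;
            }

            return sprintf(buf, "Vulnerable\n");
    }

    ssize_t cpu_show_meltdown(struct device *dev,
                              struct device_attribute *attr, char *buf)
    {
            return cpu_show_common(dev, attr, buf, X86_BUG_CPU_MELTDOWN);
    }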
-
Konrad Rzeszutek Wilk authored
CVE-2018-3639 (x86) Combine the various logic which goes through all those x86_cpu_id matching structures in one function. Suggested-by: Borislav Petkov <bp@suse.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Linus Torvalds authored
CVE-2018-3639 (x86) The macro is not type safe and I did look for why that "g" constraint for the asm doesn't work: it's because the asm is more fundamentally wrong. It does movl %[val], %%eax but "val" isn't a 32-bit value, so then gcc will pass it in a register, and generate code like movl %rsi, %eax and gas will complain about a nonsensical 'mov' instruction (it's moving a 64-bit register to a 32-bit one). Passing it through memory will just hide the real bug - gcc still thinks the memory location is 64-bit, but the "movl" will only load the first 32 bits and it all happens to work because x86 is little-endian. Convert it to a type safe inline function with a little trick which hands the feature into the ALTERNATIVE macro. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Juerg Haefliger <juergh@canonical.com>
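The result is roughly the following (reproduced from memory, so treat it as a sketch): the 64-bit value is split into EAX/EDX explicitly and the feature flag is handed into ALTERNATIVE via the %c operand:

    static __always_inline
    void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
    {
            asm volatile(ALTERNATIVE("", "wrmsr", %c[feature])
                         : : "c" (msr),
                             "a" ((u32)val),
                             "d" ((u32)(val >> 32)),
                             [feature] "i" (feature)
                         : "memory");
    }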
-
Stefan Bader authored
CVE-2018-3639 (x86) This is a partial backport of dd84441a "x86/speculation: Use IBRS if available before calling into firmware" as later patches rely on the call. Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) Now that we have generic, vendor-agnostic support of IBPB/IBRS feature detection in common code, move the code to enable/disable it from the vendor-specific init_<vendor> functions to the common code, except for the AMD special case where we need to write an MSR on every CPU. Keep that in init_amd(), which runs on every CPU, whereas the common code mentioned above only runs once, when CPU 0 is being onlined. Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) (cherry picked from commit 7fcae111) Despite the fact that all the other code there seems to be doing it, just using set_cpu_cap() in early_init_intel() doesn't actually work. For CPUs with PKU support, setup_pku() calls get_cpu_cap() after c->c_init() has set those feature bits. That resets those bits back to what was queried from the hardware. Turning the bits off for bad microcode is easy to fix. That can just use setup_clear_cpu_cap() to force them off for all CPUs. I was less keen on forcing the feature bits *on* that way, just in case of inconsistencies. I appreciate that the kernel is going to get this utterly wrong if CPU features are not consistent, because it has already applied alternatives by the time secondary CPUs are brought up. But at least if setup_force_cpu_cap() isn't being used, we might have a chance of *detecting* the lack of the corresponding bit and either panicking or refusing to bring the offending CPU online. So ensure that the appropriate feature bits are set within get_cpu_cap() regardless of how many extra times it's called. Fixes: 2961298e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags") Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: karahmed@amazon.de Cc: peterz@infradead.org Cc: bp@alien8.de Link: https://lkml.kernel.org/r/1517322623-15261-1-git-send-email-dwmw@amazon.co.uk Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit cda6b607 linux-4.9.y) Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) (cherry picked from commit 2961298e) We want to expose the hardware features simply in /proc/cpuinfo as "ibrs", "ibpb" and "stibp". Since AMD has separate CPUID bits for those, use them as the user-visible bits. When the Intel SPEC_CTRL bit is set which indicates both IBRS and IBPB capability, set those (AMD) bits accordingly. Likewise if the Intel STIBP bit is set, set the AMD STIBP that's used for the generic hardware capability. Hide the rest from /proc/cpuinfo by putting "" in the comments. Including RETPOLINE and RETPOLINE_AMD which shouldn't be visible there. There are patches to make the sysfs vulnerabilities information non-readable by non-root, and the same should apply to all information about which mitigations are actually in use. Those *shouldn't* appear in /proc/cpuinfo. The feature bit for whether IBPB is actually used, which is needed for ALTERNATIVEs, is renamed to X86_FEATURE_USE_IBPB. Originally-by: Borislav Petkov <bp@suse.de> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: ak@linux.intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1517070274-12128-2-git-send-email-dwmw@amazon.co.uk Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (backported from commit 77b3b3ee linux-4.9.y) [juergh: Context adjustments.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
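The core of the mapping is a small helper along these lines (sketch of the upstream init_speculation_control(); the backported flag names may differ slightly in Xenial):

    static void init_speculation_control(struct cpuinfo_x86 *c)
    {
            /*
             * The Intel SPEC_CTRL CPUID bit implies IBRS and IBPB support,
             * and a separate bit advertises STIBP. Map them onto the
             * vendor-neutral (AMD-derived) flags used everywhere else.
             */
            if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
                    set_cpu_cap(c, X86_FEATURE_IBRS);
                    set_cpu_cap(c, X86_FEATURE_IBPB);
            }
            if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
                    set_cpu_cap(c, X86_FEATURE_STIBP);
    }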
-
Peter Zijlstra authored
CVE-2018-3639 (x86) commit ea00f301 upstream. Joe Konno reported a compile failure resulting from using an MSR without inclusion of <asm/msr-index.h>, and while the current code builds fine (by accident) this needs fixing for future patches. Reported-by: Joe Konno <joe.konno@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: arjan@linux.intel.com Cc: bp@alien8.de Cc: dan.j.williams@intel.com Cc: dave.hansen@linux.intel.com Cc: dwmw2@infradead.org Cc: dwmw@amazon.co.uk Cc: gregkh@linuxfoundation.org Cc: hpa@zytor.com Cc: jpoimboe@redhat.com Cc: linux-tip-commits@vger.kernel.org Cc: luto@kernel.org Fixes: 20ffa1ca ("x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support") Link: http://lkml.kernel.org/r/20180213132819.GJ25201@hirez.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 90ca2694 linux-4.9.y) Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) (cherry picked from commit 20ffa1ca) Expose indirect_branch_prediction_barrier() for use in subsequent patches. [ tglx: Add IBPB status to spectre_v2 sysfs file ] Co-developed-by: KarimAllah Ahmed <karahmed@amazon.de> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-8-git-send-email-dwmw@amazon.co.uk Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (backported from commit 31fd9eda linux-4.9.y) [juergh: This is only a partial backport, hence UBUNTU: SAUCE:! - Context adjustments. - Drop previous #define X86_FEATURE_IBPB. - Don't define indirect_branch_prediction_barrier() (not needed yet in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) (cherry picked from commit a5b29663) This doesn't refuse to load the affected microcodes; it just refuses to use the Spectre v2 mitigation features if they're detected, by clearing the appropriate feature bits. The AMD CPUID bits are handled here too, because hypervisors *may* have been exposing those bits even on Intel chips, for fine-grained control of what's available. It is non-trivial to use x86_match_cpu() for this table because that doesn't handle steppings. And the approach taken in commit bd9240a1 almost made me lose my lunch. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-7-git-send-email-dwmw@amazon.co.uk Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (backported from commit 6c5e4915 linux-4.9.y) [juergh: - Context adjustments. - Renamed Merrifield1/2 to Merrifield/Moorefield (per commit f5fbf848).] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
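Because x86_match_cpu() cannot express steppings, the check is an explicit table walk, roughly like this sketch (the table entry shown is a placeholder, not one of the real blacklisted microcode revisions):

    struct sku_microcode {
            u8  model;
            u8  stepping;
            u32 microcode;
    };

    /* Placeholder entry; the real table lists the known-bad IBRS/IBPB microcodes. */
    static const struct sku_microcode spectre_bad_microcodes[] = {
            { 0x4e, 0x03, 0x000000c2 },
    };

    static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
                    if (c->x86_model == spectre_bad_microcodes[i].model &&
                        c->x86_mask == spectre_bad_microcodes[i].stepping && /* x86_stepping in newer trees */
                        c->microcode <= spectre_bad_microcodes[i].microcode)
                            return true;
            }
            return false;
    }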
-
David Woodhouse authored
CVE-2018-3639 (x86) (cherry picked from commit fec9434a) Also, for CPUs which don't speculate at all, don't report that they're vulnerable to the Spectre variants either. Leave the cpu_no_meltdown[] match table with just X86_VENDOR_AMD in it for now, even though that could be done with a simple comparison, on the assumption that we'll have more to add. Based on suggestions from Dave Hansen and Alan Cox. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-6-git-send-email-dwmw@amazon.co.uk Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit a8799fd1 linux-4.9.y) Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) With the previous commit 50d36d375b89 ("x86/msr: Add definitions for new speculation control MSRs") the spec control bits from upstream were introduced, so rename the existing bits to match upstream: FEATURE_ENABLE_IBRS -> SPEC_CTRL_IBRS FEATURE_SET_IBPB -> PRED_CMD_IBPB Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) Add MSR and bit definitions for SPEC_CTRL, PRED_CMD and ARCH_CAPABILITIES. See Intel's 336996-Speculative-Execution-Side-Channel-Mitigations.pdf Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-5-git-send-email-dwmw@amazon.co.uk (backported from commit 1e340c60) [juergh: Context adjustments.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
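For orientation, the definitions in question look like this (abridged excerpt with values as documented by Intel; exact constant names in the Xenial tree may differ slightly, see the rename noted a few entries above):

    #define MSR_IA32_SPEC_CTRL              0x00000048 /* Speculation Control */
    #define SPEC_CTRL_IBRS                  (1 << 0)   /* Indirect Branch Restricted Speculation */

    #define MSR_IA32_PRED_CMD               0x00000049 /* Prediction Command */
    #define PRED_CMD_IBPB                   (1 << 0)   /* Indirect Branch Prediction Barrier */

    #define MSR_IA32_ARCH_CAPABILITIES      0x0000010a
    #define ARCH_CAP_RDCL_NO                (1 << 0)   /* Not susceptible to Meltdown */
    #define ARCH_CAP_IBRS_ALL               (1 << 1)   /* Enhanced IBRS protection */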
-
David Woodhouse authored
CVE-2018-3639 (x86) AMD exposes the PRED_CMD/SPEC_CTRL MSRs slightly differently to Intel. See http://lkml.kernel.org/r/2b3e25cc-286d-8bd0-aeaf-9ac4aae39de8@amd.com Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-4-git-send-email-dwmw@amazon.co.uk (backported from commit 5d10cbc9) [juergh: Context adjustments.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) The SPEC_CTRL feature moved from the scattered list to a leaf, so expose it to the guest from the leaf rather than from the scattered list. Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) Add three feature bits exposed by new microcode on Intel CPUs for speculation control. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-3-git-send-email-dwmw@amazon.co.uk (backported from commit fc67dd70) [juergh: - Context adjustments. - Use feature word 16 instead of 18. - Remove existing SPEC_CTRL feature bit from scatter list (now that it's part of the leaf).] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
David Woodhouse authored
CVE-2018-3639 (x86) This is a pure feature bits leaf. Three feature bits from this leaf are going to be added for speculation control features. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: gnomes@lxorguk.ukuu.org.uk Cc: ak@linux.intel.com Cc: ashok.raj@intel.com Cc: dave.hansen@intel.com Cc: karahmed@amazon.de Cc: arjan@linux.intel.com Cc: torvalds@linux-foundation.org Cc: peterz@infradead.org Cc: bp@alien8.de Cc: pbonzini@redhat.com Cc: tim.c.chen@linux.intel.com Cc: gregkh@linux-foundation.org Link: https://lkml.kernel.org/r/1516896855-7642-2-git-send-email-dwmw@amazon.co.uk (backported from commit 95ca0ee8) [juergh: Use feature word 16 instead of 18.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Andy Lutomirski authored
CVE-2018-3639 (x86) A typo (or mis-merge?) resulted in leaf 6 only being probed if cpuid_level >= 7. Fixes: 2ccd71f1 ("x86/cpufeature: Move some of the scattered feature bits to x86_capability") Signed-off-by: Andy Lutomirski <luto@kernel.org> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Link: http://lkml.kernel.org/r/6ea30c0e9daec21e488b54761881a6dfcf3e04d0.1481825597.git.luto@kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de> (backported from commit 3df8d920) [juergh: Context adjustments.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Borislav Petkov authored
CVE-2018-3639 (x86) Add an enum for the ->x86_capability array indices and cleanup get_cpu_cap() by killing some redundant local vars. Signed-off-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/1449481182-27541-3-git-send-email-bp@alien8.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> (backported from commit 39c06df4) [juergh: - Context adjustments. - Use CPUID_8000_0008_EBX enum in __do_cpuid_ent().] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
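The gist of the change, heavily abridged (the real enum covers every capability word):

    /* Named indices for the ->x86_capability[] array (abridged sketch). */
    enum cpuid_leafs {
            CPUID_1_EDX = 0,
            CPUID_8000_0001_EDX,
            CPUID_8086_0001_EDX,
            CPUID_LNX_1,
            CPUID_1_ECX,
            /* ... further leafs elided ... */
            CPUID_8000_0008_EBX,
    };

get_cpu_cap() then fills c->x86_capability[CPUID_1_ECX] and friends by name instead of by bare index, which is also what the __do_cpuid_ent() tweak in the backport note relies on.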
-
Borislav Petkov authored
CVE-2018-3639 (x86) Turn the CPUID leafs which are proper CPUID feature bit leafs into separate ->x86_capability words. Signed-off-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/1449481182-27541-2-git-send-email-bp@alien8.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> (backported from commit 2ccd71f1) [juergh: This is a partial backport to clean up some inconsistencies introduced by past merges/updates.] Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) Internally, kernel 4.4 uses the term KAISER instead of PTI, so drop X86_FEATURE_PTI (since X86_FEATURE_KAISER exists) and also rename DISABLE_PTI. Fixes: b2f8503b ("x86/cpufeatures: Add X86_BUG_CPU_INSECURE") Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) There's no Documentation/admin-guide/ in 4.4 ... Fixes: df043b74 ("x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature") Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
Juerg Haefliger authored
CVE-2018-3639 (x86) Remove definitions of trivial functions that are not used externally. Introduce #defines for the bits for better readability. No functional changes. Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
- 15 May, 2018 1 commit
-
-
Juerg Haefliger authored
Ignore: yes Signed-off-by: Juerg Haefliger <juergh@canonical.com>
-
- 09 May, 2018 10 commits
-
-
Stefan Bader authored
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thadeu Lima de Souza Cascardo authored
Function bpf_fill_maxinsns11 is designed to not be able to be JITed on x86_64. So, it fails when CONFIG_BPF_JIT_ALWAYS_ON=y, and commit 09584b40 ("bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y") makes sure that failure is detected on that case. However, it does not fail on other architectures, which have a different JIT compiler design. So, test_bpf has started to fail to load on those. After this fix, test_bpf loads fine on both x86_64 and ppc64el. Fixes: 09584b40 ("bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y") Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Reviewed-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> BugLink: https://bugs.launchpad.net/bugs/1765698 (cherry-picked from commit 52fda36d) Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Colin King <colin.king@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Yonghong Song authored
With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file, tools/testing/selftests/bpf/test_kmod.sh failed like below:

    [root@localhost bpf]# ./test_kmod.sh
    sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
    [ JIT enabled:0 hardened:0 ]
    [ 132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:0 ]
    [ 133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:1 ]
    [ 134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:2 ]
    [ 136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [root@localhost bpf]#

test_kmod.sh loads/removes test_bpf.ko multiple times with different settings for sysctl net.core.bpf_jit_{enable,harden}. The failed test #297 of test_bpf.ko is designed such that JIT always fails. Commit 290af866 (bpf: introduce BPF_JIT_ALWAYS_ON config) introduced the following tightening logic:

    ...
    if (!bpf_prog_is_dev_bound(fp->aux)) {
            fp = bpf_int_jit_compile(fp);
    #ifdef CONFIG_BPF_JIT_ALWAYS_ON
            if (!fp->jited) {
                    *err = -ENOTSUPP;
                    return fp;
            }
    #endif
    ...

With this logic, Test #297 always gets return value -ENOTSUPP when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure. This patch fixed the failure by marking Test #297 as expected failure when CONFIG_BPF_JIT_ALWAYS_ON is defined. Fixes: 290af866 (bpf: introduce BPF_JIT_ALWAYS_ON config) Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> BugLink: https://bugs.launchpad.net/bugs/1765698 (backported from commit 09584b40) [smb: ignored fuzz 2 in hunk #1] Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Colin King <colin.king@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Jay Vosburgh authored
BugLink: http://bugs.launchpad.net/bugs/1765241 A race condition exists in virtio_scsi between the completion of a request and the freeing of the target structure. The race is between (a) virtscsi_complete_cmd that, first, wakes up a task waiting for a completion, then, second, releases a reference in the target structure and (b) the woken up task freeing that target structure. The race appears to exist in all versions of virtio_scsi, but most kernels are not impacted due to a coincidental RCU sync in the "(b)" path above that will effectively wait for the "(a)" path to complete. The Ubuntu Xenial 4.4 kernel since commit be2a2080 lacks any RCU sync in the "(b)" code path, thus opening the race window. The fix is to wait for any outstanding requests to release their references prior to freeing the target structure. Signed-off-by: Jay Vosburgh <jay.vosburgh@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Seth Forshee authored
BugLink: http://bugs.launchpad.net/bugs/1763454 At the time this commit was backported some of the code it modifies was not present. When the code was later introduced from upstream stable it did not get the changes from this commit. Backport those changes now. v2: Also remove errant marking of instruction as seen from the backport. CVE-2017-17862 Fixes: 68dd63b2 ("bpf: fix branch pruning logic") Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Juerg Haefliger <juerg.haefliger@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Colin Ian King authored
BugLink: http://bugs.launchpad.net/bugs/1764810 A previous backport for bug LP: #1745130 overlooked adding in an error return that was introduced by commit 6124c53e. Fix this by adding in the missing return. Detected by CoverityScan, CID#1467925 ("Missing return statement") Fixes: b9a5fffb ("rfkill: Add rfkill-any LED trigger") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Acked-by: Kamal Mostafa <kamal@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Jay Vosburgh authored
BugLink: https://bugs.launchpad.net/bugs/1761534 The operstate update logic will leave an interface in the default UNKNOWN operstate if the interface carrier state never changes from the default carrier up state set at creation. This includes the case of an explicit call to netif_carrier_on, as the carrier on to on transition has no effect on operstate. This affects virtio-net for the case that the virtio peer does not support VIRTIO_NET_F_STATUS (the feature that provides carrier state updates). Without this feature, the virtio specification states that "the link should be assumed active," so, logically, the operstate should be UP instead of UNKNOWN. This has impact on user space applications that use the operstate to make availability decisions for the interface. Resolve this by changing the virtio probe logic slightly to call netif_carrier_off for both the "with" and "without" VIRTIO_NET_F_STATUS cases, and then the existing call to netif_carrier_on for the "without" case will cause an operstate transition. Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Jay Vosburgh <jay.vosburgh@canonical.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry-picked from commit bda7fab5) Signed-off-by: Eric Desrochers <eric.desrochers@canonical.com> Acked-by: Seth Forshee <seth.forshee@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
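The probe-time change amounts to a fragment like this (sketch of the cherry-picked hunk inside virtnet_probe(), variable names as in the driver):

    /* Start with the carrier administratively off in both cases, so the
     * later transition to on is a real state change and updates operstate. */
    netif_carrier_off(dev);

    if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
            /* Device reports link status; the config change handler will
             * turn the carrier on once the device says the link is up. */
            schedule_work(&vi->config_work);
    } else {
            /* No VIRTIO_NET_F_STATUS: the spec says to assume the link is
             * active, so force an off -> on transition here. */
            vi->status = VIRTIO_NET_S_LINK_UP;
            netif_carrier_on(dev);
    }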
-
Greg Kroah-Hartman authored
BugLink: http://bugs.launchpad.net/bugs/1765010 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Greg Hackmann authored
BugLink: http://bugs.launchpad.net/bugs/1765010 Pixel 2 field testers reported that when they tried to reboot their phones with some USB devices plugged in, the reboot would get wedged and eventually trigger watchdog reset. Once the Pixel kernel team found a reliable repro case, they narrowed it down to this commit's 4.4.y backport. Reverting the change made the issue go away. This reverts commit b07c1251. Signed-off-by: Greg Hackmann <ghackmann@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
David Ahern authored
BugLink: http://bugs.launchpad.net/bugs/1765010 commit 82dd0d2a upstream. Miguel reported an skb use after free / double free in vrf_finish_output when neigh_output returns an error. The vrf driver should return after the call to neigh_output as it takes over the skb on error path as well. Patch is a simplified version of Miguel's patch which was written for 4.9, and updated to top of tree. Fixes: 8f58336d ("net: Add ethernet header for pass through VRF device") Signed-off-by: Miguel Fadon Perlines <mfadon@teldat.com> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> [ backport to 4.4 and 4.9 dropped the sock_confirm_neigh and changed neigh_output to dst_neigh_output ] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
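The shape of the fix in vrf_finish_output() is roughly the following fragment (sketch against the 4.4 code, which uses dst_neigh_output() as noted in the backport remark; the surrounding declarations are unchanged):

    neigh = __ipv4_neigh_lookup_noref(dev, nexthop);
    if (unlikely(!neigh))
            neigh = __neigh_create(&arp_tbl, &nexthop, dev, false);
    if (!IS_ERR(neigh)) {
            /* The neighbour output path owns the skb from here on,
             * including on error, so return its verdict directly ... */
            ret = dst_neigh_output(dst, neigh, skb);
            rcu_read_unlock_bh();
            return ret;
    }

    rcu_read_unlock_bh();
    /* ... and only free the skb on the path that never handed it off. */
    vrf_tx_error(skb->dev, skb);
    return -EINVAL;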
-