1. 12 Jan, 2018 14 commits
    • Elena Reshetova's avatar
      Thermal/int340x: prevent speculative execution · 1c050491
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      Real commit text tbd
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 7ef8b5b36b47e74d35506760175eaf1f4235068b)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      1c050491
    • Elena Reshetova's avatar
      qla2xxx: prevent speculative execution · fa24f391
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      Real commit text tbd
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit a2ef3475fff03ae6fcdf07163d3a762e9811e3be)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      fa24f391
    • Elena Reshetova's avatar
      carl9170: prevent speculative execution · 5f922fcb
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      Real commit text tbd
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 19299d3cee99e47bec3ace5d654eeb8fa6365bfd)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      5f922fcb
    • Elena Reshetova's avatar
      uvcvideo: prevent speculative execution · ffafbf6a
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      Real commit text tbd
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 5d9ab7231ea9f5a1b0c3cb612e20b0b486a5bdca)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      ffafbf6a
    • Elena Reshetova's avatar
      x86, bpf, jit: prevent speculative execution when JIT is enabled · 87f0ff16
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      When constant blinding is enabled (bpf_jit_harden = 1), this adds
      a generic memory barrier (lfence for Intel, mfence for AMD) before
      emitting x86 jitted code for the BPF_ALU(64)_OR_X and BPF_ALU_LHS_X
      (for the BPF_REG_AX register) eBPF instructions. This is needed in
      order to prevent speculative execution on out-of-bounds BPF_MAP
      array indexes when the JIT is enabled. This way arbitrary kernel
      memory is not exposed through side-channel attacks.
      
      For more details, please see this Google Project Zero report: tbd
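      Since the backport body above is still a placeholder, the following
      is only a hedged sketch of what the title describes, not the actual
      patch: when constant blinding (bpf_jit_harden) is active, the x86
      JIT emits a serializing fence before translating the affected
      opcodes. EMIT3() and bpf_jit_harden exist in the x86 JIT; the exact
      opcode choice and placement here are assumptions.

        /* Sketch of one arm of the JIT's per-instruction switch: emit an
         * lfence (0F AE E8) so speculation cannot run past this point
         * before the instruction itself is encoded. */
        case BPF_ALU64 | BPF_OR | BPF_X:
                if (bpf_jit_harden)
                        EMIT3(0x0F, 0xAE, 0xE8);   /* lfence */
                /* normal encoding of the OR instruction continues below */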
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 33f5e63378ad75331315216b459362b0a5350662)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      87f0ff16
    • Elena Reshetova's avatar
      bpf: prevent speculative execution in eBPF interpreter · ac92d827
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      This adds a generic memory barrier before LD_IMM_DW and
      LDX_MEM_B/H/W/DW eBPF instructions during eBPF program
      execution in order to prevent speculative execution on
      out-of-bounds BPF_MAP array indexes. This way arbitrary
      kernel memory is not exposed through side-channel attacks.
      
      For more details, please see this Google Project Zero report: tbd
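      A hedged sketch of the idea (not the actual patch, whose text is
      still to be filled in above): in the interpreter's computed-goto
      dispatch, the barrier sits between the bounds-checked index
      computation and the dependent memory load. DST, SRC, CONT and insn
      are the interpreter's internal macros/locals; gmb() is the generic
      barrier introduced later in this series.

        LDX_MEM_W:
                gmb();  /* no speculative out-of-bounds load past here */
                DST = *(u32 *)(unsigned long)(SRC + insn->off);
                CONT;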
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 69cfcc33d4ec282f14e47f1705bf45117e557b69)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      ac92d827
    • Elena Reshetova's avatar
      locking/barriers: introduce new memory barrier gmb() · 347e08dc
      Elena Reshetova authored
      CVE-2017-5753
      CVE-2017-5715
      
      In contrast to the existing mb() and rmb() barriers, the
      gmb() barrier is arch-independent and can be used to
      implement any type of memory barrier. On x86 it is either
      lfence or mfence, depending on the processor type. ARM and
      other architectures can define it according to their needs.
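      A minimal sketch of what an x86 definition could look like; the
      real patch selects lfence vs. mfence by processor type, which this
      simplified, assumed version does not do:

        /* Generic memory barrier: stops speculative execution at this
         * point. Hardcoding lfence here is an assumption for brevity. */
        #ifndef gmb
        #define gmb()   asm volatile("lfence" ::: "memory")
        #endif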
      Suggested-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      (cherry picked from commit 15cdd6b1b8bdf69f6318b64650b342c38cc58451)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      347e08dc
    • Daniel Borkmann's avatar
      bpf: add generic constant blinding for use in jits · 6cb83270
      Daniel Borkmann authored
      CVE-2017-5753
      CVE-2017-5715
      
      This work adds a generic facility for use from eBPF JIT compilers
      that allows for further hardening of JIT generated images through
      blinding constants. In response to the original work on BPF JIT
      spraying published by Keegan McAllister [1], most BPF JITs were
      changed to make images read-only and start at a randomized offset
      in the page, where the rest was filled with trap instructions. We
      have this nowadays in x86, arm, arm64 and s390 JIT compilers.
      Additionally, later work also made eBPF interpreter images read
      only for kernels supporting DEBUG_SET_MODULE_RONX, that is, x86,
      arm, arm64 and s390 archs as well currently. This is done by
      default for mentioned JITs when JITing is enabled. Furthermore,
      we had a generic and configurable constant blinding facility on our
      todo for quite some time now to further make spraying harder, and
      first implementation since around netconf 2016.
      
      We found that for systems where untrusted users can load cBPF/eBPF
      code where JIT is enabled, start offset randomization helps a bit
      to make jumps into crafted payload harder, but in case where larger
      programs that cross page boundary are injected, we again have some
      part of the program opcodes at a page start offset. With improved
      guessing and more reliable payload injection, chances can increase
      to jump into such payload. Elena Reshetova recently wrote a test
      case for it [2, 3]. Moreover, eBPF comes with 64 bit constants, which
      can leave some more room for payloads. Note that for all this,
      additional bugs in the kernel are still required to make the jump
      (and of course to guess right, to not jump into a trap) and naturally
      the JIT must be enabled, which is disabled by default.
      
      For helping mitigation, the general idea is to provide an option
      bpf_jit_harden that admins can tweak along with bpf_jit_enable, so
      that for cases where JIT should be enabled for performance reasons,
      the generated image can be further hardened with blinding constants
      for unprivileged users (bpf_jit_harden == 1), trading off
      performance for these, but not for privileged ones. We also added
      the option of blinding for all users (bpf_jit_harden == 2), which
      is quite helpful for testing f.e. with test_bpf.ko. There are no
      further e.g. hardening levels of bpf_jit_harden switch intended,
      rationale is to have it dead simple to use as on/off. Since this
      functionality would need to be duplicated over and over for JIT
      compilers to use, which are already complex enough, we provide a
      generic eBPF byte-code level based blinding implementation, which is
      then just transparently JITed. JIT compilers need to make only a few
      changes to integrate this facility and can be migrated one by one.
      
      This option is for eBPF JITs and will be used in x86, arm64, s390
      without too much effort, and soon ppc64 JITs, so that native eBPF
      can be blinded as well as cBPF to eBPF migrations, so that both can
      be covered with a single implementation. The rule for JITs is that
      bpf_jit_blind_constants() must be called from bpf_int_jit_compile(),
      and in case blinding is disabled, we follow normally with JITing the
      passed program. In case blinding is enabled and we fail during the
      process of blinding itself, we must return with the interpreter.
      Similarly, in case the JITing process after the blinding failed, we
      return normally to the interpreter with the non-blinded code. Meaning,
      interpreter doesn't change in any way and operates on eBPF code as
      usual. For doing this pre-JIT blinding step, we need to make use of
      a helper/auxiliary register, here BPF_REG_AX. This is strictly internal
      to the JIT and not in any way part of the eBPF architecture. Just like
      in the same way as JITs internally make use of some helper registers
      when emitting code, only that here the helper register is one
      abstraction level higher in eBPF bytecode, but nevertheless in JIT
      phase. That helper register is needed since f.e. manually written
      program can issue loads to all registers of eBPF architecture.
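      A hedged sketch of the JIT contract described above, simplified
      from the arch code: blind first, fall back to the interpreter if
      blinding fails, and release whichever program copy is not being
      returned. emit_native_code() is a stand-in for the arch JIT proper,
      not a kernel API; the helper names come from the text above.

        struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
        {
                struct bpf_prog *orig_prog = prog, *tmp;
                bool blinded = false, jit_ok;

                tmp = bpf_jit_blind_constants(prog);
                if (IS_ERR(tmp))
                        return orig_prog;  /* blinding failed: interpreter */
                if (tmp != prog) {
                        blinded = true;
                        prog = tmp;        /* JIT the blinded clone instead */
                }

                jit_ok = emit_native_code(prog);  /* stand-in for the JIT */
                if (!jit_ok)
                        prog = orig_prog;  /* JIT failed: run the original */

                if (blinded)               /* drop the copy we do not return */
                        bpf_jit_prog_release_other(prog, prog == orig_prog ?
                                                   tmp : orig_prog);
                return prog;
        }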
      
      The core concept with the additional register is: blind out all 32
      and 64 bit constants by converting BPF_K based instructions into a
      small sequence from K_VAL into ((RND ^ K_VAL) ^ RND). Therefore, this
      is transformed into: BPF_REG_AX := (RND ^ K_VAL), BPF_REG_AX ^= RND,
      and REG <OP> BPF_REG_AX, so actual operation on the target register
      is translated from BPF_K into BPF_X one that is operating on
      BPF_REG_AX's content. During rewriting phase when blinding, RND is
      newly generated via prandom_u32() for each processed instruction.
      64 bit loads are split into two 32 bit loads to make translation and
      patching not too complex. Only basic thing required by JITs is to
      call the helper bpf_jit_blind_constants()/bpf_jit_prog_release_other()
      pair, and to map BPF_REG_AX into an unused register.
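      A hedged illustration of that rewrite using the kernel's
      insn-building macros; this is not the kernel's bpf_jit_blind_insn()
      itself, just the shape of the transformation for an ALU BPF_K
      instruction:

        /* Expand REG <op>= K into three insns that rebuild K in
         * BPF_REG_AX at run time, keyed by a fresh random value. */
        static int blind_alu_k(struct bpf_insn *out, const struct bpf_insn *from)
        {
                u32 rnd = prandom_u32();   /* new key per rewritten insn */

                out[0] = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, rnd ^ from->imm);
                out[1] = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, rnd);
                out[2] = BPF_ALU32_REG(BPF_OP(from->code), from->dst_reg,
                                       BPF_REG_AX);
                return 3;   /* BPF_K op became a BPF_X op on BPF_REG_AX */
        }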
      
      Small bpf_jit_disasm extract from [2] when applied to x86 JIT:
      
      echo 0 > /proc/sys/net/core/bpf_jit_harden
      
        ffffffffa034f5e9 + <x>:
        [...]
        39:   mov    $0xa8909090,%eax
        3e:   mov    $0xa8909090,%eax
        43:   mov    $0xa8ff3148,%eax
        48:   mov    $0xa89081b4,%eax
        4d:   mov    $0xa8900bb0,%eax
        52:   mov    $0xa810e0c1,%eax
        57:   mov    $0xa8908eb4,%eax
        5c:   mov    $0xa89020b0,%eax
        [...]
      
      echo 1 > /proc/sys/net/core/bpf_jit_harden
      
        ffffffffa034f1e5 + <x>:
        [...]
        39:   mov    $0xe1192563,%r10d
        3f:   xor    $0x4989b5f3,%r10d
        46:   mov    %r10d,%eax
        49:   mov    $0xb8296d93,%r10d
        4f:   xor    $0x10b9fd03,%r10d
        56:   mov    %r10d,%eax
        59:   mov    $0x8c381146,%r10d
        5f:   xor    $0x24c7200e,%r10d
        66:   mov    %r10d,%eax
        69:   mov    $0xeb2a830e,%r10d
        6f:   xor    $0x43ba02ba,%r10d
        76:   mov    %r10d,%eax
        79:   mov    $0xd9730af,%r10d
        7f:   xor    $0xa5073b1f,%r10d
        86:   mov    %r10d,%eax
        89:   mov    $0x9a45662b,%r10d
        8f:   xor    $0x325586ea,%r10d
        96:   mov    %r10d,%eax
        [...]
      
      As can be seen, original constants that carry payload are hidden
      when enabled, actual operations are transformed from constant-based
      to register-based ones, making jumps into constants ineffective.
      Above extract/example uses single BPF load instruction over and
      over, but of course all instructions with constants are blinded.
      
      Performance wise, JIT with blinding performs a bit slower than just
      JIT and faster than interpreter case. This is expected, since we
      still get all the performance benefits from JITing and in normal
      use-cases not every single instruction needs to be blinded. Summing
      up all 296 test cases averaged over multiple runs from test_bpf.ko
      suite, interpreter was 55% slower than JIT only and JIT with blinding
      was 8% slower than JIT only. Since there are also some extremes in
      the test suite, I expect for ordinary workloads that the performance
      for the JIT with blinding case is even closer to JIT only case,
      f.e. nmap test case from suite has averaged timings in ns 29 (JIT),
      35 (+ blinding), and 151 (interpreter).
      
      BPF test suite, seccomp test suite, eBPF sample code and various
      bigger networking eBPF programs have been tested with this and were
      running fine. For testing purposes, I also adapted interpreter and
      redirected blinded eBPF image to interpreter and also here all tests
      pass.
      
        [1] http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html
        [2] https://github.com/01org/jit-spray-poc-for-ksp/
        [3] http://www.openwall.com/lists/kernel-hardening/2016/05/03/5
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Elena Reshetova <elena.reshetova@intel.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      (backported from commit 4f3446bb)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      6cb83270
    • Daniel Borkmann's avatar
      bpf: prepare bpf_int_jit_compile/bpf_prog_select_runtime apis · 3098d8ea
      Daniel Borkmann authored
      CVE-2017-5753
      CVE-2017-5715
      
      Since the blinding is strictly only called from inside eBPF JITs,
      we need to change signatures for bpf_int_jit_compile() and
      bpf_prog_select_runtime() first in order to prepare that the
      eBPF program we're dealing with can change underneath. Hence,
      for call sites, we need to return the latest prog. No functional
      change in this patch.
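      A hedged sketch of the resulting call-site pattern (abbreviated;
      surrounding error handling is an assumption): since runtime
      selection may hand back a different struct bpf_prog, the caller
      must continue with the returned pointer.

        prog = bpf_prog_select_runtime(prog, &err);
        if (err < 0)
                goto free_used_maps;   /* keep using 'prog' from here on */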
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      (cherry picked from commit d1c55ab5)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      3098d8ea
    • Daniel Borkmann's avatar
      bpf: add bpf_patch_insn_single helper · db0f73ac
      Daniel Borkmann authored
      CVE-2017-5753
      CVE-2017-5715
      
      Move the functionality to patch instructions out of the verifier
      code and into the core as the new bpf_patch_insn_single() helper
      will be needed later on for blinding as well. No changes in
      functionality.
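      A hedged sketch of how a caller uses the new helper: replace the
      single instruction at a given offset with a longer patch and switch
      over to the returned, reallocated program. The surrounding verifier
      code here is illustrative, not the actual hunk.

        new_prog = bpf_patch_insn_single(env->prog, insn_idx,
                                         patch, ARRAY_SIZE(patch));
        if (!new_prog)
                return -ENOMEM;
        env->prog = new_prog;   /* old prog pointer is no longer valid */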
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      (cherry picked from commit c237ee5e)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      db0f73ac
    • Borislav Petkov's avatar
      Map the vsyscall page with _PAGE_USER · 54d6ff3e
      Borislav Petkov authored
      CVE-2017-5754
      
      This needs to happen early in kaiser_pagetable_walk(), before the
      hierarchy is established so that _PAGE_USER permission can be really
      set.
      
      A proper fix would be to teach kaiser_pagetable_walk() to update those
      permissions but the vsyscall page is the only exception here so ...
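      A hedged sketch of the idea only; the real hunk in
      kaiser_pagetable_walk() is not quoted here and the exact plumbing
      is an assumption. The point is to OR in _PAGE_USER for the vsyscall
      address before the intermediate levels are created:

        pteval_t prot = _KERNPG_TABLE;

        if (address == VSYSCALL_ADDR)   /* the one user-visible kernel page */
                prot |= _PAGE_USER;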
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit 6dcf5491)
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      54d6ff3e
    • Thomas Gleixner's avatar
      x86/tlb: Drop the _GPL from the cpu_tlbstate export · 8832f29d
      Thomas Gleixner authored
      commit 1e547681 upstream.
      
      CVE-2017-5754
      
      The recent changes for PTI touch cpu_tlbstate from various tlb_flush
      inlines. cpu_tlbstate is exported as GPL symbol, so this causes a
      regression when building out of tree drivers for certain graphics cards.
      
      Aside of that the export was wrong since it was introduced as it should
      have been EXPORT_PER_CPU_SYMBOL_GPL().
      
      Use the correct PER_CPU export and drop the _GPL to restore the previous
      state which allows users to utilize the cards they paid for.
      
      As always I'm really thrilled to make this kind of change to support the
      sauce graphics corp.
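      A hedged sketch of the resulting one-liner, expressed as a diff;
      the original export macro shown here is an assumption, the
      replacement macro is the one named in the text above:

        -EXPORT_SYMBOL_GPL(cpu_tlbstate);
        +EXPORT_PER_CPU_SYMBOL(cpu_tlbstate);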
      
      Fixes: 1e02ce4c ("x86: Store a per-cpu shadow copy of CR4")
      Fixes: 6fd166aa ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
      Reported-by: Kees Cook <keescook@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Thomas Backlund <tmb@mageia.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit a4c1c753)
      Signed-off-by: default avatarAndy Whitcroft <apw@canonical.com>
      8832f29d
    • Tom Lendacky's avatar
      x86/microcode/AMD: Add support for fam17h microcode loading · faa1c337
      Tom Lendacky authored
      commit f4e9b7af upstream.
      
      CVE-2017-5753
      CVE-2017-5715
      
      The size for the Microcode Patch Block (MPB) for an AMD family 17h
      processor is 3200 bytes.  Add a #define for fam17h so that it does
      not default to 2048 bytes and fail a microcode load/update.
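      A hedged sketch of the change described above; the macro name and
      the surrounding switch in verify_patch_size() are assumptions based
      on the commit text:

        #define F17H_MPB_MAX_SIZE 3200

        case 0x17:
                max_size = F17H_MPB_MAX_SIZE;
                break;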
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/20171130224640.15391.40247.stgit@tlendack-t1.amdoffice.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Alice Ferrazzi <alicef@gentoo.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit 3db597fe)
      Signed-off-by: default avatarAndy Whitcroft <apw@canonical.com>
      faa1c337
    • Marcelo Henrique Cerri's avatar
      UBUNTU: Start new release · f8e5133d
      Marcelo Henrique Cerri authored
      Ignore: yes
      Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
      f8e5133d
  2. 09 Jan, 2018 3 commits
  3. 07 Jan, 2018 3 commits
  4. 06 Jan, 2018 8 commits
    • Kleber Sacilotto de Souza's avatar
    • Guenter Roeck's avatar
      kaiser: Set _PAGE_NX only if supported · 2b2e4ca1
      Guenter Roeck authored
      CVE-2017-5754
      
      This resolves a crash if loaded under qemu + haxm under windows.
      See https://www.spinics.net/lists/kernel/msg2689835.html for details.
      Here is a boot log (the log is from chromeos-4.4, but Tao Wu says that
      the same log is also seen with vanilla v4.4.110-rc1).
      
      [    0.712750] Freeing unused kernel memory: 552K
      [    0.721821] init: Corrupted page table at address 57b029b332e0
      [    0.722761] PGD 80000000bb238067 PUD bc36a067 PMD bc369067 PTE 45d2067
      [    0.722761] Bad pagetable: 000b [#1] PREEMPT SMP
      [    0.722761] Modules linked in:
      [    0.722761] CPU: 1 PID: 1 Comm: init Not tainted 4.4.96 #31
      [    0.722761] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
      rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org 04/01/2014
      [    0.722761] task: ffff8800bc290000 ti: ffff8800bc28c000 task.ti: ffff8800bc28c000
      [    0.722761] RIP: 0010:[<ffffffff83f4129e>]  [<ffffffff83f4129e>] __clear_user+0x42/0x67
      [    0.722761] RSP: 0000:ffff8800bc28fcf8  EFLAGS: 00010202
      [    0.722761] RAX: 0000000000000000 RBX: 00000000000001a4 RCX: 00000000000001a4
      [    0.722761] RDX: 0000000000000000 RSI: 0000000000000008 RDI: 000057b029b332e0
      [    0.722761] RBP: ffff8800bc28fd08 R08: ffff8800bc290000 R09: ffff8800bb2f4000
      [    0.722761] R10: ffff8800bc290000 R11: ffff8800bb2f4000 R12: 000057b029b332e0
      [    0.722761] R13: 0000000000000000 R14: 000057b029b33340 R15: ffff8800bb1e2a00
      [    0.722761] FS:  0000000000000000(0000) GS:ffff8800bfb00000(0000) knlGS:0000000000000000
      [    0.722761] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [    0.722761] CR2: 000057b029b332e0 CR3: 00000000bb2f8000 CR4: 00000000000006e0
      [    0.722761] Stack:
      [    0.722761]  000057b029b332e0 ffff8800bb95fa80 ffff8800bc28fd18 ffffffff83f4120c
      [    0.722761]  ffff8800bc28fe18 ffffffff83e9e7a1 ffff8800bc28fd68 0000000000000000
      [    0.722761]  ffff8800bc290000 ffff8800bc290000 ffff8800bc290000 ffff8800bc290000
      [    0.722761] Call Trace:
      [    0.722761]  [<ffffffff83f4120c>] clear_user+0x2e/0x30
      [    0.722761]  [<ffffffff83e9e7a1>] load_elf_binary+0xa7f/0x18f7
      [    0.722761]  [<ffffffff83de2088>] search_binary_handler+0x86/0x19c
      [    0.722761]  [<ffffffff83de389e>] do_execveat_common.isra.26+0x909/0xf98
      [    0.722761]  [<ffffffff844febe0>] ? rest_init+0x87/0x87
      [    0.722761]  [<ffffffff83de40be>] do_execve+0x23/0x25
      [    0.722761]  [<ffffffff83c002e3>] run_init_process+0x2b/0x2d
      [    0.722761]  [<ffffffff844fec4d>] kernel_init+0x6d/0xda
      [    0.722761]  [<ffffffff84505b2f>] ret_from_fork+0x3f/0x70
      [    0.722761]  [<ffffffff844febe0>] ? rest_init+0x87/0x87
      [    0.722761] Code: 86 84 be 12 00 00 00 e8 87 0d e8 ff 66 66 90 48 89 d8 48 c1
      eb 03 4c 89 e7 83 e0 07 48 89 d9 be 08 00 00 00 31 d2 48 85 c9 74 0a <48> 89 17
      48 01 f7 ff c9 75 f6 48 89 c1 85 c9 74 09 88 17 48 ff
      [    0.722761] RIP  [<ffffffff83f4129e>] __clear_user+0x42/0x67
      [    0.722761]  RSP <ffff8800bc28fcf8>
      [    0.722761] ---[ end trace def703879b4ff090 ]---
      [    0.722761] BUG: sleeping function called from invalid context at /mnt/host/source/src/third_party/kernel/v4.4/kernel/locking/rwsem.c:21
      [    0.722761] in_atomic(): 0, irqs_disabled(): 1, pid: 1, name: init
      [    0.722761] CPU: 1 PID: 1 Comm: init Tainted: G      D         4.4.96 #31
      [    0.722761] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org 04/01/2014
      [    0.722761]  0000000000000086 dcb5d76098c89836 ffff8800bc28fa30 ffffffff83f34004
      [    0.722761]  ffffffff84839dc2 0000000000000015 ffff8800bc28fa40 ffffffff83d57dc9
      [    0.722761]  ffff8800bc28fa68 ffffffff83d57e6a ffffffff84a53640 0000000000000000
      [    0.722761] Call Trace:
      [    0.722761]  [<ffffffff83f34004>] dump_stack+0x4d/0x63
      [    0.722761]  [<ffffffff83d57dc9>] ___might_sleep+0x13a/0x13c
      [    0.722761]  [<ffffffff83d57e6a>] __might_sleep+0x9f/0xa6
      [    0.722761]  [<ffffffff84502788>] down_read+0x20/0x31
      [    0.722761]  [<ffffffff83cc5d9b>] __blocking_notifier_call_chain+0x35/0x63
      [    0.722761]  [<ffffffff83cc5ddd>] blocking_notifier_call_chain+0x14/0x16
      [    0.800374] usb 1-1: new full-speed USB device number 2 using uhci_hcd
      [    0.722761]  [<ffffffff83cefe97>] profile_task_exit+0x1a/0x1c
      [    0.802309]  [<ffffffff83cac84e>] do_exit+0x39/0xe7f
      [    0.802309]  [<ffffffff83ce5938>] ? vprintk_default+0x1d/0x1f
      [    0.802309]  [<ffffffff83d7bb95>] ? printk+0x57/0x73
      [    0.802309]  [<ffffffff83c46e25>] oops_end+0x80/0x85
      [    0.802309]  [<ffffffff83c7b747>] pgtable_bad+0x8a/0x95
      [    0.802309]  [<ffffffff83ca7f4a>] __do_page_fault+0x8c/0x352
      [    0.802309]  [<ffffffff83eefba5>] ? file_has_perm+0xc4/0xe5
      [    0.802309]  [<ffffffff83ca821c>] do_page_fault+0xc/0xe
      [    0.802309]  [<ffffffff84507682>] page_fault+0x22/0x30
      [    0.802309]  [<ffffffff83f4129e>] ? __clear_user+0x42/0x67
      [    0.802309]  [<ffffffff83f4127f>] ? __clear_user+0x23/0x67
      [    0.802309]  [<ffffffff83f4120c>] clear_user+0x2e/0x30
      [    0.802309]  [<ffffffff83e9e7a1>] load_elf_binary+0xa7f/0x18f7
      [    0.802309]  [<ffffffff83de2088>] search_binary_handler+0x86/0x19c
      [    0.802309]  [<ffffffff83de389e>] do_execveat_common.isra.26+0x909/0xf98
      [    0.802309]  [<ffffffff844febe0>] ? rest_init+0x87/0x87
      [    0.802309]  [<ffffffff83de40be>] do_execve+0x23/0x25
      [    0.802309]  [<ffffffff83c002e3>] run_init_process+0x2b/0x2d
      [    0.802309]  [<ffffffff844fec4d>] kernel_init+0x6d/0xda
      [    0.802309]  [<ffffffff84505b2f>] ret_from_fork+0x3f/0x70
      [    0.802309]  [<ffffffff844febe0>] ? rest_init+0x87/0x87
      [    0.830559] Kernel panic - not syncing: Attempted to kill init!  exitcode=0x00000009
      [    0.830559]
      [    0.831305] Kernel Offset: 0x2c00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
      [    0.831305] ---[ end Kernel panic - not syncing: Attempted to kill init!  exitcode=0x00000009
      
      The crash part of this problem may be solved with the following patch
      (thanks to Hugh for the hint). There is still another problem, though -
      with this patch applied, the qemu session aborts with "VCPU Shutdown
      request", whatever that means.
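      A hedged sketch of the guard the title describes (the hunk itself
      is not quoted in this changelog, so the exact location is an
      assumption): only set _PAGE_NX where the CPU actually supports NX,
      by filtering through __supported_pte_mask.

        pgd.pgd |= _PAGE_NX & __supported_pte_mask;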
      
      Cc: lepton <ytht.net@gmail.com>
      Signed-off-by: Guenter Roeck <groeck@chromium.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      2b2e4ca1
    • Andrey Ryabinin's avatar
      x86/kasan: Clear kasan_zero_page after TLB flush · 1b312219
      Andrey Ryabinin authored
      CVE-2017-5754
      
      commit 69e0210f upstream.
      
      Currently we clear kasan_zero_page before __flush_tlb_all(). This
      works with the current implementation of native_flush_tlb[_global]()
      because it doesn't do any writes to the kasan shadow memory.
      But any subtle change made in native_flush_tlb*() could break this.
      Also, the current code doesn't seem to work for paravirt guests (lguest).
      
      Only after the TLB flush can we be sure that kasan_zero_page is not
      used as early shadow anymore (instrumented code will not write to
      it), so it should be cleared only after the TLB flush.
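      A hedged sketch of the resulting ordering in kasan_init(); the
      surrounding lines are assumptions, the point is flush first, clear
      afterwards:

        load_cr3(init_level4_pgt);
        __flush_tlb_all();

        /* Nothing uses kasan_zero_page as early shadow anymore. */
        memset(kasan_zero_page, 0, PAGE_SIZE);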
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/1452516679-32040-2-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Jamie Iles <jamie.iles@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      1b312219
    • Andy Lutomirski's avatar
      x86/vdso: Get pvclock data from the vvar VMA instead of the fixmap · 6e67b204
      Andy Lutomirski authored
      CVE-2017-5754
      
      commit dac16fba upstream.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/9d37826fdc7e2d2809efe31d5345f97186859284.1449702533.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Jamie Iles <jamie.iles@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      6e67b204
    • Andy Lutomirski's avatar
      x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader · 816cf43b
      Andy Lutomirski authored
      CVE-2017-5754
      
      commit 6b078f5d upstream.
      
      The pvclock vdso code was too abstracted to understand easily
      and excessively paranoid.  Simplify it for a huge speedup.
      
      This opens the door for additional simplifications, as the vdso
      no longer accesses the pvti for any vcpu other than vcpu 0.
      
      Before, vclock_gettime using kvm-clock took about 45ns on my
      machine. With this change, it takes 29ns, which is almost as
      fast as the pure TSC implementation.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/6b51dcc41f1b101f963945c5ec7093d72bdac429.1449702533.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Jamie Iles <jamie.iles@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      816cf43b
    • Kees Cook's avatar
      KPTI: Report when enabled · f0b2b06f
      Kees Cook authored
      CVE-2017-5754
      
      Make sure dmesg reports when KPTI is enabled.
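      A hedged sketch of the kind of line added; the exact message text
      and location are assumptions:

        pr_info("Kernel/User page tables isolation: enabled\n");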
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      f0b2b06f
    • Colin Ian King's avatar
      c9fa6e32
    • Kleber Sacilotto de Souza's avatar
      UBUNTU: Start new release · 692e22e8
      Kleber Sacilotto de Souza authored
      Ignore: yes
      Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
      692e22e8
  5. 05 Jan, 2018 12 commits