1. 03 Feb, 2018 13 commits
    • ALSA: seq: Make ioctls race-free · 623e5c8a
      Takashi Iwai authored
      commit b3defb79 upstream.
      
      The ALSA sequencer ioctls have no protection against concurrent calls,
      so racing operations can interfere with one another.  As reported
      recently, for example, concurrent calls that set the client pool,
      combined with write calls, can lead to either an unkillable deadlock
      or a use-after-free.
      
      As a somewhat big-hammer solution, this patch introduces a mutex to
      make each ioctl exclusive.  Although this may hurt the performance of
      parallel ioctl calls, such parallelism is usually not needed for
      sequencer usage, so the impact should be negligible.
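      A minimal sketch of the approach (the mutex field and dispatch helper
      names here are assumptions, not the literal 4.4 backport): take a
      per-client mutex around the dispatch in snd_seq_do_ioctl() so only one
      ioctl runs per client at a time.

        static int snd_seq_do_ioctl(struct snd_seq_client *client,
                                    unsigned int cmd, void __user *arg)
        {
                int ret;

                /* serialize all ioctls of this client (assumed mutex field) */
                mutex_lock(&client->ioctl_mutex);
                /* hypothetical helper standing in for the real dispatch */
                ret = snd_seq_dispatch_one_ioctl(client, cmd, arg);
                mutex_unlock(&client->ioctl_mutex);
                return ret;
        }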
      Reported-by: Luo Quan <a4651386@163.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      [bwh: Backported to 4.4: ioctl dispatch is done from snd_seq_do_ioctl();
       take the mutex and add ret variable there.]
      Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • kaiser: fix intel_bts perf crashes · 145ebf95
      Hugh Dickins authored
      Vince reported perf_fuzzer quickly locks up on 4.15-rc7 with PTI;
      Robert reported Bad RIP with KPTI and Intel BTS also on 4.15-rc7:
      honggfuzz -f /tmp/somedirectorywithatleastonefile \
                --linux_perf_bts_edge -s -- /bin/true
      (honggfuzz from https://github.com/google/honggfuzz) crashed with
      BUG: unable to handle kernel paging request at ffff9d3215100000
      (then narrowed it down to
      perf record --per-thread -e intel_bts//u -- /bin/ls).
      
      The intel_bts driver does not use the 'normal' BTS buffer which is
      exposed through kaiser_add_mapping(), but instead uses the memory
      allocated for the perf AUX buffer.
      
      This obviously comes apart when using PTI, because then the kernel
      mapping, which includes that AUX buffer memory, disappears while
      switched to user page tables.
      
      This is easily fixed in old-Kaiser backports by applying
      kaiser_add_mapping() to those pages; it is perhaps not so easy for
      upstream, where 4.15-rc8 commit 99a9dc98 ("x86,perf: Disable intel_bts
      when PTI") disables the driver for now.
      
      The surrounding code in bts_buffer_setup_aux() is slightly reorganized
      so it better matches bts_buffer_free_aux(): free_aux gets an #ifdef to
      avoid the loop when PTI is off, but setup_aux needs to loop anyway
      (and kaiser_add_mapping() is cheap when the PTI config is off or
      "pti=off" is given).
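      A minimal sketch of that idea (the helper name and flag are
      assumptions; error unwinding is elided): walk the AUX buffer pages in
      bts_buffer_setup_aux() and expose each one to the shadow page tables.

        /* sketch: keep the AUX buffer visible to the user (shadow) page
         * tables so BTS can still write it after the CR3 switch */
        static int bts_map_aux_pages(void **pages, int nr_pages)
        {
                int i, ret;

                for (i = 0; i < nr_pages; i++) {
                        ret = kaiser_add_mapping((unsigned long)pages[i],
                                                 PAGE_SIZE, __PAGE_KERNEL);
                        if (ret < 0)
                                return ret; /* caller unwinds already-added pages */
                }
                return 0;
        }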
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Reported-by: Robert Święcki <robert@swiecki.net>
      Analyzed-by: Peter Zijlstra <peterz@infradead.org>
      Analyzed-by: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Vince Weaver <vince@deater.net>
      Cc: stable@vger.kernel.org
      Cc: Jiri Kosina <jkosina@suze.cz>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/pti: Make unpoison of pgd for trusted boot work for real · 5991ee90
      Dave Hansen authored
      commit 445b69e3 upstream.
      
      The initial fix for trusted boot and PTI potentially misses the pgd
      clearing if pud_alloc() sets a PGD.  It probably works in *practice*
      because for two adjacent calls to map_tboot_page() that share a PGD
      entry, the first will clear NX, *then* allocate and set the PGD
      (without the NX clear).  The second call will *not* allocate but will
      clear the NX bit.
      
      Defer the NX clearing to a point after it is known that all top-level
      allocations have occurred.  Add a comment to clarify why.
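      A sketch of the ordering described above (abridged; the 4.4 backport's
      map_tboot_page() differs in detail):

        static int map_tboot_page(unsigned long vaddr, unsigned long pfn,
                                  pgprot_t prot)
        {
                pgd_t *pgd = pgd_offset(&tboot_mm, vaddr);
                pud_t *pud = pud_alloc(&tboot_mm, pgd, vaddr);
                pmd_t *pmd;
                pte_t *pte;

                if (!pud)
                        return -1;
                pmd = pmd_alloc(&tboot_mm, pud, vaddr);
                if (!pmd)
                        return -1;
                pte = pte_alloc_map(&tboot_mm, pmd, vaddr);
                if (!pte)
                        return -1;
                set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
                pte_unmap(pte);

                /*
                 * Clear the PTI NX poison only here, after all levels have
                 * been allocated, so a pud_alloc() that installs a fresh PGD
                 * entry cannot reintroduce NX behind our back.
                 */
                pgd->pgd &= ~_PAGE_NX;
                return 0;
        }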
      
      [ tglx: Massaged changelog ]
      
      [hughd notes: I have not tested tboot, but this looks to me as necessary
      and as safe in old-Kaiser backports as it is upstream; I'm not submitting
      the commit-to-be-fixed 262b6b30, since it was undone by 445b69e3,
      and makes conflict trouble because of 5-level's p4d versus 4-level's pgd.]
      
      Fixes: 262b6b30 ("x86/tboot: Unbreak tboot with PTI enabled")
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: "Tim Chen" <tim.c.chen@linux.intel.com>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: peterz@infradead.org
      Cc: ning.sun@intel.com
      Cc: tboot-devel@lists.sourceforge.net
      Cc: andi@firstfloor.org
      Cc: luto@kernel.org
      Cc: law@redhat.com
      Cc: pbonzini@redhat.com
      Cc: torvalds@linux-foundation.org
      Cc: gregkh@linux-foundation.org
      Cc: dwmw@amazon.co.uk
      Cc: nickc@redhat.com
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180110224939.2695CD47@viggo.jf.intel.com
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: reject stores into ctx via st and xadd · faa74a86
      Daniel Borkmann authored
      [ upstream commit f37a8cb8 ]
      
      Alexei found that the verifier does not reject stores into the context
      via BPF_ST (as opposed to BPF_STX).  While looking at it, we should
      also not allow the XADD variant of BPF_STX.
      
      The context rewriter assumes only BPF_LDX_MEM- or BPF_STX_MEM-type
      operations, so reject anything else so that the rewriter's assumptions
      properly hold.  Also add test cases to the BPF selftests.
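      A sketch of the verifier-side rejection (register-state access is
      simplified; the 4.4 verifier's structures differ from upstream):

        static bool is_ctx_reg(struct bpf_verifier_env *env, int regno)
        {
                const struct bpf_reg_state *reg = &env->cur_state.regs[regno];

                return reg->type == PTR_TO_CTX;
        }

        /* in the BPF_ST and BPF_STX | BPF_XADD handling: */
        if (is_ctx_reg(env, insn->dst_reg)) {
                verbose("BPF_ST/BPF_XADD stores into R%d ctx is not allowed\n",
                        insn->dst_reg);
                return -EACCES;
        }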
      
      Fixes: d691f9e8 ("bpf: allow programs to write to certain skb fields")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix 32-bit divide by zero · 02662601
      Alexei Starovoitov authored
      [ upstream commit 68fda450 ]
      
      Because some JITs do the if (src_reg == 0) check in 64-bit mode for
      div/mod operations, mask the upper 32 bits of the source register
      before doing the check.
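      A sketch of that fixup (the insn-patching plumbing is elided): for a
      32-bit divide or modulo by a register, prepend a 32-bit move of the
      source register onto itself so its upper half is already zero when the
      JIT's 64-bit zero check runs.

        if (insn->code == (BPF_ALU | BPF_DIV | BPF_X) ||
            insn->code == (BPF_ALU | BPF_MOD | BPF_X)) {
                struct bpf_insn mask_and_div[] = {
                        /* zero-extend src_reg: clears its upper 32 bits */
                        BPF_MOV32_REG(insn->src_reg, insn->src_reg),
                        /* original 32-bit div/mod instruction */
                        *insn,
                };

                /* ...patch the program, replacing *insn with this pair... */
        }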
      
      Fixes: 62258278 ("net: filter: x86: internal BPF JIT")
      Fixes: 7a12b503 ("sparc64: Add eBPF JIT.")
      Reported-by: syzbot+48340bb518e88849e2e3@syzkaller.appspotmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix divides by zero · b72ba2a0
      Eric Dumazet authored
      [ upstream commit c366287e ]
      
      Divides by zero are not nice; let's avoid them if possible.
      
      Also, do_div() does not seem to be needed when dealing with 32-bit
      operands, but that is a minor detail.
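      A sketch of the interpreter-side guards (the labels follow the
      interpreter's DST/SRC jump-table convention; surrounding cases are
      elided): a zero divisor now makes the program return 0 instead of
      executing the divide.

        ALU64_DIV_X:
                if (unlikely(SRC == 0))
                        return 0;
                DST = div64_u64(DST, SRC);
                CONT;
        ALU_DIV_X:
                if (unlikely((u32)SRC == 0))
                        return 0;
                DST = (u32) DST / (u32) SRC;
                CONT;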
      
      Fixes: bd4cf0ed ("net: filter: rework/optimize internal BPF interpreter's instruction set")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: avoid false sharing of map refcount with max_entries · 96d9b233
      Daniel Borkmann authored
      [ upstream commit be95a845 ]
      
      In addition to commit b2157399 ("bpf: prevent out-of-bounds
      speculation"), also change the layout of struct bpf_map such that
      false sharing of fast-path members like max_entries is avoided
      when the map's reference counter is altered.  Therefore, force
      them into separate cachelines.
      
      pahole dump after change:
      
        struct bpf_map {
              const struct bpf_map_ops  * ops;                 /*     0     8 */
              struct bpf_map *           inner_map_meta;       /*     8     8 */
              void *                     security;             /*    16     8 */
              enum bpf_map_type          map_type;             /*    24     4 */
              u32                        key_size;             /*    28     4 */
              u32                        value_size;           /*    32     4 */
              u32                        max_entries;          /*    36     4 */
              u32                        map_flags;            /*    40     4 */
              u32                        pages;                /*    44     4 */
              u32                        id;                   /*    48     4 */
              int                        numa_node;            /*    52     4 */
              bool                       unpriv_array;         /*    56     1 */
      
              /* XXX 7 bytes hole, try to pack */
      
              /* --- cacheline 1 boundary (64 bytes) --- */
              struct user_struct *       user;                 /*    64     8 */
              atomic_t                   refcnt;               /*    72     4 */
              atomic_t                   usercnt;              /*    76     4 */
              struct work_struct         work;                 /*    80    32 */
              char                       name[16];             /*   112    16 */
              /* --- cacheline 2 boundary (128 bytes) --- */
      
              /* size: 128, cachelines: 2, members: 17 */
              /* sum members: 121, holes: 1, sum holes: 7 */
        };
      
      Now all entries in the first cacheline are read-only throughout the
      lifetime of the map and are set up once during map creation.  The
      overall struct size and number of cachelines don't change from the
      reordering.  struct bpf_map is usually the first member of, and
      embedded in, the map structs of specific map implementations, so also
      avoid letting those members sit at the end, where they could share a
      cacheline with the first map values (e.g. in the array map): remote
      CPUs could trigger map updates for those just as well, intentionally
      dirtying members like max_entries, while keeping subsequent values in
      cache.
      
      Quoting from Google's Project Zero blog [1]:
      
        Additionally, at least on the Intel machine on which this was
        tested, bouncing modified cache lines between cores is slow,
        apparently because the MESI protocol is used for cache coherence
        [8]. Changing the reference counter of an eBPF array on one
        physical CPU core causes the cache line containing the reference
        counter to be bounced over to that CPU core, making reads of the
        reference counter on all other CPU cores slow until the changed
        reference counter has been written back to memory. Because the
        length and the reference counter of an eBPF array are stored in
        the same cache line, this also means that changing the reference
        counter on one physical CPU core causes reads of the eBPF array's
        length to be slow on other physical CPU cores (intentional false
        sharing).
      
      While this doesn't 'control' the out-of-bounds speculation through
      masking the index as in commit b2157399, triggering a manipulation
      of the map's reference counter is really trivial, so let's not allow
      max_entries to be affected so easily through it.
      
      Splitting into separate cachelines also generally makes sense from a
      performance perspective: the fast path won't take a cache miss when
      the map gets pinned, reused in other programs, etc. from the control
      path, which also avoids unintentional false sharing.
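      A sketch of the layout idea (abridged struct; only the alignment
      markers matter here): the read-mostly setup fields keep the first
      cacheline, while the counters written at runtime start a new one.

        struct bpf_map {
                /* 1st cacheline: read-only after map creation */
                const struct bpf_map_ops *ops ____cacheline_aligned;
                u32 max_entries;
                /* ... key_size, value_size, map_flags, ... */

                /* 2nd cacheline: written on the refcount fast path */
                struct user_struct *user ____cacheline_aligned;
                atomic_t refcnt;
                atomic_t usercnt;
                /* ... */
        };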
      
        [1] https://googleprojectzero.blogspot.ch/2018/01/reading-privileged-memory-with-side.html
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: arsh is not supported in 32 bit alu thus reject it · 7dcda40e
      Daniel Borkmann authored
      [ upstream commit 7891a87e ]
      
      The following snippet was throwing an 'unknown opcode cc' warning
      in the BPF interpreter:
      
        0: (18) r0 = 0x0
        2: (7b) *(u64 *)(r10 -16) = r0
        3: (cc) (u32) r0 s>>= (u32) r0
        4: (95) exit
      
      Although a number of JITs do support BPF_ALU | BPF_ARSH | BPF_{K,X}
      generation, not all of them do, and the interpreter does not either.
      We can leave the existing ones and implement it for the remaining
      ones later in bpf-next, but properly reject this in the verifier for
      the time being.
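      A sketch of the verifier check (placed in the ALU-op validation path;
      the verbose() plumbing differs across kernel versions):

        if (opcode == BPF_ARSH && BPF_CLASS(insn->code) != BPF_ALU64) {
                verbose("BPF_ARSH not supported for 32 bit ALU\n");
                return -EINVAL;
        }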
      
      Fixes: 17a52670 ("bpf: verifier (add verifier core)")
      Reported-by: syzbot+93c4904c5c70348a6890@syzkaller.appspotmail.com
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: introduce BPF_JIT_ALWAYS_ON config · 28c48674
      Alexei Starovoitov authored
      [ upstream commit 290af866 ]
      
      The BPF interpreter has been used as part of the Spectre variant 2
      attack (CVE-2017-5715).
      
      A quote from the Google Project Zero blog:
      "At this point, it would normally be necessary to locate gadgets in
      the host kernel code that can be used to actually leak data by reading
      from an attacker-controlled location, shifting and masking the result
      appropriately and then using the result of that as offset to an
      attacker-controlled address for a load. But piecing gadgets together
      and figuring out which ones work in a speculation context seems annoying.
      So instead, we decided to use the eBPF interpreter, which is built into
      the host kernel - while there is no legitimate way to invoke it from inside
      a VM, the presence of the code in the host kernel's text section is sufficient
      to make it usable for the attack, just like with ordinary ROP gadgets."
      
      To make the attacker's job harder, introduce the BPF_JIT_ALWAYS_ON
      config option, which removes the interpreter from the kernel in favor
      of a JIT-only mode.  So far the eBPF JIT is supported by:
      x64, arm64, arm32, sparc64, s390, powerpc64, mips64
      
      The start of the JITed program is randomized and the code page is
      marked read-only.  In addition, "constant blinding" can be turned on
      with net.core.bpf_jit_harden.
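      A sketch of the runtime-selection behaviour under the new option
      (abridged; the real bpf_prog_select_runtime() does more setup): with
      CONFIG_BPF_JIT_ALWAYS_ON, a program the JIT cannot compile is rejected
      instead of falling back to the interpreter.

        struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
        {
                fp = bpf_int_jit_compile(fp);   /* arch JIT, or a no-op stub */
        #ifdef CONFIG_BPF_JIT_ALWAYS_ON
                if (!fp->jited) {
                        *err = -ENOTSUPP;       /* no interpreter to fall back to */
                        return fp;
                }
        #endif
                *err = 0;
                return fp;
        }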
      
      v2->v3:
      - move __bpf_prog_ret0 under ifdef (Daniel)
      
      v1->v2:
      - fix init order, test_bpf and cBPF (Daniel's feedback)
      - fix offloaded bpf (Jakub's feedback)
      - add 'return 0' dummy in case something can invoke prog->bpf_func
      - retarget bpf tree. For bpf-next the patch would need one extra hunk.
        It will be sent when the trees are merged back to net-next
      
      Considered doing:
        int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
      but it seems better to land the patch as-is and, in bpf-next, remove
      the bpf_jit_enable global variable from all JITs, consolidate it in
      one place, and remove this jit_init() function.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix bpf_tail_call() x64 JIT · 361fb048
      Alexei Starovoitov authored
      [ upstream commit 90caccdd ]
      
      - bpf prog_array, just like all other types of bpf arrays, accepts a
        32-bit index.  Clarify that in the comment.
      - fix the x64 JIT of bpf_tail_call, which was incorrectly loading 8
        instead of 4 bytes
      - tighten the corresponding check in the interpreter to stay consistent
      
      The JIT bug can be triggered after the introduction of the
      BPF_F_NUMA_NODE flag in commit 96eabe7a in 4.14.  Before that,
      map_flags would stay zero, and though the JIT code was wrong, it would
      still check bounds correctly.  Hence the two Fixes tags.  All other
      JITs don't have this problem.
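      A sketch of the interpreter side of the consistency change (tail-call
      counting and prog lookup elided): the index is treated as a 32-bit
      value before the bounds check, matching the fixed 4-byte load/compare
      of map.max_entries in the x64 JIT.

        u32 index = BPF_R3;     /* prog_array indices are 32-bit by definition */

        if (unlikely(index >= array->map.max_entries))
                goto out;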
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Fixes: 96eabe7a ("bpf: Allow selecting numa node during map creation")
      Fixes: b52f00e6 ("x86: bpf_jit: implement bpf_tail_call() helper")
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86: bpf_jit: small optimization in emit_bpf_tail_call() · 5a802e67
      Eric Dumazet authored
      [ upstream commit 84ccac6e ]
      
      Saves 4 bytes by replacing the following instructions:
      
      lea rax, [rsi + rdx * 8 + offsetof(...)]
      mov rax, qword ptr [rax]
      cmp rax, 0
      
      by:
      
      mov rax, [rsi + rdx * 8 + offsetof(...)]
      test rax, rax
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: fix branch pruning logic · 1367d854
      Alexei Starovoitov authored
      [ Upstream commit c131187d ]
      
      When the verifier detects that a register contains a runtime constant
      and it is compared with another constant, it will prune exploration
      of the branch that is guaranteed not to be taken at runtime.
      This is all correct, but a malicious program may be constructed
      in such a way that it always has a constant comparison and
      the other branch is never taken under any conditions.
      In this case such a path through the program will not be explored
      by the verifier.  It won't be taken at run time either, but since
      all instructions are JITed, the malicious program may cause JITs
      to complain about using reserved fields, etc.
      To fix the issue we have to track the instructions explored by
      the verifier and sanitize the instructions that are dead at run time
      with NOPs.  We cannot reject such dead code, since llvm generates
      it for valid C code, because it doesn't do as much data flow
      analysis as the verifier does.
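      A sketch of the sanitization pass described above (struct and field
      names follow the upstream commit; the 4.4 backport's naming differs):
      every instruction the verifier never reached is overwritten with a
      harmless register move onto itself.

        static void sanitize_dead_code(struct bpf_verifier_env *env)
        {
                struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
                struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
                struct bpf_insn *insn = env->prog->insnsi;
                const int insn_cnt = env->prog->len;
                int i;

                for (i = 0; i < insn_cnt; i++) {
                        if (aux_data[i].seen)
                                continue;       /* explored by the verifier: keep */
                        memcpy(insn + i, &nop, sizeof(nop));
                }
        }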
      
      Fixes: 17a52670 ("bpf: verifier (add verifier core)")
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • loop: fix concurrent lo_open/lo_release · b3922254
      Linus Torvalds authored
      commit ae665016 upstream.
      
      范龙飞 reports that KASAN can report a use-after-free in __lock_acquire.
      The reason is insufficient serialization in lo_release(), which
      continues to use the loop device even after it has decremented
      lo_refcnt to zero.
      
      In the meantime, another process can come in and open the loop device
      again as it is being shut down.  Confusion ensues.
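      A sketch of the serialization the fix adds (teardown details elided):
      lo_release() drops the refcount and shuts the device down under the
      same mutex that lo_open() takes, so a concurrent open cannot observe a
      half-torn-down device.

        static void lo_release(struct gendisk *disk, fmode_t mode)
        {
                struct loop_device *lo;

                mutex_lock(&loop_index_mutex);
                lo = disk->private_data;
                if (atomic_dec_return(&lo->lo_refcnt))
                        goto out_unlock;

                /* last reference: flush or clear the device while still
                 * holding the mutex */
                loop_flush(lo);

        out_unlock:
                mutex_unlock(&loop_index_mutex);
        }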
      Reported-by: 范龙飞 <long7573@126.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 31 Jan, 2018 27 commits