  21 Apr, 2016 (9 commits)
    • netfilter: x_tables: fix unconditional helper · 9192d640
      Florian Westphal authored
      commit 54d83fc7 upstream.
      
      Ben Hawkes says:
      
       In the mark_source_chains function (net/ipv4/netfilter/ip_tables.c) it
       is possible for a user-supplied ipt_entry structure to have a large
       next_offset field. This field is not bounds checked prior to writing a
       counter value at the supplied offset.
      
      Problem is that mark_source_chains should not have been called --
      the rule doesn't have a next entry, so it's supposed to return
      an absolute verdict of either ACCEPT or DROP.
      
      However, the function conditional() doesn't work as the name implies.
      It only checks that the rule is using wildcard address matching.
      
      An unconditional rule, however, must also not be using any matches
      (no -m args).
      
      The underflow validator only checked the addresses, therefore
      passing the 'unconditional absolute verdict' test, while
      mark_source_chains also tested for the presence of matches, and thus
      proceeded to the next (non-existent) rule.
      
      Unify this so that all the callers have the same idea of an 'unconditional rule'.
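
      A minimal sketch of the unified check, matching the description above
      (illustrative rather than the exact upstream diff):

          /* An unconditional rule has a wildcard IP part AND no matches,
           * i.e. the target starts immediately after the ipt_entry header.
           */
          static bool unconditional(const struct ipt_entry *e)
          {
                  static const struct ipt_ip uncond;

                  return e->target_offset == sizeof(struct ipt_entry) &&
                         memcmp(&e->ip, &uncond, sizeof(uncond)) == 0;
          }
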
      Reported-by: Ben Hawkes <hawkes@google.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • netfilter: x_tables: validate e->target_offset early · 099f87c4
      Florian Westphal authored
      commit bdf533de upstream.
      
      We should check that e->target_offset is sane before
      mark_source_chains gets called since it will fetch the target entry
      for loop detection.
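
      A sketch of the early bounds check (helper name hypothetical; the
      point is rejecting the ruleset before mark_source_chains() runs):

          /* The target record must fit inside this entry, otherwise
           * fetching it for loop detection reads out of bounds.
           */
          static int check_target_offset(const struct ipt_entry *e)
          {
                  if (e->target_offset + sizeof(struct xt_entry_target) >
                      e->next_offset)
                          return -EINVAL;
                  return 0;
          }
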
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Acked-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • pipe: limit the per-user amount of pages allocated in pipes · 2a032e30
      Willy Tarreau authored
      commit 759c0114 upstream.
      
      On not-so-small systems, it is possible for a single process to cause an
      OOM condition by filling large pipes with data that are never read. A
      typical process filling 4000 pipes with 1 MB of data will use 4 GB of
      memory. On small systems it may be tricky to set the pipe max size to
      prevent this from happening.
      
      This patch makes it possible to enforce a per-user soft limit above
      which new pipes will be limited to a single page, effectively limiting
      them to 4 kB each, as well as a hard limit above which no new pipes may
      be created for this user. This has the effect of protecting the system
      against memory abuse without hurting other users, and still allowing
      pipes to work correctly though with less data at once.
      
      The limits are controlled by two new sysctls: pipe-user-pages-soft and
      pipe-user-pages-hard. Both may be disabled by setting them to zero. The
      default soft limit allows the default number of FDs per process (1024)
      to create pipes of the default size (64kB), thus reaching a limit of 64MB
      before starting to create only smaller pipes. With 256 processes limited
      to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB =
      1084 MB of memory allocated for a user. The hard limit is disabled by
      default to avoid breaking existing applications that make intensive use
      of pipes (e.g. for splicing).
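
      A sketch of the limit checks (helper and field names illustrative;
      sysctl names as above):

          /* Above the soft limit, a new pipe gets a single page (4 kB)
           * instead of the default sixteen (64 kB).
           */
          static bool too_many_pipe_buffers_soft(struct user_struct *user)
          {
                  return pipe_user_pages_soft &&
                         atomic_long_read(&user->pipe_bufs) > pipe_user_pages_soft;
          }

          /* in alloc_pipe_info(): */
          if (too_many_pipe_buffers_hard(user))
                  goto fail;              /* hard limit: refuse the pipe */
          if (too_many_pipe_buffers_soft(user))
                  pipe_bufs = 1;          /* soft limit: one page per pipe */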
      
      Reported-by: socketpair@gmail.com
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Mitigates: CVE-2013-4312 (Linux 2.0+)
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • SUNRPC: Fix large reads on NFS/RDMA · 62efb1f4
      Chuck Lever authored
      commit 2b7bbc96 upstream.
      
      After commit a11a2bf4, "SUNRPC: Optimise away unnecessary data moves
      in xdr_align_pages", Thu Aug 2 13:21:43 2012, READs larger than a
      few hundred bytes via NFS/RDMA no longer work.  This commit exposed
      a long-standing bug in rpcrdma_inline_fixup().
      
      I reproduce this with an rsize=4096 mount using the cthon04 basic
      tests.  Test 5 fails with an EIO error.
      
      For my reproducer, the kernel log shows:
      
        NFS: server cheating in read reply: count 4096 > recvd 0
      
      rpcrdma_inline_fixup() is zeroing the xdr_stream::page_len field,
      and xdr_align_pages() is now returning that value to the READ XDR
      decoder function.
      
      That field is set up via xdr_inline_pages() by the READ XDR encoder
      function.  As far as I can tell, it is supposed to be left alone
      after that, as it describes the dimensions of the reply xdr_stream,
      not the contents of that stream.
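
      In other words, the reply geometry set at encode time is an invariant
      the transport must respect.  A hedged sketch (call site approximate):

          /* The READ encoder sizes the reply buffer once: */
          xdr_inline_pages(&req->rq_rcv_buf, hdrlen,
                           args->pages, args->pgbase, args->count);

          /* page_len now describes the buffer's dimensions.  Pre-fix,
           * rpcrdma_inline_fixup() zeroed it for a short reply, so
           * xdr_align_pages() told the READ decoder that 0 bytes were
           * available: "count 4096 > recvd 0".
           */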
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68391
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • ALSA: timer: Sync timer deletion at closing the system timer · 9a2fa0d4
      Takashi Iwai authored
      commit f146357f upstream.
      
      The ALSA timer core framework has no sync point at stop because stopping
      is done inside the spinlock.  Thus we need a sync point at close for
      avoiding the stray timer task.  This is done simply by implementing a
      close callback that just calls del_timer_sync().  (It's harmless to
      call it unconditionally, as the core timer code itself takes care of an
      already-deleted timer instance.)
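
      A sketch of the added callback (struct names as in the system timer
      driver; illustrative):

          /* Close callback: synchronously kill any pending timer task. */
          static int snd_timer_s_close(struct snd_timer *timer)
          {
                  struct snd_timer_system_private *priv = timer->private_data;

                  del_timer_sync(&priv->tlist);
                  return 0;
          }
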
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • mmc: Allow forward compatibility for eMMC · 6f6abe51
      Romain Izard authored
      commit 03a59437 upstream.
      
      As stated by the eMMC 5.0 specification, a chip should not be rejected
      only because of the revision stated in the EXT_CSD_REV field of the
      EXT_CSD register.
      
      Remove the check on this value; checking the CSD_STRUCTURE field
      should be sufficient to reject future incompatible changes.
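
      The removed check looked roughly like this (context approximate):

          -       if (card->ext_csd.rev > 7) {
          -               pr_err("%s: unrecognised EXT_CSD revision %d\n",
          -                      mmc_hostname(card->host), card->ext_csd.rev);
          -               err = -EINVAL;
          -               goto out;
          -       }
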
      Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • fs, seqfile: always allow oom killer · 8bb06e09
      Greg Thelen authored
      commit 0f930902 upstream.
      
      Since 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill
      processes") seq_buf_alloc() avoids calling the oom killer for PAGE_SIZE or
      smaller allocations; but larger allocations can use the oom killer via
      vmalloc().  Thus reads of small files can return ENOMEM, but larger files
      use the oom killer to avoid ENOMEM.
      
      The effect of this bug is that reads from /proc and other virtual
      filesystems can return ENOMEM instead of the preferred behavior - oom
      killing something (possibly the calling process).  I don't know of anyone
      except Google who has noticed the issue.
      
      I suspect the fix is more needed in smaller systems where there isn't any
      reclaimable memory.  But these seem like the kinds of systems which
      probably don't use the oom killer for production situations.
      
      Memory overcommit requires use of the oom killer to select a victim
      regardless of file size.
      
      Enable oom killer for small seq_buf_alloc() allocations.
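
      A sketch of the resulting allocator (consistent with the description
      above; treat as illustrative):

          static void *seq_buf_alloc(unsigned long size)
          {
                  void *buf;
                  gfp_t gfp = GFP_KERNEL;

                  /* High-order allocations get __GFP_NORETRY because they
                   * can fall back to vmalloc(); small allocations keep
                   * plain GFP_KERNEL so the oom killer may be invoked.
                   */
                  if (size > PAGE_SIZE)
                          gfp |= __GFP_NORETRY | __GFP_NOWARN;
                  buf = kmalloc(size, gfp);
                  if (!buf && size > PAGE_SIZE)
                          buf = vmalloc(size);
                  return buf;
          }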
      
      Fixes: 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill processes")
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • fs, seq_file: fallback to vmalloc instead of oom kill processes · ef2276fc
      David Rientjes authored
      commit 5cec38ac upstream.
      
      Since commit 058504ed ("fs/seq_file: fallback to vmalloc allocation"),
      seq_buf_alloc() falls back to vmalloc() when the kmalloc() for contiguous
      memory fails.  This was done to address order-4 slab allocations for
      reading /proc/stat on large machines; the failures were noticed because
      PAGE_ALLOC_COSTLY_ORDER < 4, so there is no infinite loop in the page
      allocator when allocating a new slab for such high-order allocations.
      
      Contiguous memory isn't necessary for the caller of seq_buf_alloc(), however.
      Other GFP_KERNEL high-order allocations that are <=
      PAGE_ALLOC_COSTLY_ORDER will simply loop forever in the page allocator and
      oom kill processes as a result.
      
      We don't want to kill processes so that we can allocate contiguous memory
      in situations when contiguous memory isn't necessary.
      
      This patch does the kmalloc() allocation with __GFP_NORETRY for high-order
      allocations.  This still utilizes memory compaction and direct reclaim in
      the allocation path, the only difference is that it will fail immediately
      instead of oom kill processes when out of memory.
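
      A sketch of the allocation path after this change (the follow-up fix
      above later restricts __GFP_NORETRY to the high-order case):

          buf = kmalloc(size, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
          if (!buf && size > PAGE_SIZE)   /* fail fast, then fall back */
                  buf = vmalloc(size);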
      
      [akpm@linux-foundation.org: add comment]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • cdc_ncm: do not call usbnet_link_change from cdc_ncm_bind · f0592d35
      Bjørn Mork authored
      commit 4d06dd53 upstream.
      
      usbnet_link_change will call schedule_work and should be
      avoided if bind is failing. Otherwise we will end up with
      scheduled work referring to a netdev which has gone away.
      
      Instead of making the call conditional, we can just defer
      it to usbnet_probe, using the driver_info flag made for
      this purpose.
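
      A sketch of the resulting driver_info (other members elided;
      FLAG_LINK_INTR is the usbnet flag that makes usbnet_probe schedule
      the initial link change):

          static const struct driver_info cdc_ncm_info = {
                  .description = "CDC NCM",
                  .flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT
                           | FLAG_MULTI_PACKET | FLAG_LINK_INTR,
                  /* .bind, .unbind, .status, ... unchanged */
          };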
      
      Fixes: 8a34b0ae ("usbnet: cdc_ncm: apply usbnet_link_change")
      Reported-by: Andrey Konovalov <andreyknvl@gmail.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>