1. 03 May, 2016 5 commits
  2. 02 May, 2016 14 commits
  3. 23 Apr, 2016 10 commits
    • Linux 3.12.59 · 0fd090c8
      Jiri Slaby authored
    • KVM: x86: Reload pit counters for all channels when restoring state · e31a2100
      Andrew Honig authored
      commit 0185604c upstream.
      
      Currently, if userspace restores the PIT counters with a count of 0
      on channel 1 or 2 and the guest attempts to read the count on those
      channels, KVM will perform a modulo by 0 and crash.  This patch
      ensures that 0 values are converted to 65536, as the spec requires.
      
      This is CVE-2015-7513.
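
      A minimal sketch of the idea behind the fix (function name and types are
      illustrative, not the upstream diff): a programmed count of 0 means 65536
      on the 8254 PIT, so convert it before it can reach a modulo operation.

          #include <stdint.h>

          /* A PIT count of 0 is defined by the spec as 65536. */
          static uint32_t pit_effective_count(uint32_t count)
          {
                  return count ? count : 0x10000;
          }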
      Signed-off-by: Andy Honig <ahonig@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • KVM: x86: removing unused variable · 979e5410
      Saurabh Sengar authored
      commit 2da29bcc upstream.
      
      Remove unused variables, found by Coccinelle.
      Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • EVM: Use crypto_memneq() for digest comparisons · afe5a791
      Ryan Ware authored
      commit 613317bd upstream.
      
      This patch fixes vulnerability CVE-2016-2085.  The problem exists
      because the evm_verify_hmac() function includes a use of memcmp().
      Unfortunately, this allows timing side-channel attacks; specifically,
      a MAC forgery complexity drop from 2^128 to 2^12.  This patch changes
      the memcmp() to the cryptographically safe crypto_memneq().
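
      A hedged sketch of the resulting comparison (the wrapper and its argument
      names are illustrative; crypto_memneq() itself is declared in
      <crypto/algapi.h>):

          #include <crypto/algapi.h>   /* crypto_memneq() */

          /* Returns nonzero when the digests match.  crypto_memneq() runs in
           * constant time, unlike memcmp(), which returns early at the first
           * differing byte. */
          static int digests_match(const void *stored, const void *calc, size_t len)
          {
                  return !crypto_memneq(stored, calc, len);
          }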
      Reported-by: Xiaofei Rex Guo <xiaofei.rex.guo@intel.com>
      Signed-off-by: Ryan Ware <ware@linux.intel.com>
      Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks · d68e944a
      James Yonan authored
      commit 6bf37e5a upstream.
      
      When comparing MAC hashes, AEAD authentication tags, or other hash
      values in the context of authentication or integrity checking, it
      is important not to leak timing information to a potential attacker,
      i.e. when communication happens over a network.
      
      Bytewise memory comparisons (such as memcmp) are usually optimized so
      that they return a nonzero value as soon as a mismatch is found. E.g.,
      on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
      and up to ~850 cyc for a full match (cold). This early-return behavior
      can leak timing information as a side channel, allowing an attacker to
      iteratively guess the correct result.
      
      This patch adds a new method crypto_memneq ("memory not equal to each
      other") to the crypto API that compares memory areas of the same length
      in roughly "constant time" (cache misses could change the timing, but
      since they don't reveal information about the content of the strings
      being compared, they are effectively benign). In other words, best and worst case
      behaviour take the same amount of time to complete (in contrast to
      memcmp).
      
      Note that crypto_memneq (unlike memcmp) can only be used to test for
      equality or inequality, NOT for lexicographical order. This, however,
      is not an issue for its use-cases within the crypto API.
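
      The core idea can be sketched in a few lines of plain C (an illustration
      of the technique only; the kernel implementation adds word-sized and
      16-byte fast paths):

          #include <stddef.h>

          /* XOR every byte pair into an accumulator and inspect the result
           * only after all bytes have been visited, so the running time does
           * not depend on where the first mismatch occurs. */
          static unsigned long memneq_sketch(const void *a, const void *b, size_t n)
          {
                  const unsigned char *pa = a, *pb = b;
                  unsigned long neq = 0;

                  while (n--)
                          neq |= *pa++ ^ *pb++;
                  return neq;   /* zero: equal, nonzero: not equal */
          }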
      
      We tried to locate all of the places in the crypto API where memcmp was
      being used for authentication or integrity checking, and convert them
      over to crypto_memneq.
      
      crypto_memneq is declared noinline, placed in its own source file,
      and compiled with optimizations that might increase code size disabled
      ("Os") because a smart compiler (or LTO) might notice that the return
      value is always compared against zero/nonzero, and might then
      reintroduce the same early-return optimization that we are trying to
      avoid.
      
      Using #pragma or __attribute__ optimization annotations of the code
      for disabling optimization was avoided as it seems to be considered
      broken or unmaintained for a long time in GCC [1]. Therefore, we work
      around that by specifying the compile flag for memneq.o directly in
      the Makefile. We found that this seems to be most appropriate.
      
      As we use ("Os"), this patch also provides a loop-free "fast-path" for
      frequently used 16 byte digests. Similarly to kernel library string
      functions, leave an option for future even further optimized architecture
      specific assembler implementations.
      
      This was a joint work of James Yonan and Daniel Borkmann. Also thanks
      for feedback from Florian Weimer on this and earlier proposals [2].
      
        [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
        [2] https://lkml.org/lkml/2013/2/10/131
      Signed-off-by: James Yonan <james@openvpn.net>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Florian Weimer <fw@deneb.enyo.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • KEYS: Fix handling of stored error in a negatively instantiated user key · 15216848
      David Howells authored
      commit 096fe9ea upstream.
      
      If a user key gets negatively instantiated, an error code is cached in the
      payload area.  A negatively instantiated key may then be positively
      instantiated by updating it with valid data.  However, the ->update key
      type method must be aware that the error code may be there.
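
      A hedged sketch of what an aware ->update method has to do (flag and
      field names are illustrative of that era's key API, not the exact
      upstream diff):

          /* Only treat the old payload as a pointer to free if the key was
           * positively instantiated; a negative key stores an error code,
           * not a valid payload pointer. */
          zap = NULL;
          if (!test_bit(KEY_FLAG_NEGATIVE, &key->flags))
                  zap = key->payload.data;
          rcu_assign_keypointer(key, upayload);
          if (zap)
                  kfree_rcu(zap, rcu);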
      
      The following may be used to trigger the bug in the user key type:
      
          keyctl request2 user user "" @u
          keyctl add user user "a" @u
      
      which manifests itself as:
      
      	BUG: unable to handle kernel paging request at 00000000ffffff8a
      	IP: [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
      	PGD 7cc30067 PUD 0
      	Oops: 0002 [#1] SMP
      	Modules linked in:
      	CPU: 3 PID: 2644 Comm: a.out Not tainted 4.3.0+ #49
      	Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
      	task: ffff88003ddea700 ti: ffff88003dd88000 task.ti: ffff88003dd88000
      	RIP: 0010:[<ffffffff810a376f>]  [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280
      	 [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
      	RSP: 0018:ffff88003dd8bdb0  EFLAGS: 00010246
      	RAX: 00000000ffffff82 RBX: 0000000000000000 RCX: 0000000000000001
      	RDX: ffffffff81e3fe40 RSI: 0000000000000000 RDI: 00000000ffffff82
      	RBP: ffff88003dd8bde0 R08: ffff88007d2d2da0 R09: 0000000000000000
      	R10: 0000000000000000 R11: ffff88003e8073c0 R12: 00000000ffffff82
      	R13: ffff88003dd8be68 R14: ffff88007d027600 R15: ffff88003ddea700
      	FS:  0000000000b92880(0063) GS:ffff88007fd00000(0000) knlGS:0000000000000000
      	CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      	CR2: 00000000ffffff8a CR3: 000000007cc5f000 CR4: 00000000000006e0
      	Stack:
      	 ffff88003dd8bdf0 ffffffff81160a8a 0000000000000000 00000000ffffff82
      	 ffff88003dd8be68 ffff88007d027600 ffff88003dd8bdf0 ffffffff810a39e5
      	 ffff88003dd8be20 ffffffff812a31ab ffff88007d027600 ffff88007d027620
      	Call Trace:
      	 [<ffffffff810a39e5>] kfree_call_rcu+0x15/0x20 kernel/rcu/tree.c:3136
      	 [<ffffffff812a31ab>] user_update+0x8b/0xb0 security/keys/user_defined.c:129
      	 [<     inline     >] __key_update security/keys/key.c:730
      	 [<ffffffff8129e5c1>] key_create_or_update+0x291/0x440 security/keys/key.c:908
      	 [<     inline     >] SYSC_add_key security/keys/keyctl.c:125
      	 [<ffffffff8129fc21>] SyS_add_key+0x101/0x1e0 security/keys/keyctl.c:60
      	 [<ffffffff8185f617>] entry_SYSCALL_64_fastpath+0x12/0x6a arch/x86/entry/entry_64.S:185
      
      Note the error code (-ENOKEY) in EDX.
      
      A similar bug can be tripped by:
      
          keyctl request2 trusted user "" @u
          keyctl add trusted user "a" @u
      
      This should also affect encrypted keys - but those have to be correctly
      parameterised or the call will fail with EINVAL before getting to the
      bit that crashes.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • mnt: Move the clear of MNT_LOCKED from copy_tree to its callers. · d26388bb
      Eric W. Biederman authored
      commit 8486a788 upstream.
      
      Clear MNT_LOCKED in the callers of copy_tree except copy_mnt_ns, and
      collect_mounts.  In copy_mnt_ns it is necessary to create an exact
      copy of a mount tree, so not clearing MNT_LOCKED is important.
      Similarly collect_mounts is used to take a snapshot of the mount tree
      for audit logging purposes and auditing using a faithful copy of the
      tree is important.
      
      This becomes particularly significant when we start setting MNT_LOCKED
      on rootfs to prevent it from being unmounted.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Acked-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • fs/pipe.c: skip file_update_time on frozen fs · cf9d5808
      Dmitry Monakhov authored
      commit 7e775f46 upstream.
      
      A pipe has no data associated with the filesystem, so it is not a good
      idea to block pipe_write() when the FS is frozen; but we also cannot
      update the file's time on such a filesystem.  Let's use the same idea
      as we use in touch_time().
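
      A hedged sketch of the described approach (illustrative, not necessarily
      the exact hunk): take freeze protection opportunistically and simply skip
      the time update when the filesystem is frozen.

          /* Skip the time update instead of blocking when the fs is frozen. */
          if (sb_start_write_trylock(file_inode(filp)->i_sb)) {
                  file_update_time(filp);
                  sb_end_write(file_inode(filp)->i_sb);
          }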
      
      Addresses https://bugzilla.kernel.org/show_bug.cgi?id=65701
      Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • USB: usbip: fix potential out-of-bounds write · 3b86e790
      Ignat Korchagin authored
      commit b348d7dd upstream.
      
      Fix a potential out-of-bounds write to urb->transfer_buffer.

      usbip handles network communication directly in the kernel. When receiving a
      packet from its peer, the usbip code parses headers according to the protocol.
      As part of this parsing, urb->actual_length is filled. Since the input for
      urb->actual_length comes from the network, it should be treated as untrusted.
      Any entity controlling the network may put any value in the input and the
      preallocated urb->transfer_buffer may not be large enough to hold the data.
      Thus, the malicious entity is able to write arbitrary data to kernel memory.
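
      A minimal, self-contained sketch of the idea behind the fix (names are
      illustrative, not the upstream diff): validate the network-supplied
      length against the preallocated buffer before copying.

          #include <stddef.h>
          #include <string.h>

          /* Reject any length that does not fit the buffer we allocated. */
          static int copy_xbuff(unsigned char *buf, size_t buf_len,
                                const unsigned char *net_data, size_t net_len)
          {
                  if (net_len > buf_len)
                          return -1;   /* untrusted length exceeds buffer */
                  memcpy(buf, net_data, net_len);
                  return 0;
          }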
      Signed-off-by: Ignat Korchagin <ignat.korchagin@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • netfilter: x_tables: make sure e->next_offset covers remaining blob size · 8bdb7e5e
      Florian Westphal authored
      commit 6e94e0cf upstream.
      
      Otherwise this function may read data beyond the ruleset blob.
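
      A hedged sketch of the kind of bounds check this implies (illustrative,
      not the exact upstream hunk):

          #include <stddef.h>

          /* The entry's declared next_offset must not step past the end of
           * the ruleset blob copied from userspace. */
          static int next_offset_in_blob(const unsigned char *entry,
                                         unsigned int next_offset,
                                         const unsigned char *blob_end)
          {
                  return next_offset <= (size_t)(blob_end - entry);
          }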
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Cc: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
  4. 21 Apr, 2016 9 commits
    • netfilter: x_tables: fix unconditional helper · 9192d640
      Florian Westphal authored
      commit 54d83fc7 upstream.
      
      Ben Hawkes says:
      
       In the mark_source_chains function (net/ipv4/netfilter/ip_tables.c) it
       is possible for a user-supplied ipt_entry structure to have a large
       next_offset field. This field is not bounds checked prior to writing a
       counter value at the supplied offset.
      
      The problem is that mark_source_chains should not have been called --
      the rule doesn't have a next entry, so it's supposed to return
      an absolute verdict of either ACCEPT or DROP.
      
      However, the function conditional() doesn't work as the name implies.
      It only checks that the rule is using wildcard address matching.
      
      However, an unconditional rule must also not be using any matches
      (no -m args).
      
      The underflow validator only checked the addresses, therefore
      passing the 'unconditional absolute verdict' test, while
      mark_source_chains also tested for the presence of matches, and thus
      proceeded to the next (non-existent) rule.
      
      Unify this so that all the callers have the same idea of an 'unconditional rule'.
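
      A sketch of what the unified test looks like for iptables (field names
      follow ip_tables.h, but treat this as illustrative rather than the exact
      upstream code):

          #include <linux/string.h>
          #include <linux/netfilter_ipv4/ip_tables.h>

          /* A rule is unconditional only if its IP match part is all
           * wildcards AND it carries no -m matches, i.e. the target starts
           * right after the fixed-size entry header. */
          static bool unconditional(const struct ipt_entry *e)
          {
                  static const struct ipt_ip uncond;

                  return e->target_offset == sizeof(struct ipt_entry) &&
                         memcmp(&e->ip, &uncond, sizeof(uncond)) == 0;
          }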
      Reported-by: Ben Hawkes <hawkes@google.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • netfilter: x_tables: validate e->target_offset early · 099f87c4
      Florian Westphal authored
      commit bdf533de upstream.
      
      We should check that e->target_offset is sane before
      mark_source_chains gets called since it will fetch the target entry
      for loop detection.
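
      A hedged sketch of the kind of sanity check meant here (illustrative; the
      exact bounds in the upstream patch may differ):

          /* The target header must lie entirely inside the entry described
           * by next_offset, otherwise reject the ruleset early. */
          if (e->target_offset + sizeof(struct xt_entry_target) > e->next_offset)
                  return -EINVAL;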
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Acked-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • pipe: limit the per-user amount of pages allocated in pipes · 2a032e30
      Willy Tarreau authored
      commit 759c0114 upstream.
      
      On not-so-small systems, it is possible for a single process to cause an
      OOM condition by filling large pipes with data that are never read. A
      typical process filling 4000 pipes with 1 MB of data will use 4 GB of
      memory. On small systems it may be tricky to set the pipe max size to
      prevent this from happening.
      
      This patch makes it possible to enforce a per-user soft limit above
      which new pipes will be limited to a single page, effectively limiting
      them to 4 kB each, as well as a hard limit above which no new pipes may
      be created for this user. This has the effect of protecting the system
      against memory abuse without hurting other users, and still allowing
      pipes to work correctly though with less data at once.
      
      The limits are controlled by two new sysctls: pipe-user-pages-soft and
      pipe-user-pages-hard. Both may be disabled by setting them to zero. The
      default soft limit allows the default number of FDs per process (1024)
      to create pipes of the default size (64kB), thus reaching a limit of 64MB
      before starting to create only smaller pipes. With 256 processes limited
      to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB =
      1084 MB of memory allocated for a user. The hard limit is disabled by
      default to avoid breaking existing applications that make intensive use
      of pipes (e.g. for splicing).
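
      A self-contained sketch of the policy described above (names and return
      convention are illustrative, not the kernel code):

          /* A limit of zero disables the corresponding check. */
          static long pipe_pages_allowed(unsigned long user_pages,
                                         unsigned long want,
                                         unsigned long soft, unsigned long hard)
          {
                  if (hard && user_pages + want > hard)
                          return -1;   /* hard limit: refuse the new pipe */
                  if (soft && user_pages + want > soft)
                          return 1;    /* soft limit: one page (4 kB) only */
                  return want;
          }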
      
      Reported-by: socketpair@gmail.com
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Mitigates: CVE-2013-4312 (Linux 2.0+)
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • SUNRPC: Fix large reads on NFS/RDMA · 62efb1f4
      Chuck Lever authored
      commit 2b7bbc96 upstream.
      
      After commit a11a2bf4, "SUNRPC: Optimise away unnecessary data moves
      in xdr_align_pages", Thu Aug 2 13:21:43 2012, READs larger than a
      few hundred bytes via NFS/RDMA no longer work.  This commit exposed
      a long-standing bug in rpcrdma_inline_fixup().
      
      I reproduce this with an rsize=4096 mount using the cthon04 basic
      tests.  Test 5 fails with an EIO error.
      
      For my reproducer, kernel log shows:
      
        NFS: server cheating in read reply: count 4096 > recvd 0
      
      rpcrdma_inline_fixup() is zeroing the xdr_stream::page_len field,
      and xdr_align_pages() is now returning that value to the READ XDR
      decoder function.
      
      That field is set up by xdr_inline_pages() by the READ XDR encoder
      function.  As far as I can tell, it is supposed to be left alone
      after that, as it describes the dimensions of the reply xdr_stream,
      not the contents of that stream.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68391
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • ALSA: timer: Sync timer deletion at closing the system timer · 9a2fa0d4
      Takashi Iwai authored
      commit f146357f upstream.
      
      The ALSA timer core framework has no sync point at stopping because it
      is called inside a spinlock.  Thus we need a sync point at close to
      avoid a stray timer task.  This is simply done by implementing a close
      callback that just calls del_timer_sync().  (It's harmless to call it
      unconditionally, as the timer core itself takes care of an already
      deleted timer instance.)
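
      A hedged sketch of the described close callback (structure and field
      names are illustrative of the system-timer driver, not necessarily the
      exact patch):

          static int snd_timer_s_close(struct snd_timer *timer)
          {
                  struct snd_timer_system_private *priv = timer->private_data;

                  /* Wait for a possibly still-running timer callback. */
                  del_timer_sync(&priv->tlist);
                  return 0;
          }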
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • mmc: Allow forward compatibility for eMMC · 6f6abe51
      Romain Izard authored
      commit 03a59437 upstream.
      
      As stated by the eMMC 5.0 specification, a chip should not be rejected
      only because of the revision stated in the EXT_CSD_REV field of the
      EXT_CSD register.
      
      Remove the check on this value; checking the CSD_STRUCTURE field
      should be sufficient to reject future incompatible changes.
      Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • fs, seqfile: always allow oom killer · 8bb06e09
      Greg Thelen authored
      commit 0f930902 upstream.
      
      Since 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill
      processes") seq_buf_alloc() avoids calling the oom killer for PAGE_SIZE or
      smaller allocations; but larger allocations can use the oom killer via
      vmalloc().  Thus reads of small files can return ENOMEM, but larger files
      use the oom killer to avoid ENOMEM.
      
      The effect of this bug is that reads from /proc and other virtual
      filesystems can return ENOMEM instead of the preferred behavior - oom
      killing something (possibly the calling process).  I don't know of anyone
      except Google who has noticed the issue.
      
      I suspect the fix is more needed in smaller systems where there isn't any
      reclaimable memory.  But these seem like the kinds of systems which
      probably don't use the oom killer for production situations.
      
      Memory overcommit requires use of the oom killer to select a victim
      regardless of file size.
      
      Enable oom killer for small seq_buf_alloc() allocations.
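
      A hedged sketch of the resulting allocation policy (flags and fallback
      follow the description above; not necessarily the exact upstream
      function):

          /* Needs <linux/slab.h> and <linux/vmalloc.h>. */
          static void *seq_buf_alloc(unsigned long size)
          {
                  void *buf;
                  gfp_t gfp = GFP_KERNEL;

                  /* Large requests avoid the OOM killer and fall back to
                   * vmalloc(); small requests keep plain GFP_KERNEL so the
                   * OOM killer may still be used. */
                  if (size > PAGE_SIZE)
                          gfp |= __GFP_NORETRY | __GFP_NOWARN;
                  buf = kmalloc(size, gfp);
                  if (!buf && size > PAGE_SIZE)
                          buf = vmalloc(size);
                  return buf;
          }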
      
      Fixes: 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill processes")
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • fs, seq_file: fallback to vmalloc instead of oom kill processes · ef2276fc
      David Rientjes authored
      commit 5cec38ac upstream.
      
      Since commit 058504ed ("fs/seq_file: fallback to vmalloc allocation"),
      seq_buf_alloc() falls back to vmalloc() when the kmalloc() for contiguous
      memory fails.  This was done to address order-4 slab allocations for
      reading /proc/stat on large machines and noticed because
      PAGE_ALLOC_COSTLY_ORDER < 4, so there is no infinite loop in the page
      allocator when allocating new slab for such high-order allocations.
      
      Contiguous memory isn't necessary for the caller of seq_buf_alloc(), however.
      Other GFP_KERNEL high-order allocations that are <=
      PAGE_ALLOC_COSTLY_ORDER will simply loop forever in the page allocator and
      oom kill processes as a result.
      
      We don't want to kill processes so that we can allocate contiguous memory
      in situations when contiguous memory isn't necessary.
      
      This patch does the kmalloc() allocation with __GFP_NORETRY for high-order
      allocations.  This still utilizes memory compaction and direct reclaim in
      the allocation path, the only difference is that it will fail immediately
      instead of oom kill processes when out of memory.
      
      [akpm@linux-foundation.org: add comment]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • cdc_ncm: do not call usbnet_link_change from cdc_ncm_bind · f0592d35
      Bjørn Mork authored
      commit 4d06dd53 upstream.
      
      usbnet_link_change will call schedule_work and should be
      avoided if bind is failing. Otherwise we will end up with
      scheduled work referring to a netdev which has gone away.
      
      Instead of making the call conditional, we can just defer
      it to usbnet_probe, using the driver_info flag made for
      this purpose.
      
      Fixes: 8a34b0ae ("usbnet: cdc_ncm: apply usbnet_link_change")
      Reported-by: Andrey Konovalov <andreyknvl@gmail.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
  5. 20 Apr, 2016 2 commits
    • net: qmi_wwan: MDM9x30 specific power management · 9c186cff
      Bjørn Mork authored
      commit 93725149 upstream.
      
      MDM9x30 based modems appear to go into a deeper sleep when
      suspended without "Remote Wakeup" enabled.  The QMI interface
      will not respond unless a "set DTR" control request is sent
      on resume. The effect is similar to a QMI_CTL SYNC request,
      resetting (some of) the firmware state.
      
      We allow userspace sessions to span multiple character device
      open/close sequences.  This means that userspace can depend
      on firmware state while both the netdev and the character
      device are closed.  We have disabled "needs_remote_wakeup" at
      this point to allow devices without remote wakeup support to
      be auto-suspended.
      
      To make sure the MDM9x30 keeps firmware state, we need to
      keep "needs_remote_wakeup" always set. We also need to
      issue a "set DTR" request to enable the QMI interface.
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • usb: gadget: f_midi: Fixed a bug when buflen was smaller than wMaxPacketSize · 7d4647a4
      Felipe F. Tonello authored
      commit 03d27ade upstream.
      
      buflen by default (256) is smaller than wMaxPacketSize (512) in high-speed
      devices.
      
      That caused the OUT endpoint to freeze if the host sent any data packet of
      length greater than 256 bytes.
      
      This is an example dump of what happened on that endpoint:
      HOST:   [DATA][Length=260][...]
      DEVICE: [NAK]
      HOST:   [PING]
      DEVICE: [NAK]
      HOST:   [PING]
      DEVICE: [NAK]
      ...
      HOST:   [PING]
      DEVICE: [NAK]
      
      This patch fixes the problem by setting the minimum buffer size of the
      OUT endpoint's usb_request to its wMaxPacketSize.
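
      A minimal sketch of the sizing rule described (helper name illustrative):

          /* Never allocate an OUT request buffer smaller than the endpoint's
           * wMaxPacketSize. */
          static size_t midi_out_buflen(size_t buflen, size_t wMaxPacketSize)
          {
                  return buflen < wMaxPacketSize ? wMaxPacketSize : buflen;
          }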
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Signed-off-by: Felipe F. Tonello <eu@felipetonello.com>
      Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
      Cc: Oliver Neukum <oliver@neukum.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>