- 23 Apr, 2016 8 commits
-
Saurabh Sengar authored
commit 2da29bcc upstream. Remove unused variables, found by coccinelle. Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Ryan Ware authored
commit 613317bd upstream. This patch fixes vulnerability CVE-2016-2085. The problem exists because the evm_verify_hmac() function includes a use of memcmp(). Unfortunately, this allows timing side channel attacks; specifically, a MAC forgery complexity drop from 2^128 to 2^12. This patch changes the memcmp() to the cryptographically safe crypto_memneq(). Reported-by: Xiaofei Rex Guo <xiaofei.rex.guo@intel.com> Signed-off-by: Ryan Ware <ware@linux.intel.com> Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
James Yonan authored
commit 6bf37e5a upstream. When comparing MAC hashes, AEAD authentication tags, or other hash values in the context of authentication or integrity checking, it is important not to leak timing information to a potential attacker, e.g. when communication happens over a network.

Bytewise memory comparisons (such as memcmp) are usually optimized so that they return a nonzero value as soon as a mismatch is found. E.g., on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch and up to ~850 cyc for a full match (cold). This early-return behavior can leak timing information as a side channel, allowing an attacker to iteratively guess the correct result.

This patch adds a new method crypto_memneq ("memory not equal to each other") to the crypto API that compares memory areas of the same length in roughly "constant time" (cache misses could change the timing, but since they don't reveal information about the content of the strings being compared, they are effectively benign). In other words, best- and worst-case behavior take the same amount of time to complete (in contrast to memcmp). Note that crypto_memneq (unlike memcmp) can only be used to test for equality or inequality, NOT for lexicographical order. This, however, is not an issue for its use-cases within the crypto API.

We tried to locate all of the places in the crypto API where memcmp was being used for authentication or integrity checking, and convert them over to crypto_memneq.

crypto_memneq is declared noinline, placed in its own source file, and compiled with optimizations that might increase code size disabled ("Os"), because a smart compiler (or LTO) might notice that the return value is always compared against zero/nonzero, and might then reintroduce the same early-return optimization that we are trying to avoid. Using #pragma or __attribute__ optimization annotations to disable optimization was avoided, as that mechanism seems to have been considered broken or unmaintained for a long time in GCC [1]. Therefore, we work around this by specifying the compile flag for memneq.o directly in the Makefile, which we found to be the most appropriate approach. As we use ("Os"), this patch also provides a loop-free "fast path" for frequently used 16-byte digests. As with the kernel library string functions, an option is left open for future, even further optimized, architecture-specific assembler implementations.

This was a joint work of James Yonan and Daniel Borkmann. Also thanks for feedback from Florian Weimer on this and earlier proposals [2].

[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131

Signed-off-by: James Yonan <james@openvpn.net> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Florian Weimer <fw@deneb.enyo.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
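The core idea is easy to demonstrate outside the kernel. The following is a minimal user-space sketch of the technique, not the kernel's crypto_memneq implementation (which adds the noinline/Makefile countermeasures and the 16-byte fast path described above):

    #include <stddef.h>

    /* Accumulate XOR differences over the *entire* length so the running
     * time is independent of where (or whether) the first mismatch occurs.
     * Nonzero means "not equal"; unusable for lexicographic ordering. */
    static unsigned long memneq_sketch(const void *a, const void *b, size_t n)
    {
        const unsigned char *pa = a, *pb = b;
        unsigned long neq = 0;

        while (n--)
            neq |= *pa++ ^ *pb++;
        return neq;
    }

Without the commit's compiler countermeasures, an optimizer that sees the result only ever compared against zero could still reintroduce an early exit, which is exactly why the kernel builds memneq.o as its own translation unit with "Os".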
-
David Howells authored
commit 096fe9ea upstream. If a user key gets negatively instantiated, an error code is cached in the payload area. A negatively instantiated key may then be positively instantiated by updating it with valid data. However, the ->update key type method must be aware that the error code may be there. The following may be used to trigger the bug in the user key type:

    keyctl request2 user user "" @u
    keyctl add user user "a" @u

which manifests itself as:

    BUG: unable to handle kernel paging request at 00000000ffffff8a
    IP: [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
    PGD 7cc30067 PUD 0
    Oops: 0002 [#1] SMP
    Modules linked in:
    CPU: 3 PID: 2644 Comm: a.out Not tainted 4.3.0+ #49
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
    task: ffff88003ddea700 ti: ffff88003dd88000 task.ti: ffff88003dd88000
    RIP: 0010:[<ffffffff810a376f>] [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
    RSP: 0018:ffff88003dd8bdb0 EFLAGS: 00010246
    RAX: 00000000ffffff82 RBX: 0000000000000000 RCX: 0000000000000001
    RDX: ffffffff81e3fe40 RSI: 0000000000000000 RDI: 00000000ffffff82
    RBP: ffff88003dd8bde0 R08: ffff88007d2d2da0 R09: 0000000000000000
    R10: 0000000000000000 R11: ffff88003e8073c0 R12: 00000000ffffff82
    R13: ffff88003dd8be68 R14: ffff88007d027600 R15: ffff88003ddea700
    FS: 0000000000b92880(0063) GS:ffff88007fd00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00000000ffffff8a CR3: 000000007cc5f000 CR4: 00000000000006e0
    Stack:
     ffff88003dd8bdf0 ffffffff81160a8a 0000000000000000 00000000ffffff82
     ffff88003dd8be68 ffff88007d027600 ffff88003dd8bdf0 ffffffff810a39e5
     ffff88003dd8be20 ffffffff812a31ab ffff88007d027600 ffff88007d027620
    Call Trace:
     [<ffffffff810a39e5>] kfree_call_rcu+0x15/0x20 kernel/rcu/tree.c:3136
     [<ffffffff812a31ab>] user_update+0x8b/0xb0 security/keys/user_defined.c:129
     [< inline >] __key_update security/keys/key.c:730
     [<ffffffff8129e5c1>] key_create_or_update+0x291/0x440 security/keys/key.c:908
     [< inline >] SYSC_add_key security/keys/keyctl.c:125
     [<ffffffff8129fc21>] SyS_add_key+0x101/0x1e0 security/keys/keyctl.c:60
     [<ffffffff8185f617>] entry_SYSCALL_64_fastpath+0x12/0x6a arch/x86/entry/entry_64.S:185

Note the error code (-ENOKEY, 0xffffff82) in RAX. A similar bug can be tripped by:

    keyctl request2 trusted user "" @u
    keyctl add trusted user "a" @u

This should also affect encrypted keys, but that has to be correctly parameterised or it will fail with EINVAL before getting to the bit that crashes. Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
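The shape of the fix in user_update() is roughly the following (a sketch, not the verbatim upstream diff):

    struct user_key_payload *zap = NULL;

    /* a negatively instantiated key stores an error code in the payload
     * slot, not a real payload pointer, so only schedule the old payload
     * for freeing when the key was positively instantiated */
    if (!test_bit(KEY_FLAG_NEGATIVE, &key->flags))
        zap = key->payload.data;
    rcu_assign_keypointer(key, upayload);
    if (zap)
        kfree_rcu(zap, rcu);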
-
Eric W. Biederman authored
commit 8486a788 upstream. Clear MNT_LOCKED in the callers of copy_tree, except copy_mnt_ns and collect_mounts. In copy_mnt_ns it is necessary to create an exact copy of a mount tree, so not clearing MNT_LOCKED is important. Similarly, collect_mounts is used to take a snapshot of the mount tree for audit logging purposes, and auditing with a faithful copy of the tree is important. This becomes particularly significant when we start setting MNT_LOCKED on rootfs to prevent it from being unmounted. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Acked-by: NeilBrown <neilb@suse.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dmitry Monakhov authored
commit 7e775f46 upstream. A pipe has no data associated with the fs, so it is not a good idea to block pipe_write() if the FS is frozen; but we cannot update the file's time on such a filesystem. Let's use the same idea as we use in touch_time(). Addresses https://bugzilla.kernel.org/show_bug.cgi?id=65701 Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Ignat Korchagin authored
commit b348d7dd upstream. Fix a potential out-of-bounds write to urb->transfer_buffer. usbip handles network communication directly in the kernel. When receiving a packet from its peer, usbip code parses headers according to the protocol; as part of this parsing, urb->actual_length is filled in. Since the input for urb->actual_length comes from the network, it should be treated as untrusted: any entity controlling the network may put any value in the input, and the preallocated urb->transfer_buffer may not be large enough to hold the data. Thus, a malicious entity is able to write arbitrary data to kernel memory. Signed-off-by: Ignat Korchagin <ignat.korchagin@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
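The core of such a defense is a bounds check on the untrusted length before any copy. A sketch of the idea, not the exact upstream diff (the error handling here is abbreviated and the error code is this sketch's choice):

    /* urb->actual_length came off the wire: reject it if it does not fit
     * into the buffer that was preallocated for this URB */
    if (urb->actual_length > urb->transfer_buffer_length) {
        dev_err(&urb->dev->dev, "bogus length %u > buffer %u\n",
                urb->actual_length, urb->transfer_buffer_length);
        return -EPIPE;
    }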
-
Florian Westphal authored
commit 6e94e0cf upstream. Otherwise this function may read data beyond the ruleset blob. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
- 21 Apr, 2016 9 commits
-
Florian Westphal authored
commit 54d83fc7 upstream. Ben Hawkes says: In the mark_source_chains function (net/ipv4/netfilter/ip_tables.c) it is possible for a user-supplied ipt_entry structure to have a large next_offset field. This field is not bounds checked prior to writing a counter value at the supplied offset.

The problem is that mark_source_chains should not have been called: the rule doesn't have a next entry, so it's supposed to return an absolute verdict of either ACCEPT or DROP. However, the function conditional() doesn't work as the name implies: it only checks that the rule is using wildcard address matching. An unconditional rule must also not be using any matches (no -m args). The underflow validator only checked the addresses, therefore passing the 'unconditional absolute verdict' test, while mark_source_chains also tested for the presence of matches, and thus proceeded to the next (non-existent) rule. Unify this so that all the callers have the same idea of an 'unconditional rule'. Reported-by: Ben Hawkes <hawkes@google.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
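The unified helper ends up looking roughly like this (a sketch of the ipt_entry variant; the arptables/ip6tables variants get the same treatment):

    /* an entry is unconditional only if it carries no matches at all
     * (target_offset leaves no room for them) and its IP part is all
     * zeroes, i.e. wildcard address matching */
    static bool unconditional(const struct ipt_entry *e)
    {
        static const struct ipt_ip uncond;

        return e->target_offset == sizeof(struct ipt_entry) &&
               memcmp(&e->ip, &uncond, sizeof(uncond)) == 0;
    }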
-
Florian Westphal authored
commit bdf533de upstream. We should check that e->target_offset is sane before mark_source_chains gets called since it will fetch the target entry for loop detection. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Acked-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Willy Tarreau authored
commit 759c0114 upstream. On not-so-small systems, it is possible for a single process to cause an OOM condition by filling large pipes with data that are never read. A typical process filling 4000 pipes with 1 MB of data will use 4 GB of memory. On small systems it may be tricky to set the pipe max size to prevent this from happening.

This patch makes it possible to enforce a per-user soft limit above which new pipes will be limited to a single page, effectively limiting them to 4 kB each, as well as a hard limit above which no new pipes may be created for this user. This has the effect of protecting the system against memory abuse without hurting other users, and still allowing pipes to work correctly, though with less data at once.

The limits are controlled by two new sysctls: pipe-user-pages-soft and pipe-user-pages-hard. Both may be disabled by setting them to zero. The default soft limit allows the default number of FDs per process (1024) to create pipes of the default size (64kB), thus reaching a limit of 64MB before starting to create only smaller pipes. With 256 processes limited to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB = 1084 MB of memory allocated for a user. The hard limit is disabled by default to avoid breaking existing applications that make intensive use of pipes (e.g. for splicing). Reported-by: socketpair@gmail.com Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Mitigates: CVE-2013-4312 (Linux 2.0+) Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
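To sanity-check the quoted figure: the first 1024 pipes get the full default size, 1024 x 64 kB = 64 MB; the remaining 256*1024 - 1024 = 261120 pipes are clamped to one 4 kB page each, giving 261120 x 4 kB = 1020 MB; together that is 64 MB + 1020 MB = 1084 MB, matching the commit message.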
-
Chuck Lever authored
commit 2b7bbc96 upstream. After commit a11a2bf4 ("SUNRPC: Optimise away unnecessary data moves in xdr_align_pages", Thu Aug 2 13:21:43 2012), READs larger than a few hundred bytes via NFS/RDMA no longer work. This commit exposed a long-standing bug in rpcrdma_inline_fixup(). I reproduce this with an rsize=4096 mount using the cthon04 basic tests. Test 5 fails with an EIO error. For my reproducer, the kernel log shows:

    NFS: server cheating in read reply: count 4096 > recvd 0

rpcrdma_inline_fixup() is zeroing the xdr_stream::page_len field, and xdr_align_pages() is now returning that value to the READ XDR decoder function. That field is set up by xdr_inline_pages() by the READ XDR encoder function. As far as I can tell, it is supposed to be left alone after that, as it describes the dimensions of the reply xdr_stream, not the contents of that stream. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68391 Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit f146357f upstream. The ALSA timer core framework has no sync point at stopping because it is called inside the spinlock. Thus we need a sync point at close to avoid a stray timer task. This is simply done by implementing a close callback that just calls del_timer_sync(). (It's harmless to call it unconditionally, as the timer core itself takes care of an already-deleted timer instance.) Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Romain Izard authored
commit 03a59437 upstream. As stated by the eMMC 5.0 specification, a chip should not be rejected only because of the revision stated in the EXT_CSD_REV field of the EXT_CSD register. Remove the check on this value; the check on the CSD_STRUCTURE field should be sufficient to reject future incompatible changes. Signed-off-by: Romain Izard <romain.izard.pro@gmail.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Greg Thelen authored
commit 0f930902 upstream. Since 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill processes") seq_buf_alloc() avoids calling the oom killer for PAGE_SIZE or smaller allocations; but larger allocations can use the oom killer via vmalloc(). Thus reads of small files can return ENOMEM, but larger files use the oom killer to avoid ENOMEM. The effect of this bug is that reads from /proc and other virtual filesystems can return ENOMEM instead of the preferred behavior - oom killing something (possibly the calling process). I don't know of anyone except Google who has noticed the issue. I suspect the fix is more needed in smaller systems where there isn't any reclaimable memory. But these seem like the kinds of systems which probably don't use the oom killer for production situations. Memory overcommit requires use of the oom killer to select a victim regardless of file size. Enable oom killer for small seq_buf_alloc() allocations. Fixes: 5cec38ac ("fs, seq_file: fallback to vmalloc instead of oom kill processes") Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Greg Thelen <gthelen@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
David Rientjes authored
commit 5cec38ac upstream. Since commit 058504ed ("fs/seq_file: fallback to vmalloc allocation"), seq_buf_alloc() falls back to vmalloc() when the kmalloc() for contiguous memory fails. This was done to address order-4 slab allocations for reading /proc/stat on large machines, and noticed because PAGE_ALLOC_COSTLY_ORDER < 4, so there is no infinite loop in the page allocator when allocating a new slab for such high-order allocations. Contiguous memory isn't necessary for callers of seq_buf_alloc(), however. Other GFP_KERNEL high-order allocations that are <= PAGE_ALLOC_COSTLY_ORDER will simply loop forever in the page allocator and oom kill processes as a result. We don't want to kill processes just so that we can allocate contiguous memory in situations when contiguous memory isn't necessary. This patch does the kmalloc() allocation with __GFP_NORETRY for high-order allocations. This still utilizes memory compaction and direct reclaim in the allocation path; the only difference is that it will fail immediately instead of oom killing processes when out of memory. [akpm@linux-foundation.org: add comment] Signed-off-by: David Rientjes <rientjes@google.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
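Combining this commit with its follow-up fix (the previous entry), the allocator ends up roughly like this:

    static void *seq_buf_alloc(unsigned long size)
    {
        void *buf;
        gfp_t gfp = GFP_KERNEL;

        /* only multi-page allocations get the fail-fast treatment;
         * for those, the vmalloc() fallback can still OOM-kill if needed */
        if (size > PAGE_SIZE)
            gfp |= __GFP_NORETRY | __GFP_NOWARN;
        buf = kmalloc(size, gfp);
        if (!buf && size > PAGE_SIZE)
            buf = vmalloc(size);
        return buf;
    }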
-
Bjørn Mork authored
commit 4d06dd53 upstream. usbnet_link_change will call schedule_work and should be avoided if bind is failing. Otherwise we will end up with scheduled work referring to a netdev which has gone away. Instead of making the call conditional, we can just defer it to usbnet_probe, using the driver_info flag made for this purpose. Fixes: 8a34b0ae ("usbnet: cdc_ncm: apply usbnet_link_change") Reported-by: Andrey Konovalov <andreyknvl@gmail.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
- 20 Apr, 2016 23 commits
-
Bjørn Mork authored
commit 93725149 upstream. MDM9x30 based modems appear to go into a deeper sleep when suspended without "Remote Wakeup" enabled. The QMI interface will not respond unless a "set DTR" control request is sent on resume. The effect is similar to a QMI_CTL SYNC request, resetting (some of) the firmware state. We allow userspace sessions to span multiple character device open/close sequences. This means that userspace can depend on firmware state while both the netdev and the character device are closed. We have disabled "needs_remote_wakeup" at this point to allow devices without remote wakeup support to be auto-suspended. To make sure the MDM9x30 keeps firmware state, we need to keep "needs_remote_wakeup" always set. We also need to issue a "set DTR" request to enable the QMI interface. Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Felipe F. Tonello authored
commit 03d27ade upstream. buflen by default (256) is smaller than wMaxPacketSize (512) in high-speed devices. That caused the OUT endpoint to freeze if the host sent any data packet of length greater than 256 bytes. This is an example dump of what happened on that endpoint:

    HOST:   [DATA][Length=260][...]
    DEVICE: [NAK]
    HOST:   [PING]
    DEVICE: [NAK]
    HOST:   [PING]
    DEVICE: [NAK]
    ...
    HOST:   [PING]
    DEVICE: [NAK]

This patch fixes the problem by setting the minimum usb_request buffer size for the OUT endpoint to its wMaxPacketSize. Acked-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Felipe F. Tonello <eu@felipetonello.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Cc: Oliver Neukum <oliver@neukum.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
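The fix amounts to a lower bound on the request size. A sketch of the relevant line (buflen is the driver's buffer-size parameter; ep->maxpacket is the gadget API's per-endpoint wMaxPacketSize; surrounding allocation code omitted):

    /* never make the OUT request smaller than wMaxPacketSize, or a
     * full-size packet from the host can never be accepted */
    req->length = max_t(unsigned int, midi->buflen, ep->maxpacket);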
-
Eryu Guan authored
commit 5e1021f2 upstream. ext4_reserve_inode_write() in ext4_mark_inode_dirty() could fail on error (e.g. EIO) and iloc.bh can be NULL in this case. But the error is ignored in the following "if" condition and ext4_expand_extra_isize() might be called with a NULL iloc.bh, which triggers a NULL pointer dereference. This was uncovered by commit 8b4953e1 ("ext4: reserve code points for the project quota feature"), which enlarges the ext4_inode size, and can be reproduced by running the following script on a new kernel but with an old mke2fs:

    #!/bin/bash
    mnt=/mnt/ext4
    devname=ext4-error
    dev=/dev/mapper/$devname
    fsimg=/home/fs.img

    trap cleanup 0 1 2 3 9 15

    cleanup() {
        umount $mnt >/dev/null 2>&1
        dmsetup remove $devname
        losetup -d $backend_dev
        rm -f $fsimg
        exit 0
    }

    rm -f $fsimg
    fallocate -l 1g $fsimg
    backend_dev=`losetup -f --show $fsimg`
    devsize=`blockdev --getsz $backend_dev`

    good_tab="0 $devsize linear $backend_dev 0"
    error_tab="0 $devsize error $backend_dev 0"

    dmsetup create $devname --table "$good_tab"

    mkfs -t ext4 $dev
    mount -t ext4 -o errors=continue,strictatime $dev $mnt

    dmsetup load $devname --table "$error_tab" && dmsetup resume $devname
    echo 3 > /proc/sys/vm/drop_caches
    ls -l $mnt

    exit 0

[ Patch changed to simplify the function a tiny bit. -- Ted ] Signed-off-by: Eryu Guan <guaneryu@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
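The hardened pattern is simply to honor the return value before touching iloc.bh (a sketch of the idea, not the verbatim upstream diff):

    /* bail out on failure instead of testing a condition that ignores
     * err and then dereferencing iloc.bh (NULL on the EIO path) */
    err = ext4_reserve_inode_write(handle, inode, &iloc);
    if (err)
        return err;
    /* iloc.bh is only known to be valid from this point on */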
-
Guo-Fu Tseng authored
commit 81422e67 upstream. According to Documentation/power/devices.txt, the driver should not use device_set_wakeup_enable(); that is policy, which is for the user to decide. Use device_init_wakeup() to initialize dev->power.should_wakeup and dev->power.can_wakeup at driver initialization, and use device_may_wakeup() on suspend to decide whether the WoL function should be enabled on the NIC. Reported-by: Diego Viola <diego.viola@gmail.com> Signed-off-by: Guo-Fu Tseng <cooldavid@cooldavid.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
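In code, the split between capability and policy looks roughly like this (a sketch with the jme-specific register programming omitted):

    /* at probe time: declare the wake-up capability once;
     * whether to use it is user policy */
    device_init_wakeup(&pdev->dev, true);

    /* at suspend time: honor the user's policy choice */
    if (device_may_wakeup(&pdev->dev)) {
        /* arm the NIC's Wake-on-LAN machinery here */
    }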
-
Guo-Fu Tseng authored
commit 0772a99b upstream. Otherwise, on some hardware, the system might resume right after going to suspend. Reported-by: Diego Viola <diego.viola@gmail.com> Signed-off-by: Guo-Fu Tseng <cooldavid@cooldavid.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Vladis Dronov authored
commit fa52bd50 upstream. The usbvision driver crashes when a specially crafted USB device with an invalid number of interfaces or endpoints is detected. This fix adds checks that the device has the proper configuration expected by the driver. Reported-by: Ralf Spenneberg <ralf@spenneberg.net> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alexey Khoroshilov authored
commit afd270d1 upstream. There is no usb_put_dev() on failure paths in usbvision_probe(). Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Nicolai Hähnle authored
[Backport of upstream commit f6ff4f67, with an additional NULL pointer guard that is required for kernels 3.17 and older. To be precise, any kernel that does *not* have commit 954605ca "drm/radeon: use common fence implementation for fences, v4" requires this additional NULL pointer guard.] An arbitrary amount of time can pass between spin_unlock and radeon_fence_wait_any, so we need to ensure that nobody frees the fences from under us. Based on the analogous fix for amdgpu. Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> (v1 + fix) Tested-by: Lutz Euler <lutz.euler@freenet.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alan Stern authored
commit 972e6a99 upstream. The usbhid driver has inconsistently duplicated code in its post-reset, resume, and reset-resume pathways:

 - reset-resume doesn't check HID_STARTED before trying to restart the I/O queues;
 - resume fails to clear the HID_SUSPENDED flag if HID_STARTED isn't set;
 - resume calls usbhid_restart_queues() with usbhid->lock held, while the others call it without holding the lock.

The first item in particular causes a problem following a reset-resume if the driver hasn't started up its I/O: URB submission fails because usbhid->urbin is NULL, and this triggers an unending reset-retry loop. This patch fixes the problem by creating a new subroutine, hid_restart_io(), to carry out all the common activities. It also adds some checks that were missing in the original code:

 - after a reset, there's no need to clear any halted endpoints;
 - after a resume, if a reset is pending there's no need to restart any I/O until the reset is finished;
 - after a resume, if the interrupt-IN endpoint is halted there's no need to submit the input URB until the halt has been cleared.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Daniel Fraga <fragabr@gmail.com> Tested-by: Daniel Fraga <fragabr@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Peter Zijlstra authored
commit 28a967c3 upstream. Because event_sched_out() checks event->pending_disable _before_ actually disabling the event, it can happen that the event fires after the check but before it gets disabled. This would leave event->pending_disable set, and the queued irq_work will try to process it. However, if the event trigger was during schedule(), the event might have been de-scheduled by the time the irq_work runs, and perf_event_disable_local() will fail. Fix this by checking event->pending_disable _after_ we call event->pmu->del(). This depends on the latter being a compiler barrier, such that the compiler does not lift the load and re-create the problem. Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.040469884@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
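The reordered sequence in event_sched_out() ends up roughly like this (a sketch trimmed to the relevant lines):

    event->state = PERF_EVENT_STATE_INACTIVE;
    event->pmu->del(event, 0);   /* external call: acts as a compiler
                                  * barrier, so the load below cannot be
                                  * hoisted above it */
    event->oncpu = -1;

    /* checked only _after_ ->del(): an event firing during schedule()
     * can no longer slip between the check and the disable */
    if (event->pending_disable) {
        event->pending_disable = 0;
        event->state = PERF_EVENT_STATE_OFF;
    }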
-
Theodore Ts'o authored
commit daf647d2 upstream. With the internal quota feature, mke2fs creates empty quota inodes, and quota usage tracking is enabled as soon as the file system is mounted. Since quotacheck is no longer preallocating all of the blocks in the quota inode that are likely to be written to, we are now seeing a lockdep false positive caused by needing to allocate a quota block from inside ext4_map_blocks(), while holding i_data_sem for a data inode. This results in this complaint:

    Possible unsafe locking scenario:

          CPU0                              CPU1
          ----                              ----
     lock(&ei->i_data_sem);
                                       lock(&s->s_dquot.dqio_mutex);
                                       lock(&ei->i_data_sem);
     lock(&s->s_dquot.dqio_mutex);

Google-Bug-Id: 27907753 Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Yoshihiro Shimoda authored
commit 6490865c upstream. This patch adds code to reliably disable the TX IRQ of the pipe before starting a TX DMAC transfer. Otherwise, a lot of unnecessary TX IRQs may happen in rare cases when the DMAC is used. Fixes: e73a9891 ("usb: renesas_usbhs: add DMAEngine support") Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Yoshihiro Shimoda authored
commit 894f2fc4 upstream. When an unexpected situation happened (e.g. a tx/rx irq fired while the DMAC was in use), usbhsf_pkt_handler() could cause a NULL pointer dereference like the following:

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    pgd = c0004000
    [00000000] *pgd=00000000
    Internal error: Oops: 80000007 [#1] SMP ARM
    Modules linked in: usb_f_acm u_serial g_serial libcomposite
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.5.0-rc6-00842-gac57066-dirty #63
    Hardware name: Generic R8A7790 (Flattened Device Tree)
    task: c0729c00 ti: c0724000 task.ti: c0724000
    PC is at 0x0
    LR is at usbhsf_pkt_handler+0xac/0x118
    pc : [<00000000>] lr : [<c03257e0>] psr: 60000193
    sp : c0725db8 ip : 00000000 fp : c0725df4
    r10: 00000001 r9 : 00000193 r8 : ef3ccab4
    r7 : ef3cca10 r6 : eea4586c r5 : 00000000
    r4 : ef19ceb4 r3 : 00000000 r2 : 0000009c
    r1 : c0725dc4 r0 : ef19ceb4

This patch adds a condition to avoid the dereference. Fixes: e73a9891 ("usb: renesas_usbhs: add DMAEngine support") Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Michal Kazior authored
commit cf440128 upstream. ieee80211_queue_stopped() expects a hw queue number but was given a raw WMM AC number instead. This could cause frame drops and traffic problems in some cases, most notably if the driver doesn't map AC numbers to queue numbers 1:1 and uses ieee80211_stop_queues() and ieee80211_wake_queue() only, without ever calling ieee80211_wake_queues(). On ath10k it was possible to hit this problem in the following case:

 1. wlan0 uses queue 0 (ath10k maps queues per vif)
 2. offchannel uses queue 15
 3. queues 1-14 are unused
 4. ieee80211_stop_queues()
 5. ieee80211_wake_queue(q=0)
 6. ieee80211_wake_queue(q=15) (other queues are not woken up because both driver and mac80211 know the other queues are unused)
 7. ieee80211_rx_h_mesh_fwding()
 8. ieee80211_select_queue_80211() returns 2
 9. ieee80211_queue_stopped(q=2) returns true
 10. frame is dropped (oops!)

Fixes: d3c1597b ("mac80211: fix forwarded mesh frame queue mapping") Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
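The fix boils down to translating the AC before the queue-stopped test, as in this sketch of the relevant lines (variable names follow mac80211 conventions; not the verbatim upstream diff):

    /* ieee80211_select_queue_80211() returns a WMM AC (0..3), but
     * ieee80211_queue_stopped() wants the driver's hw queue number */
    q = ieee80211_select_queue_80211(sdata, skb, hdr);
    q = sdata->vif.hw_queue[q];   /* map AC -> hw queue, e.g. 2 -> 0 */

    if (ieee80211_queue_stopped(&local->hw, q)) {
        /* that hw queue really is stopped: drop the forwarded frame */
    }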
-
Boris Ostrovsky authored
commit ff1e22e7 upstream. Moving an unmasked irq may result in the irq handler being invoked on both the source and target CPUs. With 2-level this can happen as follows.

On the source CPU:

    evtchn_2l_handle_events() ->
        generic_handle_irq() ->
            handle_edge_irq() ->
                eoi_pirq():
                    irq_move_irq(data);
                    /***** WE ARE HERE *****/
                    if (VALID_EVTCHN(evtchn))
                        clear_evtchn(evtchn);

If at this moment the target processor is handling an unrelated event in evtchn_2l_handle_events()'s loop, it may pick up our event, since the target's cpu_evtchn_mask claims that this event belongs to it *and* the event is unmasked and still pending. At the same time, the source CPU will continue executing its own handle_edge_irq(). With FIFO interrupts the scenario is similar: irq_move_irq() may result in an EVTCHNOP_unmask hypercall which, in turn, may make the event pending on the target CPU. We can avoid this situation by moving and clearing the event while keeping the event masked. Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
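The resulting eoi_pirq()-style sequence looks roughly like this (a sketch; test_and_set_mask() is the helper introduced by the following entry's commit):

    if (unlikely(irqd_is_setaffinity_pending(data))) {
        int masked = test_and_set_mask(evtchn);  /* remember prior state */

        clear_evtchn(evtchn);
        irq_move_masked_irq(data);   /* migrate while the event is masked */
        if (!masked)
            unmask_evtchn(evtchn);   /* unmask only if we did the masking */
    } else {
        clear_evtchn(evtchn);
    }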
-
Wei Liu authored
commit 3f70fa82 upstream. In preparation for adding event channel port ops, add test_and_set_mask(). Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alex Deucher authored
commit 0e5585dc upstream. Higher mclk values are not stable due to a bug somewhere. Limit them for now. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alex Deucher authored
commit f971f226 upstream. Bug: https://bugs.freedesktop.org/show_bug.cgi?id=94692 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Xishi Qiu authored
commit 6f25a14a upstream. It is incorrect to use next_node to find a target node; it can return MAX_NUMNODES or an invalid node, which leads to a crash in the buddy system allocation. Fixes: c8721bbb ("mm: memory-hotplug: enable memory hotplug to handle hugepage") Signed-off-by: Xishi Qiu <qiuxishi@huawei.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: David Rientjes <rientjes@google.com> Cc: "Laura Abbott" <lauraa@codeaurora.org> Cc: Hui Zhu <zhuhui@xiaomi.com> Cc: Wang Xiaoqiang <wangxq10@lzu.edu.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
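The safe pattern is to wrap around instead of handing MAX_NUMNODES to the allocator, as in this sketch using the nodemask helpers:

    /* next_node() past the last online node returns MAX_NUMNODES, which
     * must never reach the buddy allocator; wrap to the first online node */
    nid = next_node(nid, node_online_map);
    if (nid == MAX_NUMNODES)
        nid = first_node(node_online_map);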
-
Takashi Iwai authored
commit 4a07083e upstream. The ALSA system timer backend stops the timer via del_timer() without sync and leaves the del_timer_sync() for the close instead. This is because of a restriction in the design of the ALSA timer: namely, the stop callback may be called from the timer handler, and calling the sync there would lead to a hangup. However, this also triggers a kernel BUG() when the timer is rearmed immediately after stopping without sync:

    kernel BUG at kernel/time/timer.c:966!
    Call Trace:
     <IRQ>
     [<ffffffff8239c94e>] snd_timer_s_start+0x13e/0x1a0
     [<ffffffff8239e1f4>] snd_timer_interrupt+0x504/0xec0
     [<ffffffff8122fca0>] ? debug_check_no_locks_freed+0x290/0x290
     [<ffffffff8239ec64>] snd_timer_s_function+0xb4/0x120
     [<ffffffff81296b72>] call_timer_fn+0x162/0x520
     [<ffffffff81296add>] ? call_timer_fn+0xcd/0x520
     [<ffffffff8239ebb0>] ? snd_timer_interrupt+0xec0/0xec0
    ....

It's the place where add_timer() checks for a pending timer. It's clear that this may happen after an immediate restart without sync in our case. So, the workaround here is just to use mod_timer() instead of add_timer(). This looks like a band-aid fix, but it's the right move, as snd_timer_interrupt() takes care of the continuous rearming of the timer. Reported-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
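The workaround is essentially a one-liner in the system-timer start path (a sketch; priv->tlist and the njiff expiry value are from that function's context):

    /* mod_timer() safely re-arms a possibly-still-pending timer, whereas
     * setting expires and calling add_timer() would hit
     * BUG_ON(timer_pending(timer)) after an unsynced stop */
    mod_timer(&priv->tlist, njiff);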
-
Helge Deller authored
commit ef72f311 upstream. The kernel module testcase (lib/test_user_copy.c) exhibited a kernel crash on parisc if the parameters for copy_from_user were reversed ("illegal reversed copy_to_user" testcase). Fix this potential crash by checking the fault handler if the faulting address is in the exception table. Signed-off-by: Helge Deller <deller@gmx.de> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Helge Deller authored
commit e3893027 upstream. We want to prevent the kernel module loader from creating function pointers for the kernel fixup routines of get_user() and put_user(). Changing the external reference from function type to int type fixes this. This unbreaks exception handling for get_user() and put_user() when called from a kernel module. Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Guenter Roeck authored
commit 3c2e2266 upstream. arm:pxa_defconfig can result in the following crash if the max1111 driver is not instantiated:

    Unhandled fault: page domain fault (0x01b) at 0x00000000
    pgd = c0004000
    [00000000] *pgd=00000000
    Internal error: : 1b [#1] PREEMPT ARM
    Modules linked in:
    CPU: 0 PID: 300 Comm: kworker/0:1 Not tainted 4.5.0-01301-g1701f680 #10
    Hardware name: SHARP Akita
    Workqueue: events sharpsl_charge_toggle
    task: c390a000 ti: c391e000 task.ti: c391e000
    PC is at max1111_read_channel+0x20/0x30
    LR is at sharpsl_pm_pxa_read_max1111+0x2c/0x3c
    pc : [<c03aaab0>] lr : [<c0024b50>] psr: 20000013
    ...
    [<c03aaab0>] (max1111_read_channel) from [<c0024b50>] (sharpsl_pm_pxa_read_max1111+0x2c/0x3c)
    [<c0024b50>] (sharpsl_pm_pxa_read_max1111) from [<c00262e0>] (spitzpm_read_devdata+0x5c/0xc4)
    [<c00262e0>] (spitzpm_read_devdata) from [<c0024094>] (sharpsl_check_battery_temp+0x78/0x110)
    [<c0024094>] (sharpsl_check_battery_temp) from [<c0024f9c>] (sharpsl_charge_toggle+0x48/0x110)
    [<c0024f9c>] (sharpsl_charge_toggle) from [<c004429c>] (process_one_work+0x14c/0x48c)
    [<c004429c>] (process_one_work) from [<c0044618>] (worker_thread+0x3c/0x5d4)
    [<c0044618>] (worker_thread) from [<c004a238>] (kthread+0xd0/0xec)
    [<c004a238>] (kthread) from [<c000a670>] (ret_from_fork+0x14/0x24)

This can occur because the SPI controller driver (SPI_PXA2XX) is built as a module and thus not necessarily loaded. While building SPI_PXA2XX into the kernel would make the problem disappear, it appears prudent to ensure that the driver is instantiated before accessing its data structures. Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
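The guard itself is small. A sketch (the_max1111 is the driver's module-level instance pointer set at probe time; the error value is this sketch's choice):

    int max1111_read_channel(int channel)
    {
        /* the SPI controller driver may be modular, so the ADC may
         * simply not have been instantiated yet */
        if (!the_max1111 || !the_max1111->spi)
            return -ENODEV;

        return max1111_read(&the_max1111->spi->dev, channel);
    }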
-