- 25 Aug, 2019 40 commits
-
-
Wenwen Wang authored
commit cfef67f0 upstream. In snd_hda_parse_generic_codec(), 'spec' is allocated through kzalloc(). Then, the pin widgets in 'codec' are parsed. However, if the parsing process fails, 'spec' is not deallocated, leading to a memory leak. To fix the above issue, free 'spec' before returning the error. Fixes: 352f7f91 ("ALSA: hda - Merge Realtek parser code to generic parser") Signed-off-by:
Wenwen Wang <wenwen@cs.uga.edu> Cc: <stable@vger.kernel.org> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
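A minimal sketch of the error-path pattern this fix applies (illustrative, not the exact driver code):

	struct hda_gen_spec *spec;
	int err;

	spec = kzalloc(sizeof(*spec), GFP_KERNEL);
	if (!spec)
		return -ENOMEM;
	codec->spec = spec;
	/* ... parse the pin widgets ... */
	err = snd_hda_parse_pin_defcfg(codec, &spec->autocfg, NULL, 0);
	if (err < 0) {
		kfree(spec);	/* previously leaked on this error path */
		return err;
	}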
-
Max Filippov authored
commit cd8869f4 upstream. ITLB entry modifications must be followed by the isync instruction before the new entries are possibly used. cpu_reset lacks one isync between ITLB way 6 initialization and jump to the identity mapping. Add missing isync to xtensa cpu_reset. Cc: stable@vger.kernel.org Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Florian Westphal authored
commit 3c791076 upstream. Otherwise, we leak the conntrack/expect object addresses to userspace via ctnetlink events and dumps. Compute an ID on demand based on the immutable parts of the nf_conn struct. Another advantage compared to using an address is that there is no immediate re-use of the same ID in case the conntrack entry is freed and reallocated again immediately. Fixes: 35832402 ("[NETFILTER]: nf_conntrack_expect: kill unique ID") Fixes: 7f85f914 ("[NETFILTER]: nf_conntrack: kill unique ID") Signed-off-by:
Florian Westphal <fw@strlen.de> Signed-off-by:
Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
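The idea, sketched (hedged and simplified from the upstream helper): derive the ID by hashing only immutable parts of the object under a once-initialized random siphash key, instead of exposing the object's address:

	static u32 nf_ct_get_id(const struct nf_conn *ct)
	{
		static siphash_key_t ct_id_seed __read_mostly;

		net_get_random_once(&ct_id_seed, sizeof(ct_id_seed));

		/* the tuple is fixed for the conntrack's lifetime */
		return (u32)siphash(&ct->tuplehash, sizeof(ct->tuplehash),
				    &ct_id_seed);
	}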
-
Eric Dumazet authored
commit df453700 upstream. According to Amit Klein and Benny Pinkas, IP ID generation is too weak and might be used by attackers. Even with the recent net_hash_mix() fix (netns: provide pure entropy for net_hash_mix()), having a 64-bit key and Jenkins hash is risky. It is time to switch to siphash and its 128-bit keys. Signed-off-by:
Eric Dumazet <edumazet@google.com> Reported-by:
Amit Klein <aksecurity@gmail.com> Reported-by:
Benny Pinkas <benny@pinkas.net> Signed-off-by:
David S. Miller <davem@davemloft.net> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
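A hedged sketch of the new generator (not the verbatim upstream code): the IP ID bucket is selected by a 128-bit-keyed siphash over the flow instead of a jhash:

	static siphash_key_t ip_id_key __read_mostly;

	static u32 ip_id_bucket(__be32 saddr, __be32 daddr, u8 proto)
	{
		net_get_random_once(&ip_id_key, sizeof(ip_id_key));
		return (u32)siphash_3u32((__force u32)saddr,
					 (__force u32)daddr,
					 proto, &ip_id_key);
	}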
-
Jason A. Donenfeld authored
commit 1ae2324f upstream. HalfSipHash, or hsiphash, is a shortened version of SipHash, which generates 32-bit outputs using a weaker 64-bit key. It has *much* lower security margins, and shouldn't be used for anything too sensitive, but it could be used as a hashtable key function replacement, if the output is never exposed, and if the security requirement is not too high. The goal is to make this something that performance-critical jhash users would be willing to use. On 64-bit machines, HalfSipHash1-3 is slower than SipHash1-3, so we alias SipHash1-3 to HalfSipHash1-3 on those systems.

64-bit x86_64:
[ 0.509409] test_siphash: SipHash2-4 cycles: 4049181
[ 0.510650] test_siphash: SipHash1-3 cycles: 2512884
[ 0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
[ 0.512904] test_siphash: JenkinsHash cycles: 978267
So, we map hsiphash() -> SipHash1-3

32-bit x86:
[ 0.509868] test_siphash: SipHash2-4 cycles: 14812892
[ 0.513601] test_siphash: SipHash1-3 cycles: 9510710
[ 0.515263] test_siphash: HalfSipHash1-3 cycles: 3856157
[ 0.515952] test_siphash: JenkinsHash cycles: 1148567
So, we map hsiphash() -> HalfSipHash1-3

hsiphash() is roughly 3 times slower than jhash(), but comes with a considerable security improvement. Signed-off-by:
Jason A. Donenfeld <Jason@zx2c4.com> Reviewed-by:
Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> [bwh: Backported to 4.9 to avoid regression for WireGuard with only half the siphash API present] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
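As an example of the intended use, a jhash-based hashtable bucket computation could switch to hsiphash() like this (a sketch; struct flow_tuple and the key setup are made up for illustration):

	#include <linux/siphash.h>

	static hsiphash_key_t table_key __read_mostly; /* fill once with get_random_bytes() */

	static u32 bucket_of(const struct flow_tuple *t, u32 table_mask)
	{
		/* keyed 32-bit hash; only safe if the value never leaves the kernel */
		return hsiphash(t, sizeof(*t), &table_key) & table_mask;
	}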
-
Jason A. Donenfeld authored
commit 2c956a60 upstream. SipHash is a 64-bit keyed hash function that is actually a cryptographically secure PRF, like HMAC. Except SipHash is super fast, and is meant to be used as a hashtable keyed lookup function, or as a general PRF for short input use cases, such as sequence numbers or RNG chaining. For the first usage: There are a variety of attacks known as "hashtable poisoning" in which an attacker forms some data such that the hash of that data will be the same, and then proceeds to fill up all entries of a hashbucket. This is a realistic and well-known denial-of-service vector. Currently hashtables use jhash, which is fast but not secure, and some kind of rotating key scheme (or none at all, which isn't good). SipHash is meant as a replacement for jhash in these cases. There are a modicum of places in the kernel that are vulnerable to hashtable poisoning attacks, either via userspace vectors or network vectors, and there's not a reliable mechanism inside the kernel at the moment to fix it. The first step toward fixing these issues is actually getting a secure primitive into the kernel for developers to use. Then we can, bit by bit, port things over to it as deemed appropriate. While SipHash is extremely fast for a cryptographically secure function, it is likely a bit slower than the insecure jhash, and so replacements will be evaluated on a case-by-case basis based on whether or not the difference in speed is negligible and whether or not the current jhash usage poses a real security risk. For the second usage: A few places in the kernel are using MD5 or SHA1 for creating secure sequence numbers, syn cookies, port numbers, or fast random numbers. SipHash is a faster, more fitting, and more secure replacement for MD5 in those situations. Replacing MD5 and SHA1 with SipHash for these uses is obvious and straightforward, and so is submitted along with this patch series. There shouldn't be much of a debate over its efficacy. Dozens of languages are already using this internally for their hash tables and PRFs. Some of the BSDs already use this in their kernels. SipHash is a widely known high-speed solution to a widely known set of problems, and it's time we catch up. Signed-off-by:
Jason A. Donenfeld <Jason@zx2c4.com> Reviewed-by:
Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: David Laight <David.Laight@aculab.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> [bwh: Backported to 4.9 as dependency of commits df453700 "inet: switch IP ID generator to siphash" and 3c791076 "netfilter: ctnetlink: don't use conntrack/expect object addresses as id"] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
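Typical use of the primitive as a keyed PRF for short inputs (a sketch; the function name and inputs are illustrative):

	#include <linux/siphash.h>

	static siphash_key_t seq_key __read_mostly;

	u64 secure_seq(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport)
	{
		net_get_random_once(&seq_key, sizeof(seq_key));
		/* 64-bit output, unpredictable without the 128-bit seq_key */
		return siphash_3u32((__force u32)saddr, (__force u32)daddr,
				    ((__force u32)sport << 16) |
				    (__force u32)dport, &seq_key);
	}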
-
Jason Wang authored
commit c1ea02f1 upstream. This patch will check the weight and exit the loop if we exceed the weight. This is useful for preventing the scsi kthread from hogging the cpu, which is guest triggerable. This addresses CVE-2019-3900. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Fixes: 057cbf49 ("tcm_vhost: Initial merge for vhost level target fabric driver") Signed-off-by:
Jason Wang <jasowang@redhat.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> [bwh: Backported to 4.9: - Drop changes in vhost_scsi_ctl_handle_vq() - Adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jason Wang authored
commit e2412c07 upstream. When the rx buffer is too small for a packet, we will discard the vq descriptor and retry it for the next packet:

	while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
						      &busyloop_intr))) {
		...
		/* On overrun, truncate and discard */
		if (unlikely(headcount > UIO_MAXIOV)) {
			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
			err = sock->ops->recvmsg(sock, &msg,
						 1, MSG_DONTWAIT | MSG_TRUNC);
			pr_debug("Discarded rx packet: len %zd\n", sock_len);
			continue;
		}
		...
	}

This makes it possible to trigger an infinite while..continue loop through the cooperation of two VMs like:

1) Malicious VM1 allocates 1-byte rx buffers and tries to slow down the vhost process as much as possible, e.g. using indirect descriptors or similar.
2) Malicious VM2 generates packets to VM1 as fast as possible.

Fix this by checking against the weight at the end of the RX and TX loops. This also eliminates other similar cases when:
- userspace is consuming the packets in the meanwhile
- a theoretical TOCTOU attack, if the guest moves the avail index back and forth to hit the continue path right after vhost finds that the guest just added new buffers

This addresses CVE-2019-3900. Fixes: d8316f39 ("vhost: fix total length when packets are too short") Fixes: 3a4d5c94 ("vhost_net: a kernel-level virtio server") Signed-off-by:
Jason Wang <jasowang@redhat.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com> [bwh: Backported to 4.9: - Both Tx modes are handled in one loop in handle_tx() - Adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
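The shape of the fix, sketched (heavily simplified from the real handle_rx() loop): account packets and bytes and leave the loop once the weight trips, instead of letting the continue path spin forever:

	int total_len = 0, pkts = 0;

	do {
		sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
						      &busyloop_intr);
		if (!sock_len)
			break;
		/* ... receive one packet, possibly truncate and discard ... */
		total_len += sock_len;
		/* the check now also covers the discard/continue path */
	} while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));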
-
Jason Wang authored
commit e82b9b07 upstream. We used to have vhost_exceeds_weight() for vhost-net to:
- prevent the vhost kthread from hogging the cpu
- balance the time spent between TX and RX

This function could be useful for vsock and scsi as well. So move it to vhost.c. A device must specify a weight, which counts the number of requests, or it can also specify a byte_weight, which counts the number of bytes that have been processed. Signed-off-by:
Jason Wang <jasowang@redhat.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com> [bwh: Backported to 4.9: - In vhost_net, both Tx modes are handled in one loop in handle_tx() - Adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
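A sketch of the consolidated helper (close to, though not verbatim, the upstream version): when either limit is hit, requeue the virtqueue poll work and tell the caller to stop:

	bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
				  int pkts, int total_len)
	{
		struct vhost_dev *dev = vq->dev;

		if (unlikely(total_len >= dev->byte_weight) ||
		    unlikely(pkts >= dev->weight)) {
			/* yield the cpu; the work is rescheduled, not dropped */
			vhost_poll_queue(&vq->poll);
			return true;
		}
		return false;
	}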
-
Jason Wang authored
commit 272f35cb upstream. Signed-off-by:
Jason Wang <jasowang@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Abeni authored
commit db688c24 upstream. Similar to commit a2ac9990 ("vhost-net: set packet weight of tx polling to 2 * vq size"), we need a packet-based limit for handle_rx, too - otherwise, under rx flood with small packets, tx can be delayed for a very long time, even without busypolling. The pkt limit applied to handle_rx must be the same applied by handle_tx, or we will get unfair scheduling between rx and tx. Tying such limit to the queue length makes it less effective for large queue length values and can introduce large process scheduler latencies, so a constant value is used - likewise the existing bytes limit. The selected limit has been validated with PVP[1] performance tests with different queue sizes:

	queue size	 256	 512	1024
	baseline	 366	 354	 362
	weight 128	 715	 723	 670
	weight 256	 740	 745	 733
	weight 512	 600	 460	 583
	weight 1024	 423	 427	 418

A packet weight of 256 gives peak performance in all the tested scenarios. No measurable regression in unidirectional performance tests has been detected. [1] https://developers.redhat.com/blog/2017/06/05/measuring-and-comparing-open-vswitch-performance/ Signed-off-by:
Paolo Abeni <pabeni@redhat.com> Acked-by:
Jason Wang <jasowang@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
haibinzhang(张海斌) authored
commit a2ac9990 upstream. handle_tx will delay rx for tens or even hundreds of milliseconds when tx busy polling udp packets with small length (e.g. 1-byte udp payload), because setting VHOST_NET_WEIGHT takes into account only sent bytes but no single packet length. Ping-Latencies shown below were tested between two Virtual Machines using netperf (UDP_STREAM, len=1), and then another machine pinged the client:

	vq size=256
	Packet-Weight	Ping-Latencies(millisecond)
			   min	   avg	   max
	Origin		 3.319	18.489	57.303
	64		 1.643	 2.021	 2.552
	128		 1.825	 2.600	 3.224
	256		 1.997	 2.710	 4.295
	512		 1.860	 3.171	 4.631
	1024		 2.002	 4.173	 9.056
	2048		 2.257	 5.650	 9.688
	4096		 2.093	 8.508	15.943

	vq size=512
	Packet-Weight	Ping-Latencies(millisecond)
			   min	   avg	   max
	Origin		 6.537	29.177	66.245
	64		 2.798	 3.614	 4.403
	128		 2.861	 3.820	 4.775
	256		 3.008	 4.018	 4.807
	512		 3.254	 4.523	 5.824
	1024		 3.079	 5.335	 7.747
	2048		 3.944	 8.201	12.762
	4096		 4.158	11.057	19.985

Seems pretty consistent, a small dip at 2 VQ sizes. Ring size is a hint from device about a burst size it can tolerate. Based on benchmarks, set the weight to 2 * vq size. To evaluate this change, another set of tests was done using netperf (RR, TX) between two machines with Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz, and vq size was tweaked through qemu. Results shown below do not show obvious changes.

	vq size=256 TCP_RR		vq size=512 TCP_RR
	size/sessions/+thu%/+normalize%	size/sessions/+thu%/+normalize%
	   1/ 1/  -7%/  -2%		   1/ 1/   0%/  -2%
	   1/ 4/  +1%/   0%		   1/ 4/  +1%/   0%
	   1/ 8/  +1%/  -2%		   1/ 8/   0%/  +1%
	  64/ 1/  -6%/   0%		  64/ 1/  +7%/  +3%
	  64/ 4/   0%/  +2%		  64/ 4/  -1%/  +1%
	  64/ 8/   0%/   0%		  64/ 8/  -1%/  -2%
	 256/ 1/  -3%/  -4%		 256/ 1/  -4%/  -2%
	 256/ 4/  +3%/  +4%		 256/ 4/  +1%/  +2%
	 256/ 8/  +2%/   0%		 256/ 8/  +1%/  -1%

	vq size=256 UDP_RR		vq size=512 UDP_RR
	size/sessions/+thu%/+normalize%	size/sessions/+thu%/+normalize%
	   1/ 1/  -5%/  +1%		   1/ 1/  -3%/  -2%
	   1/ 4/  +4%/  +1%		   1/ 4/  -2%/  +2%
	   1/ 8/  -1%/  -1%		   1/ 8/  -1%/   0%
	  64/ 1/  -2%/  -3%		  64/ 1/  +1%/  +1%
	  64/ 4/  -5%/  -1%		  64/ 4/  +2%/   0%
	  64/ 8/   0%/  -1%		  64/ 8/  -2%/  +1%
	 256/ 1/  +7%/  +1%		 256/ 1/  -7%/   0%
	 256/ 4/  +1%/  +1%		 256/ 4/  -3%/  -4%
	 256/ 8/  +2%/  +2%		 256/ 8/  +1%/  +1%

	vq size=256 TCP_STREAM		vq size=512 TCP_STREAM
	size/sessions/+thu%/+normalize%	size/sessions/+thu%/+normalize%
	  64/ 1/   0%/  -3%		  64/ 1/   0%/   0%
	  64/ 4/  +3%/  -1%		  64/ 4/  -2%/  +4%
	  64/ 8/  +9%/  -4%		  64/ 8/  -1%/  +2%
	 256/ 1/  +1%/  -4%		 256/ 1/  +1%/  +1%
	 256/ 4/  -1%/  -1%		 256/ 4/  -3%/   0%
	 256/ 8/  +7%/  +5%		 256/ 8/  -3%/   0%
	 512/ 1/  +1%/   0%		 512/ 1/  -1%/  -1%
	 512/ 4/  +1%/  -1%		 512/ 4/   0%/   0%
	 512/ 8/  +7%/  -5%		 512/ 8/  +6%/  -1%
	1024/ 1/   0%/  -1%		1024/ 1/   0%/  +1%
	1024/ 4/  +3%/   0%		1024/ 4/  +1%/   0%
	1024/ 8/  +8%/  +5%		1024/ 8/  -1%/   0%
	2048/ 1/  +2%/  +2%		2048/ 1/  -1%/   0%
	2048/ 4/  +1%/   0%		2048/ 4/   0%/  -1%
	2048/ 8/  -2%/   0%		2048/ 8/  +5%/  -1%
	4096/ 1/  -2%/   0%		4096/ 1/  -2%/   0%
	4096/ 4/  +2%/   0%		4096/ 4/   0%/   0%
	4096/ 8/  +9%/  -2%		4096/ 8/  -5%/  -1%

Acked-by:
Michael S. Tsirkin <mst@redhat.com> Signed-off-by:
Haibin Zhang <haibinzhang@tencent.com> Signed-off-by:
Yunfang Tai <yunfangtai@tencent.com> Signed-off-by:
Lidong Chen <lidongchen@tencent.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Borkmann authored
commit ede95a63 upstream. Rick reported that the BPF JIT could potentially fill the entire module space with BPF programs from unprivileged users, which would prevent later attempts to load normal kernel modules or privileged BPF programs, for example. If the JIT was enabled but failed to generate the image, then before commit 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config") we would always fall back to the BPF interpreter. Nowadays, in the case where CONFIG_BPF_JIT_ALWAYS_ON is set, the load will abort with a failure since the BPF interpreter was compiled out. Add a global limit and enforce it for unprivileged users, such that with the BPF interpreter compiled out we fail once the limit has been reached, or we fall back to the BPF interpreter earlier without using module memory if the latter was compiled in. In a next step, fair sharing among unprivileged users can be resolved, in particular for the case where we would fail hard once the limit is reached. Fixes: 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config") Fixes: 0a14842f ("net: filter: Just In Time compiler for x86-64") Co-Developed-by:
Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jann Horn <jannh@google.com> Cc: Kees Cook <keescook@chromium.org> Cc: LKML <linux-kernel@vger.kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
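Conceptually the limit is enforced like this (a hedged sketch of the upstream bpf_jit_charge_modmem() logic, details simplified):

	static atomic_long_t bpf_jit_current;

	static int bpf_jit_charge_modmem(u32 pages)
	{
		if (atomic_long_add_return(pages, &bpf_jit_current) >
		    (bpf_jit_limit >> PAGE_SHIFT)) {
			/* only unprivileged users are capped */
			if (!capable(CAP_SYS_ADMIN)) {
				atomic_long_sub(pages, &bpf_jit_current);
				return -EPERM;
			}
		}
		return 0;
	}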
-
Daniel Borkmann authored
commit 2e4a3098 upstream. Given BPF reaches far beyond just networking these days, it was never intended to allow setting and in some cases reading those knobs out of a user namespace root running without CAP_SYS_ADMIN, thus tighten such access. Also the bpf_jit_enable = 2 debugging mode should only be allowed if kptr_restrict is not set since it otherwise can leak addresses to the kernel log. Dump a note to the kernel log that this is for debugging JITs only when enabled. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> [bwh: Backported to 4.9: - We don't have bpf_dump_raw_ok(), so drop the condition based on it. This condition only made it a bit harder for a privileged user to do something silly. - Drop change to bpf_jit_kallsyms] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
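The gating pattern, sketched (hypothetical handler name; the real handler wraps proc_dointvec_minmax() in much the same way):

	static int bpf_sysctl_handler(struct ctl_table *table, int write,
				      void __user *buffer, size_t *lenp,
				      loff_t *ppos)
	{
		/* ns-root without CAP_SYS_ADMIN may not touch the knob */
		if (write && !capable(CAP_SYS_ADMIN))
			return -EPERM;
		return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	}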
-
Daniel Borkmann authored
commit fa9dd599 upstream. Having a pure_initcall() callback just to permanently enable BPF JITs under CONFIG_BPF_JIT_ALWAYS_ON is unnecessary and could leave a small race window in future where JIT is still disabled on boot. Since we know about the setting at compilation time anyway, just initialize it properly there. Also consolidate all the individual bpf_jit_enable variables into a single one and move them under one location. Moreover, don't allow for setting unspecified garbage values on them. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> [bwh: Backported to 4.9 as dependency of commit 2e4a3098 "bpf: restrict access to core bpf sysctls": - Drop change in arch/mips/net/ebpf_jit.c - Drop change to bpf_jit_kallsyms - Adjust filenames, context] Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Miles Chen authored
commit 54a83d6b upstream. This patch is sent to report a use-after-free in mem_cgroup_iter() after merging commit be265775 ("mm: memcg: fix use after free in mem_cgroup_iter()"). I work with the android kernel tree (4.9 & 4.14), and commit be265775 ("mm: memcg: fix use after free in mem_cgroup_iter()") has been merged to the trees. However, I can still observe use-after-free issues addressed in commit be265775. (on low-end devices, a few times this month)

backtrace:
	css_tryget	<- crash here
	mem_cgroup_iter
	shrink_node
	shrink_zones
	do_try_to_free_pages
	try_to_free_pages
	__perform_reclaim
	__alloc_pages_direct_reclaim
	__alloc_pages_slowpath
	__alloc_pages_nodemask

To debug, I poisoned mem_cgroup before freeing it:

	static void __mem_cgroup_free(struct mem_cgroup *memcg)
	{
		for_each_node(node)
			free_mem_cgroup_per_node_info(memcg, node);
		free_percpu(memcg->stat);
	+	/* poison memcg before freeing it */
	+	memset(memcg, 0x78, sizeof(struct mem_cgroup));
		kfree(memcg);
	}

The coredump shows the position=0xdbbc2a00 is freed.

	(gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8]
	$13 = {position = 0xdbbc2a00, generation = 0x2efd}

	0xdbbc2a00:	0xdbbc2e00	0x00000000	0xdbbc2800	0x00000100
	0xdbbc2a10:	0x00000200	0x78787878	0x00026218	0x00000000
	0xdbbc2a20:	0xdcad6000	0x00000001	0x78787800	0x00000000
	0xdbbc2a30:	0x78780000	0x00000000	0x0068fb84	0x78787878
	0xdbbc2a40:	0x78787878	0x78787878	0x78787878	0xe3fa5cc0
	0xdbbc2a50:	0x78787878	0x78787878	0x00000000	0x00000000
	0xdbbc2a60:	0x00000000	0x00000000	0x00000000	0x00000000
	0xdbbc2a70:	0x00000000	0x00000000	0x00000000	0x00000000
	0xdbbc2a80:	0x00000000	0x00000000	0x00000000	0x00000000
	0xdbbc2a90:	0x00000001	0x00000000	0x00000000	0x00100000
	0xdbbc2aa0:	0x00000001	0xdbbc2ac8	0x00000000	0x00000000
	0xdbbc2ab0:	0x00000000	0x00000000	0x00000000	0x00000000
	0xdbbc2ac0:	0x00000000	0x00000000	0xe5b02618	0x00001000
	0xdbbc2ad0:	0x00000000	0x78787878	0x78787878	0x78787878
	0xdbbc2ae0:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2af0:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b00:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b10:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b20:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b30:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b40:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b50:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b60:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b70:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2b80:	0x78787878	0x78787878	0x00000000	0x78787878
	0xdbbc2b90:	0x78787878	0x78787878	0x78787878	0x78787878
	0xdbbc2ba0:	0x78787878	0x78787878	0x78787878	0x78787878

In the reclaim path, try_to_free_pages() does not set up sc.target_mem_cgroup and sc is passed to do_try_to_free_pages(), ..., shrink_node(). In mem_cgroup_iter(), root is set to root_mem_cgroup because sc->target_mem_cgroup is NULL. It is possible to assign a memcg to root_mem_cgroup.nodeinfo.iter in mem_cgroup_iter().

	try_to_free_pages
		struct scan_control sc = {...}, target_mem_cgroup is 0x0;
	do_try_to_free_pages
	shrink_zones
	shrink_node
		mem_cgroup *root = sc->target_mem_cgroup;
		memcg = mem_cgroup_iter(root, NULL, &reclaim);
	mem_cgroup_iter()
		if (!root)
			root = root_mem_cgroup;
		...
		css = css_next_descendant_pre(css, &root->css);
		memcg = mem_cgroup_from_css(css);
		cmpxchg(&iter->position, pos, memcg);

My device uses memcg non-hierarchical mode. When we release a memcg, invalidate_reclaim_iterators() reaches only dead_memcg and its parents. If non-hierarchical mode is used, invalidate_reclaim_iterators() never reaches root_mem_cgroup.

	static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
	{
		struct mem_cgroup *memcg = dead_memcg;

		for (; memcg; memcg = parent_mem_cgroup(memcg))
		...
	}

So the use-after-free scenario looks like:

	CPU1						CPU2
	try_to_free_pages
	do_try_to_free_pages
	shrink_zones
	shrink_node
	mem_cgroup_iter()
	    if (!root)
		root = root_mem_cgroup;
	    ...
	    css = css_next_descendant_pre(css, &root->css);
	    memcg = mem_cgroup_from_css(css);
	    cmpxchg(&iter->position, pos, memcg);
						invalidate_reclaim_iterators(memcg);
						...
						__mem_cgroup_free()
							kfree(memcg);
	try_to_free_pages
	do_try_to_free_pages
	shrink_zones
	shrink_node
	mem_cgroup_iter()
	    if (!root)
		root = root_mem_cgroup;
	    ...
	    mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id);
	    iter = &mz->iter[reclaim->priority];
	    pos = READ_ONCE(iter->position);
	    css_tryget(&pos->css)	<- use after free

To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter in invalidate_reclaim_iterators(). [cai@lca.pw: fix -Wparentheses compilation warning] Link: http://lkml.kernel.org/r/1564580753-17531-1-git-send-email-cai@lca.pw Link: http://lkml.kernel.org/r/20190730015729.4406-1-miles.chen@mediatek.com Fixes: 5ac8fb31 ("mm: memcontrol: convert reclaim iterator to simple css refcounting") Signed-off-by:
Miles Chen <miles.chen@mediatek.com> Signed-off-by:
Qian Cai <cai@lca.pw> Acked-by:
Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
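The resulting fix, roughly (a hedged sketch of the upstream loop): walk dead_memcg's parents as before, then explicitly invalidate root_mem_cgroup's iterators if the walk never reached it:

	static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
	{
		struct mem_cgroup *memcg = dead_memcg;
		struct mem_cgroup *last;

		do {
			__invalidate_reclaim_iterators(memcg, dead_memcg);
			last = memcg;
		} while ((memcg = parent_mem_cgroup(memcg)));

		/* in non-hierarchical mode the walk stops short of the root */
		if (last != root_mem_cgroup)
			__invalidate_reclaim_iterators(root_mem_cgroup,
						       dead_memcg);
	}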
-
Isaac J. Manjarres authored
commit 95153169 upstream. Currently, when checking to see if accessing n bytes starting at address "ptr" will cause a wraparound in the memory addresses, the check in check_bogus_address() adds an extra byte, which is incorrect, as the range of addresses that will be accessed is [ptr, ptr + (n - 1)]. This can lead to incorrectly detecting a wraparound in the memory address, when trying to read 4 KB from memory that is mapped to the last possible page in the virtual address space, when in fact, accessing that range of memory would not cause a wraparound to occur. Use the memory range that will actually be accessed when considering if accessing a certain amount of bytes will cause the memory address to wrap around. Link: http://lkml.kernel.org/r/1564509253-23287-1-git-send-email-isaacm@codeaurora.org Fixes: f5509cc1 ("mm: Hardened usercopy") Signed-off-by:
Prasad Sodagudi <psodagud@codeaurora.org> Signed-off-by:
Isaac J. Manjarres <isaacm@codeaurora.org> Co-developed-by:
Prasad Sodagudi <psodagud@codeaurora.org> Reviewed-by:
William Kucharski <william.kucharski@oracle.com> Acked-by:
Kees Cook <keescook@chromium.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Trilok Soni <tsoni@codeaurora.org> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> [kees: backport to v4.9] Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
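The off-by-one in miniature (a sketch): the old check tested one byte past the range actually touched, so a buffer ending exactly at the top of the address space was misdetected:

	/* before: false positive when ptr + n wraps to exactly 0 */
	if ((unsigned long)ptr + n < (unsigned long)ptr)
		return "<wrapped address>";

	/* after: the accessed range is [ptr, ptr + (n - 1)] */
	if ((unsigned long)ptr + (n - 1) < (unsigned long)ptr)
		return "<wrapped address>";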
-
Gustavo A. R. Silva authored
commit 1ee1119d upstream. Add missing break statement in order to prevent the code from falling through to case SH_BREAKPOINT_WRITE. Fixes: 09a07294 ("sh: hw-breakpoints: Add preliminary support for SH-4A UBC.") Cc: stable@vger.kernel.org Reviewed-by:
Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by:
Guenter Roeck <linux@roeck-us.net> Tested-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
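The bug class, sketched (not the exact UBC code): without the break, a read breakpoint falls through and is also handled as a write:

	switch (sh_type) {
	case SH_BREAKPOINT_READ:
		*gen_type = HW_BREAKPOINT_R;
		break;		/* previously missing: fell through to WRITE */
	case SH_BREAKPOINT_WRITE:
		*gen_type = HW_BREAKPOINT_W;
		break;
	default:
		return -EINVAL;
	}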
-
Suganath Prabu authored
commit df9a6061 upstream. Although SAS3 & SAS3.5 IT HBA controllers support 64-bit DMA addressing, as per the hardware design, if a DMA-able range contains all 64 bits set (0xFFFFFFFF-FFFFFFFF) then it results in a firmware fault. E.g. if an SGE's start address is 0xFFFFFFFF-FFFFF000 and the data length is 0x1000 bytes, then when the HBA tries to DMA the data at location 0xFFFFFFFF-FFFFFFFF, the HBA will fault the firmware. The driver will set a 63-bit DMA mask to ensure the above address will not be used. Cc: <stable@vger.kernel.org> # 5.1.20+ Signed-off-by:
Suganath Prabu <suganath-prabu.subramani@broadcom.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
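The mitigation boils down to shrinking the DMA mask by one bit so the all-ones address can never be generated; a hedged sketch using the generic DMA API:

	/* 63-bit instead of 64-bit keeps 0xFFFFFFFF-FFFFFFFF out of reach */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(63)) &&
	    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return -ENODEV;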
-
Emmanuel Grumbach authored
commit 87e7e25a upstream. In order to remember how to unmap memory (as a single mapping or as a page), we maintain a bitmap in the meta data (structure iwl_cmd_meta): one bit per Transmit Buffer (TB). If the bit is set, we will free the memory as a page. This bitmap was never cleared. Fix this. Cc: stable@vger.kernel.org Fixes: 3cd1980b ("iwlwifi: pcie: introduce new tfd and tb formats") Signed-off-by:
Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Brian Norris authored
commit df612421 upstream. Commit 63d7ef36 ("mwifiex: Don't abort on small, spec-compliant vendor IEs") adjusted the ieee_types_vendor_header struct, which inadvertently messed up the offsets used in mwifiex_is_wpa_oui_present(). Add that offset back in, mirroring mwifiex_is_rsn_oui_present(). As it stands, commit 63d7ef36 breaks compatibility with WPA (not WPA2) 802.11n networks, since we hit the "info: Disable 11n if AES is not supported by AP" case in mwifiex_is_network_compatible(). Fixes: 63d7ef36 ("mwifiex: Don't abort on small, spec-compliant vendor IEs") Cc: <stable@vger.kernel.org> Signed-off-by:
Brian Norris <briannorris@chromium.org> Signed-off-by:
Kalle Valo <kvalo@codeaurora.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve French authored
commit 8d33096a upstream. We had a report of a server which did not do a DFS referral because the session setup Capabilities field was set to 0 (unlike negotiate protocol, where we set CAP_DFS). Better to send CAP_DFS in the session setup capabilities as well (this also more closely matches Windows client behavior). Signed-off-by:
Steve French <stfrench@microsoft.com> Reviewed-by:
Ronnie Sahlberg <lsahlber@redhat.com> Reviewed-by:
Pavel Shilovsky <pshilov@microsoft.com> CC: Stable <stable@vger.kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pavel Shilovsky authored
commit e99c63e4 upstream. Currently we skip the SMB2_TREE_CONNECT command when checking during reconnect, because Tree Connect happens when establishing an SMB session. For the SMB 3.0 protocol version the code also calls validate negotiate, which results in an SMB2_IOCTL command being sent over the wire. This may deadlock on trying to acquire a mutex when checking for reconnect. Fix this by skipping the SMB2_IOCTL command when doing the reconnect check. Signed-off-by:
Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by:
Steve French <stfrench@microsoft.com> Reviewed-by:
Ronnie Sahlberg <lsahlber@redhat.com> CC: Stable <stable@vger.kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Brian Norris authored
commit 05aaa5c9 upstream. In a very similar spirit to commit c470bdc1 ("mac80211: don't WARN on bad WMM parameters from buggy APs"), an AP may not transmit a fully-formed WMM IE. For example, it may miss or repeat an Access Category. The above loop won't catch that and will instead leave one of the four ACs zeroed out. This triggers the following warning in drv_conf_tx() wlan0: invalid CW_min/CW_max: 0/0 and it may leave one of the hardware queues unconfigured. If we detect such a case, let's just print a warning and fall back to the defaults. Tested with a hacked version of hostapd, intentionally corrupting the IEs in hostapd_eid_wmm(). Cc: stable@vger.kernel.org Signed-off-by:
Brian Norris <briannorris@chromium.org> Link: https://lore.kernel.org/r/20190726224758.210953-1-briannorris@chromium.org Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit c1c6c877 upstream. The commit bfcba288 ("ALSA - hda: Add support for link audio time reporting") introduced the conditional PCM hw info setup, but it overwrites the global azx_pcm_hw object. This will cause a problem if any other HD-audio controller is present, as it'll inherit the same bit flag although the other controller doesn't support that feature. Fix the bug by setting the PCM hw info flag locally. Fixes: bfcba288 ("ALSA - hda: Add support for link audio time reporting") Cc: <stable@vger.kernel.org> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
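The fix pattern, sketched: tweak a per-substream copy instead of the shared global object (flag name as in the original feature commit):

	/* copy the template, then modify only this stream's instance */
	runtime->hw = azx_pcm_hw;
	if (chip->gts_present)
		runtime->hw.info |= SNDRV_PCM_INFO_HAS_LINK_SYNCHRONIZED_ATIME;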
-
Wenwen Wang authored
commit 1be3c1fa upstream. In iso_packets_buffer_init(), 'b->packets' is allocated through kmalloc_array(). Then, the aligned packet size is checked. If it is larger than PAGE_SIZE, -EINVAL will be returned to indicate the error. However, the allocated 'b->packets' is not deallocated on this path, leading to a memory leak. To fix the above issue, free 'b->packets' before returning the error code. Fixes: 31ef9134 ("ALSA: add LaCie FireWire Speakers/Griffin FireWave Surround driver") Signed-off-by:
Wenwen Wang <wenwen@cs.uga.edu> Reviewed-by:
Takashi Sakamoto <o-takashi@sakamocchi.jp> Cc: <stable@vger.kernel.org> # v2.6.39+ Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guenter Roeck authored
commit 38ada2f4 upstream. The code to detect if in4 is present is wrong; if in4 is not present, the in4_input sysfs attribute is still present. In detail:
- When RTD3_MD=11 (VSEN3 present), everything is as expected (no bug).
- If we have RTD3_MD!=11 (no VSEN3), we unexpectedly have an in4_input file under /sys and the "sensors" command displays in4_input. But as expected, we have no in4_min, in4_max, in4_alarm, in4_beep.

Fix the is_visible function to detect and report in4_input visibility as expected. Reported-by:
Gilles Buloz <Gilles.Buloz@kontron.com> Cc: Gilles Buloz <Gilles.Buloz@kontron.com> Cc: stable@vger.kernel.org Fixes: 3434f378 ("hwmon: Driver for Nuvoton NCT7802Y") Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Bortoli authored
commit 30a8beeb upstream. Uninitialized Kernel memory can leak to USB devices. Fix by using kzalloc() instead of kmalloc() on the affected buffers. Signed-off-by:
Tomas Bortoli <tomasbortoli@gmail.com> Reported-by: syzbot+513e4d0985298538bf9b@syzkaller.appspotmail.com Fixes: 0a25e1f4 ("can: peak_usb: add support for PEAK new CANFD USB adapters") Cc: linux-stable <stable@vger.kernel.org> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Bortoli authored
commit ead16e53 upstream. Uninitialized Kernel memory can leak to USB devices. Fix by using kzalloc() instead of kmalloc() on the affected buffers. Signed-off-by:
Tomas Bortoli <tomasbortoli@gmail.com> Reported-by: syzbot+d6a5a1a3657b596ef132@syzkaller.appspotmail.com Fixes: f14e2243 ("net: can: peak_usb: Do not do dma on the stack") Cc: linux-stable <stable@vger.kernel.org> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
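This and the previous peak_usb fix are the same one-line pattern; a sketch (cmd_len stands in for the driver's actual buffer size):

	/* zeroed buffer: padding bytes never leak stale kernel memory to USB */
	buf = kzalloc(cmd_len, GFP_KERNEL);	/* was: kmalloc(cmd_len, ...) */
	if (!buf)
		return -ENOMEM;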
-
Leonard Crestez authored
[ Upstream commit 4ce54af8 ] Some hardware PMU drivers will override perf_event.cpu inside their event_init callback. This causes a lockdep splat when initialized through the kernel API:

	WARNING: CPU: 0 PID: 250 at kernel/events/core.c:2917 ctx_sched_out+0x78/0x208
	pc : ctx_sched_out+0x78/0x208
	Call trace:
	 ctx_sched_out+0x78/0x208
	 __perf_install_in_context+0x160/0x248
	 remote_function+0x58/0x68
	 generic_exec_single+0x100/0x180
	 smp_call_function_single+0x174/0x1b8
	 perf_install_in_context+0x178/0x188
	 perf_event_create_kernel_counter+0x118/0x160

Fix this by calling perf_install_in_context() with event->cpu, just like perf_event_open(). Signed-off-by:
Leonard Crestez <leonard.crestez@nxp.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Frank Li <Frank.li@nxp.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/r/c4ebe0503623066896d7046def4d6b1e06e0eb2e.1563972056.git.leonard.crestez@nxp.com Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
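The one-line fix, sketched from the description above (context simplified):

	/* event_init() may have overridden event->cpu, so honor it here,
	 * exactly as the perf_event_open() syscall path already does
	 */
	perf_install_in_context(ctx, event, event->cpu);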
-
Peter Zijlstra authored
[ Upstream commit 952041a8 ] While reviewing rwsem down_slowpath, Will noticed ldsem had a copy of a bug we just found for rwsem.

	X = 0;

	CPU0				CPU1

	rwsem_down_read()
	  for (;;) {
	    set_current_state(TASK_UNINTERRUPTIBLE);

					X = 1;
					rwsem_up_write();
					  rwsem_mark_wake()
					    atomic_long_add(adjustment, &sem->count);
					    smp_store_release(&waiter->task, NULL);

	    if (!waiter.task)
	      break;

	    ...
	  }

	r = X;

Allows 'r == 0'. Reported-by:
Will Deacon <will@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Will Deacon <will@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Hurley <peter@hurleysoftware.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 4898e640 ("tty: Add timed, writer-prioritized rw semaphore") Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
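The cure mirrors the rwsem one: make the read of waiter.task an ACQUIRE, so everything written before the waker's release store (X = 1 above) is visible once the loop breaks. A sketch:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		/* pairs with smp_store_release(&waiter->task, NULL) in the waker */
		if (!smp_load_acquire(&waiter.task))
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	/* reading X here now reliably observes X = 1 */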
-
Hannes Reinecke authored
[ Upstream commit 20122994 ] Retrying immediately after we've received a 'transitioning' sense code is pretty much pointless, we should always use a delay before retrying. So ensure the default delay is applied before retrying. Signed-off-by:
Hannes Reinecke <hare@suse.com> Tested-by:
Zhangguanghui <zhang.guanghui@h3c.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Tyrel Datwyler authored
[ Upstream commit 5578257c ] While removing an ibmvfc client adapter, a WARN_ON like the following is seen in the kernel log:

	WARNING: CPU: 6 PID: 5421 at ./include/linux/dma-mapping.h:541 ibmvfc_free_event_pool+0x12c/0x1f0 [ibmvfc]
	CPU: 6 PID: 5421 Comm: rmmod Tainted: G E 4.17.0-rc1-next-20180419-autotest #1
	NIP: d00000000290328c LR: d00000000290325c CTR: c00000000036ee20
	REGS: c000000288d1b7e0 TRAP: 0700 Tainted: G E (4.17.0-rc1-next-20180419-autotest)
	MSR: 800000010282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 44008828 XER: 20000000
	CFAR: c00000000036e408 SOFTE: 1
	GPR00: d00000000290325c c000000288d1ba60 d000000002917900 c000000289d75448
	GPR04: 0000000000000071 c0000000ff870000 0000000018040000 0000000000000001
	GPR08: 0000000000000000 c00000000156e838 0000000000000001 d00000000290c640
	GPR12: c00000000036ee20 c00000001ec4dc00 0000000000000000 0000000000000000
	GPR16: 0000000000000000 0000000000000000 00000100276901e0 0000000010020598
	GPR20: 0000000010020550 0000000010020538 0000000010020578 00000000100205b0
	GPR24: 0000000000000000 0000000000000000 0000000010020590 5deadbeef0000100
	GPR28: 5deadbeef0000200 d000000002910b00 0000000000000071 c0000002822f87d8
	NIP [d00000000290328c] ibmvfc_free_event_pool+0x12c/0x1f0 [ibmvfc]
	LR [d00000000290325c] ibmvfc_free_event_pool+0xfc/0x1f0 [ibmvfc]
	Call Trace:
	[c000000288d1ba60] [d00000000290325c] ibmvfc_free_event_pool+0xfc/0x1f0 [ibmvfc] (unreliable)
	[c000000288d1baf0] [d000000002909390] ibmvfc_abort_task_set+0x7b0/0x8b0 [ibmvfc]
	[c000000288d1bb70] [c0000000000d8c68] vio_bus_remove+0x68/0x100
	[c000000288d1bbb0] [c0000000007da7c4] device_release_driver_internal+0x1f4/0x2d0
	[c000000288d1bc00] [c0000000007da95c] driver_detach+0x7c/0x100
	[c000000288d1bc40] [c0000000007d8af4] bus_remove_driver+0x84/0x140
	[c000000288d1bcb0] [c0000000007db6ac] driver_unregister+0x4c/0xa0
	[c000000288d1bd20] [c0000000000d6e7c] vio_unregister_driver+0x2c/0x50
	[c000000288d1bd50] [d00000000290ba0c] cleanup_module+0x24/0x15e0 [ibmvfc]
	[c000000288d1bd70] [c0000000001dadb0] sys_delete_module+0x220/0x2d0
	[c000000288d1be30] [c00000000000b284] system_call+0x58/0x6c
	Instruction dump:
	e8410018 e87f0068 809f0078 e8bf0080 e8df0088 2fa30000 419e008c e9230200
	2fa90000 419e0080 894d098a 794a07e0 <0b0a0000> e9290008 2fa90000 419e0028

This is tripped as a result of irqs being disabled during the call to dma_free_coherent() by ibmvfc_free_event_pool(). At this point in the code path we have quiesced the adapter, and it's overly paranoid anyway to be holding the host lock. Reported-by:
Abdul Haleem <abdhalee@linux.vnet.ibm.com> Signed-off-by:
Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Junxiao Bi authored
[ Upstream commit 3b5f307e ] While loading the fw crashdump in fw_crash_buffer_show(), the bytes left in one DMA chunk were not checked; if the copy size exceeded them, the resulting out-of-bounds access would cause a kernel panic. Signed-off-by:
Junxiao Bi <junxiao.bi@oracle.com> Acked-by:
Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Arnd Bergmann authored
[ Upstream commit d64b212e ] When building a multiplatform kernel that includes armv4 support, the default target CPU does not support the blx instruction, which leads to a build failure:

	arch/arm/mach-davinci/sleep.S: Assembler messages:
	arch/arm/mach-davinci/sleep.S:56: Error: selected processor does not support `blx ip' in ARM mode

Add a .arch statement in the sources to make this file build. Link: https://lore.kernel.org/r/20190722145211.1154785-1-arnd@arndb.de Acked-by:
Sekhar Nori <nsekhar@ti.com> Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Olof Johansson <olof@lixom.net> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Lorenzo Pieralisi authored
[ Upstream commit 5a46d3f7 ] Static analysis identified that index comparison against ITS entries in iort_dev_find_its_id() is off by one. Update the comparison condition and clarify the resulting error message. Fixes: 4bf2efd2 ("ACPI: Add new IORT functions to support MSI domain handling") Link: https://lore.kernel.org/linux-arm-kernel/20190613065410.GB16334@mwanda/ Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Will Deacon <will@kernel.org> Cc: Hanjun Guo <guohanjun@huawei.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by:
Will Deacon <will@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
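The off-by-one in miniature (a sketch; its_count is the number of valid entries, so an index equal to it is already out of range):

	if (idx >= its->its_count) {	/* was: idx > its->its_count */
		pr_err("requested ITS ID index [%d] overruns ITS entries\n",
		       idx);
		return -ENXIO;
	}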
-
Arnd Bergmann authored
[ Upstream commit 77ce56e2 ] Building with clang and KASAN, we get a warning about an overly large stack frame on 32-bit architectures:

	drivers/block/drbd/drbd_receiver.c:921:31: error: stack frame size of 1280 bytes in function 'conn_connect' [-Werror,-Wframe-larger-than=]

We already allocate other data dynamically in this function, so just do the same for the shash descriptor, which makes up most of this memory. Link: https://lore.kernel.org/lkml/20190617132440.2721536-1-arnd@arndb.de/ Reviewed-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Roland Kammerer <roland.kammerer@linbit.com> Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Sasha Levin <sashal@kernel.org>
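The pattern being applied, sketched against the generic crypto API: allocate the descriptor from the heap instead of SHASH_DESC_ON_STACK, whose size scales with the algorithm and balloons under KASAN:

	struct shash_desc *desc;

	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
		       GFP_KERNEL);
	if (!desc)
		return -ENOMEM;
	desc->tfm = tfm;
	desc->flags = 0;	/* the 4.9-era shash_desc still has .flags */
	/* ... crypto_shash_init()/update()/final() as before ... */
	kfree(desc);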
-
Arnaldo Carvalho de Melo authored
[ Upstream commit d95daf5a ] When perf_add_probe_events() fails, we call cleanup_perf_probe_events() for the pev pointer it receives; then, as part of handling this failure, the main 'perf probe' goes on and calls cleanup_params(), and that will again call cleanup_perf_probe_events() for the same pointer. So just set nevents to zero when handling the failure of perf_add_probe_events() to avoid the double free. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-x8qgma4g813z96dvtw9w219q@git.kernel.org Signed-off-by:
Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Charles Keepax authored
[ Upstream commit 3b817994 ] Draining makes little sense in the situation of hardware overrun, as the hardware will have consumed all its available samples. Additionally, draining whilst the stream is paused would presumably get stuck as no data is being consumed on the DSP side. Signed-off-by:
Charles Keepax <ckeepax@opensource.cirrus.com> Acked-by:
Vinod Koul <vkoul@kernel.org> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Charles Keepax authored
[ Upstream commit a70ab8a8 ] Partial drain and next track are intended for gapless playback and don't really have an obvious interpretation for a capture stream, so it makes sense to not allow those operations on capture streams. Signed-off-by:
Charles Keepax <ckeepax@opensource.cirrus.com> Acked-by:
Vinod Koul <vkoul@kernel.org> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-