- 14 May, 2017 4 commits
-
-
Jarkko Sakkinen authored
commit 7d761119 upstream. The error code handling is broken as any error code that has the same bits set as TPM_RC_HASH passes. Implemented tpm2_rc_value() helper to parse the error value from FMT0 and FMT1 error codes so that these types of mistakes are prevented in the future. Fixes: 5ca4c20c ("keys, trusted: select hash algorithm for TPM2 chips") Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
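The failure mode is easier to see with a toy check. The sketch below is plain userspace C, not the driver code; the mask inside tpm2_rc_value() is an assumption based on the TPM 2.0 format-one layout (bits 0-5 error number, bit 6 parameter flag, bit 7 format selector, bits 8-11 handle/parameter index). It shows why a bitwise AND against TPM_RC_HASH accepts unrelated error codes, while an exact compare on the extracted base error does not:

  /* Sketch: broken bitwise test vs. exact compare on the extracted
   * base error. Constants follow the TPM 2.0 layout; illustrative only. */
  #include <stdio.h>

  #define TPM2_RC_HASH    0x083   /* format-one base error 0x03 */
  #define TPM2_RC_FAILURE 0x101   /* unrelated format-zero error */

  /* Hypothetical helper: strip the index/P bits from format-one codes,
   * keeping the error number (bits 0-5) and the format bit (bit 7). */
  static unsigned int tpm2_rc_value(unsigned int rc)
  {
      return (rc & 0x80) ? (rc & 0xbf) : rc;
  }

  int main(void)
  {
      unsigned int codes[] = { TPM2_RC_HASH,
                               0x1c3,            /* hash error, parameter 1 */
                               TPM2_RC_FAILURE };

      for (unsigned int i = 0; i < sizeof(codes) / sizeof(codes[0]); i++) {
          unsigned int rc = codes[i];

          /* Broken: any rc sharing a bit with TPM2_RC_HASH "matches". */
          int broken = (rc & TPM2_RC_HASH) != 0;
          /* Fixed: exact compare on the extracted base error. */
          int fixed = tpm2_rc_value(rc) == TPM2_RC_HASH;

          printf("rc=0x%03x  broken-check=%d  fixed-check=%d\n",
                 rc, broken, fixed);
      }
      return 0;
  }

The unrelated code 0x101 passes the bitwise test but is correctly rejected once the value is normalised first.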
-
Guenter Roeck authored
commit d66777ca upstream. pwm4 is enabled if bit 2 of GPIO control register 4 is disabled, not when it is enabled. Since the check is for the skip condition, it is reversed. This applies to both IT8620 and IT8628. Fixes: 36c4d98a ("hwmon: (it87) Add support for all pwm channels ...") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
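For illustration only (the register name and bit come from the text above, everything else is invented): this is the difference between skipping pwm4 when the bit is clear (the reversed check) and skipping it only when the bit is set:

  /* Minimal sketch: a skip test for an "enable" bit with inverted
   * polarity (feature enabled when the bit is CLEAR). Not driver code. */
  #include <stdio.h>

  #define GPIO_CTRL4_BIT2 (1u << 2)

  int main(void)
  {
      unsigned int reg = 0x00;   /* bit 2 clear => pwm4 is actually enabled */

      int skip_wrong = !(reg & GPIO_CTRL4_BIT2);  /* skips exactly when it shouldn't */
      int skip_right =  (reg & GPIO_CTRL4_BIT2) != 0;  /* skip only when bit 2 is set */

      printf("bit2=%u  wrong-skip=%d  right-skip=%d\n",
             !!(reg & GPIO_CTRL4_BIT2), skip_wrong, skip_right);
      return 0;
  }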
-
Vincent Abriou authored
commit 2f410f88 upstream. On stih407-410 chip family the GDP layers are able to support up to UHD resolution (3840 x 2160). Signed-off-by: Vincent Abriou <vincent.abriou@st.com> Acked-by: Lee Jones <lee.jones@linaro.org> Tested-by: Lee Jones <lee.jones@linaro.org> Link: http://patchwork.freedesktop.org/patch/msgid/1490280292-30466-1-git-send-email-vincent.abriou@st.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Cong Wang authored
commit b5c66bab upstream. posix_acl_update_mode() could possibly clear 'acl'; if so, we leak the memory pointed to by 'acl'. Save this pointer before calling posix_acl_update_mode() and release the memory if 'acl' really gets cleared. Link: http://lkml.kernel.org/r/1486678332-2430-1-git-send-email-xiyou.wangcong@gmail.com Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Reported-by: Mark Salyzyn <salyzyn@android.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Greg Kurz <groug@kaod.org> Cc: Eric Van Hensbergen <ericvh@gmail.com> Cc: Ron Minnich <rminnich@sandia.gov> Cc: Latchesar Ionkov <lucho@ionkov.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
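A minimal sketch of the leak pattern and the fix, using a stand-in for posix_acl_update_mode() (the helper, types and the condition inside it are hypothetical): when a callee may clear the caller's pointer, the caller keeps its own copy so the object can still be released:

  /* Userspace sketch of a "callee may NULL out the pointer" leak.
   * update_mode() stands in for posix_acl_update_mode(); illustrative only. */
  #include <stdio.h>
  #include <stdlib.h>

  struct acl { int refcount; };

  /* May decide the ACL is redundant and clear *aclp without freeing it. */
  static int update_mode(unsigned int *mode, struct acl **aclp)
  {
      if ((*mode & 0070) == 0) {   /* arbitrary condition for the sketch */
          *aclp = NULL;            /* caller's pointer is gone... */
          return 0;                /* ...but the object still exists */
      }
      return 0;
  }

  int main(void)
  {
      unsigned int mode = 0600;
      struct acl *acl = malloc(sizeof(*acl));
      struct acl *old_acl = acl;       /* the fix: remember the pointer */

      if (update_mode(&mode, &acl))
          return 1;
      if (!acl)                        /* cleared by the callee */
          free(old_acl);               /* ...so release our saved copy */
      else
          free(acl);
      return 0;
  }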
-
- 08 May, 2017 10 commits
-
-
Greg Kroah-Hartman authored
-
Adrian Salido authored
commit 4617f564 upstream. When calling a dm ioctl that doesn't process any data (IOCTL_FLAGS_NO_PARAMS), the contents of the data field in struct dm_ioctl are left uninitialized. The current code incorrectly extends the size of the data copied back to user space, causing the contents of the kernel stack to be leaked to user space. Fix this by copying only the contents before data and allowing the functions processing the ioctl to override. Signed-off-by: Adrian Salido <salidoa@google.com> Reviewed-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
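A sketch of the general pattern (the struct layout and names below are invented, not the dm_ioctl ABI): copy back only the bytes up to the start of the variable-length part unless the handler actually produced data:

  /* Userspace sketch: "copy only the header, not the uninitialized tail"
   * when returning an ioctl-style struct. Names are illustrative. */
  #include <stdio.h>
  #include <stddef.h>

  struct fake_ioctl {
      unsigned int version;
      unsigned int data_size;   /* how much of data[] is valid */
      char data[128];           /* may be left uninitialized */
  };

  static size_t bytes_to_copy(const struct fake_ioctl *p)
  {
      /* Copy the fixed header; include data[] only if the handler
       * actually filled it in and said so via data_size. */
      size_t n = offsetof(struct fake_ioctl, data);

      if (p->data_size)
          n += p->data_size < sizeof(p->data) ? p->data_size
                                              : sizeof(p->data);
      return n;
  }

  int main(void)
  {
      struct fake_ioctl out;       /* deliberately not zeroed */

      out.version = 1;
      out.data_size = 0;           /* no payload produced */
      printf("copy %zu of %zu bytes back to the caller\n",
             bytes_to_copy(&out), sizeof(out));
      return 0;
  }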
-
Sebastian Andrzej Siewior authored
commit dc434e05 upstream. The setup/remove_state/instance() functions in the hotplug core code are serialized against concurrent CPU hotplug, but unfortunately not serialized against themselves. As a consequence, a concurrent invocation of these functions results in corruption of the callback machinery because two instances try to invoke callbacks on remote CPUs at the same time. This results in missing callback invocations and initiator threads waiting forever on the completion. The obvious solution of replacing get_online_cpus() with cpu_hotplug_begin() is not possible because at least one callsite calls into these functions from a get_online_cpus() locked region. Extend the protection scope of the cpuhp_state_mutex from solely protecting the state arrays to cover the callback invocation machinery as well. Fixes: 5b7aa87e ("cpu/hotplug: Implement setup/removal interface") Reported-and-tested-by: Bart Van Assche <Bart.VanAssche@sandisk.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: hpa@zytor.com Cc: mingo@kernel.org Cc: akpm@linux-foundation.org Cc: torvalds@linux-foundation.org Link: http://lkml.kernel.org/r/20170314150645.g4tdyoszlcbajmna@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yan, Zheng authored
commit 2b1ac852 upstream. For readahead/fadvise cases, the caller of ceph_readpages does not hold the buffer capability. Pages can be added to the page cache while there is no buffer capability. This can cause data integrity issues. Signed-off-by: Yan, Zheng <zyan@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gabriel Krisman Bertazi authored
commit c130b666 upstream. Commit f209fa03 ("serial: 8250_pci: Detach low-level driver during PCI error recovery") introduces a potential use-after-free in case the pciserial_init_ports call in serial8250_io_resume fails, which may happen if a memory allocation fails or if the .init quirk fails for whatever reason. If this happens, a subsequent pci_get_drvdata will return a pointer to freed memory. This patch reworks the PCI recovery resume hook to restore the old priv structure in this case, which should be OK, since the ports were already detached. Such an error during recovery causes us to give up on the recovery. Fixes: f209fa03 ("serial: 8250_pci: Detach low-level driver during PCI error recovery") Reported-by: Michal Suchanek <msuchanek@suse.com> Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com> Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guenter Roeck authored
commit 8358378b upstream. IT8705F is known to respond on both SIO addresses. Registering it twice may result in system lockups. Reported-by: Russell King <linux@armlinux.org.uk> Fixes: e84bd953 ("hwmon: (it87) Add support for second Super-IO chip") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Cc: Jean Delvare <jdelvare@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stephen Hemminger authored
commit f1c635b4 upstream. Hyper-V host emulation of SCSI for the virtual DVD device reports SCSI version 0 (UNKNOWN) but is still capable of supporting REPORTLUN. Without this patch, a GEN2 Linux guest on Hyper-V will not boot 4.11 successfully with a virtual DVD-ROM device. What happens is that the SCSI scan process falls back to doing sequential probing by INQUIRY. But the storvsc driver has a previous workaround that masks/blocks all error reports from INQUIRY (or MODE_SENSE) commands. This workaround causes the scan to populate a full set of bogus LUNs on the target and then sends the kernel spinning off into a death spiral doing block reads on the non-existent LUNs. By setting the correct blacklist flags, the target with the DVD device is scanned with REPORTLUN and that works correctly. The patch needs to go into current 4.11; it is safe but not necessary in older kernels. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maciej S. Szmigiero authored
commit 1d70fe9d upstream. Since commit 1107d065 ("tpm_tis: Introduce intermediate layer for TPM access") the Atmel 3203 TPM on the ThinkPad X61S (TPM firmware version 13.9) no longer works. The initialization proceeds fine until we get and start using chip-reported timeouts - and the chip reports C and D timeouts of zero. It turns out that until commit 8e54caf4 ("tpm: Provide a generic means to override the chip returned timeouts") we had actually let default timeout values remain in this case, so let's bring back this behavior to make chips like the Atmel 3203 work again. Use the common code that was introduced by that commit so a warning is printed in this case and /sys/class/tpm/tpm*/timeouts correctly says the timeouts aren't chip-original. This is a backport of the original commit for the 4.9 kernel, with the renaming of the "TPM_TIS_ITPM_POSSIBLE" flag removed since it was only a cosmetic change and not part of the real bug fix. Fixes: 1107d065 ("tpm_tis: Introduce intermediate layer for TPM access") Cc: stable@vger.kernel.org Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
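A sketch of the fallback logic described above, with made-up constants and names (not the tpm_tis code): chip-reported timeouts are accepted only when non-zero, otherwise the defaults are kept and the override is flagged:

  /* Sketch: keep defaults for bogus (zero) chip-reported timeouts and
   * warn that the values are no longer chip-original. Illustrative only. */
  #include <stdio.h>
  #include <stdbool.h>

  #define NUM_TIMEOUTS 4

  int main(void)
  {
      unsigned int defaults[NUM_TIMEOUTS] = { 750, 2000, 750, 750 };  /* ms, made up */
      unsigned int chip[NUM_TIMEOUTS]     = { 750, 2000, 0, 0 };      /* C and D report zero */
      unsigned int timeouts[NUM_TIMEOUTS];
      bool adjusted = false;

      for (int i = 0; i < NUM_TIMEOUTS; i++) {
          if (chip[i] == 0) {          /* bogus chip value */
              timeouts[i] = defaults[i];
              adjusted = true;
          } else {
              timeouts[i] = chip[i];
          }
      }

      if (adjusted)
          fprintf(stderr, "warning: timeouts overridden, not chip-original\n");
      for (int i = 0; i < NUM_TIMEOUTS; i++)
          printf("timeout %c = %u ms\n", 'A' + i, timeouts[i]);
      return 0;
  }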
-
Sachin Prabhu authored
commit 38bd4906 upstream. A signal can interrupt a SendReceive call, which results in incoming responses to the call being ignored. This is a problem for calls such as open, where ignoring the successful response leaves an open file resource on the server. The patch looks into responses which were cancelled after being sent and, in case of a successful open, closes the open fids. For this patch, the check is only done in SendReceive2(). RH-bz: 1403319 Signed-off-by: Sachin Prabhu <sprabhu@redhat.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com> Acked-by: Sachin Prabhu <sprabhu@redhat.com> Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 1e38da30 upstream. The handling of the might_cancel queueing is not properly protected, so parallel operations on the file descriptor can race with each other and lead to list corruptions or use after free. Protect the context for these operations with a separate lock. The wait queue lock cannot be reused for this because that would create a lock inversion scenario vs. the cancel lock. Replacing might_cancel with an atomic (atomic_t or atomic bit) does not help either because it still can race vs. the actual list operation. Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "linux-fsdevel@vger.kernel.org" Cc: syzkaller <syzkaller@googlegroups.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: linux-fsdevel@vger.kernel.org Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701311521430.3457@nanos Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 03 May, 2017 26 commits
-
-
Greg Kroah-Hartman authored
-
Josh Poimboeuf authored
commit 34a477e5 upstream. On x86-32, with CONFIG_FIRMWARE and multiple CPUs, if you enable function graph tracing and then suspend to RAM, it will triple fault and reboot when it resumes. The first fault happens when booting a secondary CPU:

  startup_32_smp()
    load_ucode_ap()
      prepare_ftrace_return()
        ftrace_graph_is_dead()
          (accesses 'kill_ftrace_graph')

The early head_32.S code calls into load_ucode_ap(), which has an ftrace hook, so it calls prepare_ftrace_return(), which calls ftrace_graph_is_dead(), which tries to access the global 'kill_ftrace_graph' variable with a virtual address, causing a fault because the CPU is still in real mode. The fix is to add a check in prepare_ftrace_return() to make sure it's running in protected mode before continuing. The check makes sure the stack pointer is a virtual kernel address. It's a bit of a hack, but it's not very intrusive and it works well enough. For reference, here are a few other (more difficult) ways this could have potentially been fixed:

- Move startup_32_smp()'s call to load_ucode_ap() down to *after* paging is enabled. (No idea what that would break.)
- Track down load_ucode_ap()'s entire callee tree and mark all the functions 'notrace'. (Probably not realistic.)
- Pause graph tracing in ftrace_suspend_notifier_call() or bringup_cpu() or __cpu_up(), and ensure that the pause facility can be queried from real mode.

Reported-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net> Cc: linux-acpi@vger.kernel.org Cc: Borislav Petkov <bp@alien8.de> Cc: Len Brown <lenb@kernel.org> Link: http://lkml.kernel.org/r/5c1272269a580660703ed2eccf44308e790c7a98.1492123841.git.jpoimboe@redhat.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
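A rough sketch of the kind of guard described ("make sure the stack pointer is a virtual kernel address"); the boundary constant and the use of __builtin_frame_address() below are illustrative assumptions, not the exact fix:

  /* Userspace sketch: compare the current frame address against a
   * made-up kernel virtual address boundary, the way an early-boot
   * guard might bail out while paging into kernel space is unusable.
   * In an ordinary process the frame is a user address, so the check
   * reports "not a kernel address". */
  #include <stdio.h>
  #include <stdint.h>

  #define FAKE_KERNEL_BOUNDARY 0xffff800000000000ULL  /* illustrative, not the real PAGE_OFFSET */

  static int looks_like_kernel_address(const void *p)
  {
      return (unsigned long long)(uintptr_t)p >= FAKE_KERNEL_BOUNDARY;
  }

  int main(void)
  {
      void *frame = __builtin_frame_address(0);   /* GCC/Clang builtin */

      if (!looks_like_kernel_address(frame))
          printf("frame %p: not a kernel address, would bail out early\n", frame);
      else
          printf("frame %p: looks like a kernel address, would continue\n", frame);
      return 0;
  }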
-
Vineet Gupta authored
commit ecd43afd upstream. This is not exposed to userspace debuggers yet, which can be done independently as a separate patch! Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maksim Salau authored
commit b05c73bd upstream. Allocate buffers on HEAP instead of STACK for local structures that are to be sent using usb_control_msg(). Signed-off-by: Maksim Salau <maksim.salau@gmail.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jason A. Donenfeld authored
commit 4d6fa57b upstream. While this may appear as a humdrum one line change, it's actually quite important. An sk_buff stores data in three places:

1. A linear chunk of allocated memory in skb->data. This is the easiest one to work with, but it precludes using scatterdata since the memory must be linear.
2. The array skb_shinfo(skb)->frags, which is of maximum length MAX_SKB_FRAGS. This is nice for scattergather, since these fragments can point to different pages.
3. skb_shinfo(skb)->frag_list, which is a pointer to another sk_buff, which in turn can have data in either (1) or (2).

The first two are rather easy to deal with, since they're of a fixed maximum length, while the third one is not, since there can be potentially limitless chains of fragments. Fortunately dealing with frag_list is opt-in for drivers, so drivers don't actually have to deal with this mess. For whatever reason, macsec decided it wanted pain, and so it explicitly specified NETIF_F_FRAGLIST. Because dealing with (1), (2), and (3) is insane, most users of sk_buff doing any sort of crypto or paging operation call a convenient function called skb_to_sgvec (which happens to be recursive if (3) is in use!). This takes an sk_buff as input, and writes into its output pointer an array of scattergather list items. Sometimes people like to declare a fixed size scattergather list on the stack; other times people like to allocate a fixed size scattergather list on the heap. However, if you're doing it in a fixed-size fashion, you really shouldn't be using NETIF_F_FRAGLIST too (unless you're also ensuring the sk_buff and its frag_list children aren't shared and then you check the number of fragments in total required). Macsec specifically does this:

  size += sizeof(struct scatterlist) * (MAX_SKB_FRAGS + 1);
  tmp = kmalloc(size, GFP_ATOMIC);
  *sg = (struct scatterlist *)(tmp + sg_offset);
  ...
  sg_init_table(sg, MAX_SKB_FRAGS + 1);
  skb_to_sgvec(skb, sg, 0, skb->len);

Specifying MAX_SKB_FRAGS + 1 is the right answer usually, but not if you're using NETIF_F_FRAGLIST, in which case the call to skb_to_sgvec will overflow the heap, and disaster ensues. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yan, Zheng authored
commit 8179a101 upstream. ceph_set_acl() calls __ceph_setattr() if the setacl operation needs to modify inode's i_mode. __ceph_setattr() updates inode's i_mode, then calls posix_acl_chmod(). The problem is that __ceph_setattr() calls posix_acl_chmod() before sending the setattr request. The get_acl() call in posix_acl_chmod() can trigger a getxattr request. The reply of the getxattr request can restore inode's i_mode to its old value. The set_acl() call in posix_acl_chmod() sees old value of inode's i_mode, so it calls __ceph_setattr() again. Link: http://tracker.ceph.com/issues/19688 Reported-by: Jerry Lee <leisurelysw24@gmail.com> Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Tested-by: Luis Henriques <lhenriques@suse.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit 13bf9fbf upstream. The NFSv2/v3 code does not systematically check whether we decode past the end of the buffer. This generally appears to be harmless, but there are a few places where we do arithmetic on the pointers involved and don't account for the possibility that a length could be negative. Add checks to catch these. Reported-by: Tuomas Haanpää <thaan@synopsys.com> Reported-by: Ari Kauppi <ari@synopsys.com> Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit db44bac4 upstream. Use a couple shortcuts that will simplify a following bugfix. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit e6838a29 upstream. A client can append random data to the end of an NFSv2 or NFSv3 RPC call without our complaining; we'll just stop parsing at the end of the expected data and ignore the rest. Encoded arguments and replies are stored together in an array of pages, and if a call is too large it could leave inadequate space for the reply. This is normally OK because NFS RPCs typically have either short arguments and long replies (like READ) or long arguments and short replies (like WRITE). But a client that sends an incorrectly long call can violate those assumptions. This was observed to cause crashes. Also, several operations increment rq_next_page in the decode routine before checking the argument size, which can leave rq_next_page pointing well past the end of the page array, causing trouble later in svc_free_pages. So, following a suggestion from Neil Brown, add a central check to enforce our expectation that no NFSv2/v3 call has both a large call and a large reply. As a follow-up we may also want to rewrite the encoding routines to check more carefully that they aren't running off the end of the page array. We may also consider rejecting calls that have any extra garbage appended. That would be safer, and within our rights by spec, but given the age of our server and the NFS protocol, and the fact that we've never enforced this before, we may need to balance that against the possibility of breaking some oddball client. Reported-by: Tuomas Haanpää <thaan@synopsys.com> Reported-by: Ari Kauppi <ari@synopsys.com> Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
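A hedged sketch of such a central check: with call and reply sharing one fixed page array, reject any call whose decoded argument size would leave too little room for the maximum possible reply. All sizes and names below are invented:

  /* Sketch: shared buffer budget check for an RPC whose arguments and
   * reply live in the same fixed page array. Numbers are made up. */
  #include <stdio.h>
  #include <stdbool.h>
  #include <stddef.h>

  #define PAGES            64
  #define PAGE_SIZE_BYTES  4096
  #define BUFFER_BYTES     (PAGES * PAGE_SIZE_BYTES)

  static bool call_fits(size_t arg_bytes, size_t max_reply_bytes)
  {
      /* No procedure should need both a large call and a large reply;
       * enforce that the two together fit the shared buffer. */
      if (arg_bytes > BUFFER_BYTES)
          return false;
      return max_reply_bytes <= BUFFER_BYTES - arg_bytes;
  }

  int main(void)
  {
      /* A WRITE-like call: big arguments, small reply -- fine. */
      printf("write-like: %s\n",
             call_fits(200 * 1024, 1024) ? "accepted" : "rejected");
      /* A READ-like call padded with garbage so the arguments alone
       * nearly fill the buffer -- the large reply no longer fits. */
      printf("oversized:  %s\n",
             call_fits(BUFFER_BYTES - 512, 128 * 1024) ? "accepted" : "rejected");
      return 0;
  }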
-
Dmitry Torokhov authored
commit 7c5bb4ac upstream. Clevo P650RS and other similar devices require i8042 to be reset in order to detect Synaptics touchpad. Reported-by: Paweł Bylica <chfast@gmail.com> Tested-by: Ed Bordin <edbordin@gmail.com> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=190301 Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 6e4cac23 upstream. The FE setups of the Intel SST bytcr_rt5640 and bytcr_rt5651 drivers carry the ignore_suspend flag, and this prevents suspend/resume from working properly while a stream is running, since the SST core code checks for running streams and returns -EBUSY. Drop these superfluous flags to fix the behavior. Also, the bytcr_rt5640 driver lacks the nonatomic flag in some FE definitions, which leads to a kernel Oops at suspend/resume like:

  BUG: scheduling while atomic: systemd-sleep/3144/0x00000003
  Call Trace:
    dump_stack+0x5c/0x7a
    __schedule_bug+0x55/0x70
    __schedule+0x63c/0x8c0
    schedule+0x3d/0x90
    schedule_timeout+0x16b/0x320
    ? del_timer_sync+0x50/0x50
    ? sst_wait_timeout+0xa9/0x170 [snd_intel_sst_core]
    ? sst_wait_timeout+0xa9/0x170 [snd_intel_sst_core]
    ? remove_wait_queue+0x60/0x60
    ? sst_prepare_and_post_msg+0x275/0x960 [snd_intel_sst_core]
    ? sst_pause_stream+0x9b/0x110 [snd_intel_sst_core]
    ....

This patch addresses these appropriately, too. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Al Viro authored
commit 71d6ad08 upstream. Don't assume that server is sane and won't return more data than asked for. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
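The defensive pattern being applied, as a tiny sketch with invented names: clamp whatever count the server claims to what was requested and to the buffer that was allocated:

  /* Sketch: never copy more than we asked for, even if the peer's reply
   * header claims a larger count. Illustrative only. */
  #include <stdio.h>
  #include <stddef.h>

  static size_t sanitize_count(size_t claimed_by_server, size_t requested,
                               size_t buffer_size)
  {
      size_t n = claimed_by_server;

      if (n > requested)       /* server returned more than asked for */
          n = requested;
      if (n > buffer_size)     /* and never more than we can hold */
          n = buffer_size;
      return n;
  }

  int main(void)
  {
      printf("copy %zu bytes\n", sanitize_count(8192, 4096, 4096));
      return 0;
  }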
-
James Cowgill authored
commit c46f59e9 upstream. arch_check_elf contains a usage of current_cpu_data that will call smp_processor_id() with preemption enabled and therefore triggers a "BUG: using smp_processor_id() in preemptible" warning when an fpxx executable is loaded. As a follow-up to commit b244614a ("MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)"), apply the same fix to arch_check_elf by using raw_current_cpu_data instead. The rationale quoted from the previous commit: "It is assumed throughout the kernel that if any CPU has an FPU, then all CPUs would have an FPU as well, so it is safe to perform the check with preemption enabled - change the code to use raw_ variant of the check to avoid the warning." Fixes: 46490b57 ("MIPS: kernel: elf: Improve the overall ABI and FPU mode checks") Signed-off-by: James Cowgill <James.Cowgill@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15951/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
James Hogan authored
commit 9d7f29cd upstream. calculate_min_delta() may incorrectly access a 4th element of buf2[] which only has 3 elements. This may trigger undefined behaviour and has been reported to cause strange crashes in start_kernel() sometime after timer initialization when built with GCC 5.3, possibly due to register/stack corruption:

  sched_clock: 32 bits at 200MHz, resolution 5ns, wraps every 10737418237ns
  CPU 0 Unable to handle kernel paging request at virtual address ffffb0aa, epc == 8067daa8, ra == 8067da84
  Oops[#1]:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.18 #51
  task: 8065e3e0 task.stack: 80644000
  $ 0 : 00000000 00000001 00000000 00000000
  $ 4 : 8065b4d0 00000000 805d0000 00000010
  $ 8 : 00000010 80321400 fffff000 812de408
  $12 : 00000000 00000000 00000000 ffffffff
  $16 : 00000002 ffffffff 80660000 806a666c
  $20 : 806c0000 00000000 00000000 00000000
  $24 : 00000000 00000010
  $28 : 80644000 80645ed0 00000000 8067da84
  Hi : 00000000
  Lo : 00000000
  epc : 8067daa8 start_kernel+0x33c/0x500
  ra : 8067da84 start_kernel+0x318/0x500
  Status: 11000402 KERNEL EXL
  Cause : 4080040c (ExcCode 03)
  BadVA : ffffb0aa
  PrId : 0501992c (MIPS 1004Kc)
  Modules linked in:
  Process swapper/0 (pid: 0, threadinfo=80644000, task=8065e3e0, tls=00000000)
  Call Trace:
  [<8067daa8>] start_kernel+0x33c/0x500
  Code: 24050240 0c0131f9 24849c64 <a200b0a8> 41606020 000000c0 0c1a45e6 00000000 0c1a5f44

UBSAN also detects the same issue:

  ================================================================
  UBSAN: Undefined behaviour in arch/mips/kernel/cevt-r4k.c:85:41
  load of address 80647e4c with insufficient space for an object of type 'unsigned int'
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.18 #47
  Call Trace:
  [<80028f70>] show_stack+0x88/0xa4
  [<80312654>] dump_stack+0x84/0xc0
  [<8034163c>] ubsan_epilogue+0x14/0x50
  [<803417d8>] __ubsan_handle_type_mismatch+0x160/0x168
  [<8002dab0>] r4k_clockevent_init+0x544/0x764
  [<80684d34>] time_init+0x18/0x90
  [<8067fa5c>] start_kernel+0x2f0/0x500
  =================================================================

buf2[] is intentionally only 3 elements so that the last element is the median once 5 samples have been inserted, so explicitly prevent the possibility of comparing against the 4th element rather than extending the array. Fixes: 1fa40555 ("MIPS: cevt-r4k: Dynamically calculate min_delta_ns") Reported-by: Rabin Vincent <rabinv@axis.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Tested-by: Rabin Vincent <rabinv@axis.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15892/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
James Hogan authored
commit 162b270c upstream. KGDB is a kernel debug stub and it can't be used to debug userland as it can only safely access kernel memory. On MIPS however KGDB has always got the register state of sleeping processes from the userland register context at the beginning of the kernel stack. This is meaningless for kernel threads (which never enter userland), and for user threads it prevents the user seeing what it is doing while in the kernel:

  (gdb) info threads
    Id   Target Id            Frame
    ...
    3    Thread 2 (kthreadd)  0x0000000000000000 in ?? ()
    2    Thread 1 (init)      0x000000007705c4b4 in ?? ()
    1    Thread -2 (shadowCPU0) 0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201

Get the register state instead from the (partial) kernel register context stored in the task's thread_struct for resume() to restore. All threads now correctly appear to be in context_switch():

  (gdb) info threads
    Id   Target Id            Frame
    ...
    3    Thread 2 (kthreadd)  context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
    2    Thread 1 (init)      context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
    1    Thread -2 (shadowCPU0) 0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201

Call clobbered registers which aren't saved and exception registers (BadVAddr & Cause) which can't be easily determined without stack unwinding are reported as 0. The PC is taken from the return address, such that the state presented matches that found immediately after returning from resume(). Fixes: 88547001 ("[MIPS] kgdb: add arch support for the kernel's kgdb core") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Jason Wessel <jason.wessel@windriver.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15829/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Noam Camus authored
commit 6492f09e upstream. Make ATOMIC_INIT available for all ARC platforms (including plat-eznps) Signed-off-by: Noam Camus <noamca@mellanox.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 4e7655fd upstream. The snd_use_lock_sync() (thus its implementation snd_use_lock_sync_helper()) has a 5 second timeout to break out of the sync loop. It was introduced from the beginning, just to be "safer", in terms of avoiding stupid bugs. However, as Ben Hutchings suggested, this timeout rather introduces a potential leak or use-after-free that was apparently fixed by the commit 2d7d5400 ("ALSA: seq: Fix race during FIFO resize"): for example, snd_seq_fifo_event_in() -> snd_seq_event_dup() -> copy_from_user() could block for a long time, and snd_use_lock_sync() times out and still leaves the cell at releasing the pool. For fixing such a problem, we remove the break by the timeout while still keeping the warning. Suggested-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
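A pthread-based sketch of the resulting behaviour (all names invented, not the ALSA code): the waiter still warns after a grace period but keeps waiting instead of breaking out, so the object cannot be released while a slow user holds a reference:

  /* Sketch: wait for an in-use counter to drop; warn if it takes
   * suspiciously long, but keep waiting instead of breaking out
   * (breaking out is what allowed use-after-free). Link with -pthread. */
  #include <stdio.h>
  #include <unistd.h>
  #include <pthread.h>
  #include <stdatomic.h>

  static atomic_int use_count = 1;

  static void *slow_user(void *arg)
  {
      (void)arg;
      sleep(3);                          /* e.g. stuck copying from userspace */
      atomic_fetch_sub(&use_count, 1);   /* finally releases its reference */
      return NULL;
  }

  static void use_lock_sync(void)
  {
      int waited_ms = 0;

      while (atomic_load(&use_count) > 0) {
          usleep(100 * 1000);
          waited_ms += 100;
          if (waited_ms == 2000)         /* warn once, but do NOT break */
              fprintf(stderr, "warning: still in use after 2s\n");
      }
  }

  int main(void)
  {
      pthread_t t;

      pthread_create(&t, NULL, slow_user, NULL);
      use_lock_sync();                   /* safe to free only after this */
      pthread_join(&t, NULL);
      printf("all users gone, safe to release\n");
      return 0;
  }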
-
Takashi Sakamoto authored
commit dfb00a56 upstream. An abstraction of asynchronous transactions for transmission of MIDI messages was introduced in Linux v4.4. Each driver can utilize this abstraction to transfer MIDI messages via the fixed-length payload of a transaction to a certain unit address. Filling the payload of the transaction is done by a callback. In this callback, each driver can return a negative error code, however the current implementation assigns the return value to an unsigned variable. This commit changes the type of the variable to fix the bug. Reported-by: Julia Lawall <Julia.Lawall@lip6.fr> Fixes: 585d7cba ("ALSA: firewire-lib: add helper functions for asynchronous transactions to transfer MIDI messages") Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
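A small demonstration of the bug class, unrelated to the firewire-lib code itself: a callback's negative errno loses its meaning once stored in an unsigned variable, so the error branch can never be taken (compilers typically even warn that the comparison is always false):

  /* Sketch: storing a negative error code in an unsigned variable
   * makes "< 0" checks impossible. */
  #include <stdio.h>
  #include <errno.h>

  static int fill_payload(void)
  {
      return -EINVAL;                    /* callback reports an error */
  }

  int main(void)
  {
      unsigned int ulen = fill_payload();  /* bug: the sign is lost */
      int len = fill_payload();            /* fix: keep it signed */

      if (ulen < 0)                        /* always false for unsigned */
          printf("unsigned: error detected\n");
      else
          printf("unsigned: error silently ignored (ulen=%u)\n", ulen);

      if (len < 0)
          printf("signed:   error detected (len=%d)\n", len);
      return 0;
  }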
-
Takashi Sakamoto authored
commit 3d016d57 upstream. With commit 6c29230e ("ALSA: oxfw: delayed registration of sound card"), the ALSA oxfw driver fails to handle SCS.1m/1d due to -EBUSY at a call of snd_card_register(). The cause is that the driver manages to register two rawmidi instances with the same device number 0. This is a regression introduced since kernel 4.7. This commit fixes the regression by fixing up the device property after discovering stream formats. Fixes: 6c29230e ("ALSA: oxfw: delayed registration of sound card") Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jamie Bainbridge authored
[ Upstream commit 105f5528 ] In situations where an skb is paged, the transport header pointer and tail pointer can be the same because the skb contents are in frags. This results in ioctl(SIOCINQ/FIONREAD) incorrectly returning a length of 0 when the length to receive is actually greater than zero. skb->len is already correctly set in ip6_input_finish() with pskb_pull(), so use skb->len as it always returns the correct result for both linear and paged data. Signed-off-by: Jamie Bainbridge <jbainbri@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Wei Wang authored
[ Upstream commit c1201444 ] Always zero out ca_priv data in tcp_assign_congestion_control() so that ca_priv data is cleared out during socket creation. Also always zero out ca_priv data in tcp_reinit_congestion_control() so that when cc algorithm is changed, ca_priv data is cleared out as well. We should still zero out ca_priv data even in TCP_CLOSE state because user could call connect() on AF_UNSPEC to disconnect the socket and leave it in TCP_CLOSE state and later call setsockopt() to switch cc algorithm on this socket. Fixes: 2b0a8c9e ("tcp: add CDG congestion control") Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Wei Wang <weiwan@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
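A sketch of the hygiene rule being enforced, with stand-in struct and function names (not the tcp code): the per-connection private area is cleared both when an algorithm is first assigned and whenever it is switched:

  /* Sketch: clear algorithm-private scratch space on assign AND on
   * reassign, so a new algorithm never sees the previous one's state. */
  #include <stdio.h>
  #include <string.h>

  struct fake_sock {
      const char *cc_name;
      unsigned char ca_priv[64];   /* scratch area owned by the current algorithm */
  };

  static void assign_congestion_control(struct fake_sock *sk, const char *name)
  {
      sk->cc_name = name;
      memset(sk->ca_priv, 0, sizeof(sk->ca_priv));  /* clear on creation */
  }

  static void reinit_congestion_control(struct fake_sock *sk, const char *name)
  {
      sk->cc_name = name;
      memset(sk->ca_priv, 0, sizeof(sk->ca_priv));  /* and again on switch */
  }

  int main(void)
  {
      struct fake_sock sk;

      assign_congestion_control(&sk, "cubic");
      sk.ca_priv[0] = 0x42;                 /* old algorithm leaves state behind */
      reinit_congestion_control(&sk, "cdg");
      printf("ca_priv[0] after switch = %u\n", sk.ca_priv[0]);
      return 0;
  }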
-
WANG Cong authored
[ Upstream commit 199ab00f ] Andrey reported an out-of-bound access in ip6_tnl_xmit(); this is because we use an IPv4 dst in ip6_tnl_xmit() and cast an IPv4 neigh key as an IPv6 address:

  neigh = dst_neigh_lookup(skb_dst(skb), &ipv6_hdr(skb)->daddr);
  if (!neigh)
          goto tx_err_link_failure;
  addr6 = (struct in6_addr *)&neigh->primary_key; // <=== HERE
  addr_type = ipv6_addr_type(addr6);
  if (addr_type == IPV6_ADDR_ANY)
          addr6 = &ipv6_hdr(skb)->daddr;
  memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));

Also the network header of the skb at this point should still be IPv4 for 4in6 tunnels; we should not just use it as an IPv6 header. This patch fixes it by checking if skb->protocol is ETH_P_IPV6: if it is, we are safe to do the nexthop lookup using skb_dst() and ipv6_hdr(skb)->daddr; if not (aka IPv4), we have no clue about which dest address we can pick here, we have to rely on callers to fill it from tunnel config, so just fall back to ip6_route_output() to make the decision. Fixes: ea3dc960 ("ip6_tunnel: Add support for wildcard tunnel endpoints.") Reported-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Kochetkov authored
[ Upstream commit f555f34f ] The Ethernet link on an interrupt driven PHY was not coming up if the Ethernet cable was plugged in before the Ethernet interface was brought up. The patch triggers the PHY state machine to update the link state if the PHY was requested to do auto-negotiation and the auto-negotiation complete flag is already set. During the power-up cycle the PHY does auto-negotiation, generates an interrupt and sets the auto-negotiation complete flag. The interrupt is handled by the PHY state machine but doesn't update the link state because the PHY is in the PHY_READY state. Some time later the MAC is brought up, starts and requests the PHY to do auto-negotiation. If there are no new settings to advertise, genphy_config_aneg() doesn't start PHY auto-negotiation. The PHY continues to stay in the auto-negotiation complete state and doesn't fire an interrupt. At the same time the PHY state machine expects that the PHY started auto-negotiation, keeps waiting for an interrupt from the PHY, and won't get it. Fixes: 321beec5 ("net: phy: Use interrupts when available in NOLINK state") Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com> Cc: stable <stable@vger.kernel.org> # v4.9+ Tested-by: Roger Quadros <rogerq@ti.com> Tested-by: Alexandre Belloni <alexandre.belloni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Ahern authored
[ Upstream commit 8048ced9 ] Taking down the loopback device wreaks havoc on IPv6 routing. By extension, taking down a VRF device wreaks havoc on its table. Dmitry and Andrey both reported heap out-of-bounds reports in the IPv6 FIB code while running syzkaller fuzzer. The root cause is a dead dst that is on the garbage list gets reinserted into the IPv6 FIB. While on the gc (or perhaps when it gets added to the gc list) the dst->next is set to an IPv4 dst. A subsequent walk of the ipv6 tables causes the out-of-bounds access. Andrey's reproducer was the key to getting to the bottom of this. With IPv6, host routes for an address have the dst->dev set to the loopback device. When the 'lo' device is taken down, rt6_ifdown initiates a walk of the fib evicting routes with the 'lo' device which means all host routes are removed. That process moves the dst which is attached to an inet6_ifaddr to the gc list and marks it as dead. The recent change to keep global IPv6 addresses added a new function, fixup_permanent_addr, that is called on admin up. That function restarts dad for an inet6_ifaddr and when it completes the host route attached to it is inserted into the fib. Since the route was marked dead and moved to the gc list, re-inserting the route causes the reported out-of-bounds accesses. If the device with the address is taken down or the address is removed, the WARN_ON in fib6_del is triggered. All of those faults are fixed by regenerating the host route if the existing one has been moved to the gc list, something that can be determined by checking if the rt6i_ref counter is 0. Fixes: f1705ec1 ("net: ipv6: Make address flushing on ifdown optional") Reported-by: Dmitry Vyukov <dvyukov@google.com> Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Herbert Xu authored
[ Upstream commit f6478218 ] When a parent macvlan device is destroyed we end up purging its broadcast queue without dropping the device reference count on the packet source device. This causes the source device to linger. This patch drops that reference count. Fixes: 260916df ("macvlan: Fix potential use-after free for...") Reported-by: Joe Ghalam <Joe.Ghalam@dell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ilan Tayari authored
[ Upstream commit 5e82c9e4 ] The handler for ETHTOOL_GRXCLSRLALL must set info->data to the size of the table, regardless of the number of entries in it. The existing code does not do that, and this breaks all usage of ethtool -N or -n without an explicit location, with this error:

  rmgr: Invalid RX class rules table size: Success

Set info->data to the table size. Tested:

  ethtool -n ens8
  ethtool -N ens8 flow-type ip4 src-ip 1.1.1.1 dst-ip 2.2.2.2 action 1
  ethtool -N ens8 flow-type ip4 src-ip 1.1.1.1 dst-ip 2.2.2.2 action 1 loc 55
  ethtool -n ens8
  ethtool -N ens8 delete 1023
  ethtool -N ens8 delete 55

Fixes: f913a72a ("net/mlx5e: Add support to get ethtool flow rules") Signed-off-by: Ilan Tayari <ilant@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-