- 10 Oct, 2014 40 commits
-
Alex Deucher authored
Use the new vga_switcheroo_fini_domain_pm_ops function to unregister the pm ops. Based on a patch from: Pali Rohár <pali.rohar@gmail.com> bug: https://bugzilla.kernel.org/show_bug.cgi?id=84431
Reviewed-by:
Ben Skeggs <bskeggs@redhat.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Cc: Ben Skeggs <bskeggs@redhat.com> (cherry picked from commit 53beaa01) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Zhiqiang Zhang authored
The ref-cycles event is specific to Intel cores, but in 3.10 stable it can still be used on the ARM architecture, where it yields a wrong return value. This patch fixes the bug and makes it return NOT SUPPORTED distinctly. Upstream, this bug has been fixed in a different way that changes more than one file and more than 1000 lines; the primary commit is 6b7658ec. Because of that, we cannot simply cherry-pick. Signed-off-by:
Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Christopher Covington <cov@codeaurora.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit e7a0374e) (cherry picked from commit HEAD) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Cong Wang authored
We saw a kernel soft lockup in perf_remove_from_context(); it looks like the `perf` process, when exiting, could not get out of the retry loop. Meanwhile, the target process was forking a child. So either the target process should execute the smp function call to deactivate the event (if it was running) or it should do a context switch which deactivates the event. It seems we optimize out a context switch in perf_event_context_sched_out(), and what's more important, we still test an obsolete task pointer when retrying, so no one actually would deactivate that event in this situation. Fix it directly by reloading the task pointer in perf_remove_from_context(). This should cure the above soft lockup. Signed-off-by:
Cong Wang <cwang@twopensource.com> Signed-off-by:
Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/1409696840-843-1-git-send-email-xiyou.wangcong@gmail.com
Signed-off-by:
Ingo Molnar <mingo@kernel.org> (cherry picked from commit 3577af70) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
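A minimal sketch of the reload-and-retry shape described above (hedged; simplified from kernel/events/core.c around the upstream fix, with the surrounding function and the `re` argument bundling the event assumed from that patch rather than quoted verbatim):

    retry:
        /* Try to deactivate the event on the CPU it is running on. */
        if (!task_function_call(task, __perf_remove_from_context, &re))
            return;

        raw_spin_lock_irq(&ctx->lock);
        if (ctx->is_active) {
            raw_spin_unlock_irq(&ctx->lock);
            /*
             * Reload the task pointer: perf_event_context_sched_out()
             * may have handed the context to another task, so retrying
             * against the stale pointer would spin forever.
             */
            task = ctx->task;
            goto retry;
        }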
-
Matan Barak authored
When marshaling a user path to the kernel struct ib_sa_path, we need to zero smac and dmac and set the vlan id to the "no vlan" value. Fixes: dd5f03be ("IB/core: Ethernet L2 attributes in verbs/cm structures") Reported-by:
Aleksey Senin <alekseys@mellanox.com> Signed-off-by:
Matan Barak <matanb@mellanox.com> Signed-off-by:
Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by:
Roland Dreier <roland@purestorage.com> (cherry picked from commit a59c5850) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Richard Larocque authored
Locks the k_itimer's it_lock member when handling the alarm timer's expiry callback. The regular posix timers defined in posix-timers.c have this lock held during timeout processing because their callbacks are routed through posix_timer_fn(). The alarm timers follow a different path, so they ought to grab the lock somewhere else. Cc: stable@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Sharvil Nanavati <sharvil@google.com> Signed-off-by:
Richard Larocque <rlarocque@google.com> Signed-off-by:
John Stultz <john.stultz@linaro.org> (cherry picked from commit 474e941b) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
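A sketch of the shape of the fix (hedged; simplified from kernel/time/alarmtimer.c, with error handling and interval re-arming elided):

    static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
                                                      ktime_t now)
    {
        struct k_itimer *ptr = container_of(alarm, struct k_itimer,
                                            it.alarm.alarmtimer);
        unsigned long flags;

        /* Hold it_lock during expiry, matching what posix_timer_fn()
         * guarantees for the regular posix timers. */
        spin_lock_irqsave(&ptr->it_lock, flags);
        if (posix_timer_event(ptr, 0) != 0)
            ptr->it_overrun++;
        spin_unlock_irqrestore(&ptr->it_lock, flags);

        return ALARMTIMER_NORESTART;
    }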
-
Richard Larocque authored
Avoids sending a signal to alarm timers created with sigev_notify set to SIGEV_NONE by checking for that special case in the timeout callback. The regular posix timers avoid sending signals to SIGEV_NONE timers by not scheduling any callbacks for them in the first place. Although it would be possible to do something similar for alarm timers, it's simpler to handle this as a special case in the timeout. Prior to this patch, the alarm timer would ignore the sigev_notify value and try to deliver signals to the process anyway. Even worse, the sanity check for the value of sigev_signo is skipped when SIGEV_NONE was specified, so the signal number could be bogus. If sigev_signo was an uninitialized value (as it often would be if SIGEV_NONE is used), then it's hard to predict which signal will be sent. Cc: stable@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Sharvil Nanavati <sharvil@google.com> Signed-off-by:
Richard Larocque <rlarocque@google.com> Signed-off-by:
John Stultz <john.stultz@linaro.org> (cherry picked from commit 265b81d2) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Richard Larocque authored
Returns the time remaining for an alarm timer, rather than the time at which it is scheduled to expire. If the timer has already expired or it is not currently scheduled, the it_value's members are set to zero. This new behavior matches that of the other posix-timers and the POSIX specifications. This is a change in user-visible behavior, and may break existing applications. Hopefully, few users rely on the old incorrect behavior. Cc: stable@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Sharvil Nanavati <sharvil@google.com> Signed-off-by:
Richard Larocque <rlarocque@google.com> [jstultz: minor style tweak] Signed-off-by:
John Stultz <john.stultz@linaro.org> (cherry picked from commit e86fea76) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
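A sketch of the corrected read-back (hedged; simplified from kernel/time/alarmtimer.c, assuming the alarm_expires_remaining() helper introduced by the same upstream series):

    static void alarm_timer_get(struct k_itimer *timr,
                                struct itimerspec *cur_setting)
    {
        ktime_t remaining =
            alarm_expires_remaining(&timr->it.alarm.alarmtimer);

        if (ktime_to_ns(remaining) > 0) {
            /* Report the time remaining, like the other posix-timers. */
            cur_setting->it_value = ktime_to_timespec(remaining);
        } else {
            /* Already expired or not scheduled: report zero. */
            cur_setting->it_value.tv_sec = 0;
            cur_setting->it_value.tv_nsec = 0;
        }
        cur_setting->it_interval = ktime_to_timespec(timr->it.alarm.interval);
    }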
-
John David Anglin authored
In spite of what the GCC manual says, the -mfast-indirect-calls option has never been supported in the 64-bit parisc compiler. Indirect calls have always been done using function descriptors irrespective of the -mfast-indirect-calls option. Recently, it was noticed that a function descriptor was always requested when the -mfast-indirect-calls option was specified. This caused problems when the option was used in application code and doesn't make any sense, because the whole point of the option is to avoid using a function descriptor for indirect calls. Fixing this broke 64-bit kernel builds. I will fix GCC, but for now we need the attached change. This results in the same kernel code as before. Signed-off-by:
John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # v3.0+ Signed-off-by:
Helge Deller <deller@gmx.de> (cherry picked from commit d26a7730) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Guy Martin authored
The current LWS CAS only works correctly for 32-bit. The new LWS allows for CAS operations of variable size. Signed-off-by:
Guy Martin <gmsoft@tuxicoman.be> Cc: <stable@vger.kernel.org> # 3.13+ Signed-off-by:
Helge Deller <deller@gmx.de> (cherry picked from commit 89206491) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Richard Genoud authored
In set_termios(), interrupts were not disabled if UART_ENABLE_MS() was false. Tested on at91sam9g35. Signed-off-by:
Richard Genoud <richard.genoud@gmail.com> Cc: stable <stable@vger.kernel.org> # >= 3.16 Reviewed-by:
Peter Hurley <peter@hurleysoftware.com> Acked-by:
Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 35b675b9) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Michael Ellerman authored
Similar to the previous commit which described why we need to add a barrier to arch_spin_is_locked(), we have a similar problem with spin_unlock_wait(). We need a barrier on entry to ensure any spinlock we have previously taken is visibly locked prior to the load of lock->slock. It's also not clear if spin_unlock_wait() is intended to have ACQUIRE semantics. For now be conservative and add a barrier on exit to give it ACQUIRE semantics. Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> (cherry picked from commit 78e05b14) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
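A sketch of the barrier placement (hedged; simplified from arch/powerpc/include/asm/spinlock.h; the real code also lowers thread priority and yields to the hypervisor while spinning):

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
        /* Entry barrier: any spinlock we previously took is visibly
         * held before we sample lock->slock. */
        smp_mb();

        while (lock->slock)
            cpu_relax();

        /* Exit barrier: conservatively provide ACQUIRE semantics. */
        smp_mb();
    }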
-
Anton Blanchard authored
ABIv2 kernels are failing to backtrace through the kernel. An example:

    39.30%  readseek2_proce  [kernel.kallsyms]  [k] find_get_entry
            |
            --- find_get_entry
                __GI___libc_read

The problem is in valid_next_sp() where we check that the new stack pointer is at least STACK_FRAME_OVERHEAD below the previous one. ABIv1 has a minimum stack frame size of 112 bytes consisting of 48 bytes and 64 bytes of parameter save area. ABIv2 changes that to 32 bytes with no parameter save area. STACK_FRAME_OVERHEAD is in theory the minimum stack frame size, but we have over 240 uses of it, some of which assume that it includes space for the parameter area. We need to work through all our stack defines and rationalise them, but let's fix perf now by creating STACK_FRAME_MIN_SIZE and using it in valid_next_sp(). This fixes the issue:

    30.64%  readseek2_proce  [kernel.kallsyms]  [k] find_get_entry
            |
            --- find_get_entry
                pagecache_get_page
                generic_file_read_iter
                new_sync_read
                vfs_read
                sys_read
                syscall_exit
                __GI___libc_read

Cc: stable@vger.kernel.org # 3.16+ Reported-by:
Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by:
Anton Blanchard <anton@samba.org> (cherry picked from commit 85101af1) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
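A sketch of the new define and its use (hedged; condensed from the upstream patch to the powerpc ptrace header and perf callchain code):

    /* ABIv2 reduces the minimum stack frame to 32 bytes; ABIv1 keeps
     * the traditional 112-byte frame, for which STACK_FRAME_OVERHEAD
     * is the right bound. */
    #if defined(_CALL_ELF) && _CALL_ELF == 2
    #define STACK_FRAME_MIN_SIZE    32
    #else
    #define STACK_FRAME_MIN_SIZE    STACK_FRAME_OVERHEAD
    #endif

    /* in valid_next_sp(): the next frame must sit at least the true
     * minimum frame size above the previous one */
    if (sp < prev_sp + STACK_FRAME_MIN_SIZE)
        return 0;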
-
Wanpeng Li authored
The following bug can be triggered by hot adding and removing a large number of xen domain0's vcpus repeatedly:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
    IP: [..] find_busiest_group
    PGD 5a9d5067 PUD 13067 PMD 0
    Oops: 0000 [#3] SMP
    [...]
    Call Trace:
     load_balance
     ? _raw_spin_unlock_irqrestore
     idle_balance
     __schedule
     schedule
     schedule_timeout
     ? lock_timer_base
     schedule_timeout_uninterruptible
     msleep
     lock_device_hotplug_sysfs
     online_store
     dev_attr_store
     sysfs_write_file
     vfs_write
     SyS_write
     system_call_fastpath

Last level cache shared mask is built during CPU up and the build_sched_domain() routine takes advantage of it to setup the sched domain CPU topology. However, llc_shared_mask is not released during CPU disable, which leads to an invalid sched domain CPU topology. This patch fixes it by releasing the llc_shared_mask correctly during CPU disable. Yasuaki also reported that this can happen on real hardware: https://lkml.org/lkml/2014/7/22/1018 His case is here:

==
Here is an example on my system. My system has 4 sockets and each socket has 15 cores and HT is enabled. In this case, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-44, 90-104
    Socket#3 | 45-59, 105-119

Then llc_shared_mask of CPU#30 has 0x3fff80000001fffc0000000. It means that last level cache of Socket#2 is shared with CPU#30-44 and 90-104. When hot-removing socket#2 and #3, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89

But llc_shared_mask is not cleared. So llc_shared_mask of CPU#30 remains having 0x3fff80000001fffc0000000. After that, when hot-adding socket#2 and #3, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-59
    Socket#3 | 90-119

Then llc_shared_mask of CPU#30 becomes 0x3fff8000fffffffc0000000. It means that last level cache of Socket#2 is shared with CPU#30-59 and 90-104. So the mask has the wrong value. Signed-off-by:
Wanpeng Li <wanpeng.li@linux.intel.com> Tested-by:
Linn Crosetto <linn@hp.com> Reviewed-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Toshi Kani <toshi.kani@hp.com> Reviewed-by:
Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: <stable@vger.kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Steven Rostedt <srostedt@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1411547885-48165-1-git-send-email-wanpeng.li@linux.intel.comSigned-off-by:
Ingo Molnar <mingo@kernel.org> (cherry picked from commit 03bd4e1f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
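A sketch of the CPU-offline cleanup described above (hedged; simplified from remove_siblinginfo() in arch/x86/kernel/smpboot.c):

    static void remove_siblinginfo(int cpu)
    {
        int sibling;

        /* ... existing sibling map cleanup ... */

        /* Drop this CPU from every sibling's LLC mask, then clear its
         * own, so a later hot-add rebuilds the mask from scratch. */
        for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
            cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
        cpumask_clear(cpu_llc_shared_mask(cpu));

        /* ... */
    }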
-
Joseph Qi authored
There is a deadlock case which was reported by Guozhonghua: https://oss.oracle.com/pipermail/ocfs2-devel/2014-September/010079.html This case is caused by &res->spinlock and &dlm->master_lock misordering in different threads. It was introduced by commit 8d400b81 ("ocfs2/dlm: Clean up refmap helpers"). Since the lockres is new, it doesn't require the &res->spinlock. So remove it. Fixes: 8d400b81 ("ocfs2/dlm: Clean up refmap helpers") Signed-off-by:
Joseph Qi <joseph.qi@huawei.com> Reviewed-by:
joyce.xue <xuejiufei@huawei.com> Reported-by:
Guozhonghua <guozhonghua@h3c.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 5760a97c) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Andreas Rohner authored
This bug leads to reproducible silent data loss, despite the use of msync(), sync() and a clean unmount of the file system. It is easily reproducible with the following script:

----------------[BEGIN SCRIPT]--------------------
mkfs.nilfs2 -f /dev/sdb
mount /dev/sdb /mnt
dd if=/dev/zero bs=1M count=30 of=/mnt/testfile
umount /mnt
mount /dev/sdb /mnt
CHECKSUM_BEFORE="$(md5sum /mnt/testfile)"
/root/mmaptest/mmaptest /mnt/testfile 30 10 5
sync
CHECKSUM_AFTER="$(md5sum /mnt/testfile)"
umount /mnt
mount /dev/sdb /mnt
CHECKSUM_AFTER_REMOUNT="$(md5sum /mnt/testfile)"
umount /mnt
echo "BEFORE MMAP:\t$CHECKSUM_BEFORE"
echo "AFTER MMAP:\t$CHECKSUM_AFTER"
echo "AFTER REMOUNT:\t$CHECKSUM_AFTER_REMOUNT"
----------------[END SCRIPT]--------------------

The mmaptest tool looks something like this (very simplified, with error checking removed):

----------------[BEGIN mmaptest]--------------------
data = mmap(NULL, file_size - file_offset, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, file_offset);

for (i = 0; i < write_count; ++i) {
        memcpy(data + i * 4096, buf, sizeof(buf));
        msync(data, file_size - file_offset, MS_SYNC);
}
----------------[END mmaptest]--------------------

The output of the script looks something like this:

BEFORE MMAP:    281ed1d5ae50e8419f9b978aab16de83 /mnt/testfile
AFTER MMAP:     6604a1c31f10780331a6850371b3a313 /mnt/testfile
AFTER REMOUNT:  281ed1d5ae50e8419f9b978aab16de83 /mnt/testfile

So it is clear that the changes done using mmap() do not survive a remount. This can be reproduced 100% of the time. The problem was introduced in commit 136e8770 ("nilfs2: fix issue of nilfs_set_page_dirty() for page at EOF boundary"). If the page was read with mpage_readpage() or mpage_readpages(), for example, then it has no buffers attached to it. In that case page_has_buffers(page) in nilfs_set_page_dirty() will be false. Therefore nilfs_set_file_dirty() is never called and the pages are never collected and never written to disk. This patch fixes the problem by also calling nilfs_set_file_dirty() if the page has no buffers attached to it. [akpm@linux-foundation.org: s/PAGE_SHIFT/PAGE_CACHE_SHIFT/] Signed-off-by:
Andreas Rohner <andreas.rohner@gmx.net> Tested-by:
Andreas Rohner <andreas.rohner@gmx.net> Signed-off-by:
Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 56d7acc7) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
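A sketch of the fixed hook (hedged; simplified from fs/nilfs2/inode.c, with the buffered path elided):

    static int nilfs_set_page_dirty(struct page *page)
    {
        struct inode *inode = page->mapping->host;
        int ret = __set_page_dirty_nobuffers(page);

        if (page_has_buffers(page)) {
            /* ... existing path: account the dirty buffers ... */
        } else if (ret) {
            /* Page was read via mpage_readpage() and has no buffers,
             * but the file must still be marked dirty so the segment
             * constructor collects and writes it. */
            unsigned nr_dirty = 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits);

            nilfs_set_file_dirty(inode, nr_dirty);
        }
        return ret;
    }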
-
Andrey Vagin authored
Currently we handle only ENOSPC. In case of other errors the file_handle variable isn't filled properly and we will show a part of the stack. Signed-off-by:
Andrey Vagin <avagin@openvz.org> Acked-by:
Cyrill Gorcunov <gorcunov@openvz.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 7e882481) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Andrey Vagin authored
MAX_HANDLE_SZ is equal to 128, but currently the size of pad is only 64 bytes, so exportfs_encode_inode_fh can return an error. Signed-off-by:
Andrey Vagin <avagin@openvz.org> Acked-by:
Cyrill Gorcunov <gorcunov@openvz.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 1fc98d11) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
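The shape of the fix (hedged; from fs/notify/fdinfo.c, where this on-stack buffer backs exportfs_encode_inode_fh()):

    struct {
        struct file_handle handle;
        u8 pad[MAX_HANDLE_SZ];   /* was pad[64]; must cover MAX_HANDLE_SZ (128) */
    } f;

    f.handle.handle_bytes = sizeof(f.pad);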
-
Rasmus Villemoes authored
The C operator <= defines a perfectly fine total ordering on the set of values representable in a long. However, unlike its namesake in the integers, it is not translation invariant, meaning that we do not have "b <= c" iff "a+b <= a+c" for all a,b,c. This means that it is always wrong to try to boil down the relationship between two longs to a question about the sign of their difference, because the resulting relation [a LEQ b iff a-b <= 0] is neither anti-symmetric nor transitive. The former is due to -LONG_MIN==LONG_MIN (take any two a,b with a-b = LONG_MIN; then a LEQ b and b LEQ a, but a != b). The latter can either be seen by observing that x LEQ x+1 for all x, implying x LEQ x+1 LEQ x+2 ... LEQ x-1 LEQ x; or more directly with the simple example a=LONG_MIN, b=0, c=1, for which a-b < 0, b-c < 0, but a-c > 0. Note that it makes absolutely no difference that a transmogrifying bijection has been applied before the comparison is done. In fact, had the obfuscation not been done, one could probably not observe the bug (assuming all values being compared always lie in one half of the address space, the mathematical value of a-b is always representable in a long). As it stands, one can easily obtain three file descriptors exhibiting the non-transitivity of kcmp().

Side note 1: I can't see that ensuring the MSB of the multiplier is set serves any purpose other than obfuscating the obfuscating code.

Side note 2:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <assert.h>
    #include <sys/syscall.h>

    enum kcmp_type {
        KCMP_FILE,
        KCMP_VM,
        KCMP_FILES,
        KCMP_FS,
        KCMP_SIGHAND,
        KCMP_IO,
        KCMP_SYSVSEM,
        KCMP_TYPES,
    };

    pid_t pid;

    int kcmp(pid_t pid1, pid_t pid2, int type,
             unsigned long idx1, unsigned long idx2)
    {
        return syscall(SYS_kcmp, pid1, pid2, type, idx1, idx2);
    }

    int cmp_fd(int fd1, int fd2)
    {
        int c = kcmp(pid, pid, KCMP_FILE, fd1, fd2);
        if (c < 0) {
            perror("kcmp");
            exit(1);
        }
        assert(0 <= c && c < 3);
        return c;
    }

    int cmp_fdp(const void *a, const void *b)
    {
        static const int normalize[] = {0, -1, 1};
        return normalize[cmp_fd(*(int*)a, *(int*)b)];
    }

    #define MAX 100 /* This is plenty; I've seen it trigger for MAX==3 */

    int main(int argc, char *argv[])
    {
        int r, s, count = 0;
        int REL[3] = {0,0,0};
        int fd[MAX];

        pid = getpid();
        while (count < MAX) {
            r = open("/dev/null", O_RDONLY);
            if (r < 0)
                break;
            fd[count++] = r;
        }
        printf("opened %d file descriptors\n", count);

        for (r = 0; r < count; ++r) {
            for (s = r+1; s < count; ++s) {
                REL[cmp_fd(fd[r], fd[s])]++;
            }
        }
        printf("== %d\t< %d\t> %d\n", REL[0], REL[1], REL[2]);

        qsort(fd, count, sizeof(fd[0]), cmp_fdp);
        memset(REL, 0, sizeof(REL));

        for (r = 0; r < count; ++r) {
            for (s = r+1; s < count; ++s) {
                REL[cmp_fd(fd[r], fd[s])]++;
            }
        }
        printf("== %d\t< %d\t> %d\n", REL[0], REL[1], REL[2]);
        return (REL[0] + REL[2] != 0);
    }

Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by:
Cyrill Gorcunov <gorcunov@openvz.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit acbbe6fb) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
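The corrected comparison avoids subtraction entirely; a sketch (hedged, following upstream kernel/kcmp.c):

    static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type)
    {
        long t1, t2;

        t1 = kptr_obfuscate((long)v1, type);
        t2 = kptr_obfuscate((long)v2, type);

        /* Explicit three-way compare: 0 if equal, 1 if t1 < t2,
         * 2 if t1 > t2. Unlike sign-of-difference, this is a proper
         * total order: anti-symmetric and transitive. */
        return (t1 < t2) | ((t1 > t2) << 1);
    }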
-
Steven Rostedt (Red Hat) authored
When updating what an ftrace_ops traces, if it is registered (that is, actively tracing), and that ftrace_ops uses the shared global_ops local_hash, then we need to update all tracers that are active and also share the global_ops' ftrace_hash_ops. Cc: stable@vger.kernel.org # 3.16 (apply after 3.17-rc4 is out) Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> (cherry picked from commit 84261912) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Felipe Balbi authored
After commit 2ec2a8be (usb: dwc3: gadget: always enable IOC on bulk/interrupt transfers) we created a situation where it was possible to hang a bulk/interrupt endpoint if we had more than one pending request in our queue and they were both started with a single Start Transfer command. The problem triggers because we had not enabled the Transfer In Progress event for those endpoints and we were not able to process early giveback of requests completed without the LST bit set. Fix the problem by finally enabling the Xfer In Progress event for all endpoint types, except control. Fixes: 2ec2a8be (usb: dwc3: gadget: always enable IOC on bulk/interrupt transfers) Cc: <stable@vger.kernel.org> # v3.14+ Reported-by:
Pratyush Anand <pratyush.anand@st.com> Signed-off-by:
Felipe Balbi <balbi@ti.com> (cherry picked from commit 0b93a4c8) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Jens Axboe authored
Commit 2da78092 changed the locking from a mutex to a spinlock, so we no longer sleep in this context. But there was a leftover might_sleep() in there, which now triggers since we do the final free from an RCU callback. Get rid of it. Reported-by:
Pontus Fuchs <pontus.fuchs@gmail.com> Signed-off-by:
Jens Axboe <axboe@fb.com> (cherry picked from commit 46f341ff) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
J. Bruce Fields authored
Nikita Yuschenko reported that booting a kernel with init=/bin/sh and then nfs mounting without portmap or rpcbind running using a busybox mount resulted in:

    # mount -t nfs 10.30.130.21:/opt /mnt
    svc: failed to register lockdv1 RPC service (errno 111).
    lockd_up: makesock failed, error=-111
    Unable to handle kernel paging request for data at address 0x00000030
    Faulting instruction address: 0xc055e65c
    Oops: Kernel access of bad area, sig: 11 [#1]
    MPC85xx CDS
    Modules linked in:
    CPU: 0 PID: 1338 Comm: mount Not tainted 3.10.44.cge #117
    task: cf29cea0 ti: cf35c000 task.ti: cf35c000
    NIP: c055e65c LR: c0566490 CTR: c055e648
    REGS: cf35dad0 TRAP: 0300 Not tainted (3.10.44.cge)
    MSR: 00029000 <CE,EE,ME> CR: 22442488 XER: 20000000
    DEAR: 00000030, ESR: 00000000
    GPR00: c05606f4 cf35db80 cf29cea0 cf0ded80 cf0dedb8 00000001 1dec3086 00000000
    GPR08: 00000000 c07b1640 00000007 1dec3086 22442482 100b9758 00000000 10090ae8
    GPR16: 00000000 000186a5 00000000 00000000 100c3018 bfa46edc 100b0000 bfa46ef0
    GPR24: cf386ae0 c07834f0 00000000 c0565f88 00000001 cf0dedb8 00000000 cf0ded80
    NIP [c055e65c] call_start+0x14/0x34
    LR [c0566490] __rpc_execute+0x70/0x250
    Call Trace:
    [cf35db80] [00000080] 0x80 (unreliable)
    [cf35dbb0] [c05606f4] rpc_run_task+0x9c/0xc4
    [cf35dbc0] [c0560840] rpc_call_sync+0x50/0xb8
    [cf35dbf0] [c056ee90] rpcb_register_call+0x54/0x84
    [cf35dc10] [c056f24c] rpcb_register+0xf8/0x10c
    [cf35dc70] [c0569e18] svc_unregister.isra.23+0x100/0x108
    [cf35dc90] [c0569e38] svc_rpcb_cleanup+0x18/0x30
    [cf35dca0] [c0198c5c] lockd_up+0x1dc/0x2e0
    [cf35dcd0] [c0195348] nlmclnt_init+0x2c/0xc8
    [cf35dcf0] [c015bb5c] nfs_start_lockd+0x98/0xec
    [cf35dd20] [c015ce6c] nfs_create_server+0x1e8/0x3f4
    [cf35dd90] [c0171590] nfs3_create_server+0x10/0x44
    [cf35dda0] [c016528c] nfs_try_mount+0x158/0x1e4
    [cf35de20] [c01670d0] nfs_fs_mount+0x434/0x8c8
    [cf35de70] [c00cd3bc] mount_fs+0x20/0xbc
    [cf35de90] [c00e4f88] vfs_kern_mount+0x50/0x104
    [cf35dec0] [c00e6e0c] do_mount+0x1d0/0x8e0
    [cf35df10] [c00e75ac] SyS_mount+0x90/0xd0
    [cf35df40] [c000ccf4] ret_from_syscall+0x0/0x3c

The addition of svc_shutdown_net() resulted in two calls to svc_rpcb_cleanup(); the second is no longer necessary and crashes when it calls rpcb_register_call with clnt=NULL. Reported-by:
Nikita Yushchenko <nyushchenko@dev.rtsoft.ru> Fixes: 679b033d "lockd: ensure we tear down any live sockets when socket creation fails during lockd_up" Cc: stable@vger.kernel.org Acked-by:
Jeff Layton <jlayton@primarydata.com> Signed-off-by:
J. Bruce Fields <bfields@redhat.com> (cherry picked from commit 7c17705e) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Larry Finger authored
The Sitecom WLA-2102 adapter uses this driver. Reported-by:
Nico Baggus <nico-linux@noci.xs4all.nl> Signed-off-by:
Larry Finger <Larry.Finger@lwfinger.net> Cc: Nico Baggus <nico-linux@noci.xs4all.nl> Cc: Stable <stable@vger.kernel.org> Signed-off-by:
John W. Linville <linville@tuxdriver.com> (cherry picked from commit c6651716) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Tejun Heo authored
If pcpu_map_pages() fails midway, it unmaps the already mapped pages. Currently, it doesn't flush tlb after the partial unmapping. This may be okay in most cases as the established mapping hasn't been used at that point but it can go wrong and when it goes wrong it'd be extremely difficult to track down. Flush tlb after the partial unmapping. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org (cherry picked from commit 849f5169) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Tejun Heo authored
When pcpu_alloc_pages() fails midway, pcpu_free_pages() is invoked to free what has already been allocated. The invocation is across the whole requested range and pcpu_free_pages() will try to free all non-NULL pages; unfortunately, this is incorrect as pcpu_get_pages_and_bitmap(), unlike what its comment suggests, doesn't clear the pages array and thus the array may have entries from the previous invocations making the partial failure path free incorrect pages. Fix it by open-coding the partial freeing of the already allocated pages. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org (cherry picked from commit f0d27965) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
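A sketch of the open-coded partial free (hedged; simplified from the upstream fix to pcpu_alloc_pages() in mm/percpu-vm.c):

    err:
        /* Free only what THIS invocation allocated: the tail of the
         * current CPU's range, then the full range of every CPU already
         * completed. Stale entries left in the pages array by previous
         * invocations are not touched. */
        while (--i >= page_start)
            __free_page(pages[pcpu_page_idx(cpu, i)]);

        for_each_possible_cpu(tcpu) {
            if (tcpu == cpu)
                break;
            for (i = page_start; i < page_end; i++)
                __free_page(pages[pcpu_page_idx(tcpu, i)]);
        }
        return -ENOMEM;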
-
Honggang Li authored
This reverts commit 3189eddb ("percpu: free percpu allocation info for uniprocessor system"). The commit causes a hang with a crisv32 image. This may be an architecture problem, but at least for now the revert is necessary to be able to boot a crisv32 image. Cc: Tejun Heo <tj@kernel.org> Cc: Honggang Li <enjoymindful@gmail.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Tejun Heo <tj@kernel.org> Fixes: 3189eddb ("percpu: free percpu allocation info for uniprocessor system") Cc: stable@vger.kernel.org # Please don't apply 3189eddb

percpu-refcount: make percpu_ref based on longs instead of ints

percpu_ref is currently based on ints and the number of refs it can cover is (1 << 31). This makes it impossible to use a percpu_ref to count memory objects or pages on 64bit machines as it may overflow. This forces those users to somehow aggregate the references before contributing to the percpu_ref, which is often cumbersome and sometimes challenging to do while getting the same level of performance as using the percpu_ref directly. While using ints for the percpu counters makes them pack tighter on 64bit machines, the possible gain from using ints instead of longs is extremely small compared to the overall gain from per-cpu operation. This patch makes percpu_ref based on longs so that it can be used to directly count memory objects or pages. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kent Overstreet <kmo@daterainc.com> Cc: Johannes Weiner <hannes@cmpxchg.org>

percpu-refcount: improve WARN messages

percpu_ref's WARN messages can be a lot more helpful by indicating who's the culprit. Make them report the release function that the offending percpu-refcount is associated with. This should make it a lot easier to track down the reported invalid refcnting operations. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kent Overstreet <kmo@daterainc.com>

percpu: fix locking regression in the failure path of pcpu_alloc()

While updating locking, b38d08f3 ("percpu: restructure locking") broke the pcpu_create_chunk() creation path in pcpu_alloc(). It returns without releasing pcpu_alloc_mutex. Fix it. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Julia Lawall <julia.lawall@lip6.fr>

percpu-refcount: add @gfp to percpu_ref_init()

Percpu allocator now supports allocation mask. Add @gfp to percpu_ref_init() so that !GFP_KERNEL allocation masks can be used with percpu_refs too. This patch doesn't make any functional difference. v2: blk-mq conversion was missing. Updated. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kent Overstreet <koverstreet@google.com> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Li Zefan <lizefan@huawei.com> Cc: Nicholas A. Bellinger <nab@linux-iscsi.org> Cc: Jens Axboe <axboe@kernel.dk>

proportions: add @gfp to init functions

Percpu allocator now supports allocation mask. Add @gfp to [flex_]proportions init functions so that !GFP_KERNEL allocation masks can be used with them too. This patch doesn't make any functional difference. Signed-off-by:
Tejun Heo <tj@kernel.org> Reviewed-by:
Jan Kara <jack@suse.cz> Cc: Peter Zijlstra <peterz@infradead.org>

percpu_counter: add @gfp to percpu_counter_init()

Percpu allocator now supports allocation mask. Add @gfp to percpu_counter_init() so that !GFP_KERNEL allocation masks can be used with percpu_counters too. We could have left percpu_counter_init() alone and added percpu_counter_init_gfp(); however, the number of users isn't that high and introducing _gfp variants to all percpu data structures would be quite ugly, so let's just do the conversion. This is the one with the most users. Other percpu data structures are a lot easier to convert. This patch doesn't make any functional difference. Signed-off-by:
Tejun Heo <tj@kernel.org> Acked-by:
Jan Kara <jack@suse.cz> Acked-by:
"David S. Miller" <davem@davemloft.net> Cc: x86@kernel.org Cc: Jens Axboe <axboe@kernel.dk> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> percpu_counter: make percpu_counters_lock irq-safe percpu_counter is scheduled to grow @gfp support to allow atomic initialization. This patch makes percpu_counters_lock irq-safe so that it can be safely used from atomic contexts. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: implement asynchronous chunk population

The percpu allocator now supports atomic allocations by only allocating from already populated areas, but the mechanism to ensure that there's an adequate amount of populated areas was missing. This patch expands pcpu_balance_work so that in addition to freeing excess free chunks it also populates chunks to maintain an adequate level of populated areas. pcpu_alloc() schedules pcpu_balance_work if the amount of free populated areas is too low or after an atomic allocation failure.

* PERCPU_DYNAMIC_RESERVE is increased by two pages to account for PCPU_EMPTY_POP_PAGES_LOW.

* pcpu_async_enabled is added to gate both async jobs - chunk->map_extend_work and pcpu_balance_work - so that we don't end up scheduling them while the needed subsystems aren't up yet. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: rename pcpu_reclaim_work to pcpu_balance_work

pcpu_reclaim_work will also be used to populate chunks asynchronously. Rename it to pcpu_balance_work in preparation. pcpu_reclaim() is renamed to pcpu_balance_workfn() and some of its local variables are renamed too. This is pure rename. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: implmeent pcpu_nr_empty_pop_pages and chunk->nr_populated

pcpu_nr_empty_pop_pages counts the number of empty populated pages across all chunks and chunk->nr_populated counts the number of populated pages in a chunk. Both will be used to implement pre/async population for atomic allocations. pcpu_chunk_[de]populated() are added to update chunk->populated, chunk->nr_populated and pcpu_nr_empty_pop_pages together. All successful chunk [de]populations should be followed by the corresponding pcpu_chunk_[de]populated() calls. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: make sure chunk->map array has available space

An allocation attempt may require extending chunk->map array which requires GFP_KERNEL context which isn't available for atomic allocations. This patch ensures that chunk->map array usually keeps some amount of available space by directly allocating buffer space during GFP_KERNEL allocations and scheduling async extension during atomic ones. This should make atomic allocation failures from map space exhaustion rare. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: implement [__]alloc_percpu_gfp()

Now that pcpu_alloc_area() can allocate only from populated areas, it's easy to add atomic allocation support to [__]alloc_percpu(). Update pcpu_alloc() so that it accepts @gfp and skips all the blocking operations and allocates only from the populated areas if @gfp doesn't contain GFP_KERNEL. New interface functions [__]alloc_percpu_gfp() are added. While this means that atomic allocations are possible, this isn't complete yet as there's no mechanism to ensure that certain amount of populated areas is kept available and atomic allocations may keep failing under certain conditions. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: indent the population block in pcpu_alloc()

The next patch will conditionalize the population block in pcpu_alloc() which will end up making a rather large indentation change obfuscating the actual logic change. This patch puts the block under "if (true)" so that the next patch can avoid indentation changes. The definitions of the local variables which are used only in the block are moved into the block. This patch is purely cosmetic. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: make pcpu_alloc_area() capable of allocating only from populated areas

Update pcpu_alloc_area() so that it can skip unpopulated areas if the new parameter @pop_only is true. This is implemented by a new function, pcpu_fit_in_area(), which determines the amount of head padding considering the alignment and populated state. @pop_only is currently always false but this will be used to implement atomic allocation. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: restructure locking

At first, the percpu allocator required a sleepable context for both alloc and free paths and used pcpu_alloc_mutex to protect everything. Later, pcpu_lock was introduced to protect the index data structure so that the free path can be invoked from atomic contexts. The conversion only updated what's necessary and left most of the allocation path under pcpu_alloc_mutex. The percpu allocator is planned to add support for atomic allocation and this patch restructures locking so that the coverage of pcpu_alloc_mutex is further reduced.

* pcpu_alloc() now grabs pcpu_alloc_mutex only while creating a new chunk and populating the allocated area. Everything else is now protected solely by pcpu_lock. After this change, multiple instances of pcpu_extend_area_map() may race but the function already implements sufficient synchronization using pcpu_lock. This also allows multiple allocators to arrive at new chunk creation. To avoid creating multiple empty chunks back-to-back, a new chunk is created iff there is no other empty chunk after grabbing pcpu_alloc_mutex.

* pcpu_lock is now held while modifying chunk->populated bitmap. After this, all data structures are protected by pcpu_lock. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: make percpu-km set chunk->populated bitmap properly

percpu-km instantiates the whole chunk on creation and doesn't make use of chunk->populated bitmap and leaves it as zero. While this currently doesn't cause any problem, the inconsistency makes it difficult to build further logic on top of chunk->populated. This patch makes percpu-km fill chunk->populated on creation so that the bitmap is always consistent. Signed-off-by:
Tejun Heo <tj@kernel.org> Acked-by:
Christoph Lameter <cl@linux.com>

percpu: move region iterations out of pcpu_[de]populate_chunk()

Previously, pcpu_[de]populate_chunk() were called with the range which may contain multiple target regions in it and pcpu_[de]populate_chunk() iterated over the regions. This has the benefit of batching up cache flushes for all the regions; however, we're planning to add more bookkeeping logic around [de]population to support atomic allocations and this delegation of iterations gets in the way. This patch moves the region iterations out of pcpu_[de]populate_chunk() into its callers - pcpu_alloc() and pcpu_reclaim() - so that we can later add logic to track more states around them. This change may make cache and tlb flushes more frequent but multi-region [de]populations are rare anyway and if this actually becomes a problem, it's not difficult to factor out cache flushes as separate callbacks which are directly invoked from percpu.c. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: move common parts out of pcpu_[de]populate_chunk()

percpu-vm and percpu-km implement separate versions of pcpu_[de]populate_chunk() and some parts which are or should be common are currently in the specific implementations. Make the following changes.

* Allocated area clearing is moved from the pcpu_populate_chunk() implementations to pcpu_alloc(). This makes percpu-km's version noop.

* Quick exit tests in pcpu_[de]populate_chunk() of percpu-vm are moved to their respective callers so that they are applied to percpu-km too. This doesn't make any meaningful difference as both functions are noop for percpu-km; however, this is more consistent and will help implementing atomic allocation support. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: remove @may_alloc from pcpu_get_pages()

pcpu_get_pages() creates the temp pages array if not already allocated and returns the pointer to it. As the function is called from both [de]population paths and depopulation can only happen after at least one successful population, the param doesn't make any difference - the allocation will always happen on the population path anyway. Remove @may_alloc from pcpu_get_pages(). Also, add a lockdep assertion on pcpu_alloc_mutex instead of vaguely stating that the exclusion is the caller's responsibility. Signed-off-by:
Tejun Heo <tj@kernel.org>

percpu: remove the usage of separate populated bitmap in percpu-vm

percpu-vm uses pcpu_get_pages_and_bitmap() to acquire temp pages array and populated bitmap and uses the two during [de]population. The temp bitmap is used only to build the new bitmap that is copied to chunk->populated after the operation succeeds; however, the new bitmap can be trivially set after success without using the temp bitmap. This patch removes the temp populated bitmap usage from percpu-vm.c.

* pcpu_get_pages_and_bitmap() is renamed to pcpu_get_pages() and no longer hands out the temp bitmap.

* @populated argument is dropped from all the related functions. @populated updates in pcpu_[un]map_pages() are dropped.

* Two loops in pcpu_map_pages() are merged.

* pcpu_[de]populated_chunk() modify chunk->populated bitmap directly from @page_start and @page_end after success. Signed-off-by:
Tejun Heo <tj@kernel.org> Acked-by:
Christoph Lameter <cl@linux.com>

percpu: free percpu allocation info for uniprocessor system

Currently, only SMP systems free the percpu allocation info. Uniprocessor systems should free it too. For example, in one x86 UML virtual machine with 256MB of memory, the UML kernel wastes one page of memory. Signed-off-by:
Honggang Li <enjoymindful@gmail.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org (cherry picked from commit bb2e226b 3189eddb) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
James Ralston authored
This patch adds the IDE mode SATA Device IDs for the Intel 9 Series PCH. Signed-off-by:
James Ralston <james.d.ralston@intel.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org (cherry picked from commit 6cad1376) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
The sys_vendor / product_name are somewhat generic unfortunately, so this may lead to some false positives. But nomux usually does no harm, whereas not having it clearly is causing problems on the Avatar AVIU-145A6. https://bugzilla.kernel.org/show_bug.cgi?id=77391 Cc: stable@vger.kernel.org Reported-by:
Hugo P <saurosii@gmail.com> Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit d2682118) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
https://bugzilla.kernel.org/show_bug.cgi?id=69731 Cc: stable@vger.kernel.org Reported-by:
Jason Robinson <mail@jasonrobinson.me> Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit cc18a69c) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
Adjust Elantech signature validation to account for newer models of touchpads. Cc: stable@vger.kernel.org Reported-and-tested-by:
Màrius Monton <marius.monton@gmail.com> Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit 271329b3) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Dmitry Torokhov authored
ForcePads are found on HP EliteBook 1040 laptops. They lack any kind of physical buttons, instead they generate primary button click when user presses somewhat hard on the surface of the touchpad. Unfortunately they also report primary button click whenever there are 2 or more contacts on the pad, messing up all multi-finger gestures (2-finger scrolling, multi-finger tapping, etc). To cope with this behavior we introduce a delay (currently 50 msecs) in reporting primary press in case more contacts appear. Cc: stable@vger.kernel.org Reviewed-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit 5715fc76) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
John Sung authored
When running a 32-bit inputattach utility on a 64-bit system, it fails with the error "inputattach: can't set device type". This is caused by the serport device driver not supporting compat_ioctl, so the SPIOCSTYPE ioctl fails. Cc: stable@vger.kernel.org Signed-off-by:
John Sung <penmount.touch@gmail.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit a80d8b02) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
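A sketch of the compat handler (hedged; simplified from the upstream change to drivers/input/serio/serport.c, where serport_set_type() is the helper factored out by that patch):

    #ifdef CONFIG_COMPAT
    #define COMPAT_SPIOCSTYPE   _IOW('q', 0x01, compat_ulong_t)

    static long serport_ldisc_compat_ioctl(struct tty_struct *tty,
                                           struct file *file,
                                           unsigned int cmd,
                                           unsigned long arg)
    {
        /* 32-bit userspace encodes SPIOCSTYPE with a 4-byte argument
         * size, so the 64-bit ioctl number never matches; translate. */
        if (cmd == COMPAT_SPIOCSTYPE) {
            unsigned long type;

            if (get_user(type, (compat_ulong_t __user *)compat_ptr(arg)))
                return -EFAULT;
            return serport_set_type(tty, type);
        }
        return -ENOIOCTLCMD;
    }
    #endif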
-
Mikulas Patocka authored
The DM crypt target accesses memory beyond allocated space resulting in a crash on 32 bit x86 systems. This bug is very old (it dates back to 2.6.25 commit 3a7f6c99 "dm crypt: use async crypto"). However, this bug was masked by the fact that kmalloc rounds the size up to the next power of two. This bug wasn't exposed until 3.17-rc1 commit 298a9fa0 ("dm crypt: use per-bio data"). By switching to using per-bio data there was no longer any padding beyond the end of a dm-crypt allocated memory block. To minimize allocation overhead dm-crypt puts several structures into one block allocated with kmalloc. The block holds struct ablkcipher_request, cipher-specific scratch pad (crypto_ablkcipher_reqsize(any_tfm(cc))), struct dm_crypt_request and an initialization vector. The variable dmreq_start is set to offset of struct dm_crypt_request within this memory block. dm-crypt allocates the block with this size: cc->dmreq_start + sizeof(struct dm_crypt_request) + cc->iv_size. When accessing the initialization vector, dm-crypt uses the function iv_of_dmreq, which performs this calculation: ALIGN((unsigned long)(dmreq + 1), crypto_ablkcipher_alignmask(any_tfm(cc)) + 1). dm-crypt allocated "cc->iv_size" bytes beyond the end of dm_crypt_request structure. However, when dm-crypt accesses the initialization vector, it takes a pointer to the end of dm_crypt_request, aligns it, and then uses it as the initialization vector. If the end of dm_crypt_request is not aligned on a crypto_ablkcipher_alignmask(any_tfm(cc)) boundary the alignment causes the initialization vector to point beyond the allocated space. Fix this bug by calculating the variable iv_size_padding and adding it to the allocated size. Also correct the alignment of dm_crypt_request. struct dm_crypt_request is specific to dm-crypt (it isn't used by the crypto subsystem at all), so it is aligned on __alignof__(struct dm_crypt_request). Also align per_bio_data_size on ARCH_KMALLOC_MINALIGN, so that it is aligned as if the block was allocated with kmalloc. Reported-by:
Krzysztof Kolasa <kkolasa@winsoft.pl> Tested-by:
Milan Broz <gmazyland@gmail.com> Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Signed-off-by:
Mike Snitzer <snitzer@redhat.com> (cherry picked from commit d49ec52f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
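A sketch of the corrected size computation (hedged; condensed from the upstream crypt_ctr() change, mirroring the calculation the message above describes):

    /* Pad exactly up to the ablkcipher alignmask when it is smaller
     * than kmalloc's guaranteed alignment; otherwise reserve the full
     * worst-case alignmask of extra space. */
    if (crypto_ablkcipher_alignmask(any_tfm(cc)) < CRYPTO_MINALIGN) {
        iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
                          & crypto_ablkcipher_alignmask(any_tfm(cc));
    } else {
        iv_size_padding = crypto_ablkcipher_alignmask(any_tfm(cc));
    }

    cc->per_bio_data_size = ti->per_bio_data_size =
        ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start +
              iv_size_padding + cc->iv_size, ARCH_KMALLOC_MINALIGN);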
-
Thomas Gleixner authored
futex_wait_requeue_pi() calls futex_wait_setup(). If futex_wait_setup() succeeds it returns with hb->lock held and preemption disabled. Now the sanity check after this does: if (match_futex(&q.key, &key2)) { ret = -EINVAL; goto out_put_keys; } which releases the keys but does not release hb->lock. So we happily return to user space with hb->lock held and therefor preemption disabled. Unlock hb->lock before taking the exit route. Reported-by:
Dave "Trinity" Jones <davej@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Darren Hart <dvhart@linux.intel.com> Reviewed-by:
Davidlohr Bueso <dave@stgolabs.net> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1409112318500.4178@nanos
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> (cherry picked from commit 13c42c2f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
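The one-line shape of the fix (hedged; from futex_wait_requeue_pi() in kernel/futex.c):

    /* After futex_wait_setup() returned with hb->lock held: */
    if (match_futex(&q.key, &key2)) {
        queue_unlock(hb);   /* previously missing: drop hb->lock */
        ret = -EINVAL;
        goto out_put_keys;
    }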
-
Mike Christie authored
commit db9bfd64 upstream. This patch fixes a potential buffer overrun in __iscsi_conn_send_pdu. This function is used by iscsi drivers and userspace to send iscsi PDUs/commands. For login commands, we have a set buffer size. For all other commands we do not support data buffers. This was reported by Dan Carpenter here: http://www.spinics.net/lists/linux-scsi/msg66838.html
Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Mike Christie <michaelc@cs.wisc.edu> Reviewed-by:
Sagi Grimberg <sagig@mellanox.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
James Bottomley <JBottomley@Parallels.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 8e36f1f3) (cherry picked from commit HEAD) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Nicholas Bellinger authored
This patch fixes a bug in iscsit_logout_post_handler_diffcid() where a pointer used as storage for list_for_each_entry() was incorrectly being used to determine if no matching entry had been found. This patch changes iscsit_logout_post_handler_diffcid() to key off bool conn_found to determine if the function needs to exit early. Reported-by:
Joern Engel <joern@logfs.org> Cc: <stable@vger.kernel.org> # v3.1+ Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> (cherry picked from commit b53b0d99) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
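A sketch of the list_for_each_entry() pitfall and the fix (hedged; simplified from iscsit_logout_post_handler_diffcid()):

    struct iscsi_conn *l_conn;
    bool conn_found = false;

    spin_lock_bh(&sess->conn_lock);
    list_for_each_entry(l_conn, &sess->sess_conn_list, conn_list) {
        if (l_conn->cid == cid) {
            iscsit_inc_conn_usage_count(l_conn);
            conn_found = true;
            break;
        }
    }
    spin_unlock_bh(&sess->conn_lock);

    /* l_conn is never NULL after the loop, even with no match: it
     * points at the list head cast to an entry. Key off the flag. */
    if (!conn_found)
        return;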
-
Joern Engel authored
In iscsi_copy_param_list() a failed iscsi_param_list memory allocation currently invokes iscsi_release_param_list() to cleanup, and will promptly trigger a NULL pointer dereference. Instead, go ahead and return for the first iscsi_copy_param_list() failure case. Found by coverity. Signed-off-by:
Joern Engel <joern@logfs.org> Cc: <stable@vger.kernel.org> # v3.1+ Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> (cherry picked from commit 8ae757d0) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
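The shape of the fix (hedged; simplified from iscsi_copy_param_list() in the iscsi target parameter code):

    param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
    if (!param_list) {
        pr_err("Unable to allocate memory for struct iscsi_param_list.\n");
        return -1;   /* was: goto err_out, which called
                      * iscsi_release_param_list(NULL) and crashed */
    }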
-
Srinivas Pandruvada authored
This can result in a wrong reference count for the trigger device; call iio_trigger_get to increment the reference. Refer to http://www.spinics.net/lists/linux-iio/msg13669.html for discussion with Jonathan. Signed-off-by:
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by:
Lars-Peter Clausen <lars@metafoo.de> Signed-off-by:
Jonathan Cameron <jic23@kernel.org> Cc: Stable@vger.kernel.org (cherry picked from commit 9e5846be) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
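A sketch of the corrected assignment pattern shared by these driver fixes (hedged; iio_trigger_get() bumps the reference count and, after the helper change covered by a later entry in this list, returns the trigger so the call can be made inline):

    /* before: indio_dev->trig = trig;          reference never taken */
    indio_dev->trig = iio_trigger_get(trig);    /* take a reference   */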
-
Srinivas Pandruvada authored
This can result in a wrong reference count for the trigger device; call iio_trigger_get to increment the reference. Refer to http://www.spinics.net/lists/linux-iio/msg13669.html for discussion with Jonathan. Signed-off-by:
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by:
Lars-Peter Clausen <lars@metafoo.de> Signed-off-by:
Jonathan Cameron <jic23@kernel.org> Cc: Stable@vger.kernel.org (cherry picked from commit 04950811) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Srinivas Pandruvada authored
Instead of a void function, return the trigger pointer. Whilst not in and of itself a fix, this makes the following set of 7 fixes cleaner than they would otherwise be. Signed-off-by:
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by:
Jonathan Cameron <jic23@kernel.org> Cc: Stable@vger.kernel.org (cherry picked from commit f1535665) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-