- 01 Jul, 2014 40 commits
-
Nicholas Bellinger authored
commit a95d6511 upstream. This patch fixes a bug where multiple waiters on ->t_transport_stop_comp occur when a concurrent ABORT_TASK and session reset both invoke transport_wait_for_tasks() while waiting for the associated se_cmd descriptor's backend processing to complete. For this case, complete_all() should be invoked in order to wake up both waiters in the core_tmr_abort_task() + transport_generic_free_cmd() process contexts. Cc: Thomas Glanzmann <thomas@glanzmann.de> Cc: Charalampos Pournaris <charpour@gmail.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
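A minimal illustration of the completion semantics at issue (schematic only, not the target-core patch): with two contexts blocked on the same struct completion, complete() wakes a single waiter, while complete_all() wakes every waiter.

        #include <linux/completion.h>

        static DECLARE_COMPLETION(stop_comp);

        /* Two contexts, e.g. ABORT_TASK and session reset, both block here. */
        static void waiter(void)
        {
                wait_for_completion(&stop_comp);
        }

        /* Called once backend processing of the descriptor has finished. */
        static void finish(void)
        {
                /* complete(&stop_comp);   would wake only one of the two waiters */
                complete_all(&stop_comp);  /* wakes both waiters */
        }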
-
Nicholas Bellinger authored
commit f15e9cd9 upstream. This patch fixes a bug where se_cmd descriptors associated with a Task Management Request (TMR) were not setting CMD_T_ACTIVE before being dispatched into target_tmr_work() process context. This is required in order for transport_generic_free_cmd() -> transport_wait_for_tasks() to wait on se_cmd->t_transport_stop_comp if a session reset event occurs while an ABORT_TASK is outstanding waiting for another I/O to complete. Cc: Thomas Glanzmann <thomas@glanzmann.de> Cc: Charalampos Pournaris <charpour@gmail.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sagi Grimberg authored
commit f5ebec96 upstream. disconnected_handler works are scheduled on the system_wq. When attempting to unload, first make sure all pending works have completed. Signed-off-by:
Sagi Grimberg <sagig@mellanox.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sagi Grimberg authored
commit 88c4015f upstream. There are 4 RDMA_CM events that all basically mean that the user should tear down the IB connection:

 - DISCONNECTED
 - ADDR_CHANGE
 - DEVICE_REMOVAL
 - TIMEWAIT_EXIT

Only for DISCONNECTED/ADDR_CHANGE does it make sense to call rdma_disconnect (send DREQ/DREP to our initiator). So we keep the same teardown handler for all of them, but only indicate calling rdma_disconnect for the relevant events. This patch also removes redundant debug prints for each single event.

v2 changes:
 - Call isert_disconnected_handler() for DEVICE_REMOVAL (Or + Sag)

Signed-off-by:
Sagi Grimberg <sagig@mellanox.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
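A hedged sketch of the event routing described above (the teardown helper is a hypothetical stand-in, not the actual isert code): one common teardown path for all four events, with rdma_disconnect() only for the events where sending DREQ/DREP makes sense.

        #include <rdma/rdma_cm.h>

        /* Hypothetical common teardown helper (stub for illustration only). */
        static void teardown_connection(struct rdma_cm_id *cma_id)
        {
                /* release the resources tied to this connection */
        }

        static int cma_event_sketch(struct rdma_cm_id *cma_id,
                                    struct rdma_cm_event *event)
        {
                switch (event->event) {
                case RDMA_CM_EVENT_DISCONNECTED:
                case RDMA_CM_EVENT_ADDR_CHANGE:
                        rdma_disconnect(cma_id);  /* send DREQ/DREP to the initiator */
                        /* fall through to the common teardown */
                case RDMA_CM_EVENT_DEVICE_REMOVAL:
                case RDMA_CM_EVENT_TIMEWAIT_EXIT:
                        teardown_connection(cma_id);
                        break;
                default:
                        break;
                }
                return 0;
        }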
-
Sagi Grimberg authored
commit 9d49f5e2 upstream. In ungraceful teardowns the isert close flows seem racy, such that isert_wait_conn hangs because RDMA_CM_EVENT_DISCONNECTED never gets invoked (no one called rdma_disconnect). Both graceful and ungraceful teardowns will have rx flush errors (isert posts a batch once the connection is established). Once all flush errors are consumed we invoke isert_wait_conn, and it will be responsible for calling rdma_disconnect. This way it can be sure that rdma_disconnect was called and it won't wait forever. This patch also removes the logout_posted indicator: either the logout completion was consumed and there is no problem decrementing the post_send_buf_count, or it was consumed as a flush error. There is no point in keeping it for isert_wait_conn, as there is no danger that isert_conn will be accidentally removed while it is running. (Drop unnecessary sleep_on_conn_wait_comp check in isert_cq_rx_comp_err - nab) Signed-off-by:
Sagi Grimberg <sagig@mellanox.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sagi Grimberg authored
commit e346ab34 upstream. If the np_thread state is RESET/SHUTDOWN/EXIT, there is no point in isert stalling there, as it may hang if no one ever wakes it up later. Signed-off-by:
Sagi Grimberg <sagig@mellanox.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jukka Taimisto authored
commit 8a96f3cd upstream.

-[0x01 Introduction

We have found a programming error causing a deadlock in the Bluetooth subsystem of the Linux kernel. The problem is caused by a missing release_sock() call when L2CAP connection creation fails due to a full accept queue. The issue can be reproduced with the 3.15-rc5 kernel and is also present in earlier kernels.

-[0x02 Details

The problem occurs when multiple L2CAP connections are created to a PSM which contains a listening socket (like SDP) and left pending in, for example, the configuration state (the underlying ACL link is not disconnected between connections). When an L2CAP connection request is received and a listening socket is found, the l2cap_sock_new_connection_cb() function (net/bluetooth/l2cap_sock.c) is called. This function locks the 'parent' socket and then checks if the accept queue is full.

1178         lock_sock(parent);
1179
1180         /* Check for backlog size */
1181         if (sk_acceptq_is_full(parent)) {
1182                 BT_DBG("backlog full %d", parent->sk_ack_backlog);
1183                 return NULL;
1184         }

In case the accept queue is full, NULL is returned, but the 'parent' socket is not released. Thus when the next L2CAP connection request is received, the code blocks on lock_sock() since the parent is still locked.

Also note that for connections already established and waiting for configuration to complete, a timeout will occur and l2cap_chan_timeout() (net/bluetooth/l2cap_core.c) will be called. All threads calling this function will also be blocked waiting for the channel mutex, since the thread which is waiting on lock_sock() already holds the channel mutex.

We were able to reproduce this by continuously sending L2CAP connection requests followed by disconnection requests containing an invalid CID. This left the created connections pending configuration. After the deadlock occurs it is impossible to kill bluetoothd, btmon will not get any more data, etc., requiring a reboot to recover.

-[0x03 Fix

Releasing the 'parent' socket when l2cap_sock_new_connection_cb() returns NULL seems to fix the issue. Signed-off-by:
Jukka Taimisto <jtt@codenomicon.com> Reported-by:
Tommi Mäkilä <tmakila@codenomicon.com> Signed-off-by:
Johan Hedberg <johan.hedberg@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
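A minimal sketch of the fix described under 0x03 above (not necessarily the exact upstream diff), inside l2cap_sock_new_connection_cb():

        lock_sock(parent);

        /* Check for backlog size */
        if (sk_acceptq_is_full(parent)) {
                BT_DBG("backlog full %d", parent->sk_ack_backlog);
                release_sock(parent);           /* the release that was missing */
                return NULL;
        }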
-
Jukka Rissanen authored
commit 62bbd5b3 upstream. The universal/local bit handling was incorrectly done in the code. So when setting the EUI address from the BD address we do this:

 - If the BD address type is PUBLIC, then we clear the universal bit in the EUI address. If the address type is RANDOM, then the universal bit is set (BT 6lowpan draft chapter 3.2.2)
 - After this we invert the universal/local bit according to RFC 2464

When figuring out the BD address we do the reverse:

 - Take the EUI address from the stateless IPv6 address and invert the universal/local bit according to RFC 2464
 - If the universal bit is 1 in this modified EUI address, then the address type is set to RANDOM, otherwise it is PUBLIC

Note that 6lowpan_iphc.[ch] does the final toggling of the U/L bit before sending or receiving the network packet. Signed-off-by:
Jukka Rissanen <jukka.rissanen@linux.intel.com> Signed-off-by:
Marcel Holtmann <marcel@holtmann.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
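A small illustration of the RFC 2464 universal/local bit handling referred to above (schematic helper, not the driver code): the U/L bit is bit 1 of the first byte of the EUI-64 and is inverted when forming an IPv6 interface identifier.

        #include <linux/types.h>

        /* Toggle the universal/local bit of an 8-byte EUI-64 (RFC 2464). */
        static void toggle_ul_bit(u8 eui[8])
        {
                eui[0] ^= 0x02;
        }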
-
Felipe Balbi authored
commit da64c27d upstream. LDISCs shouldn't call tty->ops->write() from within ->write_wakeup(). ->write_wakeup() is called with port lock taken and IRQs disabled, tty->ops->write() will try to acquire the same port lock and we will deadlock. Acked-by:
Marcel Holtmann <marcel@holtmann.org> Reviewed-by:
Peter Hurley <peter@hurleysoftware.com> Reported-by:
Huang Shijie <b32955@freescale.com> Signed-off-by:
Felipe Balbi <balbi@ti.com> Tested-by:
Andreas Bießmann <andreas@biessmann.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chander Kashyap authored
commit 086abb58 upstream. In the of_init_opp_table function, if a failure to add an OPP is detected, the count of OPPs yet to be added is not updated. Fix this by decrementing this count on failure as well. Signed-off-by:
Chander Kashyap <k.chander@samsung.com> Signed-off-by:
Inderpal Singh <inderpal.s@samsung.com> Acked-by:
Viresh Kumar <viresh.kumar@linaro.org> Acked-by:
Nishanth Menon <nm@ti.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
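A hedged sketch of the loop behaviour described (variable names and the surrounding structure are assumptions, not the actual of_init_opp_table code): the count of remaining cells must be decremented even when dev_pm_opp_add() fails, otherwise the walk never terminates.

        #include <linux/device.h>
        #include <linux/pm_opp.h>

        /* Hypothetical parsing loop; "nr" counts the remaining property cells. */
        static int init_opp_table_sketch(struct device *dev, const __be32 *val, int nr)
        {
                while (nr) {
                        unsigned long freq = be32_to_cpup(val++) * 1000;
                        unsigned long volt = be32_to_cpup(val++);

                        if (dev_pm_opp_add(dev, freq, volt))
                                dev_warn(dev, "failed to add OPP %lu Hz\n", freq);

                        nr -= 2;        /* decrement on failure as well as on success */
                }
                return 0;
        }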
-
Jianguo Wu authored
commit 86f40622 upstream. When LPAE and big-endian are enabled on a hisilicon board, while specifying mem=384M mem=512M@7680M, we get a bad page state:

Freeing unused kernel memory: 180K (c0466000 - c0493000)
BUG: Bad page state in process init  pfn:fa442
page:c7749840 count:0 mapcount:-1 mapping:  (null) index:0x0
page flags: 0x40000400(reserved)
Modules linked in:
CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
[<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
[<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
[<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
[<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
[<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
[<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
[<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
[<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
[<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)

The bad pfn:fa442 is not system memory (mem=384M mem=512M@7680M). After debugging, I found that in the page fault handler we read a wrong pfn back from the pte just after setting it, as follows:

do_anonymous_page()
{
        ...
        set_pte_at(mm, address, page_table, entry);

        //debug code
        pfn = pte_pfn(entry);
        pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));

        //read out the pte just set
        new_pte = pte_offset_map(pmd, address);
        new_pfn = pte_pfn(*new_pte);
        pr_info("new pfn:0x%lx, new pte:0x%llx\n", pfn, pte_val(entry));
        ...
}

pfn:     0x1fa4f5, pte:     0xc00001fa4f575f
new_pfn: 0xfa4f5,  new_pte: 0xc00000fa4f5f5f    //new pfn/pte is wrong.

The bug happens in cpu_v7_set_pte_ext(ptep, pte): An LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers. On an LE kernel, r2 contains the LSB of the PTE, and r3 the MSB. On a BE kernel, the assignment is reversed. Unfortunately, the current code always assumes the LE case, leading to corruption of the PTE when clearing/setting bits. This patch fixes this issue much like it has already been done for the cpu_v7_switch_mm case. Signed-off-by:
Jianguo Wu <wujianguo@huawei.com> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Acked-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Russell King authored
commit 3683f44c upstream. While debugging the FEC ethernet driver using stacktrace, it was noticed that the stacktraces always begin as follows:

[<c00117b4>] save_stack_trace_tsk+0x0/0x98
[<c0011870>] save_stack_trace+0x24/0x28
...

This is because the stack trace code includes the stack frames for itself. This is incorrect behaviour, and also leads to "skip" doing the wrong thing (which is the number of stack frames to avoid recording.) Perversely, it does the right thing when passed a non-current thread. Fix this by ensuring that we have a known constant number of frames above the main stack trace function, and always skip these. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans Verkuil authored
commit 17e7f1b5 upstream. This solves this bug: https://bugzilla.kernel.org/show_bug.cgi?id=73361 The problem is that when you quit tvtime it calls STREAMOFF, but then it queues a bunch of buffers for no good reason before closing the file descriptor. In the past closing the fd would free the vb queue since that was part of the file handle struct. Since that was moved to the global struct that no longer happened. This wouldn't be a problem, but the extra QBUF calls that tvtime does meant that the buffer list in videobuf (q->stream) contained buffers, so REQBUFS would fail with -EBUSY. The solution is to init the list head explicitly when releasing the file descriptor and to not free the video resource when calling streamoff. The real fix will hopefully go into kernel 3.16 when the vb2 conversion is merged. Basically the saa7134 driver with the old videobuf is so full of holes it ain't funny anymore, so consider this a band-aid for kernels 3.14 and 15. Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pali Rohár authored
commit 5d60122b upstream. This patch fixes an off-by-one check in bcm2048_set_region(). Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Pali Rohár <pali.rohar@gmail.com> Signed-off-by:
Pavel Machek <pavel@ucw.cz> Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Olivier Langlois authored
commit 3b35fc81 upstream. Timestamps in v4l2 buffers returned to userspace are updated in uvc_video_clock_update(), which uses timestamps fetched in uvc_video_clock_decode() by unconditionally calling ktime_get_ts(). Hence setting the module clock parameter to realtime had no effect before this patch. This has been tested with ffmpeg:

ffmpeg -y -f v4l2 -input_format yuyv422 -video_size 640x480 -framerate 30 -i /dev/video0 \
       -f alsa -acodec pcm_s16le -ar 16000 -ac 1 -i default \
       -c:v libx264 -preset ultrafast \
       -c:a libfdk_aac \
       out.mkv

and inspecting the v4l2 input starting timestamp. Signed-off-by:
Olivier Langlois <olivier@trillion01.com> Signed-off-by:
Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
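A hedged sketch of the behaviour the patch is after (function and parameter names are placeholders, not the uvcvideo code): pick the realtime clock for the host timestamp only when the module parameter asks for it, instead of always calling ktime_get_ts().

        #include <linux/time.h>
        #include <linux/types.h>

        /* Placeholder helper; the driver's actual code differs. */
        static void sample_host_ts(struct timespec *ts, bool want_realtime)
        {
                if (want_realtime)
                        ktime_get_real_ts(ts);  /* CLOCK_REALTIME, honouring the module param */
                else
                        ktime_get_ts(ts);       /* CLOCK_MONOTONIC, the previous unconditional choice */
        }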
-
Thomas Gleixner authored
commit 27e35715 upstream. When the rtmutex fast path is enabled, the slow unlock function can create the following situation:

spin_lock(foo->m->wait_lock);
foo->m->owner = NULL;
                        rt_mutex_lock(foo->m); <-- fast path
                        free = atomic_dec_and_test(foo->refcnt);
                        rt_mutex_unlock(foo->m); <-- fast path
                        if (free)
                                kfree(foo);
spin_unlock(foo->m->wait_lock); <--- Use after free.

Plug the race by changing the slow unlock to the following scheme:

while (!rt_mutex_has_waiters(m)) {
        /* Clear the waiters bit in m->owner */
        clear_rt_mutex_waiters(m);
        owner = rt_mutex_owner(m);
        spin_unlock(m->wait_lock);
        if (cmpxchg(m->owner, owner, 0) == owner)
                return;
        spin_lock(m->wait_lock);
}

So in case of a new waiter incoming while the owner tries the slow path unlock, we have two situations:

unlock(wait_lock);
                        lock(wait_lock);
cmpxchg(p, owner, 0) == owner
                        mark_rt_mutex_waiters(lock);
                        acquire(lock);

Or:

unlock(wait_lock);
                        lock(wait_lock);
mark_rt_mutex_waiters(lock);
cmpxchg(p, owner, 0) != owner
                        enqueue_waiter();
                        unlock(wait_lock);
lock(wait_lock);
wakeup_next_waiter();
unlock(wait_lock);
                        lock(wait_lock);
                        acquire(lock);

If the fast path is disabled, then the simple

m->owner = NULL;
unlock(m->wait_lock);

is sufficient, as all access to m->owner is serialized via m->wait_lock. Also document and clarify the wakeup_next_waiter function as suggested by Oleg Nesterov. Reported-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140611183852.937945560@linutronix.de Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 3d5c9340 upstream. Even in the case when deadlock detection is not requested by the caller, we can detect deadlocks. Right now the code stops the lock chain walk and keeps the waiter enqueued, even on itself. Silly not to yell when such a scenario is detected and to keep the waiter enqueued. Return -EDEADLK unconditionally and handle it at the call sites. The futex calls return -EDEADLK. The non futex ones dequeue the waiter, throw a warning and put the task into a schedule loop. Tagged for stable as it makes the code more robust. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Brad Mouring <bmouring@ni.com> Link: http://lkml.kernel.org/r/20140605152801.836501969@linutronix.de Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 82084984 upstream. When we walk the lock chain, we drop all locks after each step. So the lock chain can change under us before we reacquire the locks. That's harmless in principle as we just follow the wrong lock path. But it can lead to a false positive in the deadlock detection logic:

T0 holds L0
T0 blocks on L1 held by T1
T1 blocks on L2 held by T2
T2 blocks on L3 held by T3
T3 blocks on L4 held by T4

Now we walk the chain:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
lock T2 -> adjust T2 -> drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.

Brad tried to work around that in the deadlock detection logic itself, but the more I looked at it the less I liked it, because it's crystal ball magic after the fact. We actually can detect a chain change very simply:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
lock T2 -> adjust T2 -> next_lock = T2->pi_blocked_on->lock;
drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 ->
if (next_lock != T2->pi_blocked_on->lock)
        return;

So if we detect that T2 is now blocked on a different lock we stop the chain walk. That's also correct in the following scenario:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
lock T2 -> adjust T2 -> next_lock = T2->pi_blocked_on->lock;
drop locks

T3 times out and drops L3
T2 acquires L3 and blocks on L4 now

Now we continue:

lock T2 ->
if (next_lock != T2->pi_blocked_on->lock)
        return;

We don't have to follow up the chain at that point, because T2 propagated our priority up to T4 already. [ Folded a cleanup patch from peterz ] Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reported-by:
Brad Mouring <bmouring@ni.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140605152801.930031935@linutronix.de Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lv Zheng authored
commit 73577d1d upstream. This patch fixes the following issue: if the DSDT is customized, no local DSDT copy is needed. References: https://bugzilla.kernel.org/show_bug.cgi?id=69711 Signed-off-by:
Enrico Etxe Arte <goitizena.generoa@gmail.com> Signed-off-by:
Lv Zheng <lv.zheng@intel.com> [rjw: Subject] Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Binderman authored
commit 5d42b0fa upstream. ACPICA BZ 1077. David Binderman. References: https://bugs.acpica.org/show_bug.cgi?id=1077 Signed-off-by:
David Binderman <dcb314@hotmail.com> Signed-off-by:
Bob Moore <robert.moore@intel.com> Signed-off-by:
Lv Zheng <lv.zheng@intel.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bjørn Mork authored
commit 45fef5b8 upstream. Commit 1a699476 ("ACPI / hotplug / PCI: Hotplug notifications from acpi_bus_notify()") added debug messages for a few common events. These debug messages are unconditionally enabled if CONFIG_DYNAMIC_DEBUG is defined, contrary to the documented meaning, making the ACPI system spew lots of unwanted noise on any kernel with dynamic debugging. The bug was introduced by commit fbfddae6 ("ACPI: Add acpi_handle_<level>() interfaces"), which added the CONFIG_DYNAMIC_DEBUG dependency without respecting its meaning. Fix by adding real support for dynamic_debug. Fixes: fbfddae6 ("ACPI: Add acpi_handle_<level>() interfaces") Signed-off-by:
Bjørn Mork <bjorn@mork.no> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ezequiel Garcia authored
commit 85ac1a17 upstream. Currently stk1160_read_reg() uses a stack-allocated char to receive the read control value. This is wrong because usb_control_msg() requires a kmalloc-ed buffer. This commit fixes the issue by kmalloc'ing a 1-byte buffer to receive the read value. While here, remove the urb_buf array, which was meant for a similar purpose but never really used. Cc: Alan Stern <stern@rowland.harvard.edu> Reported-by:
Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by:
Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
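A hedged sketch of the pattern described (the request and value constants are placeholders, not the stk1160 register protocol): usb_control_msg() must be handed a kmalloc'ed buffer, never a variable on the stack.

        #include <linux/slab.h>
        #include <linux/usb.h>

        static int read_reg_sketch(struct usb_device *udev, u16 reg, u8 *value)
        {
                u8 *buf = kmalloc(1, GFP_KERNEL);       /* DMA-able, unlike a stack char */
                int ret;

                if (!buf)
                        return -ENOMEM;

                ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                                      0x00,             /* placeholder request */
                                      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                      0x00, reg, buf, 1, 1000);
                if (ret >= 0)
                        *value = *buf;

                kfree(buf);
                return ret < 0 ? ret : 0;
        }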
-
Takashi Iwai authored
commit deb29e90 upstream. When the ivtv PCM device is accessed in a state where no firmware is loaded, it oopses like:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
IP: [<ffffffffa049a881>] try_mailbox.isra.0+0x11/0x50 [ivtv]
Call Trace:
[<ffffffffa049aa20>] ivtv_api_call+0x160/0x6b0 [ivtv]
[<ffffffffa049af86>] ivtv_api+0x16/0x40 [ivtv]
[<ffffffffa049b10c>] ivtv_vapi+0xac/0xc0 [ivtv]
[<ffffffffa049d40d>] ivtv_start_v4l2_encode_stream+0x19d/0x630 [ivtv]
[<ffffffffa0530653>] snd_ivtv_pcm_capture_open+0x173/0x1c0 [ivtv_alsa]
[<ffffffffa04526f1>] snd_pcm_open_substream+0x51/0x100 [snd_pcm]
[<ffffffffa0452853>] snd_pcm_open+0xb3/0x260 [snd_pcm]
[<ffffffffa0452a37>] snd_pcm_capture_open+0x37/0x50 [snd_pcm]
[<ffffffffa033f557>] snd_open+0xa7/0x1e0 [snd]
[<ffffffff8118a628>] chrdev_open+0x88/0x1d0
[<ffffffff811840be>] do_dentry_open+0x1de/0x270
[<ffffffff81193a73>] do_last+0x1c3/0xec0
[<ffffffff81194826>] path_openat+0xb6/0x670
[<ffffffff81195b65>] do_filp_open+0x35/0x80
[<ffffffff81185449>] do_sys_open+0x129/0x210
[<ffffffff815b782d>] system_call_fastpath+0x1a/0x1f

This patch adds a firmware check to the PCM open callback, like the other open callbacks of this driver. Bugzilla: https://apibugzilla.novell.com/show_bug.cgi?id=875440 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit c14829fa upstream. Only call usb_autopm_put_interface() if the corresponding usb_autopm_get_interface() was successful. This prevents a potential runtime PM counter imbalance should usb_autopm_get_interface() fail. Note that the USB PM usage counter is reset when the interface is unbound, but that the runtime PM counter may be left unbalanced. Also add comment on why we don't need to worry about racing resume/suspend on autopm_get failures. Fixes: d5fd650c ("usb: serial: prevent suspend/resume from racing against probe/remove") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
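A small sketch of the balancing rule the fix enforces (schematic, not the usb-serial code): only drop the PM reference if the matching get actually succeeded.

        #include <linux/usb.h>

        static void do_io_sketch(struct usb_interface *intf)
        {
                int autopm = usb_autopm_get_interface(intf);

                /* ... perform the I/O that required the device to be resumed ... */

                if (!autopm)            /* only put when the matching get succeeded */
                        usb_autopm_put_interface(intf);
        }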
-
Aleksander Morgado authored
commit 0ce5fb58 upstream. A set of new VID/PIDs retrieved from the out-of-tree GobiNet/GobiSerial Sierra Wireless drivers. Signed-off-by:
Aleksander Morgado <aleksander@aleksander.es> Link: http://marc.info/?l=linux-usb&m=140136310027293&w=2 Cc: <stable@vger.kernel.org> # backport in link above Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aleksander Morgado authored
commit ff1fcd50 upstream. Signed-off-by:
Aleksander Morgado <aleksander@aleksander.es> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 80cc0fcb upstream. Make sure that needs_remote_wakeup is always set when there are open ports. Currently close() would unconditionally set needs_remote_wakeup to 0 even though there might still be open ports. This could lead to blocked input and possibly dropped data on devices that do not support remote wakeup (and which must therefore not be runtime suspended while open). Add an open_ports counter (protected by the susp_lock) and only clear needs_remote_wakeup when the last port is closed. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
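A hedged sketch of the counting scheme described (the struct and its fields are assumptions modelled on the commit text, not the actual sierra driver code): needs_remote_wakeup is only cleared once the last open port is closed.

        #include <linux/usb.h>
        #include <linux/usb/serial.h>

        /* Hypothetical per-interface bookkeeping, per the commit text. */
        struct intf_private_sketch {
                spinlock_t susp_lock;
                unsigned int open_ports;
        };

        static void close_sketch(struct usb_serial_port *port,
                                 struct intf_private_sketch *intfdata)
        {
                spin_lock_irq(&intfdata->susp_lock);
                if (--intfdata->open_ports == 0)
                        port->serial->interface->needs_remote_wakeup = 0;
                spin_unlock_irq(&intfdata->susp_lock);
        }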
-
Johan Hovold authored
commit 014333f7 upstream. The delayed-write queue was never emptied on disconnect, something which would lead to leaked urbs and transfer buffers if the device is disconnected before being runtime resumed due to a write. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 7fdd26a0 upstream. Neither the transfer buffer nor the urb itself was released in the resume error path for delayed writes. Also on errors, the remainder of the queue was not even processed, which leads to further urb and buffer leaks. The same error path also failed to balance the outstanding-urb counter, something which results in degraded throughput or completely blocked writes. Fix this by releasing urb and buffer and balancing counters on errors, and by always processing the whole queue even when submission of one urb fails. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 8452727d upstream. Fix use after free or NULL-pointer dereference during suspend and resume. The port data may never have been allocated (port probe failed) or may already have been released by port_remove (e.g. driver is unloaded) when suspend and resume are called. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 353fe198 upstream. Fix AA deadlock in open error path that would call close() and try to grab the already held disc_mutex. Fixes: b9a44bc1 ("sierra: driver urb handling improvements") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit fb7ad4f9 upstream. Keep trying to submit urbs rather than bail out on first read-urb submission error, which would also prevent I/O for any further ports from being resumed. Instead keep an error count, for all types of failed submissions, and let USB core know that something went wrong. Also make sure to always clear the suspended flag. Currently a failed read-urb submission would prevent cached writes as well as any subsequent writes from being submitted until next suspend-resume cycle, something which may not even necessarily happen. Note that USB core currently only logs an error if an interface resume failed. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 9096f1fb upstream. The interrupt urb was submitted unconditionally at resume, something which could lead to a NULL-pointer dereference in the urb completion handler as resume may be called after the port and port data is gone. Fix this by making sure the interrupt urb is only submitted and active when the port is open. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 79eed03e upstream. The delayed-write queue was never emptied at shutdown (close), something which could lead to leaked urbs if the port is closed before being runtime resumed due to a write. When this happens the output buffer would not drain on close (closing_wait timeout), and after consecutive opens, writes could be corrupted with previously buffered data, transferred with reduced throughput or completely blocked. Note that unbusy_queued_urb() was simply moved out of CONFIG_PM. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 170fad9e upstream. Fix race between write() and suspend() which could lead to writes being dropped (or I/O while suspended) if the device is runtime suspended while a write request is being processed. Specifically, suspend() releases the susp_lock after determining the device is idle but before setting the suspended flag, thus leaving a window where a concurrent write() can submit an urb. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
xiao jin authored
commit d9e93c08 upstream. We found a race between write and resume: usb_wwan_resume runs play_delayed() and drops the spinlock, but intfdata->suspended has not yet been set to zero. At this point usb_wwan_write is called and anchors the urb to the delayed list. Resume then keeps running, but the delayed urb has no chance to be submitted until the next resume. If the next resume is far away, the tty will be blocked in tty_wait_until_sent the whole time. The race can also lead to writes being reordered. This patch puts play_delayed() and the clearing of intfdata->suspended inside the same spinlock region, to avoid the write race during resume. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
xiao jin <jin.xiao@intel.com> Signed-off-by:
Zhang, Qi1 <qi1.zhang@intel.com> Reviewed-by:
David Cohen <david.a.cohen@linux.intel.com> Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
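A hedged sketch of the ordering change described (the struct, its fields, and the helper follow the commit text but are assumptions about the exact usb_wwan code): submit the delayed urbs and clear the suspended flag under one hold of the spinlock, so a concurrent write() cannot slip in between the two steps.

        #include <linux/usb.h>
        #include <linux/usb/serial.h>

        /* Hypothetical per-interface state, per the commit text. */
        struct wwan_intf_private_sketch {
                spinlock_t susp_lock;
                unsigned int suspended:1;
        };

        /* Hypothetical helper that submits the urbs queued while suspended. */
        static void play_delayed_sketch(struct usb_serial_port *port)
        {
        }

        static int resume_sketch(struct usb_serial *serial,
                                 struct wwan_intf_private_sketch *intfdata)
        {
                int i;

                spin_lock_irq(&intfdata->susp_lock);
                for (i = 0; i < serial->num_ports; i++)
                        play_delayed_sketch(serial->port[i]);
                intfdata->suspended = 0;   /* cleared under the same lock as the replay */
                spin_unlock_irq(&intfdata->susp_lock);

                return 0;
        }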
-
xiao jin authored
commit db090473 upstream. When usb serial is enabled for modem data, the tty is sometimes blocked in tty_wait_until_sent because portdata->out_busy stays set and never gets cleared. We found a bug in the write error path: usb_wwan_write first sets portdata->out_busy, then tries autopm async, which can fail. In that case no out urb is submitted and no usb_wwan_outdat_callback happens for this write, so portdata->out_busy can't be cleared. This patch clears portdata->out_busy if the autopm async attempt in usb_wwan_write fails. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by:
xiao jin <jin.xiao@intel.com> Signed-off-by:
Zhang, Qi1 <qi1.zhang@intel.com> Reviewed-by:
David Cohen <david.a.cohen@linux.intel.com> Signed-off-by:
Johan Hovold <jhovold@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit 972754cf upstream. I had occasional screen corruption with the matrox framebuffer driver and I found out that the reason for the corruption is that the hardware blitter accesses the videoram while it is being written to. The matrox driver has a macro WaitTillIdle() that should wait until the blitter is idle, but it sometimes doesn't work. I added a dummy read mga_inl(M_STATUS) to WaitTillIdle() to fix the problem. The dummy read will flush the write buffer in the PCI chipset, and the next read of M_STATUS will return the hardware status. Since applying this patch, I had no screen corruption at all. Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Signed-off-by:
Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maurizio Lombardi authored
commit b5b60778 upstream. The variable "size" is expressed as a number of blocks and not as a number of clusters; this could trigger a kernel panic when using ext4 with a cluster size different from the block size. Signed-off-by:
Maurizio Lombardi <mlombard@redhat.com> Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit eeece469 upstream. The tail of a page straddling inode size must be zeroed when being written out, due to the POSIX requirement that modifications of an mmapped page beyond inode size must not be written to the file. ext4_bio_write_page() did this only for blocks fully beyond inode size but didn't properly zero blocks partially beyond inode size. Fix this. The problem was uncovered by the mmap_11-4 test in the openposix test suite (part of LTP). Reported-by:
Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com> Fixes: 5a0dc736 Fixes: bd2d0210 CC: stable@vger.kernel.org Signed-off-by:
Jan Kara <jack@suse.cz> Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
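A minimal sketch of the zeroing rule described (schematic, not the actual ext4_bio_write_page() diff); here len is assumed to be the number of bytes of the page lying within i_size:

        /* Zero everything past the valid data so stale mmap'ed bytes beyond
         * EOF never reach the disk. */
        if (len < PAGE_CACHE_SIZE)
                zero_user_segment(page, len, PAGE_CACHE_SIZE);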
-