- 01 Jul, 2014 40 commits
-
Olivier Langlois authored
commit 3b35fc81 upstream.

Timestamps in v4l2 buffers returned to userspace are updated in uvc_video_clock_update(), which uses timestamps fetched in uvc_video_clock_decode() by calling ktime_get_ts() unconditionally. Hence setting the module clock parameter to realtime had no effect before this patch.

This has been tested with ffmpeg:

    ffmpeg -y -f v4l2 -input_format yuyv422 -video_size 640x480 -framerate 30 -i /dev/video0 \
           -f alsa -acodec pcm_s16le -ar 16000 -ac 1 -i default \
           -c:v libx264 -preset ultrafast \
           -c:a libfdk_aac \
           out.mkv

and inspecting the v4l2 input starting timestamp.

Signed-off-by: Olivier Langlois <olivier@trillion01.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
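A minimal hedged sketch of the mechanism this fix restores: the host timestamp source is selected by the module's clock parameter instead of always being monotonic. The parameter and function names below are illustrative stand-ins, not the driver's exact identifiers; only ktime_get_ts()/ktime_get_real_ts() are real kernel interfaces.

    #include <linux/ktime.h>
    #include <linux/time.h>

    static unsigned int clock_param;        /* stand-in: 1 = realtime, 0 = monotonic */

    /* Sample the host timestamp with the clock selected by the module parameter. */
    static void sample_host_ts(struct timespec *ts)
    {
            if (clock_param)
                    ktime_get_real_ts(ts);  /* wall-clock (CLOCK_REALTIME) time */
            else
                    ktime_get_ts(ts);       /* CLOCK_MONOTONIC time */
    }

With a check like this in the clock-update path, loading the driver with clock=realtime actually yields wall-clock buffer timestamps, which is what the ffmpeg test above inspects.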
-
Thomas Gleixner authored
commit 27e35715 upstream.

When the rtmutex fast path is enabled the slow unlock function can create the following situation:

    spin_lock(foo->m->wait_lock);
    foo->m->owner = NULL;
                            rt_mutex_lock(foo->m); <-- fast path
                            free = atomic_dec_and_test(foo->refcnt);
                            rt_mutex_unlock(foo->m); <-- fast path
                            if (free)
                                kfree(foo);
    spin_unlock(foo->m->wait_lock); <--- Use after free.

Plug the race by changing the slow unlock to the following scheme:

    while (!rt_mutex_has_waiters(m)) {
        /* Clear the waiters bit in m->owner */
        clear_rt_mutex_waiters(m);
        owner = rt_mutex_owner(m);
        spin_unlock(m->wait_lock);
        if (cmpxchg(m->owner, owner, 0) == owner)
            return;
        spin_lock(m->wait_lock);
    }

So in case of a new waiter incoming while the owner tries the slow path unlock we have two situations:

    unlock(wait_lock);
                            lock(wait_lock);
    cmpxchg(p, owner, 0) == owner
                            mark_rt_mutex_waiters(lock);
                            acquire(lock);

Or:

    unlock(wait_lock);
                            lock(wait_lock);
                            mark_rt_mutex_waiters(lock);
    cmpxchg(p, owner, 0) != owner
                            enqueue_waiter();
                            unlock(wait_lock);
    lock(wait_lock);
    wakeup_next_waiter();
    unlock(wait_lock);
                            lock(wait_lock);
                            acquire(lock);

If the fast path is disabled, then the simple

    m->owner = NULL;
    unlock(m->wait_lock);

is sufficient as all access to m->owner is serialized via m->wait_lock.

Also document and clarify the wakeup_next_waiter() function as suggested by Oleg Nesterov.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140611183852.937945560@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 3d5c9340 upstream.

Even in the case when deadlock detection is not requested by the caller, we can detect deadlocks. Right now the code stops the lock chain walk and keeps the waiter enqueued, even on itself. Silly not to yell when such a scenario is detected and to keep the waiter enqueued.

Return -EDEADLK unconditionally and handle it at the call sites. The futex calls return -EDEADLK. The non-futex ones dequeue the waiter, throw a warning and put the task into a schedule loop.

Tagged for stable as it makes the code more robust.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Brad Mouring <bmouring@ni.com>
Link: http://lkml.kernel.org/r/20140605152801.836501969@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 82084984 upstream.

When we walk the lock chain, we drop all locks after each step. So the lock chain can change under us before we reacquire the locks. That's harmless in principle as we just follow the wrong lock path. But it can lead to a false positive in the deadlock detection logic:

    T0 holds L0
    T0 blocks on L1 held by T1
    T1 blocks on L2 held by T2
    T2 blocks on L3 held by T3
    T3 blocks on L4 held by T4

Now we walk the chain:

    lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
    lock T2 -> adjust T2 -> drop locks

    T2 times out and blocks on L0

Now we continue:

    lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.

Brad tried to work around that in the deadlock detection logic itself, but the more I looked at it the less I liked it, because it's crystal ball magic after the fact.

We actually can detect a chain change very simply:

    lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
    lock T2 -> adjust T2 ->
    next_lock = T2->pi_blocked_on->lock;
    drop locks

    T2 times out and blocks on L0

Now we continue:

    lock T2 ->
    if (next_lock != T2->pi_blocked_on->lock)
        return;

So if we detect that T2 is now blocked on a different lock we stop the chain walk. That's also correct in the following scenario:

    lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
    lock T2 -> adjust T2 ->
    next_lock = T2->pi_blocked_on->lock;
    drop locks

    T3 times out and drops L3
    T2 acquires L3 and blocks on L4 now

Now we continue:

    lock T2 ->
    if (next_lock != T2->pi_blocked_on->lock)
        return;

We don't have to follow up the chain at that point, because T2 propagated our priority up to T4 already.

[ Folded a cleanup patch from peterz ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Brad Mouring <bmouring@ni.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140605152801.930031935@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lv Zheng authored
commit 73577d1d upstream.

This patch fixes the following issue: if the DSDT is customized, no local DSDT copy is needed.

References: https://bugzilla.kernel.org/show_bug.cgi?id=69711
Signed-off-by: Enrico Etxe Arte <goitizena.generoa@gmail.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
[rjw: Subject]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Binderman authored
commit 5d42b0fa upstream.

ACPICA BZ 1077. David Binderman.

References: https://bugs.acpica.org/show_bug.cgi?id=1077
Signed-off-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bjørn Mork authored
commit 45fef5b8 upstream.

Commit 1a699476 ("ACPI / hotplug / PCI: Hotplug notifications from acpi_bus_notify()") added debug messages for a few common events. These debug messages are unconditionally enabled if CONFIG_DYNAMIC_DEBUG is defined, contrary to the documented meaning, making the ACPI system spew lots of unwanted noise on any kernel with dynamic debugging.

The bug was introduced by commit fbfddae6 ("ACPI: Add acpi_handle_<level>() interfaces"), which added the CONFIG_DYNAMIC_DEBUG dependency without respecting its meaning. Fix by adding real support for dynamic_debug.

Fixes: fbfddae6 ("ACPI: Add acpi_handle_<level>() interfaces")
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ezequiel Garcia authored
commit 85ac1a17 upstream.

Currently stk1160_read_reg() uses a stack-allocated char to receive the read control value. This is wrong because usb_control_msg() requires a kmalloc-ed buffer. This commit fixes the issue by kmalloc'ating a 1-byte buffer to receive the read value.

While here, remove the urb_buf array, which was meant for a similar purpose but never really used.

Cc: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
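A hedged sketch of the pattern the fix applies; the request and value fields below are illustrative, not the stk1160's actual ones. The point is that usb_control_msg() must be handed a kmalloc'ed, DMA-able buffer rather than the address of a stack variable.

    #include <linux/slab.h>
    #include <linux/usb.h>

    static int read_reg_sketch(struct usb_device *udev, u16 reg, u8 *value)
    {
            u8 *buf = kmalloc(sizeof(u8), GFP_KERNEL);      /* DMA-able buffer */
            int ret;

            if (!buf)
                    return -ENOMEM;

            ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                                  0x00,     /* hypothetical bRequest */
                                  USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                  0x00, reg, buf, sizeof(u8), 1000);
            if (ret >= 0)
                    *value = *buf;          /* copy the result out of the DMA buffer */

            kfree(buf);
            return ret < 0 ? ret : 0;
    }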
-
Takashi Iwai authored
commit deb29e90 upstream.

When the ivtv PCM device is accessed while no firmware is loaded, it oopses like:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
    IP: [<ffffffffa049a881>] try_mailbox.isra.0+0x11/0x50 [ivtv]
    Call Trace:
      [<ffffffffa049aa20>] ivtv_api_call+0x160/0x6b0 [ivtv]
      [<ffffffffa049af86>] ivtv_api+0x16/0x40 [ivtv]
      [<ffffffffa049b10c>] ivtv_vapi+0xac/0xc0 [ivtv]
      [<ffffffffa049d40d>] ivtv_start_v4l2_encode_stream+0x19d/0x630 [ivtv]
      [<ffffffffa0530653>] snd_ivtv_pcm_capture_open+0x173/0x1c0 [ivtv_alsa]
      [<ffffffffa04526f1>] snd_pcm_open_substream+0x51/0x100 [snd_pcm]
      [<ffffffffa0452853>] snd_pcm_open+0xb3/0x260 [snd_pcm]
      [<ffffffffa0452a37>] snd_pcm_capture_open+0x37/0x50 [snd_pcm]
      [<ffffffffa033f557>] snd_open+0xa7/0x1e0 [snd]
      [<ffffffff8118a628>] chrdev_open+0x88/0x1d0
      [<ffffffff811840be>] do_dentry_open+0x1de/0x270
      [<ffffffff81193a73>] do_last+0x1c3/0xec0
      [<ffffffff81194826>] path_openat+0xb6/0x670
      [<ffffffff81195b65>] do_filp_open+0x35/0x80
      [<ffffffff81185449>] do_sys_open+0x129/0x210
      [<ffffffff815b782d>] system_call_fastpath+0x1a/0x1f

This patch adds a firmware check to the PCM open callback, like the other open callbacks of this driver.

Bugzilla: https://apibugzilla.novell.com/show_bug.cgi?id=875440
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
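A hedged sketch of the guard the patch adds; the structure and flag below are stand-ins for the driver's state, and only the refuse-to-open idea is taken from the commit: bail out of the PCM open callback when no firmware is loaded, as the driver's other open callbacks already do.

    #include <sound/pcm.h>

    struct my_card {                        /* stand-in for the driver's state */
            bool firmware_loaded;
    };

    static int pcm_capture_open_sketch(struct snd_pcm_substream *substream)
    {
            struct my_card *card = snd_pcm_substream_chip(substream);

            if (!card->firmware_loaded)
                    return -ENXIO;          /* no firmware: refuse to open */

            /* ... normal open path: start the capture stream ... */
            return 0;
    }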
-
Johan Hovold authored
commit c14829fa upstream.

Only call usb_autopm_put_interface() if the corresponding usb_autopm_get_interface() was successful. This prevents a potential runtime PM counter imbalance should usb_autopm_get_interface() fail. Note that the USB PM usage counter is reset when the interface is unbound, but that the runtime PM counter may be left unbalanced.

Also add a comment on why we don't need to worry about racing resume/suspend on autopm_get failures.

Fixes: d5fd650c ("usb: serial: prevent suspend/resume from racing against probe/remove")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
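A hedged sketch of the balancing rule described above (not the sierra driver's exact code): usb_autopm_put_interface() is only called when the matching usb_autopm_get_interface() succeeded.

    #include <linux/usb.h>

    static int do_device_io(struct usb_interface *intf)
    {
            int ret = usb_autopm_get_interface(intf);

            if (ret < 0)
                    return ret;     /* get failed: no put, counter stays balanced */

            /* ... perform the I/O while the PM usage count is held ... */

            usb_autopm_put_interface(intf); /* balances the successful get above */
            return 0;
    }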
-
Aleksander Morgado authored
commit 0ce5fb58 upstream.

A set of new VID/PIDs retrieved from the out-of-tree GobiNet/GobiSerial Sierra Wireless drivers.

Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
Link: http://marc.info/?l=linux-usb&m=140136310027293&w=2
Cc: <stable@vger.kernel.org> # backport in link above
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aleksander Morgado authored
commit ff1fcd50 upstream.

Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 80cc0fcb upstream.

Make sure that needs_remote_wakeup is always set when there are open ports. Currently close() would unconditionally set needs_remote_wakeup to 0 even though there might still be open ports. This could lead to blocked input and possibly dropped data on devices that do not support remote wakeup (and which must therefore not be runtime suspended while open).

Add an open_ports counter (protected by the susp_lock) and only clear needs_remote_wakeup when the last port is closed.

Fixes: e6929a90 ("USB: support for autosuspend in sierra while online")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
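A hedged sketch of the counting scheme described above; the private-data structure is a stand-in, but the idea follows the text: needs_remote_wakeup is set when the first port opens and cleared only when the last port closes, under the same susp_lock that protects the counter.

    #include <linux/spinlock.h>
    #include <linux/usb.h>

    struct intf_data {                      /* stand-in for the driver's private data */
            spinlock_t susp_lock;
            int open_ports;
    };

    static void port_opened(struct intf_data *d, struct usb_interface *intf)
    {
            spin_lock_irq(&d->susp_lock);
            if (d->open_ports++ == 0)
                    intf->needs_remote_wakeup = 1;
            spin_unlock_irq(&d->susp_lock);
    }

    static void port_closed(struct intf_data *d, struct usb_interface *intf)
    {
            spin_lock_irq(&d->susp_lock);
            if (--d->open_ports == 0)
                    intf->needs_remote_wakeup = 0;  /* only when the last port closes */
            spin_unlock_irq(&d->susp_lock);
    }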
-
Johan Hovold authored
commit 014333f7 upstream.

The delayed-write queue was never emptied on disconnect, something which would lead to leaked urbs and transfer buffers if the device is disconnected before being runtime resumed due to a write.

Fixes: e6929a90 ("USB: support for autosuspend in sierra while online")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
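A hedged sketch of the kind of cleanup this fix adds on disconnect (the driver's actual structures differ): every urb still parked on the delayed-write list is taken off and freed together with its transfer buffer.

    #include <linux/slab.h>
    #include <linux/usb.h>

    /* Free every urb still parked on a delayed-write anchor so nothing leaks
     * when the device goes away before a runtime resume ever submits them. */
    static void drain_delayed_writes(struct usb_anchor *delayed)
    {
            struct urb *urb;

            while ((urb = usb_get_from_anchor(delayed)) != NULL) {
                    kfree(urb->transfer_buffer);
                    usb_free_urb(urb);
            }
    }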
-
Johan Hovold authored
commit 7fdd26a0 upstream.

Neither the transfer buffer nor the urb itself were released in the resume error path for delayed writes. Also, on errors the remainder of the queue was not even processed, which leads to further urb and buffer leaks.

The same error path also failed to balance the outstanding-urb counter, something which results in degraded throughput or completely blocked writes.

Fix this by releasing urb and buffer and balancing counters on errors, and by always processing the whole queue even when submission of one urb fails.

Fixes: e6929a90 ("USB: support for autosuspend in sierra while online")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 8452727d upstream.

Fix use-after-free or NULL-pointer dereference during suspend and resume. The port data may never have been allocated (port probe failed) or may already have been released by port_remove (e.g. the driver is unloaded) when suspend and resume are called.

Fixes: e6929a90 ("USB: support for autosuspend in sierra while online")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 353fe198 upstream.

Fix an AA deadlock in the open error path that would call close() and try to grab the already held disc_mutex.

Fixes: b9a44bc1 ("sierra: driver urb handling improvements")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit fb7ad4f9 upstream.

Keep trying to submit urbs rather than bail out on the first read-urb submission error, which would also prevent I/O for any further ports from being resumed. Instead keep an error count for all types of failed submissions, and let USB core know that something went wrong.

Also make sure to always clear the suspended flag. Currently a failed read-urb submission would prevent cached writes as well as any subsequent writes from being submitted until the next suspend-resume cycle, something which may not even necessarily happen.

Note that USB core currently only logs an error if an interface resume failed.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 9096f1fb upstream.

The interrupt urb was submitted unconditionally at resume, something which could lead to a NULL-pointer dereference in the urb completion handler, as resume may be called after the port and port data are gone.

Fix this by making sure the interrupt urb is only submitted and active when the port is open.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 79eed03e upstream.

The delayed-write queue was never emptied at shutdown (close), something which could lead to leaked urbs if the port is closed before being runtime resumed due to a write.

When this happens the output buffer would not drain on close (closing_wait timeout), and after consecutive opens, writes could be corrupted with previously buffered data, transferred with reduced throughput or completely blocked.

Note that unbusy_queued_urb() was simply moved out of CONFIG_PM.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 170fad9e upstream.

Fix a race between write() and suspend() which could lead to writes being dropped (or I/O while suspended) if the device is runtime suspended while a write request is being processed.

Specifically, suspend() releases the susp_lock after determining that the device is idle but before setting the suspended flag, thus leaving a window where a concurrent write() can submit an urb.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
xiao jin authored
commit d9e93c08 upstream.

There is a race between write and resume: usb_wwan_resume() runs play_delayed() and drops the spinlock, but intfdata->suspended has not yet been set to zero. If usb_wwan_write() is called at this point, it anchors the urb to the delayed list, and that delayed urb then has no chance to be submitted until the next resume. If the next resume is far away, the tty will be blocked in tty_wait_until_sent() the whole time. The race can also lead to writes being reordered.

This patch puts play_delayed() and the clearing of intfdata->suspended inside the same spinlock-protected section to avoid the write race during resume.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: xiao jin <jin.xiao@intel.com>
Signed-off-by: Zhang, Qi1 <qi1.zhang@intel.com>
Reviewed-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
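A hedged sketch of the ordering the patch establishes in the resume handler (structure names are stand-ins; play_delayed() is the driver's existing helper named above): the delayed urbs are submitted and the suspended flag is cleared while the same lock is held, so a concurrent write() can no longer queue an urb to the delayed list after resume has already drained it.

    #include <linux/spinlock.h>
    #include <linux/usb/serial.h>

    struct wwan_intf_private {              /* stand-in for the driver's private data */
            spinlock_t susp_lock;
            int suspended;
    };

    void play_delayed(struct usb_serial_port *port);        /* driver's existing helper */

    static int wwan_resume_sketch(struct usb_serial *serial,
                                  struct wwan_intf_private *intfdata)
    {
            int i;

            spin_lock_irq(&intfdata->susp_lock);
            for (i = 0; i < serial->num_ports; i++)
                    play_delayed(serial->port[i]);  /* submit urbs queued while suspended */
            intfdata->suspended = 0;                /* and clear the flag under the same lock */
            spin_unlock_irq(&intfdata->susp_lock);

            return 0;
    }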
-
xiao jin authored
commit db090473 upstream.

When a USB serial port is used for modem data, the tty sometimes blocks in tty_wait_until_sent() because portdata->out_busy stays set and never gets a chance to be cleared.

The bug is in the write error path: usb_wwan_write() sets portdata->out_busy first, and then the asynchronous autopm request fails. No output urb is submitted, so usb_wwan_outdat_callback() is never invoked for this write and portdata->out_busy cannot be cleared.

This patch clears portdata->out_busy when the asynchronous autopm request in usb_wwan_write() fails with an error.

Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver")
Signed-off-by: xiao jin <jin.xiao@intel.com>
Signed-off-by: Zhang, Qi1 <qi1.zhang@intel.com>
Reviewed-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit 972754cf upstream.

I had occasional screen corruption with the matrox framebuffer driver, and I found out that the reason for the corruption is that the hardware blitter accesses the videoram while it is being written to.

The matrox driver has a macro WaitTillIdle() that should wait until the blitter is idle, but it sometimes doesn't work. I added a dummy read of mga_inl(M_STATUS) to WaitTillIdle() to fix the problem. The dummy read will flush the write buffer in the PCI chipset, and the next read of M_STATUS will return the hardware status. Since applying this patch, I had no screen corruption at all.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
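A hedged sketch of the macro shape after a change like this; mga_inl() and M_STATUS are the driver's own register accessor and status register named above, while DWGENG_BUSY is a stand-in name for the engine-busy bit. The leading dummy read flushes posted PCI writes so the following poll sees the real hardware state.

    #define WaitTillIdle()                                          \
            do {                                                    \
                    mga_inl(M_STATUS);      /* dummy read */        \
                    while (mga_inl(M_STATUS) & DWGENG_BUSY)         \
                            ;                                       \
            } while (0)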
-
Maurizio Lombardi authored
commit b5b60778 upstream.

The variable "size" is expressed as a number of blocks and not as a number of clusters; this could trigger a kernel panic when using ext4 with a cluster size different from the block size.

Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit eeece469 upstream.

The tail of a page straddling the inode size must be zeroed when being written out, due to the POSIX requirement that modifications of an mmaped page beyond the inode size must not be written to the file. ext4_bio_write_page() did this only for blocks fully beyond the inode size, but didn't properly zero blocks partially beyond the inode size. Fix this.

The problem has been uncovered by the mmap_11-4 test in the openposix test suite (part of LTP).

Reported-by: Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com>
Fixes: 5a0dc736
Fixes: bd2d0210
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
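A hedged sketch of the rule the fix enforces in the writeback path, using the 3.x page-cache macros; this is not the exact ext4_bio_write_page() hunk, just the zeroing idea: any part of the page beyond i_size is cleared before the bio is built, so mmap stores past EOF never reach the disk.

    #include <linux/highmem.h>
    #include <linux/pagemap.h>

    /* Zero the tail of @page that lies beyond @i_size before writing it out. */
    static void zero_tail_beyond_eof(struct page *page, loff_t i_size)
    {
            unsigned int len = PAGE_CACHE_SIZE;

            if (page->index == i_size >> PAGE_CACHE_SHIFT)
                    len = i_size & ~PAGE_CACHE_MASK;        /* bytes still inside i_size */

            if (len < PAGE_CACHE_SIZE)
                    zero_user_segment(page, len, PAGE_CACHE_SIZE);
    }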
-
Namjae Jeon authored
commit 1c8349a1 upstream.

When we perform a data integrity sync we tag all the dirty pages with PAGECACHE_TAG_TOWRITE at the start of ext4_da_writepages(). Later we check for this tag in write_cache_pages_da() and create a struct mpage_da_data containing contiguously indexed pages tagged with this tag, and sync these pages with a call to mpage_da_map_and_submit(). This process is done in a while loop until all the PAGECACHE_TAG_TOWRITE pages are synced. We also do journal start and stop in each iteration.

journal_stop() could initiate a journal commit which would call ext4_writepage(), which in turn calls ext4_bio_write_page() even for delayed or unwritten buffers. When ext4_bio_write_page() is called for such buffers, it does not sync them but it does clear PAGECACHE_TAG_TOWRITE on the corresponding page, and hence these pages are not synced by the currently running data integrity sync either. We end up with dirty pages although the sync has completed.

This could cause potential data loss when the sync call is followed by a truncate_pagecache() call, which is exactly the case in collapse_range. (It will cause a generic/127 failure in xfstests.)

To avoid this issue, we can use set_page_writeback_keepwrite() instead of set_page_writeback(), which doesn't clear the TOWRITE tag.

Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Ashish Sangwan <a.sangwan@samsung.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
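A hedged sketch of the call-site change the last paragraph describes (simplified; the flag name is illustrative): a writeback start that should not steal the page from the running data-integrity sync, such as one triggered from a journal commit, uses the keepwrite variant so the PAGECACHE_TAG_TOWRITE tag survives.

    #include <linux/mm.h>
    #include <linux/page-flags.h>

    static void start_writeback(struct page *page, bool keep_towrite)
    {
            if (keep_towrite)
                    set_page_writeback_keepwrite(page);     /* leave the TOWRITE tag in place */
            else
                    set_page_writeback(page);               /* normal path clears the tag */
    }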
-
Christian Borntraeger authored
commit 993072ee upstream.

The IRB might be 96 bytes if the extended-I/O-measurement facility is used. This feature is currently not used by Linux, but struct irb already has the emw defined. So let's make the irb field in the lowcore match the size of the internal data structure, to be future proof. We also have to add a pad to keep the paste field correctly aligned.

The bigger irb field also circumvents a bug in some QEMU versions that always write the emw field on test subchannel and therefore destroy the paste definitions of this CPU. Running under these QEMU versions broke some timing functions in the VDSO and all users of these functions, e.g. some JREs.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Martin Schwidefsky authored
commit b6f42962 upstream.

Analogous to git commit 28b92e09, first cast tk->wall_to_monotonic.tv_nsec to u64 before doing the shift with tk->shift, to avoid losing relevant bits on a 32-bit kernel.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lai Jiangshan authored
commit 3afb69cb upstream.

idr_replace() open-codes the logic to calculate the maximum valid ID given the height of the idr tree; unfortunately, the open-coded logic doesn't account for the fact that the top layer may have unused slots and over-shifts the limit to zero when the tree is at its maximum height.

The following test code shows it fails to replace the value for id=((1<<27)+42):

    static void test5(void)
    {
        int id;
        DEFINE_IDR(test_idr);
    #define TEST5_START ((1<<27)+42) /* use the highest layer */

        printk(KERN_INFO "Start test5\n");
        id = idr_alloc(&test_idr, (void *)1, TEST5_START, 0, GFP_KERNEL);
        BUG_ON(id != TEST5_START);
        TEST_BUG_ON(idr_replace(&test_idr, (void *)2, TEST5_START) != (void *)1);
        idr_destroy(&test_idr);
        printk(KERN_INFO "End of test5\n");
    }

Fix the bug by using idr_max() which correctly takes into account the maximum allowed shift.

sub_alloc() shares the same problem and may incorrectly fail with -EAGAIN; however, this bug doesn't affect correct operation because idr_get_empty_slot(), which already uses idr_max(), retries with the increased @id in such cases.

[tj@kernel.org: Updated patch description.]

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Victor Kamensky authored
commit 2227901a upstream.

Currently the prstatus note of a core file of an aarch32 process has an empty register set. As a result, aarch32 core files created by an ARMv8 kernel are not very useful.

It happens because the compat_gpr_get and compat_gpr_set functions can copy register values to/from either kbuf or ubuf. The ELF core file collection function fill_thread_core_info() calls compat_gpr_get with kbuf set and ubuf set to 0, but the current compat_gpr_get and compat_gpr_set functions only handle the ubuf case.

The fix is to handle kbuf and ubuf as two separate cases, in a similar way to other functions like user_regset_copyout and user_regset_copyin.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
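A hedged sketch of the dual-destination handling described above, in the style of user_regset_copyout(): data goes to the kernel buffer when one is supplied (the core-dump path) and to the user buffer otherwise (the ptrace path). The helper name is illustrative.

    #include <linux/string.h>
    #include <linux/uaccess.h>

    static int copy_regs_out(void *kbuf, void __user *ubuf,
                             const void *regs, size_t size)
    {
            if (kbuf) {
                    memcpy(kbuf, regs, size);       /* ELF core dump: kernel buffer */
                    return 0;
            }
            if (copy_to_user(ubuf, regs, size))     /* ptrace: user buffer */
                    return -EFAULT;
            return 0;
    }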
-
Will Deacon authored
commit c1688707 upstream.

Our compat PTRACE_POKEUSR implementation simply passes the user data to regset_copy_from_user after some simple range checking. Unfortunately, the data in question has already been copied to the kernel stack by this point, so the subsequent access_ok check fails and the ptrace request returns -EFAULT. This causes problems tracing fork() with older versions of strace.

This patch briefly changes the fs to KERNEL_DS, so that the access_ok check passes even with a kernel address.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
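A hedged sketch of the set_fs() dance the patch describes; the inner helper is hypothetical, only the address-limit handling is the point: the data already sits on the kernel stack, so the limit is widened around the copy so that access_ok() accepts a kernel address, and then restored.

    #include <linux/sched.h>
    #include <linux/uaccess.h>

    int regset_store_from_buffer(struct task_struct *task,
                                 const void *buf, size_t size);  /* hypothetical helper */

    static int poke_user_sketch(struct task_struct *task,
                                const void *kdata, size_t size)
    {
            mm_segment_t old_fs = get_fs();
            int ret;

            set_fs(KERNEL_DS);              /* let access_ok() pass for a kernel address */
            ret = regset_store_from_buffer(task, kdata, size);
            set_fs(old_fs);                 /* always restore the old limit */

            return ret;
    }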
-
Matthew Dempsky authored
commit 4e52365f upstream.

When tracing a process in another pid namespace, it's important for fork event messages to contain the child's pid as seen from the tracer's pid namespace, not the parent's. Otherwise, the tracer won't be able to correlate the fork event with later SIGTRAP signals it receives from the child.

We still risk a race condition if a ptracer from a different pid namespace attaches after we compute the pid_t value. However, sending a bogus fork event message in this unlikely scenario is still a vast improvement over the status quo where we always send bogus fork event messages to debuggers in a different pid namespace than the forking process.

Signed-off-by: Matthew Dempsky <mdempsky@chromium.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Julien Tinnes <jln@chromium.org>
Cc: Roland McGrath <mcgrathr@chromium.org>
Cc: Jan Kratochvil <jan.kratochvil@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
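A hedged one-line illustration of the translation described above (not the exact hunk): the child's pid is resolved in the pid namespace of the tracer rather than in the parent's own namespace.

    #include <linux/pid.h>
    #include <linux/pid_namespace.h>
    #include <linux/sched.h>

    /* Report @child's pid as the tracing task @tracer would see it. */
    static pid_t pid_as_seen_by_tracer(struct task_struct *child,
                                       struct task_struct *tracer)
    {
            return task_pid_nr_ns(child, task_active_pid_ns(tracer));
    }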
-
Johannes Weiner authored
commit 71abdc15 upstream.

When kswapd exits, it can end up taking locks that were previously held by allocating tasks while they waited for reclaim. Lockdep currently warns about this:

On Wed, May 28, 2014 at 06:06:34PM +0800, Gu Zheng wrote:
> inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-R} usage.
> kswapd2/1151 [HC0[0]:SC0[0]:HE1:SE1] takes:
> (&sig->group_rwsem){+++++?}, at: exit_signals+0x24/0x130
> {RECLAIM_FS-ON-W} state was registered at:
>   mark_held_locks+0xb9/0x140
>   lockdep_trace_alloc+0x7a/0xe0
>   kmem_cache_alloc_trace+0x37/0x240
>   flex_array_alloc+0x99/0x1a0
>   cgroup_attach_task+0x63/0x430
>   attach_task_by_pid+0x210/0x280
>   cgroup_procs_write+0x16/0x20
>   cgroup_file_write+0x120/0x2c0
>   vfs_write+0xc0/0x1f0
>   SyS_write+0x4c/0xa0
>   tracesys+0xdd/0xe2
> irq event stamp: 49
> hardirqs last enabled at (49): _raw_spin_unlock_irqrestore+0x36/0x70
> hardirqs last disabled at (48): _raw_spin_lock_irqsave+0x2b/0xa0
> softirqs last enabled at (0): copy_process.part.24+0x627/0x15f0
> softirqs last disabled at (0): (null)
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
>       CPU0
>       ----
>  lock(&sig->group_rwsem);
>  <Interrupt>
>    lock(&sig->group_rwsem);
>
> *** DEADLOCK ***
>
> no locks held by kswapd2/1151.
>
> stack backtrace:
> CPU: 30 PID: 1151 Comm: kswapd2 Not tainted 3.10.39+ #4
> Call Trace:
>   dump_stack+0x19/0x1b
>   print_usage_bug+0x1f7/0x208
>   mark_lock+0x21d/0x2a0
>   __lock_acquire+0x52a/0xb60
>   lock_acquire+0xa2/0x140
>   down_read+0x51/0xa0
>   exit_signals+0x24/0x130
>   do_exit+0xb5/0xa50
>   kthread+0xdb/0x100
>   ret_from_fork+0x7c/0xb0

This is because the kswapd thread is still marked as a reclaimer at the time of exit. But because it is exiting, nobody is actually waiting on it to make reclaim progress anymore, and it's nothing but a regular thread at this point. Be tidy and strip it of all its powers (PF_MEMALLOC, PF_SWAPWRITE, PF_KSWAPD, and the lockdep reclaim state) before returning from the thread function.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
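A hedged sketch of the tidy-up, close to what the description says happens at the end of the kswapd thread function: the reclaim-related powers are stripped before returning.

    #include <linux/sched.h>

    /* Called just before the kswapd thread function returns. */
    static void kswapd_strip_powers(void)
    {
            current->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
            lockdep_clear_current_reclaim_state();  /* drop the lockdep reclaim state */
    }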
-
Kees Cook authored
commit 1b15d2e5 upstream.

Some drivers use the first HID report in the list instead of using an index. In these cases, validation uses ID 0, which was supposed to mean "first known report". This fixes the problem, which was causing at least the lgff family of devices to stop working since hid_validate_values was being called with ID 0, but the devices used single numbered IDs for their reports:

    0x05, 0x01,         /*  Usage Page (Desktop),       */
    0x09, 0x05,         /*  Usage (Gamepad),            */
    0xA1, 0x01,         /*  Collection (Application),   */
    0xA1, 0x02,         /*      Collection (Logical),   */
    0x85, 0x01,         /*          Report ID (1),      */
    ...

Reported-by: Simon Wood <simon@mungewell.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
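A hedged, simplified sketch of the convention the fix implements (the hid core structures are those of include/linux/hid.h, but this is not the exact hid_validate_values() code): a report ID of 0 now means "the first report in the list" rather than literally report ID 0.

    #include <linux/hid.h>

    static struct hid_report *pick_report(struct hid_device *hid,
                                          unsigned int type, unsigned int id)
    {
            if (id == 0)
                    /* Drivers that use the first report in the list pass ID 0. */
                    return list_first_entry_or_null(
                                    &hid->report_enum[type].report_list,
                                    struct hid_report, list);

            return hid->report_enum[type].report_id_hash[id];
    }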
-
Hugh Dickins authored
commit 7f39dda9 upstream.

Trinity reports:

    BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:47
    in_atomic(): 0, irqs_disabled(): 0, pid: 5787, name: trinity-c27
    __might_sleep < down_write < __put_anon_vma < page_get_anon_vma <
    migrate_pages < compact_zone < compact_zone_order < try_to_compact_pages ..

Right, since the conversion to mutex and then rwsem, we should not put_anon_vma() from inside an rcu_read_lock()ed section: fix the two places that did so. And add might_sleep() to anon_vma_free(), as suggested by Peter Zijlstra.

Fixes: 88c22088 ("mm: optimize page_lock_anon_vma() fast-path")
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Naoya Horiguchi authored
commit 3ba08129 upstream.

Currently the memory error handler handles action-optional errors in a deferred manner by default. If a recovery-aware application wants to handle them immediately, it can do so by setting the PF_MCE_EARLY flag. However, such a signal can be sent only to the main thread, which is problematic if the application wants a dedicated thread to handle such signals.

So this patch adds dedicated thread support to the memory error handler. We have PF_MCE_EARLY flags for each thread separately, so with this patch the AO signal is sent to the thread with the PF_MCE_EARLY flag set, not the main thread. If you want to implement a dedicated thread, you call prctl() to set PF_MCE_EARLY on that thread.

The memory error handler collects processes to be killed, so this patch lets it check the PF_MCE_EARLY flag on each thread in the collecting routines.

No behavioral change for all non-early-kill cases.

Tony said:

: The old behavior was crazy - someone with a multithreaded process might
: well expect that if they call prctl(PF_MCE_EARLY) in just one thread, then
: that thread would see the SIGBUS with si_code = BUS_MCEERR_AO - even if
: that thread wasn't the main thread for the process.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Cc: Kamil Iskra <iskra@mcs.anl.gov>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Chen Gong <gong.chen@linux.jf.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
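A hedged userspace illustration of the opt-in this enables: the dedicated handler thread asks for early BUS_MCEERR_AO signals with prctl(), and with this patch the collection routines honour the flag on that thread rather than only on the main thread.

    #include <stdio.h>
    #include <sys/prctl.h>

    /* Run this in the dedicated memory-error handler thread. */
    static void opt_in_to_early_mce_signals(void)
    {
            if (prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0))
                    perror("prctl(PR_MCE_KILL)");
    }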
-
Tony Luck authored
commit 74614de1 upstream.

When Linux sees an "action optional" machine check (where the hardware has reported an error that is not in the current execution path) we generally do not want to signal a process, since most processes do not have a SIGBUS handler - we'd just prematurely terminate the process for a problem that it might never actually see.

task_early_kill() decides whether to consider a process - and it checks whether this specific process has been marked for early signals with "prctl", or if the system administrator has requested early signals for all processes using /proc/sys/vm/memory_failure_early_kill.

But for the MF_ACTION_REQUIRED case we must not defer. The error is in the execution path of the current thread, so we must send the SIGBUS immediately.

Fix by passing a flag argument through collect_procs*() to task_early_kill() so it knows whether we can defer or must take action.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Chen Gong <gong.chen@linux.jf.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Luck authored
commit a70ffcac upstream.

When a thread in a multi-threaded application hits a machine check because of an uncorrectable error in memory, we want to send the SIGBUS with si.si_code = BUS_MCEERR_AR to that thread. Currently we fail to do that if the active thread is not the primary thread in the process; collect_procs() just finds primary threads, and this test:

    if ((flags & MF_ACTION_REQUIRED) && t == current) {

will see that the thread we found isn't the current thread and so send a si.si_code = BUS_MCEERR_AO to the primary (and nothing to the active thread at this time).

We can fix this by checking whether "current" shares the same mm with the process that collect_procs() said owned the page. If so, we send the SIGBUS to current (with code BUS_MCEERR_AR).

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: Otto Bruggeman <otto.g.bruggeman@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Chen Gong <gong.chen@linux.jf.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mel Gorman authored
commit e58469ba upstream.

The test_bit operations in get/set pageblock flags are expensive. This patch reads the bitmap on a word basis and uses shifts and masks to isolate the bits of interest. Similarly, masks are used to set a local copy of the bitmap and then cmpxchg is used to update the bitmap if there have been no other changes made in parallel.

In a test running dd onto tmpfs the overhead of the pageblock-related functions went from 1.27% in profiles to 0.5%.

In addition to the performance benefits, this patch closes races that are possible between:

a) get_ and set_pageblock_migratetype(), where get_pageblock_migratetype() reads part of the bits before and other part of the bits after set_pageblock_migratetype() has updated them.

b) set_pageblock_migratetype() and set_pageblock_skip(), where the non-atomic read-modify-update set bit operation in set_pageblock_skip() will cause lost updates to some bits changed in set_pageblock_migratetype().

Joonsoo Kim first reported case a) via code inspection. Vlastimil Babka's testing with a debug patch showed that either a) or b) occurs roughly once per mmtests' stress-highalloc benchmark (although not necessarily in the same pageblock). Furthermore, during development of unrelated compaction patches it was observed that, with frequent calls to {start,undo}_isolate_page_range(), the race occurs several thousand times and has resulted in NULL pointer dereferences in move_freepages() and free_one_page() in places where free_list[migratetype] is manipulated by e.g. list_move(). Further debugging confirmed that migratetype had the invalid value of 6, causing out-of-bounds access to the free_list array.

That confirmed that the race exists, although it may be extremely rare, and is currently only fatal where page isolation is performed due to memory hot remove. Races on pageblocks being updated by set_pageblock_migratetype(), where both old and new migratetype are lower than MIGRATE_RESERVE, currently cannot result in an invalid value being observed, although theoretically they may still lead to unexpected creation or destruction of MIGRATE_RESERVE pageblocks. Furthermore, things could get suddenly worse when memory isolation is used more, or when new migratetypes are added.

After this patch, the race has no longer been observed in testing.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reported-and-tested-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
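A hedged sketch of the word-wise update scheme the first paragraph describes (simplified, not the exact mm/page_alloc.c code): read the whole bitmap word, build the new value with masks, and retry with cmpxchg until no concurrent modification is observed.

    #include <linux/atomic.h>

    static void set_pageblock_bits(unsigned long *bitmap, unsigned long word_idx,
                                   unsigned long mask, unsigned long flags)
    {
            unsigned long word = bitmap[word_idx];
            unsigned long old_word;

            for (;;) {
                    old_word = cmpxchg(&bitmap[word_idx], word,
                                       (word & ~mask) | flags);
                    if (old_word == word)
                            break;          /* no concurrent update: done */
                    word = old_word;        /* retry against the fresh value */
            }
    }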
-