- 28 Apr, 2014 40 commits
-
H. Peter Anvin authored
commit b3b42ac2 upstream. The IRET instruction, when returning to a 16-bit segment, only restores the bottom 16 bits of the user space stack pointer. We have a software workaround for that ("espfix") for the 32-bit kernel, but it relies on a nonzero stack segment base which is not available in 64-bit mode. Since 16-bit support is somewhat crippled anyway on a 64-bit kernel (no V86 mode), and most (if not quite all) 64-bit processors support virtualization for the users who really need it, simply reject attempts at creating a 16-bit segment when running on top of a 64-bit kernel.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-kicdm89kzw9lldryb1br9od0@git.kernel.org
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
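A minimal sketch of the guard this describes, as it might sit in the LDT write path (variable and label names are assumptions, not the verbatim patch):

    #ifdef CONFIG_X86_64
            /* No espfix on 64-bit kernels, so a 16-bit segment would let
             * IRET leak kernel stack address bits: refuse to create one. */
            if (!ldt_info.seg_32bit) {
                    error = -EINVAL;
                    goto out_unlock;
            }
    #endif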
-
Rafał Miłecki authored
commit 12cd43c6 upstream. Register B43_MMIO_PSM_PHY_HDR is a 16-bit one, so accessing it with 32-bit functions isn't safe. On my machine it causes a delayed (!) CPU exception:

  Disabling lock debugging due to kernel taint
  mce: [Hardware Error]: CPU 0: Machine Check Exception: 4 Bank 4: b200000000070f0f
  mce: [Hardware Error]: TSC 164083803dc
  mce: [Hardware Error]: PROCESSOR 2:20fc2 TIME 1396650505 SOCKET 0 APIC 0 microcode 0
  mce: [Hardware Error]: Run the above through 'mcelog --ascii'
  mce: [Hardware Error]: Machine check: Processor context corrupt
  Kernel panic - not syncing: Fatal machine check on current CPU
  Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
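A sketch of the width fix using the driver's MMIO helpers (the value written, mask, is hypothetical):

    /* B43_MMIO_PSM_PHY_HDR is 16 bits wide: use the 16-bit accessors */
    u16 tmp = b43_read16(dev, B43_MMIO_PSM_PHY_HDR);

    b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp | mask);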
-
Hui Wang authored
commit 137bcc33 upstream. When we plug a 3-ring headset into the Dell machine (VID: 0x10ec0283, SID: 0x10280667), the headset mic can't be detected; after applying this patch, the headset mic works well.

BugLink: https://bugs.launchpad.net/bugs/1297581
Cc: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
[ luis: backported to 3.11: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
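Fixes of this kind usually reduce to one quirk-table entry keyed on the PCI subsystem ID; a sketch, with a hypothetical fixup name:

    /* 1028:0667 is the Dell SSID quoted above */
    SND_PCI_QUIRK(0x1028, 0x0667, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),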
-
NeilBrown authored
commit da1aab3d upstream. When performing a user-requested check/repair (MD_RECOVERY_REQUEST is set) on a raid1, we allocate multiple bios, each with its own set of pages. If the page allocation for one bio fails, we currently do *not* free the pages allocated for the previous bios, nor do we free the bio itself. This patch frees all the already-allocated pages and makes sure that all the bios are freed as well. This bug can cause a memory leak which can ultimately OOM a machine. It was introduced in 3.10-rc1.

Fixes: a0787606
Cc: Kent Overstreet <koverstreet@google.com>
Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
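A self-contained sketch of the cleanup shape (illustrative names, not the md code):

    /* on allocation failure, release the pages attached to every bio
     * built so far, then the bios themselves */
    static void free_partial_bios(struct bio **bios, int nbios, int npages)
    {
            int i, j;

            for (i = 0; i < nbios; i++) {
                    for (j = 0; j < npages; j++)
                            if (bios[i]->bi_io_vec[j].bv_page)
                                    put_page(bios[i]->bi_io_vec[j].bv_page);
                    bio_put(bios[i]);
            }
    }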
-
Jens Axboe authored
commit e39435ce upstream. I got a bug report yesterday from Laszlo Ersek in which he states that his kvm instance fails to suspend. Laszlo bisected it down to commit 1cf7e9c6 ("virtio_blk: blk-mq support") where virtio-blk is converted to use the blk-mq infrastructure. After digging a bit, it became clear that the issue was with the queue drain. blk-mq tracks queue usage in a percpu counter, which is incremented on request alloc and decremented when the request is freed. The initial hunt was for an inconsistency in blk-mq, but everything seemed fine. In fact, the counter only returned crazy values when suspend was in progress. When a CPU is unplugged, the percpu counter code merges that CPU's state with the general state. blk-mq takes care to register a hotcpu notifier with the appropriate priority, so we know it runs after the percpu counter notifier. However, the percpu counter notifier only merges the state when the CPU is fully gone. This leaves a state transition where the CPU going away is no longer in the online mask, yet it still holds private values. This means that in this state, percpu_counter_sum() returns invalid results, and the suspend then hangs waiting for abs(dead-cpu-value) requests to complete, which of course will never happen. Fix this by clearing the state earlier, so we never have a case where the CPU isn't in the online mask but still holds private state. This bug has been there since forever; I guess we don't have a lot of users where percpu counters need to be reliable during the suspend cycle.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
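The merge step being described looks roughly like this sketch (the fix moves it earlier in the hotplug sequence; lock and field names follow the usual percpu_counter layout and are assumptions here):

    /* Fold the departing CPU's delta into the global count before the
     * CPU leaves the online mask, so percpu_counter_sum() never sees
     * a hidden per-CPU remainder. */
    raw_spin_lock_irqsave(&fbc->lock, flags);
    pcount = per_cpu_ptr(fbc->counters, cpu);
    fbc->count += *pcount;
    *pcount = 0;
    raw_spin_unlock_irqrestore(&fbc->lock, flags);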
-
Takashi Iwai authored
commit 4f8e9400 upstream. PCM pointer callbacks in the ice1712 driver check the buffer size boundary wrongly, mixing bytes and frames. This leads to PCM core warnings like:

  snd_pcm_update_hw_ptr0: 105 callbacks suppressed
  ALSA pcm_lib.c:352 BUG: pcmC3D0c:0, pos = 5461, buffer size = 5461, period size = 2730

This patch fixes these checks to be placed after the proper unit conversions.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
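The fix pattern is to convert first, then bound-check; a sketch using the ALSA helpers (surrounding code assumed):

    /* convert the hardware byte position to frames before comparing
     * against buffer_size, which is in frames */
    ptr = bytes_to_frames(substream->runtime, ptr);
    if (ptr == substream->runtime->buffer_size)
            ptr = 0;        /* wrap at the buffer boundary */
    return ptr;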
-
Liu Hua authored
commit 80df2847 upstream. As sysctl_hung_task_timeout_secs is unsigned long, when this value is larger than LONG_MAX/HZ, the function schedule_timeout_interruptible() in the watchdog will return immediately without sleeping and will print:

  schedule_timeout: wrong timeout value ffffffffffffff83

and then the watchdog function will call schedule_timeout_interruptible() again and again, filling the screen with "schedule_timeout: wrong timeout value ffffffffffffff83". This patch adds a check and correction in sysctl so that schedule_timeout_interruptible() always gets a valid parameter.

Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Tested-by: Satoru Takeuchi <satoru.takeuchi@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: backported to 3.11:
  - pick hung_task_timeout_secs documentation from commit 270750d ("hung_task: Display every hung task warning") ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
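A sketch of the bound (names assumed; LONG_MAX/HZ is the largest value that survives the seconds-to-jiffies conversion):

    /* cap the sysctl so timeout_secs * HZ cannot overflow a long */
    static unsigned long hung_task_timeout_max = (LONG_MAX / HZ);

    /* in the sysctl table entry for hung_task_timeout_secs: */
    .extra2 = &hung_task_timeout_max,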
-
Oleg Nesterov authored
commit dfccbb5e upstream. wait_task_zombie() first does the EXIT_ZOMBIE->EXIT_DEAD transition and drops tasklist_lock. If this task is not the natural child and it is traced, we change its state back to EXIT_ZOMBIE for ->real_parent. The last transition is racy; this is even documented in 50b8d257 "ptrace: partially fix the do_wait(WEXITED) vs EXIT_DEAD->EXIT_ZOMBIE race". wait_consider_task() tries to detect this transition and clear ->notask_error, but we can't rely on ptrace_reparented(): the debugger can exit and do ptrace_unlink() before its sub-thread sets EXIT_ZOMBIE. And there is another problem which was missed before: this transition can also race with reparent_leader(), which doesn't reset ->exit_signal if EXIT_DEAD, assuming that this task must be reaped by someone else. So the tracee can be re-parented with ->exit_signal != SIGCHLD, and if /sbin/init doesn't use __WALL it becomes unreapable. Change reparent_leader() to update ->exit_signal even if EXIT_DEAD. Note: this is a simple temporary hack for -stable; it doesn't try to solve all problems, and it will be reverted by the next changes.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Jan Kratochvil <jan.kratochvil@redhat.com>
Reported-by: Michal Schmidt <mschmidt@redhat.com>
Tested-by: Michal Schmidt <mschmidt@redhat.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Lennart Poettering <lpoetter@redhat.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Oleg Nesterov authored
commit c39df5fa upstream. Commit 8aac6270 ("move exit_task_namespaces() outside of exit_notify()") breaks pppd and the exiting service crashes the kernel:

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
  IP: ppp_register_channel+0x13/0x20 [ppp_generic]
  Call Trace:
    ppp_asynctty_open+0x12b/0x170 [ppp_async]
    tty_ldisc_open.isra.2+0x27/0x60
    tty_ldisc_hangup+0x1e3/0x220
    __tty_hangup+0x2c4/0x440
    disassociate_ctty+0x61/0x270
    do_exit+0x7f2/0xa50

ppp_register_channel() needs ->net_ns, but current->nsproxy is already NULL at this point. Move disassociate_ctty() before exit_task_namespaces(); it doesn't make sense to delay it after perf_event_exit_task() or cgroup_exit(). This also allows to use task_work_add() inside the (nontrivial) code paths in disassociate_ctty(). Investigated by Peter Hurley.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Sree Harsha Totakura <sreeharsha@totakura.in>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Sree Harsha Totakura <sreeharsha@totakura.in>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Andrey Vagin <avagin@openvz.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
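A simplified sketch of the reordering in do_exit() (surrounding calls elided; this is the shape of the change, not the verbatim diff):

    exit_fs(tsk);
    if (group_dead)
            disassociate_ctty(1);   /* now runs while ->nsproxy is valid */
    exit_task_namespaces(tsk);      /* previously ran before the ctty hangup */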
-
Mizuma, Masayoshi authored
commit 55f67141 upstream. When I decrease the value of nr_hugepages in procfs a lot, a softlockup happens. It is because there is no chance of a context switch during this process. On the other hand, when I allocate a large number of hugepages, there is some chance of a context switch, hence a softlockup doesn't happen during that process. So it's necessary to add a context switch in the freeing process, the same as in the allocating process, to avoid the softlockup. When I freed 12 TB of hugepages with kernel-2.6.32-358.el6, the freeing process occupied a CPU over 150 seconds and the following softlockup message appeared twice or more.

  $ echo 6000000 > /proc/sys/vm/nr_hugepages
  $ cat /proc/sys/vm/nr_hugepages
  6000000
  $ grep ^Huge /proc/meminfo
  HugePages_Total:   6000000
  HugePages_Free:    6000000
  HugePages_Rsvd:    0
  HugePages_Surp:    0
  Hugepagesize:      2048 kB
  $ echo 0 > /proc/sys/vm/nr_hugepages

  BUG: soft lockup - CPU#16 stuck for 67s! [sh:12883]
  ...
  Pid: 12883, comm: sh Not tainted 2.6.32-358.el6.x86_64 #1
  Call Trace:
    free_pool_huge_page+0xb8/0xd0
    set_max_huge_pages+0x128/0x190
    hugetlb_sysctl_handler_common+0x113/0x140
    hugetlb_sysctl_handler+0x1e/0x20
    proc_sys_call_handler+0x97/0xd0
    proc_sys_write+0x14/0x20
    vfs_write+0xb8/0x1a0
    sys_write+0x51/0x90
    __audit_syscall_exit+0x265/0x290
    system_call_fastpath+0x16/0x1b

I have not confirmed this problem with upstream kernels because I am not able to prepare a machine equipped with 12 TB memory now. However I confirmed that the amount of decreasing hugepages was directly proportional to the amount of required time. I measured required times on a smaller machine. It showed 130-145 hugepages decreased in a millisecond.

  Amount of decreasing     Required time      Decreasing rate
  hugepages                (msec)             (pages/msec)
  ------------------------------------------------------------
  10,000 pages == 20GB     70 - 74            135-142
  30,000 pages == 60GB     208 - 229          131-144

It means a decrement of 6 TB of hugepages will trigger a softlockup with the default threshold of 20 sec, at this decreasing rate.

Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
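A sketch of the freeing loop with the added reschedule point (helper names as in mm/hugetlb.c, treated as assumptions):

    while (min_count < persistent_huge_pages(h)) {
            if (!free_pool_huge_page(h, nodes_allowed, 0))
                    break;
            /* drop and retake hugetlb_lock, scheduling if needed */
            cond_resched_lock(&hugetlb_lock);
    }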
-
Vlastimil Babka authored
commit 57e68e9c upstream. A BUG_ON(!PageLocked) was triggered in mlock_vma_page() by Sasha Levin fuzzing with trinity. The call site try_to_unmap_cluster() does not lock the pages other than its check_page parameter (which is already locked). The BUG_ON in mlock_vma_page() is not documented and its purpose is somewhat unclear, but apparently it serializes against page migration, which could otherwise fail to transfer the PG_mlocked flag. This would not be fatal, as the page would be eventually encountered again, but NR_MLOCK accounting would become distorted nevertheless. This patch adds a comment to the BUG_ON in mlock_vma_page() and munlock_vma_page() to that effect. The call site try_to_unmap_cluster() is fixed so that for page != check_page, trylock_page() is attempted (to avoid possible deadlocks as we already have check_page locked) and mlock_vma_page() is performed only upon success. If the page lock cannot be obtained, the page is left without PG_mlocked, which is again not a problem in the whole unevictable memory design.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
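A sketch of the fixed call-site shape (surrounding context assumed): only check_page arrives locked, so other pages are locked opportunistically:

    if (page == check_page) {
            /* already locked by the caller */
            mlock_vma_page(page);
    } else if (trylock_page(page)) {
            mlock_vma_page(page);
            unlock_page(page);
    }
    /* if trylock fails, the page simply stays without PG_mlocked */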
-
Nicholas Bellinger authored
commit d444edc6 upstream. This patch fixes a long-standing bug in iscsit_build_conn_drop_async_message() where, during ERL=2 connection recovery, a bogus conn_p pointer could end up being used to send the ISCSI_OP_ASYNC_EVENT + DROPPING_CONNECTION notifying the initiator that cmd->logout_cid has failed. The bug was manifesting itself as an oops in iscsit_allocate_cmd() with a bogus conn_p pointer in iscsit_build_conn_drop_async_message().

Reported-by: Arshad Hussain <arshad.hussain@calsoftinc.com>
Reported-by: santosh kulkarni <santosh.kulkarni@calsoftinc.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Vineet Gupta authored
commit 61fb4bfc upstream. Despite the switch to the right UART driver (previous patch), the serial console still doesn't work due to missing CONFIG_SERIAL_OF_PLATFORM. Also fix the default cmdline in DT to not refer to the out-of-tree ARC framebuffer driver for console.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Francois Bedard <Francois.Bedard@synopsys.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Mischa Jonker authored
commit 6eda477b upstream. The Synopsys APB DW UART has a couple of special features that are not in the System C model. In 3.8, the 8250_dw driver didn't really use these features, but from 3.9 onwards, the 8250_dw driver has become incompatible with our model.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Francois Bedard <Francois.Bedard@synopsys.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Joe Thornber authored
commit 0596661f upstream. When suspending a cache the policy is walked and the individual policy hints written to the metadata via sync_metadata(). This led to this lock order:

  policy->lock
  cache_metadata->root_lock

When loading the cache target the policy is populated while the metadata lock is held:

  cache_metadata->root_lock
  policy->lock

Fix this potential lock-inversion (ABBA) deadlock in sync_metadata() by ensuring the cache_metadata root_lock is held whilst all the hints are written, rather than being repeatedly locked while policy->lock is held (as was the case with each callout that policy_walk_mappings() made to the old save_hint() method). Found by turning on the CONFIG_PROVE_LOCKING ("Lock debugging: prove locking correctness") build option. However, it is not clear how the LOCKDEP-reported paths can lead to a deadlock since the two paths, suspending a target and loading a target, never occur at the same time. But that doesn't mean the same lock-inversion couldn't have occurred elsewhere.

Reported-by: Marian Csontos <mcsontos@redhat.com>
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
[ luis: backported to 3.11:
  - version update: 1.1.1 -> 1.1.2 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
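A sketch of the fixed locking shape in sync_metadata() (identifier names assumed): take the metadata lock once around the whole walk instead of re-acquiring it per hint under policy->lock:

    down_write(&cmd->root_lock);
    r = policy_walk_mappings(policy, save_hint, cmd);
    up_write(&cmd->root_lock);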
-
Giacomo Comes authored
commit 10b6ee4a upstream. The Dell XPS 8700 has an onboard DisplayPort and HDMI port and no VGA port. The call to intel_crt_init() freezes the machine, so skip that call.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73559
Signed-off-by: Giacomo Comes <comes at naic.edu>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
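Skips like this are typically keyed off DMI data; a sketch (table name and check placement are assumptions):

    static const struct dmi_system_id intel_no_crt[] = {
            {
                    .ident = "DELL XPS 8700",
                    .matches = {
                            DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
                            DMI_MATCH(DMI_PRODUCT_NAME, "XPS 8700"),
                    },
            },
            { }
    };

    /* in intel_crt_init(): bail out before touching the CRT */
    if (dmi_check_system(intel_no_crt))
            return;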
-
alex chen authored
commit f7cf4f5b upstream. Do not put the bh when buffer_uptodate() fails in ocfs2_write_block() and ocfs2_write_super_or_backup(), because it will be put in b_end_io. Otherwise it will hit the warning "VFS: brelse: Trying to free free buffer".

Signed-off-by: Alex Chen <alex.chen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Acked-by: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Junxiao Bi authored
commit ded2cf71 upstream. There is a race window in dlm_do_recovery() between dlm_remaster_locks() and dlm_reset_recovery() when the recovery master nearly finishes the recovery process for a dead node. After the master sends the FINALIZE_RECO message in dlm_remaster_locks(), another node may become the recovery master for another dead node, and then send the BEGIN_RECO message to all the nodes, including the old master. In the handler of this message, dlm_begin_reco_handler(), on the old master, dlm->reco.dead_node and dlm->reco.new_master will be set to the second dead node and the new master; then in dlm_reset_recovery(), these two variables will be reset to default values. This causes the new recovery master to be unable to finish the recovery process, and it hangs; in the end the whole cluster hangs for recovery.

  old recovery master:                    new recovery master:

  dlm_remaster_locks()
                                          becomes recovery master for
                                          another dead node
                                          dlm_send_begin_reco_message()
  dlm_begin_reco_handler()
  {
    if (dlm->reco.state & DLM_RECO_STATE_FINALIZE)
      return -EAGAIN;
    dlm_set_reco_master(dlm, br->node_idx);
    dlm_set_reco_dead_node(dlm, br->dead_node);
  }
  dlm_reset_recovery()
  {
    dlm_set_reco_dead_node(dlm, O2NM_INVALID_NODE_NUM);
    dlm_set_reco_master(dlm, O2NM_INVALID_NODE_NUM);
  }
                                          hangs in dlm_remaster_locks(),
                                          requesting dlm locks info

Before sending the FINALIZE_RECO message, the recovery master should set DLM_RECO_STATE_FINALIZE for itself and clear it after the recovery is done. This closes the race window, as BEGIN_RECO messages will not be handled before the DLM_RECO_STATE_FINALIZE flag is cleared. A similar race may happen between the new recovery master and a normal node which is in dlm_finalize_reco_handler(); also fix it.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Junxiao Bi authored
commit 34aa8dac upstream. This issue was introduced by commit 800deef3 ("ocfs2: use list_for_each_entry where benefical") in 2007, where it replaced list_for_each with list_for_each_entry. The variable "lock" will point to invalid data if the "tmpq" list is empty, and a panic will be triggered due to this. Sunil advised reverting it back, but the old version was also not right: at the end of the outer for loop, that list_for_each_entry will also leave "lock" pointing to invalid data, and then in the next loop, if the "tmpq" list is empty, "lock" will be stale invalid data and cause the panic. So revert the list_for_each back and reset "lock" to NULL to fix this issue. Another concern is that this seems unable to happen because the "tmpq" list should not be empty; the following scenario describes how it can.

  old lock resource owner (node 1):       migration target (node 2):

  imagine there's a lockres with an EX lock from node 2 in the granted
  list and an NR lock from node x with convert_type EX in the
  converting list.

  dlm_empty_lockres() {
   dlm_pick_migration_target() {
     pick node 2 as target as its lock is
     the first one in granted list.
   }
   dlm_migrate_lockres() {
    dlm_mark_lockres_migrating() {
      res->state |= DLM_LOCK_RES_BLOCK_DIRTY;
      wait_event(dlm->ast_wq, !dlm_lockres_is_dirty(dlm, res));
      // after the above code, we can not dirty lockres any more,
      // so dlm_thread shuffle list will not run
                                          downconvert lock from EX to NR
                                          upconvert lock from NR to EX
  <<< migration may schedule out here; node 2 then sends a down convert
  <<< request to convert type from EX to NR, then sends an up convert
  <<< request to convert type from NR to EX. At this time, the lockres
  <<< granted list is empty, and there are two locks in the converting
  <<< list: node x's up convert lock followed by node 2's up convert lock.
      // will set lockres RES_MIGRATING flag, the following
      // lock/unlock can not run
      dlm_lockres_release_ast(dlm, res);
    }
    dlm_send_one_lockres()
                                          dlm_process_recovery_data()
                                            for (i=0; i<mres->num_locks; i++)
                                              if (ml->node == dlm->node_num)
                                                for (j = DLM_GRANTED_LIST; j <= DLM_BLOCKED_LIST; j++) {
                                                  list_for_each_entry(lock, tmpq, list)
                                                    if (lock) break;
                                                  <<< lock is invalid as grant list is empty
                                                }
                                                if (lock->ml.node != ml->node)
                                                  BUG() >>> crash here
   }

I saw the above lock status in a vmcore of our internal bug.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
Cc: Sunil Mushran <sunil.mushran@gmail.com>
Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
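A sketch of the fix shape: go back to list_for_each() and reset "lock" on every pass so an empty or exhausted "tmpq" is detectable (types assumed):

    struct list_head *iter;

    lock = NULL;
    list_for_each(iter, tmpq) {
            lock = list_entry(iter, struct dlm_lock, list);
            if (lock->ml.cookie == ml->cookie)
                    break;
            lock = NULL;    /* not a match; don't leave a stale pointer */
    }
    if (!lock) {
            /* list empty or no match: handle instead of crashing */
    }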
-
Jan Kara authored
commit 5acda9d1 upstream. After commit 839a8e86 ("writeback: replace custom worker pool implementation with unbound workqueue"), when a device is removed while we are writing to it we crash in bdi_writeback_workfn() -> set_worker_desc() because bdi->dev is NULL. This can happen because even though bdi_unregister() cancels all pending flushing work, nothing really prevents new ones from being queued from balance_dirty_pages() or other places. Fix the problem by clearing the BDI_registered bit in bdi_unregister() and checking it before scheduling any flushing work.

Fixes: 839a8e86
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Derek Basehore <dbasehore@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
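A sketch of the check described above, in the wakeup path (bit name from the commit text; surrounding code assumed):

    spin_lock_bh(&bdi->wb_lock);
    if (test_bit(BDI_registered, &bdi->state))
            mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
    spin_unlock_bh(&bdi->wb_lock);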
-
Derek Basehore authored
commit 6ca738d6 upstream. bdi_wakeup_thread_delayed() used the mod_delayed_work() function to schedule work to writeback dirty inodes. The problem with this is that it can delay work that is scheduled for immediate execution, such as the work from sync_inodes_sb(). This can happen since mod_delayed_work() can now steal work from a work_queue. This fixes the problem by using queue_delayed_work() instead. This is a regression caused by commit 839a8e86 ("writeback: replace custom worker pool implementation with unbound workqueue"). The reason that this causes a problem is that laptop-mode will change the delay, dirty_writeback_centisecs, to 60000 (10 minutes) by default. In the case that bdi_wakeup_thread_delayed() races with sync_inodes_sb(), sync will be stopped for 10 minutes and trigger a hung task. Even if dirty_writeback_centisecs is not long enough to cause a hung task, we still don't want to delay sync for that long. We fix the problem by using queue_delayed_work() when we want to schedule writeback sometime in future. This function doesn't change the timer if it is already armed. For the same reason, we also change bdi_writeback_workfn() to immediately queue the work again in the case that the work_list is not empty. The same problem can happen if the sync work is run on the rescue worker.

[jack@suse.cz: update changelog, add comment, use bdi_wakeup_thread_delayed()]
Signed-off-by: Derek Basehore <dbasehore@chromium.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Alexander Viro <viro@zento.linux.org.uk>
Reviewed-by: Tejun Heo <tj@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: Derek Basehore <dbasehore@chromium.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Benson Leung <bleung@chromium.org>
Cc: Sonny Rao <sonnyrao@chromium.org>
Cc: Luigi Semenzato <semenzato@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
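The core of the change is one call substitution; a sketch (timeout derivation as in the flusher code, treated as an assumption):

    /* queue_delayed_work() is a no-op if the work is already pending,
     * so it cannot push out an immediate sync work item the way
     * mod_delayed_work() could. */
    timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
    queue_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);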
-
Matt Fleming authored
commit a0c32761 upstream. Kees reported the following error:

  arch/sh/kernel/dumpstack.c: In function 'print_trace_address':
  arch/sh/kernel/dumpstack.c:118:2: error: format not a string literal and no format arguments [-Werror=format-security]

Use the "%s" format so that it's impossible to interpret 'data' as a format string.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Reported-by: Kees Cook <keescook@chromium.org>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
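The fix pattern, sketched: pass externally supplied text as an argument, never as the format itself, so it can never be interpreted as format directives:

    printk("%s", (char *)data);     /* instead of printk(data) */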
-
Mark Tinguely authored
commit c88547a8 upstream. Commit f5ea1100 ("xfs: add CRCs to dir2/da node blocks"), introduced in 3.10, incorrectly converted the btree hash index array pointer in xfs_da3_fixhashpath(). It resulted in the current hash always being compared against the first entry in the btree rather than the current block index into the btree block's hash entry array. As a result, it was comparing the wrong hashes, and so could misorder the entries in the btree. For most cases this doesn't cause any problems, as it requires hash collisions to expose the ordering problem. However, when there are hash collisions within a directory, there is a very good probability that the entries will be ordered incorrectly, and that actually matters when duplicate hashes are placed into or removed from the btree block hash entry array. This bug results in an on-disk directory corruption, and that results in directory verifier functions throwing corruption warnings into the logs. While no data or directory entries are lost, access to them may be compromised, and attempts to remove entries from a directory that has suffered from this corruption may result in a filesystem shutdown. xfs_repair will fix the directory hash ordering without data loss occurring.

[dchinner: wrote a useful commit message]
Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
[ luis: backported to 3.11: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
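A sketch of the corrected lookup (identifier names inferred from the commit text, treated as assumptions): index the hash array at the current block index instead of always reading entry zero:

    lasthash = be32_to_cpu(btree[blk->index].hashval);
    /* the buggy form read btree->hashval, i.e. btree[0].hashval */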
-
Alex Deucher authored
commit f1553174 upstream.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Alex Deucher authored
commit 16086279 upstream. This needs to be done to update some of the fields in the connector structure used by the audio code. Noticed by several users on irc.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Takashi Iwai authored
commit 415d555e upstream. The recent fixups for HP laptops to support the mute LED made the speaker output silent on some machines. It turned out that they use NID 0x18 for the speaker while it's also used for controlling the LED via VREF bits, although the current driver code blindly assumes that such a node is a mic pin (where 0x18 usually is). This patch fixes the problem by only changing the VREF bits and keeping the other pin ctl bits.

Reported-and-tested-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
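A sketch of a VREF-only update using the HDA pin-control helpers (helper and bit names from the hda core; their exact use here is an assumption):

    unsigned int pinval = snd_hda_codec_get_pin_target(codec, nid);

    pinval &= ~AC_PINCTL_VREFEN;                    /* clear only the VREF field */
    pinval |= on ? AC_PINCTL_VREF_80 : AC_PINCTL_VREF_HIZ;
    snd_hda_set_pin_ctl_cache(codec, nid, pinval);  /* other pin ctl bits kept */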
-
Christopher Friedt authored
commit aa6de142 upstream. Previously, the vmwgfx_fb driver would allow users to call FBIOSET_VINFO, but it would not adjust the FINFO properly, resulting in distorted screen rendering. The patch corrects that behaviour. See https://bugs.gentoo.org/show_bug.cgi?id=494794 for examples.

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Oleg Nesterov authored
commit d2308225 upstream. pidns_get()->get_pid_ns() can hit ns == NULL. This task_struct can't go away, but task_active_pid_ns(task) is NULL if release_task(task) was already called. Alternatively we could change get_pid_ns(ns) to check ns != NULL, but it seems that the other callers are fine.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
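A sketch close to the described fix (assumed shape of pidns_get()):

    static void *pidns_get(struct task_struct *task)
    {
            struct pid_namespace *ns;

            rcu_read_lock();
            ns = task_active_pid_ns(task);  /* NULL after release_task() */
            if (ns)
                    get_pid_ns(ns);
            rcu_read_unlock();
            return ns;
    }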
-
Jeff Mahoney authored
commit 01d88857 upstream.

  jdm-20004 reiserfs_delete_xattrs: Couldn't delete all xattrs (-2)

The -ENOENT is due to readdir calling dir_emit on the same entry twice. If the dir_emit callback sleeps and the tree is changed underneath us, we won't be able to trust deh_offset(deh) anymore. We need to save next_pos before we might sleep so we can find the next entry.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
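A sketch of the ordering fix (surrounding readdir loop assumed):

    /* deh_offset(deh) cannot be trusted after dir_emit() sleeps,
     * so compute the next position first */
    next_pos = deh_offset(deh) + 1;
    if (!dir_emit(ctx, local_buf, d_reclen, d_ino, DT_UNKNOWN))
            goto end;
    ctx->pos = next_pos;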
-
Al Viro authored
commit dd20908a upstream. It's pointless and actually leads to wrong behaviour in at least one moderately convoluted case (pipe(), close one end, try to get to another via /proc/*/fd and run into ETXTBUSY).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Maarten Lankhorst authored
commit 41ccec35 upstream. This fixes a BUG_ON(bo->sync_obj != NULL); in ttm_bo_release_list.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Eric Whitney authored
commit ad6599ab upstream. Xfstests generic/311 and shared/298 fail when run on a bigalloc file system. Kernel error messages produced during the tests report that blocks to be freed are already on the to-be-freed list. When e2fsck is run at the end of the tests, it typically reports bad i_blocks and bad free blocks counts. The bug that causes these failures is located in ext4_ext_rm_leaf(). Code at the end of the function frees a partial cluster if it's not shared with an extent remaining in the leaf. However, if all the extents in the leaf have been removed, the code dereferences an invalid extent pointer (off the front of the leaf) when the check for sharing is made. This generally has the effect of unconditionally freeing the partial cluster, which leads to the observed failures when the partial cluster is shared with the last extent in the next leaf. Fix this by attempting to free the cluster only if extents remain in the leaf. Any remaining partial cluster will be freed if possible when the next leaf is processed or when leaf removal is complete.

Signed-off-by: Eric Whitney <enwlinux@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
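A sketch of the guard (field names as in fs/ext4/extents.c, treated as assumptions); the eh->eh_entries test is the new part:

    if (partial_cluster > 0 && eh->eh_entries &&
        (EXT4_B2C(sbi, ext4_ext_pblock(ex) + ex->ee_len - 1) !=
         partial_cluster)) {
            ext4_free_blocks(handle, inode, NULL,
                             EXT4_C2B(sbi, partial_cluster),
                             sbi->s_cluster_ratio, flags);
            partial_cluster = 0;
    }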
-
Yann Droneaud authored
commit 5bdb0f02 upstream. In case of error when writing to userspace, function ehca_create_cq() does not set an error code before following its error path. This patch sets the error code to -EFAULT when ib_copy_to_udata() fails. This was caught when using spatch (aka. coccinelle) to rewrite calls to ib_copy_{from,to}_udata().

Link: https://www.gitorious.org/opteya/coccib/source/75ebf2c1033c64c1d81df13e4ae44ee99c989eba:ib_copy_udata.cocci
Link: http://marc.info/?i=cover.1394485254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
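The fix pattern for both this and the following mthca commit, sketched (label name assumed):

    if (ib_copy_to_udata(udata, &resp, sizeof(resp))) {
            cq = ERR_PTR(-EFAULT);  /* previously left unset */
            goto create_cq_exit4;
    }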
-
Yann Droneaud authored
commit 08e74c4b upstream. In case of error when writing to userspace, the function mthca_create_cq() does not set an error code before following its error path. This patch sets the error code to -EFAULT when ib_copy_to_udata() fails. This was caught when using spatch (aka. coccinelle) to rewrite calls to ib_copy_{from,to}_udata().

Link: https://www.gitorious.org/opteya/coccib/source/75ebf2c1033c64c1d81df13e4ae44ee99c989eba:ib_copy_udata.cocci
Link: http://marc.info/?i=cover.1394485254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Stanislav Kinsbursky authored
commit 30646394 upstream. There could be a case when the NFSd file system is mounted in a network namespace different from the socket's one, like below: "ip netns exec" creates new network and mount namespaces, which duplicates the NFSd mount point created in the init_net context. Thus an NFS server stop in the nested network context leads to RPCBIND client destruction in init_net. Then, on NFSd start in the nested network context, the rpc.nfsd process creates a socket in the nested net and passes it into "write_ports", which leads to RPCBIND socket creation in the init_net context for the same reason (the NFSd mount point was created in the init_net context). An attempt to register the passed socket in the nested net then leads to a panic, because no RPCBIND client is present in the nested network namespace. This patch adds a check that the passed socket's net matches the NFSd superblock's one, and returns -EINVAL to user space otherwise.

v2: Put socket on exit.

Reported-by: Weng Meiling <wengmeiling.weng@huawei.com>
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
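A sketch of the check (3.x-era field layout assumed): compare the socket's namespace with the one NFSd's superblock was mounted in:

    struct net *net = file->f_dentry->d_sb->s_fs_info;

    if (sock_net(sock->sk) != net) {
            err = -EINVAL;          /* socket from an alien namespace */
            goto out_put;           /* v2: put the socket on exit */
    }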
-
Neil Horman authored
commit 6f8a1b33 upstream. Commit 03bbcb2e (iommu/vt-d: add quirk for broken interrupt remapping on 55XX chipsets) properly disables irq remapping on the 5500/5520 chipsets that don't correctly perform that feature. However, when I wrote it, I followed the errata sheet linked in that commit too closely, and explicitly tied the activation of the quirk to revision 0x13 of the chip, under the assumption that earlier revisions were not in the field. Recently a system was reported to be suffering from this remap bug and the quirk hadn't triggered, because the revision id register read at a lower value than 0x13, so the quirk test failed improperly. Given this, it seems only prudent to adjust this quirk so that any revision less than 0x13 has the quirk asserted.

[ tglx: Removed the 0x12 comparison of pci id 3405 as this is covered by the <= 0x13 check already ]

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1394649873-14913-1-git-send-email-nhorman@tuxdriver.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
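The quirk test itself becomes a range check; a sketch (helper name from the irq-remapping quirk machinery):

    /* assert the quirk for every revision up to the erratum one */
    if (revision <= 0x13)
            set_irq_remapping_broken();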
-
W. Trevor King authored
commit a4b7f21d upstream. The `lspci -nnvv` output contains (wrapped for line length):

  00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series
          Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
          Subsystem: ASUSTeK Computer Inc. Device [1043:115d]

Signed-off-by: W. Trevor King <wking@tremily.us>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Huacai Chen authored
commit c14af233 upstream. The original MIPS hibernate code flushes cache and TLB entries in swsusp_arch_resume(), but they were removed in commit 44eeab67 (MIPS: Hibernation: Remove SMP TLB and cacheflushing code.). A cross-CPU flush is surely unnecessary because all but the local CPU have already been disabled, but a local flush (at least the TLB flush) is needed. When we do hibernation on Loongson-3 with an E1000E NIC, it is very easy to produce a kernel panic (kernel page fault, or unaligned access). The root cause is that the E1000E driver uses vzalloc_node() to allocate pages; the stale TLB entries of the booting kernel will be misused by the resumed target kernel.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/6643/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
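The local flush being described, as a C-level sketch (the real MIPS resume path is partly assembly, so this is illustrative only):

    /* after the image pages have been restored, before returning
     * control to the resumed kernel: */
    local_flush_tlb_all();  /* drop stale boot-kernel TLB entries */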
-
J. Bruce Fields authored
commit 480efaee upstream.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Trond Myklebust authored
commit e911b815 upstream. If we interrupt the nfs4_wait_for_completion_rpc_task() call in nfs4_run_open_task(), then we don't prevent the RPC call from completing. So freeing up the opendata->f_attr.mdsthreshold in the error path in _nfs4_do_open() leads to a use-after-free when the XDR decoder tries to decode the mdsthreshold information from the server.

Fixes: 82be417a (NFSv4.1 cache mdsthreshold values on OPEN)
Tested-by: Steve Dickson <SteveD@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-