- 05 Oct, 2014 25 commits
-
-
Arjun Sreedharan authored
commit 4dc7c76c upstream. scc_bus_softreset() should not unconditionally return zero; propagate the error code instead.
Signed-off-by: Arjun Sreedharan <arjun024@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
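A minimal userspace C sketch of the pattern this fix applies; bus_softreset() and controller_softreset() are illustrative stand-ins, not the driver's actual functions: return the callee's error code instead of discarding it.

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical reset helper: returns 0 on success, negative errno on failure. */
    static int bus_softreset(int simulate_failure)
    {
        return simulate_failure ? -EIO : 0;
    }

    /* Before the fix the caller returned 0 unconditionally; after the fix the
     * callee's error code is propagated up. */
    static int controller_softreset(int simulate_failure)
    {
        int rc = bus_softreset(simulate_failure);
        if (rc)
            return rc;      /* propagate instead of silently returning 0 */
        return 0;
    }

    int main(void)
    {
        printf("ok path: %d, failure path: %d\n",
               controller_softreset(0), controller_softreset(1));
        return 0;
    }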
-
Tejun Heo authored
commit 2a13772a upstream. Crucial M550 may cause data corruption on queued trims and is blacklisted. The pattern used for it fails to match the 1TB model, whose capacity field is four characters instead of three. Widen the pattern.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Charles Reiss <woggling@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81071
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
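The sketch below shows why a fixed-width '???' capacity field misses the longer 1TB model string while '*' matches both; the model strings and patterns here are illustrative only, and fnmatch(3) stands in for libata's internal glob matcher.

    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative model strings and patterns only -- not the exact
         * blacklist entries from the libata driver. */
        const char *m512 = "Crucial_CT512M550SSD1";   /* 3-char capacity field */
        const char *m1tb = "Crucial_CT1024M550SSD1";  /* 4-char capacity field */

        const char *narrow = "Crucial_CT???M550SSD1"; /* matches the 3-char models only */
        const char *wide   = "Crucial_CT*M550SSD*";   /* matches both */

        printf("narrow vs 512GB: %s\n", fnmatch(narrow, m512, 0) == 0 ? "match" : "no match");
        printf("narrow vs 1TB:   %s\n", fnmatch(narrow, m1tb, 0) == 0 ? "match" : "no match");
        printf("wide   vs 512GB: %s\n", fnmatch(wide, m512, 0) == 0 ? "match" : "no match");
        printf("wide   vs 1TB:   %s\n", fnmatch(wide, m1tb, 0) == 0 ? "match" : "no match");
        return 0;
    }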
-
Florian Fainelli authored
commit a9ecdc0f upstream. In case the Device Tree blob passed by the boot agent supplies both an 'interrupts-extended' and an 'interrupts' property in order to allow for older kernels to be usable, prefer the new-style 'interrupts-extended' property, which conveys a lot more information. This allows us to have bootloaders willingly maintaining backwards compatibility with older kernels without entirely deprecating the 'interrupts' property. Update the bindings documentation to describe a situation where both the 'interrupts-extended' and the 'interrupts' property are present, and which one takes precedence over the other.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 730a336c upstream. There still seem to be stability problems with other systems.
Bug: https://bugs.freedesktop.org/show_bug.cgi?id=72921
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 0c78a449 upstream. bapm allows the GPU and CPU to share TDP headroom. It was disabled by default since some laptops hung when it was enabled in conjunction with dpm. It seems to be stable on desktop boards and fixes hangs on boot with dpm enabled on certain boards, so enable it by default on desktop boards.
bug: https://bugs.freedesktop.org/show_bug.cgi?id=72921
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jiri Kosina authored
commit ece4a17d upstream. Without this, ring initialization fails reliably during resume with
[drm:init_ring_common] *ERROR* render ring initialization failed ctl 0001f001 head ffffff8804 tail 00000000 start 000e4000
This is not a complete fix, but it is verified to make the ring initialization failures during resume much less likely. We were not able to root-cause this bug (likely HW-specific to Gen4 chips) yet. This is therefore used as duct tape before the problem is fully understood and a proper fix is created, so that people don't suffer from completely unusable systems in the meantime. The discussion and debugging is happening at https://bugs.freedesktop.org/show_bug.cgi?id=76554
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 3c64bd26 upstream. Return 2 so we can be sure the kernel has the necessary changes for acceleration to work. Note: This patch depends on these two commits:
- drm/radeon: fix cut and paste issue for hawaii.
- drm/radeon: use packet2 for nop on hawaii with old firmware
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Andreas Boll <andreas.boll.dev@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit e9f274b2 upstream. Some hawaii boards use a different method for fetching the voltage information from the vbios.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian König authored
commit f1d2a26b upstream. Seems to make VM flushes more stable on SI and CIK. v2: only use the PFP on the GFX ring on CIK
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 5dc35532 upstream. Looks like the lm63 driver supports the lm64 as well.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit c08abf11 upstream. This patch depends on: e0792981 (drm/radeon/dpm: fix typo in vddci setup for eg/btc)
bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=73053
https://bugzilla.kernel.org/show_bug.cgi?id=68571
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 8f500af4 upstream. This patch depends on: b0880e87 (drm/radeon/dpm: fix vddci setup typo on cayman)
bug: https://bugs.freedesktop.org/show_bug.cgi?id=69723
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 6b57f20c upstream. Some hawaii cards use a different method to fetch the voltage info from the vbios.
bug: https://bugs.freedesktop.org/show_bug.cgi?id=74250
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tetsuo Handa authored
commit a91576d7 upstream. Commit 7dc19d5a "drivers: convert shrinkers to new count/scan API" added deadlock warnings that ttm_page_pool_free() and ttm_dma_page_pool_free() are currently doing GFP_KERNEL allocation. But these functions did not get updated to receive a gfp_t argument. This patch explicitly passes sc->gfp_mask or GFP_KERNEL to these functions, and removes the deadlock warning.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tetsuo Handa authored
commit 71336e01 upstream. While ttm_dma_pool_shrink_scan() tries to take a mutex before doing a GFP_KERNEL allocation, ttm_pool_shrink_scan() does not. This can result in stack overflow if kmalloc() in ttm_page_pool_free() triggers recursion due to memory pressure:
shrink_slab() => ttm_pool_shrink_scan() => ttm_page_pool_free() => kmalloc(GFP_KERNEL) => shrink_slab() => ttm_pool_shrink_scan() => ttm_page_pool_free() => kmalloc(GFP_KERNEL) => ...
Change ttm_pool_shrink_scan() to do the same as ttm_dma_pool_shrink_scan().
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tetsuo Handa authored
commit 22e71691 upstream. I can observe that a RHEL7 environment stalls with 100% CPU usage when a certain type of memory pressure is given. While the shrinker functions are called by shrink_slab() before the OOM killer is triggered, the stall lasts for many minutes. One of the reasons for this stall is that ttm_dma_pool_shrink_count()/ttm_dma_pool_shrink_scan() are called and are blocked at mutex_lock(&_manager->lock). GFP_KERNEL allocation with _manager->lock held causes someone (including kswapd) to deadlock when these functions are called due to memory pressure. This patch changes "mutex_lock();" to "if (!mutex_trylock()) return ...;" in order to avoid the deadlock.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
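A minimal userspace sketch of the trylock-and-bail pattern described above; pthread_mutex_trylock() stands in for the kernel's mutex_trylock(), and none of the identifiers come from the TTM code.

    #include <pthread.h>
    #include <stdio.h>

    /* Toy pool manager guarded by a lock. */
    static pthread_mutex_t manager_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long pool_pages = 1024;

    /* Shrinker-style callback: it must not block on the lock, because the lock
     * holder may itself be waiting on memory reclaim. */
    static unsigned long pool_shrink_scan(unsigned long nr_to_free)
    {
        unsigned long freed = 0;

        if (pthread_mutex_trylock(&manager_lock) != 0)
            return 0;               /* contended: report nothing freed and back off */

        while (nr_to_free-- && pool_pages) {
            pool_pages--;
            freed++;
        }
        pthread_mutex_unlock(&manager_lock);
        return freed;
    }

    int main(void)
    {
        unsigned long freed = pool_shrink_scan(16);

        printf("freed %lu pages, %lu left\n", freed, pool_pages);
        return 0;
    }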
-
Tetsuo Handa authored
commit 46c2df68 upstream. We can use "unsigned int" instead of "atomic_t" by updating the start_pool variable under _manager->lock. This patch will make it possible to avoid skipping when choosing a pool to shrink in round-robin style, after the next patch changes mutex_lock(_manager->lock) to !mutex_trylock(_manager->lock).
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
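A sketch of the idea under the same toy-manager assumption as the previous example: the round-robin start index can be a plain unsigned integer because it is only read and written with the manager lock held.

    #include <pthread.h>
    #include <stdio.h>

    #define NPOOLS 4

    /* Illustrative only: the start index shares the lock that protects the pools,
     * so no atomic type is needed. */
    static pthread_mutex_t manager_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int start_pool;                       /* plain integer, updated under the lock */
    static unsigned long pool_pages[NPOOLS] = { 4, 0, 2, 7 };

    static unsigned long shrink_one_round(void)
    {
        unsigned long freed = 0;

        if (pthread_mutex_trylock(&manager_lock) != 0)
            return 0;

        for (unsigned int i = 0; i < NPOOLS; i++) {
            unsigned int idx = (start_pool + i) % NPOOLS;
            if (pool_pages[idx]) {
                pool_pages[idx]--;
                freed++;
            }
        }
        start_pool = (start_pool + 1) % NPOOLS;           /* next scan starts one pool later */
        pthread_mutex_unlock(&manager_lock);
        return freed;
    }

    int main(void)
    {
        printf("first round freed %lu\n", shrink_one_round());
        printf("second round freed %lu\n", shrink_one_round());
        return 0;
    }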
-
Tetsuo Handa authored
commit 11e504cc upstream. list_empty(&_manager->pools) being false before taking _manager->lock does not guarantee that _manager->npools != 0 after taking _manager->lock, because _manager->npools is updated under _manager->lock.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
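A sketch of the underlying rule, again with invented names: any state peeked at before taking the lock must be re-checked once the lock is actually held.

    #include <pthread.h>
    #include <stdio.h>

    /* Toy manager; the field names only mirror the idea, not the TTM code. */
    static pthread_mutex_t manager_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int npools;          /* only ever updated with manager_lock held */

    static unsigned long shrink(void)
    {
        unsigned long freed = 0;

        /* An unlocked peek at npools could go stale by the time the lock is
         * acquired, so the state has to be re-checked under the lock. */
        if (pthread_mutex_trylock(&manager_lock) != 0)
            return 0;

        if (npools == 0) {               /* authoritative check, made under the lock */
            pthread_mutex_unlock(&manager_lock);
            return 0;
        }

        /* ... walk the npools pools and free pages here ... */
        pthread_mutex_unlock(&manager_lock);
        return freed;
    }

    int main(void)
    {
        printf("freed %lu\n", shrink());
        return 0;
    }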
-
Guido Martínez authored
commit c9a3ad25 upstream. display_timings_release calls kfree on the display_timings object passed to it. Calling kfree after it is wrong. SLUB debug showed the following warning:
=============================================================================
BUG kmalloc-64 (Tainted: G W ): Object already free
-----------------------------------------------------------------------------
Disabling lock debugging due to kernel taint
INFO: Allocated in of_get_display_timings+0x2c/0x214 age=601 cpu=0 pid=884
  __slab_alloc.constprop.79+0x2e0/0x33c
  kmem_cache_alloc+0xac/0xdc
  of_get_display_timings+0x2c/0x214
  panel_probe+0x7c/0x314 [tilcdc]
  platform_drv_probe+0x18/0x48
  [..snip..]
INFO: Freed in panel_destroy+0x18/0x3c [tilcdc] age=0 cpu=0 pid=907
  __slab_free+0x34/0x330
  panel_destroy+0x18/0x3c [tilcdc]
  tilcdc_unload+0xd0/0x118 [tilcdc]
  drm_dev_unregister+0x24/0x98
  [..snip..]
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
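A userspace sketch of the double free being removed; the structures and helpers below are invented and only mirror the shape of a release function that frees the object passed to it.

    #include <stdlib.h>

    /* Illustrative stand-ins only -- not the real display_timings structures. */
    struct timings {
        int *entries;
        size_t count;
    };

    /* Release helper that frees the object itself, like display_timings_release(). */
    static void timings_release(struct timings *t)
    {
        free(t->entries);
        free(t);                 /* the object is gone after this call */
    }

    static void panel_destroy(struct timings *t)
    {
        timings_release(t);
        /* free(t);  <-- the bug the patch removes: freeing t again is a double free */
    }

    int main(void)
    {
        struct timings *t = calloc(1, sizeof(*t));

        if (!t)
            return 1;
        t->entries = calloc(4, sizeof(int));
        t->count = 4;
        panel_destroy(t);
        return 0;
    }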
-
Guido Martínez authored
commit eb565a2b upstream. Unregister resources in the correct order on tilcdc_drm_fini, which is the reverse of the order in which they were registered during tilcdc_drm_init. This also means unregistering the driver before releasing its resources.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
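A sketch of the teardown-ordering rule, using invented init/exit helpers rather than the real tilcdc entry points: release in the reverse order of acquisition, and unwind the same way on a failed init.

    #include <stdio.h>

    static int  acquire_resources(void) { puts("resources acquired");  return 0; }
    static void release_resources(void) { puts("resources released");  }
    static int  register_driver(void)   { puts("driver registered");   return 0; }
    static void unregister_driver(void) { puts("driver unregistered"); }

    static int module_init_sketch(void)
    {
        int err = acquire_resources();
        if (err)
            return err;

        err = register_driver();
        if (err) {
            release_resources();     /* unwind in reverse order on failure, too */
            return err;
        }
        return 0;
    }

    static void module_exit_sketch(void)
    {
        /* Tear down in the reverse order of initialization: the driver must be
         * unregistered before the resources it uses are released. */
        unregister_driver();
        release_resources();
    }

    int main(void)
    {
        if (module_init_sketch() == 0)
            module_exit_sketch();
        return 0;
    }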
-
Guido Martínez authored
commit 3a490122 upstream. The driver did not unregister the allocated framebuffer, which caused memory leaks (and memory manager WARNs) when unloading. Also, the framebuffer device under /dev still existed after unloading. Add a call to drm_fbdev_cma_fini when unloading the module to prevent both issues.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guido Martínez authored
commit 16dcbdef upstream. Add a drm_sysfs_connector_remove call when we destroy the panel to make sure the connector node in sysfs gets deleted. This is required for proper unload and re-load of this driver, otherwise we will get a warning about a duplicate filename in sysfs.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guido Martínez authored
commit daa15b4c upstream. Add a drm_sysfs_connector_remove call when we destroy the panel to make sure the connector node in sysfs gets deleted. This is required for proper unload and re-load of this driver as a module. Without this, we would get a warning at re-load time like so:
tda998x 0-0070: found TDA19988
------------[ cut here ]------------
WARNING: CPU: 0 PID: 825 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x54/0x74()
sysfs: cannot create duplicate filename '/class/drm/card0-HDMI-A-1'
Modules linked in: [..]
CPU: 0 PID: 825 Comm: modprobe Not tainted 3.15.0-rc4-00027-g9dcdef4 #82
[<c0013bb8>] (unwind_backtrace) from [<c0011824>] (show_stack+0x10/0x14)
[<c0011824>] (show_stack) from [<c0034e8c>] (warn_slowpath_common+0x68/0x88)
[<c0034e8c>] (warn_slowpath_common) from [<c0034edc>] (warn_slowpath_fmt+0x30/0x40)
[<c0034edc>] (warn_slowpath_fmt) from [<c01243f4>] (sysfs_warn_dup+0x54/0x74)
[<c01243f4>] (sysfs_warn_dup) from [<c0124708>] (sysfs_do_create_link_sd.isra.2+0xb0/0xb8)
[<c0124708>] (sysfs_do_create_link_sd.isra.2) from [<c02ae37c>] (device_add+0x338/0x520)
[<c02ae37c>] (device_add) from [<c02ae6e8>] (device_create_groups_vargs+0xa0/0xc4)
[<c02ae6e8>] (device_create_groups_vargs) from [<c02ae758>] (device_create+0x24/0x2c)
[<c02ae758>] (device_create) from [<c029b4ec>] (drm_sysfs_connector_add+0x64/0x204)
[<c029b4ec>] (drm_sysfs_connector_add) from [<bf0b1b40>] (slave_modeset_init+0x120/0x1bc [tilcdc])
[<bf0b1b40>] (slave_modeset_init [tilcdc]) from [<bf0b2be8>] (tilcdc_load+0x214/0x4c0 [tilcdc])
[<bf0b2be8>] (tilcdc_load [tilcdc]) from [<c029955c>] (drm_dev_register+0xa4/0x104)
[..snip..]
---[ end trace 4df8d614936ebdee ]---
[drm:drm_sysfs_connector_add] *ERROR* failed to register connector device: -17
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guido Martínez authored
commit e396900e upstream. Add a drm_sysfs_connector_remove call when we destroy the panel to make sure the connector node in sysfs gets deleted. This is required for proper unload and re-load of this driver as a module. Without this, we would get a warning at re-load time like so:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 824 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x54/0x74()
sysfs: cannot create duplicate filename '/class/drm/card0-LVDS-1'
Modules linked in: [...]
CPU: 0 PID: 824 Comm: modprobe Not tainted 3.15.0-rc4-00027-g6484f96-dirty #81
[<c0013bb8>] (unwind_backtrace) from [<c0011824>] (show_stack+0x10/0x14)
[<c0011824>] (show_stack) from [<c0034e8c>] (warn_slowpath_common+0x68/0x88)
[<c0034e8c>] (warn_slowpath_common) from [<c0034edc>] (warn_slowpath_fmt+0x30/0x40)
[<c0034edc>] (warn_slowpath_fmt) from [<c01243f4>] (sysfs_warn_dup+0x54/0x74)
[<c01243f4>] (sysfs_warn_dup) from [<c0124708>] (sysfs_do_create_link_sd.isra.2+0xb0/0xb8)
[<c0124708>] (sysfs_do_create_link_sd.isra.2) from [<c02ae37c>] (device_add+0x338/0x520)
[<c02ae37c>] (device_add) from [<c02ae6e8>] (device_create_groups_vargs+0xa0/0xc4)
[<c02ae6e8>] (device_create_groups_vargs) from [<c02ae758>] (device_create+0x24/0x2c)
[<c02ae758>] (device_create) from [<c029b4ec>] (drm_sysfs_connector_add+0x64/0x204)
[<c029b4ec>] (drm_sysfs_connector_add) from [<bf0b1fec>] (panel_modeset_init+0xb8/0x134 [tilcdc])
[<bf0b1fec>] (panel_modeset_init [tilcdc]) from [<bf0b2bf0>] (tilcdc_load+0x214/0x4c0 [tilcdc])
[<bf0b2bf0>] (tilcdc_load [tilcdc]) from [<c029955c>] (drm_dev_register+0xa4/0x104)
[ .. snip .. ]
---[ end trace b2d09cd9578b0497 ]---
[drm:drm_sysfs_connector_add] *ERROR* failed to register connector device: -17
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ronald Wahl authored
commit 671796dd upstream. The driver assumes that endpoint 4 is always an interrupt endpoint. Unfortunately the type differs between high-speed and full-speed configurations: while in the former case it is indeed an interrupt endpoint, this is not true for the latter case, where it is a bulk endpoint. When sending URBs with the wrong type the kernel will generate a warning message including a backtrace. In this specific case there will be a huge amount of warnings which can bring the system to a freeze. To fix this we are now sending URBs to endpoint 4 using the type found in the endpoint descriptor. A side note: The carl9170 firmware currently specifies endpoint 4 as an interrupt endpoint even in the full-speed configuration, but this has no relevance because before this firmware is loaded the endpoint type is as described above, and after the firmware is running the stick is not reenumerated and so the old descriptor is used.
Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
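A small C sketch of picking the transfer type from the endpoint descriptor instead of assuming "interrupt". The struct and constants here are simplified stand-ins, though the low two bits of bmAttributes do encode the USB transfer type.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy descriptor; the real driver reads this from the device's active configuration. */
    struct ep_descriptor {
        uint8_t bmAttributes;        /* low two bits encode the transfer type */
    };

    #define EP_TYPE_MASK      0x03
    #define EP_TYPE_BULK      0x02
    #define EP_TYPE_INTERRUPT 0x03

    static const char *transfer_type_for(const struct ep_descriptor *ep)
    {
        /* Use the type the descriptor advertises instead of hard-coding
         * "interrupt", which is only correct for the high-speed config. */
        switch (ep->bmAttributes & EP_TYPE_MASK) {
        case EP_TYPE_INTERRUPT: return "interrupt";
        case EP_TYPE_BULK:      return "bulk";
        default:                return "other";
        }
    }

    int main(void)
    {
        struct ep_descriptor hs = { .bmAttributes = EP_TYPE_INTERRUPT };
        struct ep_descriptor fs = { .bmAttributes = EP_TYPE_BULK };

        printf("high-speed ep4: %s\n", transfer_type_for(&hs));
        printf("full-speed ep4: %s\n", transfer_type_for(&fs));
        return 0;
    }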
-
- 17 Sep, 2014 15 commits
-
-
Greg Kroah-Hartman authored
-
David Howells authored
commit 95389b08 upstream. This fixes CVE-2014-3631. It is possible for an associative array to end up with a shortcut node at the root of the tree if there are more than fan-out leaves in the tree, but they all crowd into the same slot in the lowest level (ie. they all have the same first nibble of their index keys). When assoc_array_gc() returns back up the tree after scanning some leaves, it can fall off of the root and crash because it assumes that the back pointer from a shortcut (after label ascend_old_tree) must point to a normal node - which isn't true of a shortcut node at the root. Should we find we're ascending rootwards over a shortcut, we should check to see if the backpointer is zero - and if it is, we have completed the scan. This particular bug cannot occur if the root node is not a shortcut - ie. if you have fewer than 17 keys in a keyring or if you have at least two keys that sit in separate slots (eg. a keyring and a non-keyring). This can be reproduced by:
ring=`keyctl newring bar @s`
for ((i=1; i<=18; i++)); do last_key=`keyctl newring foo$i $ring`; done
keyctl timeout $last_key 2
Doing this:
echo 3 >/proc/sys/kernel/keys/gc_delay
first will speed things up. If we do fall off of the top of the tree, we get the following oops:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
IP: [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
PGD dae15067 PUD cfc24067 PMD 0
Oops: 0000 [#1] SMP
Modules linked in: xt_nat xt_mark nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_ni
CPU: 0 PID: 26011 Comm: kworker/0:1 Not tainted 3.14.9-200.fc20.x86_64 #1
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Workqueue: events key_garbage_collector
task: ffff8800918bd580 ti: ffff8800aac14000 task.ti: ffff8800aac14000
RIP: 0010:[<ffffffff8136cea7>] [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
RSP: 0018:ffff8800aac15d40 EFLAGS: 00010206
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8800aaecacc0
RDX: ffff8800daecf440 RSI: 0000000000000001 RDI: ffff8800aadc2bc0
RBP: ffff8800aac15da8 R08: 0000000000000001 R09: 0000000000000003
R10: ffffffff8136ccc7 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000070 R15: 0000000000000001
FS: 0000000000000000(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000018 CR3: 00000000db10d000 CR4: 00000000000006f0
Stack:
 ffff8800aac15d50 0000000000000011 ffff8800aac15db8 ffffffff812e2a70
 ffff880091a00600 0000000000000000 ffff8800aadc2bc3 00000000cd42c987
 ffff88003702df20 ffff88003702dfa0 0000000053b65c09 ffff8800aac15fd8
Call Trace:
 [<ffffffff812e2a70>] ? keyring_detect_cycle_iterator+0x30/0x30
 [<ffffffff812e3e75>] keyring_gc+0x75/0x80
 [<ffffffff812e1424>] key_garbage_collector+0x154/0x3c0
 [<ffffffff810a67b6>] process_one_work+0x176/0x430
 [<ffffffff810a744b>] worker_thread+0x11b/0x3a0
 [<ffffffff810a7330>] ? rescuer_thread+0x3b0/0x3b0
 [<ffffffff810ae1a8>] kthread+0xd8/0xf0
 [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40
 [<ffffffff816ffb7c>] ret_from_fork+0x7c/0xb0
 [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40
Code: 08 4c 8b 22 0f 84 bf 00 00 00 41 83 c7 01 49 83 e4 fc 41 83 ff 0f 4c 89 65 c0 0f 8f 5a fe ff ff 48 8b 45 c0 4d 63 cf 49 83 c1 02 <4e> 8b 34 c8 4d 85 f6 0f 84 be 00 00 00 41 f6 c6 01 0f 84 92
RIP [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540
 RSP <ffff8800aac15d40>
CR2: 0000000000000018
---[ end trace 1129028a088c0cbd ]---
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Howells authored
commit 27419604 upstream. An edit script should be considered inaccessible by a function once it has called assoc_array_apply_edit() or assoc_array_cancel_edit(). However, assoc_array_gc() is accessing the edit script just after the gc_complete: label.
Reported-by: Andreea-Cristina Bernat <bernat.ada@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Andreea-Cristina Bernat <bernat.ada@gmail.com>
cc: shemming@brocade.com
cc: paulmck@linux.vnet.ibm.com
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sage Weil authored
commit 73c3d481 upstream. We preallocate a few of the message types we get back from the mon. If we get a larger message than we are expecting, fall back to trying to allocate a new one instead of blindly using the one we have.
Signed-off-by: Sage Weil <sage@redhat.com>
Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
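A userspace sketch of the fallback described above, with invented names: reuse the preallocated buffer when the incoming message fits, otherwise allocate a fresh one rather than overrunning the smaller buffer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative reply-buffer handling -- not the actual ceph messenger API. */
    #define PREALLOC_SIZE 512

    static char prealloc_buf[PREALLOC_SIZE];

    /* Return a buffer big enough for an incoming reply: reuse the preallocated
     * one when it fits, otherwise fall back to a fresh allocation. */
    static char *get_reply_buffer(size_t needed, int *allocated)
    {
        if (needed <= PREALLOC_SIZE) {
            *allocated = 0;
            return prealloc_buf;
        }
        *allocated = 1;
        return malloc(needed);
    }

    int main(void)
    {
        int allocated;
        char *buf = get_reply_buffer(4096, &allocated);

        if (!buf)
            return 1;
        memset(buf, 0, 4096);
        printf("used %s buffer\n", allocated ? "heap-allocated" : "preallocated");
        if (allocated)
            free(buf);
        return 0;
    }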
-
Linus Torvalds authored
commit 99d263d4 upstream. Josef Bacik found a performance regression between 3.2 and 3.10 and narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long' accesses for dcache name comparison and hashing"). He reports:
"The test case is essentially
  for (i = 0; i < 1000000; i++) mkdir("a$i");
On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k dir/sec with 3.10. This is because we spend waaaaay more time in __d_lookup on 3.10 than in 3.2. The new hashing function for strings is suboptimal for < sizeof(unsigned long) string names (and hell even > sizeof(unsigned long) string names that I've tested). I broke out the old hashing function and the new one into a userspace helper to get real numbers and this is what I'm getting:
  Old hash table had 1000000 entries, 0 dupes, 0 max dupes
  New hash table had 12628 entries, 987372 dupes, 900 max dupes
  We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
My test does the hash, and then does the d_hash into a integer pointer array the same size as the dentry hash table on my system, and then just increments the value at the address we got to see how many entries we overlap with. As you can see the old hash function ended up with all 1 million entries in their own bucket, whereas the new one they are only distributed among ~12.5k buckets, which is why we're using so much more CPU in __d_lookup".
The reason for this hash regression is two-fold:
- On 64-bit architectures the down-mixing of the original 64-bit word-at-a-time hash into the final 32-bit hash value is very simplistic and suboptimal, and just adds the two 32-bit parts together. In particular, because there is no bit shuffling and the mixing boundary is also a byte boundary, similar character patterns in the low and high word easily end up just canceling each other out.
- the old byte-at-a-time hash mixed each byte into the final hash as it hashed the path component name, resulting in the low bits of the hash generally being a good source of hash data. That is not true for the word-at-a-time case, and the hash data is distributed among all the bits.
The fix is the same in both cases: do a better job of mixing the bits up and using as much of the hash data as possible. We already have the "hash_32|64()" functions to do that.
Reported-by: Josef Bacik <jbacik@fb.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
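A rough userspace illustration of the two folding strategies: adding the 32-bit halves lets similar patterns cancel out, while a multiplicative mix spreads them across the buckets. The constant below is a commonly used 64-bit golden-ratio value standing in for hash_64(); it is not the kernel's exact implementation of that era.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t fold_by_addition(uint64_t h, unsigned bits)
    {
        /* The problematic variant: just add the halves, keep the low bits. */
        return (uint32_t)((h & 0xffffffffu) + (h >> 32)) & ((1u << bits) - 1);
    }

    static uint32_t fold_multiplicative(uint64_t h, unsigned bits)
    {
        /* Mix all input bits toward the top bits, then take those. */
        return (uint32_t)((h * 0x9e3779b97f4a7c15ull) >> (64 - bits));
    }

    int main(void)
    {
        /* Two word-at-a-time hash values that collide under the additive fold
         * because the high half offsets the low half. */
        uint64_t a = 0x0000000100000001ull;
        uint64_t b = 0x0000000200000000ull;

        printf("additive fold:       %u vs %u\n", fold_by_addition(a, 12), fold_by_addition(b, 12));
        printf("multiplicative fold: %u vs %u\n", fold_multiplicative(a, 12), fold_multiplicative(b, 12));
        return 0;
    }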
-
Mario Kleiner authored
commit 7820e5ee upstream. Linux 3.16 fixed multiple bugs in kms pageflip completion events and timestamping, which were originally introduced in Linux 3.13. These fixes have been backported to all stable kernels since 3.13. However, the userspace nouveau-ddx needs to be aware of whether it is running on a kernel on which these bugs are fixed or not. Bump the patchlevel of the drm driver version to signal this, so backporting this patch to stable 3.13+ kernels will give the ddx the required info.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit bcc05910 upstream. If scsi_remove_host() is invoked after a SCSI device has been blocked, if the fast_io_fail_tmo or dev_loss_tmo work gets scheduled on the workqueue executing srp_remove_work() and if an I/O request is scheduled after the SCSI device had been blocked by e.g. multipathd then the following deadlock can occur:
kworker/6:1 D ffff880831f3c460 0 195 2 0x00000000
Call Trace:
 [<ffffffff814aafd9>] schedule+0x29/0x70
 [<ffffffff814aa0ef>] schedule_timeout+0x10f/0x2a0
 [<ffffffff8105af6f>] msleep+0x2f/0x40
 [<ffffffff8123b0ae>] __blk_drain_queue+0x4e/0x180
 [<ffffffff8123d2d5>] blk_cleanup_queue+0x225/0x230
 [<ffffffffa0010732>] __scsi_remove_device+0x62/0xe0 [scsi_mod]
 [<ffffffffa000ed2f>] scsi_forget_host+0x6f/0x80 [scsi_mod]
 [<ffffffffa0002eba>] scsi_remove_host+0x7a/0x130 [scsi_mod]
 [<ffffffffa07cf5c5>] srp_remove_work+0x95/0x180 [ib_srp]
 [<ffffffff8106d7aa>] process_one_work+0x1ea/0x6c0
 [<ffffffff8106dd9b>] worker_thread+0x11b/0x3a0
 [<ffffffff810758bd>] kthread+0xed/0x110
 [<ffffffff814b972c>] ret_from_fork+0x7c/0xb0
multipathd D ffff880096acc460 0 5340 1 0x00000000
Call Trace:
 [<ffffffff814aafd9>] schedule+0x29/0x70
 [<ffffffff814aa0ef>] schedule_timeout+0x10f/0x2a0
 [<ffffffff814ab79b>] io_schedule_timeout+0x9b/0xf0
 [<ffffffff814abe1c>] wait_for_completion_io_timeout+0xdc/0x110
 [<ffffffff81244b9b>] blk_execute_rq+0x9b/0x100
 [<ffffffff8124f665>] sg_io+0x1a5/0x450
 [<ffffffff8124fd21>] scsi_cmd_ioctl+0x2a1/0x430
 [<ffffffff8124fef2>] scsi_cmd_blk_ioctl+0x42/0x50
 [<ffffffffa00ec97e>] sd_ioctl+0xbe/0x140 [sd_mod]
 [<ffffffff8124bd04>] blkdev_ioctl+0x234/0x840
 [<ffffffff811cb491>] block_ioctl+0x41/0x50
 [<ffffffff811a0df0>] do_vfs_ioctl+0x300/0x520
 [<ffffffff811a1051>] SyS_ioctl+0x41/0x80
 [<ffffffff814b9962>] tracesys+0xd0/0xd5
Fix this by scheduling removal work on another workqueue than the transport layer timers.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: David Dillow <dave@thedillows.org>
Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Roger Quadros authored
commit 40ddbf50 upstream. commit 65b97cf6 introduced in v3.7 caused a regression by using a reversed CS_MASK, thus causing omap_calculate_ecc to always fail. As the NAND base driver never checks for .calculate()'s return value, the zeroed ECC values are used as is without showing any error to the user. However, this won't work and the NAND device won't be guarded by any error code. Fix the issue by using the correct mask. Code was tested on omap3beagle using the following procedure:
- flash the primary bootloader (MLO) from the kernel to the first NAND partition using nandwrite.
- boot the board from NAND. This utilizes the OMAP ROM loader that relies on 1-bit Hamming code ECC.
Fixes: 65b97cf6 (mtd: nand: omap2: handle nand on gpmc)
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kevin Hao authored
commit a152056c upstream. I got the following panic on my fsl p5020ds board.
Unable to handle kernel paging request for data at address 0x7375627379737465
Faulting instruction address: 0xc000000000100778
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=24 CoreNet Generic
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.15.0-next-20140613 #145
task: c0000000fe080000 ti: c0000000fe088000 task.ti: c0000000fe088000
NIP: c000000000100778 LR: c00000000010073c CTR: 0000000000000000
REGS: c0000000fe08aa00 TRAP: 0300 Not tainted (3.15.0-next-20140613)
MSR: 0000000080029000 <CE,EE,ME> CR: 24ad2e24 XER: 00000000
DEAR: 7375627379737465 ESR: 0000000000000000 SOFTE: 1
GPR00: c0000000000c99b0 c0000000fe08ac80 c0000000009598e0 c0000000fe001d80
GPR04: 00000000000000d0 0000000000000913 c000000007902b20 0000000000000000
GPR08: c0000000feaae888 0000000000000000 0000000007091000 0000000000200200
GPR12: 0000000028ad2e28 c00000000fff4000 c0000000007abe08 0000000000000000
GPR16: c0000000007ab160 c0000000007aaf98 c00000000060ba68 c0000000007abda8
GPR20: c0000000007abde8 c0000000feaea6f8 c0000000feaea708 c0000000007abd10
GPR24: c000000000989370 c0000000008c6228 00000000000041ed c0000000fe00a400
GPR28: c00000000017c1cc 00000000000000d0 7375627379737465 c0000000fe001d80
NIP [c000000000100778] .__kmalloc_track_caller+0x70/0x168
LR [c00000000010073c] .__kmalloc_track_caller+0x34/0x168
Call Trace:
[c0000000fe08ac80] [c00000000087e6b8] uevent_sock_list+0x0/0x10 (unreliable)
[c0000000fe08ad20] [c0000000000c99b0] .kstrdup+0x44/0x90
[c0000000fe08adc0] [c00000000017c1cc] .__kernfs_new_node+0x4c/0x130
[c0000000fe08ae70] [c00000000017d7e4] .kernfs_new_node+0x2c/0x64
[c0000000fe08aef0] [c00000000017db00] .kernfs_create_dir_ns+0x34/0xc8
[c0000000fe08af80] [c00000000018067c] .sysfs_create_dir_ns+0x58/0xcc
[c0000000fe08b010] [c0000000002c711c] .kobject_add_internal+0xc8/0x384
[c0000000fe08b0b0] [c0000000002c7644] .kobject_add+0x64/0xc8
[c0000000fe08b140] [c000000000355ebc] .device_add+0x11c/0x654
[c0000000fe08b200] [c0000000002b5988] .add_disk+0x20c/0x4b4
[c0000000fe08b2c0] [c0000000003a21d4] .add_mtd_blktrans_dev+0x340/0x514
[c0000000fe08b350] [c0000000003a3410] .mtdblock_add_mtd+0x74/0xb4
[c0000000fe08b3e0] [c0000000003a32cc] .blktrans_notify_add+0x64/0x94
[c0000000fe08b470] [c00000000039b5b4] .add_mtd_device+0x1d4/0x368
[c0000000fe08b520] [c00000000039b830] .mtd_device_parse_register+0xe8/0x104
[c0000000fe08b5c0] [c0000000003b8408] .of_flash_probe+0x72c/0x734
[c0000000fe08b750] [c00000000035ba40] .platform_drv_probe+0x38/0x84
[c0000000fe08b7d0] [c0000000003599a4] .really_probe+0xa4/0x29c
[c0000000fe08b870] [c000000000359d3c] .__driver_attach+0x100/0x104
[c0000000fe08b900] [c00000000035746c] .bus_for_each_dev+0x84/0xe4
[c0000000fe08b9a0] [c0000000003593c0] .driver_attach+0x24/0x38
[c0000000fe08ba10] [c000000000358f24] .bus_add_driver+0x1c8/0x2ac
[c0000000fe08bab0] [c00000000035a3a4] .driver_register+0x8c/0x158
[c0000000fe08bb30] [c00000000035b9f4] .__platform_driver_register+0x6c/0x80
[c0000000fe08bba0] [c00000000084e080] .of_flash_driver_init+0x1c/0x30
[c0000000fe08bc10] [c000000000001864] .do_one_initcall+0xbc/0x238
[c0000000fe08bd00] [c00000000082cdc0] .kernel_init_freeable+0x188/0x268
[c0000000fe08bdb0] [c0000000000020a0] .kernel_init+0x1c/0xf7c
[c0000000fe08be30] [c000000000000884] .ret_from_kernel_thread+0x58/0xd4
Instruction dump:
41bd0010 480000c8 4bf04eb5 60000000 e94d0028 e93f0000 7cc95214 e8a60008
7fc9502a 2fbe0000 419e00c8 e93f0022 <7f7e482a> 39200000 88ed06b2 992d06b2
---[ end trace b4c9a94804a42d40 ]---
It seems that the corrupted partition header on my mtd device triggers a bug in the ftl. In function build_maps() it will allocate the buffers needed by the mtd partition, but if something goes wrong such as kmalloc failure, mtd read error or invalid partition header parameter, it will free all allocated buffers and then return non-zero. In my case, it seems that partition header parameter 'NumTransferUnits' is invalid. And ftl_freepart() is a function which frees all the partition buffers allocated by build_maps(). Given that build_maps() is a self-cleaning function, there is no need to invoke ftl_freepart() even if build_maps() returns with an error. Otherwise the buffers would be freed twice and weird things would happen.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pavel Shilovsky authored
commit f736906a upstream. The existing code calls server->ops->close(), which is not right. This causes XFS test generic/310 to fail. Fix this by using the server->ops->closedir() function.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pavel Shilovsky authored
commit 1bbe4997 upstream. The existing code uses the old MAX_NAME constant. This causes XFS test generic/013 to fail. Fix it by replacing MAX_NAME with PATH_MAX that SMB1 uses. Also remove an unused MAX_NAME constant definition.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pavel Shilovsky authored
commit a07d3220 upstream. CIFS servers process nlink counts differently for files and directories. In cifs_rename(), if the request fails on the existing target, we try to remove it through cifs_unlink(), but this is not what we want to do for directories. As a result the following sequence of commands
mkdir {1,2}; mv -T 1 2; rmdir {1,2}; mkdir {1,2}; echo foo > 2/bar
and XFS test generic/023 fail with an -ENOENT error. That's why the second mkdir reuses the existing inode (target inode of the mv -T command) with the S_DEAD flag. Fix this by checking whether the target is a directory or not and calling cifs_rmdir() rather than cifs_unlink() for directories.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
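A userspace analogue of the decision this fix introduces, using plain lstat()/rmdir()/unlink(); the cifs helpers themselves of course differ.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Pick the removal primitive based on whether the existing target is a
     * directory -- the same choice the cifs code makes between its cifs_rmdir()
     * and cifs_unlink() helpers. */
    static int remove_existing_target(const char *path)
    {
        struct stat st;

        if (lstat(path, &st) != 0)
            return -1;

        if (S_ISDIR(st.st_mode))
            return rmdir(path);      /* directories must not go through unlink() */
        return unlink(path);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        if (remove_existing_target(argv[1]) != 0) {
            perror("remove");
            return 1;
        }
        return 0;
    }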
-
Miklos Szeredi authored
commit 44b1d530 upstream. Add d_is_dir(dentry) helper which is analogous to S_ISDIR(). To avoid confusion, rename d_is_directory() to d_can_lookup().
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pavel Shilovsky authored
commit b46799a8 upstream. When we request a rename we also need to update the attributes of both source and target parent directories. Not doing so causes the generic/309 xfstest to fail on SMB2 mounts. Fix this by marking these directories for force revalidation.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve French authored
commit 18f39e7b upstream. As Raphael Geissert pointed out, tcon_error_exit can dereference tcon, and there is one path in which tcon can be null.
Signed-off-by: Steve French <smfrench@gmail.com>
Reported-by: Raphael Geissert <geissert@debian.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-