- 17 Jul, 2014 22 commits
-
-
Lv Zheng authored
commit f92fca00 upstream. Move the first command byte write into advance_transaction() so that all EC register accesses that can affect the command processing state machine happen in this asynchronous state machine advancement function. The advance_transaction() function can then be a complete implementation of an asynchronous transaction for a single command, so that:
1. The first command byte can be written in the interrupt context;
2. The command completion waiter can also be used to wait for the first command byte's timeout;
3. In BURST mode, the follow-up command bytes can be written in the interrupt context directly, so the driver doesn't need to return to the task context. Returning to the task context reduces the throughput of BURST mode and, in the worst cases where the system workload is very high, leads to a hardware-driven automatic BURST mode exit.
In order not to increase memory consumption, convert 'done' into 'flags' to carry multiple indications:
1. ACPI_EC_COMMAND_COMPLETE: converted from the original 'done' condition, indicating the completion of the command transaction.
2. ACPI_EC_COMMAND_POLL: indicating that the first command byte can be written. A new command can use this flag to compete for the right to access the underlying hardware. A follow-up bug fix makes use of this new flag.
The two flags are important because they also reflect a key concept in the design of IO programs used in system software. Normally an IO program running in the kernel should first be implemented asynchronously, and the two flags are the most common way to implement its synchronous operations on top of the asynchronous ones:
1. POLL: can be used to block until the asynchronous operation can happen.
2. COMPLETE: can be used to block until the asynchronous operation has completed.
By structuring the code cleanly in this way, many difficult problems can be solved smoothly.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by:
Gareth Williams <gareth@garethwilliams.me.uk> Reported-and-tested-by:
Hans de Goede <jwrdegoede@fedoraproject.org> Reported-by:
Barton Xu <tank.xuhan@gmail.com> Tested-by:
Steffen Weber <steffen.weber@gmail.com> Tested-by:
Arthur Chen <axchen@nvidia.com> Signed-off-by:
Lv Zheng <lv.zheng@intel.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
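As a hedged illustration of the two-flag pattern described above (plain C with pthreads, all names invented; this is not the drivers/acpi/ec.c code): the asynchronous side publishes POLL when a new operation may start and COMPLETE when the current one has finished, and synchronous callers simply block on the flag they need.

    #include <pthread.h>

    #define FLAG_POLL     0x1  /* safe to start a new transaction (write first command byte) */
    #define FLAG_COMPLETE 0x2  /* current transaction has finished */

    struct transaction {
        unsigned flags;
        pthread_mutex_t lock;
        pthread_cond_t wait;
    };

    /* Asynchronous side (the interrupt handler in the driver): advance the
     * state machine and publish the new state by setting a flag. */
    static void advance(struct transaction *t, unsigned flag)
    {
        pthread_mutex_lock(&t->lock);
        t->flags |= flag;
        pthread_cond_broadcast(&t->wait);
        pthread_mutex_unlock(&t->lock);
    }

    /* Synchronous side (the task context): block until the asynchronous side
     * has set the requested flag - FLAG_POLL to start, FLAG_COMPLETE to finish. */
    static void wait_for(struct transaction *t, unsigned flag)
    {
        pthread_mutex_lock(&t->lock);
        while (!(t->flags & flag))
            pthread_cond_wait(&t->wait, &t->lock);
        pthread_mutex_unlock(&t->lock);
    }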
-
Lv Zheng authored
commit 66b42b78 upstream. advance_transaction() is invoked from the IRQ-context GPE handler and from the task-context ec_poll(). Its handling is locked so that the EC state machine is guaranteed to be advanced sequentially. But there is a problem: before invoking advance_transaction(), EC_SC(R) is read, so there can be a race around the lock between the two contexts. The context that reads the register first can lose the race for the lock, and by the time it passes the stale register value to the state machine advancement code the hardware condition can be totally different from when the register was read. Hardware accesses determined from the wrong hardware status can break the EC state machine, and in some cases the functionality of the platform firmware is seriously affected. For example:
1. When 2 EC_DATA(W) writes compete for IBF=0, the 2nd EC_DATA(W) write may be invalid because IBF=1 after the 1st EC_DATA(W) write. The hardware will then either refuse to respond to the next EC_SC(W) write of the next command or discard the current WR_EC command when it receives the EC_SC(W) write of the next command.
2. When 1 EC_SC(W) write and 1 EC_DATA(W) write compete for IBF=0, the EC_DATA(W) write may be invalid because IBF=1 after the EC_SC(W) write. The next EC_DATA(R) may then never be answered by the hardware. This is the root cause of the reported issue.
Fix this issue by moving the EC_SC(R) access inside the lock so that the state machine is advanced consistently.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by:
Gareth Williams <gareth@garethwilliams.me.uk> Reported-and-tested-by:
Hans de Goede <jwrdegoede@fedoraproject.org> Reported-by:
Barton Xu <tank.xuhan@gmail.com> Tested-by:
Steffen Weber <steffen.weber@gmail.com> Tested-by:
Arthur Chen <axchen@nvidia.com> Signed-off-by:
Lv Zheng <lv.zheng@intel.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
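A hedged, self-contained illustration of the race and the fix (plain C with pthreads, not the driver code; read_status() and advance_state_machine() stand in for EC_SC(R) and advance_transaction()):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static unsigned read_status(void) { return 0; }                 /* stand-in for EC_SC(R) */
    static void advance_state_machine(unsigned status) { (void)status; }  /* stand-in */

    /* Racy: two callers can both sample the status, then serialize on the
     * lock and feed values that no longer describe the hardware. */
    static void advance_racy(void)
    {
        unsigned status = read_status();
        pthread_mutex_lock(&lock);
        advance_state_machine(status);
        pthread_mutex_unlock(&lock);
    }

    /* Fixed: the status read happens while the lock is held, so the value
     * acted on always matches the moment the state machine runs. */
    static void advance_fixed(void)
    {
        pthread_mutex_lock(&lock);
        advance_state_machine(read_status());
        pthread_mutex_unlock(&lock);
    }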
-
Andy Whitcroft authored
commit 867f9d46 upstream. The recently merged change (in v3.14-rc6) to ACPI resource detection (below) causes all zero length ACPI resources to be elided from the table: commit b355cee8 Author: Zhang Rui <rui.zhang@intel.com> Date: Thu Feb 27 11:37:15 2014 +0800 ACPI / resources: ignore invalid ACPI device resources This change has caused a regression in (at least) serial port detection for a number of machines (see LP#1313981 [1]). These seem to represent their IO regions (presumably incorrectly) as a zero length region. Reverting the above commit restores these serial devices. Only elide zero length resources which lie at address 0. Fixes: b355cee8 (ACPI / resources: ignore invalid ACPI device resources) Signed-off-by:
Andy Whitcroft <apw@canonical.com> Acked-by:
Zhang Rui <rui.zhang@intel.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
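A hedged sketch of the narrowed check (an illustrative helper, not the upstream diff): rather than eliding every zero-length resource, only treat a resource as invalid when it is zero-length and based at address 0.

    #include <stdbool.h>
    #include <stdint.h>

    /* A zero-length resource at a non-zero address (as some firmware describes
     * its serial ports) is kept; only len == 0 at address 0 is dropped. */
    static bool acpi_resource_should_be_dropped(uint64_t start, uint64_t len)
    {
        return len == 0 && start == 0;
    }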
-
Lan Tianyu authored
commit e63f6e28 upstream. Revert commit ab0fd674 (ACPI / AC: Remove AC's proc directory.), because some old tools (e.g. kpowersave from kde 3.5.10) are still using /proc/acpi/ac_adapter. Fixes: ab0fd674 (ACPI / AC: Remove AC's proc directory.) Reported-and-tested-by:
Sorin Manolache <sorinm@gmail.com> Signed-off-by:
Lan Tianyu <tianyu.lan@intel.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit c024044d upstream. The module test script for the adm1021 driver exposes a cache problem when writing temperature limits. temp_min and temp_max are expected to be stored in milli-degrees C but are stored in degrees C. Reported-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 1035a9e3 upstream. Writing to fanX_div does not clear the cache. As a result, reading from fanX_div may return the old value for up to two seconds after writing a new value. This patch ensures the fan_div cache is updated in set_fan_div(). Reported-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guenter Roeck authored
commit 145e74a4 upstream. The upper limit for write operations to temperature limit registers was clamped to a fractional value. However, limit registers do not support fractional values. As a result, upper limits of 127.5 degrees C or higher resulted in a rounded limit of 128 degrees C. Since limit registers are signed, this was stored as -128 degrees C. Clamp limits to (-55, +127) degrees C to solve the problem. Values written to auto_temp[12]_min and auto_temp[12]_max were not clamped at all, but masked. As a result, out-of-range writes resulted in a more or less arbitrary limit. Clamp those attributes to (0, 127) degrees C for more predictable results. Cc: Axel Lin <axel.lin@ingics.com> Reviewed-by:
Jean Delvare <jdelvare@suse.de> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
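A hedged sketch of the clamping idea (plain C with illustrative names, not the driver code): the limit registers hold whole signed degrees C, so the millidegree sysfs input has to be clamped to the supported range before the divide, otherwise 127.5 degrees C and above rounds to 128 and wraps to -128 in the signed register.

    #include <stdint.h>

    static long clamp_long(long v, long lo, long hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Temperature limits: clamp to (-55, +127) degrees C, input in millidegrees. */
    static int8_t temp_limit_to_reg(long millideg)
    {
        return (int8_t)(clamp_long(millideg, -55000, 127000) / 1000);
    }

    /* auto_temp limits: clamp to (0, 127) degrees C instead of masking. */
    static int8_t auto_temp_to_reg(long millideg)
    {
        return (int8_t)(clamp_long(millideg, 0, 127000) / 1000);
    }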
-
Guenter Roeck authored
commit f6c2dd20 upstream. It is customary to clamp limits instead of bailing out with an error if a configured limit is out of the range supported by the driver. This simplifies limit configuration, since the user will not typically know chip and/or driver specific limits. Reviewed-by:
Jean Delvare <jdelvare@suse.de> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit df86754b upstream. temp2_input should not be writable, fix it. Reported-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aaron Lu authored
commit e8db5d67 upstream. On 05/21/2014 04:22 PM, Aaron Lu wrote:
> On 05/21/2014 01:57 PM, Kui Zhang wrote:
>> Hello,
>>
>> I get following error when rmmod thermal.
>>
>> rmmod thermal
>> Killed
While dealing with this problem, I found another problem that also results in a kernel crash on thermal module removal:
From: Aaron Lu <aaron.lu@intel.com>
Date: Wed, 21 May 2014 16:05:38 +0800
Subject: thermal: hwmon: Make the check for critical temp valid consistent
We used the check tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, temp) to decide whether we need to create the temp_crit attribute file, but only check whether tz->ops->get_crit_temp exists to decide whether we need to remove that attribute file. Some ACPI thermal zones don't have a valid critical trip point, which would result in removing a non-existent device file on thermal module unload. Signed-off-by:
Aaron Lu <aaron.lu@intel.com> Signed-off-by:
Zhang Rui <rui.zhang@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
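A hedged sketch of making the two checks consistent (kernel-style fragment, assuming the existing thermal_zone_device ops; not necessarily the exact upstream helper): use one predicate both when creating and when removing the temp_crit attribute, so the two paths always agree.

    /* Same answer for register and unregister of the hwmon attributes. */
    static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz)
    {
        unsigned long temp;

        return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp);
    }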
-
Guenter Roeck authored
commit 6d827fbc upstream. Commit f36fdb9f (i8k: Force SMM to run on CPU 0) adds support for multi-core CPUs to the driver. Unfortunately, that causes it to fail loading if compiled without SMP support, at least on 32 bit kernels. Kernel log shows "i8k: unable to get SMM Dell signature", and function i8k_smm is found to return -EINVAL. Testing revealed that the culprit is the missing return value check of set_cpus_allowed_ptr. Fixes: f36fdb9f (i8k: Force SMM to run on CPU 0) Reported-by:
Jim Bos <jim876@xs4all.nl> Tested-by:
Jim Bos <jim876@xs4all.nl> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Cc: Andreas Mohr <andi@lisas.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yasuaki Ishimatsu authored
commit 5a6024f1 upstream. When hot-adding and onlining a CPU, a kernel panic occurs, showing the following call trace.
  BUG: unable to handle kernel paging request at 0000000000001d08
  IP: [<ffffffff8114acfd>] __alloc_pages_nodemask+0x9d/0xb10
  PGD 0
  Oops: 0000 [#1] SMP
  ...
  Call Trace:
   [<ffffffff812b8745>] ? cpumask_next_and+0x35/0x50
   [<ffffffff810a3283>] ? find_busiest_group+0x113/0x8f0
   [<ffffffff81193bc9>] ? deactivate_slab+0x349/0x3c0
   [<ffffffff811926f1>] new_slab+0x91/0x300
   [<ffffffff815de95a>] __slab_alloc+0x2bb/0x482
   [<ffffffff8105bc1c>] ? copy_process.part.25+0xfc/0x14c0
   [<ffffffff810a3c78>] ? load_balance+0x218/0x890
   [<ffffffff8101a679>] ? sched_clock+0x9/0x10
   [<ffffffff81105ba9>] ? trace_clock_local+0x9/0x10
   [<ffffffff81193d1c>] kmem_cache_alloc_node+0x8c/0x200
   [<ffffffff8105bc1c>] copy_process.part.25+0xfc/0x14c0
   [<ffffffff81114d0d>] ? trace_buffer_unlock_commit+0x4d/0x60
   [<ffffffff81085a80>] ? kthread_create_on_node+0x140/0x140
   [<ffffffff8105d0ec>] do_fork+0xbc/0x360
   [<ffffffff8105d3b6>] kernel_thread+0x26/0x30
   [<ffffffff81086652>] kthreadd+0x2c2/0x300
   [<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60
   [<ffffffff815f20ec>] ret_from_fork+0x7c/0xb0
   [<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60
In my investigation, I found that the root cause is wq_numa_possible_cpumask. All entries of wq_numa_possible_cpumask are allocated by alloc_cpumask_var_node() and are used without being initialized, so they contain garbage values. When hot-adding and onlining a CPU, wq_update_unbound_numa() is called. wq_update_unbound_numa() calls alloc_unbound_pwq(), and alloc_unbound_pwq() calls get_unbound_pool(). In get_unbound_pool(), worker_pool->node is set as follows:
  /* if cpumask is contained inside a NUMA node, we belong to that node */
  if (wq_numa_enabled) {
          for_each_node(node) {
                  if (cpumask_subset(pool->attrs->cpumask,
                                     wq_numa_possible_cpumask[node])) {
                          pool->node = node;
                          break;
                  }
          }
  }
But wq_numa_possible_cpumask[node] does not hold a correct cpumask, so a wrong node is selected and the kernel panic occurs. With this patch, all entries of wq_numa_possible_cpumask are allocated by zalloc_cpumask_var_node() so that they start out initialized, and the panic disappears. Signed-off-by:
Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Reviewed-by:
Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Fixes: bce90380 ("workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]") Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
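A hedged sketch of the change (kernel-style fragment, illustrative of the described fix rather than the exact diff): allocate the per-node masks with the zeroing variant so nodes whose mask is never filled in do not expose uninitialized bits to cpumask_subset().

    /* Zeroed allocation: an untouched node mask is empty, not garbage. */
    for_each_node(node)
        BUG_ON(!zalloc_cpumask_var_node(&wq_numa_possible_cpumask[node],
                                        GFP_KERNEL, node));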
-
Gu Zheng authored
commit 391acf97 upstream. When running kernel 3.15-rc7+, the following bug occurs:
  [ 9969.258987] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:586
  [ 9969.359906] in_atomic(): 1, irqs_disabled(): 0, pid: 160655, name: python
  [ 9969.441175] INFO: lockdep is turned off.
  [ 9969.488184] CPU: 26 PID: 160655 Comm: python Tainted: G A 3.15.0-rc7+ #85
  [ 9969.581032] Hardware name: FUJITSU-SV PRIMEQUEST 1800E/SB, BIOS PRIMEQUEST 1000 Series BIOS Version 1.39 11/16/2012
  [ 9969.706052] ffffffff81a20e60 ffff8803e941fbd0 ffffffff8162f523 ffff8803e941fd18
  [ 9969.795323] ffff8803e941fbe0 ffffffff8109995a ffff8803e941fc58 ffffffff81633e6c
  [ 9969.884710] ffffffff811ba5dc ffff880405c6b480 ffff88041fdd90a0 0000000000002000
  [ 9969.974071] Call Trace:
  [ 9970.003403] [<ffffffff8162f523>] dump_stack+0x4d/0x66
  [ 9970.065074] [<ffffffff8109995a>] __might_sleep+0xfa/0x130
  [ 9970.130743] [<ffffffff81633e6c>] mutex_lock_nested+0x3c/0x4f0
  [ 9970.200638] [<ffffffff811ba5dc>] ? kmem_cache_alloc+0x1bc/0x210
  [ 9970.272610] [<ffffffff81105807>] cpuset_mems_allowed+0x27/0x140
  [ 9970.344584] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
  [ 9970.409282] [<ffffffff811b1385>] __mpol_dup+0xe5/0x150
  [ 9970.471897] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
  [ 9970.536585] [<ffffffff81068c86>] ? copy_process.part.23+0x606/0x1d40
  [ 9970.613763] [<ffffffff810bf28d>] ? trace_hardirqs_on+0xd/0x10
  [ 9970.683660] [<ffffffff810ddddf>] ? monotonic_to_bootbased+0x2f/0x50
  [ 9970.759795] [<ffffffff81068cf0>] copy_process.part.23+0x670/0x1d40
  [ 9970.834885] [<ffffffff8106a598>] do_fork+0xd8/0x380
  [ 9970.894375] [<ffffffff81110e4c>] ? __audit_syscall_entry+0x9c/0xf0
  [ 9970.969470] [<ffffffff8106a8c6>] SyS_clone+0x16/0x20
  [ 9971.030011] [<ffffffff81642009>] stub_clone+0x69/0x90
  [ 9971.091573] [<ffffffff81641c29>] ? system_call_fastpath+0x16/0x1b
The cause is that cpuset_mems_allowed() tries to take mutex_lock(&callback_mutex) under rcu_read_lock (which is held in __mpol_dup()). In cpuset_mems_allowed() the access to the cpuset is already under rcu_read_lock, so in __mpol_dup() we can shrink the rcu_read_lock protection region to cover only the cpuset access in current_cpuset_is_being_rebound(), and thereby avoid this bug. This patch is a temporary solution that just addresses the bug mentioned above; it cannot fix the long-standing issue with cpuset.mems rebinding on fork(): "When the forker's task_struct is duplicated (which includes ->mems_allowed) and it races with an update to cpuset_being_rebound in update_tasks_nodemask() then the task's mems_allowed doesn't get updated. And the child task's mems_allowed can be wrong if the cpuset's nodemask changes before the child has been added to the cgroup's tasklist." Signed-off-by:
Gu Zheng <guz.fnst@cn.fujitsu.com> Acked-by:
Li Zefan <lizefan@huawei.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
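A hedged sketch of the narrowed RCU region described above (kernel-style fragment; names follow the commit text but this is illustrative, not the exact diff): take rcu_read_lock only around the cpuset access inside current_cpuset_is_being_rebound(), so __mpol_dup() no longer holds it across cpuset_mems_allowed() and its mutex_lock().

    int current_cpuset_is_being_rebound(void)
    {
        int ret;

        /* The only part that needs RCU protection in this path. */
        rcu_read_lock();
        ret = task_cs(current) == cpuset_being_rebound;
        rcu_read_unlock();

        return ret;
    }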
-
Maxime Bizon authored
commit bddbceb6 upstream. Uevents are suppressed during attributes registration, but never restored, so kobject_uevent() does nothing. Signed-off-by:
Maxime Bizon <mbizon@freebox.fr> Signed-off-by:
Tejun Heo <tj@kernel.org> Fixes: 226223ab Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
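A hedged sketch of the fix (kernel-style fragment, illustrative): once the workqueue sysfs attributes have been registered, lift the suppression and emit the ADD uevent that was held back.

    /* Attributes are in place; stop suppressing and announce the device. */
    dev_set_uevent_suppress(&wq_dev->dev, false);
    kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD);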
-
Helge Deller authored
commit 042d27ac upstream. This patch affects only architectures where the stack grows upwards (currently parisc and metag only). On those, do not hardcode the maximum initial stack size to 1GB for 32-bit processes, but make it configurable via a config option. The main problem with the hardcoded stack size is that we have two memory regions which grow upwards: stack and heap. To keep most of the memory available for the heap in a flexmap memory layout, it makes no sense to hard-allocate up to 1GB of memory for the stack, which then can't be used as heap. This patch makes the stack size for 32-bit processes configurable and uses 80MB as the default value, which has been in use on parisc during the last few years and hasn't shown any problems yet. Signed-off-by:
Helge Deller <deller@gmx.de> Signed-off-by:
James Hogan <james.hogan@imgtec.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: linux-parisc@vger.kernel.org Cc: linux-metag@vger.kernel.org Cc: John David Anglin <dave.anglin@bell.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit ab8a261b upstream. On parisc we cannot use the existing compat implementation for fanotify_mark() because, for the 64-bit mask parameter, the higher and lower 32 bits are ordered differently than what the compat function expects from big-endian architectures. Specifically: it turned out that on hppa we end up with different assignments of parameters to kernel arguments depending on whether we call the glibc wrapper function
  int fanotify_mark(int __fanotify_fd, unsigned int __flags, uint64_t __mask, int __dfd, const char *__pathname);
or call the syscall manually via syscall(__NR_fanotify_mark, ...). The reason is that the syscall() function is implemented as a C function, and because the syscall number is now the first parameter, in front of the other parameters, the compiler unexpectedly adds an empty (padding) parameter in front of the u64 value to ensure the correct calling alignment for 64-bit values. This means that on hppa you can't simply use syscall() to call the kernel fanotify_mark() function directly; you have to use the glibc function instead. This patch fixes the hppa arch-specific kernel code to adjust the parameters as if userspace had called the glibc wrapper function fanotify_mark(). Signed-off-by:
Helge Deller <deller@gmx.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit eadcc720 upstream. Signed-off-by:
Helge Deller <deller@gmx.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kardell authored
commit baa3c652 upstream. Since AI lines can be selected at will (linux-3.11), the sending and receiving ends of the FIFO do not agree on which step is used for a line. It only works if the last lines are used, like 5,6,7, and fails if e.g. 2,4,6 are selected in the DT. Signed-off-by:
Jan Kardell <jan.kardell@telliq.com> Tested-by:
Zubair Lutfullah <zubair.lutfullah@gmail.com> Signed-off-by:
Jonathan Cameron <jic23@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Sojka authored
commit d8279a40 upstream. This adds support for Infineon TriBoard TC1798 [1]. Only interface 1 is used as serial line (see [2], Figure 8-6).
[1] http://www.infineon.com/cms/de/product/microcontroller/development-tools-software-and-kits/tricore-tm-development-tools-software-and-kits/starterkits-and-evaluation-boards/starter-kit-tc1798/channel.html?channel=db3a304333b8a7ca0133cfa3d73e4268
[2] http://www.infineon.com/dgdl/TriBoardManual-TC1798-V10.pdf?folderId=db3a304412b407950112b409ae7c0343&fileId=db3a304333b8a7ca0133cfae99fe426a
Signed-off-by:
Michal Sojka <sojkam1@fel.cvut.cz> Cc: Johan Hovold <johan@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bert Vermeulen authored
commit 5a7fbe7e upstream. This patch adds PID 0x0003 to the VID 0x128d (Testo). At least the Testo 435-4 uses this, likely other gear as well. Signed-off-by:
Bert Vermeulen <bert@biot.com> Cc: Johan Hovold <johan@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
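Adding such an ID typically amounts to one new entry in the driver's device table; a hedged sketch using the values from the commit message (the exact macro names used in the driver's ID header may differ):

    /* Testo (VID 0x128d), product 0x0003, e.g. Testo 435-4 */
    { USB_DEVICE(0x128d, 0x0003) },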
-
Andras Kovacs authored
commit b9326057 upstream. Corsair USB dongles are shipped with Corsair AXi series PSUs. These are cp210x serial USB devices, so make the driver detect them. I have a program that can get information from these PSUs. Tested with 2 different dongles shipped with Corsair AX860i and AX1200i units. Signed-off-by:
Andras Kovacs <andras@sth.sze.hu> Signed-off-by:
Johan Hovold <johan@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bernd Wachter authored
commit 3d28bd84 upstream. Add the ID of the Telewell 4G v2 hardware to the option driver to get the legacy serial interface working. Signed-off-by:
Bernd Wachter <bernd.wachter@jolla.com> Signed-off-by:
Johan Hovold <johan@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 Jul, 2014 18 commits
-
-
Greg Kroah-Hartman authored
-
Hugh Dickins authored
commit d05f0cdc upstream. In v2.6.34 commit 9d8cebd4 ("mm: fix mbind vma merge problem") introduced vma merging to mbind(), but it should have also changed the convention of passing start vma from queue_pages_range() (formerly check_range()) to new_vma_page(): vma merging may have already freed that structure, resulting in BUG at mm/mempolicy.c:1738 and probably worse crashes. Fixes: 9d8cebd4 ("mm: fix mbind vma merge problem") Reported-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Tested-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Signed-off-by:
Hugh Dickins <hughd@google.com> Acked-by:
Christoph Lameter <cl@linux.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit fd1232b2 upstream. This patch fixes I/O errors with the sym53c8xx_2 driver when the disk returns QUEUE FULL status. When the controller encounters an error (including QUEUE FULL or BUSY status), it aborts all not yet submitted requests in the function sym_dequeue_from_squeue. This function aborts them with DID_SOFT_ERROR. If the disk has a full tag queue, the request that caused the overflow is aborted with QUEUE FULL status (and the scsi midlayer properly retries it until it is accepted by the disk), but the sym53c8xx_2 driver aborts the following requests with DID_SOFT_ERROR --- for them, the midlayer does just a few retries and then signals the error up to sd. The result is that a disk returning QUEUE FULL causes request failures. The error was reproduced on 53c895 with a COMPAQ BD03685A24 disk (rebranded ST336607LC) with a command queue of 48 or 64 tags. The disk has 64 tags, but under some access patterns it returns QUEUE FULL when there are fewer than 64 pending tags. The SCSI specification allows returning QUEUE FULL at any time, and it is up to the host to retry. Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Cc: Matthew Wilcox <matthew@wil.cx> Cc: James Bottomley <JBottomley@Parallels.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joonsoo Kim authored
commit 03787301 upstream. Commit b1cb0982 ("change the management method of free objects of the slab") introduced a bug in the slab leak detector ('/proc/slab_allocators'). This detector works as follows:
1. traverse all objects on all the slabs.
2. determine whether each object is active or not.
3. if active, print who allocated this object.
But that commit changed the way free objects are managed, and with it the logic that determines whether an object is active. Before, objects in the cpu caches were regarded as inactive; with that commit they are mistakenly regarded as active. This introduces a kernel oops if DEBUG_PAGEALLOC is enabled. If DEBUG_PAGEALLOC is enabled, kernel_map_pages() is used to detect who corrupts free memory in the slab. It unmaps the page table mapping if the object is free and maps it if the object is active. When the slab leak detector checks an object in the cpu caches, it mistakenly thinks the object is active and tries to access the object's memory to retrieve the caller of the allocation. At this point the page table mapping for this object doesn't exist, so an oops occurs. Following is the oops message reported by Dave. It blew up when something tried to read /proc/slab_allocators (Just cat it, and you should see the oops below)
  Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
  Modules linked in: [snip...]
  CPU: 1 PID: 9386 Comm: trinity-c33 Not tainted 3.14.0-rc5+ #131
  task: ffff8801aa46e890 ti: ffff880076924000 task.ti: ffff880076924000
  RIP: 0010:[<ffffffffaa1a8f4a>] [<ffffffffaa1a8f4a>] handle_slab+0x8a/0x180
  RSP: 0018:ffff880076925de0 EFLAGS: 00010002
  RAX: 0000000000001000 RBX: 0000000000000000 RCX: 000000005ce85ce7
  RDX: ffffea00079be100 RSI: 0000000000001000 RDI: ffff880107458000
  RBP: ffff880076925e18 R08: 0000000000000001 R09: 0000000000000000
  R10: 0000000000000000 R11: 000000000000000f R12: ffff8801e6f84000
  R13: ffffea00079be100 R14: ffff880107458000 R15: ffff88022bb8d2c0
  FS: 00007fb769e45740(0000) GS:ffff88024d040000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: ffff8801e6f84ff8 CR3: 00000000a22db000 CR4: 00000000001407e0
  DR0: 0000000002695000 DR1: 0000000002695000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000070602
  Call Trace:
    leaks_show+0xce/0x240
    seq_read+0x28e/0x490
    proc_reg_read+0x3d/0x80
    vfs_read+0x9b/0x160
    SyS_read+0x58/0xb0
    tracesys+0xd4/0xd9
  Code: f5 00 00 00 0f 1f 44 00 00 48 63 c8 44 3b 0c 8a 0f 84 e3 00 00 00 83 c0 01 44 39 c0 72 eb 41 f6 47 1a 01 0f 84 e9 00 00 00 89 f0 <4d> 8b 4c 04 f8 4d 85 c9 0f 84 88 00 00 00 49 8b 7e 08 4d 8d 46
  RIP handle_slab+0x8a/0x180
To fix the problem, I introduce an object status buffer on each slab. With this, we can track object status precisely, so the slab leak detector will not access an active object and no kernel oops will occur. The memory overhead caused by this fix is only imposed on CONFIG_DEBUG_SLAB_LEAK, which is mainly used for debugging, so it isn't a big problem. Signed-off-by:
Joonsoo Kim <iamjoonsoo.kim@lge.com> Reported-by:
Dave Jones <davej@redhat.com> Reported-by:
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Reviewed-by:
Vladimir Davydov <vdavydov@parallels.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rik van Riel authored
commit 107437fe upstream. Changing PTEs and PMDs to pte_numa & pmd_numa is done with the mmap_sem held for reading, which means a pmd can be instantiated and turned into a numa one while __handle_mm_fault() is examining the value of old_pmd. If that happens, __handle_mm_fault() should just return and let the page fault retry, instead of throwing an oops. This is handled by the test for pmd_trans_huge(*pmd) below. Signed-off-by:
Rik van Riel <riel@redhat.com> Reviewed-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by:
Sunil Pandey <sunil.k.pandey@intel.com> Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: linux-mm@kvack.org Cc: lwoodman@redhat.com Cc: dave.hansen@intel.com Link: http://lkml.kernel.org/r/20140429153615.2d72098e@annuminas.surriel.com Signed-off-by:
Ingo Molnar <mingo@kernel.org> Patrick McLean <chutzpah@gentoo.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Zhichuang SUN authored
commit fbc6c4a1 upstream. Function unifb_mmap calls functions which are defined in linux/mm.h and asm/pgtable.h. The related error (for unicore32 with unicore32_defconfig):
  CC drivers/video/fbdev/fb-puv3.o
  drivers/video/fbdev/fb-puv3.c: In function 'unifb_mmap':
  drivers/video/fbdev/fb-puv3.c:646: error: implicit declaration of function 'vm_iomap_memory'
  drivers/video/fbdev/fb-puv3.c:646: error: implicit declaration of function 'pgprot_noncached'
Signed-off-by:
Zhichuang Sun <sunzc522@gmail.com> Cc: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com> Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> Cc: Jingoo Han <jg1.han@samsung.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Joe Perches <joe@perches.com> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: linux-fbdev@vger.kernel.org Acked-by:
Xuetao Guan <gxt@mprc.pku.edu.cn> Signed-off-by:
Tomi Valkeinen <tomi.valkeinen@ti.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chen Gang authored
commit 1ff38c56 upstream. Need to include "asm/pgtable.h" to pull in "asm-generic/pgtable-nopmd.h", so that 'pmd_t' is defined. The related error with allmodconfig:
  CC arch/unicore32/mm/alignment.o
  In file included from arch/unicore32/mm/alignment.c:24:
  arch/unicore32/include/asm/tlbflush.h:135: error: expected ')' before '*' token
  arch/unicore32/include/asm/tlbflush.h:154: error: expected ')' before '*' token
  In file included from arch/unicore32/mm/alignment.c:27:
  arch/unicore32/mm/mm.h:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
  arch/unicore32/mm/mm.h:20: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
  arch/unicore32/mm/mm.h:25: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
  make[1]: *** [arch/unicore32/mm/alignment.o] Error 1
  make: *** [arch/unicore32/mm] Error 2
Signed-off-by:
Chen Gang <gang.chen.5i5j@gmail.com> Acked-by:
Xuetao Guan <gxt@mprc.pku.edu.cn> Signed-off-by:
Xuetao Guan <gxt@mprc.pku.edu.cn> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sander Eikelenboom authored
commit b7a77235 upstream. This (widely used) construction:
  if (printk_ratelimit())
      dev_dbg()
causes the ratelimiting to spam the kernel log with the "callbacks suppressed" messages below, even while the dev_dbg it is supposed to rate limit wouldn't print anything because DEBUG is not defined for this device.
  [ 533.803964] retire_playback_urb: 852 callbacks suppressed
  [ 538.807930] retire_playback_urb: 852 callbacks suppressed
  [ 543.811897] retire_playback_urb: 852 callbacks suppressed
  [ 548.815745] retire_playback_urb: 852 callbacks suppressed
  [ 553.819826] retire_playback_urb: 852 callbacks suppressed
So use dev_dbg_ratelimited() instead of this construction. Signed-off-by:
Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
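A hedged before/after sketch (illustrative device pointer and message): dev_dbg_ratelimited() takes the same arguments as dev_dbg(), and its rate-limit state is tied to this specific debug statement rather than to the global printk_ratelimit() counter.

    /* before: printk_ratelimit() fires (and logs "callbacks suppressed")
     * even when the dev_dbg() below is compiled out */
    if (printk_ratelimit())
        dev_dbg(dev, "frame active\n");

    /* after: same message, rate limited per call site */
    dev_dbg_ratelimited(dev, "frame active\n");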
-
Tim Gardner authored
commit a5065eb6 upstream. BugLink: http://bugs.launchpad.net/bugs/1305133 Malfunctioning or slow devices can cause a flood of dmesg SPAM. I've ignored checkpatch.pl complaints about the use of printk_ratelimit() in favour of prior art in sound/usb/pcm.c. WARNING: Prefer printk_ratelimited or pr_<level>_ratelimited to printk_ratelimit + if (printk_ratelimit() && Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.de> Cc: Eldad Zack <eldad@fogrefinery.com> Cc: Daniel Mack <zonque@gmail.com> Cc: Clemens Ladisch <clemens@ladisch.de> Signed-off-by:
Tim Gardner <tim.gardner@canonical.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Richard Guy Briggs authored
commit aa589a13 upstream. The new- prefixes on ses and auid are unnecessary and break ausearch. Signed-off-by:
Richard Guy Briggs <rgb@redhat.com> Reported-by:
Steve Grubb <sgrubb@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Anatol Pomozov authored
commit e02ba72a upstream. io_destroy() deletes the aio context and all resources related to it. It makes sense that no IO operations connected to the context should be running after the context is destroyed. As the io_context is removed, we have no way to get the status of its requests or to call io_getevents(). The man page for io_destroy says that this function may block until all the context's requests are completed. Before kernel 3.11 io_destroy() did indeed block, but since the aio refactoring in 3.11 this is not true anymore. Here is pseudo-code that shows a testcase for a race condition discovered in 3.11:
  initialize io_context
  io_submit(read to buffer)
  io_destroy()
  // context is destroyed so we can free the resources
  free(buffers);
  // if the buffer is allocated by some other user he'll be surprised
  // to learn that the buffer is still filled by an outstanding operation
  // from the destroyed io_context
The fix is straightforward: add a completion struct and wait on it in io_destroy(); complete() should be called when the number of in-flight requests reaches zero. If two or more io_destroy() calls are made for the same context simultaneously, only the first one waits for IO completion; the behaviour of the other calls is undefined. Tested: ran the http://pastebin.com/LrPsQ4RL testcase for several hours and do not see the race condition anymore. Signed-off-by:
Anatol Pomozov <anatol.pomozov@gmail.com> Signed-off-by:
Benjamin LaHaise <bcrl@kvack.org> Cc: Jan Kara <jack@suse.cz> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
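A hedged sketch of the completion idea (illustrative, not the upstream diff; the struct, field, and function names are made up): the destroying task publishes a completion on the context, and the request-teardown path completes it when the last in-flight request is gone.

    #include <linux/spinlock.h>
    #include <linux/completion.h>

    struct my_ioctx {
        spinlock_t          lock;
        unsigned            reqs_in_flight;
        struct completion   *requests_done;   /* set by the destroying task */
    };

    /* Request teardown path: the last in-flight request wakes the destroyer. */
    static void req_finished(struct my_ioctx *ctx)
    {
        spin_lock(&ctx->lock);
        if (--ctx->reqs_in_flight == 0 && ctx->requests_done)
            complete(ctx->requests_done);
        spin_unlock(&ctx->lock);
    }

    /* io_destroy() path (single destroyer assumed, as in the commit text):
     * publish the completion, then wait for it if anything is still pending. */
    static void wait_for_all_requests(struct my_ioctx *ctx)
    {
        DECLARE_COMPLETION_ONSTACK(done);
        bool wait;

        spin_lock(&ctx->lock);
        ctx->requests_done = &done;
        wait = ctx->reqs_in_flight != 0;
        spin_unlock(&ctx->lock);

        if (wait)
            wait_for_completion(&done);
    }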
-
Imre Deak authored
commit b8c000d9 upstream. Atm, we refcount both power domains and power wells and intel_display_power_enabled_sw() returns the power domain refcount. What the callers are really interested in though is the sw state of the underlying power wells. Due to this we will report incorrectly that a given power domain is off if its power wells were enabled via another power domain, for example POWER_DOMAIN_INIT which enables all power wells. As a fix return instead the state based on the refcount of all power wells included in the passed in power domain.
References: https://bugs.freedesktop.org/show_bug.cgi?id=79505
References: https://bugs.freedesktop.org/show_bug.cgi?id=79038
Reported-by:
Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by:
Imre Deak <imre.deak@intel.com> Reviewed-by:
Damien Lespiau <damien.lespiau@intel.com> Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by:
Jani Nikula <jani.nikula@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Micky Ching authored
commit 5027251e upstream. a27fbf2f ("mmc: add ignorance case for CMD13 CRC error") introduced a cmd.flags condition that is not handled in the Realtek PCI host driver. This makes MMC cards fail to initialize; this patch handles the new cmd.flags condition so that MMC cards can be used. Signed-off-by:
Micky Ching <micky_ching@realsil.com.cn> Signed-off-by:
Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by:
Chris Ball <chris@printf.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
commit ffa216bb upstream. brcmfmac has been broken on my cubietruck with a BCM43362:
  brcmfmac: brcmf_chip_recognition: found AXI chip: BCM43362, rev=1
  brcmfmac: brcmf_c_preinit_dcmds: Firmware version = wl0: Apr 22 2013 14:50:00 version 5.90.195.89.6 FWID 01-b30a427d
since commit 53036261: "brcmfmac: update core reset and disable routines". The problem is that since this commit brcmf_chip_ai_resetcore no longer sets BCMA_IOCTL itself before bringing the core out of reset, instead relying on brcmf_chip_ai_coredisable to do so. But brcmf_chip_ai_coredisable is a nop if the chip is already in reset. This patch modifies brcmf_chip_ai_coredisable to always set BCMA_IOCTL even if the core is already in reset. This fixes brcmfmac hanging during firmware loading on my board. Cc: stable@vger.kernel.org # v3.14 Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Acked-by:
Arend van Spriel <arend@broadcom.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> [arend@broadcom.com: rebase patch on linux-3.14.y branch] Signed-off-by:
Arend van Spriel <arend@broadcom.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Florian Westphal authored
commit 945b2b2d upstream. Quoting Samu Kallio: Basically what's happening is, during netns cleanup, nf_nat_net_exit gets called before ipv4_net_exit. As I understand it, nf_nat_net_exit is supposed to kill any conntrack entries which have NAT context (through nf_ct_iterate_cleanup), but for some reason this doesn't happen (perhaps something else is still holding refs to those entries?). When ipv4_net_exit is called, conntrack entries (including those with NAT context) are cleaned up, but the nat_bysource hashtable is long gone - freed in nf_nat_net_exit. The bug happens when attempting to free a conntrack entry whose NAT hash 'prev' field points to a slot in the freed hash table (head for that bin). We ignore conntracks with null nat bindings. But this is wrong, as these are in bysource hash table as well. Restore nat-cleaning for the netns-is-being-removed case. bug: https://bugzilla.kernel.org/show_bug.cgi?id=65191 Fixes: c2d421e1 ('netfilter: nf_nat: fix race when unloading protocol modules') Reported-by:
Samu Kallio <samu.kallio@aberdeencloud.com> Debugged-by:
Samu Kallio <samu.kallio@aberdeencloud.com> Signed-off-by:
Florian Westphal <fw@strlen.de> Tested-by:
Samu Kallio <samu.kallio@aberdeencloud.com> Signed-off-by:
Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Hurley authored
commit 66528f90 upstream. If INPCK is not set, input parity detection should be disabled. This means parity errors should not be received from the tty driver, and the data received should be treated normally. SUS v3, 11.2.2, General Terminal Interface - Input Modes, states: "If INPCK is set, input parity checking shall be enabled. If INPCK is not set, input parity checking shall be disabled, allowing output parity generation without input parity errors. Note that whether input parity checking is enabled or disabled is independent of whether parity detection is enabled or disabled (see Control Modes). If parity detection is enabled but input parity checking is disabled, the hardware to which the terminal is connected shall recognize the parity bit, but the terminal special file shall not check whether or not this bit is correctly set." Ignore parity errors reported by the tty driver when INPCK is not set, and handle the received data normally. Fixes: Bugzilla #71681, 'Improvement of n_tty_receive_parity_error from n_tty.c' Reported-by:
Ivan <athlon_@mail.ru> Signed-off-by:
Peter Hurley <peter@hurleysoftware.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Hurley authored
commit ef8b9ddc upstream. If IGNBRK is set without either BRKINT or PARMRK set, some uart drivers send a 0x00 byte for BREAK without the TTYBREAK flag to the line discipline, when it should send either nothing or the TTYBREAK flag set. This happens because the read_status_mask masks out the BI condition, which uart_insert_char() then interprets as a normal 0x00 byte. SUS v3 is clear regarding the meaning of IGNBRK; Section 11.2.2, General Terminal Interface - Input Modes, states: "If IGNBRK is set, a break condition detected on input shall be ignored; that is, not put on the input queue and therefore not read by any process." Fix read_status_mask to include the BI bit if IGNBRK is set; the lsr status retains the BI bit if a BREAK is recv'd, which is subsequently ignored in uart_insert_char() when masked with the ignore_status_mask.
Affected drivers: 8250 (all), serial_txx9, mfd, amba-pl010, amba-pl011, atmel_serial, bfin_uart, dz, ip22zilog, max310x, mxs-auart, netx-serial, pnx8xxx_uart, pxa, sb1250-duart, sccnxp, serial_ks8695, sirfsoc_uart, st-asc, vr41xx_siu, zs, sunzilog, fsl_lpuart, sunsab, ucc_uart, bcm63xx_uart, sunsu, efm32-uart, pmac_zilog, mpsc, msm_serial, m32r_sio
Unaffected drivers: omap-serial, rp2, sa1100, imx, icom
Annotated for fixes: altera_uart, mcf
Drivers without break detection: 21285, xilinx-uartps, altera_jtaguart, apbuart, arc-uart, clps711x, max3100, uartlite, msm_serial_hs, nwpserial, lantiq, vt8500_serial
Unknown: samsung, mpc52xx_uart, bfin_sport_uart, cpm_uart/core
Fixes: Bugzilla #71651, '8250_core.c incorrectly handles IGNBRK flag'
Reported-by:
Ivan <athlon_@mail.ru> Signed-off-by:
Peter Hurley <peter@hurleysoftware.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
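A hedged sketch of the read_status_mask change in an 8250-style set_termios path (illustrative; per-driver details vary): include the break-indicator bit whenever IGNBRK, BRKINT or PARMRK is set, so a received BREAK is still recognized as a BREAK and can then be dropped via ignore_status_mask instead of surfacing as a plain 0x00 byte.

    /* serial8250_do_set_termios()-style mask setup (sketch) */
    port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
    if (termios->c_iflag & INPCK)
        port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
    if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))   /* IGNBRK added by this fix */
        port->read_status_mask |= UART_LSR_BI;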
-
Stephen Boyd authored
commit 437ae6a1 upstream. We forgot to add the status bit for the PLLs and we were using the wrong register and masks for configuration, leading to unexpected PLL configurations. Fix this. Fixes: d8b21201 (clk: qcom: Add support for MSM8974's multimedia clock controller (MMCC)) Signed-off-by:
Stephen Boyd <sboyd@codeaurora.org> Signed-off-by:
Mike Turquette <mturquette@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-