- 12 Sep, 2014 40 commits
-
Sarah Sharp authored
commit 1aa9578c upstream. Don Zickus <dzickus@redhat.com> writes: Some co-workers of mine bought Samsung laptops that had mostly usb3 ports. Those ports did not resume correctly (the driver would timeout communicating and fail). This led to frustration as suspend/resume is a common use for laptops. Poking around, I applied the reset on resume quirk to this chipset and the resume started working. Reloading the xhci_hcd module had been the temporary workaround. Signed-off-by:
Sarah Sharp <sarah.a.sharp@linux.intel.com> Reported-by:
Don Zickus <dzickus@redhat.com> Tested-by:
Prarit Bhargava <prarit@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit ebaacf5c)
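A minimal sketch of how such a reset-on-resume quirk is typically wired up in the xHCI PCI glue; the vendor/device IDs below are placeholders, not the IDs handled by this commit:

```c
/*
 * Sketch only: applying a reset-on-resume quirk in
 * drivers/usb/host/xhci-pci.c.  The PCI IDs here are hypothetical
 * placeholders; the real commit matches the affected host controller.
 */
static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	if (pdev->vendor == 0x1234 && pdev->device == 0x5678) /* placeholder IDs */
		xhci->quirks |= XHCI_RESET_ON_RESUME;
}
```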
-
Marcelo Tosatti authored
After free_loaded_vmcs executes, the "loaded_vmcs" structure is kfreed, and now vmx->loaded_vmcs points to a kfreed area. Subsequent free_loaded_vmcs then attempts to manipulate vmx->loaded_vmcs. Switch the order to avoid the problem. https://bugzilla.redhat.com/show_bug.cgi?id=1047892 Reviewed-by:
Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> (cherry picked from commit 26a865f4) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
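A simplified sketch of the ordering fix described above (abbreviated from the vCPU teardown path, not the verbatim hunk):

```c
/*
 * Simplified sketch of the teardown ordering: release the nested state,
 * which may still dereference vmx->loaded_vmcs, before freeing the
 * loaded_vmcs structure itself.
 */
static void vmx_free_vcpu_sketch(struct vcpu_vmx *vmx)
{
	free_nested(vmx);                   /* may still touch vmx->loaded_vmcs */
	free_loaded_vmcs(vmx->loaded_vmcs); /* kfree()s the structure, so do it last */
}
```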
-
Marcelo Tosatti authored
It is possible for __direct_map to be called on invalid root_hpa (-1), two examples: 1) try_async_pf -> can_do_async_pf -> vmx_interrupt_allowed -> nested_vmx_vmexit 2) vmx_handle_exit -> vmx_interrupt_allowed -> nested_vmx_vmexit Then to load_vmcs12_host_state and kvm_mmu_reset_context. Check for this possibility, let fault exception be regenerated. BZ: https://bugzilla.redhat.com/show_bug.cgi?id=924916 Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 989c6b34) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
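A rough sketch of the guard (the parameter list is abbreviated and hypothetical; VALID_PAGE() is the existing KVM MMU helper for this check):

```c
/* Sketch: bail out early when the shadow root is invalid and let the
 * fault be regenerated instead of walking a bogus page table. */
static int __direct_map_sketch(struct kvm_vcpu *vcpu, gfn_t gfn, pfn_t pfn)
{
	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
		return 0;

	/* ... normal shadow page-table mapping continues here ... */
	return 1;
}
```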
-
Rob Herring authored
Move the outer_cache declaration out of the CONFIG_OUTER_CACHE ifdef so that outer_cache can be used inside an IS_ENABLED condition. Signed-off-by:
Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> (cherry picked from commit 0b53c11d) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
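A sketch of the header layout this enables (names follow arch/arm/include/asm/outercache.h; the exact hunk may differ):

```c
/* Declaration visible regardless of CONFIG_OUTER_CACHE, so code such as
 * "if (IS_ENABLED(CONFIG_OUTER_CACHE) && outer_cache.flush_all) ..."
 * compiles in both configurations. */
extern struct outer_cache_fns outer_cache;

#ifdef CONFIG_OUTER_CACHE
static inline void outer_flush_all(void)
{
	if (outer_cache.flush_all)
		outer_cache.flush_all();
}
#else
static inline void outer_flush_all(void) { }
#endif
```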
-
Dan Carpenter authored
The call to clamp_t() first truncates the variable to a signed 8 bit value, and as a result the actual clamp is a no-op. Fixes: 0d78156e ('p54: improve site survey') Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> (cherry picked from commit 608cfbe4) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
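A self-contained illustration of this bug class, using a simplified stand-in for the kernel's clamp_t() rather than the driver code itself:

```c
#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's clamp_t(type, val, lo, hi). */
#define clamp_t(type, val, lo, hi) \
	((type)(val) < (type)(lo) ? (type)(lo) : \
	 (type)(val) > (type)(hi) ? (type)(hi) : (type)(val))

int main(void)
{
	int input = 200;	/* value wider than what an s8 can hold */

	/* Truncating to int8_t first turns 200 into -56, so the clamp is a no-op. */
	int8_t broken = clamp_t(int8_t, input, -128, 127);

	/* Clamp in the wide type first, then narrow: yields 127 as intended. */
	int8_t fixed = (int8_t)clamp_t(int, input, -128, 127);

	printf("broken=%d fixed=%d\n", broken, fixed);	/* broken=-56 fixed=127 */
	return 0;
}
```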
-
Ben Hutchings authored
builddeb generates a control file that says the linux-headers package can only be built for the build system primary architecture. This breaks cross-building configurations. We should use $debarch for this instead. Since $debarch is not yet set when generating the control file, set Architecture: any and use control file variables to fill in the description. Fixes: cd8d60a2 ('kbuild: create linux-headers package in deb-pkg') Reported-and-tested-by:
"Niew, Sh." <shniew@gmail.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Michal Marek <mmarek@suse.cz> (cherry picked from commit f8ce239d) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Alexei Starovoitov authored
Commit a998d434 claimed to introduce negative offset support to the x86 jit, but it could not have worked: at the time LD+ABS or LD+IND instructions are executed via a call into bpf_internal_load_pointer_neg_helper(), %edx (the 3rd argument of this function) held a junk value instead of the access size in bytes (1, 2 or 4). Store the size into %edx instead of %ecx (which is what the original commit intended to do). Fixes: a998d434 ("bpf jit: Let the x86 jit handle negative offsets") Signed-off-by:
Alexei Starovoitov <ast@plumgrid.com> Cc: Jan Seiffert <kaffeemonster@googlemail.com> Cc: Eric Dumazet <edumazet@google.com> Acked-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit fdfaf64e) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
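For reference, this is the helper the JITed LD_ABS/LD_IND path calls for negative offsets (prototype as in net/core/filter.c); under the x86-64 SysV calling convention the third argument travels in %rdx/%edx, which is why the size must be loaded there rather than into %ecx:

```c
/* Third argument (size) arrives in %edx under the x86-64 SysV ABI. */
void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb,
					   int k, unsigned int size);
```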
-
Josh Durgin authored
With the current full handling, there is a race between osds and clients getting the first map marked full. If the osd wins, it will return -ENOSPC to any writes, but the client may already have writes in flight. This results in the client getting the error and propagating it up the stack. For rbd, the block layer turns this into EIO, which can cause corruption in filesystems above it. To avoid this race, osds are being changed to drop writes that came from clients with an osdmap older than the last osdmap marked full. In order for this to work, clients must resend all writes after they encounter a full -> not full transition in the osdmap. osds will wait for an updated map instead of processing a request from a client with a newer map, so resent writes will not be dropped by the osd unless there is another not full -> full transition. This approach requires both osds and clients to be fixed to avoid the race. Old clients talking to osds with this fix may hang instead of returning EIO and potentially corrupting an fs. New clients talking to old osds have the same behavior as before if they encounter this race. Fixes: http://tracker.ceph.com/issues/6938 Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Josh Durgin <josh.durgin@inktank.com> (cherry picked from commit 9a1ea2db) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Paul E. McKenney authored
According to the C standard 3.4.3p3, overflow of a signed integer results in undefined behavior. This commit therefore changes the definitions of time_after(), time_after_eq(), time_after64(), and time_after_eq64() to avoid this undefined behavior. The trick is that the subtraction is done using unsigned arithmetic, which according to 6.2.5p9 cannot overflow because it is defined as modulo arithmetic. This has the added (though admittedly quite small) benefit of shortening four lines of code by four characters each. Note that the C standard considers the cast from unsigned to signed to be implementation-defined, see 6.3.1.3p3. However, on a two's-complement system, an implementation that defines anything other than a reinterpretation of the bits is free to come to me, and I will be happy to act as a witness for its being committed to an insane asylum. (Although I have nothing against saturating arithmetic or signals in some cases, these things really should not be the default when compiling an operating-system kernel.) Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: John Stultz <john.stultz@linaro.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Kevin Easton <kevin@guarana.org> [ paulmck: Included time_after64() and time_after_eq64(), as suggested by Eric Dumazet, also fixed commit message.] Reviewed-by:
Josh Triplett <josh@joshtriplett.org> (cherry picked from commit 5a581b36) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
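The resulting definition looks roughly like this (shown for time_after(); the other three follow the same pattern), a sketch matching include/linux/jiffies.h after the change:

```c
/* Subtraction is done on unsigned values (well defined, modulo 2^BITS_PER_LONG);
 * only the already-reduced result is reinterpreted as signed for the comparison. */
#define time_after(a, b)		\
	(typecheck(unsigned long, a) &&	\
	 typecheck(unsigned long, b) &&	\
	 ((long)((b) - (a)) < 0))
#define time_before(a, b)	time_after(b, a)
```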
-
Roman Volkov authored
commit 1f91ecc1 upstream. When selecting the audio output destinations (headphones, FP headphones, multichannel output), the channel routing should be changed depending on which destination is selected. Also unnecessary I2S channels are digitally muted. This function is called when the user selects the destination in the ALSA mixer. Signed-off-by:
Roman Volkov <v1ron@mail.ru> Signed-off-by:
Clemens Ladisch <clemens@ladisch.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 212b4654)
-
Filipe David Borba Manana authored
When using a mix of compressed file extents and prealloc extents, it is possible to fill a page of a file with random, garbage data from some unrelated previous use of the page, instead of a sequence of zeroes. A simple sequence of steps to get into such case, taken from the test case I made for xfstests, is: _scratch_mkfs _scratch_mount "-o compress-force=lzo" $XFS_IO_PROG -f -c "pwrite -S 0x06 -b 18670 266978 18670" $SCRATCH_MNT/foobar $XFS_IO_PROG -c "falloc 26450 665194" $SCRATCH_MNT/foobar $XFS_IO_PROG -c "truncate 542872" $SCRATCH_MNT/foobar $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar This results in the following file items in the fs tree: item 4 key (257 INODE_ITEM 0) itemoff 15879 itemsize 160 inode generation 6 transid 6 size 542872 block group 0 mode 100600 item 5 key (257 INODE_REF 256) itemoff 15863 itemsize 16 inode ref index 2 namelen 6 name: foobar item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53 extent data disk byte 0 nr 0 gen 6 extent data offset 0 nr 24576 ram 266240 extent compression 0 item 7 key (257 EXTENT_DATA 24576) itemoff 15757 itemsize 53 prealloc data disk byte 12849152 nr 241664 gen 6 prealloc data offset 0 nr 241664 item 8 key (257 EXTENT_DATA 266240) itemoff 15704 itemsize 53 extent data disk byte 12845056 nr 4096 gen 6 extent data offset 0 nr 20480 ram 20480 extent compression 2 item 9 key (257 EXTENT_DATA 286720) itemoff 15651 itemsize 53 prealloc data disk byte 13090816 nr 405504 gen 6 prealloc data offset 0 nr 258048 The on disk extent at offset 266240 (which corresponds to 1 single disk block), contains 5 compressed chunks of file data. Each of the first 4 compress 4096 bytes of file data, while the last one only compresses 3024 bytes of file data. Therefore a read into the file region [285648 ; 286720[ (length = 4096 - 3024 = 1072 bytes) should always return zeroes (our next extent is a prealloc one). The solution here is the compression code path to zero the remaining (untouched) bytes of the last page it uncompressed data into, as the information about how much space the file data consumes in the last page is not known in the upper layer fs/btrfs/extent_io.c:__do_readpage(). In __do_readpage we were correctly zeroing the remainder of the page but only if it corresponds to the last page of the inode and if the inode's size is not a multiple of the page size. This would cause not only returning random data on reads, but also permanently storing random data when updating parts of the region that should be zeroed. For the example above, it means updating a single byte in the region [285648 ; 286720[ would store that byte correctly but also store random data on disk. A test case for xfstests follows soon. Signed-off-by:
Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by:
Chris Mason <clm@fb.com> (cherry picked from commit a2aa75e1) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
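A sketch of the zeroing step the compression path needs (an illustrative helper, not the literal btrfs hunk):

```c
/* After copying the last chunk of decompressed data into a page, clear the
 * rest of that page so stale page contents never show up as file data. */
static void zero_decompressed_page_tail(struct page *page, size_t bytes_filled)
{
	if (bytes_filled < PAGE_CACHE_SIZE) {
		char *kaddr = kmap_atomic(page);

		memset(kaddr + bytes_filled, 0, PAGE_CACHE_SIZE - bytes_filled);
		kunmap_atomic(kaddr);
	}
}
```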
-
Marc Kleine-Budde authored
If flexcan_chip_start() in flexcan_open() fails, the interrupt is not freed, this patch adds the missing cleanup. Cc: linux-stable <stable@vger.kernel.org> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> (cherry picked from commit 7e9e148a) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
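A sketch of the corrected error unwinding in flexcan_open() (structure simplified; the point is the added free_irq() on the chip-start failure path):

```c
static int flexcan_open(struct net_device *dev)
{
	int err;

	err = open_candev(dev);
	if (err)
		return err;

	err = request_irq(dev->irq, flexcan_irq, IRQF_SHARED, dev->name, dev);
	if (err)
		goto out_close;

	err = flexcan_chip_start(dev);
	if (err)
		goto out_free_irq;	/* previously jumped past the IRQ cleanup */

	netif_start_queue(dev);
	return 0;

 out_free_irq:
	free_irq(dev->irq, dev);
 out_close:
	close_candev(dev);
	return err;
}
```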
-
Tejun Heo authored
PREPARE_[DELAYED_]WORK() are being phased out. They have few users and a nasty surprise in terms of reentrancy guarantee as workqueue considers work items to be different if they don't have the same work function. firewire core-device and sbp2 have been multiplexing work items with multiple work functions. Introduce fw_device_workfn() and sbp2_lu_workfn() which invoke fw_device->workfn and sbp2_logical_unit->workfn respectively and always use the two functions as the work functions and update the users to set the ->workfn fields instead of overriding work functions using PREPARE_DELAYED_WORK(). This fixes a variety of possible regressions since a2c1c57b "workqueue: consider work function when searching for busy work items" due to which fw_workqueue lost its required non-reentrancy property. Signed-off-by:
Tejun Heo <tj@kernel.org> Acked-by:
Stefan Richter <stefanr@s5r6.in-berlin.de> Cc: linux1394-devel@lists.sourceforge.net Cc: stable@vger.kernel.org # v3.9+ Cc: stable@vger.kernel.org # v3.8.2+ Cc: stable@vger.kernel.org # v3.4.60+ Cc: stable@vger.kernel.org # v3.2.40+ (cherry picked from commit 70044d71) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
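The indirection introduced by the patch looks roughly like this (shown for the firewire core; sbp2 gets the analogous sbp2_lu_workfn()):

```c
/* One fixed work function for the work item; the actual behaviour is
 * selected through the ->workfn pointer instead of PREPARE_DELAYED_WORK(). */
static void fw_device_workfn(struct work_struct *work)
{
	struct fw_device *device = container_of(to_delayed_work(work),
						struct fw_device, work);
	device->workfn(work);
}
```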
-
Roman Volkov authored
The CS4245 is actually connected to I2S channel 1 for capture, not channel 2. Otherwise capture and playback do not work for the CS4245. Signed-off-by:
Roman Volkov <v1ron@mail.ru> Signed-off-by:
Clemens Ladisch <clemens@ladisch.de> (cherry picked from commit 3dd77654) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Jan Beulich authored
Backends may need to protect themselves against an insane number of produced requests stored by a frontend, in case they iterate over requests until reaching the req_prod value. There can't be more requests on the ring than the difference between produced requests and produced (but possibly not yet published) responses. This is a more strict alternative to a patch previously posted by Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>. Signed-off-by:
Jan Beulich <jbeulich@suse.com> Signed-off-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (cherry picked from commit 8d925690) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
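The check boils down to a macro of roughly this shape (a sketch after include/xen/interface/io/ring.h; the exact name may differ):

```c
/* A frontend can never legitimately have more unconsumed requests in
 * flight than fit on the ring, measured against responses the backend
 * has produced (but possibly not yet pushed). */
#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)			\
	(((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
```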
-
stephen hemminger authored
Fix warning about 0 used as NULL. Signed-off-by:
Stephen Hemminger <stephen@networkplumber.org> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 9eaee8be) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Maxim Patlasov authored
The way fuse calls truncate_pagecache() from fuse_change_attributes() is completely wrong. Because, w/o i_mutex held, we are never sure whether 'oldsize' and 'attr->size' are valid by the time of execution of truncate_pagecache(inode, oldsize, attr->size). In fact, as soon as we released fc->lock in the middle of fuse_change_attributes(), we completely lose control of actions which may happen with the given inode until we reach truncate_pagecache. The list of potentially dangerous actions includes mmap-ed reads and writes, ftruncate(2) and write(2) extending file size. The typical outcome of doing truncate_pagecache() with outdated arguments is data corruption from the user's point of view. This is (in some sense) acceptable in cases when the issue is triggered by a change of the file on the server (i.e. externally wrt fuse operation), but it is absolutely intolerable in scenarios when a single fuse client modifies a file without any external intervention. A real life case I discovered by fsx-linux looked like this: 1. Shrinking ftruncate(2) comes to fuse_do_setattr(). The latter sends FUSE_SETATTR to the server synchronously, but before getting fc->lock ... 2. fuse_dentry_revalidate() is asynchronously called. It sends FUSE_LOOKUP to the server synchronously, then calls fuse_change_attributes(). The latter updates i_size, releases fc->lock, but before comparing oldsize vs attr->size.. 3. fuse_do_setattr() from the first step proceeds by acquiring fc->lock and updating attributes and i_size, but now oldsize is equal to outarg.attr.size because i_size has just been updated (step 2). Hence, fuse_do_setattr() returns w/o calling truncate_pagecache(). 4. As soon as ftruncate(2) completes, the user extends file size by write(2) making a hole in the middle of file, then reads data from the hole either by read(2) or mmap-ed read. The user expects to get zero data from the hole, but gets stale data because truncate_pagecache() is not executed yet. The scenario above illustrates one side of the problem: not truncating the page cache even though we should. Another side corresponds to truncating page cache too late, when the state of inode changed significantly. Theoretically, the following is possible: 1. As in the previous scenario fuse_dentry_revalidate() discovered that i_size changed (due to our own fuse_do_setattr()) and is going to call truncate_pagecache() for some 'new_size' it believes valid right now. But by the time that particular truncate_pagecache() is called ... 2. fuse_do_setattr() returns (either having called truncate_pagecache() or not -- it doesn't matter). 3. The file is extended either by write(2) or ftruncate(2) or fallocate(2). 4. mmap-ed write makes a page in the extended region dirty. The result will be the loss of the data the user wrote in the fourth step. The patch is a hotfix resolving the issue in a simplistic way: let's skip the dangerous i_size update and truncate_pagecache if an operation changing file size is in progress. This simplistic approach looks correct for the cases w/o external changes. And to handle them properly, more sophisticated and intrusive techniques (e.g. an NFS-like one) would be required. I'd like to postpone it until the issue is well discussed on the mailing list(s). Changed in v2: - improved patch description to cover both sides of the issue. Signed-off-by:
Maxim Patlasov <mpatlasov@parallels.com> Signed-off-by:
Miklos Szeredi <mszeredi@suse.cz> Cc: stable@vger.kernel.org (cherry picked from commit 06a7c3c2) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Tejun Heo authored
task->cgroups is an RCU pointer pointing to struct css_set. A task switches to a different css_set on cgroup migration but a css_set doesn't change once created and its pointers to cgroup_subsys_states aren't RCU protected. task_subsys_state[_check]() is the macro to acquire css given a task and subsys_id pair. It RCU-dereferences task->cgroups->subsys[], not task->cgroups, so the RCU pointer task->cgroups ends up being dereferenced without read_barrier_depends() after it. It's broken. Fix it by introducing task_css_set[_check]() which does RCU-dereference on task->cgroups. task_subsys_state[_check]() is reimplemented to directly dereference ->subsys[] of the css_set returned from task_css_set[_check](). This removes some of the sparse RCU warnings in cgroup. v2: Fixed unbalanced parenthesis and there's no need to use rcu_dereference_raw() when !CONFIG_PROVE_RCU. Both spotted by Li. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Fengguang Wu <fengguang.wu@intel.com> Acked-by:
Li Zefan <lizefan@huawei.com> Cc: stable@vger.kernel.org (cherry picked from commit 14611e51) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
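A simplified sketch of the new accessors (the real versions are macros with lockdep-checked _check variants; callers must be inside an RCU read-side critical section):

```c
/* RCU-dereference task->cgroups itself ... */
static inline struct css_set *task_css_set(struct task_struct *task)
{
	return rcu_dereference(task->cgroups);
}

/* ... and only then index into the css_set's subsys[] array, which is
 * immutable after creation and needs no further RCU protection. */
static inline struct cgroup_subsys_state *
task_subsys_state(struct task_struct *task, int subsys_id)
{
	return task_css_set(task)->subsys[subsys_id];
}
```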
-
Lai Jiangshan authored
When a kworker should die, the kworker is notified through the WORKER_DIE flag instead of kthread_should_stop(). This, IIRC, is primarily to keep the test synchronized inside the worker_pool lock. WORKER_DIE is first set while holding pool->lock, the lock is dropped and kthread_stop() is called. Unfortunately, this means that there's a slight chance that the target kworker may see WORKER_DIE before kthread_stop() finishes and exits and frees the target task before or during kthread_stop(). Fix it by pinning the target task before setting WORKER_DIE and putting it after kthread_stop() is done. tj: Improved patch description and comment. Moved pinning above WORKER_DIE to better signify what it's protecting. CC: stable@vger.kernel.org Signed-off-by:
Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by:
Tejun Heo <tj@kernel.org> (cherry picked from commit 5bdfff96) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
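The lifetime handling described above, as a rough sketch (locking simplified; not the verbatim destroy_worker() hunk):

```c
static void destroy_worker_sketch(struct worker *worker)
{
	struct task_struct *task = worker->task;

	get_task_struct(task);		/* pin before WORKER_DIE becomes visible */

	spin_lock_irq(&worker->pool->lock);
	worker->flags |= WORKER_DIE;
	spin_unlock_irq(&worker->pool->lock);

	kthread_stop(task);
	put_task_struct(task);		/* only now is it safe to drop the reference */
}
```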
-
Emil Goode authored
This patch removes a generic hard_header_len check from the usbnet module that is causing dropped packets under certain circumstances for devices that send rx packets that cross urb boundaries. One example is the AX88772B which occasionally sends rx packets that cross urb boundaries where the remaining partial packet is sent with no hardware header. When the buffer with a partial packet contains fewer octets than the value of hard_header_len the buffer is discarded by the usbnet module. With the AX88772B this can be reproduced by using ping with a packet size between 1965-1976. The bug has been reported here: https://bugzilla.kernel.org/show_bug.cgi?id=29082 This patch introduces the following changes: - Removes the generic hard_header_len check in the rx_complete function in the usbnet module. - Introduces an ETH_HLEN check for skbs that are not cloned from within a rx_fixup callback. - For safety a hard_header_len check is added to each rx_fixup callback function that could be affected by this change. These extra checks could possibly be removed by someone who has the hardware to test. - Removes a call to dev_kfree_skb_any() and instead utilizes the dev->done list to queue skbs for cleanup. The changes place full responsibility on the rx_fixup callback functions that clone skbs to only pass valid skbs to the usbnet_skb_return function. Signed-off-by:
Emil Goode <emilgoode@gmail.com> Reported-by:
Igor Gnatenko <i.gnatenko.brain@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit eb85569f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Martin Schwidefsky authored
The kernel currently crashes with a low-address-protection exception if a user space process executes an instruction that tries to use the linkage stack. Set the base-ASTE origin and the subspace-ASTE origin of the dispatchable-unit-control-table to point to a dummy ASTE. Set up control register 15 to point to an empty linkage stack with no room left. A user space process with a linkage stack instruction will still crash but with a different exception which is correctly translated to a segmentation fault instead of a kernel oops. Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com> (cherry picked from commit 8d7f6690) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Nicholas Bellinger authored
This patch re-adds the ability to optionally run in buffered FILEIO mode (eg: w/o O_DSYNC) for device backends in order to once again use the Linux buffered cache as a write-back storage mechanism. This logic was originally dropped with mainline v3.5-rc commit: commit a4dff304 Author: Nicholas Bellinger <nab@linux-iscsi.org> Date: Wed May 30 16:25:41 2012 -0700 target/file: Use O_DSYNC by default for FILEIO backends The difference with this patch is that fd_create_virtdevice() now forces the explicit setting of emulate_write_cache=1 when buffered FILEIO operation has been enabled. (v2: Switch to FDBD_HAS_BUFFERED_IO_WCE + add more detailed comment as requested by hch) Reported-by:
Ferry <iscsitmp@bananateam.nl> Cc: Christoph Hellwig <hch@lst.de> Cc: <stable@vger.kernel.org> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> (cherry picked from commit b32f4c7e) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Jan Kara authored
qib_user_sdma_queue_pkts() gets called with mmap_sem held for writing. Except for get_user_pages() deep down in qib_user_sdma_pin_pages() we don't seem to need mmap_sem at all. Even more interestingly the function qib_user_sdma_queue_pkts() (and also qib_user_sdma_coalesce() called somewhat later) call copy_from_user() which can hit a page fault and we deadlock on trying to get mmap_sem when handling that fault. So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and leave mmap_sem locking for mm. This deadlock has actually been observed in the wild when the node is under memory pressure. Cc: <stable@vger.kernel.org> Reviewed-by:
Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by:
Jan Kara <jack@suse.cz> Signed-off-by:
Roland Dreier <roland@purestorage.com> (cherry picked from commit 603e7729) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
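A sketch of the pinning change (simplified; get_user_pages_fast() takes mmap_sem internally, so the caller no longer holds it across code that can fault):

```c
static int qib_pin_pages_sketch(unsigned long start_page, int num_pages,
				struct page **pages)
{
	int got;

	/* Before: down_write(&current->mm->mmap_sem) around get_user_pages(). */
	got = get_user_pages_fast(start_page, num_pages, 1 /* write */, pages);
	if (got < 0)
		return got;
	if (got != num_pages) {
		while (got-- > 0)
			put_page(pages[got]);
		return -ENOMEM;
	}
	return 0;
}
```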
-
Steven Rostedt authored
ftrace_trace_function is a variable that holds what function will be called directly by the assembly code (mcount). If just a single function is registered and it handles recursion itself, then the assembly will call that function directly without any helper function. It also passes in the ftrace_op that was registered with the callback. The ftrace_op to send is stored in the function_trace_op variable. The ftrace_trace_function and function_trace_op need to be coordinated such that the called callback won't be called with the wrong ftrace_op, otherwise bad things can happen if it expected a different op. Luckily, there's no callback that doesn't use the helper functions that requires this. But there soon will be and this needs to be fixed. Use a set_function_trace_op to store the ftrace_op to set the function_trace_op to when it is safe to do so (during the update function within the breakpoint or stop machine calls). Or if dynamic ftrace is not being used (static tracing) then we have to do a bit more synchronization when the ftrace_trace_function is set as that takes effect immediately (as opposed to dynamic ftrace doing it with the modification of the trampoline). Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> (cherry picked from commit 405e1d83) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Ben Hutchings authored
This function is largely a duplicate of paste_selection() in drivers/tty/vt/selection.c, but with its own selection state. The speakup selection mechanism should really be merged with vt. For now, apply the changes from 'TTY: vt, fix paste_selection ldisc handling', 'tty: Make ldisc input flow control concurrency-friendly', and 'tty: Fix unsafe vt paste_selection()'. References: https://bugs.debian.org/735202 References: https://bugs.debian.org/744015 Reported-by:
Paul Gevers <elbrus@debian.org> Reported-and-tested-by:
Jarek Czekalski <jarekczek@poczta.onet.pl> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Cc: <stable@vger.kernel.org> # v3.8 but needs backporting for < 3.12 Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 28a821c3) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Pratyush Anand authored
Problem summary: the problem has been observed generally with PM states where VBUS goes off during suspend. There are some SS USB devices which take a longer time for link training compared to many others. Such devices fail to reconnect with the same old address which was associated with them before suspend. When the system resumes, at some point (dpm_run_callback-> usb_dev_resume->usb_resume->usb_resume_both->usb_resume_device-> usb_port_resume) SW reads the hub status. If the device is present, then it finishes port resume and re-enumerates the device with the same address. If the device is not present, SW thinks the device was removed during suspend and therefore does a logical disconnection and removes all the resources allocated for this device. Now, if I put a sufficient delay just before the root hub status read in usb_resume_device, SW always sees that the device is present. In the normal course (without any delay) SW sees that no device is present and then removes all resources associated with the device at this port. In the latter case, after some time, the device says "hey, I am here", and the host enumerates it, but with a new address. The problem was reproduced by connecting a Verbatim USB3.0 hard disc to my STiH407 XHCI host running a 3.10 kernel. A similar problem has been reported here: https://bugzilla.kernel.org/show_bug.cgi?id=53211 Reading the above, it seems the bug was not in 3.6.6 and was present in 3.8, and again it was not present for some in 3.12.6 while it was present for a few others. I tested with 3.13-FC19 running on an i686 desktop, and the problem was still there. However, I failed to reproduce it with 3.16-RC4 running on the same i686 machine. I would say it is just a random observation. For a few devices the problem is always there, and I am unable to find a proper fix for the issue. So the question now is what the amount of delay should be so that the host is always able to recognize a suspended device after resume. XHCI specs 4.19.4 say that when link training is successful, the port sets the CSC bit to 1. So if SW reads the port status before successful link training, then it will not find the device to be present. USB analyzer logs with such buggy devices show that in some cases the device switches on the RX termination only after a long delay from the host enabling VBUS. In a few other cases it has been seen that the device fails to negotiate link training on the first attempt. It has been reported so far that a few devices take as long as 2000 ms to train the link after the host enables its VBUS and RX termination. This patch implements a 2000 ms timeout for the CSC bit to set, i.e. for link training. If the link trains before the timeout, the loop will exit earlier. This patch implements the above delay, but only for SS devices and when persist is enabled. So for a good device the overhead is almost none, while for bad devices the penalty could be the time it takes for link training. But if a device was connected before suspend and was removed while the system was asleep, then the penalty would be the full timeout, i.e. 2000 ms. Results: a Verbatim USB SS hard disk connected to the STiH407 USB host running a 3.10 kernel resumes in 461 msecs without this patch, but the hard disk is assigned a new device address. The same system resumes in 790 msecs with this patch, but with the old device address. Cc: <stable@vger.kernel.org> Signed-off-by:
Pratyush Anand <pratyush.anand@st.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit a40178b2) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
David S. Miller authored
Based almost entirely upon a patch by Christopher Alexander Tobias Schulze. In commit db64fe02 ("mm: rewrite vmap layer") lazy VMAP tlb flushing was added to the vmalloc layer. This causes problems on sparc64. Sparc64 has two VMAP mapped regions and they are not contiguous with each other. First we have the module mapping area, then another unrelated region, then the vmalloc region. This "another unrelated region" is where the firmware is mapped. If the lazy TLB flushing logic in the vmalloc code triggers after we've had both a module unload and a vfree or similar, it will pass an address range that goes from somewhere inside the module region to somewhere inside the vmalloc region, and thus covering the openfirmware area entirely. The sparc64 kernel learns about openfirmware's dynamic mappings in this region early in the boot, and then services TLB misses in this area. But openfirmware has some locked TLB entries which are not mentioned in those dynamic mappings and we should thus not disturb them. These huge lazy TLB flush ranges cause those openfirmware locked TLB entries to be removed, resulting in all kinds of problems including hard hangs and crashes during reboot/reset. Besides causing problems like this, such huge TLB flush ranges are also incredibly inefficient. A plea has been made to the author of the VMAP lazy TLB flushing code, but for now we'll put a safety guard into our flush_tlb_kernel_range() implementation. Since the implementation has become non-trivial, stop defining it as a macro and instead make it a function in a C source file. Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 4ca9a237) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
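The guard has roughly this shape (LOW_OBP_ADDRESS/HI_OBP_ADDRESS bound the firmware window on sparc64; treat this as a sketch of the idea rather than the exact final code):

```c
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	/* If the lazy-vmap range would sweep across the OpenFirmware window,
	 * flush only the pieces on either side of it. */
	if (start < HI_OBP_ADDRESS && end > LOW_OBP_ADDRESS) {
		if (start < LOW_OBP_ADDRESS)
			__flush_tlb_kernel_range(start, LOW_OBP_ADDRESS);
		if (end > HI_OBP_ADDRESS)
			__flush_tlb_kernel_range(HI_OBP_ADDRESS, end);
	} else {
		__flush_tlb_kernel_range(start, end);
	}
}
```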
-
David S. Miller authored
The assumption was that update_mmu_cache() (and the equivalent for PMDs) would only be called when the PTE being installed will be accessible by the user. This is not true for code paths originating from remove_migration_pte(). There are dire consequences for placing a non-valid PTE into the TSB. The TLB miss framework assumes that when a TSB entry matches we can just load it into the TLB and return from the TLB miss trap. So if a non-valid PTE is in there, we will deadlock taking the TLB miss over and over, never satisfying the miss. Just exit early from update_mmu_cache() and friends in this situation. Based upon a report and patch from Christopher Alexander Tobias Schulze. Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 18f38132) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Eric Dumazet authored
Dave reported following splat, caused by improper use of IP_INC_STATS_BH() in process context. BUG: using __this_cpu_add() in preemptible [00000000] code: trinity-c117/14551 caller is __this_cpu_preempt_check+0x13/0x20 CPU: 3 PID: 14551 Comm: trinity-c117 Not tainted 3.16.0+ #33 ffffffff9ec898f0 0000000047ea7e23 ffff88022d32f7f0 ffffffff9e7ee207 0000000000000003 ffff88022d32f818 ffffffff9e397eaa ffff88023ee70b40 ffff88022d32f970 ffff8801c026d580 ffff88022d32f828 ffffffff9e397ee3 Call Trace: [<ffffffff9e7ee207>] dump_stack+0x4e/0x7a [<ffffffff9e397eaa>] check_preemption_disabled+0xfa/0x100 [<ffffffff9e397ee3>] __this_cpu_preempt_check+0x13/0x20 [<ffffffffc0839872>] sctp_packet_transmit+0x692/0x710 [sctp] [<ffffffffc082a7f2>] sctp_outq_flush+0x2a2/0xc30 [sctp] [<ffffffff9e0d985c>] ? mark_held_locks+0x7c/0xb0 [<ffffffff9e7f8c6d>] ? _raw_spin_unlock_irqrestore+0x5d/0x80 [<ffffffffc082b99a>] sctp_outq_uncork+0x1a/0x20 [sctp] [<ffffffffc081e112>] sctp_cmd_interpreter.isra.23+0x1142/0x13f0 [sctp] [<ffffffffc081c86b>] sctp_do_sm+0xdb/0x330 [sctp] [<ffffffff9e0b8f1b>] ? preempt_count_sub+0xab/0x100 [<ffffffffc083b350>] ? sctp_cname+0x70/0x70 [sctp] [<ffffffffc08389ca>] sctp_primitive_ASSOCIATE+0x3a/0x50 [sctp] [<ffffffffc083358f>] sctp_sendmsg+0x88f/0xe30 [sctp] [<ffffffff9e0d673a>] ? lock_release_holdtime.part.28+0x9a/0x160 [<ffffffff9e0d62ce>] ? put_lock_stats.isra.27+0xe/0x30 [<ffffffff9e73b624>] inet_sendmsg+0x104/0x220 [<ffffffff9e73b525>] ? inet_sendmsg+0x5/0x220 [<ffffffff9e68ac4e>] sock_sendmsg+0x9e/0xe0 [<ffffffff9e1c0c09>] ? might_fault+0xb9/0xc0 [<ffffffff9e1c0bae>] ? might_fault+0x5e/0xc0 [<ffffffff9e68b234>] SYSC_sendto+0x124/0x1c0 [<ffffffff9e0136b0>] ? syscall_trace_enter+0x250/0x330 [<ffffffff9e68c3ce>] SyS_sendto+0xe/0x10 [<ffffffff9e7f9be4>] tracesys+0xdd/0xe2 This is a followup of commits f1d8cba6 ("inet: fix possible seqlock deadlocks") and 7f88c6b2 ("ipv6: fix possible seqlock deadlock in ip6_finish_output2") Signed-off-by:
Eric Dumazet <edumazet@google.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Reported-by:
Dave Jones <davej@redhat.com> Acked-by:
Neil Horman <nhorman@tuxdriver.com> Acked-by:
Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 757efd32) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
H. Peter Anvin authored
Make espfix64 a hidden Kconfig option. This fixes the x86-64 UML build which had broken due to the non-existence of init_espfix_bsp() in UML: since UML uses its own Kconfig, this option does not appear in the UML build. This also makes it possible to make support for 16-bit segments a configuration option, for the people who want to minimize the size of the kernel. Reported-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
H. Peter Anvin <hpa@zytor.com> Cc: Richard Weinberger <richard@nod.at> Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com (cherry picked from commit 197725de) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Greg Kroah-Hartman authored
commit bdd405d2 ("usb: hub: Prevent hub autosuspend if usbcore.autosuspend is -1") causes a build error if CONFIG_PM_RUNTIME is disabled. Fix that by doing a simple #ifdef guard around it. Reported-by:
Stephen Rothwell <sfr@canb.auug.org.au> Reported-by:
kbuild test robot <fengguang.wu@intel.com> Cc: Roger Quadros <rogerq@ti.com> Cc: Michael Welling <mwelling@emacinc.com> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit a9ef803d) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Oleg Nesterov authored
Aleksei hit a soft lockup while reading /proc/PID/smaps. David investigated the problem and suggested the right fix. while_each_thread() is racy and should die; this patch updates vm_is_stack(). Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Reported-by:
Aleksei Besogonov <alex.besogonov@gmail.com> Tested-by:
Aleksei Besogonov <alex.besogonov@gmail.com> Suggested-by:
David Rientjes <rientjes@google.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 4449a51a) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Trond Myklebust authored
In the presence of delegations, we can no longer assume that the state->n_rdwr, state->n_rdonly, state->n_wronly reflect the open stateid share mode, and so we need to calculate the initial value for calldata->arg.fmode using the state->flags. Reported-by:
James Drews <drews@engr.wisc.edu> Fixes: 88069f77 (NFSv41: Fix a potential state leakage when...) Cc: stable@vger.kernel.org # 2.6.33+ Signed-off-by:
Trond Myklebust <trond.myklebust@primarydata.com> (cherry picked from commit aee7af35) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Trond Myklebust authored
When creating a new object on the NFS server, we should not be sending posix setacl requests unless the preceding posix_acl_create returned a non-trivial acl. Doing so causes Solaris servers in particular to return an EINVAL. Fixes: 013cdf10 (nfs: use generic posix ACL infrastructure ...) Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132786 Cc: stable@vger.kernel.org # 3.14+ Signed-off-by:
Trond Myklebust <trond.myklebust@primarydata.com> (cherry picked from commit f87d928f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Chuck Lever authored
The current code always selects XPRT_TRANSPORT_BC_TCP for the back channel, even when the forward channel was not TCP (eg, RDMA). When a 4.1 mount is attempted with RDMA, the server panics in the TCP BC code when trying to send CB_NULL. Instead, construct the transport protocol number from the forward channel transport or'd with XPRT_TRANSPORT_BC. Transports that do not support bi-directional RPC will not have registered a "BC" transport, causing create_backchannel_client() to fail immediately. Fixes: https://bugzilla.linux-nfs.org/show_bug.cgi?id=265 Signed-off-by:
Chuck Lever <chuck.lever@oracle.com> Signed-off-by:
J. Bruce Fields <bfields@redhat.com> (cherry picked from commit 3c45ddf8) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Kinglong Mee authored
A memory allocation failure could cause nfsd_startup_generic to fail, in which case nfsd_users would be incorrectly left elevated. After nfsd restarts nfsd_startup_generic will then succeed without doing anything--the first consequence is likely nfs4_start_net finding a bad laundry_wq and crashing. Signed-off-by:
Kinglong Mee <kinglongmee@gmail.com> Fixes: 4539f149 "nfsd: replace boolean nfsd_up flag by users counter" Cc: stable@vger.kernel.org Signed-off-by:
J. Bruce Fields <bfields@redhat.com> (cherry picked from commit d9499a95) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Roger Quadros authored
If the user specifies that USB autosuspend must be disabled by the module parameter "usbcore.autosuspend=-1" then we must prevent autosuspend of USB hub devices as well. commit 596d789a, introduced in v3.8, changed the original behaviour and stopped respecting the usbcore.autosuspend parameter for hubs. Fixes: 596d789a "USB: set hub's default autosuspend delay as 0" Cc: [3.8+] <stable@vger.kernel.org> Signed-off-by:
Roger Quadros <rogerq@ti.com> Tested-by:
Michael Welling <mwelling@emacinc.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit bdd405d2) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
James Forshaw authored
This patch fixes a potential security issue in the whiteheat USB driver which might allow a local attacker to cause kernel memory corruption. This is due to an unchecked memcpy into a fixed size buffer (of 64 bytes). On EHCI and XHCI busses it's possible to craft responses greater than 64 bytes leading to a buffer overflow. Signed-off-by:
James Forshaw <forshaw@google.com> Cc: stable <stable@vger.kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 6817ae22) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
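The hardening amounts to a bounds check of roughly this shape before the copy (field names follow the whiteheat driver's command-private structure; sketch only):

```c
static void whiteheat_store_result_sketch(struct whiteheat_command_private *info,
					  const unsigned char *data, size_t len)
{
	/* Never copy more command-completion data than the fixed 64-byte
	 * result buffer can hold, no matter what the device claims. */
	if (len > sizeof(info->result_buffer))
		len = sizeof(info->result_buffer);

	memcpy(info->result_buffer, data, len);
}
```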
-
Johan Hovold authored
Add device id for Basic Micro ATOM Nano USB2Serial adapters. Reported-by:
Nicolas Alt <n.alt@mytum.de> Tested-by:
Nicolas Alt <n.alt@mytum.de> Cc: stable <stable@vger.kernel.org> Signed-off-by:
Johan Hovold <johan@kernel.org> (cherry picked from commit 6552cc7f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Tony Lindgren authored
Looks like MUSB cable removal can cause wake-up interrupts to stop working for device tree based booting at least for UART3 even as nothing is dynamically remuxed. This can be fixed by calling reconfigure_io_chain() for device tree based booting in hwmod code. Note that we already do that for legacy booting if the legacy mux is configured. My guess is that this is related to UART3 and MUSB ULPI hsusb0_data0 and hsusb0_data1 support for Carkit mode that somehow affect the configured IO chain for UART3 and require rearming the wake-up interrupts. In general, for device tree based booting, pinctrl-single calls the rearm hook that in turn calls reconfigure_io_chain so calling reconfigure_io_chain should not be needed from the hwmod code for other events. So let's limit the hwmod rearming of iochain only to HWMOD_FORCE_MSTANDBY where MUSB is currently the only user of it. If we see other devices needing similar changes we can add more checks for it. Cc: Paul Walmsley <paul@pwsan.com> Cc: stable@vger.kernel.org # v3.16 Signed-off-by:
Tony Lindgren <tony@atomide.com> (cherry picked from commit cc824534) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-