- 19 Jan, 2016 40 commits
-
-
Ville Syrjälä authored
commit 4655a12b upstream.

The way the mode probing works is this:

1. All modes currently on the mode list are marked as UNVERIFIED
2. New modes are on the probed_modes list (they start with status OK)
3. Modes are moved from the probed_modes list to the actual mode list. If a mode already on the mode list is deemed to match one of the probed modes, the duplicate is dropped and the mode status updated to OK. After this the probed_modes list will be empty.
4. All modes on the mode list are verified to not violate any constraints. Any that do are marked as such.
5. Any mode left with a non-OK status is pruned from the list, with an appropriate debug message.

What all this means is that any mode on the original list that didn't have a duplicate on the probed_modes list should be left with status UNVERIFIED (or previously could have been left with some other status, but never OK).

I broke that in commit 05acaec3 ("drm: Reorganize probed mode validation") by always assigning something to mode->status during the validation step. So any mode from the old list that still passed the validation would be left on the list with status OK in the end.

Fix this by not doing the basic mode validation unless the mode already has status OK (meaning it came from the probed_modes list, or at least a duplicate of it was on that list). This way we will correctly prune away any mode from the old mode list that didn't appear on the probed_modes list.

Cc: Adam Jackson <ajax@redhat.com>
Fixes: 05acaec3 ("drm: Reorganize probed mode validation")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449177255-9515-2-git-send-email-ville.syrjala@linux.intel.com
Testcase: igt/kms_force_connector_basic/prune-stale-modes
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93332
[danvet: Also applying to drm-misc to avoid too much conflict hell - there's a big pile of patches from Ville on top of this one.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
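A minimal self-contained C sketch of the gate described above: validate only modes whose status is already OK, so stale entries keep their UNVERIFIED status and get pruned. The structs and the validate() check are mocked up for illustration; the real logic lives in the DRM probe helpers, not in this form.

    #include <stdio.h>
    #include <stddef.h>

    enum mode_status { MODE_UNVERIFIED, MODE_OK, MODE_BAD };

    struct mode { const char *name; enum mode_status status; };

    /* stand-in for the basic mode validation (constraint checks) */
    static enum mode_status validate(const struct mode *m)
    {
        (void)m;
        return MODE_OK;   /* assume the mode passes the basic checks */
    }

    int main(void)
    {
        struct mode list[] = {
            { "1024x768 (stale, not re-probed)",   MODE_UNVERIFIED },
            { "1920x1080 (matched a probed mode)", MODE_OK },
        };
        for (size_t i = 0; i < sizeof(list) / sizeof(list[0]); i++) {
            /* the fix: only validate modes that were (re)probed; a stale
             * mode keeps UNVERIFIED and is pruned in the final step */
            if (list[i].status == MODE_OK)
                list[i].status = validate(&list[i]);
            printf("%-36s -> %s\n", list[i].name,
                   list[i].status == MODE_OK ? "kept" : "pruned");
        }
        return 0;
    }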
-
Anssi Hannula authored
commit 12a6116e upstream. Avoid getting sample rate on AudioQuest DragonFly as it is unsupported and causes noisy "cannot get freq at ep 0x1" messages when playback starts. Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Anssi Hannula authored
commit 42e3121d upstream. AudioQuest DragonFly DAC reports a volume control range of 0..50 (0x0000..0x0032) which in USB Audio means a range of 0 .. 0.2dB, which is obviously incorrect and would cause software using the dB information in e.g. volume sliders to have a massive volume difference in 100..102% range. Commit 2d1cb7f6 ("ALSA: usb-audio: add dB range mapping for some devices") added a dB range mapping for it with range 0..50 dB. However, the actual volume mapping seems to be neither linear volume nor linear dB scale, but instead quite close to the cubic mapping e.g. alsamixer uses, with a range of approx. -53...0 dB. Replace the previous quirk with a custom dB mapping based on some basic output measurements, using a 10-item range TLV (which will still fit in alsa-lib MAX_TLV_RANGE_SIZE). Tested on AudioQuest DragonFly HW v1.2. The quirk is only applied if the range is 0..50, so if this gets fixed/changed in later HW revisions it will no longer be applied. v2: incorporated Takashi Iwai's suggestion for the quirk application method Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Peter Hurley authored
commit 9ce119f3 upstream.

A line discipline which does not define a receive_buf() method can cause a GPF if data is ever received [1]. Oddly, this was known to the author of n_tracesink in 2011, but never fixed.

[1] GPF report

    BUG: unable to handle kernel NULL pointer dereference at (null)
    IP: [< (null)>] (null)
    PGD 3752d067 PUD 37a7b067 PMD 0
    Oops: 0010 [#1] SMP KASAN
    Modules linked in:
    CPU: 2 PID: 148 Comm: kworker/u10:2 Not tainted 4.4.0-rc2+ #51
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
    Workqueue: events_unbound flush_to_ldisc
    task: ffff88006da94440 ti: ffff88006db60000 task.ti: ffff88006db60000
    RIP: 0010:[<0000000000000000>] [< (null)>] (null)
    RSP: 0018:ffff88006db67b50 EFLAGS: 00010246
    RAX: 0000000000000102 RBX: ffff88003ab32f88 RCX: 0000000000000102
    RDX: 0000000000000000 RSI: ffff88003ab330a6 RDI: ffff88003aabd388
    RBP: ffff88006db67c48 R08: ffff88003ab32f9c R09: ffff88003ab31fb0
    R10: ffff88003ab32fa8 R11: 0000000000000000 R12: dffffc0000000000
    R13: ffff88006db67c20 R14: ffffffff863df820 R15: ffff88003ab31fb8
    FS: 0000000000000000(0000) GS:ffff88006dc00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 0000000000000000 CR3: 0000000037938000 CR4: 00000000000006e0
    Stack:
     ffffffff829f46f1 ffff88006da94bf8 ffff88006da94bf8 0000000000000000
     ffff88003ab31fb0 ffff88003aabd438 ffff88003ab31ff8 ffff88006430fd90
     ffff88003ab32f9c ffffed0007557a87 1ffff1000db6cf78 ffff88003ab32078
    Call Trace:
     [<ffffffff8127cf91>] process_one_work+0x8f1/0x17a0 kernel/workqueue.c:2030
     [<ffffffff8127df14>] worker_thread+0xd4/0x1180 kernel/workqueue.c:2162
     [<ffffffff8128faaf>] kthread+0x1cf/0x270 drivers/block/aoe/aoecmd.c:1302
     [<ffffffff852a7c2f>] ret_from_fork+0x3f/0x70 arch/x86/entry/entry_64.S:468
    Code: Bad RIP value.
    RIP [< (null)>] (null) RSP <ffff88006db67b50>
    CR2: 0000000000000000
    ---[ end trace a587f8947e54d6ea ]---

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
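The failure is the classic unguarded call through a possibly-NULL method pointer. A small self-contained sketch of the pattern (the struct and names are invented for illustration, not the tty layer's actual types):

    #include <stdio.h>
    #include <stddef.h>

    struct ldisc_ops {
        /* legitimately NULL for disciplines like n_tracesink */
        void (*receive_buf)(const char *data, size_t len);
    };

    static void deliver(const struct ldisc_ops *ops, const char *data,
                        size_t len)
    {
        if (!ops->receive_buf)  /* the missing guard: calling NULL faults */
            return;
        ops->receive_buf(data, len);
    }

    int main(void)
    {
        struct ldisc_ops no_rx = { .receive_buf = NULL };
        deliver(&no_rx, "hello", 5);  /* dropped safely instead of a GPF */
        puts("no crash");
        return 0;
    }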
-
Peter Hurley authored
commit ac8f3bf8 upstream.

commit 40d5e090 ("n_tty: Fix EOF push handling") fixed EOF push for reads. However, that approach still allows a condition mismatch between poll() and read(), where poll() returns POLLIN but read() blocks. This state can happen when a previous read() returned because the user buffer was full and the next character was an EOF not at the beginning of the line. While the next read() will properly identify the condition and advance the read buffer tail without improperly indicating an EOF file condition (ie., read() will not mistakenly return 0), poll() will mistakenly indicate POLLIN.

Although a possible solution would be to peek at the input buffer in n_tty_poll(), the better solution in this patch is to eat the EOF during the previous read() (ie., fix the problem by eliminating the condition).

The current canon line buffer copy limits the scan for next end-of-line to the smaller of either:
a. the remaining user buffer size
b. completed lines in the input buffer

When the remaining user buffer size is exactly one less than the end-of-line marked by EOF push, the EOF is not scanned nor skipped but left for subsequent reads. In the example below, the scan index 'eol' has stopped at the EOF because it is past the scan limit of 5 (not because it has found the next set bit in read_flags):

    user buffer [*nr = 5]   _ _ _ _ _
    read_flags              0 0 0 0 0 1
    input buffer            h e l l o [EOF]
                            ^          ^
                           /          /
                         tail       eol

    result: found = 0, tail += 5, *nr += 5

Instead, allow the scan to peek ahead 1 byte (while still limiting the scan to completed lines in the input buffer). For the example above:

    result: found = 1, tail += 6, *nr += 5

Because the scan limit is now bumped +1 byte, when the scan is completed, the tail advance and the user buffer copy limit is re-clamped to *nr when EOF is _not_ found.

Fixes: 40d5e090 ("n_tty: Fix EOF push handling")
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
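The +1 look-ahead can be reproduced with the example above in a few lines of C (the EOF-push marker and buffer handling are mocked here; the real code scans the read_flags bitmap):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define EOF_CH 0x04  /* stand-in for the EOF-push mark in read_flags */

    int main(void)
    {
        const char input[] = { 'h', 'e', 'l', 'l', 'o', EOF_CH };
        size_t avail = sizeof(input); /* completed line in input buffer */
        size_t nr = 5;                /* remaining user buffer size     */
        size_t tail = 0;

        /* old limit was min(nr, avail); the fix peeks one byte further */
        size_t limit = (nr + 1 < avail) ? nr + 1 : avail;

        bool found = false;
        size_t eol = tail;
        while (eol < limit && !found)
            found = (input[eol++] == EOF_CH);

        /* re-clamp the copy to *nr when EOF was not found */
        size_t copied = found ? eol - tail - 1
                              : (eol - tail > nr ? nr : eol - tail);
        tail = found ? eol : tail + copied; /* tail skips the eaten EOF */

        printf("found=%d tail+=%zu *nr+=%zu\n", (int)found, tail, copied);
        /* prints: found=1 tail+=6 *nr+=5 */
        return 0;
    }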
-
Mans Rullgard authored
commit 1ea5998a upstream. Attempting to use this codec driver triggers a BUG() in regcache_sync() since no cache type is set. The register map of this device is fairly small and has few holes so a flat cache is suitable. Signed-off-by: Mans Rullgard <mans@mansr.com> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
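The fix amounts to declaring a cache type in the codec's regmap config. A sketch of the shape (the field values shown are illustrative, not copied from the wm8974 driver):

    static const struct regmap_config wm8974_regmap_sketch = {
        .reg_bits = 7,
        .val_bits = 9,
        /* without a cache_type, regcache_sync() hits BUG(); a flat
         * cache suits a small register map with few holes */
        .cache_type = REGCACHE_FLAT,
    };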
-
Xiangliang Yu authored
commit 2d244c81 upstream.

Because of a hardware limitation, the AMD I2C controller can't trigger a pending interrupt if the interrupt status changes after the status bits are cleared; the interrupt is then lost and the I/O times out. According to the hardware design, this patch implements a workaround that disables the i2c controller interrupt on entry to the ISR and re-enables it before exiting the ISR. To reduce the performance impact on other vendors, an unlikely() check of the flag is used in the ISR.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
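A sketch of the ISR shape described above (ACCESS_INTR_MASK matches the description; the helper and register names approximate the designware driver and are not verbatim):

    static irqreturn_t i2c_dw_isr_sketch(int this_irq, void *dev_id)
    {
        struct dw_i2c_dev *dev = dev_id;

        /* AMD workaround: mask controller interrupts up front so a
         * status change right after the clear cannot be lost */
        if (unlikely(dev->flags & ACCESS_INTR_MASK))
            i2c_dw_disable_int(dev);

        handle_transfer(dev);  /* read + clear status bits, do the work */

        /* re-enable the interrupt before exiting the ISR */
        if (unlikely(dev->flags & ACCESS_INTR_MASK))
            i2c_dw_enable_int(dev);

        return IRQ_HANDLED;
    }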
-
Ken Xue authored
commit 3eddad96 upstream.

This reverts commit a445900c ("i2c: designware: Add support for AMD I2C controller"), which never worked anyhow because it did not register a proper clkdev. Since kernel 4.1 supports APD, there is no need to get the frequency from id->driver_data for AMD0010: the clkdev is supposed to be already registered in APD. So, revert the old design and make AMD0010 look like the other ones.

Signed-off-by: Ken Xue <Ken.Xue@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
[ kamal: 4.2-stable prereq for 2d244c81 i2c: designware: fix IO timeout issue for AMD controller ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Vineet Gupta authored
commit 8eb0984b upstream.

As part of fixing another perf issue, observed that after a perf run, the interrupt got disabled on one/more cores. Turns out that despite requesting the perf irq as percpu, the flow handler registered was not handle_percpu_irq(). Given that on ARCv2 cores, IRQs < 24 are always private to the cpu, we register the right handler at the very onset.

Before Fix:

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:       0       0       0       0 ARCv2 core Intc 20 ARC perf counters

    [ARCLinux]# perf record -c 20000 /sbin/hackbench
    Running with 10*40 (== 400) tasks.

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:       0     522       8   51916 ARCv2 core Intc 20 ARC perf counters

    [ARCLinux]# perf record -c 20000 /sbin/hackbench
    Running with 10*40 (== 400) tasks.

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:       0     522       8  104368 ARCv2 core Intc 20 ARC perf counters

After Fix:

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:       0       0       0       0 ARCv2 core Intc 20 ARC perf counters

    [ARCLinux]# perf record -c 20000 /sbin/hackbench
    Running with 10*40 (== 400) tasks.

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:   64198   62012   62697   67803 ARCv2 core Intc 20 ARC perf counters

    [ARCLinux]# perf record -c 20000 /sbin/hackbench
    Running with 10*40 (== 400) tasks.

    [ARCLinux]# cat /proc/interrupts | grep perf
    20:  126014  122792  123301  133654 ARCv2 core Intc 20 ARC perf counters

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Prarit Bhargava authored
commit 79a21dbf upstream.

Intel RAPL initialized on several systems where the BIOS lock bit (MSR 0x610, bit 63) was set. This occurred because the return value of rapl_read_data_raw() was being checked, rather than the value of the variable passed in, 'locked'. This patch properly implements the rapl_read_data_raw() call to check the variable 'locked', and now the Intel RAPL driver outputs the warning:

    intel_rapl: RAPL package 0 domain package locked by BIOS

and does not initialize for the package.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
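The bug pattern, checking a call's return code where the answer is in the out-parameter, fits in a few lines of self-contained C (names simplified from rapl_read_data_raw()):

    #include <stdio.h>
    #include <stdbool.h>

    /* mock: returns 0 when the read succeeds and fills *locked from
     * the BIOS lock bit (MSR 0x610, bit 63) */
    static int read_lock_bit(bool *locked)
    {
        *locked = true;   /* pretend the BIOS locked the package */
        return 0;
    }

    int main(void)
    {
        bool locked = false;

        if (read_lock_bit(&locked))             /* buggy: tests "read ok?" */
            puts("old code: never reached, package init proceeds");

        if (!read_lock_bit(&locked) && locked)  /* fixed: tests the value */
            puts("intel_rapl: RAPL package 0 domain package locked by BIOS");

        return 0;
    }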
-
James Bottomley authored
commit 5e103356 upstream. KASAN found that our additional element processing scripts drop off the end of the VPD page into unallocated space. The reason is that not every element has additional information but our traversal routines think they do, leading to them expecting far more additional information than is present. Fix this by adding a gate to the traversal routine so that it only processes elements that are expected to have additional information (list is in SES-2 section 6.1.13.1: Additional Element Status diagnostic page overview) Reported-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com> Tested-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Peter Ujfalusi authored
commit e2a0c9fa upstream. The condition for checking for XDAT being cleared was not correct. Fixes: 36bcecd0 ("ASoC: davinci-mcasp: Correct TX start sequence") Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Ken Xue authored
commit 1c69d3b6 upstream.

This reverts commit 49718f0f ("SCSI: Fix NULL pointer dereference in runtime PM"). The old commit may lead to an issue where blk_{pre|post}_runtime_suspend and blk_{pre|post}_runtime_resume are not called in pairs. Take the sr device as an example: when the sr device goes to runtime suspend, blk_{pre|post}_runtime_suspend will be called since the sr device defines pm->runtime_suspend. But blk_{pre|post}_runtime_resume will not be called since the sr device doesn't have pm->runtime_resume. So, the sr device can not resume correctly anymore. More discussion can be found at the link below.

http://marc.info/?l=linux-scsi&m=144163730531875&w=2

Signed-off-by: Ken Xue <Ken.Xue@amd.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Xiangliang Yu <Xiangliang.Yu@amd.com>
Cc: James E.J. Bottomley <JBottomley@odin.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Michael Terry <Michael.terry@canonical.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
James Bottomley authored
commit 3417c1b5 upstream. Simple enclosure implementations (mostly USB) are allowed to return only page 8 to every diagnostic query. That really confuses our implementation because we assume the return is the page we asked for and end up doing incorrect offsets based on bogus information leading to accesses outside of allocated ranges. Fix that by checking the page code of the return and giving an error if it isn't the one we asked for. This should fix reported bugs with USB storage by simply refusing to attach to enclosures that behave like this. It's also good defensive practice now that we're starting to see more USB enclosures. Reported-by: Andrea Gelmini <andrea.gelmini@gelma.net> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
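The added sanity check is easy to model: ask for one diagnostic page, verify the page code in the reply before trusting any offsets. A self-contained sketch (a mock device standing in for such an enclosure):

    #include <stdio.h>
    #include <string.h>

    /* mock enclosure that returns page 8 no matter what was asked */
    static void recv_diag(unsigned char page, unsigned char *buf)
    {
        (void)page;
        memset(buf, 0, 16);
        buf[0] = 0x08;      /* byte 0 carries the returned page code */
    }

    int main(void)
    {
        unsigned char buf[16];
        unsigned char want = 0x02;   /* e.g. the enclosure status page */

        recv_diag(want, buf);
        if (buf[0] != want) {        /* the added check: refuse to parse */
            fprintf(stderr, "ses: got page 0x%02x, wanted 0x%02x\n",
                    buf[0], want);
            return 1;
        }
        return 0;
    }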
-
Johannes Berg authored
commit b7bb1100 upstream. Some users of rfkill, like NFC and cfg80211, use a dynamic name when allocating rfkill, in those cases dev_name(). Therefore, the pointer passed to rfkill_alloc() might not be valid forever, I specifically found the case that the rfkill name was quite obviously an invalid pointer (or at least garbage) when the wiphy had been renamed. Fix this by making a copy of the rfkill name in rfkill_alloc(). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
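The fix is the standard own-your-string pattern: copy the name at allocation time instead of keeping the caller's pointer. A userspace sketch (the kernel code would use kstrdup(); struct and names are simplified):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct rfkill { char *name; };

    static struct rfkill *rfkill_alloc_sketch(const char *name)
    {
        struct rfkill *rk = malloc(sizeof(*rk));
        if (!rk)
            return NULL;
        rk->name = strdup(name);   /* keep our own copy of the name */
        if (!rk->name) {
            free(rk);
            return NULL;
        }
        return rk;
    }

    int main(void)
    {
        char devname[] = "phy0";
        struct rfkill *rk = rfkill_alloc_sketch(devname);
        if (!rk)
            return 1;
        strcpy(devname, "wlan");    /* the device gets "renamed" */
        printf("%s\n", rk->name);   /* still "phy0": copy unaffected */
        free(rk->name);
        free(rk);
        return 0;
    }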
-
Cyrille Pitchen authored
commit aa876cd4 upstream. This patch fixes at_xdmac_prep_dma_memcpy(). Indeed the data width field of the Channel Configuration register was not updated properly in the loop: the bits of the dwidth field were not cleared before adding their new value. Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Fixes: e1f7c9ee ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver") Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
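The underlying bug is a read-modify-write that ORs in a new field value without clearing the old bits first. A self-contained demonstration (the field layout here is hypothetical, not the actual AT_XDMAC_CC bit positions):

    #include <stdio.h>
    #include <stdint.h>

    #define DWIDTH_SHIFT 3
    #define DWIDTH_MASK  (0x3u << DWIDTH_SHIFT)             /* 2-bit field */
    #define DWIDTH(w)    (((uint32_t)(w) << DWIDTH_SHIFT) & DWIDTH_MASK)

    int main(void)
    {
        uint32_t cfg = DWIDTH(3);   /* field currently holds 0b11 */

        uint32_t buggy = cfg | DWIDTH(1);                   /* 0b11 stays */
        uint32_t fixed = (cfg & ~DWIDTH_MASK) | DWIDTH(1);  /* clear+set */

        printf("buggy field=%u fixed field=%u\n",
               (unsigned)((buggy & DWIDTH_MASK) >> DWIDTH_SHIFT),   /* 3 */
               (unsigned)((fixed & DWIDTH_MASK) >> DWIDTH_SHIFT));  /* 1 */
        return 0;
    }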
-
Paul Mackerras authored
commit c20875a3 upstream. Currently it is possible for userspace (e.g. QEMU) to set a value for the MSR for a guest VCPU which has both of the TS bits set, which is an illegal combination. The result of this is that when we execute a hrfid (hypervisor return from interrupt doubleword) instruction to enter the guest, the CPU will take a TM Bad Thing type of program interrupt (vector 0x700). Now, if PR KVM is configured in the kernel along with HV KVM, we actually handle this without crashing the host or giving hypervisor privilege to the guest; instead what happens is that we deliver a program interrupt to the guest, with SRR0 reflecting the address of the hrfid instruction and SRR1 containing the MSR value at that point. If PR KVM is not configured in the kernel, then we try to run the host's program interrupt handler with the MMU set to the guest context, which almost certainly causes a host crash. This closes the hole by making kvmppc_set_msr_hv() check for the illegal combination and force the TS field to a safe value (00, meaning non-transactional). Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
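The check itself is a two-bit-field sanitization; a sketch with assumed bit positions (the real MSR[TS] layout on Book3S HV differs, so treat the constants as illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define MSR_TS_SHIFT 33                  /* assumed position */
    #define MSR_TS_MASK  (3ULL << MSR_TS_SHIFT)

    static uint64_t sanitize_msr(uint64_t msr)
    {
        /* TS = 0b11 is an illegal combination; force the field to 00
         * (non-transactional) instead of letting hrfid trap */
        if ((msr & MSR_TS_MASK) == MSR_TS_MASK)
            msr &= ~MSR_TS_MASK;
        return msr;
    }

    int main(void)
    {
        uint64_t bad = MSR_TS_MASK | 0x8000ULL;
        uint64_t ok  = sanitize_msr(bad);
        printf("TS before=%llu after=%llu\n",
               (unsigned long long)((bad & MSR_TS_MASK) >> MSR_TS_SHIFT),
               (unsigned long long)((ok  & MSR_TS_MASK) >> MSR_TS_SHIFT));
        return 0;
    }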
-
John Keeping authored
commit 84ebac4d upstream. This is using completely the wrong mask and value when updating the register. Since the correct values are already defined in the header, switch to using a table with explicit constants rather than shifting the array index. Signed-off-by: John Keeping <john@metanate.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jason A. Donenfeld authored
commit 70d906bc upstream. Some ciphers actually support encrypting zero length plaintexts. For example, many AEAD modes support this. The resulting ciphertext for those winds up being only the authentication tag, which is a result of the key, the iv, the additional data, and the fact that the plaintext had zero length. The blkcipher constructors won't copy the IV to the right place, however, when using a zero length input, resulting in some significant problems when ciphers call their initialization routines, only to find that the ->iv parameter is uninitialized. One such example of this would be using chacha20poly1305 with a zero length input, which then calls chacha20, which calls the key setup routine, which eventually OOPSes due to the uninitialized ->iv member. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Wang Dongsheng authored
commit acfc1cc1 upstream. If diu_ops is not implemented on the platform, the kernel will access a NULL pointer. We need to check this pointer in the DIU initialization. Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com> Acked-by: Timur Tabi <timur@tabi.org> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Ludovic Desroches authored
commit 15a03850 upstream. Fix a typo in a macro which was not used until now, which explains why there was no error at compilation time. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Fixes: e1f7c9ee "dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver" Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Marcin Wojtas authored
commit b5015854 upstream. In the current code, in case of an RX buffer allocation error during refill, the original buffer is pushed to the network stack, but the number of available buffer pointers in the BM pool is decreased. This commit fixes the situation by moving the refill call before skb_put() and returning the original buffer pointer to the pool in case of an error. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Marcin Wojtas authored
commit 4229d502 upstream. Each allocated buffer, whose pointer is put into BM pool is DMA-mapped. Hence it should be properly unmapped after usage or when removing buffers from pool. This commit fixes DMA handling on RX path by adding dma_unmap_single() in mvpp2_rx() and in mvpp2_bufs_free(). The latter function's argument number had to be increased for this purpose. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Marcin Wojtas authored
commit e864b4c7 upstream. The Tx descriptor release code currently calls dma_unmap_single() and dev_kfree_skb_any() if the descriptor is associated with a non-NULL skb. This condition is true only for the last fragment of the packet. Since every descriptor's buffer is DMA-mapped it has to be properly unmapped. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Will Deacon authored
commit 40ee068e upstream.

Under some unusual context-switching patterns, it is possible to end up with multiple threads from the same mm running concurrently with different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g. This task doesn't block and the CPU doesn't context switch. So:
   * per_cpu(active_asid, x) = {g,a}
   * p->context.id = {g,a}

2. Some other CPU generates an ASID rollover. The global generation is now (g + 1). CPU x is still running t, with no context switch and so per_cpu(reserved_asid, x) = {g,a}

3. CPU y schedules task t', which shares mm p with t. The generation mismatches, so we take the slowpath and hit the reserved ASID from CPU x. p is then updated so that p->context.id = {g + 1,a}

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global generation is now (g + 2). CPU x is still running t, with no context switch and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the reserved ASID from CPU x because of the generation mismatch. This results in a new ASID being allocated, despite the fact that t is still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised between the two threads.

This patch fixes the problem by updating all of the matching reserved ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps the reserved ASIDs in-sync with the mm and avoids the problem.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
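The essence of the fix in step 3 is "on a slowpath hit, bump every matching reserved slot to the new generation". A compact model of that bookkeeping (per-CPU state collapsed into plain arrays; values invented):

    #include <stdio.h>
    #include <stdbool.h>

    #define NCPUS 4

    /* per-CPU {generation, asid} snapshots taken at rollover time */
    static unsigned long reserved_gen[NCPUS], reserved_asid[NCPUS];

    static bool check_update_reserved(unsigned long gen, unsigned long asid,
                                      unsigned long newgen)
    {
        bool hit = false;
        for (int c = 0; c < NCPUS; c++) {
            if (reserved_gen[c] == gen && reserved_asid[c] == asid) {
                hit = true;
                /* the fix: update *all* matching slots so a later
                 * rollover still recognises the still-live ASID */
                reserved_gen[c] = newgen;
            }
        }
        return hit;
    }

    int main(void)
    {
        reserved_gen[0] = 1;        /* CPU0 is still running task t */
        reserved_asid[0] = 42;
        if (check_update_reserved(1, 42, 2))
            printf("ASID 42 reused at gen 2; CPU0 slot now gen %lu\n",
                   reserved_gen[0]);
        return 0;
    }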
-
Ross Lagerwall authored
commit 3de88d62 upstream. When a CPU is offlined, there may be unprocessed events on a port for that CPU. If the port is subsequently reused on a different CPU, it could be in an unexpected state with the link bit set, resulting in interrupts being missed. Fix this by consuming any unprocessed events for a particular CPU when that CPU dies. Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Hans de Goede authored
commit bba61f50 upstream. According to the datasheets the n factor for dividing the tclk is 2 to the power n on Allwinner SoCs, not 2 to the power n + 1 as it is on other mv64xxx implementations. I've contacted Allwinner about this and they have confirmed that the datasheet is correct. This commit fixes the clk-divider calculations for Allwinner SoCs accordingly. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com> Tested-by: Olliver Schinagl <oliver@schinagl.nl> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Wolfram Sang authored
commit 9abd29e7 upstream. Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Reviewed-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Arnd Bergmann authored
commit 183e53e8 upstream. The CPPI-4.1 driver selects TI_CPPI41, which is a dmaengine driver and that may not be available when CONFIG_DMADEVICES is not set: warning: (USB_TI_CPPI41_DMA) selects TI_CPPI41 which has unmet direct dependencies (DMADEVICES && ARCH_OMAP) This adds an extra dependency to avoid generating warnings in randconfig builds. Ideally we'd remove the 'select' statement, but that has the potential to break defconfig files. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 411dd19c ("usb: musb: Kconfig: Select the DMA driver if DMA mode of MUSB is enabled") Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Arnd Bergmann authored
commit 4f1dd973 upstream. The newly added suspend/resume implementation for ahci_mvebu causes a link error when CONFIG_PM_SLEEP is disabled: ERROR: "ahci_platform_suspend_host" [drivers/ata/ahci_mvebu.ko] undefined! ERROR: "ahci_platform_resume_host" [drivers/ata/ahci_mvebu.ko] undefined! This adds the same #ifdef here that exists in the ahci_platform driver which defines the above functions. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: d6ecf158 ("ata: ahci_mvebu: add suspend/resume support") Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Peter Zijlstra authored
commit dfd01f02 upstream. Jan Stancek reported that I wrecked things for him by fixing things for Vladimir :/ His report was due to an UNINTERRUPTIBLE wait getting -EINTR, which should not be possible, however my previous patch made this possible by unconditionally checking signal_pending(). We cannot use current->state as was done previously, because it can be changed by the very next instruction after the store. We must instead pass the initial state along and use that. Fixes: 68985633 ("sched/wait: Fix signal handling in bit wait helpers") Reported-by: Jan Stancek <jstancek@redhat.com> Reported-by: Chris Mason <clm@fb.com> Tested-by: Jan Stancek <jstancek@redhat.com> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> Tested-by: Chris Mason <clm@fb.com> Reviewed-by: Paul Turner <pjt@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: tglx@linutronix.de Cc: Oleg Nesterov <oleg@redhat.com> Cc: hpa@zytor.com Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dmitry V. Levin authored
commit 2d33fa10 upstream. According to arch/sh/kernel/syscalls_64.S and common sense, __NR_fgetxattr has to be defined to 259, but it doesn't. Instead, it's defined to 269, which is of course used by another syscall, __NR_sched_setaffinity in this case. This bug was found by strace test suite. Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Junxiao Bi authored
commit 854ee2e9 upstream. Commit 8f1eb487 ("ocfs2: fix umask ignored issue") introduced an issue: the SGID bit of a sub directory was not inherited from its parent directory. This is because SGID is set into "inode->i_mode" in ocfs2_get_init_inode(), but is later overwritten by "mode", which doesn't have SGID set. Fixes: 8f1eb487 ("ocfs2: fix umask ignored issue") Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com> Cc: Mark Fasheh <mfasheh@suse.de> Cc: Joel Becker <jlbec@evilplan.org> Acked-by: Srinivas Eeda <srinivas.eeda@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
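The inheritance rule is a few lines of mode-bit arithmetic; a userspace sketch (logic simplified from the init-inode path, not the ocfs2 code itself):

    #include <stdio.h>
    #include <sys/stat.h>

    /* apply the umask, then let the parent's SGID bit survive instead
     * of being overwritten by the caller-supplied mode */
    static mode_t init_mode(mode_t req, mode_t mask, mode_t parent)
    {
        mode_t mode = req & ~mask;
        if ((req & S_IFDIR) && (parent & S_ISGID))
            mode |= S_ISGID;
        return mode;
    }

    int main(void)
    {
        mode_t m = init_mode(S_IFDIR | 0777, 022,
                             S_IFDIR | S_ISGID | 0775);
        printf("subdir mode: %o (SGID %s)\n", (unsigned)(m & 07777),
               (m & S_ISGID) ? "inherited" : "lost");  /* 2755 inherited */
        return 0;
    }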
-
Seth Jennings authored
commit 26bbe7ef upstream. Commits bdee237c ("x86: mm: Use 2GB memory block size on large-memory x86-64 systems") and 982792c7 ("x86, mm: probe memory block size for generic x86 64bit") introduced large block sizes for x86. This made it possible to have multiple sections per memory block where previously there was only ever one section per block. Since blocks consist of contiguous ranges of sections, there can be holes in the blocks where sections are not present. If one attempts to offline such a block, a crash occurs since the code is not designed to deal with this. This patch is a quick fix to guard against the crash by not allowing blocks with non-present sections to be offlined. Addresses https://bugzilla.kernel.org/show_bug.cgi?id=107781 Signed-off-by: Seth Jennings <sjennings@variantweb.net> Reported-by: Andrew Banman <abanman@sgi.com> Cc: Daniel J Blueman <daniel@numascale.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Greg KH <greg@kroah.com> Cc: Russ Anderson <rja@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
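The guard is a straightforward scan over the block's sections; a minimal model (SECTIONS_PER_BLOCK and the presence test are mocked):

    #include <stdio.h>
    #include <stdbool.h>

    #define SECTIONS_PER_BLOCK 16

    /* stand-in for present_section_nr(): sections 10..15 are a hole */
    static bool section_present(int s) { return s < 10; }

    static int offline_block(void)
    {
        for (int s = 0; s < SECTIONS_PER_BLOCK; s++)
            if (!section_present(s))
                return -1;  /* refuse blocks with non-present sections */
        return 0;           /* all sections present: safe to offline */
    }

    int main(void)
    {
        printf("offline: %s\n",
               offline_block() ? "rejected (hole in block)" : "ok");
        return 0;
    }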
-
Naoya Horiguchi authored
commit 0d777df5 upstream. Currently at the beginning of hugetlb_fault(), we call huge_pte_offset() and check whether the obtained *ptep is a migration/hwpoison entry or not. And if not, then we get to call huge_pte_alloc(). This is racy because the *ptep could turn into a migration/hwpoison entry after the huge_pte_offset() check. This race results in a BUG_ON in huge_pte_alloc(). We don't have to call huge_pte_alloc() when huge_pte_offset() returns non-NULL, so let's fix this bug by moving the code into the else block. Note that the *ptep could turn into a migration/hwpoison entry after this block, but that's not a problem because we have another !pte_present check later (we never go into hugetlb_no_page() in that case.) Fixes: 290408d4 ("hugetlb: hugepage migration core") Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Michal Hocko authored
commit 373ccbe5 upstream. Tetsuo Handa has reported that the system might basically livelock in OOM condition without triggering the OOM killer. The issue is caused by internal dependency of the direct reclaim on vmstat counter updates (via zone_reclaimable) which are performed from the workqueue context. If all the current workers get assigned to an allocation request, though, they will be looping inside the allocator trying to reclaim memory but zone_reclaimable can see stalled numbers so it will consider a zone reclaimable even though it has been scanned way too much. WQ concurrency logic will not consider this situation as a congested workqueue because it relies that worker would have to sleep in such a situation. This also means that it doesn't try to spawn new workers or invoke the rescuer thread if the one is assigned to the queue. In order to fix this issue we need to do two things. First we have to let wq concurrency code know that we are in trouble so we have to do a short sleep. In order to prevent from issues handled by 0e093d99 ("writeback: do not sleep on the congestion queue if there are no congested BDIs or if significant congestion is not being encountered in the current zone") we limit the sleep only to worker threads which are the ones of the interest anyway. The second thing to do is to create a dedicated workqueue for vmstat and mark it WQ_MEM_RECLAIM to note it participates in the reclaim and to have a spare worker thread for it. Signed-off-by: Michal Hocko <mhocko@suse.com> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Tejun Heo <tj@kernel.org> Cc: Cristopher Lameter <clameter@sgi.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Arkadiusz Miskiewicz <arekm@maven.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [ kamal: backport to 4.2-stable: use queue_delayed_work() in vmstat_update ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
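The dedicated-workqueue half of the fix is essentially one allocation at init time; a sketch (the surrounding function is invented here, while the flag and API are the standard workqueue ones):

    static struct workqueue_struct *vmstat_wq;

    static int __init vmstat_wq_init_sketch(void)
    {
        /* WQ_MEM_RECLAIM guarantees a rescuer thread, so vmstat_update
         * can still make progress when every worker is stuck in
         * reclaim-bound allocations */
        vmstat_wq = alloc_workqueue("vmstat", WQ_MEM_RECLAIM, 0);
        return vmstat_wq ? 0 : -ENOMEM;
    }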
-
Naoya Horiguchi authored
commit a88c7695 upstream.

When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back on alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator. In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement h->resv_huge_pages, which means that a successful hugetlb_fault() returns without releasing the reserve count. As a result, subsequent hugetlb_fault() might fail despite there still being free hugepages.

This patch simply adds decrementing code on that code path.

I reproduced this problem when testing the v4.3 kernel in the following situation:
- the test machine/VM is a NUMA system,
- hugepage overcommitting is enabled,
- most of the hugepages are allocated and there's only one free hugepage, which is on node 0 (for example),
- another program, which calls set_mempolicy(MPOL_BIND) to bind itself to node 1, tries to allocate a hugepage,
- the allocation should fail, but the reserve count is still held.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: backported to 3.16: - use 'chg' instead of 'gbl_chg' - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Mikulas Patocka authored
commit e46e31a3 upstream.

When using the Promise TX2+ SATA controller on PA-RISC, the system often crashes with a kernel panic; for example, just writing data with the dd utility will make it crash.

    Kernel panic - not syncing: drivers/parisc/sba_iommu.c: I/O MMU @ 000000000000a000 is out of mapping resources
    CPU: 0 PID: 18442 Comm: mkspadfs Not tainted 4.4.0-rc2 #2
    Backtrace:
    [<000000004021497c>] show_stack+0x14/0x20
    [<0000000040410bf0>] dump_stack+0x88/0x100
    [<000000004023978c>] panic+0x124/0x360
    [<0000000040452c18>] sba_alloc_range+0x698/0x6a0
    [<0000000040453150>] sba_map_sg+0x260/0x5b8
    [<000000000c18dbb4>] ata_qc_issue+0x264/0x4a8 [libata]
    [<000000000c19535c>] ata_scsi_translate+0xe4/0x220 [libata]
    [<000000000c19a93c>] ata_scsi_queuecmd+0xbc/0x320 [libata]
    [<0000000040499bbc>] scsi_dispatch_cmd+0xfc/0x130
    [<000000004049da34>] scsi_request_fn+0x6e4/0x970
    [<00000000403e95a8>] __blk_run_queue+0x40/0x60
    [<00000000403e9d8c>] blk_run_queue+0x3c/0x68
    [<000000004049a534>] scsi_run_queue+0x2a4/0x360
    [<000000004049be68>] scsi_end_request+0x1a8/0x238
    [<000000004049de84>] scsi_io_completion+0xfc/0x688
    [<0000000040493c74>] scsi_finish_command+0x17c/0x1d0

The cause of the crash is not exhaustion of the IOMMU space; there are plenty of free pages. The function sba_alloc_range is called with size 0x11000, thus the pages_needed variable is 0x11. The function sba_search_bitmap is called with bits_wanted 0x11 and the boundary size is 0x10 (because dma_get_seg_boundary(dev) returns 0xffff). The function sba_search_bitmap attempts to allocate 17 pages that must not cross a 16-page boundary - it can't satisfy this requirement (iommu_is_span_boundary always returns true) and fails even if there are many free entries in the IOMMU space.

How did it happen that we try to allocate 17 pages that don't cross a 16-page boundary? The cause is in the function iommu_coalesce_chunks. This function tries to coalesce adjacent entries in the scatterlist. The function does several checks if it may coalesce one entry with the next; one of those checks is this:

    if (startsg->length + dma_len > max_seg_size)
        break;

When it finishes coalescing adjacent entries, it allocates the mapping:

    sg_dma_len(contig_sg) = dma_len;
    dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);
    sg_dma_address(contig_sg) =
        PIDE_FLAG
        | (iommu_alloc_range(ioc, dev, dma_len) << IOVP_SHIFT)
        | dma_offset;

It is possible that (startsg->length + dma_len > max_seg_size) is false (we are just near the 0x10000 max_seg_size boundary), so the function decides to coalesce this entry with the next entry. When the coalescing succeeds, the function performs

    dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);

And now, because of the non-zero dma_offset, dma_len is greater than 0x10000. iommu_alloc_range (a pointer to sba_alloc_range) is called and it attempts to allocate 17 pages for a device that must not cross a 16-page boundary.

To fix the bug, we must make sure that dma_len, after the addition of dma_offset and alignment, doesn't cross the segment boundary. I.e. change

    if (startsg->length + dma_len > max_seg_size)
        break;

to

    if (ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) > max_seg_size)
        break;

This patch makes this change (it precalculates max_seg_boundary at the beginning of the function iommu_coalesce_chunks). I also added a check that the mapping length doesn't exceed dma_get_seg_boundary(dev) (it is not needed for Promise TX2+ SATA, but it may be needed for other devices that have dma_get_seg_boundary lower than dma_get_max_seg_size).

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
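The overflowing arithmetic can be reproduced in a few lines (numbers taken from the analysis above):

    #include <stdio.h>

    #define IOVP_SIZE   0x1000UL
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
        unsigned long max_seg_size = 0x10000; /* dma_get_max_seg_size() */
        unsigned long dma_len    = 0xf000;    /* coalesced so far */
        unsigned long next_len   = 0x1000;    /* startsg->length */
        unsigned long dma_offset = 0x300;     /* non-zero page offset */

        /* old check: 0xf000 + 0x1000 == 0x10000, not greater, so the
         * entries get coalesced */
        int old_break = dma_len + next_len > max_seg_size;

        /* but the length actually allocated overshoots the boundary */
        unsigned long alloc_len =
            ALIGN(dma_len + next_len + dma_offset, IOVP_SIZE);

        /* new check catches it before coalescing */
        int new_break = alloc_len > max_seg_size;

        printf("old break=%d aligned len=0x%lx (17 pages) new break=%d\n",
               old_break, alloc_len, new_break);
        return 0;
    }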
-
Alan Stern authored
commit ad87e032 upstream. Some USB device / host controller combinations seem to have problems with Link Power Management. For example, Steinar found that his xHCI controller wouldn't handle bandwidth calculations correctly for two video cards simultaneously when LPM was enabled, even though the bus had plenty of bandwidth available. This patch introduces a new quirk flag for devices that should remain disabled for LPM, and creates quirk entries for Steinar's devices. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Steinar H. Gunderson <sgunderson@bigfoot.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Mathias Nyman authored
commit f69115fd upstream.

According to the USB 2 specs, ports need to signal resume for at least 20ms, in practice even longer, before moving to the U0 state. Both host and devices can initiate resume.

On device-initiated resume, a port status interrupt with the port in resume state is issued. The interrupt handler tags a resume_done[port] timestamp with current time + USB_RESUME_TIMEOUT, and kicks the roothub timer. The roothub timer requests port status, finds the port in resume state, checks if the resume_done[port] timestamp has passed, and sets the port to the U0 state.

On host-initiated resume, the current code sets the port to resume state, sleeps 20ms, and finally sets the port to the U0 state. This should also be changed to work in a similar way as the device-initiated resume, with timestamp tagging, but that is not yet tested and will be a separate fix later.

There are a few issues with this approach:

1. A host-initiated resume will also generate a resume event. The event handler will find the port in resume state, believe it's a device-initiated resume, and act accordingly.

2. A port status request might cut the resume signalling short if a get_port_status request is handled during the host resume signalling. The port will be found in resume state. The timestamp is not set, leading to time_after_eq(jiffies, timestamp) returning true, as timestamp = 0. get_port_status will proceed with moving the port to U0.

3. If an error, or anything else, happens to the port during device-initiated resume signalling, it will leave all the device resume parameters hanging uncleared, preventing further suspend, returning -EBUSY, and causing the pm thread to busyloop trying to enter suspend.

Fix this by using the existing resuming_ports bitfield to indicate that resume signalling timing is taken care of. Check if resume_done[port] is set before using it for timestamp comparison, and also clear out any resume-signalling-related variables if the port is not in U0 or Resume state.

This issue was discovered when a PM thread busylooped trying to runtime suspend the xhci USB 2 roothub on a Dell XPS.

Reported-by: Daniel J Blueman <daniel@quora.org>
Tested-by: Daniel J Blueman <daniel@quora.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-