- 11 May, 2016 1 commit
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 10 May, 2016 17 commits
-
-
Tony Luck authored
[ Upstream commit ff15e95c ] In commit: eb1af3b7 ("Fix computation of channel address") I switched the "sck_way" variable from holding the log2 value read from the h/w to instead be the actual number. Unfortunately it is needed in log2 form when used to shift the address. Tested-by: Patrick Geary <patrickg@supermicro.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Acked-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Cc: Aristeu Rozanski <arozansk@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-edac@vger.kernel.org Cc: stable@vger.kernel.org Fixes: eb1af3b7 ("Fix computation of channel address") Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
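A rough sketch of the shift issue described above (variable names are illustrative, not the exact sb_edac hunk): once sck_way holds the decoded interleave count instead of its log2, any shift has to go through ilog2().

    u32 sck_way = 1 << sck_way_bits;      /* e.g. bits = 2 -> 4-way interleave */

    /* wrong: shifts the address by the count (4) instead of log2(4) = 2 */
    /* ch_addr = addr >> sck_way; */

    /* right: shift by the log2 of the interleave count */
    ch_addr = addr >> ilog2(sck_way);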
-
Jan Beulich authored
[ Upstream commit 103f6112 ] Huge pages are not normally available to PV guests. Not suppressing hugetlbfs use results in an endless loop of page faults when user mode code tries to access a hugetlbfs mapped area (since the hypervisor denies such PTEs to be created, but error indications can't be propagated out of xen_set_pte_at(), just like for various of its siblings), and - once killed in an oops like this: kernel BUG at .../fs/hugetlbfs/inode.c:428! invalid opcode: 0000 [#1] SMP ... RIP: e030:[<ffffffff811c333b>] [<ffffffff811c333b>] remove_inode_hugepages+0x25b/0x320 ... Call Trace: [<ffffffff811c3415>] hugetlbfs_evict_inode+0x15/0x40 [<ffffffff81167b3d>] evict+0xbd/0x1b0 [<ffffffff8116514a>] __dentry_kill+0x19a/0x1f0 [<ffffffff81165b0e>] dput+0x1fe/0x220 [<ffffffff81150535>] __fput+0x155/0x200 [<ffffffff81079fc0>] task_work_run+0x60/0xa0 [<ffffffff81063510>] do_exit+0x160/0x400 [<ffffffff810637eb>] do_group_exit+0x3b/0xa0 [<ffffffff8106e8bd>] get_signal+0x1ed/0x470 [<ffffffff8100f854>] do_signal+0x14/0x110 [<ffffffff810030e9>] prepare_exit_to_usermode+0xe9/0xf0 [<ffffffff814178a5>] retint_user+0x8/0x13 This is CVE-2016-3961 / XSA-174. Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juergen Gross <JGross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: stable@vger.kernel.org Cc: xen-devel <xen-devel@lists.xenproject.org> Link: http://lkml.kernel.org/r/57188ED802000078000E431C@prv-mh.provo.novell.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dominik Dingel authored
[ Upstream commit 7f9be775 ] On s390 we only can enable hugepages if the underlying hardware/hypervisor also does support this. Common code now would assume this to be signaled by setting HPAGE_SHIFT to 0. But on s390, where we only support one hugepage size, there is a link between HPAGE_SHIFT and pageblock_order. So instead of setting HPAGE_SHIFT to 0, we will implement the check for the hardware capability. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dominik Dingel authored
[ Upstream commit 2531c8cf ] s390 has a constant hugepage size, by setting HPAGE_SHIFT we also change e.g. the pageblock_order, which should be independent in respect to hugepage support. With this patch every architecture is free to define how to check for hugepage support. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
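Taken together, these two patches replace the old "HPAGE_SHIFT == 0 means no hugepages" convention with an architecture-overridable predicate. A condensed sketch of that shape (macro bodies paraphrased from the upstream commits, so treat them as an approximation rather than the verbatim hunks):

    /* include/linux/hugetlb.h: generic fallback keeps the old behaviour */
    #ifndef hugepages_supported
    #define hugepages_supported() (HPAGE_SHIFT != 0)
    #endif

    /* arch/s390/include/asm/hugetlb.h: HPAGE_SHIFT stays fixed (it also
     * feeds pageblock_order); the machine facility bit decides whether
     * hugepages can actually be used. */
    #define hugepages_supported() (MACHINE_HAS_HPAGE)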
-
Huacai Chen authored
[ Upstream commit 221004c6 ] Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: stable@vger.kernel.org Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Jérôme Glisse authored
[ Upstream commit b5dcec69 ] Allowing userptr bos, which are basically a list of pages from some vma (so either anonymous pages or file-backed pages), would lead to serious corruption of kernel structures and counters (because we overwrite the page->mapping field when mapping the buffer). This will already block if the buffer was populated before anyone tries to mmap it, because then TTM_PAGE_FLAG_SG would be set in the ttm_tt flags. But that flag is checked before ttm_tt_populate in the ttm vm fault handler. So to be safe, just add a check to the verify_access() callback. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
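The guard itself is small; a sketch along the lines of the radeon/TTM code of that era (helper names are assumptions, not a verbatim patch):

    static int radeon_verify_access(struct ttm_buffer_object *bo, struct file *filp)
    {
            struct radeon_bo *rbo = container_of(bo, struct radeon_bo, tbo);

            /* userptr BOs wrap arbitrary vma pages; mmap'ing them through
             * TTM would clobber page->mapping, so refuse access outright */
            if (radeon_ttm_tt_has_userptr(bo->ttm))
                    return -EPERM;
            return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
    }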
-
cpaul@redhat.com authored
[ Upstream commit deba0a2a ] With the joys of things running concurrently, there's always a chance that the port we get passed in drm_dp_payload_send_msg() isn't actually valid anymore. Because of this, we need to make sure we validate the reference to the port before we use it otherwise we risk running into various race conditions. For instance, on the Dell MST monitor I have here for testing, hotplugging it enough times causes us to kernel panic: [drm:intel_mst_enable_dp] 1 [drm:drm_dp_update_payload_part2] payload 0 1 [drm:intel_get_hpd_pins] hotplug event received, stat 0x00200000, dig 0x10101011, pins 0x00000020 [drm:intel_hpd_irq_handler] digital hpd port B - short [drm:intel_dp_hpd_pulse] got hpd irq on port B - short [drm:intel_dp_check_mst_status] got esi 00 10 00 [drm:drm_dp_update_payload_part2] payload 1 1 general protection fault: 0000 [#1] SMP … Call Trace: [<ffffffffa012b632>] drm_dp_update_payload_part2+0xc2/0x130 [drm_kms_helper] [<ffffffffa032ef08>] intel_mst_enable_dp+0xf8/0x180 [i915] [<ffffffffa0310dbd>] haswell_crtc_enable+0x3ed/0x8c0 [i915] [<ffffffffa030c84d>] intel_atomic_commit+0x5ad/0x1590 [i915] [<ffffffffa01db877>] ? drm_atomic_set_crtc_for_connector+0x57/0xe0 [drm] [<ffffffffa01dc4e7>] drm_atomic_commit+0x37/0x60 [drm] [<ffffffffa0130a3a>] drm_atomic_helper_set_config+0x7a/0xb0 [drm_kms_helper] [<ffffffffa01cc482>] drm_mode_set_config_internal+0x62/0x100 [drm] [<ffffffffa01d02ad>] drm_mode_setcrtc+0x3cd/0x4e0 [drm] [<ffffffffa01c18e3>] drm_ioctl+0x143/0x510 [drm] [<ffffffffa01cfee0>] ? drm_mode_setplane+0x1b0/0x1b0 [drm] [<ffffffff810f79a7>] ? hrtimer_start_range_ns+0x1b7/0x3a0 [<ffffffff81212962>] do_vfs_ioctl+0x92/0x570 [<ffffffff81590852>] ? __sys_recvmsg+0x42/0x80 [<ffffffff81212eb9>] SyS_ioctl+0x79/0x90 [<ffffffff816b4e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4 RIP [<ffffffffa012b026>] drm_dp_payload_send_msg+0x146/0x1f0 [drm_kms_helper] Which occurs because of the hotplug event shown in the log, which ends up causing DRM's dp helpers to drop the port we're updating the payload on and panic. CC: stable@vger.kernel.org Signed-off-by: Lyude <cpaul@redhat.com> Reviewed-by: David Airlie <airlied@linux.ie> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
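The shape of the fix, as a hedged sketch (the validated-reference helpers are the MST code's internal ones; their exact use here is an assumption): take a validated reference on the port before building the payload message, and bail out if the port has already been torn down by a hotplug.

    port = drm_dp_get_validated_port_ref(mgr, port);
    if (!port)
            return -EINVAL;     /* port vanished under us */

    /* ... build and send the payload allocation/teardown message ... */

    drm_dp_put_port(port);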
-
Takashi Iwai authored
[ Upstream commit 67f3754b ] The commit [9bef72bd: ALSA: pcxhr: Use nonatomic PCM ops] converted to non-atomic PCM ops, but shamelessly with an unbalanced mutex locking, which leads to the hangup easily. Fix it. Fixes: 9bef72bd ('ALSA: pcxhr: Use nonatomic PCM ops') Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=116441 Cc: <stable@vger.kernel.org> # 3.18+ Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sebastian Andrzej Siewior authored
[ Upstream commit 89e9e66b ] If userspace calls UNLOCK_PI unconditionally without trying the TID -> 0 transition in user space first then the user space value might not have the waiters bit set. This opens the following race:
    CPU0                            CPU1
    uval = get_user(futex)
                                    lock(hb)
    lock(hb)
                                    futex |= FUTEX_WAITERS
                                    ....
                                    unlock(hb)
    cmpxchg(futex, uval, newval)
So the cmpxchg fails and returns -EINVAL to user space, which is wrong because the futex value is valid. To handle this (yes, yet another) corner case gracefully, check for a flag change and retry. [ tglx: Massaged changelog and slightly reworked implementation ] Fixes: ccf9e6a8 ("futex: Make unlock_pi more robust") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: stable@vger.kernel.org Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Darren Hart <dvhart@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1460723739-5195-1-git-send-email-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
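A minimal sketch of the retry (illustrative, not the exact futex.c hunk): a failed cmpxchg is only fatal if the owner TID itself changed; if only the FUTEX_WAITERS flag was set by a concurrent waiter, the unlock is simply redone.

    if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
            return -EFAULT;

    if (curval != uval) {
            if ((curval & FUTEX_TID_MASK) == (uval & FUTEX_TID_MASK))
                    return -EAGAIN;   /* only flag bits changed: retry the unlock */
            return -EINVAL;           /* the TID really changed: invalid state */
    }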
-
Lu, Han authored
[ Upstream commit 9859a971 ] Add HD Audio Device PCI ID for the Intel Broxton-T platform. It is an HDA Intel PCH controller. Signed-off-by: Lu, Han <han.lu@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lu, Han authored
[ Upstream commit c87693da ] Add HD Audio Device PCI ID for the Intel Broxton platform. It is an HDA Intel PCH controller. Signed-off-by: Lu, Han <han.lu@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lars-Peter Clausen authored
[ Upstream commit 38740a5b ] When using asynchronous read or write operations on the USB endpoints the issuer of the IO request is notified by calling the ki_complete() callback of the submitted kiocb when the URB has been completed. Calling this ki_complete() callback will free kiocb. Make sure that the structure is no longer accessed beyond that point, otherwise undefined behaviour might occur. Fixes: 2e4c7553 ("usb: gadget: f_fs: add aio support") Cc: <stable@vger.kernel.org> # v3.15+ Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
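The pattern being enforced, as a short sketch (field names are placeholders, not the f_fs structures verbatim): copy out anything still needed, complete the request, and never dereference the kiocb afterwards.

    struct kiocb *kiocb = io_data->kiocb;
    bool read = io_data->read;              /* needed after completion: save it first */

    kiocb->ki_complete(kiocb, ret, ret);    /* may free kiocb */

    if (read)                               /* uses the saved copy, not kiocb */
            kfree(io_data->buf);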
-
Alex Deucher authored
[ Upstream commit bfaddd9f ] This reverts commit e64c952e. ATPX is the ACPI method for controlling AMD PowerXpress laptops. There are flags to indicate which methods are supported. If the dGPU power down flag is not supported, the driver needs to implement the dGPU power down manually. We had previously always forced the driver to assume the ATPX dGPU power down was present, but this causes problems on boards where it is not, leading to GPU hangs when attempting to power down the dGPU. Manual dGPU power down is not currently supported in the Linux driver. Some laptops indicate that the ATPX dGPU power down method is not present, but it actually apparently is. I'm not sure if this is a bios bug and it should be set or if there is a reason it was unset and the method should not be used. This is not an issue on other OSes since both the ATPX and the manual driver power down methods are supported. This is apparently fairly widespread, so just revert for now. bugs: https://bugzilla.kernel.org/show_bug.cgi?id=115321 https://bugzilla.kernel.org/show_bug.cgi?id=116581 https://bugzilla.kernel.org/show_bug.cgi?id=116251 Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alex Deucher authored
[ Upstream commit bcb31eba ] bug: https://bugs.freedesktop.org/show_bug.cgi?id=76490 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alex Deucher authored
[ Upstream commit a64663d9 ] bug: https://bugzilla.kernel.org/show_bug.cgi?id=115291 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alex Deucher authored
[ Upstream commit 2b02ec79 ] Bug: https://bugs.freedesktop.org/show_bug.cgi?id=92260 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Maxim Sheviakov authored
[ Upstream commit e7865479 ] Just adds the quirk for MSI R7 370 Armor 2X. Bug: https://bugs.freedesktop.org/show_bug.cgi?id=91294 Signed-off-by: Maxim Sheviakov <mrader3940@yandex.ru> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 08 May, 2016 14 commits
-
-
Anton Blanchard authored
[ Upstream commit beff8237 ] scan_features() updates cpu_user_features but not cpu_user_features2. Amongst other things, cpu_user_features2 contains the user TM feature bits which we must keep in sync with the kernel TM feature bit. Signed-off-by: Anton Blanchard <anton@samba.org> Cc: stable@vger.kernel.org Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Anton Blanchard authored
[ Upstream commit 6997e57d ] The REAL_LE feature entry in the ibm_pa_feature struct is missing an MMU feature value, meaning all the remaining elements initialise the wrong values. This means instead of checking for byte 5, bit 0, we check for byte 0, bit 0, and then we incorrectly set the CPU feature bit as well as MMU feature bit 1 and CPU user feature bits 0 and 2 (5). Checking byte 0 bit 0 (IBM numbering), means we're looking at the "Memory Management Unit (MMU)" feature - ie. does the CPU have an MMU. In practice that bit is set on all platforms which have the property. This means we set CPU_FTR_REAL_LE always. In practice that seems not to matter because all the modern cpus which have this property also implement REAL_LE, and we've never needed to disable it. We're also incorrectly setting MMU feature bit 1, which is: #define MMU_FTR_TYPE_8xx 0x00000002 Luckily the only place that looks for MMU_FTR_TYPE_8xx is in Book3E code, which can't run on the same cpus as scan_features(). So this also doesn't matter in practice. Finally in the CPU user feature mask, we're setting bits 0 and 2. Bit 2 is not currently used, and bit 0 is: #define PPC_FEATURE_PPC_LE 0x00000001 Which says the CPU supports the old style "PPC Little Endian" mode. Again this should be harmless in practice as no 64-bit CPUs implement that mode. Fix the code by adding the missing initialisation of the MMU feature. Also add a comment marking CPU user feature bit 2 (0x4) as reserved. It would be unsafe to start using it as old kernels incorrectly set it. Fixes: 44ae3ab3 ("powerpc: Free up some CPU feature bits by moving out MMU-related features") Signed-off-by: Anton Blanchard <anton@samba.org> Cc: stable@vger.kernel.org [mpe: Flesh out changelog, add comment reserving 0x4] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
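A toy illustration of the bug class (a simplified rendering of the table entry, not the powerpc source verbatim): with positional initializers, one missing field shifts every later value into the wrong slot.

    struct pa_feature {
            unsigned long   cpu_features;   /* CPU_FTR_* */
            unsigned long   mmu_features;   /* MMU_FTR_* */
            unsigned int    cpu_user_ftrs;  /* PPC_FEATURE_* */
            unsigned char   pabyte;         /* byte number in ibm,pa-features */
            unsigned char   pabit;          /* bit number (big-endian) */
    };

    /* broken: PPC_FEATURE_TRUE_LE (0x2) lands in mmu_features, 5 becomes
     * the user feature mask, and byte/bit degenerate to 0/0 */
    { CPU_FTR_REAL_LE, PPC_FEATURE_TRUE_LE, 5, 0 },

    /* fixed: explicitly supply the (empty) MMU feature column */
    { CPU_FTR_REAL_LE, 0, PPC_FEATURE_TRUE_LE, 5, 0 },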
-
Bastien Nocera authored
[ Upstream commit afecb146 ] The Optiplex 9020m with Haswell-DT processor needs a quirk for the headset jack at the front of the machine to be able to use microphones. A quirk for this model was originally added in 31278997, but c77900e6 removed it in favour of a more generic version. Unfortunately, pin configurations can change based on firmware/BIOS versions, and the generic version doesn't have any effect on newer versions of the machine/firmware anymore. With help from David Henningsson <diwic@ubuntu.com> Signed-off-by: Bastien Nocera <hadess@hadess.net> Tested-by: Bastien Nocera <hadess@hadess.net> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Ville Syrjälä authored
[ Upstream commit 31318a92 ] HSW still has the wake FIFO, so let's check it. Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Deepak S <deepak.s@linux.intel.com> Fixes: 05a2fb15 ("drm/i915: Consolidate forcewake code") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1460633942-24013-1-git-send-email-ville.syrjala@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> (cherry picked from commit 3d7d0c85) Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tom Lendacky authored
[ Upstream commit f709b45e ] Prevent information from leaking to userspace by doing a memset to 0 of the export state structure before setting the structure values and copying it. This prevents un-initialized padding areas from being copied into the export area. Cc: <stable@vger.kernel.org> # 3.14.x- Reported-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
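The fix pattern in miniature (structure and field names are illustrative, not the ccp driver's exact ones): zero the export blob before filling it, so compiler-inserted padding never carries stale kernel memory out to user space.

    struct ccp_sha_exp_ctx state;

    memset(&state, 0, sizeof(state));   /* clears padding bytes as well */
    state.type = rctx->type;            /* then copy only the intended fields */
    state.msg_bits = rctx->msg_bits;
    memcpy(out, &state, sizeof(state));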
-
Xiaodong Liu authored
[ Upstream commit 0851561d ] In sha_complete_job, an incorrect mcryptd_hash_request_ctx pointer is used when checking and completing other jobs. If the memory of the first completed req is freed while other jobs are still being completed in the function, the kernel will crash since a NULL pointer is assigned to RIP. Cc: <stable@vger.kernel.org> Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Yingjoe Chen authored
[ Upstream commit 5fedbb92 ] The debounce time unit for gpio_chip.set_debounce is us, but mtk_gpio_set_debounce regards it as ms. Fix this by correcting the debounce time array dbnc_arr so it can find the correct debounce setting. The debounce time for the first debounce setting is 500us; correct this as well. While at it, also rename the debounce time array to "debounce_time" for readability. Cc: stable@vger.kernel.org Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com> Reviewed-by: Daniel Kurtz <djkurtz@chromium.org> Acked-by: Hongzhou Yang <hongzhou.yang@mediatek.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
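A sketch of the unit fix (table values paraphrased from the commit; treat the exact numbers as an approximation): gpio_chip.set_debounce passes microseconds, so the lookup table must be in microseconds too, starting at the hardware's 500us step. Here 'debounce' is the requested value in us.

    static const unsigned int debounce_time[] = { 500, 1000, 16000, 32000,
                                                  64000, 128000, 256000 };
    unsigned int i, dbnc = ARRAY_SIZE(debounce_time) - 1;

    for (i = 0; i < ARRAY_SIZE(debounce_time); i++) {
            if (debounce <= debounce_time[i]) {   /* compare us against us */
                    dbnc = i;
                    break;
            }
    }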
-
Alex Deucher authored
[ Upstream commit 7403c515 ] This got lost somewhere along the way. This fixes audio not working until set_property was called. Noticed-by: Hyungwon Hwang <hyungwon.hwang7@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dmitry Ivanov authored
[ Upstream commit 8f815cdd ] A non-privileged user can create a netlink socket with the same port_id as used by an existing open nl80211 netlink socket (e.g. as used by a hostapd process) with a different protocol number. Closing this socket will then lead to the notification going to nl80211's socket release notification handler, and possibly cause an action such as removing a virtual interface. Fix this issue by checking that the netlink protocol is NETLINK_GENERIC. Since generic netlink has no notifier chain of its own, we can't fix the problem more generically. Fixes: 026331c4 ("cfg80211/mac80211: allow registering for and sending action frames") Cc: stable@vger.kernel.org Signed-off-by: Dmitry Ivanov <dima@ubnt.com> [rewrite commit message] Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
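Condensed sketch of the nl80211-side check (not the verbatim hunk): ignore release notifications that do not come from a generic-netlink socket.

    static int nl80211_netlink_notify(struct notifier_block *nb,
                                      unsigned long state, void *_notify)
    {
            struct netlink_notify *notify = _notify;

            if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
                    return NOTIFY_DONE;

            /* ... tear down interfaces/registrations owned by notify->portid ... */
            return NOTIFY_DONE;
    }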
-
Dmitry Ivanov authored
[ Upstream commit e2726020 ] All existing users of NETLINK_URELEASE use it to clean up resources that were previously allocated to a socket via some command. As a result, no users require getting this notification for unbound sockets. Sending it for unbound sockets, however, is a problem because any user (including unprivileged users) can create a socket that uses the same ID as an existing socket. Binding this new socket will fail, but if the NETLINK_URELEASE notification is generated for such sockets, the users thereof will be tricked into thinking the socket that they allocated the resources for is closed. In the nl80211 case, this will cause destruction of virtual interfaces that still belong to an existing hostapd process; this is the case that Dmitry noticed. In the NFC case, it will cause a poll abort. In the case of netlink log/queue it will cause them to stop reporting events, as if NFULNL_CFG_CMD_UNBIND/NFQNL_CFG_CMD_UNBIND had been called. Fix this problem by checking that the socket is bound before generating the NETLINK_URELEASE notification. Cc: stable@vger.kernel.org Signed-off-by: Dmitry Ivanov <dima@ubnt.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
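The netlink-core side, sketched (a condensed view of netlink_release(), not a verbatim hunk): only sockets that actually completed a bind generate the NETLINK_URELEASE notification, so a stranger who merely created a socket with a colliding port id cannot trigger it.

    if (nlk->portid && nlk->bound) {    /* previously only nlk->portid was checked */
            struct netlink_notify n = {
                    .net      = sock_net(sk),
                    .protocol = sk->sk_protocol,
                    .portid   = nlk->portid,
            };
            atomic_notifier_call_chain(&netlink_chain, NETLINK_URELEASE, &n);
    }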
-
Sebastian Ott authored
[ Upstream commit 9d89d9e6 ] Newer machines might use a different (larger) format for function measurement blocks. To ensure that we comply with the alignment requirement on these machines and prevent memory corruption (when firmware writes more data than we expect) add 16 padding bytes at the end of the fmb. Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Vladis Dronov authored
[ Upstream commit 162f98de ] The gtco driver expects at least one valid endpoint. If given malicious descriptors that specify 0 for the number of endpoints, it will crash in the probe function. Ensure there is at least one endpoint on the interface before using it. Also let's fix a minor coding style issue. The full correct report of this issue can be found in the public Red Hat Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1283385 Reported-by: Ralf Spenneberg <ralf@spenneberg.net> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
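The probe-time guard, condensed into a sketch (local variable names are assumptions): bail out before touching endpoint[0] if the interface carries no endpoints at all.

    struct usb_host_interface *alt = usbinterface->cur_altsetting;
    struct usb_endpoint_descriptor *endpoint;

    if (alt->desc.bNumEndpoints < 1) {
            dev_err(&usbinterface->dev, "Invalid number of endpoints\n");
            return -EINVAL;
    }
    endpoint = &alt->endpoint[0].desc;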
-
Emmanuel Grumbach authored
[ Upstream commit 9fc515bc ] IWL_INFO is not an error but still printed by default. "can't access the RSA semaphore it is write protected" seems worrisome but it is not really a problem. CC: <stable@vger.kernel.org> [4.1+] Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sasha Levin authored
This reverts commit 79b768de. Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 25 Apr, 2016 3 commits
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mike Galbraith authored
The backport of 81ad4276 failed to adjust for an intervening ->get_trip_temp() argument type change, thus causing the stack protector to panic. drivers/thermal/thermal_core.c: In function ‘thermal_zone_device_register’: drivers/thermal/thermal_core.c:1569:41: warning: passing argument 3 of ‘tz->ops->get_trip_temp’ from incompatible pointer type [-Wincompatible-pointer-types] if (tz->ops->get_trip_temp(tz, count, &trip_temp)) ^ drivers/thermal/thermal_core.c:1569:41: note: expected ‘long unsigned int *’ but argument is of type ‘int *’ CC: <stable@vger.kernel.org> #3.18,#4.1 Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
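The adjustment the backport needed, sketched from the warning above: the older ->get_trip_temp() prototype takes an unsigned long pointer, so the backported caller must use a matching local.

    unsigned long trip_temp;    /* was declared int in the broken backport */

    if (tz->ops->get_trip_temp(tz, count, &trip_temp))
            continue;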
-
Eric Dumazet authored
[ Upstream commit c2e7204d ] Tracking idle time in bictcp_cwnd_event() is imprecise, as epoch_start is normally set at ACK processing time, not at send time. Doing a proper fix would need to add an additional state variable, and does not seem worth the trouble, given CUBIC bug has been there forever before Jana noticed it. Let's simply not set epoch_start in the future, otherwise bictcp_update() could overflow and CUBIC would again grow cwnd too fast. This was detected thanks to a packetdrill test Neal wrote that was flaky before applying this fix. Fixes: 30927520 ("tcp_cubic: better follow cubic curve after idle period") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Cc: Jana Iyengar <jri@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 20 Apr, 2016 5 commits
-
-
Eric Dumazet authored
[ Upstream commit 30927520 ] Jana Iyengar found an interesting issue on CUBIC : The epoch is only updated/reset initially and when experiencing losses. The delta "t" of now - epoch_start can be arbitrary large after app idle as well as the bic_target. Consequentially the slope (inverse of ca->cnt) would be really large, and eventually ca->cnt would be lower-bounded in the end to 2 to have delayed-ACK slow-start behavior. This particularly shows up when slow_start_after_idle is disabled as a dangerous cwnd inflation (1.5 x RTT) after few seconds of idle time. Jana initial fix was to reset epoch_start if app limited, but Neal pointed out it would ask the CUBIC algorithm to recalculate the curve so that we again start growing steeply upward from where cwnd is now (as CUBIC does just after a loss). Ideally we'd want the cwnd growth curve to be the same shape, just shifted later in time by the amount of the idle period. Reported-by: Jana Iyengar <jri@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: Sangtae Ha <sangtae.ha@gmail.com> Cc: Lawrence Brakmo <lawrence@brakmo.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
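The idle-shift introduced here, combined with the later clamp fix listed above (commit c2e7204d), condensed into a sketch (not the exact tcp_cubic.c hunks): on the first transmit after an application-idle period, push epoch_start forward by the idle time so the cubic curve resumes where it left off, and clamp it so it never lands in the future.

    /* on CA_EVENT_TX_START, i.e. the first transmit after idle (sketch) */
    struct bictcp *ca = inet_csk_ca(sk);
    u32 now = tcp_time_stamp;
    s32 delta = now - tcp_sk(sk)->lsndtime;   /* time the flow sat idle */

    if (ca->epoch_start && delta > 0) {
            ca->epoch_start += delta;         /* shift the curve, don't reset it */
            if (after(ca->epoch_start, now))
                    ca->epoch_start = now;    /* clamp: never in the future */
    }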
-
Robert Dobrowolski authored
[ Upstream commit e86103a7 ] On the BXT platform, the Host Controller and Device Controller appear as the same PCI device but with a different device function. The HCD should not pass data to the Device Controller, only to Host Controllers. Check whether the companion device is a Host Controller; otherwise skip it. Cc: <stable@vger.kernel.org> Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
[ Upstream commit 13630746 ] Add a new NO_REPORT_LUNS quirk and set it for Seagate drives with a usb-id of: 0bc2:331a, as these will fail to respond to a REPORT_LUNS command. Cc: stable@vger.kernel.org Reported-and-tested-by: David Webb <djw@noc.ac.uk> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mathias Nyman authored
[ Upstream commit 98d74f9c ] PCI hotpluggable xhci controllers such as some Alpine Ridge solutions will remove the xhci controller from the PCI bus when the last USB device is disconnected. Add a flag to indicate that the host is being removed to avoid queueing configure_endpoint commands for the dropped endpoints. For PCI hotplugged controllers this will prevent 5 second command timeouts. For static xhci controllers the configure_endpoint command is not needed in the removal case, as everything will be returned and freed, and the controller is reset. For now the flag is only set for PCI connected host controllers. Cc: <stable@vger.kernel.org> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Roger Quadros authored
[ Upstream commit ad6b1d91 ] The problem seems to be that if a new device is detected while we have already removed the shared HCD, then many of the xhci operations (e.g. xhci_alloc_dev(), xhci_setup_device()) hang as command never completes. I don't think XHCI can operate without the shared HCD as we've already called xhci_halt() in xhci_only_stop_hcd() when shared HCD goes away. We need to prevent new commands from being queued not only when HCD is dying but also when HCD is halted. The following lockup was detected while testing the otg state machine. [ 178.199951] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.205799] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1 [ 178.214458] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220f04c hci version 0x100 quirks 0x00010010 [ 178.223619] xhci-hcd xhci-hcd.0.auto: irq 400, io mem 0x48890000 [ 178.230677] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 178.237796] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.245358] usb usb1: Product: xHCI Host Controller [ 178.250483] usb usb1: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.257783] usb usb1: SerialNumber: xhci-hcd.0.auto [ 178.267014] hub 1-0:1.0: USB hub found [ 178.272108] hub 1-0:1.0: 1 port detected [ 178.278371] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.284171] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2 [ 178.294038] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003 [ 178.301183] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.308776] usb usb2: Product: xHCI Host Controller [ 178.313902] usb usb2: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.321222] usb usb2: SerialNumber: xhci-hcd.0.auto [ 178.329061] hub 2-0:1.0: USB hub found [ 178.333126] hub 2-0:1.0: 1 port detected [ 178.567585] dwc3 48890000.usb: usb_otg_start_host 0 [ 178.572707] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 178.578064] usb usb2: USB disconnect, device number 1 [ 178.586565] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered [ 178.592585] xhci-hcd xhci-hcd.0.auto: remove, state 1 [ 178.597924] usb usb1: USB disconnect, device number 1 [ 178.603248] usb 1-1: new high-speed USB device number 2 using xhci-hcd [ 190.597337] INFO: task kworker/u4:0:6 blocked for more than 10 seconds. [ 190.604273] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.610228] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[ 190.618443] kworker/u4:0 D c05c0ac0 0 6 2 0x00000000 [ 190.625120] Workqueue: usb_otg usb_otg_work [ 190.629533] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98) [ 190.636915] [<c05c10ac>] (schedule) from [<c05c1318>] (schedule_preempt_disabled+0xc/0x10) [ 190.645591] [<c05c1318>] (schedule_preempt_disabled) from [<c05c23d0>] (mutex_lock_nested+0x1ac/0x3fc) [ 190.655353] [<c05c23d0>] (mutex_lock_nested) from [<c046cf8c>] (usb_disconnect+0x3c/0x208) [ 190.664043] [<c046cf8c>] (usb_disconnect) from [<c0470cf0>] (_usb_remove_hcd+0x98/0x1d8) [ 190.672535] [<c0470cf0>] (_usb_remove_hcd) from [<c0485da8>] (usb_otg_start_host+0x50/0xf4) [ 190.681299] [<c0485da8>] (usb_otg_start_host) from [<c04849a4>] (otg_set_protocol+0x5c/0xd0) [ 190.690153] [<c04849a4>] (otg_set_protocol) from [<c0484b88>] (otg_set_state+0x170/0xbfc) [ 190.698735] [<c0484b88>] (otg_set_state) from [<c0485740>] (otg_statemachine+0x12c/0x470) [ 190.707326] [<c0485740>] (otg_statemachine) from [<c0053c84>] (process_one_work+0x1b4/0x4a0) [ 190.716162] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c) [ 190.724742] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0) [ 190.732328] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24) [ 190.739898] 5 locks held by kworker/u4:0/6: [ 190.744274] #0: ("%s""usb_otg"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.752799] #1: ((&otgd->work)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.761326] #2: (&otgd->fsm.lock){+.+.+.}, at: [<c048562c>] otg_statemachine+0x18/0x470 [ 190.769934] #3: (usb_bus_list_lock){+.+.+.}, at: [<c0470ce8>] _usb_remove_hcd+0x90/0x1d8 [ 190.778635] #4: (&dev->mutex){......}, at: [<c046cf8c>] usb_disconnect+0x3c/0x208 [ 190.786700] INFO: task kworker/1:0:14 blocked for more than 10 seconds. [ 190.793633] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.799567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[ 190.807783] kworker/1:0 D c05c0ac0 0 14 2 0x00000000 [ 190.814457] Workqueue: usb_hub_wq hub_event [ 190.818866] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98) [ 190.826252] [<c05c10ac>] (schedule) from [<c05c4e40>] (schedule_timeout+0x13c/0x1ec) [ 190.834377] [<c05c4e40>] (schedule_timeout) from [<c05c19f0>] (wait_for_common+0xbc/0x150) [ 190.843062] [<c05c19f0>] (wait_for_common) from [<bf068a3c>] (xhci_setup_device+0x164/0x5cc [xhci_hcd]) [ 190.852986] [<bf068a3c>] (xhci_setup_device [xhci_hcd]) from [<c046b7f4>] (hub_port_init+0x3f4/0xb10) [ 190.862667] [<c046b7f4>] (hub_port_init) from [<c046eb64>] (hub_event+0x704/0x1018) [ 190.870704] [<c046eb64>] (hub_event) from [<c0053c84>] (process_one_work+0x1b4/0x4a0) [ 190.878919] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c) [ 190.887503] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0) [ 190.895076] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24) [ 190.902650] 5 locks held by kworker/1:0/14: [ 190.907023] #0: ("usb_hub_wq"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.915454] #1: ((&hub->events)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.924070] #2: (&dev->mutex){......}, at: [<c046e490>] hub_event+0x30/0x1018 [ 190.931768] #3: (&port_dev->status_lock){+.+.+.}, at: [<c046eb50>] hub_event+0x6f0/0x1018 [ 190.940558] #4: (&bus->usb_address0_mutex){+.+.+.}, at: [<c046b458>] hub_port_init+0x58/0xb10 Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
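The guard described above, condensed (the state flag names follow the xhci driver of that period): refuse to queue new commands once the controller is dying or already halted, so nothing waits forever on a command ring that will never run again.

    /* in xhci's command-enqueue path (sketch) */
    if ((xhci->xhc_state & XHCI_STATE_DYING) ||
        (xhci->xhc_state & XHCI_STATE_HALTED)) {
            xhci_dbg(xhci, "xHCI dying or halted, can't queue command\n");
            return -ESHUTDOWN;
    }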
-