- 17 Sep, 2014 40 commits
-
K. Y. Srinivasan authored
commit 3533f860 upstream. On some Windows hosts on FC SANs, TEST_UNIT_READY can return SRB_STATUS_ERROR. Correctly handle this. Note that there is sufficient sense information to support scsi error handling even in this case. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit f885fb73 upstream. Correctly set SRB flags for all valid I/O directions. Some IHV drivers on the Windows host require this. The host validates the command and SRB flags prior to passing the command down to native driver stack. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit adb6f9e1 upstream. Based on the negotiated VMBUS protocol version, we adjust the size of the storage protocol messages. The two sizes we currently handle are pre-win8 and post-win8. In WS2012 R2, we are negotiating a higher VMBUS protocol version than the win8 version. Make adjustments to correctly handle this. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 52f9614d upstream. Set cmd_per_lun to reflect value supported by the Host. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 4cd83ecd upstream. Hyper-V hosts can support multiple targets, multiple channels and a larger number of LUNs per target. Update the code to reflect this. With this patch we can correctly enumerate all the paths in a multi-path storage environment. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 8caf92d8 upstream. Going forward it is possible that some of the commands that are not currently implemented will be implemented on future Windows hosts. Even if they are not implemented, we are told the host will correctly handle unsupported commands (by returning appropriate return code and sense information). Make command filtering depend on the host version. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
commit 56b26e69 upstream. On Azure, we have seen instances of unbounded I/O latencies. To deal with this issue, implement a handler that can reset the timeout. Note that the host guarantees that it will respond to each command that has been issued. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> [hch: added a better comment explaining the issue] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
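A minimal sketch of such a timeout handler in a SCSI low-level driver; the function name and template wiring are illustrative rather than the exact storvsc code:

    #include <linux/blkdev.h>
    #include <scsi/scsi_cmnd.h>
    #include <scsi/scsi_host.h>

    static enum blk_eh_timer_return storvsc_eh_timed_out(struct scsi_cmnd *scmd)
    {
            /* The host guarantees a response for every command it has
             * accepted, so rather than escalating to SCSI error handling
             * we simply give the command more time. */
            return BLK_EH_RESET_TIMER;
    }

    static struct scsi_host_template scsi_driver = {
            /* ... other template fields ... */
            .eh_timed_out = storvsc_eh_timed_out,
    };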
-
James Bottomley authored
commit 884ffee0 upstream. hostt->name might contain spaces, so use the ->proc_name short name instead when creating per-driver command slabs. Signed-off-by: James Bottomley <JBottomley@Parallels.com> Reported-by: poma <pomidorabelisima@gmail.com> Tested-by: poma <pomidorabelisima@gmail.com> Reviewed-by: Vladimir Davydov <vdavydov@parallels.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
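Roughly, the fix amounts to building the slab name from the short identifier; the pool field names here are illustrative, not necessarily the exact SCSI midlayer code:

    /* proc_name is a short identifier without spaces, unlike hostt->name,
     * so it is safe to embed in the slab cache name */
    pool->cmd_name = kasprintf(GFP_KERNEL, "%s_cmd", hostt->proc_name);
    pool->cmd_slab = kmem_cache_create(pool->cmd_name, sizeof(struct scsi_cmnd),
                                       0, SLAB_HWCACHE_ALIGN, NULL);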
-
Aneesh Kumar K.V authored
commit 7e467245 upstream. We could get wrong results if the compiler recomputed old_pmd. Avoid that by using ACCESS_ONCE(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
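A compressed illustration of the hazard and the fix (pmdp is the PMD pointer being examined; the surrounding powerpc THP code is elided):

    pmd_t old_pmd;

    /* racy: the compiler is free to re-read *pmdp later, so "old_pmd"
     * may not stay consistent across its uses */
    old_pmd = *pmdp;

    /* fixed: force a single load so every later use sees the same snapshot */
    old_pmd = ACCESS_ONCE(*pmdp);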
-
Aneesh Kumar K.V authored
commit 969b7b20 upstream. As per the ISA, for 4k base page size we compare bits 14..65 of the VA specified with the entry_VA in the TLB. That implies we need to make sure we do a tlbie with all the possible 4k VAs we used to access the 16MB hugepage. With 64k base page size we compare bits 14..57 of the VA. Hence we cannot ignore the lower 24 bits of the VA when doing tlbie. We also cannot TLB-invalidate a 16MB entry with just one tlbie instruction because we don't track which VA was used to instantiate the TLB entry. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit fc047955 upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Use _PAGE_COMBO to determine the page size with which we should invalidate the hash table entries on unmap. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 629149fa upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Handle this correctly for 16M pages. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit fa1f8ae8 upstream. The segment identifier and segment size will remain the same in the loop, so we can compute them outside. We also change the hugepage_invalidate interface so that we can use it in a later patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit b0aa44a3 upstream. With hugepages, we store the hpte valid information in the pte page whose address is stored in the second half of the PMD. Use a write barrier to make sure clearing pmd busy bit and updating hpte valid info are ordered properly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
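Sketched, the required ordering looks like this; the array and bit names are illustrative rather than the literal patch:

    /* 1. record the new hash-PTE valid/slot info in the page hanging off
     *    the second half of the PMD */
    hpte_slot_array[index] = hidx << 4 | 0x1;

    /* 2. make the store above visible to other CPUs before step 3 */
    smp_wmb();

    /* 3. only now publish the PMD with the busy bit cleared */
    *pmdp = __pmd(pmd_val(pmd) & ~_PAGE_BUSY);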
-
Gavin Shan authored
commit 5efbabe0 upstream. Function remove_ddw() could be called in of_reconfig_notifier and we potentially remove the dynamic DMA window property, which invokes of_reconfig_notifier again. Eventually, it leads to the deadlock shown in the following backtrace. The patch fixes the issue by deferring release of the dynamic DMA window property until the device node itself is released.
=============================================
[ INFO: possible recursive locking detected ]
3.16.0+ #428 Tainted: G W
---------------------------------------------
drmgr/2273 is trying to acquire lock:
((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
but task is already holding lock:
((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock((of_reconfig_chain).rwsem);
lock((of_reconfig_chain).rwsem);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by drmgr/2273:
#0: (sb_writers#4){.+.+.+}, at: [<c0000000001cbe70>] .vfs_write+0xb0/0x1f8
#1: ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] .__blocking_notifier_call_chain+0x40/0x78
stack backtrace:
CPU: 17 PID: 2273 Comm: drmgr Tainted: G W 3.16.0+ #428
Call Trace:
[c0000000137e7000] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
[c0000000137e70b0] [c00000000083cd34] .dump_stack+0x7c/0x9c
[c0000000137e7130] [c0000000000b8afc] .__lock_acquire+0x128c/0x1c68
[c0000000137e7280] [c0000000000b9a4c] .lock_acquire+0xe8/0x104
[c0000000137e7350] [c00000000083588c] .down_read+0x4c/0x90
[c0000000137e73e0] [c000000000091890] .__blocking_notifier_call_chain+0x40/0x78
[c0000000137e7490] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48
[c0000000137e7520] [c000000000682a28] .of_reconfig_notify+0x34/0x5c
[c0000000137e75b0] [c000000000682a9c] .of_property_notify+0x4c/0x54
[c0000000137e7650] [c000000000682bf0] .of_remove_property+0x30/0xd4
[c0000000137e76f0] [c000000000052a44] .remove_ddw+0x144/0x168
[c0000000137e7790] [c000000000053204] .iommu_reconfig_notifier+0x30/0xe0
[c0000000137e7820] [c00000000009137c] .notifier_call_chain+0x6c/0xb4
[c0000000137e78c0] [c0000000000918ac] .__blocking_notifier_call_chain+0x5c/0x78
[c0000000137e7970] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48
[c0000000137e7a00] [c000000000682a28] .of_reconfig_notify+0x34/0x5c
[c0000000137e7a90] [c000000000682e14] .of_detach_node+0x44/0x1fc
[c0000000137e7b40] [c0000000000518e4] .ofdt_write+0x3ac/0x688
[c0000000137e7c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
[c0000000137e7cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
[c0000000137e7d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
[c0000000137e7e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gavin Shan authored
commit f1b3929c upstream. While running the command "drmgr -c phb -r -s 'PHB 528'", the following backtrace jumped out because the target device node isn't marked with OF_DETACHED by of_detach_node(), which is caused by an error returned from the memory hotplug related reconfig notifier when CONFIG_MEMORY_HOTREMOVE is disabled. The patch fixes it.
ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0
CPU: 14 PID: 2252 Comm: drmgr Tainted: G W 3.16.0+ #427
Call Trace:
[c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
[c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c
[c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0
[c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8
[c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78
[c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34
[c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70
[c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c
[c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688
[c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
[c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
[c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
[c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 85c1fafd upstream. On ppc64 we support 4K hash pte with 64K page size. That requires us to track the hash pte slot information on a per-4K basis. We do that by storing the slot details in the second half of the pte page. The pte bit _PAGE_COMBO is used to indicate whether the second half needs to be looked at while building the real_pte. We need to use a read memory barrier while doing that so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO check. On the store side we already do an lwsync in __hash_page_4K. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
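The read side described above, sketched with the field names used by the powerpc real_pte code of that era (treat the details as illustrative):

    rpte.pte = pte;
    rpte.hidx = 0;
    if (pte_val(pte) & _PAGE_COMBO) {
            /* pairs with the lwsync done when the 4K slot info was stored:
             * do not let the hidx load be reordered before the
             * _PAGE_COMBO check */
            smp_rmb();
            rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
    }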
-
Andrey Utkin authored
commit b00fc6ec upstream. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81631 Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vaidyanathan Srinivasan authored
commit 95707d85 upstream. Flags from the device tree need to be parsed with the endian-aware accessors so that the correct values are seen on little-endian kernels. Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Reviewed-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Felipe Balbi authored
commit 42ab0f39 upstream. The second range of this particular regulator starts at 1.60V, not at 1.55V as the code originally implied. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nikesh Oswal authored
commit 5b919f3e upstream. WM5110/8280 devices do not support bypass mode for LDO1 so remove the bypass callbacks registered with regulator core. Signed-off-by: Nikesh Oswal <nikesh@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Lindgren authored
commit daebabd5 upstream. Commit 43fef47f (mfd: twl4030-power: Add a configuration to turn off oscillator during off-idle) added support for configuring the PMIC to cut off resources during deeper idle states to save power. This however caused regression for n900 display power that needed the PMIC configuration to be disabled with commit d937678a (ARM: dts: Revert enabling of twl configuration for n900). Turns out the root cause of the problem is that we must use TWL4030_RESCONFIG_UNDEF instead of DEV_GRP_NULL to avoid disabling regulators that may have been enabled before the init function for twl4030-power.c runs. With TWL4030_RESCONFIG_UNDEF we let the regulator framework control the regulators like it should. Here we need to only configure the sys_clken and sys_off_mode triggers for the regulators that cannot be done by the regulator framework as it's not running at that point. This allows us to enable the PMIC configuration for n900. Fixes: 43fef47f (mfd: twl4030-power: Add a configuration to turn off oscillator during off-idle) Signed-off-by: Tony Lindgren <tony@atomide.com> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jeff Mahoney authored
commit 18139089 upstream. The rtsx_usb driver contains the table for the devices it supports but doesn't export it. As a result, no alias is generated and it doesn't get loaded automatically. Via https://bugzilla.novell.com/show_bug.cgi?id=890096 Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reported-by: Marcel Witte <wittemar@googlemail.com> Cc: Roger Tseng <rogerable@realtek.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
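The missing piece is the usual one-liner exporting the ID table; the device IDs below are placeholders, not necessarily the real rtsx_usb entries:

    #include <linux/module.h>
    #include <linux/usb.h>

    static const struct usb_device_id rtsx_usb_usb_ids[] = {
            { USB_DEVICE(0x0bda, 0x0129) },         /* placeholder VID/PID */
            { }
    };

    /* generates the modalias information that udev/modprobe use to
     * autoload the module when a matching device appears */
    MODULE_DEVICE_TABLE(usb, rtsx_usb_usb_ids);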
-
Michael Welling authored
commit 46de8ff8 upstream. single-ulpi-bypass is a flag used for older OMAP3 silicon. The flag, when set, can exercise code that improperly uses the OMAP_UHH_HOSTCONFIG_ULPI_BYPASS define to clear the corresponding bit. Instead it clears all of the other bits, disabling all of the ports in the process. Signed-off-by: Michael Welling <mwelling@emacinc.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
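Reduced to its essence, the bug pattern being described looks like this (register and flag usage illustrative):

    /* buggy: keeps only the bypass bit and wipes the per-port enable bits */
    reg &= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;

    /* intended: clear just the bypass bit and leave everything else alone */
    reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;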
-
Sasha Levin authored
commit 618fde87 upstream. The rarely-executed memory-allocation-failed callback path generates a WARN_ON_ONCE() when smp_call_function_single() succeeds. Presumably it's supposed to warn on failures. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@gentwo.org> Cc: Gilad Ben-Yossef <gilad@benyossef.com> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Tejun Heo <htejun@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
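A sketch of the corrected fallback path, simplified from what on_each_cpu_cond() does; the callback and argument variables come from its parameters:

    int cpu, ret;

    for_each_online_cpu(cpu) {
            if (cond_func(cpu, info)) {
                    ret = smp_call_function_single(cpu, func, info, wait);
                    /* 0 means success, so warn on a non-zero return; the
                     * old WARN_ON_ONCE(!ret) warned on success instead */
                    WARN_ON_ONCE(ret);
            }
    }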
-
Li Zhong authored
commit d0177639 upstream. It is possible for some platforms, such as powerpc, to set HPAGE_SHIFT to 0 to indicate huge pages not supported. When this is the case, hugetlbfs could be disabled during boot time: hugetlbfs: disabling because there are no supported hugepage sizes Then in dissolve_free_huge_pages(), order is kept at its maximum (64 for 64-bit), and the for loop below won't end: for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order) As suggested by Naoya, the fix below checks hugepages_supported() before calling dissolve_free_huge_pages(). [rientjes@google.com: no legitimate reason to call dissolve_free_huge_pages() when !hugepages_supported()] Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
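The guard described above, at the call site in the memory-offlining path (a sketch under those assumptions, not the literal diff):

    /* only try to dissolve free huge pages when huge pages are supported
     * at all; with HPAGE_SHIFT == 0 the pfn loop inside
     * dissolve_free_huge_pages() would never terminate */
    if (hugepages_supported())
            dissolve_free_huge_pages(start_pfn, end_pfn);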
-
Pranith Kumar authored
commit e04aca4a upstream. Fix build error as reported by Geert Uytterhoeven here: http://kisskb.ellerman.id.au/kisskb/buildresult/11607865/ The error happens when CONFIG_HAS_IOPORT_MAP=n, in which case the definitions of ioport_map()/ioport_unmap() are missing. Fix this build error by adding these prototypes. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
commit 2b462638 upstream. If we failed to copy from the structure, writing back the flags leaks 31 bits of kernel memory (the rest of the ir_flags field). In any case, if we cannot copy from/to the structure, why should we expect putting just the flags to work? Also make sure ocfs2_info_handle_freeinode() returns the right error code if the copy_to_user() fails. Fixes: ddee5cdb ('Ocfs2: Add new OCFS2_IOC_INFO ioctl for ocfs2 v8.') Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Joel Becker <jlbec@evilplan.org> Acked-by: Mark Fasheh <mfasheh@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
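A sketch of the safer request handling; structure and variable names are illustrative:

    struct ocfs2_info_request req;

    /* if the request cannot even be copied in, fail with -EFAULT instead
     * of writing the flags back, which would copy uninitialized kernel
     * stack bytes to userspace */
    if (copy_from_user(&req, user_req, sizeof(req)))
            return -EFAULT;

    /* ... fill in the reply ... */

    if (copy_to_user(user_req, &req, sizeof(req)))
            return -EFAULT;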
-
Jan Kara authored
commit 5838d444 upstream. Commit 85816794 ("fanotify: Fix use after free for permission events") introduced a double free issue for permission events which are pending in group's notification queue while group is being destroyed. These events are freed from fanotify_handle_event() but they are not removed from groups notification queue and thus they get freed again from fsnotify_flush_notify(). Fix the problem by removing permission events from notification queue before freeing them if we skip processing access response. Also expand comments in fanotify_release() to explain group shutdown in detail. Fixes: 85816794 Signed-off-by: Jan Kara <jack@suse.cz> Reported-by: Douglas Leeder <douglas.leeder@sophos.com> Tested-by: Douglas Leeder <douglas.leeder@sophos.com> Reported-by: Heinrich Schuchard <xypron.glpk@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Paris authored
commit 7d8b6c63 upstream. This is effectively a revert of 7b9a7ec5 plus fixing it a different way... We found, when trying to run an application from an application which had dropped privs, that the kernel does security checks on undefined capability bits. This was ESPECIALLY difficult to debug as those undefined bits are hidden from /proc/$PID/status. Consider a root application which drops all capabilities from ALL 4 capability sets. We assume, since the application is going to set eff/perm/inh from an array, that it will clear not only the defined caps less than CAP_LAST_CAP, but also the higher 28ish bits which are undefined future capabilities. The BSET gets cleared differently. Instead it is cleared one bit at a time. The problem here is that in security/commoncap.c::cap_task_prctl() we actually check the validity of a capability being read. So any task which attempts to 'read all things set in bset' followed by 'unset all things set in bset' will not even attempt to unset the undefined bits higher than CAP_LAST_CAP. So the 'parent' will look something like:
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffc000000000
All of this 'should' be fine, given that these are undefined bits that aren't supposed to have anything to do with permissions. But they do... So let's now consider a task which cleared the eff/perm/inh completely and cleared all of the valid caps in the bset (but not the invalid caps it couldn't read out of the kernel). We know that this is exactly what the libcap-ng library does and what the go capabilities library does. They both leave you in that above situation if you try to clear all of your capabilities from all 4 sets. If that root task calls execve() the child task will pick up all caps not blocked by the bset. The bset however does not block bits higher than CAP_LAST_CAP. So now the child task has bits in eff which are not in the parent. These are 'meaningless' undefined bits, but still bits which the parent doesn't have. The problem is now in cred_cap_issubset() (or any operation which does a subset test) as the child, while a subset for valid cap bits, is not a subset for invalid cap bits! So now, during commit creds, we mark the child as not dumpable, given it is 'more priv' than its parent. It also means the parent cannot ptrace the child, and other stupidity. The solution here:
1) stop hiding capability bits in status. This makes debugging easier!
2) stop giving any task undefined capability bits. It's simple: if you don't put those invalid bits in CAP_FULL_SET you won't get them in init and you won't get them in any other task either. This fixes the cap_issubset() tests and resulting fallout (which made the init task in a docker container untraceable, among other things).
3) mask out undefined bits when sys_capset() is called, as it might use ~0, ~0 to denote 'all capabilities' for backward/forward compatibility. This lets 'capsh --caps="all=eip" -- -c /bin/bash' run.
4) mask out undefined bits when we read a file capability off of disk, as again likely all bits are set in the xattr for forward/backward compatibility. This lets 'setcap all+pe /bin/bash; /bin/bash' run.
Signed-off-by: Eric Paris <eparis@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Andrew Vagin <avagin@openvz.org> Cc: Andrew G. Morgan <morgan@kernel.org> Cc: Serge E. Hallyn <serge.hallyn@canonical.com> Cc: Kees Cook <keescook@chromium.org> Cc: Steve Grubb <sgrubb@redhat.com> Cc: Dan Walsh <dwalsh@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
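Points 3 and 4 amount to masking along these lines; this is a sketch using existing capability helpers, not the literal patch, and the helper name is made up:

    #include <linux/capability.h>

    static inline kernel_cap_t drop_undefined_caps(kernel_cap_t caps)
    {
            /* CAP_FULL_SET only has bits up to CAP_LAST_CAP set, so the
             * intersection silently discards any undefined future bits a
             * legacy caller passed in as ~0 */
            return cap_intersect(caps, CAP_FULL_SET);
    }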
-
Stefan Berger authored
commit b49e1043 upstream. Properly clean the sysfs entries in the error path Reported-by: Dmitry Kasatkin <dmitry.kasatkin@gmail.com> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jason Gunthorpe authored
commit 8e54caf4 upstream. Some Atmel TPMs provide completely wrong timeouts from their TPM_CAP_PROP_TIS_TIMEOUT query. This patch detects that and returns new correct values via a DID/VID table in the TIS driver. Tested on ARM using an AT97SC3204T FW version 37.16 [PHuewe: without this fix these 'broken' Atmel TPMs won't function on older kernels] Signed-off-by: "Berg, Christopher" <Christopher.Berg@atmel.com> Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
-
Jarkko Sakkinen authored
commit 3e14d83e upstream. Regression in 41ab999c. A call to tpm_chip_put() is missing. This will cause the TPM device driver not to unload if tpm_get_random() is called. Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
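The shape of the fix, roughly; the inner helper name is hypothetical:

    struct tpm_chip *chip;
    int rc;

    chip = tpm_chip_find_get(chip_num);
    if (chip == NULL)
            return -ENODEV;

    rc = read_random_from_chip(chip, out, max);     /* hypothetical helper */

    /* the regression: this put was missing, so the reference taken by
     * tpm_chip_find_get() was never dropped and the driver module could
     * not be unloaded afterwards */
    tpm_chip_put(chip);
    return rc;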
-
Guenter Roeck authored
commit aee530cf upstream. spin_is_locked() always returns false for uniprocessor configurations in several architectures, so do not use WARN_ON with it. Use lockdep_assert_held() instead to also reduce overhead in non-debug kernels. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
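The before/after pattern, shown with a placeholder lock name:

    /* before: on uniprocessor kernels spin_is_locked() is always false,
     * so this warning fires even when the lock is held */
    WARN_ON(!spin_is_locked(&some_lock));

    /* after: verified only when lockdep is enabled, free otherwise, and
     * behaves correctly on UP builds as well */
    lockdep_assert_held(&some_lock);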
-
Alex Deucher authored
commit 0e16e4cf upstream. Older firmware didn't support the new nop packet. v2 (Andreas Boll): - Drop usage of packet3 for new firmware Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> (v1) Signed-off-by: Andreas Boll <andreas.boll.dev@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vaidyanathan Srinivasan authored
commit 6174bac8 upstream. Cpufreq depends on platform firmware to implement PStates. In case of platform firmware failure, cpufreq should not panic the host kernel with BUG_ON(); the less severe pr_warn() will suffice. Add a firmware_has_feature(FW_FEATURE_OPALv3) check to skip probing for the device tree on non-powernv platforms. Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Acked-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
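In outline, the probe path becomes something like the following; messages, return codes and the init helper name are illustrative:

    if (!firmware_has_feature(FW_FEATURE_OPALv3)) {
            pr_info("powernv-cpufreq: not an OPALv3 platform, skipping\n");
            return -ENODEV;
    }

    rc = init_powernv_pstates();            /* parses pstate info from the DT */
    if (rc) {
            /* bad or missing firmware data is not worth a BUG_ON() */
            pr_warn("powernv-cpufreq: pstate initialisation failed\n");
            return rc;
    }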
-
Christian Borntraeger authored
commit 36e7fdaa upstream. commit 4badad35 (locking/mutex: Disable optimistic spinning on some architectures) fenced spinning for architectures without proper cmpxchg. There is no need to disable mutex spinning on s390, though: the instructions CS, CSG and friends provide the proper guarantees. (We don't implement cmpxchg with locks). Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark A. Greer authored
commit 97ca0d6c upstream. Commit id 2bd16e3e (spi: omap2-mcspi: Do not configure the controller on each transfer unless needed) does its job too well so omap2_mcspi_setup_transfer() isn't called even when an SPI slave driver changes 'spi->mode'. The result is that the mode requested by the SPI slave driver never takes effect. Fix this by adding the 'mode' member to the omap2_mcspi_cs structure which holds the mode value that the hardware is configured for. When the SPI slave driver changes 'spi->mode' it will be different than the value of this new member and the SPI master driver will know that the hardware must be reconfigured (by calling omap2_mcspi_setup_transfer()). Fixes: 2bd16e3e (spi: omap2-mcspi: Do not configure the controller on each transfer unless needed) Signed-off-by: Mark A. Greer <mgreer@animalcreek.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
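The idea in sketch form; only the relevant parts of the structure and transfer path are shown:

    struct omap2_mcspi_cs {
            /* ... existing fields ... */
            u32 mode;       /* SPI mode the hardware was last programmed with */
    };

    /* per transfer: reconfigure only when the slave's requested mode differs
     * from what this chip select was last set up for */
    if (spi->mode != cs->mode)
            omap2_mcspi_setup_transfer(spi, t);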
-
Thomas Petazzoni authored
commit e06871cd upstream. In commit f814f9ac ("spi/orion: add device tree binding"), Device Tree support was added to the spi-orion driver. However, this commit reads the "cell-index" property, without taking into account the fact that DT properties are big-endian encoded. Since most of the platforms using spi-orion with DT have apparently not used anything but cell-index = <0>, the problem was not visible. But as soon as one starts using cell-index = <1>, the problem becomes clearly visible, as the master->bus_num gets a wrong value (actually it gets the value 0, which conflicts with the first bus that has cell-index = <0>). This commit fixes that by using of_property_read_u32() to read the property value, which does the appropriate endianness conversion when needed. Fixes: f814f9ac ("spi/orion: add device tree binding") Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
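The endian-safe way to read the property, per the commit message; the surrounding driver code is elided:

    u32 cell_index;

    /* of_get_property() returns the raw big-endian bytes from the DT, so
     * dereferencing them directly yields the wrong bus number on a
     * little-endian CPU; of_property_read_u32() does the byte swap */
    if (of_property_read_u32(pdev->dev.of_node, "cell-index", &cell_index))
            cell_index = 0;
    master->bus_num = cell_index;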
-
Joerg Roedel authored
commit 9b29d3c6 upstream. When multiple devices are detached in __detach_device, they are also removed from the domains dev_list. This makes it unsafe to use list_for_each_entry_safe, as the next pointer might also not be in the list anymore after __detach_device returns. So just repeatedly remove the first element of the list until it is empty. Tested-by: Marti Raudsepp <marti@juffo.org> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
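The replacement loop looks roughly like this; structure and field names follow the AMD IOMMU driver of that era but should be treated as illustrative:

    /* list_for_each_entry_safe() pre-fetches the next element, but
     * __detach_device() may unlink that element as well; instead keep
     * detaching the head entry until the list is empty */
    while (!list_empty(&domain->dev_list)) {
            struct iommu_dev_data *dev_data;

            dev_data = list_first_entry(&domain->dev_list,
                                        struct iommu_dev_data, list);
            __detach_device(dev_data);
    }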
-