- 06 Apr, 2016 40 commits
-
-
Paolo Bonzini authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 5f0b8199 upstream. KVM has special logic to handle pages with pte.u=1 and pte.w=0 when CR0.WP=1. These pages' SPTEs flip continuously between two states: U=1/W=0 (user and supervisor reads allowed, supervisor writes not allowed) and U=0/W=1 (supervisor reads and writes allowed, user writes not allowed). When SMEP is in effect, however, U=0 will enable kernel execution of this page. To avoid this, KVM also sets NX=1 in the shadow PTE together with U=0, making the two states U=1/W=0/NX=gpte.NX and U=0/W=1/NX=1. When the guest EFER has the NX bit cleared, the reserved bit check thinks that the latter state is invalid; teach it that the smep_andnot_wp case will also use the NX bit of SPTEs. Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Fixes: c258b62b Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Paolo Bonzini authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 844a5fe2 upstream. Yes, all of these are needed. :) This is admittedly a bit odd, but kvm-unit-tests access.flat tests this if you run it with "-cpu host" and of course ept=0. KVM runs the guest with CR0.WP=1, so it must handle supervisor writes specially when pte.u=1/pte.w=0/CR0.WP=0. Such writes cause a fault when U=1 and W=0 in the SPTE, but they must succeed because CR0.WP=0. When KVM gets the fault, it sets U=0 and W=1 in the shadow PTE and restarts execution. This will still cause a user write to fault, while supervisor writes will succeed. User reads will fault spuriously now, and KVM will then flip U and W again in the SPTE (U=1, W=0). User reads will be enabled and supervisor writes disabled, going back to the original situation where supervisor writes fault spuriously. When SMEP is in effect, however, U=0 will enable kernel execution of this page. To avoid this, KVM also sets NX=1 in the shadow PTE together with U=0. If the guest has not enabled NX, the result is a continuous stream of page faults due to the NX bit being reserved. The fix is to force EFER.NX=1 even if the CPU is taking care of the EFER switch. (All machines with SMEP have the CPU_LOAD_IA32_EFER vm-entry control, so they do not use user-return notifiers for EFER---if they did, EFER.NX would be forced to the same value as the host). There is another bug in the reserved bit check, which I've split to a separate patch for easier application to stable kernels. Cc: Andy Lutomirski <luto@amacapital.net> Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Fixes: f6577a5f Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
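To make the U/W/NX state flip described above concrete, here is a small hedged C sketch; the macro and helper names are invented for illustration and this is not KVM's actual shadow-MMU code:

    #include <stdbool.h>
    #include <stdint.h>

    #define SPTE_USER  (1ull << 2)   /* illustrative bit positions */
    #define SPTE_WRITE (1ull << 1)
    #define SPTE_NX    (1ull << 63)

    /*
     * Hypothetical helper: the two permission states KVM toggles between
     * for a guest PTE with pte.u=1/pte.w=0 while the guest thinks
     * CR0.WP=0 but KVM runs it with CR0.WP=1.
     */
    static uint64_t sketch_spte_state(bool supervisor_write_state, bool smep, bool gpte_nx)
    {
        uint64_t spte = 0;

        if (!supervisor_write_state) {
            /* U=1/W=0: user reads work, supervisor writes fault spuriously. */
            spte |= SPTE_USER;
            if (gpte_nx)
                spte |= SPTE_NX;    /* NX simply follows the guest PTE */
        } else {
            /* U=0/W=1: supervisor writes work, user accesses fault spuriously. */
            spte |= SPTE_WRITE;
            /*
             * With SMEP on, U=0 alone would make the page executable in
             * kernel mode, so NX is forced, which is why EFER.NX must be 1
             * in hardware even if the guest never enabled NX.
             */
            if (smep)
                spte |= SPTE_NX;
        }
        return spte;
    }

A faulting supervisor write switches from the first state to the second; a later faulting user read switches back, repeating indefinitely.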
-
Paul Mackerras authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit ccec4456 upstream. Thomas Huth discovered that a guest could cause a hard hang of a host CPU by setting the Instruction Authority Mask Register (IAMR) to a suitable value. It turns out that this is because when the code was added to context-switch the new special-purpose registers (SPRs) that were added in POWER8, we forgot to add code to ensure that they were restored to a sane value on guest exit. This adds code to set those registers where a bad value could compromise the execution of the host kernel to a suitable neutral value on guest exit. Fixes: b005255e Reported-by: Thomas Huth <thuth@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
David Hildenbrand authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 9522b37f upstream. With MACHINE_HAS_VX, we convert the floating point registers from the vector registers when storing the status. For other VCPUs, these are stored to vcpu->run->s.regs.vrs, but we are using current->thread.fpu.vxrs, which resolves to the currently loaded VCPU. So kvm_s390_store_status_unloaded() currently writes the wrong floating point registers (converted from the vector registers) when called from another VCPU on a z13. This is only the case for old user space not handling SIGP STORE STATUS and SIGP STOP AND STORE STATUS, but relying on the kernel implementation. All other calls come from the loaded VCPU via kvm_s390_store_status(). Fixes: 9abc2a08 (KVM: s390: fix memory overwrites when vx is disabled) Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Radim Krčmář authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 7099e2e1 upstream. Linux guests on Haswell (and also SandyBridge and Broadwell, at least) would crash if you decided to run a host command that uses PEBS, like

 perf record -e 'cpu/mem-stores/pp' -a

This happens because KVM is using VMX MSR switching to disable PEBS, but SDM [2015-12] 18.4.4.4 Re-configuring PEBS Facilities explains why it isn't safe: When software needs to reconfigure PEBS facilities, it should allow a quiescent period between stopping the prior event counting and setting up a new PEBS event. The quiescent period is to allow any latent residual PEBS records to complete its capture at their previously specified buffer address (provided by IA32_DS_AREA). There might not be a quiescent period after the MSR switch, so a CPU ends up using the host's MSR_IA32_DS_AREA to access an area in the guest's memory. (Or MSR switching is just buggy on some models.) The guest can learn something about the host this way: if the guest doesn't map the address pointed to by MSR_IA32_DS_AREA, it results in a #PF where we leak the host's MSR_IA32_DS_AREA through CR2. After that, a malicious guest can map and configure memory where MSR_IA32_DS_AREA is pointing and can therefore get an output from the host's tracing. This is not a critical leak, as the host must first initiate PEBS tracing, and I have not been able to get a record from more than one instruction before vmentry in vmx_vcpu_run() (that place has most registers already overwritten with the guest's). We could disable PEBS just a few instructions before vmentry, but disabling it earlier shouldn't affect host tracing too much. We also don't need to switch MSR_IA32_PEBS_ENABLE on VMENTRY, but that optimization isn't worth its code, IMO. (If you are implementing PEBS for guests, be sure to handle the case where both host and guest enable PEBS, because this patch doesn't.) Fixes: 26a4f3c0 ("perf/x86: disable PEBS on a guest entry.") Reported-by: Jiří Olša <jolsa@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
David Matlack authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 313f636d upstream. When growing halt-polling, there is no check that the poll time exceeds the limit. It's possible for vcpu->halt_poll_ns to grow once past halt_poll_ns, and stay there until a halt which takes longer than vcpu->halt_poll_ns. For example, booting a Linux guest with halt_poll_ns=11000:

 ... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 0 (shrink 10000)
 ... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 10000 (grow 0)
 ... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 20000 (grow 10000)

Signed-off-by: David Matlack <dmatlack@google.com> Fixes: aca6ff29 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
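A minimal sketch of the grow path with the missing upper bound applied (names follow the commit message, not necessarily the upstream diff):

    /* Hypothetical sketch: grow a vCPU's halt-polling window, but never
     * past the module-wide halt_poll_ns limit, the check that was missing. */
    static unsigned int grow_halt_poll_ns(unsigned int val,
                                          unsigned int grow_factor,
                                          unsigned int halt_poll_ns_limit)
    {
        if (val == 0)
            val = 10000;            /* start polling at 10us */
        else
            val *= grow_factor;     /* e.g. the halt_poll_ns_grow parameter */

        if (val > halt_poll_ns_limit)
            val = halt_poll_ns_limit;   /* clamp to the configured maximum */

        return val;
    }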
-
Krzysztof Hałasa authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 54c6e2dd upstream. pci_create_root_bus() passes a "parent" pointer to pci_bus_assign_domain_nr(). When CONFIG_PCI_DOMAINS_GENERIC is defined, pci_bus_assign_domain_nr() dereferences that pointer. Many callers of pci_create_root_bus() supply a NULL "parent" pointer, which leads to a NULL pointer dereference error. 7c674700 ("PCI: Move domain assignment from arm64 to generic code") moved the "parent" dereference from arm64 to generic code. Only arm64 used that code (because only arm64 defined CONFIG_PCI_DOMAINS_GENERIC), and it always supplied a valid "parent" pointer. Other arches supplied NULL "parent" pointers but didn't define CONFIG_PCI_DOMAINS_GENERIC, so they used a no-op version of pci_bus_assign_domain_nr(). 8c7d1474 ("ARM/PCI: Move to generic PCI domains") defined CONFIG_PCI_DOMAINS_GENERIC on ARM, and many ARM platforms use pci_common_init(), which supplies a NULL "parent" pointer. These platforms (cns3xxx, dove, footbridge, iop13xx, etc.) crash with a NULL pointer dereference like this while probing PCI:

 Unable to handle kernel NULL pointer dereference at virtual address 000000a4
 PC is at pci_bus_assign_domain_nr+0x10/0x84
 LR is at pci_create_root_bus+0x48/0x2e4
 Kernel panic - not syncing: Attempted to kill init!

[bhelgaas: changelog, add "Reported:" and "Fixes:" tags] Reported: http://forum.doozan.com/read.php?2,17868,22070,quote=1 Fixes: 8c7d1474 ("ARM/PCI: Move to generic PCI domains") Fixes: 7c674700 ("PCI: Move domain assignment from arm64 to generic code") Signed-off-by: Krzysztof Hałasa <khalasa@piap.pl> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
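A hedged sketch of the kind of guard involved (the helper name is invented; the real fix lives inside pci_bus_assign_domain_nr()):

    #include <linux/device.h>
    #include <linux/of.h>
    #include <linux/of_pci.h>

    /* Sketch only: tolerate a NULL parent device, as passed by
     * pci_common_init() on many ARM platforms, instead of blindly
     * dereferencing parent->of_node. */
    static int sketch_pci_domain_nr(struct device *parent)
    {
        struct device_node *node = parent ? parent->of_node : NULL;
        int domain = -1;

        if (node)
            domain = of_get_pci_domain_nr(node);

        return domain;  /* caller falls back to dynamic assignment if < 0 */
    }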
-
Lokesh Vutla authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 2e18f5a1 upstream. Introduce a DT property, ti,no-idle, that prevents an IP from idling at any point. This is to handle Errata i877, which says that the GMAC clocks must not be disabled. Acked-by: Roger Quadros <rogerq@ti.com> Tested-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Dave Gerlach <d-gerlach@ti.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Paul Walmsley <paul@pwsan.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
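A rough sketch of how platform code might honour such a property, assuming it simply tests for the DT node flag (the flag and function names here are illustrative, not the actual omap_hwmod implementation):

    #include <linux/of.h>
    #include <linux/types.h>

    /* Illustrative flag name; the real hwmod code defines its own flags. */
    #define SKETCH_HWMOD_NO_IDLE    (1 << 15)

    /* Sketch: mark an IP block as never-idle when its DT node carries
     * the "ti,no-idle" property. */
    static u32 sketch_parse_idle_flags(struct device_node *np)
    {
        u32 flags = 0;

        if (of_find_property(np, "ti,no-idle", NULL))
            flags |= SKETCH_HWMOD_NO_IDLE;

        return flags;
    }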
-
Mugunthan V N authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 0f514e69 upstream. Errata id: i877

Description:
------------
The RGMII 1000 Mbps Transmit timing is based on the output clock (rgmiin_txc) being driven relative to the rising edge of an internal clock, and the output control/data (rgmiin_txctl/txd) being driven relative to the falling edge of an internal clock source. If the internal clock source is allowed to be static low (i.e., disabled) for an extended period of time, then when the clock is actually enabled the timing delta between the rising edge and falling edge can change over the lifetime of the device. This can result in the device switching characteristics degrading over time, and eventually failing to meet the Data Manual Delay Time/Skew specs. To maintain RGMII 1000 Mbps IO timings, SW should minimize the duration that the Ethernet internal clock source is disabled. Note that the device reset state for the Ethernet clock is "disabled". Other RGMII modes (10 Mbps, 100 Mbps) are not affected.

Workaround:
-----------
If the SoC Ethernet interface(s) are used in RGMII mode at 1000 Mbps, SW should minimize the time the Ethernet internal clock source is disabled, to a maximum of 200 hours in a device life cycle. This is done by enabling the clock as early as possible in IPL (QNX) or SPL/u-boot (Linux/Android) by setting the register CM_GMAC_CLKSTCTRL[1:0]CLKTRCTRL = 0x2:SW_WKUP.

So, do not allow the cpsw clocks to be gated, by using the ti,no-idle property in the cpsw node, on the assumption that 1000 Mbps is being used all the time. If someone does not need 1000 Mbps and wants to gate the clocks to cpsw, this property needs to be deleted in their respective board files.

Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Signed-off-by: Paul Walmsley <paul@pwsan.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Thomas Petazzoni authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit d7d5a43c upstream. When the Crypto SRAM mappings were added to the Device Tree files describing the Armada XP boards in commit c466d997 ("ARM: mvebu: define crypto SRAM ranges for all armada-xp boards"), the fact that those mappings were overlapping with the PCIe memory aperture was overlooked. Due to this, we currently have for all Armada XP platforms a situation that looks like this:

Memory mapping on Armada XP boards with internal registers at 0xf1000000:

 - 0x00000000 -> 0xf0000000  3.75G  RAM
 - 0xf0000000 -> 0xf1000000  16M    NOR flashes (AXP GP / AXP DB)
 - 0xf1000000 -> 0xf1100000  1M     internal registers
 - 0xf8000000 -> 0xffe00000  126M   PCIe memory aperture
 - 0xf8100000 -> 0xf8110000  64KB   Crypto SRAM #0  => OVERLAPS WITH PCIE !
 - 0xf8110000 -> 0xf8120000  64KB   Crypto SRAM #1  => OVERLAPS WITH PCIE !
 - 0xffe00000 -> 0xfff00000  1M     PCIe I/O aperture
 - 0xfff00000 -> 0xffffffff  1M     BootROM

The overlap means that when PCIe devices are added, depending on their memory window needs, they might or might not be mapped into the physical address space. Indeed, they will not be mapped if the area allocated in the PCIe memory aperture by the PCI core overlaps with one of the Crypto SRAMs. Typically, an Intel IGB PCIe NIC that needs 8MB of PCIe memory will see its PCIe memory window allocated from 0xf8000000 for 8MB, which overlaps with the Crypto SRAM windows. Due to this, the PCIe window is not created, and any attempt to access the PCIe window makes the kernel explode:

 [ 3.302213] igb: Copyright (c) 2007-2014 Intel Corporation.
 [ 3.307841] pci 0000:00:09.0: enabling device (0140 -> 0143)
 [ 3.313539] mvebu_mbus: cannot add window '4:f8', conflicts with another window
 [ 3.320870] mvebu-pcie soc:pcie-controller: Could not create MBus window at [mem 0xf8000000-0xf87fffff]: -22
 [ 3.330811] Unhandled fault: external abort on non-linefetch (0x1008) at 0xf08c0018

This problem does not occur on Armada 370 boards, because we use the following memory mapping (for boards that have internal registers at 0xf1000000):

 - 0x00000000 -> 0xf0000000  3.75G  RAM
 - 0xf0000000 -> 0xf1000000  16M    NOR flashes (AXP GP / AXP DB)
 - 0xf1000000 -> 0xf1100000  1M     internal registers
 - 0xf1100000 -> 0xf1110000  64KB   Crypto SRAM #0  => OK !
 - 0xf8000000 -> 0xffe00000  126M   PCIe memory
 - 0xffe00000 -> 0xfff00000  1M     PCIe I/O
 - 0xfff00000 -> 0xffffffff  1M     BootROM

Obviously, the solution is to align the location of the Crypto SRAM mappings of Armada XP to be similar to the ones on Armada 370, i.e. have them between the "internal registers" area and the beginning of the PCIe aperture. However, we have a special case with the OpenBlocks AX3-4 platform, which has a 128 MB NOR flash. Currently, this NOR flash is mapped from 0xf0000000 to 0xf8000000. This is possible because on OpenBlocks AX3-4, the internal registers are not at 0xf1000000. And this explains why the Crypto SRAM mappings were not configured at the same place on Armada XP. Hence, the solution is two-fold:

 (1) Move the NOR flash mapping on Armada XP OpenBlocks AX3-4 from 0xf0000000 -> 0xf8000000 down to 0xe8000000 -> 0xf0000000. This frees the 0xf0000000 -> 0xf8000000 space.

 (2) Move the Crypto SRAM mappings on Armada XP to be similar to Armada 370 (except of course that Armada XP has two Crypto SRAMs and not one).

After this patch, the memory mapping on Armada XP boards with internal registers at 0xf1000000 is:

 - 0x00000000 -> 0xf0000000  3.75G  RAM
 - 0xf0000000 -> 0xf1000000  16M    NOR flashes (AXP GP / AXP DB)
 - 0xf1000000 -> 0xf1100000  1M     internal registers
 - 0xf1100000 -> 0xf1110000  64KB   Crypto SRAM #0
 - 0xf1110000 -> 0xf1120000  64KB   Crypto SRAM #1
 - 0xf8000000 -> 0xffe00000  126M   PCIe memory
 - 0xffe00000 -> 0xfff00000  1M     PCIe I/O
 - 0xfff00000 -> 0xffffffff  1M     BootROM

And the memory mapping for the special case of the OpenBlocks AX3-4 (internal registers at 0xd0000000, NOR of 128 MB):

 - 0x00000000 -> 0xc0000000  3G     RAM
 - 0xd0000000 -> 0xd1000000  1M     internal registers
 - 0xe8000000 -> 0xf0000000  128M   NOR flash
 - 0xf1100000 -> 0xf1110000  64KB   Crypto SRAM #0
 - 0xf1110000 -> 0xf1120000  64KB   Crypto SRAM #1
 - 0xf8000000 -> 0xffe00000  126M   PCIe memory
 - 0xffe00000 -> 0xfff00000  1M     PCIe I/O
 - 0xfff00000 -> 0xffffffff  1M     BootROM

Fixes: c466d997 ("ARM: mvebu: define crypto SRAM ranges for all armada-xp boards") Reported-by: Phil Sutter <phil@nwl.cc> Cc: Phil Sutter <phil@nwl.cc> Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Ard Biesheuvel authored
BugLink: http://bugs.launchpad.net/bugs/1558330 commit 36e5cd6b upstream. Commit dfd55ad8 ("arm64: vmemmap: use virtual projection of linear region") fixed an issue where the struct page array would overflow into the adjacent virtual memory region if system RAM was placed so high up in physical memory that its addresses were not representable in the build time configured virtual address size. However, the fix failed to take into account that the vmemmap region needs to be relatively aligned with respect to the sparsemem section size, so that a sequence of page structs corresponding with a sparsemem section in the linear region appears naturally aligned in the vmemmap region. So round up vmemmap to sparsemem section size. Since this essentially moves the projection of the linear region up in memory, also revert the reduction of the size of the vmemmap region. Fixes: dfd55ad8 ("arm64: vmemmap: use virtual projection of linear region") Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: David Daney <david.daney@cavium.com> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
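Conceptually, the change amounts to something like the following sketch (not necessarily the literal arm64 diff): project the linear region onto vmemmap from a pfn rounded down to the sparsemem section size, so the page structs of any one section stay naturally aligned inside the vmemmap region.

    #include <linux/mmzone.h>   /* SECTION_ALIGN_DOWN() */

    /* Sketch of the idea: compute the vmemmap base relative to a
     * section-aligned starting pfn rather than the raw memstart pfn. */
    static struct page *sketch_vmemmap_base(struct page *vmemmap_start,
                                            unsigned long memstart_pfn)
    {
        return vmemmap_start - SECTION_ALIGN_DOWN(memstart_pfn);
    }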
-
Michal Marek authored
The limbs are integers in the host endianness, so we can't simply iterate over the individual bytes. The current code happens to work on little-endian, because the order of the limbs in the MPI array is the same as the order of the bytes in each limb, but it breaks on big-endian. Fixes: 0f74fbf7 ("MPI: Fix mpi_read_buffer") Signed-off-by: Michal Marek <mmarek@suse.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> BugLink: http://bugs.launchpad.net/bugs/1557250 Signed-off-by: Andy Whitcroft <apw@canonical.com>
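A small standalone sketch of the endian-safe approach (not the kernel's actual MPI code): emit each limb most-significant byte first by shifting, instead of walking its bytes in memory, so the result is identical on little- and big-endian hosts.

    #include <stdint.h>
    #include <stddef.h>

    typedef unsigned long mpi_limb_t;   /* limbs are host-endian integers */

    /* Sketch: serialize one limb as big-endian bytes by shifting. */
    static void limb_to_be_bytes(mpi_limb_t limb, uint8_t *out)
    {
        int i;

        for (i = (int)sizeof(limb) - 1; i >= 0; i--) {
            out[i] = (uint8_t)(limb & 0xff);    /* lowest byte goes last */
            limb >>= 8;
        }
    }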
-
Tim Gardner authored
BugLink: http://bugs.launchpad.net/bugs/1557994 Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Gal Pressman authored
BugLink: http://bugs.launchpad.net/bugs/1557950 Calling mlx5e_set_coalesce while the interface is down will result in modifying CQs that don't exist. Fixes: f62b8bb8 ('net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality') Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry picked from linux-next commit 2fcb92fb) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
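A hedged sketch of the guard, with a simplified signature; the state flag and error code used by the real driver may differ:

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Sketch: refuse to modify CQ moderation unless the netdevice is up,
     * since the CQs only exist while the interface is open. */
    static int sketch_set_coalesce(bool netdev_opened,
                                   u32 rx_usecs, u32 rx_frames)
    {
        if (!netdev_opened)
            return -ENOSYS;     /* illustrative error code */

        /* ... program the moderation parameters on the live CQs ... */
        return 0;
    }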
-
Gal Pressman authored
BugLink: http://bugs.launchpad.net/bugs/1557950 If CQ moderation is not supported by the device, print a warning on netdevice load, and return error when trying to modify/query cq moderation via ethtool. Fixes: f62b8bb8 ('net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality') Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry picked from linux-next commit 7524a5d8) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Adrian Hunter authored
BugLink: http://bugs.launchpad.net/bugs/1520454 A card can be removed while it is runtime suspended. Do not print an error message. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> (back ported from commit 520322d9) Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Conflicts: drivers/mmc/core/mmc.c drivers/mmc/core/sd.c
-
Fu, Zhonghui authored
BugLink: http://bugs.launchpad.net/bugs/1520454 Now, PM core supports asynchronous suspend/resume mode for devices during system suspend/resume, and the power state transition of one device may be completed in separate kernel thread. PM core ensures all power state transition dependency between devices. This patch enables MMC/SD/SDIO card and SDIO function devices to suspend/resume asynchronously. This will take advantage of multicore and improve system suspend/resume speed. After applying this patch and enabling all SDIO function's child devices to suspend/resume asynchronously on ASUS T100TA, the system suspend-to-idle time is reduced from 1645ms to 1108ms, and the system resume time is reduced from 940ms to 918ms. Signed-off-by: Zhonghui Fu <zhonghui.fu@linux.intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> (cherry picked from commit ec076cd2) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
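The mechanism itself is a one-line opt-in per device; a generic sketch is shown below (the MMC core applies this to the card and SDIO function devices at registration time):

    #include <linux/device.h>
    #include <linux/pm.h>

    /* Sketch: opt a device into asynchronous suspend/resume. The PM core
     * then handles its power transitions in a separate kernel thread while
     * still honouring parent/child ordering. */
    static void sketch_enable_async_pm(struct device *dev)
    {
        device_enable_async_suspend(dev);
    }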
-
Adrian Hunter authored
BugLink: http://bugs.launchpad.net/bugs/1520454 The driver may not be able to set the power correctly but that is not a reason to BUG(). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Venu Byravarasu <vbyravarasu@nvidia.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> (cherry picked from commit 9d5de93f) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
BugLink: http://bugs.launchpad.net/bugs/1557689 Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
BugLink: http://bugs.launchpad.net/bugs/1557690 Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
Ignore: yes Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Andreas Schwab authored
Since binutils 2.26 BFD is doing suffix merging on STRTAB sections. But dedotify modifies the symbol names in place, which can also modify unrelated symbols with a name that matches a suffix of a dotted name. To remove the leading dot of a symbol name we can just increment the pointer into the STRTAB section instead. Backport to all stables to avoid breakage when people update their binutils - mpe. Cc: stable@vger.kernel.org Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> (backported from commit f15838e9) BugLink: http://bugs.launchpad.net/bugs/1557130 Signed-off-by: Andy Whitcroft <apw@canonical.com>
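A simplified sketch of the idea, based on the description above rather than the exact powerpc diff: instead of memmove()ing the name one byte left inside the shared, possibly suffix-merged string table, point the symbol one byte further into the table, past the leading dot.

    #include <linux/elf.h>

    /* Sketch only (the real code lives in arch/powerpc/kernel/module_64.c). */
    static void sketch_dedotify(Elf64_Sym *syms, unsigned int numsyms, char *strtab)
    {
        unsigned int i;

        for (i = 1; i < numsyms; i++) {
            if (syms[i].st_shndx == SHN_UNDEF &&
                strtab[syms[i].st_name] == '.')
                syms[i].st_name++;  /* skip the dot without touching strtab */
        }
    }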
-
Paul Dagnelie authored
BugLink: http://bugs.launchpad.net/bugs/1557151 6370 ZFS send fails to transmit some holes Reviewed by: Matthew Ahrens <mahrens@delphix.com> Reviewed by: Chris Williamson <chris.williamson@delphix.com> Reviewed by: Stefan Ring <stefanrin@gmail.com> Reviewed by: Steven Burgess <sburgess@datto.com> Reviewed by: Arne Jansen <sensille@gmx.net> Approved by: Robert Mustacchi <rm@joyent.com> References: https://www.illumos.org/issues/6370 https://github.com/illumos/illumos-gate/commit/286ef71 In certain circumstances, "zfs send -i" (incremental send) can produce a stream which will result in incorrect sparse file contents on the target. The problem manifests as regions of the received file that should be sparse (and read as zero-filled) actually containing data from a file that was deleted (and which happened to share this file's object ID). Note: this can happen only with filesystems (not zvols, because they do not free (and thus cannot reuse) object IDs). Note: this can happen only if, since the incremental source (FromSnap), a file was deleted and then another file was created, and the new file is sparse (i.e. has areas that were never written to and should be implicitly zero-filled). We suspect that this was introduced by 4370 (applies only if the hole_birth feature is enabled), and made worse by 5243 (applies if the hole_birth feature is disabled, and we never send any holes). The bug is caused by the hole birth feature. When an object is deleted and replaced, all the holes in the object have birth time zero. However, zfs send cannot tell that the holes are new since the file was replaced, so it doesn't send them in an incremental. As a result, you can end up with invalid data when you receive incremental send streams. As a short-term fix, we can always send holes with birth time 0 (unless it's a zvol or a dataset where we can guarantee that no objects have been reused). Ported-by: Steven Burgess <sburgess@datto.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Closes #4369 Closes #4050 cherry-picked from c352ec27d5c5ecea8f6af066258dfd106085eaac https://github.com/zfsonlinux/zfs.git Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Aviv Greenberg authored
BugLink: http://bugs.launchpad.net/bugs/1557138 Add support for Intel DS4 depth camera in uvc driver. This includes adding new uvc GUIDs for the new pixel formats, adding new V4L pixel format definition to user api headers, and updating the uvc driver GUID-to-4cc tables with the new formats. Change-Id: If240d95a7d4edc8dcc3e02d58cd8267a6bbf6fcb Tested-by: Greenberg, Aviv D <aviv.d.greenberg@intel.com> Signed-off-by: Aviv Greenberg <aviv.d.greenberg@intel.com> Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> (cherry picked from commit 120c41d3) Signed-off-by: Leann Ogasawara <leann.ogasawara@canonical.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Gavin Guo authored
BugLink: http://bugs.launchpad.net/bugs/1527643 The following message can be observed on the Ubuntu v3.13.0-65 kernel with KASan backported:

 ==================================================================
 BUG: KASan: use after free in task_numa_find_cpu+0x64c/0x890 at addr ffff880dd393ecd8
 Read of size 8 by task qemu-system-x86/3998900
 =============================================================================
 BUG kmalloc-128 (Tainted: G B ): kasan: bad access detected
 -----------------------------------------------------------------------------
 INFO: Allocated in task_numa_fault+0xc1b/0xed0 age=41980 cpu=18 pid=3998890
  __slab_alloc+0x4f8/0x560
  __kmalloc+0x1eb/0x280
  task_numa_fault+0xc1b/0xed0
  do_numa_page+0x192/0x200
  handle_mm_fault+0x808/0x1160
  __do_page_fault+0x218/0x750
  do_page_fault+0x1a/0x70
  page_fault+0x28/0x30
  SyS_poll+0x66/0x1a0
  system_call_fastpath+0x1a/0x1f
 INFO: Freed in task_numa_free+0x1d2/0x200 age=62 cpu=18 pid=0
  __slab_free+0x2ab/0x3f0
  kfree+0x161/0x170
  task_numa_free+0x1d2/0x200
  finish_task_switch+0x1d2/0x210
  __schedule+0x5d4/0xc60
  schedule_preempt_disabled+0x40/0xc0
  cpu_startup_entry+0x2da/0x340
  start_secondary+0x28f/0x360
 Call Trace:
  [<ffffffff81a6ce35>] dump_stack+0x45/0x56
  [<ffffffff81244aed>] print_trailer+0xfd/0x170
  [<ffffffff8124ac36>] object_err+0x36/0x40
  [<ffffffff8124cbf9>] kasan_report_error+0x1e9/0x3a0
  [<ffffffff8124d260>] kasan_report+0x40/0x50
  [<ffffffff810dda7c>] ? task_numa_find_cpu+0x64c/0x890
  [<ffffffff8124bee9>] __asan_load8+0x69/0xa0
  [<ffffffff814f5c38>] ? find_next_bit+0xd8/0x120
  [<ffffffff810dda7c>] task_numa_find_cpu+0x64c/0x890
  [<ffffffff810de16c>] task_numa_migrate+0x4ac/0x7b0
  [<ffffffff810de523>] numa_migrate_preferred+0xb3/0xc0
  [<ffffffff810e0b88>] task_numa_fault+0xb88/0xed0
  [<ffffffff8120ef02>] do_numa_page+0x192/0x200
  [<ffffffff81211038>] handle_mm_fault+0x808/0x1160
  [<ffffffff810d7dbd>] ? sched_clock_cpu+0x10d/0x160
  [<ffffffff81068c52>] ? native_load_tls+0x82/0xa0
  [<ffffffff81a7bd68>] __do_page_fault+0x218/0x750
  [<ffffffff810c2186>] ? hrtimer_try_to_cancel+0x76/0x160
  [<ffffffff81a6f5e7>] ? schedule_hrtimeout_range_clock.part.24+0xf7/0x1c0
  [<ffffffff81a7c2ba>] do_page_fault+0x1a/0x70
  [<ffffffff81a772e8>] page_fault+0x28/0x30
  [<ffffffff8128cbd4>] ? do_sys_poll+0x1c4/0x6d0
  [<ffffffff810e64f6>] ? enqueue_task_fair+0x4b6/0xaa0
  [<ffffffff810233c9>] ? sched_clock+0x9/0x10
  [<ffffffff810cf70a>] ? resched_task+0x7a/0xc0
  [<ffffffff810d0663>] ? check_preempt_curr+0xb3/0x130
  [<ffffffff8128b5c0>] ? poll_select_copy_remaining+0x170/0x170
  [<ffffffff810d3bc0>] ? wake_up_state+0x10/0x20
  [<ffffffff8112a28f>] ? drop_futex_key_refs.isra.14+0x1f/0x90
  [<ffffffff8112d40e>] ? futex_requeue+0x3de/0xba0
  [<ffffffff8112e49e>] ? do_futex+0xbe/0x8f0
  [<ffffffff81022c89>] ? read_tsc+0x9/0x20
  [<ffffffff8111bd9d>] ? ktime_get_ts+0x12d/0x170
  [<ffffffff8108f699>] ? timespec_add_safe+0x59/0xe0
  [<ffffffff8128d1f6>] SyS_poll+0x66/0x1a0
  [<ffffffff81a830dd>] system_call_fastpath+0x1a/0x1f

As commit 1effd9f1 ("sched/numa: Fix unsafe get_task_struct() in task_numa_assign()") points out, the rcu_read_lock() cannot protect the task_struct from being freed in the finish_task_switch(). And the bug happens during the calculation of imp, which requires access to p->numa_faults, which can be freed in the following path:

 do_exit()
   current->flags |= PF_EXITING;
   release_task()
     ~~delayed_put_task_struct()~~
   schedule()
   ...
   ...
 rq->curr = next;
   context_switch()
     finish_task_switch()
       put_task_struct()
         __put_task_struct()
           task_numa_free()

The fix here is to get_task_struct() early, before dst_rq->lock is released, to protect the calculation process, and also to put_task_struct() at the corresponding point if the dst_rq->curr ultimately cannot be assigned. Additional credit to Liang Chen who helped fix the error logic and add the put_task_struct() to the place it was missed. Signed-off-by: Gavin Guo <gavin.guo@canonical.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: jay.vosburgh@canonical.com Cc: liang.chen@canonical.com Link: http://lkml.kernel.org/r/1453264618-17645-1-git-send-email-gavin.guo@canonical.com Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 1dff76b9) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
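A hedged sketch of the locking/refcount pattern the message describes (simplified signature; it relies on kernel/sched internals such as cpu_rq() and rq->lock, and is illustrative rather than the upstream diff):

    #include <linux/sched.h>

    /* Sketch: pin dst_rq->curr while dst_rq->lock is held, so its
     * numa_faults data cannot be freed while "imp" is computed. The
     * caller must put_task_struct() the returned task if it does not
     * end up being assigned as the best candidate. */
    static struct task_struct *sketch_pin_dst_curr(int dst_cpu)
    {
        struct rq *dst_rq = cpu_rq(dst_cpu);
        struct task_struct *cur;

        raw_spin_lock_irq(&dst_rq->lock);
        cur = dst_rq->curr;
        if (cur && ((cur->flags & PF_EXITING) || is_idle_task(cur)))
            cur = NULL;
        if (cur)
            get_task_struct(cur);   /* take the reference under the lock */
        raw_spin_unlock_irq(&dst_rq->lock);

        return cur;
    }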
-
Konstantin Khlebnikov authored
Overlayfs must update uid/gid after chown, otherwise functions like inode_owner_or_capable() will check the user against a stale uid. Caught by xfstests generic/087, which chowns a file and calls utimes. Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com> Signed-off-by: Miklos Szeredi <miklos@szeredi.hu> Cc: <stable@vger.kernel.org> (backported from commit b81de061) BugLink: http://bugs.launchpad.net/bugs/1555997 Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
Ignore: yes Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Tim Gardner authored
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1556141 The fork of a process with four page table levels is broken since git commit 6252d702 "[S390] dynamic page tables." All new mm contexts are created with three page table levels and an asce limit of 4TB. If the parent has four levels dup_mmap will add vmas to the new context which are outside of the asce limit. The subsequent call to copy_page_range will walk the three level page table structure of the new process with non-zero pgd and pud indexes. This leads to memory clobbers as the pgd_index *and* the pud_index is added to the mm->pgd pointer without a pgd_deref in between. The init_new_context() function is selecting the number of page table levels for a new context. The function is used by mm_init() which in turn is called by dup_mm() and mm_alloc(). These two are used by fork() and exec(). The init_new_context() function can distinguish the two cases by looking at mm->context.asce_limit, for fork() the mm struct has been copied and the number of page table levels may not change. For exec() the mm_alloc() function set the new mm structure to zero, in this case a three-level page table is created as the temporary stack space is located at STACK_TOP_MAX = 4TB. This fixes CVE-2016-2143. Reported-by: Marcin Kościelnicki <koriakin@0x04.net> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: stable@vger.kernel.org Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> (cherry picked from commit 3446c13b git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
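A sketch of the fork-versus-exec distinction described above (not the literal s390 code; mm->context.asce_limit and STACK_TOP_MAX are s390-specific):

    #include <linux/sched.h>
    #include <linux/mm_types.h>

    /* Sketch: a copied mm (fork) already carries the parent's asce_limit
     * and must keep its page-table depth; a freshly zeroed mm (exec via
     * mm_alloc()) starts with a three-level table good for up to 4TB. */
    static int sketch_init_new_context(struct task_struct *tsk, struct mm_struct *mm)
    {
        if (mm->context.asce_limit == 0) {
            /* exec(): brand-new mm, three levels are enough. */
            mm->context.asce_limit = STACK_TOP_MAX;
        }
        /* fork(): keep whatever dup_mm() copied, including a possible
         * upgrade to four page-table levels. */
        return 0;
    }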
-
sixiao@microsoft.com authored
BugLink: http://bugs.launchpad.net/bugs/1556037

1. Add the NETIF_F_TSO6 feature flag;
2. Add NETIF_F_HW_CSUM; NETIF_F_IPV6_CSUM and NETIF_F_IP_CSUM are being deprecated;
3. Clean up the coding style of the flag assignment by using a macro.

Signed-off-by: Simon Xiao <sixiao@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net> (back ported from linux-next commit a060679c) Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Conflicts: drivers/net/hyperv/netvsc_drv.c
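A sketch of the macro-based style (the exact flag set and macro name in the upstream patch may differ):

    #include <linux/netdevice.h>

    /* Illustrative macro; collects the hardware-offload feature flags. */
    #define SKETCH_NETVSC_HW_FEATURES (NETIF_F_RXCSUM | \
                                       NETIF_F_SG     | \
                                       NETIF_F_TSO    | \
                                       NETIF_F_TSO6   | \
                                       NETIF_F_HW_CSUM)

    static void sketch_set_netdev_features(struct net_device *net)
    {
        net->hw_features = SKETCH_NETVSC_HW_FEATURES;
        net->features = NETIF_F_HW_VLAN_CTAG_TX | SKETCH_NETVSC_HW_FEATURES;
    }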
-
Vitaly Kuznetsov authored
BugLink: http://bugs.launchpad.net/bugs/1556037 Recent changes to 'struct flow_keys' (e.g. commit d34af823 ("net: Add VLAN ID to flow_keys")) introduced a performance regression in the netvsc driver. The problem, however, is not the above-mentioned commit but the fact that the netvsc_set_hash() function made some assumptions about the struct flow_keys data layout, and this is wrong. Get rid of netvsc_set_hash() by switching to skb_get_hash(). This change will also imply switching to the Jenkins hash from the currently used Toeplitz, but it seems there is no good excuse for Toeplitz to stay. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry picked from linux-next commit 757647e1) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
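A sketch of queue selection built on skb_get_hash(); this is simplified, and the real driver maps the hash onto its own channel count:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: let the core flow dissector compute (and cache) the skb
     * hash instead of driver-private Toeplitz code over struct flow_keys. */
    static u16 sketch_select_queue(struct net_device *ndev, struct sk_buff *skb)
    {
        u32 hash = skb_get_hash(skb);   /* Jenkins hash via flow dissector */

        return (u16)(hash % ndev->real_num_tx_queues);
    }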
-
Tim Gardner authored
UBUNTU: SAUCE: (noup) megaraid_sas: Don't issue kill adapter for MFI controllers in case of PD list DCMD failure BugLink: http://bugs.launchpad.net/bugs/1552903 http://marc.info/?l=linux-scsi&m=145760492231010&w=2 There are a few MFI adapters which do not support MR_DCMD_PD_LIST_QUERY, so if an MFI adapter fails this DCMD it should not be considered FATAL, and the driver should not issue a kill adapter. Instead, set the per-controller instance variable pd_list_not_supported, so that the same variable can be used inside the slave_alloc and slave_configure functions to allow the firmware scan. Killing the adapter because of a DCMD failure, when the DCMD is not supported, causes the driver's probe to fail. This issue was introduced by the commit below, which added MFI IO timeout handling: 6d40afbc ("megaraid_sas: MFI IO timeout handling"). Killing the adapter in case of this DCMD failure should be limited to Fusion adapters only. The per-controller instance variable allow_fw_scan is removed, as pd_list_not_supported better reflects its purpose. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Libin Yang authored
BugLink: http://bugs.launchpad.net/bugs/1556002 This patch adds codec ID (0x8086280b) for Kabylake display codec and apply the hsw fix-ups to Kabylake. Signed-off-by: Libin Yang <libin.yang@linux.intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> (cherry picked from commit 91815d8a) Signed-off-by: Timo Aaltonen <timo.aaltonen@canonical.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Shilpasri G Bhat authored
BugLink: http://bugs.launchpad.net/bugs/1555765 Unregister the notifiers if cpufreq_driver_register() fails in powernv_cpufreq_init(). Re-arrange the unregistration and cleanup routines in powernv_cpufreq_exit() to free all the resources after the driver has unregistered. Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
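A sketch of the corrected error path; the notifier names and the exact set of notifiers are illustrative, not a copy of the driver:

    #include <linux/cpufreq.h>
    #include <linux/init.h>
    #include <linux/notifier.h>
    #include <linux/reboot.h>
    #include <asm/opal.h>

    static struct notifier_block sketch_reboot_nb;      /* .notifier_call set elsewhere */
    static struct notifier_block sketch_throttle_nb;
    static struct cpufreq_driver sketch_cpufreq_driver; /* stands in for the real driver */

    /* Sketch: undo the notifier registrations if the cpufreq driver
     * itself fails to register, instead of leaving them dangling. */
    static int __init sketch_powernv_cpufreq_init(void)
    {
        int rc;

        register_reboot_notifier(&sketch_reboot_nb);
        opal_message_notifier_register(OPAL_MSG_OCC, &sketch_throttle_nb);

        rc = cpufreq_register_driver(&sketch_cpufreq_driver);
        if (rc) {
            unregister_reboot_notifier(&sketch_reboot_nb);
            opal_message_notifier_unregister(OPAL_MSG_OCC, &sketch_throttle_nb);
        }
        return rc;
    }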
-
Shilpasri G Bhat authored
BugLink: http://bugs.launchpad.net/bugs/1555765 Currently we use a printk message to notify the throttle event. But this can flood the console if the cpu is throttled frequently. So replace the printk with a tracepoint to notify the throttle event. Also, events like throttling below nominal frequency and OCC_RESET are reduced to pr_warn/pr_warn_once, as pointed out by MFG, so as not to mark them as critical messages. This patch adds 'throttle_reason' to struct chip to store the throttle reason. Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> (cherry picked from linux-next commit c89f2682) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Shilpasri G Bhat authored
BugLink: http://bugs.launchpad.net/bugs/1555765 This patch adds the powernv_throttle tracepoint to trace the CPU frequency throttling event, which is used by the powernv-cpufreq driver in POWER8. Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> (cherry picked from linux-next commit 0306e481) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
-
Shilpasri G Bhat authored
BugLink: http://bugs.launchpad.net/bugs/1555765 cpu_to_chip_id() does a DT walk to find out the chip id, taking a contended device tree lock. This adds unnecessary overhead in a hot path. So instead of calling cpu_to_chip_id() every time, cache the chip ids for all cores in the array 'core_to_chip_map' and use it in the hot path. Reported-by: Anton Blanchard <anton@samba.org> Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> (cherry picked from linux-next commit 96c4726f) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
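A sketch of caching the chip ids once at driver init; the array and variable names are illustrative, and it assumes the threads of a core are numbered contiguously:

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <asm/cputhreads.h>     /* threads_per_core */
    #include <asm/smp.h>            /* cpu_to_chip_id() */

    static unsigned int *sketch_core_to_chip_map;

    /* Sketch: resolve each core's chip id once, so the hot path can index
     * an array instead of walking the device tree under a lock. */
    static int sketch_init_chip_map(void)
    {
        unsigned int cpu, i = 0;
        unsigned int ncores = cpumask_weight(cpu_possible_mask) / threads_per_core;

        sketch_core_to_chip_map = kcalloc(ncores, sizeof(unsigned int), GFP_KERNEL);
        if (!sketch_core_to_chip_map)
            return -ENOMEM;

        for_each_possible_cpu(cpu) {
            if (cpu % threads_per_core == 0)
                sketch_core_to_chip_map[i++] = cpu_to_chip_id(cpu);
        }
        return 0;
    }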
-
Shilpasri G Bhat authored
BugLink: http://bugs.launchpad.net/bugs/1555765 In the kworker thread powernv_cpufreq_work_fn(), we can end up sending an IPI to a cpu that is going offline. This is a rare corner case, which is fixed using {get/put}_online_cpus(). Along with this fix, this patch also changes the code to do the cpumask_{clear/and} operations in one shot. Suggested-by: Shreyas B Prabhu <shreyas@linux.vnet.ibm.com> Suggested-by: Gautham R Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> (cherry picked from linux-next commit 6d167a44) Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
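A sketch of the hotplug-safe IPI pattern (the function and mask names are illustrative):

    #include <linux/cpu.h>
    #include <linux/cpumask.h>
    #include <linux/smp.h>

    /* Sketch: hold off CPU hotplug while picking a target CPU from the
     * mask and sending it an IPI, so the chosen CPU cannot go offline
     * in between. */
    static void sketch_call_on_chip(const struct cpumask *chip_mask,
                                    smp_call_func_t fn)
    {
        get_online_cpus();
        smp_call_function_any(chip_mask, fn, NULL, 0);
        put_online_cpus();
    }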
-