- 10 Oct, 2014 40 commits
-
Lee, Chun-Yi authored
Comparing the source code of rtc-lib.c::rtc_year_days() with efirtc.c::rtc_year_days() shows that the code in rtc-efi decreases the value of day twice when computing year days. rtc-lib.c::rtc_year_days() already decrements the day and returns the year days in the range 0 to 365. Signed-off-by:
Lee, Chun-Yi <jlee@suse.com> Cc: Alessandro Zummo <a.zummo@towertech.it> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 809d9627) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
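A minimal sketch of the semantics at stake (the table and helper mirror what rtc-lib.c does; written here as self-contained C for illustration):

    /* day is the 1-based day of month, month is 0-based; the result is
     * already 0-based (0..365), so callers must not subtract one again. */
    static const int ydays[2][12] = {
        { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 },
        { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335 },
    };

    static int is_leap_year(unsigned int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    int rtc_year_days(unsigned int day, unsigned int month, unsigned int year)
    {
        return ydays[is_leap_year(year)][month] + day - 1;
    }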
-
Benjamin Tissoires authored
This fix (not very clean though) should fix the long-standing USB3 issue that was spotted last year. The rationale has been given by Hans de Goede:

----
I think the most likely cause for this is a firmware bug in the unifying receiver, likely a race condition. The most prominent difference between having a USB-2 device plugged into an EHCI (so USB-2 only) port versus an XHCI port will be inter-packet timing. Specifically, if you send packets (i.e. hid reports) one at a time, then with the EHCI controller there will be a significant pause between them, whereas with XHCI they will be very close together in time. The reason for this is the difference in EHCI / XHCI controller OS <-> driver interfaces.

For non-periodic endpoints (control, bulk) the EHCI uses a circular linked list of commands in DMA memory, which it follows to execute commands; if the list is empty, it will go into an idle state and re-check periodically. The XHCI uses a ring of commands per endpoint, and if the OS places anything new on the ring it will do an ioport write, waking up the XHCI and making it send the new packet immediately.

For periodic transfers (isoc, interrupt) the delay between packets when sending one at a time (rather than queuing them up) will be even larger, because they need to be inserted into the EHCI schedule 2 ms in the future, so the OS driver can be sure that the EHCI does not try to start executing the time slot in question before the insertion has completed.

So a possible fix may be to insert a delay between packets being sent to the receiver.
----

I tested this on a buggy Haswell USB 3.0 motherboard, and I always get the notification after adding the msleep. Signed-off-by:
Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by:
Jiri Kosina <jkosina@suse.cz> (cherry picked from commit 42c22dbf) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
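A hedged sketch of the workaround's shape; the 10 ms value and the use of today's generic hid_hw_raw_request() helper are illustrative, not necessarily what the patch used:

    #include <linux/delay.h>
    #include <linux/hid.h>

    /* Send a batch of output reports with a pause between them so an
     * XHCI host does not deliver them back-to-back to the receiver. */
    static void send_reports_with_gap(struct hid_device *hdev,
                                      u8 **bufs, size_t *lens, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            hid_hw_raw_request(hdev, bufs[i][0], bufs[i], lens[i],
                               HID_OUTPUT_REPORT, HID_REQ_SET_REPORT);
            msleep(10); /* give the receiver firmware time between packets */
        }
    }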
-
Jiri Kosina authored
The Acer Aspire needs to be added to the nomux blacklist, otherwise the touchpad misbehaves rather randomly. Signed-off-by:
Jiri Kosina <jkosina@suse.cz> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> (cherry picked from commit 8c947e20) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Laurent Dufour authored
Numerical values stored in the device tree are encoded in Big Endian and should be byte-swapped when running in Little Endian. The RPA hotplug module should convert those values as well. Note that in rpaphp_get_drc_props(), the comparison between indexes[i+1] and *index is done using the BE values (whatever the current endianness is). This doesn't matter since we are checking for equality here; this way only the returned value is byte-swapped. RPA also makes RTAS calls, which imply BE values. According to the patch done in RTAS (http://patchwork.ozlabs.org/patch/336865), no additional conversion is required in RPA. Signed-off-by:
Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by:
Bjorn Helgaas <bhelgaas@google.com> (cherry picked from commit 761ce533) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
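A small sketch of the conversion rule the commit applies, using the kernel's standard helpers; the function names are illustrative:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Device-tree cells are big-endian; convert before arithmetic or
     * comparisons against CPU-order values. */
    static u32 dt_cell_to_cpu(__be32 cell)
    {
        return be32_to_cpu(cell); /* no-op on BE hosts, swap on LE */
    }

    /* Pure equality tests need no conversion: two values are equal in
     * BE order iff they are equal in CPU order. */
    static bool dt_cells_equal(__be32 a, __be32 b)
    {
        return a == b;
    }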
-
Maurizio Lombardi authored
In case of error, bnx2fc_allocate_hash_table() didn't free all the memory it had allocated. Signed-off-by:
Maurizio Lombardi <mlombard@redhat.com> Acked-by:
Eddie Wai <eddie.wai@broadcom.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> (cherry picked from commit fdbcbcab) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
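A generic sketch of the unwind pattern such a fix typically follows (the real function's layout differs); assuming a simple array of segment allocations:

    #include <linux/slab.h>

    static int allocate_segments(void **segs, int nsegs, size_t seg_size)
    {
        int i;

        for (i = 0; i < nsegs; i++) {
            segs[i] = kzalloc(seg_size, GFP_KERNEL);
            if (!segs[i])
                goto free_allocated;
        }
        return 0;

    free_allocated:
        /* the error path must free everything allocated so far */
        while (--i >= 0)
            kfree(segs[i]);
        return -ENOMEM;
    }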
-
Yuval Mintz authored
Since commit 3fb43eb2 ("bnx2x: Change to D3hot only on removal"), nvram is accessible whenever the driver is loaded - thus it is possible to test it during self-test even if the interface is down. Signed-off-by:
Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by:
Ariel Elior <ariele@broadcom.com> Signed-off-by:
Eilon Greenstein <eilong@broadcom.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit bd8e012b) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Dan Carpenter authored
The cpl_abort_req struct has several reserved members which need to be cleared to avoid disclosing kernel information. I have added a memset() so now it matches the cxgb4 version of this function. Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Steve Wise <swise@opengridcomputing.com> Signed-off-by:
Roland Dreier <roland@purestorage.com> (cherry picked from commit e4514cbd) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
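A hedged sketch of the pattern; the struct layout below is illustrative, not the real cpl_abort_req:

    #include <linux/string.h>
    #include <linux/types.h>
    #include <asm/byteorder.h>

    struct wire_abort_req {
        __be32 cmd;
        __be32 rsvd0;       /* reserved on the wire */
        u8     cmd_flags;
        u8     rsvd1[3];    /* reserved on the wire */
    };

    static void build_abort_req(struct wire_abort_req *req, u32 cmd, u8 flags)
    {
        memset(req, 0, sizeof(*req)); /* reserved bytes must not leak stack data */
        req->cmd = cpu_to_be32(cmd);
        req->cmd_flags = flags;
    }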
-
David Gibson authored
netxen_process_lro() contains two bounds checks: one for the ring number against the number of rings, and one for the Rx buffer ID against the array of receive buffers. Both of these have off-by-one errors, using > instead of >=. The correct versions are used in netxen_process_rcv(); they're just wrong in netxen_process_lro(). Signed-off-by:
David Gibson <david@gibson.dropbear.id.au> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 4710b2ba) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
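The shape of the fix, as a sketch: for an array of n entries the valid indexes are 0..n-1, so the reject test must use >=:

    #include <linux/types.h>

    /* Wrong: accepts idx == n, one past the end of the array. */
    static inline bool index_ok_buggy(unsigned int idx, unsigned int n)
    {
        return !(idx > n);
    }

    /* Right: rejects idx == n as well. */
    static inline bool index_ok(unsigned int idx, unsigned int n)
    {
        return !(idx >= n);
    }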
-
Russell King authored
The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            *using_dac = true;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err)
                            goto release_regions;
            }

This means we only try to set the coherent DMA mask if we failed to set a 32-bit DMA mask, and only if both fail do we fail the driver. Adjust this so that if either setting fails, we fail the driver - and thereby end up properly setting both the DMA mask and the coherent DMA mask in the fallback case. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit 3e548079) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
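A hedged sketch of the corrected flow the commit describes; the helper name and *using_dac flag are illustrative. Fail the probe if either mask cannot be set in whichever width we fall back to:

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int set_dma_masks(struct pci_dev *pdev, bool *using_dac)
    {
        int err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));

        if (!err)
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (!err) {
            *using_dac = true;
            return 0;
        }

        *using_dac = false;
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (!err)
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
        return err; /* caller fails the probe on any error */
    }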
-
Wei Yongjun authored
Add the missing iounmap() before returning from igbvf_probe() in the error handling case. Signed-off-by:
Wei Yongjun <yongjun_wei@trendmicro.com.cn> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Tested-by:
Sibai Li <Sibai.li@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit de524681) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Dan Carpenter authored
If new_mtu is very large then "new_mtu + ETH_HLEN + ETH_FCS_LEN" can wrap and the check on the next line can underflow. This is one of those bugs which can be triggered by the user if you have namespaces configured. Also, since this is something the user can trigger, we don't want a dev_err() message. This is a static checker fix and I'm not sure what the impact is. Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit 3de9e65f) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
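A sketch of the overflow-safe ordering, with illustrative bounds: validate new_mtu before doing any arithmetic with it, and fail quietly since the caller may be an unprivileged user in a namespace:

    #include <linux/if_ether.h>
    #include <linux/errno.h>

    #define MIN_MTU              68   /* minimum IPv4 MTU */
    #define ILLUSTRATIVE_MAX_MTU 9216 /* driver-specific jumbo limit */

    static int validate_new_mtu(int new_mtu)
    {
        /* range-check first so new_mtu + ETH_HLEN + ETH_FCS_LEN cannot
         * wrap; no dev_err() here, the user can trigger this path */
        if (new_mtu < MIN_MTU || new_mtu > ILLUSTRATIVE_MAX_MTU)
            return -EINVAL;
        return new_mtu + ETH_HLEN + ETH_FCS_LEN; /* max frame size */
    }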
-
Russell King authored
The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
            if (!err)
                    pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev, "No usable DMA "
                                    "configuration, aborting\n");
                            goto err_dma;
                    }
            }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit c21b8ebc) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Fujinaka, Todd authored
Don't let ethtool try to write to iNVM in i210/i211. This fixes an issue seen by Marek Vasut. Reported-by:
Marek Vasut <marex@denx.de> Signed-off-by:
Todd Fujinaka <todd.fujinaka@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit a71fc313) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Carolyn Wyborny authored
This patch calls code to set the master/slave mode for all M88 gen 2 PHYs. It also removes the I210-only call to this function from the function that is not called by I210 devices. Signed-off-by:
Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by:
Jeff Pieper <jeffrey.e.pieper@gmail.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit d1c17d80) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Fujinaka, Todd authored
Add the ethtool offline tests for i354 devices. Signed-off-by:
Todd Fujinaka <todd.fujinaka@intel.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit a4e979a2) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Russell King authored
The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev, "No usable DMA "
                                    "configuration, aborting\n");
                            goto err_dma;
                    }
            }
            pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit 53567aa4) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Emil Tantilov authored
This patch resolves an issue where the MTA table can be cleared when the interface is reset while in promisc mode. As a result, IPv6 traffic between VFs would be interrupted. This patch makes the update of the MTA table unconditional to avoid the inconsistent clearing on reset. Signed-off-by:
Emil Tantilov <emil.s.tantilov@intel.com> Tested-by:
Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit cf78959c) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Russell King authored
The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
                            goto err_dma;
                    }
            }
            pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit f5f2eda8) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Vladimir Davydov authored
On e1000_down(), we should ensure every asynchronous work is canceled before proceeding. Since the watchdog_task can schedule other works apart from itself, it should be stopped first, but currently it is stopped after the reset_task. This can result in the following race leading to the reset_task running after the module unload:

    e1000_down_and_stop():                    e1000_watchdog():
    ----------------------                    -----------------
    cancel_work_sync(reset_task)
                                              schedule_work(reset_task)
    cancel_delayed_work_sync(watchdog_task)

The patch moves cancel_delayed_work_sync(watchdog_task) to the beginning of e1000_down_and_stop(), thus ensuring the race is impossible. Cc: Tushar Dave <tushar.n.dave@intel.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by:
Vladimir Davydov <vdavydov@parallels.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit 74a1b1ea) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
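A minimal sketch of the ordering rule, assuming a generic adapter struct: cancel the work that can queue other works (the watchdog) before cancelling the works it queues:

    #include <linux/workqueue.h>

    struct adapter {
        struct delayed_work watchdog_task; /* may schedule reset_task */
        struct work_struct reset_task;
    };

    static void down_and_stop(struct adapter *a)
    {
        /* watchdog first: once it is gone, nothing re-queues reset_task */
        cancel_delayed_work_sync(&a->watchdog_task);
        cancel_work_sync(&a->reset_task);
    }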
-
yzhu1 authored
This change is based on a similar change made to e1000e support in commit bb9e44d0 ("e1000e: prevent oops when adapter is being closed and reset simultaneously"). The same issue has also been observed on the older e1000 cards. Here, we have increased the RESET_COUNT value to 50 because there are too many accesses to the e1000 NIC during stress tests; setting RESET_COUNT to 25 is not enough. Experimentation has shown that setting RESET_COUNT to 50 is enough. Signed-off-by:
yzhu1 <yanjun.zhu@windriver.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit 6a7d64e3) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Hong Zhiguo authored
tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *" Signed-off-by:
Hong Zhiguo <zhiguohong@tencent.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit 49a45a06) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Mika Westerberg authored
Commit 7509963c (e1000e: Fix a compile flag mis-match for suspend/resume) moved suspend and resume hooks to be available when CONFIG_PM is set. However, it can be set even if CONFIG_PM_SLEEP is not set, causing the following warnings to be emitted:

    drivers/net/ethernet/intel/e1000e/netdev.c:6178:12: warning:
        ‘e1000_suspend’ defined but not used [-Wunused-function]
    drivers/net/ethernet/intel/e1000e/netdev.c:6185:12: warning:
        ‘e1000_resume’ defined but not used [-Wunused-function]

To fix this, make the hooks available only when CONFIG_PM_SLEEP is set, and remove the CONFIG_PM wrapping from driver ops, because this is already handled by SET_SYSTEM_SLEEP_PM_OPS() and SET_RUNTIME_PM_OPS(). Signed-off-by:
Mika Westerberg <mika.westerberg@linux.intel.com> Cc: Dave Ertman <davidx.m.ertman@intel.com> Cc: Aaron Brown <aaron.f.brown@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 38a529b5) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
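A sketch of the resulting guard structure; the hook names are illustrative and the bodies are stubs:

    #include <linux/pm.h>

    #ifdef CONFIG_PM_SLEEP
    static int e1000_suspend(struct device *dev) { return 0; /* stub */ }
    static int e1000_resume(struct device *dev)  { return 0; /* stub */ }
    #endif

    /* SET_SYSTEM_SLEEP_PM_OPS() expands to nothing without
     * CONFIG_PM_SLEEP, so no extra CONFIG_PM wrapping is needed here. */
    static const struct dev_pm_ops e1000_pm_ops = {
        SET_SYSTEM_SLEEP_PM_OPS(e1000_suspend, e1000_resume)
    };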
-
David Ertman authored
This patch addresses a mis-match between the declaration and usage of the e1000_suspend and e1000_resume functions. Previously, these functions were declared in a CONFIG_PM_SLEEP wrapper, and then utilized within a CONFIG_PM wrapper. Both the declaration and usage will now be contained within CONFIG_PM wrappers. Signed-off-by:
Dave Ertman <davidx.m.ertman@intel.com> Tested-by:
Aaron Brown <aaron.f.brown@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (cherry picked from commit 7509963c) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Russell King authored
Provide a helper to set both the DMA and coherent DMA masks to the same value - this avoids duplicated code in a number of drivers, sometimes with buggy error handling, and also allows us to identify which drivers do things differently. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit 4aa806b7) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
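Based on the commit's description, the helper is essentially this sketch: set the streaming mask, and only if that succeeds set the coherent mask to the same value:

    #include <linux/dma-mapping.h>

    static inline int dma_set_mask_and_coherent(struct device *dev, u64 mask)
    {
        int rc = dma_set_mask(dev, mask);

        /* only set the coherent mask if the streaming mask was accepted */
        if (rc == 0)
            dma_set_coherent_mask(dev, mask);
        return rc;
    }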
-
Keith Packard authored
When FB_EVENT_FB_UNBIND is sent, fbcon has two paths: one path taken when there is another frame buffer to switch any affected vcs to, and another path when there isn't.

In the case where there is another frame buffer to use, fbcon_fb_unbind calls set_con2fb_map to remap all of the affected vcs to the replacement frame buffer. set_con2fb_map will eventually call con2fb_release_oldinfo when the last vcs gets unmapped from the old frame buffer. con2fb_release_oldinfo frees the fbcon data that is hooked off of the fb_info structure, including the cursor timer.

In the case where there isn't another frame buffer to use, fbcon_fb_unbind simply calls fbcon_unbind, which doesn't clear the con2fb_map or free the fbcon data hooked from the fb_info structure. In particular, it doesn't stop the cursor blink timer. When the fb_info structure is then freed, we end up with a timer queue pointing into freed memory and "bad things" start happening.

This patch first changes con2fb_release_oldinfo so that it can take a NULL pointer for the new frame buffer, but still does all of the deallocation and cursor timer cleanup. Finally, the patch tries to replicate some of what set_con2fb_map does by clearing the con2fb_map for the affected vcs and calling the modified con2fb_release_oldinfo function to clean up the fb_info structure. Signed-off-by:
Keith Packard <keithp@keithp.com> Signed-off-by:
Tomi Valkeinen <tomi.valkeinen@ti.com> (cherry picked from commit 5f4dc28b) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Cedric Le Goater authored
The "screen" properties : depth, width, height, linebytes need to be converted to the host endian order when read from the device tree. The offb_init_palette_hacks() routine also made assumption on the host endian order. Signed-off-by:
Cédric Le Goater <clg@fr.ibm.com> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> (cherry picked from commit 212c0cbd) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Naoya Horiguchi authored
Commit 4a705fef ("hugetlb: fix copy_hugetlb_page_range() to handle migration/hwpoisoned entry") changed the order of huge_ptep_set_wrprotect() and huge_ptep_get(), which leads to breakage in some workloads like hugepage-backed heap allocation via libhugetlbfs. This patch fixes it. The test program for the problem is shown below:

    $ cat heap.c
    #include <unistd.h>
    #include <stdlib.h>
    #include <string.h>

    #define HPS 0x200000

    int main() {
        int i;
        char *p = malloc(HPS);

        memset(p, '1', HPS);
        for (i = 0; i < 5; i++) {
            if (!fork()) {
                memset(p, '2', HPS);
                p = malloc(HPS);
                memset(p, '3', HPS);
                free(p);
                return 0;
            }
        }
        sleep(1);
        free(p);
        return 0;
    }

    $ export HUGETLB_MORECORE=yes ; export HUGETLB_NO_PREFAULT= ; hugectl --heap ./heap

Fixes 4a705fef ("hugetlb: fix copy_hugetlb_page_range() to handle migration/hwpoisoned entry"), so is applicable to -stable kernels which include it. Signed-off-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by:
Guillaume Morin <guillaume@morinfr.org> Suggested-by:
Guillaume Morin <guillaume@morinfr.org> Acked-by:
Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> [2.6.37+] Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 0253d634) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Naoya Horiguchi authored
There's a race between fork() and hugepage migration; as a result we try to "dereference" a swap entry as a normal pte, causing a kernel panic. The cause of the problem is that copy_hugetlb_page_range() can't handle the "swap entry" family (migration entry and hwpoisoned entry), so let's fix it. [akpm@linux-foundation.org: coding-style fixes] Signed-off-by:
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by:
Hugh Dickins <hughd@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: <stable@vger.kernel.org> [2.6.37+] Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 4a705fef) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
H. Peter Anvin authored
Make espfix64 a hidden Kconfig option. This fixes the x86-64 UML build which had broken due to the non-existence of init_espfix_bsp() in UML: since UML uses its own Kconfig, this option does not appear in the UML build. This also makes it possible to make support for 16-bit segments a configuration option, for the people who want to minimize the size of the kernel. Reported-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
H. Peter Anvin <hpa@zytor.com> Cc: Richard Weinberger <richard@nod.at> Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com (cherry picked from commit 197725de) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Greg Thelen authored
1d3d4437 ("vmscan: per-node deferred work") added a flags field to struct shrinker assuming that all shrinkers were zero filled. The dm bufio shrinker is not zero filled, which leaves arbitrary kmalloc() data in flags. So far the only defined flags bit is SHRINKER_NUMA_AWARE. But there are proposed patches which add other bits to shrinker.flags (e.g. memcg awareness). Rather than simply initializing the shrinker, this patch uses kzalloc() when allocating the dm_bufio_client to ensure that the embedded shrinker and any other similar structures are zeroed. This fixes theoretical over aggressive shrinking of dm bufio objects. If the uninitialized dm_bufio_client.shrinker.flags contains SHRINKER_NUMA_AWARE then shrink_slab() would call the dm shrinker for each numa node rather than just once. This has been broken since 3.12. Signed-off-by:
Greg Thelen <gthelen@google.com> Acked-by:
Mikulas Patocka <mpatocka@redhat.com> Signed-off-by:
Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org # v3.12+ (cherry picked from commit d8c712ea) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
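A hedged sketch of the pattern; the struct shape is illustrative, not the real dm_bufio_client:

    #include <linux/slab.h>
    #include <linux/shrinker.h>

    struct bufio_client {
        struct shrinker shrinker;
        /* ... other fields ... */
    };

    static struct bufio_client *client_alloc(void)
    {
        /* kzalloc, not kmalloc: the embedded shrinker.flags must start
         * at 0, or a stale SHRINKER_NUMA_AWARE bit changes behavior */
        return kzalloc(sizeof(struct bufio_client), GFP_KERNEL);
    }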
-
Michal Hocko authored
Paul Furtado has reported the following GPF:

    general protection fault: 0000 [#1] SMP
    Modules linked in: ipv6 dm_mod xen_netfront coretemp hwmon x86_pkg_temp_thermal crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel ablk_helper cryptd lrw gf128mul glue_helper aes_x86_64 microcode pcspkr ext4 jbd2 mbcache raid0 xen_blkfront
    CPU: 3 PID: 3062 Comm: java Not tainted 3.16.0-rc5 #1
    task: ffff8801cfe8f170 ti: ffff8801d2ec4000 task.ti: ffff8801d2ec4000
    RIP: e030:mem_cgroup_oom_synchronize+0x140/0x240
    RSP: e02b:ffff8801d2ec7d48 EFLAGS: 00010283
    RAX: 0000000000000001 RBX: ffff88009d633800 RCX: 000000000000000e
    RDX: fffffffffffffffe RSI: ffff88009d630200 RDI: ffff88009d630200
    RBP: ffff8801d2ec7da8 R08: 0000000000000012 R09: 00000000fffffffe
    R10: 0000000000000000 R11: 0000000000000000 R12: ffff88009d633800
    R13: ffff8801d2ec7d48 R14: dead000000100100 R15: ffff88009d633a30
    FS: 00007f1748bb4700(0000) GS:ffff8801def80000(0000) knlGS:0000000000000000
    CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00007f4110300308 CR3: 00000000c05f7000 CR4: 0000000000002660
    Call Trace:
      pagefault_out_of_memory+0x18/0x90
      mm_fault_error+0xa9/0x1a0
      __do_page_fault+0x478/0x4c0
      do_page_fault+0x2c/0x40
      page_fault+0x28/0x30
    Code: 44 00 00 48 89 df e8 40 ca ff ff 48 85 c0 49 89 c4 74 35 4c 8b b0 30 02 00 00 4c 8d b8 30 02 00 00 4d 39 fe 74 1b 0f 1f 44 00 00 <49> 8b 7e 10 be 01 00 00 00 e8 42 d2 04 00 4d 8b 36 4d 39 fe 75
    RIP  mem_cgroup_oom_synchronize+0x140/0x240

Commit fb2a6fc5 ("mm: memcg: rework and document OOM waiting and wakeup") moved mem_cgroup_oom_notify outside of memcg_oom_lock, assuming it is protected by the hierarchical OOM lock. Although this is true for the notification part, the protection doesn't cover unregistration of an event, which can happen in parallel now, so mem_cgroup_oom_notify can see an already unlinked and/or freed mem_cgroup_eventfd_list. Fix this by using memcg_oom_lock also in mem_cgroup_oom_notify. Addresses https://bugzilla.kernel.org/show_bug.cgi?id=80881 Fixes: fb2a6fc5 (mm: memcg: rework and document OOM waiting and wakeup) Signed-off-by:
Michal Hocko <mhocko@suse.cz> Reported-by:
Paul Furtado <paulfurtado91@gmail.com> Tested-by:
Paul Furtado <paulfurtado91@gmail.com> Acked-by:
Johannes Weiner <hannes@cmpxchg.org> Cc: <stable@vger.kernel.org> [3.12+] Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 2bcf2e92) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
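A self-contained sketch of the locking rule the fix applies, with illustrative names: both the notifier walk and the unregister path take the same lock, so the walker can never see a freed entry:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct oom_event {
        struct list_head list;
        void (*notify)(void);
    };

    static DEFINE_SPINLOCK(oom_lock);
    static LIST_HEAD(oom_notify_list);

    static void oom_notify_all(void)
    {
        struct oom_event *ev;

        spin_lock(&oom_lock);
        list_for_each_entry(ev, &oom_notify_list, list)
            ev->notify();
        spin_unlock(&oom_lock);
    }

    static void oom_unregister(struct oom_event *ev)
    {
        spin_lock(&oom_lock);
        list_del(&ev->list); /* cannot race with the walk above */
        spin_unlock(&oom_lock);
    }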
-
Mateusz Guzik authored
proc_sched_show_task() does:

    if (nr_switches)
        do_div(avg_atom, nr_switches);

nr_switches is unsigned long and do_div truncates it to 32 bits, which means it can test non-zero on e.g. x86-64 and be truncated to zero for division. Fix the problem by using div64_ul() instead. As a side effect, calculations of avg_atom for big nr_switches are now correct. Signed-off-by:
Mateusz Guzik <mguzik@redhat.com> Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/1402750809-31991-1-git-send-email-mguzik@redhat.com Signed-off-by:
Ingo Molnar <mingo@kernel.org> (cherry picked from commit b0ab99e7) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
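A sketch of the fixed computation; the function wrapper is illustrative:

    #include <linux/math64.h>

    /* do_div() truncates the divisor to 32 bits, so a non-zero 64-bit
     * nr_switches can become zero inside the division even though the
     * guard saw it as non-zero. div64_ul() keeps the full width. */
    static u64 avg_atom_of(u64 sum_exec_runtime, unsigned long nr_switches)
    {
        if (nr_switches)
            return div64_ul(sum_exec_runtime, nr_switches);
        return 0;
    }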
-
Mike Snitzer authored
The block size for the thin-pool's data device must remain fixed for the life of the thin-pool. Disallow any attempt to change the thin-pool's data block size. It should be noted that attempting to change the data block size via thin-pool table reload will be ignored as a side-effect of the thin-pool handover that the thin-pool target does during thin-pool table reload. Here is an example outcome of attempting to load a thin-pool table that reduced the thin-pool's data block size from 1024K to 512K.

Before:

    kernel: device-mapper: thin: 253:4: growing the data device from 204800 to 409600 blocks

After:

    kernel: device-mapper: thin metadata: changing the data block size (from 2048 to 1024) is not supported
    kernel: device-mapper: table: 253:4: thin-pool: Error creating metadata object
    kernel: device-mapper: ioctl: error adding target to table

Signed-off-by:
Mike Snitzer <snitzer@redhat.com> Acked-by:
Joe Thornber <ejt@redhat.com> Cc: stable@vger.kernel.org (cherry picked from commit 9aec8629) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Alex Deucher authored
If the value in the scratch register is 0, set it to the max level. This fixes an issue where the console fb blanking code calls back into the backlight driver on unblank and then sets the backlight level to 0 after the driver has already set the mode and enabled the backlight.

Bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=81382
https://bugs.freedesktop.org/show_bug.cgi?id=70207

Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Tested-by:
David Heidelberger <david.heidelberger@ixit.cz> Cc: stable@vger.kernel.org (cherry picked from commit 201bb624) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
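A minimal sketch of the rule, with illustrative names: treat a zero level read back from the scratch register as "never set" and report full brightness instead of turning the panel off:

    typedef unsigned char u8;

    static u8 effective_backlight_level(u8 scratch_level, u8 max_level)
    {
        /* 0 means the register was never programmed, not "off" */
        return scratch_level ? scratch_level : max_level;
    }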
-
Thomas Petazzoni authored
As reported by Maggie Mae Roxas, the mvneta driver doesn't behave properly in 10 Mbit/s mode. This is due to a misconfiguration of the MVNETA_GMAC_AUTONEG_CONFIG register: bit MVNETA_GMAC_CONFIG_MII_SPEED must be set for a 100 Mbit/s speed, but cleared for a 10 Mbit/s speed, which the driver was not properly doing. This commit adjusts that by setting the MVNETA_GMAC_CONFIG_MII_SPEED bit only in 100 Mbit/s mode, and relying on the fact that all the speed-related bits of this register are cleared at the beginning of the mvneta_adjust_link() function. This problem has existed since c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit"), which is the commit that introduced the mvneta driver in the kernel. Cc: <stable@vger.kernel.org> # v3.8+ Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit") Reported-by:
Maggie Mae Roxas <maggie.mae.roxas@gmail.com> Cc: Maggie Mae Roxas <maggie.mae.roxas@gmail.com> Signed-off-by:
Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 4d12bc63) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
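A sketch of the speed programming shape; the register and bit names follow the commit, but the bit positions here are illustrative:

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define GMAC_CONFIG_GMII_SPEED BIT(6) /* illustrative: 1000 Mbit/s */
    #define GMAC_CONFIG_MII_SPEED  BIT(5) /* illustrative: 100 Mbit/s */

    static u32 gmac_speed_bits(u32 val, int speed_mbps)
    {
        /* clear all speed bits first, then set only the right one;
         * 10 Mbit/s is encoded by leaving both bits clear */
        val &= ~(GMAC_CONFIG_GMII_SPEED | GMAC_CONFIG_MII_SPEED);
        if (speed_mbps == 1000)
            val |= GMAC_CONFIG_GMII_SPEED;
        else if (speed_mbps == 100)
            val |= GMAC_CONFIG_MII_SPEED;
        return val;
    }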
-
Yuchung Cheng authored
The undo code assumes that, upon entering loss recovery, TCP 1) always retransmits something and 2) the retransmission never fails locally (e.g., qdisc drop); so undo_marker is set in tcp_enter_recovery() and undo_retrans is incremented only when tcp_retransmit_skb() is successful. When that assumption is broken because TCP's cwnd is too small to retransmit or the retransmit fails locally, the next (DUP)ACK would incorrectly revert the cwnd and the congestion state in tcp_try_undo_dsack() or tcp_may_undo(), and subsequent (DUP)ACKs may enter the recovery state again. The sender repeatedly enters and (incorrectly) exits recovery states if the retransmits continue to fail locally while (DUP)ACKs keep arriving. The fix is to initialize undo_retrans to -1 and start counting on the first retransmission, and to always increment undo_retrans even if the retransmissions fail locally, because they couldn't cause DSACKs to undo the cwnd reduction. Signed-off-by:
Yuchung Cheng <ycheng@google.com> Signed-off-by:
Neal Cardwell <ncardwell@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 6e08d5e3) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Loic Prylli authored
A bug was introduced in the NETDEV_CHANGE notifier sequence causing the arp table to sometimes be spuriously cleared (including manual arp entries marked permanent) upon network link carrier changes. The changed argument for the notifier was applied only to a single caller of NETDEV_CHANGE, missing among others netdev_state_change(). So upon net_carrier events induced by the network, which trigger a call to netdev_state_change(), arp_netdev_event() would decide whether or not to clear the arp cache based on random/junk stack values (a kind of read buffer overflow). Fixes: be9efd36 ("net: pass changed flags along with NETDEV_CHANGE event") Fixes: 6c8b4e3f ("arp: flush arp cache on IFF_NOARP change") Signed-off-by:
Loic Prylli <loicp@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 54951194) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Bernd Wachter authored
There's a new version of the Telewell 4G modem that works with, but is not recognized by, this driver. Signed-off-by:
Bernd Wachter <bernd.wachter@jolla.com> Acked-by:
Bjørn Mork <bjorn@mork.no> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 8dcb4b15) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Edward Allcutt authored
Some older router implementations still send Fragmentation Needed errors with the Next-Hop MTU field set to zero. This is explicitly described as an eventuality that hosts must deal with by the standard (RFC 1191) since older standards specified that those bits must be zero. Linux had a generic (for all of IPv4) implementation of the algorithm described in the RFC for searching a list of MTU plateaus for a good value. Commit 46517008 ("ipv4: Kill ip_rt_frag_needed().") removed this as part of the changes to remove the routing cache. Subsequently any Fragmentation Needed packet with a zero Next-Hop MTU has been discarded without being passed to the per-protocol handlers or notifying userspace for raw sockets. When there is a router which does not implement RFC 1191 on an MTU limited path then this results in stalled connections since large packets are discarded and the local protocols are not notified so they never attempt to lower the pMTU. One example I have seen is an OpenBSD router terminating IPSec tunnels. It's worth pointing out that this case is distinct from the BSD 4.2 bug which incorrectly calculated the Next-Hop MTU since the commit in question dismissed that as a valid concern. All of the per-protocols handlers implement the simple approach from RFC 1191 of immediately falling back to the minimum value. Although this is sub-optimal it is vastly preferable to connections hanging indefinitely. Remove the Next-Hop MTU != 0 check and allow such packets to follow the normal path. Fixes: 46517008 ("ipv4: Kill ip_rt_frag_needed().") Signed-off-by:
Edward Allcutt <edward.allcutt@openmarket.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 68b7107b) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
-
Christoph Paasch authored
When in repair-mode and TCP_RECV_QUEUE is set, we end up calling tcp_push with mss_now being 0. If data is in the send-queue and tcp_set_skb_tso_segs gets called, we crash because it will divide by mss_now:

    [  347.151939] divide error: 0000 [#1] SMP
    [  347.152907] Modules linked in:
    [  347.152907] CPU: 1 PID: 1123 Comm: packetdrill Not tainted 3.16.0-rc2 #4
    [  347.152907] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
    [  347.152907] task: f5b88540 ti: f3c82000 task.ti: f3c82000
    [  347.152907] EIP: 0060:[<c1601359>] EFLAGS: 00210246 CPU: 1
    [  347.152907] EIP is at tcp_set_skb_tso_segs+0x49/0xa0
    [  347.152907] EAX: 00000b67 EBX: f5acd080 ECX: 00000000 EDX: 00000000
    [  347.152907] ESI: f5a28f40 EDI: f3c88f00 EBP: f3c83d10 ESP: f3c83d00
    [  347.152907] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
    [  347.152907] CR0: 80050033 CR2: 083158b0 CR3: 35146000 CR4: 000006b0
    [  347.152907] Stack:
    [  347.152907]  c167f9d9 f5acd080 000005b4 00000002 f3c83d20 c16013e6 f3c88f00 f5acd080
    [  347.152907]  f3c83da0 c1603b5a f3c83d38 c10a0188 00000000 00000000 f3c83d84 c10acc85
    [  347.152907]  c1ad5ec0 00000000 00000000 c1ad679c 010003e0 00000000 00000000 f3c88fc8
    [  347.152907] Call Trace:
    [  347.152907]  [<c167f9d9>] ? apic_timer_interrupt+0x2d/0x34
    [  347.152907]  [<c16013e6>] tcp_init_tso_segs+0x36/0x50
    [  347.152907]  [<c1603b5a>] tcp_write_xmit+0x7a/0xbf0
    [  347.152907]  [<c10a0188>] ? up+0x28/0x40
    [  347.152907]  [<c10acc85>] ? console_unlock+0x295/0x480
    [  347.152907]  [<c10ad24f>] ? vprintk_emit+0x1ef/0x4b0
    [  347.152907]  [<c1605716>] __tcp_push_pending_frames+0x36/0xd0
    [  347.152907]  [<c15f4860>] tcp_push+0xf0/0x120
    [  347.152907]  [<c15f7641>] tcp_sendmsg+0xf1/0xbf0
    [  347.152907]  [<c116d920>] ? kmem_cache_free+0xf0/0x120
    [  347.152907]  [<c106a682>] ? __sigqueue_free+0x32/0x40
    [  347.152907]  [<c106a682>] ? __sigqueue_free+0x32/0x40
    [  347.152907]  [<c114f0f0>] ? do_wp_page+0x3e0/0x850
    [  347.152907]  [<c161c36a>] inet_sendmsg+0x4a/0xb0
    [  347.152907]  [<c1150269>] ? handle_mm_fault+0x709/0xfb0
    [  347.152907]  [<c15a006b>] sock_aio_write+0xbb/0xd0
    [  347.152907]  [<c1180b79>] do_sync_write+0x69/0xa0
    [  347.152907]  [<c1181023>] vfs_write+0x123/0x160
    [  347.152907]  [<c1181d55>] SyS_write+0x55/0xb0
    [  347.152907]  [<c167f0d8>] sysenter_do_call+0x12/0x28

This can easily be reproduced with the following packetdrill-script (the "magic" with netem, sk_pacing and limit_output_bytes is done to prevent the kernel from pushing all segments, because hitting the limit without doing this is not so easy with packetdrill):

    0    socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
    +0   setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    +0   bind(3, ..., ...) = 0
    +0   listen(3, 1) = 0

    +0   < S 0:0(0) win 32792 <mss 1460>
    +0   > S. 0:0(0) ack 1 <mss 1460>
    +0.1 < . 1:1(0) ack 1 win 65000

    +0   accept(3, ..., ...) = 4

    // This forces that not all segments of the snd-queue will be pushed
    +0   `tc qdisc add dev tun0 root netem delay 10ms`
    +0   `sysctl -w net.ipv4.tcp_limit_output_bytes=2`
    +0   setsockopt(4, SOL_SOCKET, 47, [2], 4) = 0

    +0   write(4,...,10000) = 10000
    +0   write(4,...,10000) = 10000

    // Set tcp-repair stuff, particularly TCP_RECV_QUEUE
    +0   setsockopt(4, SOL_TCP, 19, [1], 4) = 0
    +0   setsockopt(4, SOL_TCP, 20, [1], 4) = 0

    // This now will make the write push the remaining segments
    +0   setsockopt(4, SOL_SOCKET, 47, [20000], 4) = 0
    +0   `sysctl -w net.ipv4.tcp_limit_output_bytes=130000`

    // Now we will crash
    +0   write(4,...,1000) = 1000

This happens since ec342325 (tcp: fix retransmission in repair mode). Prior to that, the call to tcp_push was prevented by a check for tp->repair.
The patch fixes it by adding a new goto label, out_nopush. When exiting tcp_sendmsg and a push is not required, which is the case for tp->repair, we go to this label. When repairing and calling send() with TCP_RECV_QUEUE, the data is actually put into the receive queue. So no push is required, because no data has been added to the send queue. Cc: Andrew Vagin <avagin@openvz.org> Cc: Pavel Emelyanov <xemul@parallels.com> Fixes: ec342325 (tcp: fix retransmission in repair mode) Signed-off-by:
Christoph Paasch <christoph.paasch@uclouvain.be> Acked-by:
Andrew Vagin <avagin@openvz.org> Acked-by:
Pavel Emelyanov <xemul@parallels.com> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit 5924f17a) Signed-off-by:
Sasha Levin <sasha.levin@oracle.com>
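A self-contained user-space analogue of the control flow the patch adds, with illustrative names; the real function is tcp_sendmsg():

    #include <stdbool.h>
    #include <stdio.h>

    /* The repair path jumps past the push because nothing entered the
     * send queue; the normal path pushes whatever was copied. */
    static int sendmsg_like(bool repair_recv_queue, int copied)
    {
        if (repair_recv_queue)
            goto out_nopush; /* data went to the receive queue */
        if (copied)
            printf("tcp_push(%d bytes)\n", copied);
    out_nopush:
        return copied;
    }

    int main(void)
    {
        sendmsg_like(false, 1000); /* pushes */
        sendmsg_like(true, 1000);  /* skips the push */
        return 0;
    }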
-