- 26 Jun, 2019 29 commits
-
-
Nishka Dasgupta authored
Remove function block_resume as it was only called by vchiq_arm_force_suspend, which was removed in a previous patch. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove function output_timeout_error as it was only called by vchiq_arm_force_suspend, which was deleted in a previous patch. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_send_remote_release. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_use_service_no_resume. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_resume_internal. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_pause_internal. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_arm_force_suspend. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiq_arm_allow_resume. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nishka Dasgupta authored
Remove unused function vchiu_queue_is_full. Issue found with Coccinelle. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Simon Sandström authored
Fixes checkpatch errors: - spaces required around that '=' (ctx:VxV) - space required before the open parenthesis '(' - spaces preferred around that '-' (ctx:VxV) - space required before the open brace '{' Signed-off-by: Simon Sandström <simon@nikanor.nu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
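For context, the spacing rules in question boil down to this kind of change (the function and variable names are invented for illustration, not taken from the patch):

/* Before: the style checkpatch complains about. */
static int bad_style(int len)
{
	int ret=len-1;		/* spaces required around '=', preferred around '-' */

	if(ret < 0){		/* space required before '(' and before '{' */
		ret = 0;
	}
	return ret;
}

/* After: the same function with the spacing fixed. */
static int good_style(int len)
{
	int ret = len - 1;

	if (ret < 0) {
		ret = 0;
	}
	return ret;
}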
-
Simon Sandström authored
Fixes checkpatch "CHECK: spaces preferred around that '+' (ctx:VxV)". Signed-off-by: Simon Sandström <simon@nikanor.nu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Straube authored
Function is_ap_in_wep() is not used in the driver code, so remove it. Signed-off-by: Michael Straube <straube.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Straube authored
Function get_bsstype() is not used in the driver code, so remove it. Signed-off-by: Michael Straube <straube.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shobhit Kukreti authored
rtw_init_default_value() always returns (u8)_SUCCESS. Modified the return type to void to resolve the coccicheck warning about an unneeded variable. Signed-off-by: Shobhit Kukreti <shobhitkukreti@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shobhit Kukreti authored
Changed the return type of function rtw_suspend_wow() to void. The function always returns _SUCCESS and the value is never checked in the calling function. Resolves the coccicheck 'Unneeded variable "ret"' warning. Signed-off-by: Shobhit Kukreti <shobhitkukreti@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shobhit Kukreti authored
Coccicheck issues an 'Unneeded variable "ret"' warning. The return value of function rtw_suspend_normal() is always set to _SUCCESS and is never checked by the calling function. Modified the return type to void to remove the coccicheck warning. Signed-off-by: Shobhit Kukreti <shobhitkukreti@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shobhit Kukreti authored
The return value of rtw_reset_drv_sw() is always set to _SUCCESS and is never checked when the function is called. Modified the return type to void to remove the 'Unneeded variable' warning from coccicheck. Signed-off-by: Shobhit Kukreti <shobhitkukreti@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shobhit Kukreti authored
The return type of the function static uint loadparam(struct adapter *padapter, _nic_hdl pnetdev) is changed to void. The function always returned _SUCCESS and the return value is never checked when the function is called. This resolves coccicheck warnings of unneeded variables. Signed-off-by: Shobhit Kukreti <shobhitkukreti@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
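The pattern behind this run of coccicheck 'Unneeded variable' fixes, sketched with an invented function name rather than quoting any of the actual diffs:

/* Before: the return value is always _SUCCESS and every caller ignores it. */
static u8 example_init_defaults(struct adapter *padapter)
{
	u8 ret = _SUCCESS;

	/* ... set up default values in padapter ... */
	return ret;
}

/* After: the never-checked return value and its local variable are dropped. */
static void example_init_defaults(struct adapter *padapter)
{
	/* ... set up default values in padapter ... */
}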
-
Michael Straube authored
Function hal_init_macaddr() just calls rtw_hal_set_hwreg(). Use rtw_hal_set_hwreg() directly and remove hal_init_macaddr(). Signed-off-by: Michael Straube <straube.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Straube authored
Clean up checkpatch issues in usb_halinit.c: CHECK: Lines should not end with a '('. Signed-off-by: Michael Straube <straube.linux@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gao Xiang authored
The decompressor needs to know whether it's a partial or full decompression, since only full decompression can be done in place. On the kirin980 platform, sequential read finally increases to 812MiB/s after in-place decompression is enabled. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gao Xiang authored
This patch integrates the new decompression framework into the erofs decompression path and removes the old decompression implementation as well. On the kirin980 platform, sequential read slightly improves to 778MiB/s once the new decompression backend is used. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gao Xiang authored
Compressed data will usually be loaded into the last page(s) of the extent (the last page for 4k) for in-place decompression (more specifically, in-place I/O). As the illustration in the original patch description shows for a 4-page extent (pages 6..9): the decompressed output [op, oend) spans dstsize across the extent, while the compressed input [ip, iend) sits at the tail of the same pages, with iend extending a small margin beyond oend. Therefore, it's possible to decompress in place (thus no memcpy at all) if the margin is sufficient and safe enough [1], and it can only be implemented for fixed-size output compression, as opposed to fixed-size input compression. There is no memcpy for most in-place I/O (about 99% of enwik9) after in-place decompression is implemented, and sequential read will of course be improved (see the following patches for test results). [1] https://github.com/lz4/lz4/commit/b17f578a919b7e6b078cede2d52be29dd48c8e8c https://github.com/lz4/lz4/commit/5997e139f53169fa3a1c1b4418d2452a90b01602 Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
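A minimal sketch of the safety check this describes; the names and the way the required margin is obtained are illustrative, not the actual erofs code (upstream LZ4 derives the margin from the compressed size, see the commits referenced above):

/* Output [op, oend) and input [ip, iend) share the same pages; the
 * compressed input was read into the tail, so iend lies past oend.
 * In-place decoding is only safe if that gap is at least the margin
 * the decompressor needs in order to never overwrite unread input.
 */
static bool can_decompress_inplace(const u8 *oend, const u8 *iend,
				   size_t margin_needed)
{
	return iend >= oend && (size_t)(iend - oend) >= margin_needed;
}

If the check fails, the compressed data has to be copied out to bounce (staging) pages first and decompressed from there.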
-
Gao Xiang authored
This patch adds a new generic decompression framework in order to replace the old LZ4-specific decompression code. Even though LZ4 is still the only supported algorithm, it is cleaner and makes it easier to integrate new algorithms than the old, almost hard-coded decompression backend. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
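In practice such a framework reduces to a small per-algorithm ops table; a simplified sketch, with struct and field names that are approximations rather than the verbatim erofs definitions:

struct z_decompress_req {
	struct page **in, **out;
	unsigned int inputsize, outputsize;
	bool partial_decoding;		/* full decoding is what allows in-place I/O */
};

struct z_decompressor {
	int (*decompress)(struct z_decompress_req *rq,
			  struct list_head *pagepool);
	const char *name;
};

/* LZ4 is the only entry today; a new algorithm only has to add one more. */

Callers pick the table entry from the on-disk cluster type and never need to care which algorithm runs underneath.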
-
Gao Xiang authored
Staging pages behave as bounce pages for temporary use. Move them to compress.h since the upcoming decompressor will allocate staging pages as well. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gao Xiang authored
This patch moves the per-CPU buffers to utils.c so that the upcoming generic decompression framework can use them. Note that I tried to use the generic per-CPU buffer or per-CPU page approaches to clean up further, but an obvious performance regression (about 2% for sequential read) was observed. Therefore let's leave it as it is, just move it to utils.c, and I'll try to dig into the root cause later. Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
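For reference, the per-CPU scratch-buffer pattern being moved looks roughly like this (buffer name and size are placeholders, not the erofs symbols):

#include <linux/percpu.h>

#define EXAMPLE_PCPUBUF_SIZE	(2 * PAGE_SIZE)	/* placeholder size */

static DEFINE_PER_CPU(u8 [EXAMPLE_PCPUBUF_SIZE], example_pcpubuf);

/* Returns this CPU's scratch buffer; preemption stays disabled until put. */
static void *example_get_pcpubuf(void)
{
	return &get_cpu_var(example_pcpubuf);
}

static void example_put_pcpubuf(void)
{
	put_cpu_var(example_pcpubuf);
}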
-
Gao Xiang authored
This patch aims at compacted compression indexes: 1) clean up z_erofs_map_blocks_iter and move it into zmap.c; 2) add compacted 4/2B decoding support. On the kirin980 platform, sequential read increases by about 6% (725MiB/s -> 770MiB/s) on the enwik9 dataset when the compacted 2B feature is enabled. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gao Xiang authored
This patch introduces new compacted compression indexes. In contrast to the legacy compression indexes, in which each 4k logical cluster has an 8-byte index, compacted on-disk compression indexes take an amortized 2 bytes per 4k logical cluster (compacted 2B) or an amortized 4 bytes per 4k logical cluster (compacted 4B). In detail, several consecutive clusters are encoded into one compacted pack: the encoded bits (a cluster type and a clusterofs per cluster) occupy the pack from offset 0 up to amortized * vcnt - 4, and a single blkaddr occupies the last 4 bytes of the pack, leaving a 4-byte margin for better decoding performance. Note that compacted 2B / 4B packs should be aligned to 32 / 8 bytes in order to avoid each pack crossing a page boundary. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
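A worked example of the space accounting (my reading of the description above, not a quote of the on-disk specification): with 4KiB logical clusters and 16 entries per compacted-2B pack, the pack is 16 * 2 = 32 bytes; the trailing blkaddr uses 4 of them, leaving 28 bytes = 224 bits of encoded bits, i.e. 14 bits per entry, which is enough for a 2-bit cluster type plus a 12-bit clusterofs (0..4095) inside the 4KiB cluster.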
-
Ian Abbott authored
Comedi's acquisition buffer allocation code can allocate the buffer from normal kernel memory or from DMA coherent memory depending on the `dma_async_dir` value in the comedi subdevice. (A value of `DMA_NONE` causes the buffer to be allocated from normal kernel memory. Other values cause the buffer to be allocated from DMA coherent memory.) The buffer currently consists of a bunch of page-sized blocks that are vmap'ed into a contiguous range of virtual addresses. The pages are also mmap'able into user address space. For DMA-able buffers, these page-sized blocks are allocated by `dma_alloc_coherent()`. For DMA-able buffers, the DMA API is currently abused in various ways, the most serious abuse being the calling of `virt_to_page()` on the blocks allocated by `dma_alloc_coherent()` and passing those pages to `vmap()` (for mapping to the kernel's vmalloc address space) and via `page_to_pfn()` to `remap_pfn_range()` (for mmap'ing to user space). It also uses the `__GFP_COMP` flag when allocating the blocks, and marks the allocated pages as reserved (which is unnecessary for DMA coherent allocations). The code can be changed to get rid of the vmap'ed address altogether if necessary, since there are only a few places in the comedi code that use the vmap'ed address directly and we also keep a list of the kernel addresses for the individual pages prior to the vmap operation. This would add some run-time overhead to buffer accesses. The real killer is the mmap operation. For mmap, the address range specified in the VMA needs to be mmap'ed to the individually allocated page-sized blocks. That is not a problem when the pages are allocated from normal kernel memory as the individual pages can be remapped by `remap_pfn_range()`, but it is a problem when the page-sized blocks are allocated by `dma_alloc_coherent()` because the DMA API currently has no support for splitting a VMA across multiple blocks of DMA coherent memory (or rather, no support for mapping part of a VMA range to a single block of DMA coherent memory). In order to comply with the DMA API and allow the buffer to be mmap'ed, the buffer needs to be allocated as a single block by a single call to `dma_alloc_coherent()`, freed by a single call to `dma_free_coherent()`, and mmap'ed to user space by a single call to `dma_mmap_coherent()`. This patch changes the buffer allocation, freeing, and mmap'ing code to do that, with the unfortunate consequence that buffer allocation is more likely to fail. It also no longer uses the `__GFP_COMP` flag when allocating DMA coherent memory, no longer marks the allocated pages of DMA coherent memory as reserved, and no longer vmap's the DMA coherent memory pages (since they are contiguous anyway). Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
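The shape of the new scheme, reduced to its essentials (error paths and comedi-specific bookkeeping omitted; the struct and function names here are illustrative, not the driver's):

#include <linux/dma-mapping.h>

struct example_dma_buf {
	void *cpu_addr;
	dma_addr_t dma_addr;
	size_t size;
};

/* One allocation for the whole acquisition buffer instead of per-page blocks. */
static int example_buf_alloc(struct device *dev, struct example_dma_buf *buf,
			     size_t size)
{
	buf->cpu_addr = dma_alloc_coherent(dev, size, &buf->dma_addr, GFP_KERNEL);
	if (!buf->cpu_addr)
		return -ENOMEM;
	buf->size = size;
	return 0;
}

/* One call maps the whole buffer into the user VMA; no remap_pfn_range(). */
static int example_buf_mmap(struct device *dev, struct example_dma_buf *buf,
			    struct vm_area_struct *vma)
{
	return dma_mmap_coherent(dev, vma, buf->cpu_addr, buf->dma_addr,
				 buf->size);
}

static void example_buf_free(struct device *dev, struct example_dma_buf *buf)
{
	dma_free_coherent(dev, buf->size, buf->cpu_addr, buf->dma_addr);
}

The cost mentioned above follows directly: a single dma_alloc_coherent() call needs contiguous coherent memory for the entire buffer, so large allocations are more likely to fail than under the old per-page scheme.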
-
- 24 Jun, 2019 1 commit
-
-
Greg Kroah-Hartman authored
This reverts commit 3e5bc68f as it causes build errors in linux-next. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 23 Jun, 2019 1 commit
-
-
Greg Kroah-Hartman authored
We want the fixes and this resolves a merge issue as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 22 Jun, 2019 9 commits
-
-
Linus Torvalds authored
-
Merge tag 'iommu-fix-v5.2-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Linus Torvalds authored
Pull iommu fix from Joerg Roedel: "Revert a commit from the previous pile of fixes which causes new lockdep splats. It is better to revert it for now and work on a better and more well tested fix" * tag 'iommu-fix-v5.2-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: Revert "iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock"
-
Peter Xu authored
This reverts commit 7560cc3c.

With 5.2.0-rc5 I can easily trigger this with lockdep and iommu=pt:

    ======================================================
    WARNING: possible circular locking dependency detected
    5.2.0-rc5 #78 Not tainted
    ------------------------------------------------------
    swapper/0/1 is trying to acquire lock:
    00000000ea2b3beb (&(&iommu->lock)->rlock){+.+.}, at: domain_context_mapping_one+0xa5/0x4e0
    but task is already holding lock:
    00000000a681907b (device_domain_lock){....}, at: domain_context_mapping_one+0x8d/0x4e0
    which lock already depends on the new lock.

    the existing dependency chain (in reverse order) is:
    -> #1 (device_domain_lock){....}:
           _raw_spin_lock_irqsave+0x3c/0x50
           dmar_insert_one_dev_info+0xbb/0x510
           domain_add_dev_info+0x50/0x90
           dev_prepare_static_identity_mapping+0x30/0x68
           intel_iommu_init+0xddd/0x1422
           pci_iommu_init+0x16/0x3f
           do_one_initcall+0x5d/0x2b4
           kernel_init_freeable+0x218/0x2c1
           kernel_init+0xa/0x100
           ret_from_fork+0x3a/0x50
    -> #0 (&(&iommu->lock)->rlock){+.+.}:
           lock_acquire+0x9e/0x170
           _raw_spin_lock+0x25/0x30
           domain_context_mapping_one+0xa5/0x4e0
           pci_for_each_dma_alias+0x30/0x140
           dmar_insert_one_dev_info+0x3b2/0x510
           domain_add_dev_info+0x50/0x90
           dev_prepare_static_identity_mapping+0x30/0x68
           intel_iommu_init+0xddd/0x1422
           pci_iommu_init+0x16/0x3f
           do_one_initcall+0x5d/0x2b4
           kernel_init_freeable+0x218/0x2c1
           kernel_init+0xa/0x100
           ret_from_fork+0x3a/0x50

    other info that might help us debug this:

     Possible unsafe locking scenario:

           CPU0                    CPU1
           ----                    ----
      lock(device_domain_lock);
                                   lock(&(&iommu->lock)->rlock);
                                   lock(device_domain_lock);
      lock(&(&iommu->lock)->rlock);

     *** DEADLOCK ***

    2 locks held by swapper/0/1:
     #0: 00000000033eb13d (dmar_global_lock){++++}, at: intel_iommu_init+0x1e0/0x1422
     #1: 00000000a681907b (device_domain_lock){....}, at: domain_context_mapping_one+0x8d/0x4e0

    stack backtrace:
    CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.2.0-rc5 #78
    Hardware name: LENOVO 20KGS35G01/20KGS35G01, BIOS N23ET50W (1.25 ) 06/25/2018
    Call Trace:
     dump_stack+0x85/0xc0
     print_circular_bug.cold.57+0x15c/0x195
     __lock_acquire+0x152a/0x1710
     lock_acquire+0x9e/0x170
     ? domain_context_mapping_one+0xa5/0x4e0
     _raw_spin_lock+0x25/0x30
     ? domain_context_mapping_one+0xa5/0x4e0
     domain_context_mapping_one+0xa5/0x4e0
     ? domain_context_mapping_one+0x4e0/0x4e0
     pci_for_each_dma_alias+0x30/0x140
     dmar_insert_one_dev_info+0x3b2/0x510
     domain_add_dev_info+0x50/0x90
     dev_prepare_static_identity_mapping+0x30/0x68
     intel_iommu_init+0xddd/0x1422
     ? printk+0x58/0x6f
     ? lockdep_hardirqs_on+0xf0/0x180
     ? do_early_param+0x8e/0x8e
     ? e820__memblock_setup+0x63/0x63
     pci_iommu_init+0x16/0x3f
     do_one_initcall+0x5d/0x2b4
     ? do_early_param+0x8e/0x8e
     ? rcu_read_lock_sched_held+0x55/0x60
     ? do_early_param+0x8e/0x8e
     kernel_init_freeable+0x218/0x2c1
     ? rest_init+0x230/0x230
     kernel_init+0xa/0x100
     ret_from_fork+0x3a/0x50

domain_context_mapping_one() takes device_domain_lock first and then the iommu lock, while dmar_insert_one_dev_info() does the reverse. That was introduced by commit 7560cc3c ("iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock", 2019-05-27).

So far I still cannot figure out how the previous deadlock was triggered (I cannot find the iommu lock being taken before the call to iommu_flush_dev_iotlb()); however, I am pretty sure that change is incomplete, since it does not fix all the places and we are still taking the locks in different orders, while reverting it is very clean: we should always take device_domain_lock first, then the iommu lock.

We can continue to try to find the real culprit mentioned in 7560cc3c, but for now I think we should revert it to fix current breakage. CC: Joerg Roedel <joro@8bytes.org> CC: Lu Baolu <baolu.lu@linux.intel.com> CC: dave.jiang@intel.com Signed-off-by: Peter Xu <peterx@redhat.com> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Joerg Roedel <jroedel@suse.de>
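Put differently, the locking rule the revert restores looks like this (illustrative sketch, not the actual vt-d functions):

/* Consistent order everywhere: device_domain_lock first, iommu->lock second. */
static void example_update_context(struct intel_iommu *iommu)
{
	unsigned long flags;

	spin_lock_irqsave(&device_domain_lock, flags);
	spin_lock(&iommu->lock);
	/* ... modify device_domain_info and context entries ... */
	spin_unlock(&iommu->lock);
	spin_unlock_irqrestore(&device_domain_lock, flags);
}

Any path that takes iommu->lock before device_domain_lock reintroduces the AB-BA inversion that lockdep reports above.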
-
Merge tag 'pci-v5.2-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Linus Torvalds authored
Pull PCI fix from Bjorn Helgaas: "If an IOMMU is present, ignore the P2PDMA whitelist we added for v5.2 because we don't yet know how to support P2PDMA in that case (Logan Gunthorpe)" * tag 'pci-v5.2-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: PCI/P2PDMA: Ignore root complex whitelist when an IOMMU is present
-
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fixes from James Bottomley: "Three driver fixes (and one version number update): a suspend hang in ufs, a qla hard lock on module removal and a qedi panic during discovery" * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: scsi: qla2xxx: Fix hardlockup in abort command during driver remove scsi: ufs: Avoid runtime suspend possibly being blocked forever scsi: qedi: update driver version to 8.37.0.20 scsi: qedi: Check targetname while finding boot target information
-
Merge tag 'powerpc-5.2-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman: "This is a frustratingly large batch at rc5. Some of these were sent earlier but were missed by me due to being distracted by other things, and some took a while to track down due to needing manual bisection on old hardware. But still we clearly need to improve our testing of KVM, and of 32-bit, so that we catch these earlier. Summary: seven fixes, all for bugs introduced this cycle. - The commit to add KASAN support broke booting on 32-bit SMP machines, due to a refactoring that moved some setup out of the secondary CPU path. - A fix for another 32-bit SMP bug introduced by the fast syscall entry implementation for 32-bit BOOKE. And a build fix for the same commit. - Our change to allow the DAWR to be force enabled on Power9 introduced a bug in KVM, where we clobber r3 leading to a host crash. - The same commit also exposed a previously unreachable bug in the nested KVM handling of DAWR, which could lead to an oops in a nested host. - One of the DMA reworks broke the b43legacy WiFi driver on some people's powermacs, fix it by enabling a 30-bit ZONE_DMA on 32-bit. - A fix for TLB flushing in KVM introduced a new bug, as it neglected to also flush the ERAT, this could lead to memory corruption in the guest. Thanks to: Aaro Koskinen, Christoph Hellwig, Christophe Leroy, Larry Finger, Michael Neuling, Suraj Jitindar Singh" * tag 'powerpc-5.2-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: KVM: PPC: Book3S HV: Invalidate ERAT when flushing guest TLB entries powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac KVM: PPC: Book3S HV: Only write DAWR[X] when handling h_set_dawr in real mode KVM: PPC: Book3S HV: Fix r3 corruption in h_set_dabr() powerpc/32: fix build failure on book3e with KVM powerpc/booke: fix fast syscall entry on SMP powerpc/32s: fix initial setup of segment registers on secondary CPU
-
Marcel Holtmann authored
When trying to align the minimum encryption key size requirement for Bluetooth connections, it turns out doing this in a central location in the HCI connection handling code is not possible. Original Bluetooth versions up to 2.0 used a security model where the L2CAP service would enforce authentication and encryption. Starting with Bluetooth 2.1 and Secure Simple Pairing, that model changed so that the connection initiator is responsible for providing an encrypted ACL link before any L2CAP communication can happen. Now connecting Bluetooth 2.1 or later devices to Bluetooth 2.0 and earlier devices causes a regression. The encryption key size check needs to be moved out of the HCI connection handling into the L2CAP channel setup. To achieve this, the current check inside hci_conn_security() has been moved into the l2cap_check_enc_key_size() helper function, which is then called from four decision points inside L2CAP to cover all combinations of Secure Simple Pairing enabled devices and devices using legacy pairing and the legacy service security model. Fixes: d5bb334a ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections") Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203643 Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
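Roughly, the helper centralizes a check of this shape; this is paraphrased from memory rather than quoted from the patch (HCI_MIN_ENC_KEY_SIZE being the 7-byte minimum introduced by the earlier commit):

/* Called from the L2CAP connect/security decision points instead of from
 * hci_conn_security(), so legacy pre-2.1 devices that never encrypt the
 * ACL link are no longer rejected at connection setup time.
 */
static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
{
	return hcon->enc_key_size >= HCI_MIN_ENC_KEY_SIZE;
}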
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds authored
Pull networking fixes from David Miller: 1) Fix leak of unqueued fragments in ipv6 nf_defrag, from Guillaume Nault. 2) Don't access the DDM interface unless the transceiver implements it in bnx2x, from Mauro S. M. Rodrigues. 3) Don't double fetch 'len' from userspace in sock_getsockopt(), from JingYi Hou. 4) Sign extension overflow in lio_core, from Colin Ian King. 5) Various netem bug fixes wrt. corrupted packets from Jakub Kicinski. 6) Fix epollout hang in hvsock, from Sunil Muthuswamy. 7) Fix regression in default fib6_type, from David Ahern. 8) Handle memory limits in tcp_fragment more appropriately, from Eric Dumazet. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (24 commits) tcp: refine memory limit test in tcp_fragment() inet: clear num_timeout reqsk_alloc() net: mvpp2: debugfs: Add pmap to fs dump ipv6: Default fib6_type to RTN_UNICAST when not set net: hns3: Fix inconsistent indenting net/af_iucv: always register net_device notifier net/af_iucv: build proper skbs for HiperTransport net/af_iucv: remove GFP_DMA restriction for HiperTransport net: dsa: mv88e6xxx: fix shift of FID bits in mv88e6185_g1_vtu_loadpurge() hvsock: fix epollout hang from race condition net/udp_gso: Allow TX timestamp with UDP GSO net: netem: fix use after free and double free with packet corruption net: netem: fix backlog accounting for corrupted GSO frames net: lio_core: fix potential sign-extension overflow on large shift tipc: pass tunnel dev as NULL to udp_tunnel(6)_xmit_skb ip6_tunnel: allow not to count pkts on tstats by passing dev as NULL ip_tunnel: allow not to count pkts on tstats by setting skb's dev to NULL tun: wake up waitqueues after IFF_UP is set net: remove duplicate fetch in sock_getsockopt tipc: fix issues with early FAILOVER_MSG from peer ...
-
Eric Dumazet authored
tcp_fragment() might be called for skbs in the write queue. Memory limits might have been exceeded because tcp_sendmsg() only checks limits at full skb (64KB) boundaries. Therefore, we need to make sure tcp_fragment() won't punish applications that might have set up very low SO_SNDBUF values. Fixes: f070ef2a ("tcp: tcp_fragment() should apply sane memory limits") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Christoph Paasch <cpaasch@apple.com> Tested-by: Christoph Paasch <cpaasch@apple.com> Signed-off-by: David S. Miller <davem@davemloft.net>
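The gist of the refinement, paraphrased as a standalone predicate rather than as the actual diff (enum tcp_queue distinguishes write-queue skbs from retransmit-queue skbs):

/* Only police the over-limit case for retransmit-queue skbs; write-queue
 * skbs were already accepted and charged by tcp_sendmsg(), so fragmenting
 * them must not fail just because SO_SNDBUF is tiny.
 */
static bool tcp_fragment_over_limit(const struct sock *sk, long limit,
				    enum tcp_queue tcp_queue)
{
	return (sk->sk_wmem_queued >> 1) > limit &&
	       tcp_queue != TCP_FRAG_IN_WRITE_QUEUE;
}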
-