- 09 Feb, 2022 40 commits
-
Ioana Ciornei authored
Once we added support in dpaa2-eth for driver-level software TSO, we observed the following situation: if the EQCR CI (consumer index) is read from the cache-enabled area, we sometimes end up with a computed value of available enqueue entries bigger than the size of the ring. This eventually leads to multiple enqueues of the same FD, which causes the same FD to end up on the Tx confirmation path and the same skb to be freed twice. Just read the consumer index from the cache-inhibited area so that we avoid this situation. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
This patch adds support for driver-level TSO in the dpaa2-eth driver using the TSO API. There is not much to say about this specific implementation. We use the usual tso_build_hdr() and tso_build_data() to create each data segment, and we create an array of S/G FDs where the first S/G entry references the header data and the remaining ones the data portion. For the S/G table buffer we use the same cache of buffers used in the other non-GSO cases - dpaa2_eth_sgt_get() and dpaa2_eth_sgt_recycle(). We cannot keep a DMA-coherent buffer for all the TSO headers because the DPAA2 architecture does not work in a ring-based fashion, so we just allocate a buffer each time. Even with these limitations we get the following improvement in TCP termination on the LX2160A SoC, on a single A72 core running at 2.2GHz: before: 6.38 Gbit/s, after: 8.48 Gbit/s. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
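For readers unfamiliar with the TSO API, the segmentation loop has roughly the shape below (a minimal sketch modeled on existing users of include/net/tso.h; the FD/S-G construction, DMA mapping and error paths are the driver-specific parts this patch adds):

    #include <linux/skbuff.h>
    #include <net/tso.h>

    static void tso_segment_sketch(struct sk_buff *skb)
    {
            int hdr_len, total_len, data_left, size;
            struct tso_t tso;

            hdr_len = tso_start(skb, &tso);
            total_len = skb->len - hdr_len;

            while (total_len > 0) {
                    char hdr[TSO_HEADER_SIZE]; /* real drivers DMA-map this */

                    data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
                    total_len -= data_left;

                    /* build the L2/L3/L4 headers for this segment */
                    tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);

                    /* walk the payload for this segment, fragment by fragment */
                    while (data_left > 0) {
                            size = min_t(int, tso.size, data_left);
                            /* tso.data points at 'size' bytes of payload; a
                             * real driver adds an S/G entry for it here */
                            data_left -= size;
                            tso_build_data(skb, &tso, size);
                    }
            }
    }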
-
Ioana Ciornei authored
Up until now, the __dpaa2_eth_tx function used a single FD on the stack to construct the structure to be enqueued. Since we are now preparing the groundwork to add support for TSO done in software at the driver level, the same function needs to work with an array of FDs and enqueue as many as the build_*_fd functions create. Make the necessary adjustments to do this. These include: keeping an array of FDs in a per-CPU structure, cleaning up the necessary FDs before populating the array, and then retrying the enqueue process until all the generated FDs are enqueued or the maximum number of retries is reached. This patch does not change the fact that only a single FD results from a __dpaa2_eth_tx call; it just creates the necessary changes for the next patch. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
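The enqueue-with-retry loop described above has roughly this shape (a hedged, generic sketch; the enqueue callback, its semantics and the retry limit are simplified stand-ins for the driver's internals):

    #include <linux/errno.h>
    #include <soc/fsl/dpaa2-fd.h>

    #define ENQUEUE_RETRIES 10 /* illustrative limit */

    /* 'enqueue' returns how many FDs the hardware accepted, or <= 0 when
     * the ring is busy.
     */
    static int tx_enqueue_all(struct dpaa2_fd *fds, int num_fds,
                              int (*enqueue)(struct dpaa2_fd *fds, int nr))
    {
            int enqueued = 0, retries = 0, done;

            while (enqueued < num_fds && retries < ENQUEUE_RETRIES) {
                    /* retry only the FDs not yet accepted */
                    done = enqueue(&fds[enqueued], num_fds - enqueued);
                    if (done <= 0) {
                            retries++;
                            continue;
                    }
                    enqueued += done;
            }
            /* the caller must clean up fds[enqueued..num_fds-1] on failure */
            return enqueued == num_fds ? 0 : -EBUSY;
    }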
-
Ioana Ciornei authored
Instead of allocating memory for an S/G table each time a nonlinear skb is processed, and then freeing it on the Tx confirmation path, use the S/G table cache to reuse the memory. For this to work we have to change the size of the cached buffers so that they can hold the maximum number of scatterlist entries. Other than that, each allocate/free call is replaced by a call to the dpaa2_eth_sgt_get/dpaa2_eth_sgt_recycle functions introduced in the previous patch. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
The dpaa2-eth driver uses, in certain circumstances, a buffer cache for the S/G tables needed in case of an S/G FD. At the moment, the interaction with the cache is open-coded and cannot be reused easily. Add two new functions - dpaa2_eth_sgt_get and dpaa2_eth_sgt_recycle - which help with code reusability. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
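The get/recycle pair around such a cache typically looks like the sketch below (hedged: the structure, slot count and GFP flags here are illustrative, not the exact dpaa2-eth per-CPU implementation):

    #include <linux/slab.h>
    #include <linux/string.h>

    #define SGT_CACHE_SLOTS 16 /* illustrative */

    struct sgt_cache {
            void *buf[SGT_CACHE_SLOTS];
            u16 count;
    };

    static void *dpaa2_eth_sgt_get(struct sgt_cache *cache, size_t sgt_size)
    {
            if (cache->count) /* fast path: reuse a cached buffer */
                    return cache->buf[--cache->count];
            return kzalloc(sgt_size, GFP_ATOMIC); /* cache empty: allocate */
    }

    static void dpaa2_eth_sgt_recycle(struct sgt_cache *cache, void *sgt_buf,
                                      size_t sgt_size)
    {
            if (cache->count >= SGT_CACHE_SLOTS) { /* cache full: free */
                    kfree(sgt_buf);
                    return;
            }
            memset(sgt_buf, 0, sgt_size); /* scrub before caching */
            cache->buf[cache->count++] = sgt_buf;
    }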
-
Ioana Ciornei authored
Instead of allocating memory and then manually aligning it to the desired value, use napi_alloc_frag_align() directly to streamline the process. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
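The conversion amounts to the following (an illustrative before/after; 'size' and 'align' stand in for the driver's actual buffer parameters, and align must be a power of two):

    #include <linux/skbuff.h>

    static void *get_aligned_frag(unsigned int size, unsigned int align)
    {
            /* Before: over-allocate, then align by hand:
             *   void *buf = napi_alloc_frag(size + align);
             *   return PTR_ALIGN(buf, align);
             */

            /* After: the helper returns an already-aligned fragment */
            return napi_alloc_frag_align(size, align);
    }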
-
Ioana Ciornei authored
In the next patches we'll be moving things around in the mentioned function and also adding some new variable declarations. Before all this, clean up the variable declaration order. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Hariprasad Kelam says: ==================== Priority flow control support for RVU netdev Under network congestion, instead of pausing all traffic on a link, PFC allows the user to selectively pause traffic according to its class. This series of patches adds PFC support for the RVU netdev drivers. Patch 1 adds support to disable pause frames by default, since with PFC the user can enable either PFC or 802.3 pause frames. Patches 2 & 3 add resource management support for flow control and configure the necessary registers for PFC. Patch 4 adds dcb ops registration for the netdev drivers. V2 changes: Fix compilation error by exporting the required symbol 'otx2_config_pause_frm' ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hariprasad Kelam authored
Data center bridging is designed to eliminate packet loss due to queue overflow by adding enhancements to Ethernet networks, such as priority flow control. This patch adds support for management of Priority Flow Control (PFC) on Octeontx2 and CN10K interfaces.

To enable PFC for all priorities:
    dcb pfc set dev eth0 prio-pfc all:on/off

To enable PFC on selected priorities:
    dcb pfc set dev eth0 prio-pfc 0:on/off 1:on/off ..7:on/off

With the ntuple commands the user can map a priority to receive queues. On queue overflow, NIX will assert backpressure such that PFC pause frames are generated with the mapped priority. To map priority 7 to queue 1:
    ethtool -U eth0 flow-type ether dst xx:xx:xx:xx:xx:xx vlan 0xe00a m 0x1fff queue 1

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hariprasad Kelam authored
The CN10K MAC block (RPM) and the Octeontx2 MAC block (CGX) both support PFC flow control and 802.3x flow control pause frames. Each MAC block supports a maximum of 4 LMACs, and the AF driver assigns the same (MAC, LMAC) pair to a PF and its VFs. As a PF and its VFs share the same (MAC, LMAC) pair, we need resource management to address the scenarios below: 1. Keep PFC and 802.3x pause frames mutually exclusive. 2. Reject a disable flow control request if another PF or VF has enabled it. Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
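The bookkeeping for both scenarios can be sketched as below (hedged: field and function names are illustrative, not the driver's actual per-LMAC state):

    #include <linux/errno.h>
    #include <linux/types.h>

    struct lmac_fc_state {
            u16 pfc_users;      /* PFs/VFs with PFC enabled on this (MAC, LMAC) */
            bool rx_8023_pause;
            bool tx_8023_pause;
    };

    static int lmac_enable_pfc(struct lmac_fc_state *fc)
    {
            /* scenario 1: keep PFC and 802.3x pause mutually exclusive */
            if (fc->rx_8023_pause || fc->tx_8023_pause)
                    return -EPERM;
            fc->pfc_users++;
            return 0;
    }

    static int lmac_disable_flow_control(struct lmac_fc_state *fc)
    {
            /* scenario 2: reject disable while another PF/VF relies on it */
            if (fc->pfc_users)
                    return -EPERM;
            fc->rx_8023_pause = false;
            fc->tx_8023_pause = false;
            return 0;
    }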
-
Sunil Kumar Kori authored
The priority-based flow control (802.1Qbb) mechanism is similar to Ethernet pause frames (802.3x), except that instead of pausing all traffic on a link, PFC allows the user to selectively pause traffic according to its class. The Octeontx2 MAC block (CGX) and the CN10K MAC block (RPM) both support PFC. As the upper-layer mbox handler is the same for both MACs, this patch configures PFC by calling the appropriate callbacks. Signed-off-by: Sunil Kumar Kori <skori@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hariprasad Kelam authored
The current implementation enables 802.3x pause frames by default. As the CGX and RPM blocks also support PFC (priority flow control), instead of the driver enabling one of them, enable either only upon request from the PF or its VFs. Also add support to disable pause frames on driver unbind. Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jeremy Kerr says: ==================== MCTP tag control interface This series implements a small interface for userspace-controlled message tag allocation for the MCTP protocol. Rather than leaving the kernel to allocate per-message tag values, userspace can explicitly allocate (and release) message tags through two new ioctls: SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG. In order to do this, we first introduce some minor changes to the tag handling, including a couple of new tests for the route input paths. As always, any comments/queries/etc are most welcome. v2: - make mctp_lookup_prealloc_tag static - minor checkpatch formatting fixes ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Johnston authored
This change adds a couple of new ioctls for mctp sockets: SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG. These ioctls provide facilities for explicit allocation / release of tags, overriding the automatic allocate-on-send/release-on-reply and timeout behaviours. This allows userspace more control over messages that may not fit a simple request/response model. In order to indicate a pre-allocated tag to the sendmsg() syscall, we introduce a new flag to the struct sockaddr_mctp.smctp_tag value: MCTP_TAG_PREALLOC. Additional changes from Jeremy Kerr <jk@codeconstruct.com.au>. Contains a fix that was: Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Matt Johnston <matt@codeconstruct.com.au> Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
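A userspace sketch of the new flow follows (hedged: the network, EID and message-type values are illustrative, and it assumes kernel uapi headers that ship these ioctls; per the series, the kernel fills ctl.tag on SIOCMCTPALLOCTAG and the pre-allocated tag is then passed via smctp_tag with MCTP_TAG_PREALLOC set):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/mctp.h>

    #ifndef AF_MCTP
    #define AF_MCTP 45
    #endif

    int main(void)
    {
            int sd = socket(AF_MCTP, SOCK_DGRAM, 0);
            struct mctp_ioc_tag_ctl ctl = { .peer_addr = 9 }; /* peer EID */
            struct sockaddr_mctp addr = {
                    .smctp_family = AF_MCTP,
                    .smctp_network = 1,
                    .smctp_addr.s_addr = 9,
                    .smctp_type = 0x01,
            };
            char buf[] = "hello";

            if (sd < 0 || ioctl(sd, SIOCMCTPALLOCTAG, &ctl)) /* kernel fills ctl.tag */
                    return 1;

            /* use the pre-allocated tag instead of allocate-on-send */
            addr.smctp_tag = ctl.tag | MCTP_TAG_PREALLOC;
            sendto(sd, buf, sizeof(buf), 0,
                   (struct sockaddr *)&addr, sizeof(addr));

            /* release the tag explicitly once the exchange is done */
            ioctl(sd, SIOCMCTPDROPTAG, &ctl);
            return 0;
    }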
-
Jeremy Kerr authored
Currently, we require an exact match on an incoming packet's dest address, and the key's local_addr field. In a future change, we may want to set up a key before packets are routed, meaning we have no local address to match on. This change allows key lookups to match on local_addr = MCTP_ADDR_ANY. Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jeremy Kerr authored
Currently, we have a couple of paths that check that an EID matches, or the match value is MCTP_ADDR_ANY. Rather than open coding this, add a little helper. Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
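The helper is essentially a one-liner of this shape (a hedged sketch of what the commit describes):

    #include <linux/mctp.h>
    #include <linux/types.h>

    /* true when 'eid' satisfies 'match', which may be a wildcard */
    static bool mctp_address_matches(mctp_eid_t match, mctp_eid_t eid)
    {
            return match == eid || match == MCTP_ADDR_ANY;
    }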
-
Jeremy Kerr authored
This change adds a few more tests to check the key/tag lookups on route input. We add a specific entry to the keys lists, route a packet with specific header values, and check for key match/mismatch. Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jeremy Kerr authored
This is a definition for the tag-owner flag, which has TO as a standard abbreviation. We'll want to add a helper for the actual tag value in a future change. Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue. Tony Nguyen says: ==================== 40GbE Intel Wired LAN Driver Updates 2022-02-08 Joe Damato says: This patch set makes several updates to the i40e driver stats collection and reporting code to help users of i40e get a better sense of how the driver is performing and interacting with the rest of the kernel. These patches include some new stats (like waived and busy) which were inspired by other drivers that track stats using the same nomenclature. The new stats and an existing stat, rx_reuse, are now accessible with ethtool to make harvesting this data more convenient for users. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Make sure to test that the skb has a dst attached to it.

general protection fault, probably for non-canonical address 0xdffffc0000000011: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000088-0x000000000000008f]
CPU: 0 PID: 32650 Comm: syz-executor.4 Not tainted 5.17.0-rc2-next-20220204-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:ip6_tnl_xmit+0x2140/0x35f0 net/ipv6/ip6_tunnel.c:1127
Code: 4d 85 f6 0f 85 c5 04 00 00 e8 9c b0 66 f9 48 83 e3 fe 48 b8 00 00 00 00 00 fc ff df 48 8d bb 88 00 00 00 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 07 7f 05 e8 11 25 b2 f9 44 0f b6 b3 88 00 00
RSP: 0018:ffffc900141b7310 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffc9000c77a000
RDX: 0000000000000011 RSI: ffffffff8811f854 RDI: 0000000000000088
RBP: ffffc900141b7480 R08: 0000000000000000 R09: 0000000000000008
R10: ffffffff8811f846 R11: 0000000000000008 R12: ffffc900141b7548
R13: ffff8880297c6000 R14: 0000000000000000 R15: ffff8880351c8dc0
FS:  00007f9827ba2700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b31322000 CR3: 0000000033a70000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 ipxip6_tnl_xmit net/ipv6/ip6_tunnel.c:1386 [inline]
 ip6_tnl_start_xmit+0x71e/0x1830 net/ipv6/ip6_tunnel.c:1435
 __netdev_start_xmit include/linux/netdevice.h:4683 [inline]
 netdev_start_xmit include/linux/netdevice.h:4697 [inline]
 xmit_one net/core/dev.c:3473 [inline]
 dev_hard_start_xmit+0x1eb/0x920 net/core/dev.c:3489
 __dev_queue_xmit+0x2a24/0x3760 net/core/dev.c:4116
 packet_snd net/packet/af_packet.c:3057 [inline]
 packet_sendmsg+0x2265/0x5460 net/packet/af_packet.c:3084
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:725
 sock_write_iter+0x289/0x3c0 net/socket.c:1061
 call_write_iter include/linux/fs.h:2075 [inline]
 do_iter_readv_writev+0x47a/0x750 fs/read_write.c:726
 do_iter_write+0x188/0x710 fs/read_write.c:852
 vfs_writev+0x1aa/0x630 fs/read_write.c:925
 do_writev+0x27f/0x300 fs/read_write.c:968
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f9828c2d059

Fixes: c1f55c5e ("ip6_tunnel: allow routing IPv4 traffic in NBMA mode") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Qing Deng <i@moy.cat> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tianyu Lan authored
netvsc_device_remove() calls vunmap(), which should not be called in interrupt context. The current code calls hv_unmap_memory() in free_netvsc_device(), which is an RCU callback and may be called in interrupt context. This triggers BUG_ON(in_interrupt()) in vunmap(). Fix it by moving hv_unmap_memory() to netvsc_device_remove(). Fixes: 846da38d ("net: netvsc: Add Isolation VM support for netvsc driver") Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue. Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2022-02-07 Corinna Vinschen says: Fix the kernel warning "Missing unregister, handled but fix driver" when running, e.g., $ ethtool -G eth0 rx 1024 on igc. Remove memset hack from igb and align igb code to igc. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Biju Das authored
Document the Gigabit Ethernet IP found on the RZ/G2UL SoC. The Gigabit Ethernet interface is identical to the one found on the RZ/G2L SoC. No driver changes are required, as the generic compatible string "renesas,rzg2l-gbeth" will be used as a fallback. Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Biju Das authored
Document the Gigabit Ethernet IP found on the RZ/V2L SoC. The Gigabit Ethernet interface is identical to the one found on the RZ/G2L SoC. No driver changes are required, as the generic compatible string "renesas,rzg2l-gbeth" will be used as a fallback. Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luiz Angelo Daros de Luca authored
Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20220208053210.14831-1-luizluca@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andy Shevchenko authored
The default values for the driver.pm hooks are NULL. Hence drop the unused pch_pm_ops. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-6-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andy Shevchenko authored
This makes the error handling much simpler than open-coding everything, and in addition makes the probe function smaller and tidier. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-5-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andy Shevchenko authored
Eliminate some boilerplate code by using module_pci_driver() instead of init/exit, and, if needed, moving the salient bits from init into probe. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-4-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
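For reference, the macro generates exactly the init/exit boilerplate it replaces (generic sketch; the driver struct name here is illustrative, not this driver's actual one):

    #include <linux/module.h>
    #include <linux/pci.h>

    static struct pci_driver example_pci_driver = {
            .name = "example",
            /* .id_table, .probe, .remove ... */
    };

    module_pci_driver(example_pci_driver);

    /* Expands to the equivalent of:
     *
     *      static int __init example_pci_driver_init(void)
     *      {
     *              return pci_register_driver(&example_pci_driver);
     *      }
     *      module_init(example_pci_driver_init);
     *
     *      static void __exit example_pci_driver_exit(void)
     *      {
     *              pci_unregister_driver(&example_pci_driver);
     *      }
     *      module_exit(example_pci_driver_exit);
     */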
-
Andy Shevchenko authored
There are already helper functions to do 64-bit I/O on 32-bit machines or buses; thus we don't need to reinvent the wheel. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-3-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
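The helpers in question work as below (a generic sketch; the register argument is illustrative). Including one of the io-64-nonatomic headers provides readq()/writeq() on 32-bit builds as two 32-bit accesses, with the ordering chosen by the header:

    /* lo-hi variant: low word is accessed first on 32-bit */
    #include <linux/io-64-nonatomic-lo-hi.h>

    static u64 read_hw_counter(void __iomem *reg)
    {
            /* one access on 64-bit; lo-then-hi pair on 32-bit */
            return readq(reg);
    }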
-
Andy Shevchenko authored
There are already helper functions to do 64-bit I/O on 32-bit machines or buses; thus we don't need to reinvent the wheel. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-2-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andy Shevchenko authored
Use mac_pton() instead of a custom approach. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/20220207210730.75252-1-andriy.shevchenko@linux.intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
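mac_pton() parses a colon-separated "aa:bb:cc:dd:ee:ff" string into a 6-byte array and returns false on malformed input. A minimal usage sketch (the wrapper name is illustrative):

    #include <linux/errno.h>
    #include <linux/if_ether.h>
    #include <linux/kernel.h>

    static int parse_station_address(const char *str, u8 mac[ETH_ALEN])
    {
            if (!mac_pton(str, mac))
                    return -EINVAL; /* not a valid MAC string */
            return 0;
    }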
-
Jakub Kicinski authored
Eric Dumazet says: ==================== net: speedup netns dismantles From: Eric Dumazet <edumazet@google.com> In this series, I made network namespace deletions more scalable, by 4x on the little benchmark described in this cover letter. - Remove the bottleneck on ipv6 addrconf, by replacing a global hash table with a per-netns one. - Rework many (struct pernet_operations)->exit() handlers into exit_batch() ones. This removes many rtnl acquisitions, and gives cleanup_net() kind of a priority over rtnl ownership. Tested on a host with 24 cpus (48 HT).

Test script:

    for nr in {1..10}
    do
        (for i in {1..10000}; do unshare -n /bin/bash -c "ifconfig lo up"; done) &
    done
    wait
    for i in {1..10}
    do
        sleep 1
        echo 3 >/proc/sys/vm/drop_caches
        grep net_namespace /proc/slabinfo
    done

Before: we can see the host struggles to clean up the netns, even after there are no new creations. Memory cost is high, because each netns consumes a good amount of memory.

    time ./unshare10.sh
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 82634 82634 3968 1 1 : tunables 24 12 8 : slabdata 82634 82634 0
    net_namespace 37214 37792 3968 1 1 : tunables 24 12 8 : slabdata 37214 37792 192

    real 6m57.766s
    user 3m37.277s
    sys 40m4.826s

After: we can see the script completes much faster; the kernel thread doing cleanup_net() keeps up just fine. Memory cost is not too big.

    time ./unshare10.sh
    net_namespace 9945 9945 4096 1 1 : tunables 24 12 8 : slabdata 9945 9945 0
    net_namespace 4087 4665 4096 1 1 : tunables 24 12 8 : slabdata 4087 4665 192
    net_namespace 4082 4607 4096 1 1 : tunables 24 12 8 : slabdata 4082 4607 192
    net_namespace 234 761 4096 1 1 : tunables 24 12 8 : slabdata 234 761 192
    net_namespace 224 751 4096 1 1 : tunables 24 12 8 : slabdata 224 751 192
    net_namespace 218 745 4096 1 1 : tunables 24 12 8 : slabdata 218 745 192
    net_namespace 193 667 4096 1 1 : tunables 24 12 8 : slabdata 193 667 172
    net_namespace 167 609 4096 1 1 : tunables 24 12 8 : slabdata 167 609 152
    net_namespace 167 609 4096 1 1 : tunables 24 12 8 : slabdata 167 609 152
    net_namespace 157 609 4096 1 1 : tunables 24 12 8 : slabdata 157 609 152

    real 1m43.876s
    user 3m39.728s
    sys 7m36.342s

==================== Link: https://lore.kernel.org/r/20220208045038.2635826-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
For some reason default_device_ops kept two exit methods: 1) default_device_exit() is called for each netns being dismantled in a cleanup_net() round. This acquires rtnl for each invocation. 2) default_device_exit_batch() is called once with the list of all netns in the batch, allowing for a single rtnl invocation. Get rid of the .exit() method and handle its logic from default_device_exit_batch(), to decrease the number of rtnl acquisitions to one. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
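The .exit() to .exit_batch() conversion used throughout this series follows the pattern below (a hedged sketch; the 'foo' names are illustrative). The batch handler receives the whole list of dying netns and can take rtnl once for all of them:

    #include <linux/rtnetlink.h>
    #include <net/net_namespace.h>

    static void foo_per_netns_cleanup(struct net *net)
    {
            /* teardown that previously lived in the .exit() handler */
    }

    static void __net_exit foo_exit_batch(struct list_head *net_exit_list)
    {
            struct net *net;

            rtnl_lock(); /* one rtnl acquisition for the whole batch */
            list_for_each_entry(net, net_exit_list, exit_list)
                    foo_per_netns_cleanup(net);
            rtnl_unlock();
    }

    static struct pernet_operations foo_net_ops = {
            .exit_batch = foo_exit_batch,
    };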
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. Batching bond_net_exit() factorizes all rtnl acquisitions into a single one, giving cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <j.vosburgh@gmail.com> Cc: Veaceslav Falico <vfalico@gmail.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. Avoiding acquiring rtnl for each netns before calling cgw_remove_all_jobs() gives cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Oliver Hartkopp <socketcan@hartkopp.net> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. Avoiding acquiring rtnl for each netns before calling ipmr_rules_exit() gives cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. Avoiding acquiring rtnl for each netns before calling ip6mr_rules_exit() gives cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. fib6_rules_net_exit() seems a good candidate for exit_batch(), as this gives cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. Instead of acquiring rtnl at each fib_net_exit() invocation, add fib_net_exit_batch() so that rtnl is acquired once. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
cleanup_net() is competing with other rtnl users. nexthop_net_exit() seems a good candidate for exit_batch(), as this gives cleanup_net() a chance to progress much faster, holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-