- 16 May, 2020 5 commits
-
-
Jakub Kicinski authored
Having a channel config with no ability to RX or TX traffic is clearly wrong. Check for this in the core so the drivers don't have to. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net>
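A minimal sketch of the kind of core-level sanity check described here, assuming plain channel counts as inputs; the helper name and exact ethtool netlink plumbing are illustrative, not the actual kernel code:

  #include <errno.h>

  /*
   * Illustrative check only: a channel configuration must leave the device
   * able to both receive and transmit, i.e. at least one RX-capable and one
   * TX-capable channel (dedicated or combined) must remain.
   */
  static int check_channels(unsigned int rx, unsigned int tx, unsigned int combined)
  {
  	if (combined + rx == 0 || combined + tx == 0)
  		return -EINVAL;	/* no way to RX or no way to TX traffic */
  	return 0;
  }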
-
Christoph Paasch authored
RFC 8684 allows sending 32-bit DATA_ACKs as long as the peer is not sending 64-bit data-sequence numbers. The 64-bit DSN is only there for extreme scenarios where a very high-throughput subflow is combined with a long-RTT subflow, such that the high-throughput subflow wraps around the 32-bit sequence-number space within one RTT of the long-RTT subflow. It is thus a rare scenario, and we should use the 32-bit DATA_ACK instead for as long as possible. This reduces the TCP-option overhead by 4 bytes and thus makes space for an additional SACK block. It also makes tcpdumps much easier to read when the DSN and DATA_ACK are both either 32 or 64 bits. Signed-off-by: Christoph Paasch <cpaasch@apple.com> Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: David S. Miller <davem@davemloft.net>
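A hedged sketch of the rule the message describes; the helper names and condition are illustrative, not the actual MPTCP option-writing code:

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Illustrative only: use the 64-bit DATA_ACK format only while the peer
   * is sending 64-bit DSNs; otherwise fall back to the 32-bit DATA_ACK and
   * save 4 bytes of TCP option space (enough for one more SACK block).
   */
  static bool use_64bit_data_ack(bool peer_uses_64bit_dsn)
  {
  	return peer_uses_64bit_dsn;
  }

  static uint64_t data_ack_to_send(uint64_t ack64, bool peer_uses_64bit_dsn)
  {
  	if (use_64bit_data_ack(peer_uses_64bit_dsn))
  		return ack64;
  	return (uint32_t)ack64;	/* truncate to the lower 32 bits */
  }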
-
Nicolas Dichtel authored
The goal is to be able to inherit the initial devconf parameters from the current netns, i.e. the netns where this new netns has been created. This is useful in a container environment where /proc/sys is read-only. For example, if a pod is created with specific devconf parameters and has the capability to create a netns, the user expects to get the same parameters as its 'init_net', which is not the real init_net in this case. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Add driver-level bulking to the XDP_TX action. An array of frame descriptors is held for each Tx frame queue and populated accordingly when the action returned by the XDP program is XDP_TX. The frames are actually enqueued only when the array is full. At the end of the NAPI cycle, the queued frames are flushed in order to enqueue the remaining FDs. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
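A simplified sketch of the bulking scheme described here, with made-up names and a fixed batch size; the real driver operates on hardware frame descriptors per Tx frame queue:

  #include <stddef.h>

  #define XDP_TX_BATCH 8			/* illustrative batch size */

  struct xdp_tx_queue {
  	void *fds[XDP_TX_BATCH];	/* pending frame descriptors */
  	size_t count;
  };

  /* Stand-in for the real hardware enqueue of a batch of frame descriptors. */
  static void hw_enqueue(struct xdp_tx_queue *q, void **fds, size_t n)
  {
  	(void)q; (void)fds; (void)n;
  }

  /* Queue one XDP_TX frame; hit the hardware only when the batch is full. */
  static void xdp_tx_bulk(struct xdp_tx_queue *q, void *fd)
  {
  	q->fds[q->count++] = fd;
  	if (q->count == XDP_TX_BATCH) {
  		hw_enqueue(q, q->fds, q->count);
  		q->count = 0;
  	}
  }

  /* Called at the end of the NAPI poll to flush any leftover frames. */
  static void xdp_tx_flush(struct xdp_tx_queue *q)
  {
  	if (q->count) {
  		hw_enqueue(q, q->fds, q->count);
  		q->count = 0;
  	}
  }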
-
Kevin Lo authored
This patch makes checkpatch happy about tab usage. Signed-off-by: Kevin Lo <kevlo@kevlo.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 May, 2020 35 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says:
====================
mlx5-updates-2020-05-15

mlx5 core and mlx5e (netdev) updates:

1) Two fixes for release all FW pages support.
2) Improvement in calculating the send queue stop room on tx.
3) Flow steering auto-groups creation improvements.
4) TC offload fix for connection tracking with NAT action.
5) IPoIB support for self loopback to allow communication between ipoib pkey child interfaces on the same host.
6) DCBNL cleanup to avoid #ifdef DCBNL all over the main mlx5e code.
7) Small and trivial code cleanup.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Chancellor authored
When building with Clang:

  In file included from drivers/net/ethernet/ti/am65-cpsw-ethtool.c:15:
  drivers/net/ethernet/ti/am65-cpts.h:58:12: warning: unused function 'am65_cpts_ns_gettime' [-Wunused-function]
  static s64 am65_cpts_ns_gettime(struct am65_cpts *cpts)
  ^
  drivers/net/ethernet/ti/am65-cpts.h:63:12: warning: unused function 'am65_cpts_estf_enable' [-Wunused-function]
  static int am65_cpts_estf_enable(struct am65_cpts *cpts,
  ^
  drivers/net/ethernet/ti/am65-cpts.h:69:13: warning: unused function 'am65_cpts_estf_disable' [-Wunused-function]
  static void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx)
  ^
  3 warnings generated.

These functions need to be marked as inline, which adds __maybe_unused, to avoid these warnings; this is the usual pattern for stub functions.

Fixes: ec008fa2 ("ethernet: ti: am65-cpts: add routines to support taprio offload")
Link: https://github.com/ClangBuiltLinux/linux/issues/1026
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
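A generic sketch of the inline stub pattern the message refers to; types are simplified so the snippet is self-contained (the real header uses the kernel's s64 and the full am65_cpts definitions), and only the two signatures fully visible in the warnings are shown:

  struct am65_cpts;	/* opaque here; fully defined in the real driver */

  /*
   * When the CPTS feature is compiled out, the header provides no-op stubs.
   * Marking them static inline keeps Clang from emitting -Wunused-function
   * for translation units that never call them.
   */
  static inline long long am65_cpts_ns_gettime(struct am65_cpts *cpts)
  {
  	(void)cpts;
  	return 0;
  }

  static inline void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx)
  {
  	(void)cpts;
  	(void)idx;
  }

  /* am65_cpts_estf_enable() follows the same pattern; its full parameter
   * list is not visible in the warning output above, so it is omitted. */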
-
Tariq Toukan authored
Take the DCBNL-related definitions out of the common en.h header and use a dedicated header file to expose them. Some need not be exposed at all; use them locally in the .c file. Use stubs to eliminate the use of CONFIG_MLX5_CORE_EN_DCB in the generic control flows. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Maxim Mikityanskiy authored
Currently, different formulas are used to estimate the space that may be taken by WQEs in the SQ during a single packet transmit. This space is called stop room, and it's checked at the end of packet transmit to find out whether the next packet could overflow the SQ. If it could, the driver tells the kernel to stop sending further packets.

Many factors affect the stop room:

1. Padding with NOPs to avoid WQEs spanning over page boundaries.
2. Enabled and disabled offloads (TLS, upcoming MPWQE).
3. The maximum size of a WQE.

The padding is performed before every WQE if it doesn't fit the current page. The current formula assumes that only one padding will be required per packet, and it doesn't take into account that the WQEs posted during the transmission of a single packet might exceed the page size in very rare circumstances. For example, to hit this condition with 4096-byte pages, TLS offload will have to interrupt an almost-full MPWQE session, be in the resync flow and try to transmit close to the maximum amount of data.

To avoid SQ overflows in such rare cases after MPWQE is added, this patch introduces a more robust formula to estimate the stop room. The new formula uses the fact that a WQE of size X will not require more than X-1 WQEBBs of padding. More exact estimations are possible, but they result in much more complex and error-prone code for little gain.

Before this patch, the TLS stop room included space for both INNOVA and ConnectX TLS offloads, which couldn't run at the same time anyway, so this patch accounts only for the active one.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
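A minimal sketch of the padding bound described above, using an illustrative name rather than the actual mlx5 helper:

  #include <stdint.h>

  /*
   * A WQE that spans wqe_bbs WQEBBs may be preceded by at most
   * wqe_bbs - 1 NOP WQEBBs of padding so that it does not cross a page
   * boundary, so the per-WQE stop room is bounded by 2 * wqe_bbs - 1.
   */
  static inline uint16_t stop_room_for_wqe(uint16_t wqe_bbs)
  {
  	return 2 * wqe_bbs - 1;
  }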
-
Erez Shitrit authored
After enabling loopback packets for IPoIB, we need to drop the packets that this HCA has replicated and that came back to the same interface that sent them. Fixes: 4c6c615e ("net/mlx5e: IPoIB, Add PKEY child interface nic profile") Signed-off-by: Erez Shitrit <erezsh@mellanox.com> Reviewed-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Erez Shitrit authored
Enable loopback of unicast and multicast traffic for IPoIB enhanced mode. This allows interfaces with the same pkey to communicate with each other, e.g. cloned interfaces located in different namespaces. Signed-off-by: Erez Shitrit <erezsh@mellanox.com> Reviewed-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Roi Dayan authored
A chain of rules may perform the CT action again after CT NAT. Before this fix, matching would break, as we entered the CT table after the NAT changes instead of the CT NAT table. Fix this by adding pre-CT and pre-CT-NAT tables that skip the ct/ct_nat tables and go straight to the post_ct table if CT/NAT was already done. Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Paul Blakey <paulb@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Eran Ben Elisha authored
Move mlx5_read_internal_timer() into the lib/clock.c file, as it is being used there. As such, make the function static. In addition, rearrange the header includes to support the function move. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Reviewed-by: Aya Levin <ayal@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Paul Blakey authored
Currently, if one thread tries to add an entry to an autogrouped table with no free matching group while another thread is in the process of creating a new matching autogroup, it does not wait for the new group's creation and creates an unnecessary new autogroup. Instead of skipping the inactive groups, wait on the write lock of those groups. Signed-off-by: Paul Blakey <paulb@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Parav Pandit authored
mlx5_unload_one() is done with cleanup = true only once. So instead of draining the health wq inside the if(), do it directly during PCI device removal. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Moshe Shemesh <moshe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Parav Pandit authored
Having multiple error unwinding paths is error prone. Let's have just one error unwinding path. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Moshe Shemesh <moshe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Eran Ben Elisha authored
On systems with a page size larger than 4K, a fwp object has several 4K chunks. Fix a bug in the fwp free flow where the chunk address was dropped and fwp->addr was used instead (the first chunk's address). This caused a wrong update of fwp->bitmask, which can later cause errors in the re-alloc fwp chunk flow. In order to fix this, re-factor the release flow:

- Free 4k: releases a specific 4k chunk inside the fwp, identified by its starting address.
- Free fwp: unconditionally releases the whole fwp and its resources.

Free addr calls free fwp if all chunks were released, in order to share code. In addition, fix npages to count all released chunks correctly.

Fixes: c6168161 ("net/mlx5: Add support for release all pages event")
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
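An illustrative sketch of the chunk-indexing fix described here, with made-up names and simplified bitmask semantics; it only shows that the freed chunk's own address, not fwp->addr, selects the bitmask bit:

  #include <stdint.h>

  #define FW_CHUNK_SIZE 4096u

  struct fw_page_sketch {
  	uint64_t addr;		/* address of the first 4K chunk */
  	uint32_t bitmask;	/* one bit per free 4K chunk */
  };

  /* Mark the 4K chunk that starts at 'addr' as free again. */
  static void free_4k_chunk(struct fw_page_sketch *fwp, uint64_t addr)
  {
  	unsigned int n = (unsigned int)((addr - fwp->addr) / FW_CHUNK_SIZE);

  	/* Using fwp->addr here instead of 'addr' would always flip bit 0,
  	 * which is the kind of bug the patch fixes. */
  	fwp->bitmask |= 1u << n;
  }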
-
Eran Ben Elisha authored
The cited patch assumes that all chunks in a fw page belong to the same function, thus the driver must dedicate the fw page to the requesting function, which is actually what was intended in the original fw pages allocator design, hence the fwp->func_id! Up until the cited patch everything worked ok, but now "release all pages" is broken on systems with page_size > 4k. Fix this by dedicating the fw page to the requesting function id via adding a func_id parameter to the alloc_4k() function. Fixes: c6168161 ("net/mlx5: Add support for release all pages event") Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
David S. Miller authored
Move the bpf verifier trace check into the new switch statement in HEAD. Resolve the overlapping changes in hinic, where bug fixes overlap the addition of VF support. Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from David Miller:

1) Fix sk_psock reference count leak on receive, from Xiyu Yang.
2) CONFIG_HNS should be invisible, from Geert Uytterhoeven.
3) Don't allow locking route MTUs in ipv6, RFCs actually forbid this, from Maciej Żenczykowski.
4) ipv4 route redirect backoff wasn't actually enforced, from Paolo Abeni.
5) Fix netprio cgroup v2 leak, from Zefan Li.
6) Fix infinite loop on rmmod in conntrack, from Florian Westphal.
7) Fix tcp SO_RCVLOWAT hangs, from Eric Dumazet.
8) Various bpf probe handling fixes, from Daniel Borkmann.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (68 commits)
  selftests: mptcp: pm: rm the right tmp file
  dpaa2-eth: properly handle buffer size restrictions
  bpf: Restrict bpf_trace_printk()'s %s usage and add %pks, %pus specifier
  bpf: Add bpf_probe_read_{user, kernel}_str() to do_refine_retval_range
  bpf: Restrict bpf_probe_read{, str}() only to archs where they work
  MAINTAINERS: Mark networking drivers as Maintained.
  ipmr: Add lockdep expression to ipmr_for_each_table macro
  ipmr: Fix RCU list debugging warning
  drivers: net: hamradio: Fix suspicious RCU usage warning in bpqether.c
  net: phy: broadcom: fix BCM54XX_SHD_SCR3_TRDDAPD value for BCM54810
  tcp: fix error recovery in tcp_zerocopy_receive()
  MAINTAINERS: Add Jakub to networking drivers.
  MAINTAINERS: another add of Karsten Graul for S390 networking
  drivers: ipa: fix typos for ipa_smp2p structure doc
  pppoe: only process PADT targeted at local interfaces
  selftests/bpf: Enforce returning 0 for fentry/fexit programs
  bpf: Enforce returning 0 for fentry/fexit progs
  net: stmmac: fix num_por initialization
  security: Fix the default value of secid_to_secctx hook
  libbpf: Fix register naming in PT_REGS s390 macros
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Linus Torvalds authored
Pull rdma fixes from Jason Gunthorpe:
"A few minor bug fixes for user visible defects, and one regression:

 - Various bugs from static checkers and syzkaller
 - Add missing error checking in mlx4
 - Prevent RTNL lock recursion in i40iw
 - Fix segfault in cxgb4 in peer abort cases
 - Fix a regression added in 5.7 where the IB_EVENT_DEVICE_FATAL could be lost, and wasn't delivered to all the FDs"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  RDMA/uverbs: Move IB_EVENT_DEVICE_FATAL to destroy_uobj
  RDMA/uverbs: Do not discard the IB_EVENT_DEVICE_FATAL event
  RDMA/iw_cxgb4: Fix incorrect function parameters
  RDMA/core: Fix double put of resource
  IB/core: Fix potential NULL pointer dereference in pkey cache
  IB/hfi1: Fix another case where pq is left on waitlist
  IB/i40iw: Remove bogus call to netdev_master_upper_dev_get()
  IB/mlx4: Test return value of calls to ib_get_cached_pkey
  RDMA/rxe: Always return ERR_PTR from rxe_create_mmap_info()
  i40iw: Fix error handling in i40iw_manage_arp_cache()
-
Linus Torvalds authored
Merge tag 'linux-kselftest-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest fixes from Shuah Khan:

 - lkdtm runner fixes to prevent dmesg clearing and shellcheck errors
 - ftrace test handling when test module doesn't exist
 - nsfs test fix to replace zero-length array with flexible-array
 - dmabuf-heaps test fix to return clear error value

* tag 'linux-kselftest-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests/lkdtm: Use grep -E instead of egrep
  selftests/lkdtm: Don't clear dmesg when running tests
  selftests/ftrace: mark irqsoff_tracer.tc test as unresolved if the test module does not exist
  tools/testing: Replace zero-length array with flexible-array
  kselftests: dmabuf-heaps: Fix confused return value on expected error testing
-
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Linus Torvalds authored
Pull RISC-V fixes from Palmer Dabbelt:
"A handful of build fixes, all found by Huawei's autobuilder. None of these patches should have any functional impact on kernels that build, and they're mostly related to various features intermingling with !MMU. While some of these might be better hoisted to generic code, it seems better to have the simple fixes in the meanwhile.

As far as I know these are the only outstanding patches for 5.7"

* tag 'riscv-for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: mmiowb: Fix implicit declaration of function 'smp_processor_id'
  riscv: pgtable: Fix __kernel_map_pages build error if NOMMU
  riscv: Make SYS_SUPPORTS_HUGETLBFS depends on MMU
  riscv: Disable ARCH_HAS_DEBUG_VIRTUAL if NOMMU
  riscv: Add pgprot_writecombine/device and PAGE_SHARED defination if NOMMU
  riscv: stacktrace: Fix undefined reference to `walk_stackframe'
  riscv: Fix unmet direct dependencies built based on SOC_VIRT
  riscv: perf: RISCV_BASE_PMU should be independent
  riscv: perf_event: Make some funciton static
-
David S. Miller authored
Paolo Abeni says:
====================
mptcp: fix MP_JOIN failure handling

Currently, if we hit an MP_JOIN failure on the third ack, the child socket is closed with a reset, but the request socket is not deleted, causing weird behaviors.

The main problem is that MPTCP's MP_JOIN code needs to plug its own 'valid 3rd ack' checks and the current TCP callbacks do not allow that.

This series tries to address the above shortcoming by introducing a new MPTCP-specific bit in a 'struct tcp_request_sock' hole, and leveraging that to allow tcp_check_req to release the request socket when needed. The above allows cleaning up the current MPTCP hooking in tcp_check_req() a bit.

An alternative solution, possibly cleaner but more invasive, would be changing the 'bool *own_req' syn_recv_sock() argument into 'int *req_status' and letting MPTCP set it to 'REQ_DROP'.

v1 -> v2:
 - be more conservative about drop_req initialization

RFC -> v1:
 - move the drop_req bit inside tcp_request_sock (Eric)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
Currently, on MP_JOIN failure we reset the child socket, but leave the request socket untouched. tcp_check_req will deal with it according to the 'tcp_abort_on_overflow' sysctl value - by default the req socket will stay alive. The above leads to inconsistent behavior on MP_JOIN failure, and to bad listener overflow accounting. This patch addresses the issue by leveraging the infrastructure just introduced to ask the TCP stack to drop the req on failure. The child socket is not freed anymore by subflow_syn_recv_sock(); instead it's moved to a dead state and will be disposed of by the next sock_put done by the TCP stack, so that listener overflow accounting is not affected by MP_JOIN failure. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Christoph Paasch <cpaasch@apple.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
Move the steps to prepare an inet_connection_sock for forced disposal inside a separate helper. No functional changes intended; this will just simplify the next patch. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Christoph Paasch <cpaasch@apple.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
MP_JOIN subflows must not land into the accept queue. Currently tcp_check_req() calls an MPTCP-specific helper to detect such a scenario. That helper leverages the subflow context to check for MP_JOIN subflows. We also need to deal with MP_JOIN failures, even when the subflow context is not available due to allocation failure. A possible solution would be changing the syn_recv_sock() signature to allow returning a more descriptive action/error code and dealing with that in tcp_check_req(). Since the above need is MPTCP-specific, this patch instead uses a TCP request socket hole to add an MPTCP-specific flag. Such a flag is used by the MPTCP syn_recv_sock() to tell tcp_check_req() how to deal with the request socket. This change is a no-op for !MPTCP builds and makes the MPTCP code simpler. It also allows the next patch to deal correctly with MP_JOIN failure. v1 -> v2: - be more conservative on drop_req initialization (Mat) RFC -> v1: - move the drop_req bit inside tcp_request_sock (Eric) Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Reviewed-by: Christoph Paasch <cpaasch@apple.com> Signed-off-by: David S. Miller <davem@davemloft.net>
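A simplified sketch of the flag described here; the struct layout and helper names are illustrative stand-ins, not the exact kernel definitions (the series refers to the bit as drop_req inside tcp_request_sock):

  #include <stdbool.h>

  /* Illustrative stand-in for the hole used in struct tcp_request_sock. */
  struct tcp_request_sock_sketch {
  	unsigned int drop_req:1;	/* set by MPTCP's syn_recv_sock() on MP_JOIN failure */
  };

  /* Sketch of the tcp_check_req() side: honour the flag when deciding to
   * drop the request socket, instead of relying on tcp_abort_on_overflow. */
  static bool should_drop_req(const struct tcp_request_sock_sketch *treq)
  {
  	return treq->drop_req;
  }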
-
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Linus Torvalds authored
Pull arm64 fix from Catalin Marinas:
"Fix flush_icache_range() second argument in machine_kexec() to be an address rather than size"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: fix the flush_icache_range arguments in machine_kexec
-
Oleksij Rempel authored
A typical 100Base-T1 link should always be connected. If the link is in a short or open state, it is a failure. In most cases we won't be able to handle this issue automatically, but we need to log it or notify the user (if possible). With this patch, the cable will be tested on an "ip l s dev .. up" attempt and an ethnl notification will be sent to user space. This patch was tested with a TJA1102 PHY and the "ethtool --monitor" command. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
David S. Miller authored
Alexei Starovoitov says:
====================
pull-request: bpf 2020-05-15

The following pull-request contains BPF updates for your *net* tree.

We've added 9 non-merge commits during the last 2 day(s) which contain a total of 14 files changed, 137 insertions(+), 43 deletions(-).

The main changes are:

1) Fix secid_to_secctx LSM hook default value, from Anders.
2) Fix bug in mmap of bpf array, from Andrii.
3) Restrict bpf_probe_read to archs where they work, from Daniel.
4) Enforce returning 0 for fentry/fexit progs, from Yonghong.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kevin Lo authored
The BCM54811 PHY shares many similarities with the already supported BCM54810 PHY but additionally requires some semi-unique configuration. Signed-off-by: Kevin Lo <kevlo@kevlo.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Rahul Lakkireddy says: ==================== cxgb4: improve and tune TC-MQPRIO offload Patch 1 improves the Tx path's credit request and recovery mechanism when running under heavy load. Patch 2 adds ability to tune the burst buffer sizes of all traffic classes to improve performance for <= 1500 MTU, under heavy load. Patch 3 adds support to track EOTIDs and dump software queue contexts used by TC-MQPRIO offload. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Lakkireddy authored
Rework and add support for dumping EOTID software context used by TC-MQPRIO. Also track number of EOTIDs in use. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Lakkireddy authored
For each traffic class, firmware handles up to 4 * MTU amount of data per burst cycle. Under heavy load, this small buffer size is a bottleneck when buffering large TSO packets in the <= 1500 MTU case. Increase the burst buffer size to 8 * MTU when supported. Also, keep the driver's traffic class configuration API similar to its firmware API counterpart. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Lakkireddy authored
Request a credit update whenever half of the credits have been consumed, including those of the current request. Also, avoid retrying to post packets when there are no credits left. The credit update reply, delivered via interrupt, will eventually restore the credits and invoke the Tx path again. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
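A rough sketch of the heuristic described here, with illustrative names and simplified bookkeeping rather than the driver's actual credit accounting:

  #include <stdbool.h>

  /*
   * Ask hardware for a credit update once at least half of the queue's
   * credits are in use, counting the credits consumed by the request
   * currently being posted.
   */
  static bool want_credit_update(unsigned int in_use, unsigned int current_req,
  			       unsigned int total)
  {
  	return (in_use + current_req) >= total / 2;
  }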
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller authored
Alexei Starovoitov says:
====================
pull-request: bpf-next 2020-05-15

The following pull-request contains BPF updates for your *net-next* tree.

We've added 37 non-merge commits during the last 1 day(s) which contain a total of 67 files changed, 741 insertions(+), 252 deletions(-).

The main changes are:

1) bpf_xdp_adjust_tail() now allows to grow the tail as well, from Jesper.
2) bpftool can probe CONFIG_HZ, from Daniel.
3) CAP_BPF is introduced to isolate user processes that use BPF infra and to secure BPF networking services by dropping CAP_SYS_ADMIN requirement in certain cases, from Alexei.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
DENG Qingfang authored
Allow DSA to add VLAN entries even if VLAN filtering is disabled, so that enabling it later will not block the traffic of existing ports in the bridge. Signed-off-by: DENG Qingfang <dqfext@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vlad Buslov says:
====================
Implement classifier-action terse dump mode

The output rate of the current upstream kernel TC filter dump implementation is relatively low (~100k rules/sec depending on configuration). This constraint impacts the performance of software switch implementations that rely on TC for their datapath and periodically call the TC filter dump to update rule stats. Moreover, the TC filter dump outputs a lot of static data that doesn't change during the filter lifecycle (filter key, specific action details, etc.), which constitutes a significant portion of the payload of the resulting netlink packets and increases the number of syscalls necessary to dump all filters on a particular Qdisc.

In order to significantly improve the filter dump rate, this patch set implements a new mode of TC filter dump operation named "terse dump" mode. In this mode only the parameters necessary to identify the filter (handle, action cookie, etc.) and the data that can change during the filter lifecycle (filter flags, action stats, etc.) are preserved in the dump output, while everything else is omitted.

The userspace API is implemented using a new TCA_DUMP_FLAGS tlv with the only available flag value TCA_DUMP_FLAGS_TERSE. Internally, the new API requires individual classifier support (a new tcf_proto_ops->terse_dump() callback). Support for action terse dump is implemented in the act API and doesn't require changing individual action implementations.

The following table provides a performance comparison between the regular filter dump and the new terse dump mode for two classifier-action profiles: a minimal config with an L2 flower classifier and a single gact action, and a heavier config with an L2+5tuple flower classifier with tunnel_key+mirred actions.

 Classifier-action type      | dump        | terse dump  | X improvement
                             | (rules/sec) | (rules/sec) |
-----------------------------+-------------+-------------+---------------
 L2 with gact                |       141.8 |       293.2 |          2.07
 L2+5tuple tunnel_key+mirred |        76.4 |       198.8 |          2.60

Benchmark details: to measure the rate, the tc filter dump and terse dump commands are invoked on an ingress Qdisc that has one million filters configured, using the following commands:

> time sudo tc -s filter show dev ens1f0 ingress >/dev/null
> time sudo tc -s filter show terse dev ens1f0 ingress >/dev/null

The value in the results table is calculated by dividing the 1000000 total rules by the "real" time reported by the time command.

Setup details: 2x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz, 32GB memory
====================

Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthieu Baerts authored
"$err" is a variable pointing to a temp file. "$out" is not: only used as a local variable in "check()" and representing the output of a command line. Fixes: eedbc685 (selftests: add PM netlink functional tests) Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Depending on the WRIOP version, the buffer size on the RX path must be a multiple of 64 or 256. Handle this restriction properly by aligning down the buffer size to the necessary value. Also, use the new dynamically computed buffer size instead of the compile-time one. Fixes: 27c87486 ("dpaa2-eth: Use a single page per Rx buffer") Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
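A small sketch of the align-down rule described here, using a generic helper rather than the driver's actual code:

  #include <stddef.h>

  /* Round 'size' down to a multiple of 'align'; align must be a power of
   * two, which holds for the 64- and 256-byte WRIOP requirements. */
  static size_t rx_buf_align_down(size_t size, size_t align)
  {
  	return size & ~(align - 1);
  }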
-