- 17 Jun, 2021 25 commits
-
-
Peng Li authored
This patch removes some redundant blank lines. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ioana Ciornei says: ==================== net: mdio: setup both fwnode and of_node The first patch in this series fixes a bug introduced by mistake in the previous ACPI MDIO patch set. The next two patches are adding a new helper which takes a device and a fwnode_handle and populates both the of_node and fwnode so that we make sure that a bug like this does not happen anymore. Also, the new helper is used in the MDIO area. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Use the newly introduced helper to set up both the of_node and the fwnode for a given device. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
There are many places where both the fwnode_handle and the of_node of a device need to be populated. Add a function which does both so that we have consistency. Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
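A minimal sketch of what such a helper can look like (the function name here is hypothetical, not necessarily the one added by this patch):

    #include <linux/device.h>
    #include <linux/of.h>

    /* Hypothetical illustration: populate both handles of a device from
     * one fwnode_handle. to_of_node() returns NULL when the fwnode is
     * not an OF node, which keeps the two fields consistent. */
    static inline void example_device_set_node(struct device *dev,
                                               struct fwnode_handle *fwnode)
    {
            dev->fwnode = fwnode;
            dev->of_node = to_of_node(fwnode);
    }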
-
Ioana Ciornei authored
By mistake, the of_node of the MDIO device was not set up in the patch linked below. As a consequence, any PHY driver that depends on the of_node in its probe callback was not able to successfully finish its probe on a PHY, thus the Generic PHY driver was used instead. Fix this by actually setting up the of_node. Fixes: bc1bee3b ("net: mdiobus: Introduce fwnode_mdiobus_register_phy()") Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hayes Wang authored
Store the information of the pipes to avoid calling usb_rcvctrlpipe(), usb_sndctrlpipe(), usb_rcvbulkpipe(), usb_sndbulkpipe(), and usb_rcvintpipe() frequently. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
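As a rough sketch of the idea (struct and field names below are illustrative, not the r8152 driver's own), the pipe values can be computed once at probe time and reused afterwards:

    #include <linux/usb.h>

    /* Illustrative only: cache the pipe values instead of recomputing
     * them for every transfer. Endpoint numbers are just examples. */
    struct r8152_pipes_example {
            unsigned int ctrl_in, ctrl_out;
            unsigned int bulk_in, bulk_out;
            unsigned int intr_in;
    };

    static void example_cache_pipes(struct usb_device *udev,
                                    struct r8152_pipes_example *p)
    {
            p->ctrl_in  = usb_rcvctrlpipe(udev, 0);
            p->ctrl_out = usb_sndctrlpipe(udev, 0);
            p->bulk_in  = usb_rcvbulkpipe(udev, 1);
            p->bulk_out = usb_sndbulkpipe(udev, 2);
            p->intr_in  = usb_rcvintpipe(udev, 3);
    }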
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller authored
Daniel Borkmann says: ==================== pull-request: bpf-next 2021-06-17 The following pull-request contains BPF updates for your *net-next* tree. We've added 50 non-merge commits during the last 25 day(s) which contain a total of 148 files changed, 4779 insertions(+), 1248 deletions(-). The main changes are: 1) BPF infrastructure to migrate TCP child sockets from a listener to another in the same reuseport group/map, from Kuniyuki Iwashima. 2) Add a provably sound, faster and more precise algorithm for tnum_mul() as noted in https://arxiv.org/abs/2105.05398, from Harishankar Vishwanathan. 3) Streamline error reporting changes in libbpf as planned out in the 'libbpf: the road to v1.0' effort, from Andrii Nakryiko. 4) Add broadcast support to xdp_redirect_map(), from Hangbin Liu. 5) Extends bpf_map_lookup_and_delete_elem() functionality to 4 more map types, that is, {LRU_,PERCPU_,LRU_PERCPU_,}HASH, from Denis Salopek. 6) Support new LLVM relocations in libbpf to make them more linker friendly, also add a doc to describe the BPF backend relocations, from Yonghong Song. 7) Silence long standing KUBSAN complaints on register-based shifts in interpreter, from Daniel Borkmann and Eric Biggers. 8) Add dummy PT_REGS macros in libbpf to fail BPF program compilation when target arch cannot be determined, from Lorenz Bauer. 9) Extend AF_XDP to support large umems with 1M+ pages, from Magnus Karlsson. 10) Fix two minor libbpf tc BPF API issues, from Kumar Kartikeya Dwivedi. 11) Move libbpf BPF_SEQ_PRINTF/BPF_SNPRINTF macros that can be used by BPF programs to bpf_helpers.h header, from Florent Revest. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Esben Haabendal says: ==================== net: gianfar: 64-bit statistics and rx_missed_errors counter This series replaces the legacy 32-bit statistics with proper 64-bit ones, and implements an rx_missed_errors counter on top of that. The device supports a 16-bit RDRP counter, and a related carry bit and interrupt, which allows implementation of a robust 64-bit counter. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Esben Haabendal authored
Devices with RMON support have a 16-bit RDRP counter. It provides: "Receive dropped packets counter. Increments for frames received which are streamed to system but are later dropped due to lack of system resources." To handle more than 2^16 dropped packets, a carry bit in the CAR1 register is set on overflow, so we enable an interrupt when this bit is set, extending the counter to 2^64 for handling situations where lots of packets are missed (e.g. during heavy network storms). Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
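The general pattern is sketched below (names are hypothetical, not the gianfar code): a 64-bit software accumulator is bumped by 2^16 from the carry interrupt, and the live 16-bit register value is added at read time.

    #include <linux/types.h>

    struct rx_missed_example {
            u64 carry_total;        /* full 2^16 chunks accounted via the carry irq */
    };

    /* Carry interrupt handler: the 16-bit RDRP counter wrapped once. */
    static void example_rdrp_carry(struct rx_missed_example *m)
    {
            m->carry_total += 0x10000;
    }

    /* Statistics read: combine the accumulated wraps with the current register. */
    static u64 example_rdrp_read(const struct rx_missed_example *m, u16 rdrp_now)
    {
            return m->carry_total + rdrp_now;
    }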
-
Esben Haabendal authored
These are for carry status and interrupt mask bits of statistics registers. Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Esben Haabendal authored
The memset on CAMx is wrong, as it actually unmasks all carry IRQs, which we clearly are not interested in. The memset on the CARx registers is just pointless, as they are W1C. So let's just stop the memset before CAR1. Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Esben Haabendal authored
The CAR1 and CAR2 registers are W1C style registers, so the memset does not actually clear them. Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
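For reference, clearing a W1C register means writing back the bits that are currently set; a generic sketch (not the gianfar fix itself, which simply stops memsetting these registers):

    #include <linux/io.h>

    /* Writing zeroes, which is what a memset amounts to, leaves pending
     * W1C bits untouched; writing the set bits back is what clears them. */
    static void example_clear_w1c(void __iomem *reg)
    {
            writel(readl(reg), reg);
    }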
-
Esben Haabendal authored
No reason to wrap counter values at 2^32. The byte counters in particular can wrap pretty fast on Gbit networks: at 1 Gbit/s, a 32-bit byte counter wraps in roughly half a minute. Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Esben Haabendal authored
No reason to produce the legacy net_device_stats struct, only to have it converted to rtnl_link_stats64. And as a bonus, this allows for improving counter size to 64 bit. Signed-off-by: Esben Haabendal <esben@geanix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
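The shape of the change is an ndo_get_stats64() implementation along these lines (counter names in example_priv are hypothetical; gianfar fills the struct from its RMON hardware counters):

    #include <linux/netdevice.h>

    struct example_priv {
            u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
    };

    static void example_get_stats64(struct net_device *dev,
                                    struct rtnl_link_stats64 *stats)
    {
            const struct example_priv *priv = netdev_priv(dev);

            stats->rx_packets = priv->rx_packets;
            stats->rx_bytes   = priv->rx_bytes;
            stats->tx_packets = priv->tx_packets;
            stats->tx_bytes   = priv->tx_bytes;
    }

The driver then wires up .ndo_get_stats64 in its net_device_ops instead of the legacy .ndo_get_stats.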
-
Yang Yingliang authored
When nla_put_u32() fails, 'ret' could be 0; tcf_del_walker() should return an error code in that case. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Yingliang authored
The node pointer is returned by of_parse_phandle() with its refcount incremented in this function. Call of_node_put() on it before exiting this function. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Acked-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
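The general pattern, sketched with an example property name rather than the IPA driver's code:

    #include <linux/errno.h>
    #include <linux/of.h>

    /* Every successful of_parse_phandle() must be balanced by of_node_put(). */
    static int example_use_phandle(struct device_node *np)
    {
            struct device_node *target;

            target = of_parse_phandle(np, "memory-region", 0);
            if (!target)
                    return -ENODEV;

            /* ... use 'target' here ... */

            of_node_put(target);    /* drop the reference taken above */
            return 0;
    }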
-
Jian Shen authored
The string arrays netdev_features_strings, tunable_strings, and phy_tunable_strings have been moved to net/ethtool/common.c. Fix the comment accordingly. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Oleksandr Mazur authored
Fixes: 66826c43 ("documentation: networking: devlink: add prestera switched driver Documentation") Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
Currently the check for the u16 variable val being less than zero is always false because val is unsigned. Fix this by using the int variable for the assignment and the less-than-zero check. Addresses-Coverity: ("Unsigned compared against 0") Fixes: f7380bba ("net: pcs: xpcs: add support for NXP SJA1110") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
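The fixed pattern looks roughly like this (read_reg_example() is a stand-in for a helper returning a value or a negative error code; this is a sketch, not the xpcs code):

    #include <linux/errno.h>
    #include <linux/types.h>

    static int read_reg_example(void)
    {
            return -EIO;    /* pretend the read failed */
    }

    static int example_fixed(void)
    {
            int ret = read_reg_example();
            u16 val;

            if (ret < 0)    /* with a u16 this check could never trigger */
                    return ret;

            val = ret;      /* only now narrow to the unsigned type */
            return val;
    }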
-
Andrii Nakryiko authored
migrate_reuseport.c selftest relies on having TCP_FASTOPEN_CONNECT defined in system-wide netinet/tcp.h. Selftests can use up-to-date uapi/linux/tcp.h, but that one doesn't have SOL_TCP. So instead of switching everything to uapi header, add #define for TCP_FASTOPEN_CONNECT to fix the build. Fixes: c9d0bdef ("bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Link: https://lore.kernel.org/bpf/20210617041446.425283-1-andrii@kernel.org
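The fix boils down to a guarded fallback definition (30 matches the value in the kernel's uapi/linux/tcp.h, but verify against your tree):

    #include <netinet/tcp.h>

    #ifndef TCP_FASTOPEN_CONNECT
    #define TCP_FASTOPEN_CONNECT 30
    #endif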
-
Daniel Borkmann authored
syzbot reported a shift-out-of-bounds that KUBSAN observed in the interpreter: [...] UBSAN: shift-out-of-bounds in kernel/bpf/core.c:1420:2 shift exponent 255 is too large for 64-bit type 'long long unsigned int' CPU: 1 PID: 11097 Comm: syz-executor.4 Not tainted 5.12.0-rc2-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:79 [inline] dump_stack+0x141/0x1d7 lib/dump_stack.c:120 ubsan_epilogue+0xb/0x5a lib/ubsan.c:148 __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x181 lib/ubsan.c:327 ___bpf_prog_run.cold+0x19/0x56c kernel/bpf/core.c:1420 __bpf_prog_run32+0x8f/0xd0 kernel/bpf/core.c:1735 bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline] bpf_prog_run_pin_on_cpu include/linux/filter.h:624 [inline] bpf_prog_run_clear_cb include/linux/filter.h:755 [inline] run_filter+0x1a1/0x470 net/packet/af_packet.c:2031 packet_rcv+0x313/0x13e0 net/packet/af_packet.c:2104 dev_queue_xmit_nit+0x7c2/0xa90 net/core/dev.c:2387 xmit_one net/core/dev.c:3588 [inline] dev_hard_start_xmit+0xad/0x920 net/core/dev.c:3609 __dev_queue_xmit+0x2121/0x2e00 net/core/dev.c:4182 __bpf_tx_skb net/core/filter.c:2116 [inline] __bpf_redirect_no_mac net/core/filter.c:2141 [inline] __bpf_redirect+0x548/0xc80 net/core/filter.c:2164 ____bpf_clone_redirect net/core/filter.c:2448 [inline] bpf_clone_redirect+0x2ae/0x420 net/core/filter.c:2420 ___bpf_prog_run+0x34e1/0x77d0 kernel/bpf/core.c:1523 __bpf_prog_run512+0x99/0xe0 kernel/bpf/core.c:1737 bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline] bpf_test_run+0x3ed/0xc50 net/bpf/test_run.c:50 bpf_prog_test_run_skb+0xabc/0x1c50 net/bpf/test_run.c:582 bpf_prog_test_run kernel/bpf/syscall.c:3127 [inline] __do_sys_bpf+0x1ea9/0x4f00 kernel/bpf/syscall.c:4406 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae [...] Generally speaking, KUBSAN reports from the kernel should be fixed. However, in case of BPF, this particular report caused concerns since the large shift is not wrong from BPF point of view, just undefined. In the verifier, K-based shifts that are >= {64,32} (depending on the bitwidth of the instruction) are already rejected. The register-based cases were not given their content might not be known at verification time. Ideas such as verifier instruction rewrite with an additional AND instruction for the source register were brought up, but regularly rejected due to the additional runtime overhead they incur. As Edward Cree rightly put it: Shifts by more than insn bitness are legal in the BPF ISA; they are implementation-defined behaviour [of the underlying architecture], rather than UB, and have been made legal for performance reasons. Each of the JIT backends compiles the BPF shift operations to machine instructions which produce implementation-defined results in such a case; the resulting contents of the register may be arbitrary but program behaviour as a whole remains defined. Guard checks in the fast path (i.e. affecting JITted code) will thus not be accepted. The case of division by zero is not truly analogous here, as division instructions on many of the JIT-targeted architectures will raise a machine exception / fault on division by zero, whereas (to the best of my knowledge) none will do so on an out-of-bounds shift. Given the KUBSAN report only affects the BPF interpreter, but not JITs, one solution is to add the ANDs with 63 or 31 into ___bpf_prog_run(). 
That would make the shifts defined, and thus shuts up KUBSAN, and the compiler would optimize out the AND on any CPU that interprets the shift amounts modulo the width anyway (e.g., confirmed from disassembly that on x86-64 and arm64 the generated interpreter code is the same before and after this fix). The BPF interpreter is slow path, and most likely compiled out anyway as distros select BPF_JIT_ALWAYS_ON to avoid speculative execution of BPF instructions by the interpreter. Given the main argument was to avoid sacrificing performance, the fact that the AND is optimized away from compiler for mainstream archs helps as well as a solution moving forward. Also add a comment on LSH/RSH/ARSH translation for JIT authors to provide guidance when they see the ___bpf_prog_run() interpreter code and use it as a model for a new JIT backend. Reported-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com Reported-by: Kurt Manucredo <fuzzybritches0@gmail.com> Signed-off-by: Eric Biggers <ebiggers@kernel.org> Co-developed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com Cc: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/bpf/0000000000008f912605bd30d5d7@google.com Link: https://lore.kernel.org/bpf/bac16d8d-c174-bdc4-91bd-bfa62b410190@gmail.com
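A sketch of the masking idea (not the verbatim ___bpf_prog_run() hunk): the shift amount is reduced modulo the operand width, so the C-level shift is always defined, matching what the JIT target ISAs do anyway.

    #include <linux/types.h>

    static u64 example_bpf_lsh64(u64 dst, u64 src)
    {
            return dst << (src & 63);
    }

    static u32 example_bpf_lsh32(u32 dst, u32 src)
    {
            return dst << (src & 31);
    }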
-
Lorenz Bauer authored
bpf2go is the Go equivalent of libbpf skeleton. The convention is that the compiled BPF is checked into the repository to facilitate distributing BPF as part of Go packages. To make this portable, bpf2go by default generates both bpfel and bpfeb variants of the C. Using bpf_tracing.h is inherently non-portable since the fields of struct pt_regs differ between platforms, so CO-RE can't help us here. The only way of working around this is to compile for each target platform independently. bpf2go can't do this by default since there are too many platforms. Define the various PT_... macros when no target can be determined and turn them into compilation failures. This works because bpf2go always compiles for bpf targets, so the compiler fallback doesn't kick in. Conditionally define __BPF_MISSING_TARGET so that we can inject a more appropriate error message at build time. The user can then choose which platform to target explicitly. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210616083635.11434-1-lmb@cloudflare.com
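A sketch of the technique (the guard and macro bodies are assumptions based on this commit message, not the verbatim bpf_tracing.h change): when no target arch is known, any use of the accessors fails at compile time with a readable message.

    /* Assumed guard name; the real header defines bpf_target_defined when
     * a target architecture has been selected. */
    #if !defined(bpf_target_defined)
    #define __BPF_MISSING_TARGET \
            "GCC error \"must specify a BPF target arch via __TARGET_ARCH_xxx\""
    #define PT_REGS_PARM1(x) ({ _Pragma(__BPF_MISSING_TARGET); 0l; })
    #define PT_REGS_RC(x)    ({ _Pragma(__BPF_MISSING_TARGET); 0l; })
    #endif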
-
Wang Hai authored
xdp_sample_pkts usage() is missing the introduction of the "-S" option, this patch adds it. Fixes: d50ecc46 ("samples/bpf: Attach XDP programs in driver mode by default") Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20210615135724.29528-1-wanghai38@huawei.com
-
Wang Hai authored
xdp_fwd usage() is missing the introduction of the "-S" and "-F" options; this patch adds them. Fixes: d50ecc46 ("samples/bpf: Attach XDP programs in driver mode by default") Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20210615135554.29158-1-wanghai38@huawei.com
-
Shuyi Cheng authored
Fix s/sleeable/sleepable/ typo in a comment. Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1623809076-97907-1-git-send-email-chengshuyi@linux.alibaba.com
-
- 16 Jun, 2021 15 commits
-
-
Daniel Xu authored
Somehow test_progs.h was being included by the existing rule: /test_progs* This is bad because: 1) test_progs.h is a checked in file 2) grep-like tools like ripgrep[0] respect gitignore and test_progs.h was being hidden from searches [0]: https://github.com/BurntSushi/ripgrep Fixes: 74b5a596 ("selftests/bpf: Replace test_progs and test_maps w/ general rule") Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/a46f64944bf678bc652410ca6028d3450f4f7f4b.1623880296.git.dxu@dxuuu.xyz
-
David S. Miller authored
Merge tag 'wireless-drivers-next-2021-06-16' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next Kalle Valo says: ==================== wireless-drivers-next patches for v5.14 First set of patches for v5.14. Major new features are here support WCN6855 PCI in ath11k and WoWLAN support for wcn36xx. Also smaller fixes and cleanups all over. ath9k * provide STBC info in the received frames brcmfmac * fix setting of station info chains bitmask * correctly report average RSSI in station info rsi * support for changing beacon interval in AP mode ath11k * support for WCN6855 PCI hardware wcn36xx * WoWLAN support with magic packets and GTK rekeying ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vadym Kochan says: ==================== Marvell Prestera add flower and match all support Add ACL infrastructure for Prestera Switch ASICs family devices to offload cls_flower rules to be processed in the HW. ACL implementation is based on tc filter api. The flower classifier is supported to configure ACL rules/matches/action. Supported actions: - drop - trap - pass Supported dissector keys: - indev - src_mac - dst_mac - src_ip - dst_ip - ip_proto - src_port - dst_port - vlan_id - vlan_ethtype - icmp type/code - Introduce matchall filter support - Add SPAN API to configure port mirroring. - Add tc mirror action. At this moment, only mirror (egress) action is supported. Example: tc filter ... action mirred egress mirror dev DEV v2: Fixed "newline at EOF warnings" from "git am" by re-applying with --whitespace=fix patch #1: 1) Set TC HW Offload always enabled without disable it [suggested by Vladimir Oltean] by user. It reduced the logic by removing feature handling and acl block disable counting. patch #2: 1) Removed extra not needed diff with prestera_port and [suggested by Vladimir Oltean] prestera_switch lines exchanging in prestera_acl.h 2) Fix local variables ordering to reverse chrostmas tree [suggested by Vladimir Oltean] 3) Use tc_cls_can_offload_and_chain0() in [suggested by Vladimir Oltean] prestera_span_replace() 4) Removed TODO about prio check [suggested by Vladimir Oltean] 5) Rephrase error message if prestera_netdev_check() [suggested by Vladimir Oltean] fails in prestera_span_replace() ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Serhiy Boiko authored
- Introduce matchall filter support - Add SPAN API to configure port mirroring. - Add tc mirror action. At this moment, only mirror (egress) action is supported. Example: tc filter ... action mirred egress mirror dev DEV Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com> Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com> Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu> Signed-off-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Serhiy Boiko authored
Add ACL infrastructure for the Prestera Switch ASICs family devices to offload cls_flower rules to be processed in the HW. The ACL implementation is based on the tc filter API. The flower classifier can be used to configure ACL rules/matches/actions. Supported actions: - drop - trap - pass Supported dissector keys: - indev - src_mac - dst_mac - src_ip - dst_ip - ip_proto - src_port - dst_port - vlan_id - vlan_ethtype - icmp type/code Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com> Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com> Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu> Signed-off-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Karsten Graul says: ==================== net/smc: Add SMC statistic support Please apply the following patch series for smc to netdev's net-next tree. This v2 is a resend of the code contained in v1 but with an updated cover letter to describe why we have chosen to use the generic netlink mechanism to access the smc protocol's statistic data. The patchset adds statistic support to the SMC protocol. Per-cpu variables are used to collect the statistic information for better performance and for reducing concurrency pitfalls. The code that is collecting statistic data is implemented in macros to increase code reuse and readability. The generic netlink mechanism in SMC is extended to provide the collected statistics to userspace. Network namespace awareness is also part of the statistics implementation. SMC is a protocol interacting with PCI devices (like RoCE Cards) and runs on top of the TCP protocol. As SMC is a network protocol and not an ethernet device driver, we decided to use the generic netlink interface. This should be comparable to what other protocols in the net subsystem like tipc, ncsi, ieee802154 or tcp, et al, do. There is already an established internal generic netlink interface mechanism in SMC which is used to collect SMC Protocol internal information. This patchset extends that existing mechanism. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guvenc Gulce authored
Make the gathered SMC statistics network namespace aware, for each namespace collect an own set of statistic information. Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guvenc Gulce authored
Add support to collect more detailed SMC fallback reason statistics and provide these statistics to user space on the netlink interface. Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guvenc Gulce authored
Add the netlink function which collects the statistics information and delivers it to the userspace. Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guvenc Gulce authored
Add the ability to collect SMC statistics information. Per-cpu variables are used to collect the statistic information for better performance and for reducing concurrency pitfalls. The code that is collecting statistic data is implemented in macros to increase code reuse and readability. Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
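The per-cpu counting pattern looks roughly like this (names are made up; the SMC code wraps it in its own macros and structures):

    #include <linux/cpumask.h>
    #include <linux/percpu.h>

    struct example_stats {
            u64 tx_cnt;
    };

    /* Each CPU bumps its own copy without locking. */
    static void example_count_tx(struct example_stats __percpu *stats)
    {
            this_cpu_inc(stats->tx_cnt);
    }

    /* A reader sums the copies over all possible CPUs. */
    static u64 example_sum_tx(struct example_stats __percpu *stats)
    {
            u64 sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu_ptr(stats, cpu)->tx_cnt;
            return sum;
    }

The per-cpu structure itself would be allocated with alloc_percpu(), one instance per network namespace as described in the cover letter.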
-
Colin Ian King authored
The continue statement at the end of a for-loop has no effect, remove it. Addresses-Coverity: ("Continue has no effect") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Simon Horman says: ==================== Next set of conntrack patches for the nfp driver Louis Peens says: This follows on from the previous series of a similar nature. Looking at the diagram as explained in the previous series, this implements changes up to the point where the merged nft entries are saved. There are still bits of stubbed-out code where offloading of the flows will be implemented.

+-------------+                      +----------+
| pre_ct flow +--------+             | nft flow |
+-------------+        v             +------+---+
                  +----------+              |
                  | tc_merge +--------+     |
                  +----------+        v     v
+--------------+       ^            +-------------+
| post_ct flow +-------+        +---+nft_tc merge |
+--------------+                |   +-------------+
                                |
                                |
                                v
                          Offload to nfp

==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Louis Peens authored
Fill in the code stub to check that the flow actions are valid for merge. The actions of flow X should not conflict with the matches of flow X+1. For now this check is quite strict and set actions are very limited; this will need to be updated when NAT support is added. Signed-off-by: Louis Peens <louis.peens@corigine.com> Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Louis Peens authored
Fill in the check_meta stub to check that the ct_metadata action fields in the nft flow match the ct_match data of the post_ct flow. Signed-off-by: Louis Peens <louis.peens@corigine.com> Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Louis Peens authored
Replace the merge check stub code with the actual implementation. This checks that the match parts of two tc flows do not conflict. Only overlapping keys need to be checked, and only the narrowest masked parts need to be checked, so each key is masked with the AND'd result of both masks before comparing. Signed-off-by: Louis Peens <louis.peens@corigine.com> Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
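The comparison described above can be sketched as follows (a generic illustration, not the nfp code):

    #include <linux/types.h>

    /* Two flow keys only conflict where both masks care about the bits,
     * so compare them under the AND of the two masks. */
    static bool example_keys_compatible(const u8 *key1, const u8 *mask1,
                                        const u8 *key2, const u8 *mask2,
                                        size_t len)
    {
            size_t i;

            for (i = 0; i < len; i++) {
                    u8 both = mask1[i] & mask2[i];

                    if ((key1[i] & both) != (key2[i] & both))
                            return false;
            }
            return true;
    }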
-