- 10 Dec, 2022 8 commits
-
Gavrilov Ilia authored
When the resource size changes, the return value of the nla_put_u64_64bit() function is not checked. Check it, so the failure is reported at this put() instead of only being caught at the next one. Found by InfoTeCS on behalf of Linux Verification Center (linuxtesting.org) with SVACE. Note that the original issue is harmless; we'd error out at the next put(). Signed-off-by: Ilia.Gavrilov <Ilia.Gavrilov@infotecs.ru> Link: https://lore.kernel.org/r/20221208082821.3927937-1-Ilia.Gavrilov@infotecs.ru Signed-off-by: Jakub Kicinski <kuba@kernel.org>
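A minimal sketch of the fixed pattern, assuming a hypothetical fill helper and attribute IDs (only nla_put_u64_64bit() itself is real kernel API): the put is checked where it happens instead of relying on the next put() to fail.

```c
#include <linux/errno.h>
#include <net/netlink.h>

/* Hypothetical attribute IDs, for illustration only. */
enum { DEMO_ATTR_RES_VALUE = 1, DEMO_ATTR_PAD };

static int demo_fill_res(struct sk_buff *msg, u64 value)
{
	/* Check the return value here rather than at the next put(). */
	if (nla_put_u64_64bit(msg, DEMO_ATTR_RES_VALUE, value, DEMO_ATTR_PAD))
		return -EMSGSIZE;

	return 0;
}
```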
-
Kees Cook authored
syzkaller reported: BUG: KASAN: slab-out-of-bounds in __build_skb_around+0x235/0x340 net/core/skbuff.c:294 Write of size 32 at addr ffff88802aa172c0 by task syz-executor413/5295 This was hit in bpf_prog_test_run_skb(), which passes a kmalloc()ed buffer to build_skb(). When build_skb() is passed a frag_size of 0, it means the buffer came from kmalloc(). In these cases, ksize() is used to find its actual size, but since the allocation may not have been made to that size, actually perform the krealloc() call so that all the associated buffer size checking will be correctly notified (and use the "new" pointer so that compiler hinting works correctly). Split this logic out into a new interface, slab_build_skb(), but leave the original 0 checking in place for now to catch any stragglers. Reported-by: syzbot+fda18eaa8c12534ccb3b@syzkaller.appspotmail.com Link: https://groups.google.com/g/syzkaller-bugs/c/UnIKxTtU5-0/m/-wbXinkgAQAJ Fixes: 38931d89 ("mm: Make ksize() a reporting-only function") Cc: Pavel Begunkov <asml.silence@gmail.com> Cc: pepsipu <soopthegoop@gmail.com> Cc: syzbot+fda18eaa8c12534ccb3b@syzkaller.appspotmail.com Cc: Vlastimil Babka <vbabka@suse.cz> Cc: kasan-dev <kasan-dev@googlegroups.com> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: ast@kernel.org Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Hao Luo <haoluo@google.com> Cc: Jesper Dangaard Brouer <hawk@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: jolsa@kernel.org Cc: KP Singh <kpsingh@kernel.org> Cc: martin.lau@linux.dev Cc: Stanislav Fomichev <sdf@google.com> Cc: song@kernel.org Cc: Yonghong Song <yhs@fb.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20221208060256.give.994-kees@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
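A hedged sketch of how the new interface is meant to be used for kmalloc()ed buffers. That slab_build_skb() takes only the data pointer follows from the description above; treat the exact signature as an assumption, and the helper name demo_skb_from_kmalloc() is hypothetical.

```c
#include <linux/skbuff.h>
#include <linux/slab.h>

static struct sk_buff *demo_skb_from_kmalloc(size_t len, gfp_t gfp)
{
	struct sk_buff *skb;
	void *data = kmalloc(len, gfp);

	if (!data)
		return NULL;

	skb = slab_build_skb(data);	/* was: build_skb(data, 0) */
	if (!skb)
		kfree(data);

	return skb;
}
```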
-
Jiapeng Chong authored
The function dmadesc_get_addr() is defined in the bcmgenet.c file, but not called elsewhere, so remove this unused function. drivers/net/ethernet/broadcom/genet/bcmgenet.c:120:26: warning: unused function 'dmadesc_get_addr'. Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3401 Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Link: https://lore.kernel.org/r/20221209033723.32452-1-jiapeng.chong@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Mat Martineau says: ==================== mptcp: Miscellaneous cleanup Two code cleanup patches for the 6.2 merge window that don't change behavior: Patch 1 makes proper use of nlmsg_free(), as suggested by Jakub while reviewing f8c9dfbd ("mptcp: add pm listener events"). Patch 2 clarifies success status in a few mptcp functions, which prevents some smatch false positives. ==================== Link: https://lore.kernel.org/r/20221209004431.143701-1-mathew.j.martineau@linux.intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Matthieu Baerts authored
When 'err' is 0, it is clearer to return a literal '0' instead of the variable called 'err'. The behaviour is not modified; the code is just clearer. By doing this, we can also avoid false positive smatch warnings like this one: net/mptcp/pm_netlink.c:1169 mptcp_pm_parse_pm_addr_attr() warn: missing error code? 'err' Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <error27@gmail.com> Suggested-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
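A minimal before/after sketch with a hypothetical helper, showing why the literal 0 reads better and silences the checker:

```c
static int demo_validate(const void *attr);	/* hypothetical helper */

static int demo_parse_addr(const void *attr)
{
	int err = demo_validate(attr);

	if (err)
		return err;

	/* ... fill in the parsed address ... */

	return 0;	/* was: return err; same value, clearer intent */
}
```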
-
Geliang Tang authored
Use nlmsg_free() instead of kfree_skb() in pm_netlink.c. The SKBs have been created by nlmsg_new(), so the proper cleanup should be done with nlmsg_free(). For the moment, nlmsg_free() simply calls kfree_skb(), so we don't change the behaviour here. Suggested-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
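A hedged sketch of the symmetry being restored, with a hypothetical fill helper; nlmsg_new()/nlmsg_free() are the real netlink API:

```c
#include <net/netlink.h>

static int demo_fill(struct sk_buff *skb);	/* hypothetical */

static int demo_build_event(void)
{
	struct sk_buff *skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);

	if (!skb)
		return -ENOMEM;

	if (demo_fill(skb)) {
		nlmsg_free(skb);	/* was: kfree_skb(skb) */
		return -EMSGSIZE;
	}

	/* ... a netlink send here would consume the skb ... */
	return 0;
}
```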
-
Jakub Kicinski authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux)
Saeed Mahameed says: ==================== mlx5-updates-2022-12-08 1) Support range match action in SW steering. Yevgeny Kliteynik says: ======================= The following patch series adds support for a range match action in SW Steering. SW steering is able to match only on the exact values of the packet fields, as requested by the user: the user provides a mask for the fields that are of interest, and the exact values to be matched on when the traffic is handled. This patch series adds a new type of action - Range Match - where the user provides a field to be matched on and a range of values (min to max) that will be considered a hit. There are several new notions that were implemented in order to support Range Match:
- MATCH_RANGES Steering Table Entry (STE): the new STE type that allows matching the packets' fields on a range of values instead of a specific value.
- Match Definer: a general FW object that defines which fields in the packet will be referenced by the mask and tag of each STE. The match definer ID is part of the STE fields, and it defines how the HW needs to interpret the STE's mask/tag values. Until now SW steering used the definers that were managed by FW and implemented the STE layout as described by the HW spec. Now that we're adding a new type of STE, SW steering needs to also be able to define this new STE's layout, and this is done with the new match definer object. =======================
2) From Oz Shlomo, add support for meter mtu offload. 2.1: Refactor the code to allow both metering and range post actions as a pre-step for adding police mtu offload support. 2.2: Instantiate mtu green/red flow tables with a single match-all rule. Add the green/red actions to the hit/miss table accordingly. 2.3: Initialize the meter object with the TC police mtu parameter. Use the hardware range match action feature.
3) From Maor Dickman, support routes with more than 2 nexthops in multipath.
4) From Michael Guralnik and Or Har-Toov, improve and extend vport representor counters.
* tag 'mlx5-updates-2022-12-08' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: Expose steering dropped packets counter
  net/mlx5: Refactor and expand rep vport stat group
  net/mlx5e: multipath, support routes with more than 2 nexthops
  net/mlx5e: TC, add support for meter mtu offload
  net/mlx5e: meter, add mtu post meter tables
  net/mlx5e: meter, refactor to allow multiple post meter tables
  net/mlx5: DR, Add support for range match action
  net/mlx5: DR, Add function that tells if STE miss addr has been initialized
  net/mlx5: DR, Some refactoring of miss address handling
  net/mlx5: DR, Manage definers with refcounts
  net/mlx5: DR, Handle FT action in a separate function
  net/mlx5: DR, Rework is_fw_table function
  net/mlx5: DR, Add functions to create/destroy MATCH_DEFINER general object
  net/mlx5: fs, add match on ranges API
  net/mlx5: mlx5_ifc updates for MATCH_DEFINER general object
==================== Link: https://lore.kernel.org/r/20221209001420.142794-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue)
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2022-12-08 (ice) Jacob Keller says: This series of patches primarily consists of changes to fix some corner cases that can cause Tx timestamp failures. The issues were discovered and reported by Siddaraju DH and primarily affect E822 hardware, though this series also includes some improvements that affect E810 hardware as well. The primary issue is regarding the way that E822 determines when to generate timestamp interrupts. If the driver reads timestamp indexes which do not have a valid timestamp, the E822 interrupt tracking logic can get stuck. This is due to the way that E822 hardware tracks timestamp index reads internally. I was previously unaware of this behavior as it is significantly different in E810 hardware. Most of the fixes target refactors to ensure that the ice driver does not read timestamp indexes which are not valid on E822 hardware. This is done by using the Tx timestamp ready bitmap register from the PHY. This register indicates what timestamp indexes have outstanding timestamps waiting to be captured. Care must be taken in all cases where we read the timestamp registers, and thus all flows which might have read these registers are refactored. The ice_ptp_tx_tstamp function is modified to consolidate as much of the logic relating to these registers as possible. It now handles discarding stale timestamps which are old or which occurred after a PHC time update. This replaces previously standalone thread functions like the periodic work function and the ice_ptp_flush_tx_tracker function. In addition, some minor cleanups noticed while writing these refactors are included. The remaining patches refactor the E822 implementation to remove the "bypass" mode for timestamps. The E822 hardware has the ability to provide a more precise timestamp by making use of measurements of the precise way that packets flow through the hardware pipeline. These measurements are known as "Vernier" calibration. The "bypass" mode disables many of these measurements in favor of a faster start up time for Tx and Rx timestamping. Instead, once these measurements were captured, the driver tries to reconfigure the PHY to enable the vernier calibrations. Unfortunately this recalibration does not work. Testing indicates that the PHY simply remains in bypass mode without the increased timestamp precision. Remove the attempt at recalibration and always use vernier mode. This has one disadvantage that Tx and Rx timestamps cannot begin until after at least one packet of that type goes through the hardware pipeline. Because of this, further refactor the driver to separate Tx and Rx vernier calibration. Complete the Tx and Rx independently, enabling the appropriate type of timestamp as soon as the relevant packet has traversed the hardware pipeline. This was reported by Milena Olech. Note that although these might be considered "bug fixes", the required changes in order to appropriately resolve these issues are large. Thus it does not feel suitable to send this series to net.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: reschedule ice_ptp_wait_for_offset_valid during reset
  ice: make Tx and Rx vernier offset calibration independent
  ice: only check set bits in ice_ptp_flush_tx_tracker
  ice: handle flushing stale Tx timestamps in ice_ptp_tx_tstamp
  ice: cleanup allocations in ice_ptp_alloc_tx_tracker
  ice: protect init and calibrating check in ice_ptp_request_ts
  ice: synchronize the misc IRQ when tearing down Tx tracker
  ice: check Tx timestamp memory register for ready timestamps
  ice: handle discarding old Tx requests in ice_ptp_tx_tstamp
  ice: always call ice_ptp_link_change and make it void
  ice: fix misuse of "link err" with "link status"
  ice: Reset TS memory for all quads
  ice: Remove the E822 vernier "bypass" logic
  ice: Use more generic names for ice_ptp_tx fields
==================== Link: https://lore.kernel.org/r/20221208213932.1274143-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 09 Dec, 2022 32 commits
-
wangchuanlei authored
Add support to count upcall packets: when the openvswitch kernel module upcalls to userspace, count the number of packets for which the upcall succeeded or failed. This is a better way to see how many packets were upcalled on each interface. Signed-off-by: wangchuanlei <wangchuanlei@inspur.com> Acked-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tejun Heo authored
rhashtable currently only does bh-safe synchronization, making it impossible to use from irq-safe contexts. Switch it to use irq-safe synchronization to remove the restriction. v2: Update the lock functions to return the ulong flags value and the unlock functions to take the value directly instead of passing around the pointer. Suggested by Linus. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: David Vernet <dvernet@meta.com> Acked-by: Josh Don <joshdon@google.com> Acked-by: Hao Luo <haoluo@google.com> Acked-by: Barret Rhoden <brho@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
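An illustrative sketch of the v2 calling convention (not the exact rhashtable internals, which use bit spinlocks): the lock helper returns the saved flags by value and the unlock helper takes them back.

```c
#include <linux/spinlock.h>

static unsigned long demo_bucket_lock(spinlock_t *lock)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);	/* was: spin_lock_bh(lock) */
	return flags;
}

static void demo_bucket_unlock(spinlock_t *lock, unsigned long flags)
{
	spin_unlock_irqrestore(lock, flags);
}
```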
-
David S. Miller authored
Pedro Tammela says: ==================== net/sched: retpoline wrappers for tc In tc all qdiscs, classifiers and actions can be compiled as modules. This results today in indirect calls in all transitions in the tc hierarchy. Due to CONFIG_RETPOLINE, CPUs with mitigations=on might pay an extra cost on indirect calls. For newer Intel cpus with IBRS the extra cost is nonexistent, but AMD Zen cpus and older x86 cpus still go through the retpoline thunk. Known built-in symbols can be optimized into direct calls, thus avoiding the retpoline thunk. So far, tc has not been leveraging this build information, leaving out a performance optimization for some CPUs. In this series we wire up 'tcf_classify()' and 'tcf_action_exec()' with direct calls when known modules are compiled as built-in, as an opt-in optimization. We measured these changes on one AMD Zen 4 cpu (Retpoline), one AMD Zen 3 cpu (Retpoline), one Intel 10th Gen CPU (IBRS), one Intel 3rd Gen cpu (Retpoline) and one Intel Xeon CPU (IBRS) using pktgen with 64b udp packets. Our test setup is a dummy device with clsact and matchall in a kernel compiled with every tc module as built-in. We observed a 3-8% speed up on the retpoline CPUs when going through 1 tc filter, and a 60-100% speed up when going through 100 filters. For the IBRS cpus we observed a 1-2% degradation in both scenarios; we believe the extra branch checks introduced a small overhead, therefore we added a static key that bypasses the wrapper on kernels not using the retpoline mitigation but compiled with CONFIG_RETPOLINE.
1 filter:
CPU        | before (pps) | after (pps) | diff
R9 7950X   | 5914980      | 6380227     | +7.8%
R9 5950X   | 4237838      | 4412241     | +4.1%
R9 5950X   | 4265287      | 4413757     | +3.4% [*]
i5-3337U   | 1580565      | 1682406     | +6.4%
i5-10210U  | 3006074      | 3006857     | +0.0%
i5-10210U  | 3160245      | 3179945     | +0.6% [*]
Xeon 6230R | 3196906      | 3197059     | +0.0%
Xeon 6230R | 3190392      | 3196153     | +0.01% [*]
100 filters:
CPU        | before (pps) | after (pps) | diff
R9 7950X   | 373598       | 820396      | +119.59%
R9 5950X   | 313469       | 633303      | +102.03%
R9 5950X   | 313797       | 633150      | +101.77% [*]
i5-3337U   | 127454       | 211210      | +65.71%
i5-10210U  | 389259       | 381765      | -1.9%
i5-10210U  | 408812       | 412730      | +0.9% [*]
Xeon 6230R | 415420       | 406612      | -2.1%
Xeon 6230R | 416705       | 405869      | -2.6% [*]
[*] In these tests we ran pktgen with clone set to 1000.
On the 7950x system we also tested the impact of filter iteration order placement, first by compiling a kernel with the filter under test being the first one in the static iteration and then repeating with it being last (of the 15 classifiers existing today). We saw a difference of +0.5-1% in pps between being first in the iteration vs being last. Therefore we order the classifiers and actions according to relevance per our current thinking.
v5->v6: Address Eric Dumazet suggestions
v4->v5: Rebase
v3->v4: Address Eric Dumazet suggestions
v2->v3: Address suggestions by Jakub, Paolo and Eric; dropped RFC tag (I forgot to add it on v2)
v1->v2: Fix build errors found by the bots; address Kuniyuki Iwashima suggestions
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pedro Tammela authored
Expose the necessary tc classifier functions and wire up cls_api to use direct calls in retpoline kernels. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pedro Tammela authored
Expose the necessary tc act functions and wire up act_api to use direct calls in retpoline kernels. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pedro Tammela authored
On kernels using retpoline as a spectrev2 mitigation, optimize actions and filters that are compiled as built-ins into a direct call. In subsequent patches we expose the classifier and action functions and wire up the wrapper into tc. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
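An illustrative sketch of the direct-call idea; the real wrappers in net/sched cover all built-in classifiers and actions and add a static key, and the names below are hypothetical. Compare the function pointer against known built-ins and call them directly, so retpoline kernels skip the thunk.

```c
struct sk_buff;

/* Hypothetical built-in classifier entry points. */
int demo_cls_a_classify(struct sk_buff *skb);
int demo_cls_b_classify(struct sk_buff *skb);

static inline int demo_classify(int (*fn)(struct sk_buff *skb),
				struct sk_buff *skb)
{
#ifdef CONFIG_RETPOLINE
	/* Known built-ins become direct calls, skipping the thunk. */
	if (fn == demo_cls_a_classify)
		return demo_cls_a_classify(skb);
	if (fn == demo_cls_b_classify)
		return demo_cls_b_classify(skb);
#endif
	return fn(skb);		/* indirect (retpoline) call */
}
```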
-
Pedro Tammela authored
The type definition should be visible even in configurations not using CONFIG_NET_CLS_ACT. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Randy Dunlap authored
Delete a few lines of "depends on PHYLIB" since they are inside an "if PHYLIB / endif # PHYLIB" block, i.e., they are redundant and the other 50+ drivers there don't use "depends on PHYLIB" since it is not needed. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Link: https://lore.kernel.org/r/20221207044257.30036-1-rdunlap@infradead.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Willem de Bruijn authored
Add an option to initialize SOF_TIMESTAMPING_OPT_ID for TCP from write_seq sockets instead of snd_una. This should have been the behavior from the start. Because processes may now exist that rely on the established behavior, do not change behavior of the existing option, but add the right behavior with a new flag. It is encouraged to always set SOF_TIMESTAMPING_OPT_ID_TCP on stream sockets along with the existing SOF_TIMESTAMPING_OPT_ID. Intuitively the contract is that the counter is zero after the setsockopt, so that the next write N results in a notification for the last byte N - 1. On idle sockets snd_una == write_seq and this holds for both. But on sockets with data in transmission, snd_una records the unacked offset in the stream. This depends on the ACK response from the peer. A process cannot learn this in a race free manner (ioctl SIOCOUTQ is one racy approach). write_seq records the offset at the last byte written by the process. This is a better starting point. It matches the intuitive contract in all circumstances, unaffected by external behavior. The new timestamp flag necessitates increasing sk_tsflags to 32 bits. Move the field in struct sock to avoid growing the socket (for some common CONFIG variants). The UAPI interface so_timestamping.flags is already int, so 32 bits wide. Reported-by: Sotirios Delimanolis <sotodel@meta.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20221207143701.29861-1-willemdebruijn.kernel@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
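A hedged userspace sketch of the encouraged usage (the helper name is hypothetical; the flags are the real UAPI): request software TX ACK timestamps on a TCP socket with both OPT_ID flags set, so the byte counter starts from write_seq.

```c
#include <linux/net_tstamp.h>
#include <sys/socket.h>

static int demo_enable_tcp_tstamp_ids(int fd)
{
	unsigned int flags = SOF_TIMESTAMPING_TX_ACK |
			     SOF_TIMESTAMPING_SOFTWARE |
			     SOF_TIMESTAMPING_OPT_ID |
			     SOF_TIMESTAMPING_OPT_ID_TCP;

	/* Counter is zero after this call; write N notifies for byte N - 1. */
	return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
			  &flags, sizeof(flags));
}
```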
-
Jakub Kicinski authored
Lorenzo Bianconi says: ==================== fix possible deadlock during WED attach Fix a possible deadlock in mtk_wed_attach if mtk_wed_wo_init routine fails. Check wo pointer is properly allocated before running mtk_wed_wo_reset() and mtk_wed_wo_deinit(). ==================== Link: https://lore.kernel.org/r/cover.1670421354.git.lorenzo@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Lorenzo Bianconi authored
Introduce __mtk_wed_detach() in order to avoid a deadlock in the mtk_wed_attach routine if mtk_wed_wo_init fails, since both mtk_wed_attach and mtk_wed_detach run holding the hw_lock mutex. Fixes: 4c5de09e ("net: ethernet: mtk_wed: add configure wed wo support") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
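An illustrative sketch of the locking pattern described (hypothetical types and names): the double-underscore variant assumes the caller already holds hw_lock, so the attach error path can clean up without re-taking the mutex and deadlocking.

```c
#include <linux/mutex.h>

struct demo_wed {
	struct mutex hw_lock;
	/* ... */
};

static void __demo_wed_detach(struct demo_wed *dev)
{
	/* Teardown; caller must hold dev->hw_lock. */
}

static void demo_wed_detach(struct demo_wed *dev)
{
	mutex_lock(&dev->hw_lock);
	__demo_wed_detach(dev);
	mutex_unlock(&dev->hw_lock);
}

static int demo_wed_attach(struct demo_wed *dev)
{
	int err = 0;

	mutex_lock(&dev->hw_lock);
	/* ... init work; on failure: ... */
	if (err)
		__demo_wed_detach(dev);	/* not demo_wed_detach(): lock held */
	mutex_unlock(&dev->hw_lock);
	return err;
}
```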
-
Lorenzo Bianconi authored
Fix a possible NULL pointer dereference in the mtk_wed_detach routine by checking that the wo pointer is properly allocated before running mtk_wed_wo_reset() and mtk_wed_wo_deinit(). Even if it is just a theoretical issue at the moment, check that the wo pointer is not NULL in mtk_wed_mcu_msg_update as well. Moreover, honor the mtk_wed_mcu_send_msg return value in mtk_wed_wo_reset(). Fixes: 79968444 ("net: ethernet: mtk_wed: introduce wed wo support") Fixes: 4c5de09e ("net: ethernet: mtk_wed: add configure wed wo support") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Colin Ian King authored
There is a spelling mistake in a nn_dp_warn message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20221207094312.2281493-1-colin.i.king@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Björn Töpel authored
The BPF Makefile in net/bpf did incorrect path substitution for O=dir builds, e.g.:
  make O=/tmp/kselftest headers
  make O=/tmp/kselftest -C tools/testing/selftests
would fail in the selftests net/ build [1] with:
  clang-16: error: no such file or directory: 'kselftest/net/bpf/nat6to4.c'
  clang-16: error: no input files
Add a pattern prerequisite and an order-only prerequisite (for creating the directory) to resolve the issue. [1] https://lore.kernel.org/all/202212060009.34CkQmCN-lkp@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Fixes: 837a3d66 ("selftests: net: Add cross-compilation support for BPF programs") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20221206102838.272584-1-bjorn@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Petr Machata says: ==================== mlxsw: Add Spectrum-1 ip6gre support Ido Schimmel writes: Currently, mlxsw only supports ip6gre offload on Spectrum-2 and newer ASICs. Spectrum-1 can also offload ip6gre tunnels, but it needs double entry router interfaces (RIFs) for the RIFs representing these tunnels. In addition, the RIF index needs to be even. This is handled in patches #1-#3. The implementation can otherwise be shared between all Spectrum generations. This is handled in patches #4-#5. Patch #6 moves a mlxsw ip6gre selftest to a shared directory, as ip6gre is no longer only supported on Spectrum-2 and newer ASICs. This work is motivated by users that require multiple GRE tunnels that all share the same underlay VRF. Currently, mlxsw only supports decapsulation based on the underlay destination IP (i.e., not taking the GRE key into account), so users need to configure these tunnels with different source IPs and IPv6 addresses are easier to spare than IPv4. Tested using existing ip6gre forwarding selftests. ==================== Link: https://lore.kernel.org/r/cover.1670414573.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
Now that Spectrum-1 gained ip6gre support we can move the test out of the Spectrum-2 directory. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
As explained in the previous patch, the existing Spectrum-2 ip6gre implementation can be reused for Spectrum-1. Change the Spectrum-1 ip6gre operations structure to use the common operations. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
There are two main differences between Spectrum-1 and newer ASICs in terms of IP-in-IP support: 1. In Spectrum-1, RIFs representing ip6gre tunnels require two entries in the RIF table. 2. In Spectrum-2 and newer ASICs, packets ingress the underlay (during encapsulation) and egress the underlay (during decapsulation) via a special generic loopback RIF. The first difference was handled in previous patches by adding the 'double_rif_entry' field to the Spectrum-1 operations structure of ip6gre RIFs. The second difference is handled during RIF creation, by only creating a generic loopback RIF in Spectrum-2 and newer ASICs. Therefore, the ip6gre operations can be shared between Spectrum-1 and newer ASICs in a similar fashion to how the ipgre operations are shared. Rename the operations to not be Spectrum-2 specific and move them earlier in the file so that they can later be used for Spectrum-1. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
In Spectrum-1, loopback router interfaces (RIFs) used for IP-in-IP encapsulation with an IPv6 underlay require two RIF entries and the RIF index must be even. Prepare for this change by extending the RIF parameters structure with a 'double_entry' field that indicates if the RIF being created requires two RIF entries or not. Only set it for RIFs representing ip6gre tunnels in Spectrum-1. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
Currently, each router interface (RIF) consumes one entry in the RIFs table. This is going to change in subsequent patches where some RIFs will consume two table entries. Prepare for this change by parametrizing the RIF allocation size. For now, always pass '1'. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
Currently, each router interface (RIF) consumes one entry in the RIFs table and there are no alignment constraints. This is going to change in subsequent patches where some RIFs will consume two table entries and their indexes will need to be aligned to the allocation size (even). Prepare for this change by converting the RIF index allocation to use gen_pool with the 'gen_pool_first_fit_order_align' algorithm. No Kconfig changes necessary as mlxsw already selects 'GENERIC_ALLOCATOR'. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
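A hedged sketch of the allocator shape described (names are hypothetical; the genalloc calls are the real API): order-aligned first-fit so a two-entry allocation lands on an even index, with a power-of-two base offset since gen_pool_alloc() returns 0 on failure.

```c
#include <linux/errno.h>
#include <linux/genalloc.h>

/* Power-of-two base keeps index alignment and makes 0 (= failure) unused. */
#define DEMO_RIF_BASE 0x100

static struct gen_pool *demo_rif_pool_create(unsigned int max_rifs)
{
	struct gen_pool *pool = gen_pool_create(0, -1);	/* order-0 granules */

	if (!pool)
		return NULL;

	gen_pool_set_algo(pool, gen_pool_first_fit_order_align, NULL);

	if (gen_pool_add(pool, DEMO_RIF_BASE, max_rifs, -1)) {
		gen_pool_destroy(pool);
		return NULL;
	}
	return pool;
}

/* size is 1 or 2 entries; a size-2 allocation comes back even-aligned. */
static int demo_rif_index_alloc(struct gen_pool *pool, unsigned int size)
{
	unsigned long addr = gen_pool_alloc(pool, size);

	return addr ? (int)(addr - DEMO_RIF_BASE) : -ENOBUFS;
}

static void demo_rif_index_free(struct gen_pool *pool, int index,
				unsigned int size)
{
	gen_pool_free(pool, DEMO_RIF_BASE + index, size);
}
```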
-
Jakub Kicinski authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net)
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Michael Guralnik authored
Add rx steering discarded packets counter to the vnic_diag debugfs. Signed-off-by: Michael Guralnik <michaelgur@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Or Har-Toov authored
Expand representor vport stat group to support all counters from the vport stat group, to count all the traffic passing through the vport. Fix current implementation where fill_stats and update_stats use different structs. Signed-off-by: Or Har-Toov <ohartoov@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maor Dickman authored
Today multipath offload is only supported when the number of nexthops is 2, which blocks its use on systems with 2 NICs. This patch solves it by enabling multipath offload per NIC if 2 nexthops of the route are its uplinks. Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Oz Shlomo authored
Initialize the meter object with the TC police mtu parameter. Use the hardware range destination to compare the pkt len to the mtu setting. Assign the range destination hit/miss ft to the police conform/exceed attributes. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Oz Shlomo authored
TC police action may configure the maximum packet size to be handled by the policer, in addition to byte/packet rate. The MTU check is realized in hardware using the range destination, specifying a hit ft if the packet len is in range, or a miss ft otherwise. Instantiate mtu green/red flow tables with a single match-all rule. Add the green/red actions to the hit/miss table accordingly. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Oz Shlomo authored
TC police action may configure the maximum packet size to be handled by the policer, in addition to byte/packet rate. Currently the post meter table steers the packet according to the meter aso output. Refactor the code to allow both metering and range post actions as a pre-step for adding police mtu offload support. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add support for matching on range. The supported type of range is L2 frame size. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Up until now, the miss address in all the STEs was used to connect miss lists and to link the last STE in the list to the end anchor. The match range STE will require special handling because its miss address is part of the 'action'. That is, the range action has both hit and miss addresses. Since the range action is always the last action, we need to make sure that its miss address isn't overwritten by the end anchor. Add a new function mlx5dr_ste_is_miss_addr_set() to answer the question of whether the STE's miss address has already been set as part of STE initialization. Use a callback that always returns false right now. Once match range is added, a different callback will be used for that STE type. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
In preparation for MATCH RANGE STE support, create a function to set the miss address of an STE. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
In many cases different actions will ask for the same definer format. Instead of allocating new definer general object and running out of definers, have an xarray of allocated definers and keep track of their usage with refcounts: allocate a new definer only when there isn't one with the same format already created, and destroy definer only when its refcount runs down to zero. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
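An illustrative sketch of the caching scheme (hypothetical names; the xarray/refcount calls are real kernel API, and locking against concurrent get/put is omitted): reuse a definer with the same format id via a refcount, create one on first use, and destroy it on the last put.

```c
#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/xarray.h>

struct demo_definer {
	u32 format_id;
	refcount_t refcount;
};

static DEFINE_XARRAY(demo_definer_xa);

static struct demo_definer *demo_definer_get(u32 format_id)
{
	struct demo_definer *definer = xa_load(&demo_definer_xa, format_id);

	if (definer) {
		refcount_inc(&definer->refcount);	/* reuse existing */
		return definer;
	}

	definer = kzalloc(sizeof(*definer), GFP_KERNEL);
	if (!definer)
		return NULL;

	definer->format_id = format_id;
	refcount_set(&definer->refcount, 1);
	/* A FW object create call would go here. */

	if (xa_err(xa_store(&demo_definer_xa, format_id, definer,
			    GFP_KERNEL))) {
		kfree(definer);
		return NULL;
	}
	return definer;
}

static void demo_definer_put(struct demo_definer *definer)
{
	if (refcount_dec_and_test(&definer->refcount)) {
		xa_erase(&demo_definer_xa, definer->format_id);
		/* A FW object destroy call would go here. */
		kfree(definer);
	}
}
```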
-