- 17 Apr, 2024 3 commits
-
-
Jiapeng Chong authored
The print function dev_err() is redundant because platform_get_irq_byname() already prints an error. ./drivers/net/ipa/ipa_interrupt.c:300:2-9: line 300 is redundant because platform_get_irq() already prints an error. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=8756 Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240415031456.10805-1-jiapeng.chong@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
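The cleaned-up call site follows the usual pattern below; this is a hedged sketch with an illustrative IRQ name, not the actual ipa_interrupt.c diff.

```c
#include <linux/platform_device.h>

/* platform_get_irq_byname() already logs on failure, so an extra
 * dev_err() in the caller would only duplicate the message. */
static int example_get_irq(struct platform_device *pdev)
{
	int irq;

	irq = platform_get_irq_byname(pdev, "ipa");	/* IRQ name is illustrative */
	if (irq < 0)
		return irq;	/* no dev_err() here */

	return irq;
}
```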
-
Colin Ian King authored
The variable is being assigned a value and then is being re-assigned a new value in the next statement. The first assignment is redundant and can be removed. Cleans up clang scan build warning: net/handshake/tlshd.c:216:2: warning: Value stored to 'ret' is never read [deadcode.DeadStores] Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Jason Xing <kerneljasonxing@gmail.com> Acked-by: Chuck Lever <chuck.lever@oracle.com> Link: https://lore.kernel.org/r/20240415100713.483399-1-colin.i.king@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Krzysztof Kozlowski authored
The ID table already has the respective entry, and MODULE_DEVICE_TABLE already creates a proper alias for the SPI driver. Having another MODULE_ALIAS causes the alias to be duplicated. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20240414154929.127045-1-krzk@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
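The duplication looks like the sketch below; the chip name is made up, while the mechanism (MODULE_DEVICE_TABLE(spi, ...) emitting "spi:<name>" aliases) is the standard one.

```c
#include <linux/module.h>
#include <linux/spi/spi.h>

static const struct spi_device_id example_spi_id[] = {
	{ "example-chip" },	/* hypothetical device name */
	{ }
};
MODULE_DEVICE_TABLE(spi, example_spi_id);	/* already generates "spi:example-chip" */

/* MODULE_ALIAS("spi:example-chip");  -- redundant, duplicates the alias above */
```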
-
- 16 Apr, 2024 18 commits
-
-
Kuniyuki Iwashima authored
Commit dcf70df2 ("af_unix: Fix up unix_edge.successor for embryo socket.") added spin_lock(&unix_gc_lock) in the accept() path, and it caused a regression in a stress test as reported by kernel test robot. If the embryo socket is not part of the inflight graph, we need not hold the lock. To decide that in O(1) time and avoid the regression in the normal use case:
  1. add a new stat unix_sk(sk)->scm_stat.nr_unix_fds
  2. count the number of inflight AF_UNIX sockets in the receive queue under unix_state_lock()
  3. move the unix_update_edges() call under unix_state_lock()
  4. avoid locking if nr_unix_fds is 0 in unix_update_edges()
Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202404101427.92a08551-oliver.sang@intel.com Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240413021928.20946-1-kuniyu@amazon.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
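The core idea, as a simplified userspace analogy (this is not the af_unix code; the lock and structure are stand-ins): keep a per-socket count of inflight AF_UNIX fds so the update path can skip the global lock in O(1) when nothing is inflight.

```c
#include <pthread.h>

static pthread_mutex_t toy_gc_lock = PTHREAD_MUTEX_INITIALIZER;

struct toy_unix_sock {
	unsigned long nr_unix_fds;	/* bumped/dropped as fds are queued/dequeued */
};

static void toy_update_edges(struct toy_unix_sock *sk)
{
	if (!sk->nr_unix_fds)
		return;			/* O(1) fast path: nothing inflight, no lock */

	pthread_mutex_lock(&toy_gc_lock);
	/* walk and update the inflight graph here */
	pthread_mutex_unlock(&toy_gc_lock);
}
```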
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Paolo Abeni authored
Tony Nguyen says: ==================== This series contains updates to ice driver only. Lukasz removes unnecessary argument from ice_fdir_comp_rules(). Jakub adds support for ethtool 'ether' flow-type rules. Jake moves setting of VF MSI-X value to initialization function and adds tracking of VF relative MSI-X index.
  * '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
    ice: store VF relative MSI-X index in q_vector->vf_reg_idx
    ice: set vf->num_msix in ice_initialize_vf_entry()
    ice: Implement 'flow-type ether' rules
    ice: Remove unnecessary argument from ice_fdir_comp_rules()
==================== Link: https://lore.kernel.org/r/20240412210534.916756-1-anthony.l.nguyen@intel.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Paolo Abeni authored
Petr Machata says: ==================== selftests: Assortment of fixes This is a loose follow-up to the Kernel CI patchset posted recently. It contains various fixes that were supposed to be part of said patchset, but didn't fit due to its size. The latter 4 patches were written independently of the CI effort, but again didn't fit in their intended patchsets.
  - Patch #1 unifies code of two very similar looking functions, busywait() and slowwait().
  - Patch #2 adds sanity checks around the setting of NETIFS, which carries the list of interfaces to run on.
  - Patch #3 changes bail_on_lldpad() to SKIP instead of FAILing.
  - Patches #4 to #7 fix issues in selftests.
  - Patches #8 to #10 add topology diagrams to several selftests. This should have been part of the mlxsw leg of the NH group stats patches, but again, it did not fit in due to size.
==================== Link: https://lore.kernel.org/r/cover.1712940759.git.petrm@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
This test lacks a topology diagram, making the setup not obvious. Add one. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
This test lacks a topology diagram, making the setup not obvious. Add one. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
This test lacks a topology diagram, making the setup not obvious. Add one. Cc: David Ahern <dsahern@gmail.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Danielle Ratson authored
The ethtool dump includes the lanes parameter only when the port is up. Therefore, the ethtool_lanes.sh test waits for ports to come up before testing the lanes parameter. In some cases, the test considers the port as up, but the lanes parameter is not yet dumped although assumed to be, resulting in ethtool_lanes.sh test failure. To avoid that, ensure that the lanes parameter is indeed dumped by waiting for it explicitly, before performing the test cases. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
The tests use the constant TC_HIT_TIMEOUT when waiting on the counter values. However, they do not include tc_common.sh, where that constant is specified. The tests have been robust in our testing, which means the counter is bumped quickly enough that the updated value is available already on the first iteration. Nevertheless it's not correct. Include tc_common.sh as appropriate. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
Some log_test calls are done in a loop, and lead to the same log output. This might prove tricky to deduplicate for automated tools. Instead, roll the unique information from log_info into log_test, and drop the log_info. This also leads to more compact and clearer output. This change prompts rewording the messages so that they are not excessively long. Some check_err messages do not indicate what the issue actually is, so reword them to say it's a "ping with", as is the case in some other instances in this test. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
When rx-pktsNtoM reports a very low-valued range, such as 0-64, the calculated length of the packet will be -4, because the FCS is subtracted from the value. mausezahn then confuses the value for an option and bails out. As a result, the test dumps many mausezahn error messages. Instead, cap the value at 0. mausezahn will use an appropriate minimum packet length. Cc: Vladimir Oltean <vladimir.oltean@nxp.com> Cc: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
$ksft_skip is used to mark selftests that have tooling issues. The fact that LLDPad is running, but shouldn't, is one such issue. Therefore have bail_on_lldpad() bail with $ksft_skip. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
The variable should contain at least NUM_NETIFS interfaces, stored as keys named "p$i", for i in `seq $NUM_NETIFS`. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Petr Machata authored
Bodies of busywait() and slowwait() functions are almost identical. Extract the common code into a helper, loopy_wait, and convert busywait() and slowwait() into trivial wrappers. Moreover, the fact that slowwait() uses seconds for units is really not intuitive, and the comment does not help much. Instead make the unit part of the name of the argument to further clarify what units are expected. Cc: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com> Reviewed-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
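The real helpers are shell functions in the selftest library; the C sketch below only illustrates the refactor pattern under that assumption: one generic deadline loop (loopy_wait) plus thin wrappers whose parameter names spell out the unit.

```c
#include <stdbool.h>
#include <time.h>

/* Generic deadline loop; busywait()/slowwait() become trivial wrappers. */
static bool loopy_wait(long sleep_ns, long timeout_ms,
		       bool (*cond)(void *), void *arg)
{
	struct timespec start, now, nap = { .tv_nsec = sleep_ns };

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		if (cond(arg))
			return true;
		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000 +
		    (now.tv_nsec - start.tv_nsec) / 1000000 >= timeout_ms)
			return false;
		nanosleep(&nap, NULL);		/* no-op when sleep_ns == 0 */
	}
}

static bool busywait(long timeout_ms, bool (*cond)(void *), void *arg)
{
	return loopy_wait(0, timeout_ms, cond, arg);
}

/* The unit is in the parameter name, so callers can't confuse s and ms. */
static bool slowwait(long timeout_sec, bool (*cond)(void *), void *arg)
{
	return loopy_wait(100 * 1000 * 1000L, timeout_sec * 1000, cond, arg);
}
```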
-
Russell King (Oracle) authored
Convert mt753x to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Tested-by: Arınç ÜNAL <arinc.unal@arinc9.com> Link: https://lore.kernel.org/r/E1rvIco-006bQu-Fq@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Russell King (Oracle) authored
Convert lantiq_gswip to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. For lantiq_gswip, it means we end up with a common instance of phylink MAC operations that are shared between the different variants, rather than having duplicated initialisers in dsa_switch_ops. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/E1rvIcj-006bQo-B3@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Russell King (Oracle) authored
Convert qca8k to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/E1rvIce-006bQi-58@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Russell King (Oracle) authored
Convert ar9331 to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/E1rvIcZ-006bQc-0W@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Russell King (Oracle) authored
Convert sja1105 to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/E1rvIcT-006bQW-S3@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
- 15 Apr, 2024 19 commits
-
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== selftests: net: exercise page pool reporting via netlink Add a basic test for page pool netlink reporting. v1: https://lore.kernel.org/all/20240411012815.174400-1-kuba@kernel.org/ ==================== Link: https://lore.kernel.org/r/20240412141436.828666-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Add a Python test for the basic ops.
  # ./net/nl_netdev.py
  KTAP version 1
  1..3
  ok 1 nl_netdev.empty_check
  ok 2 nl_netdev.lo_check
  ok 3 nl_netdev.page_pool_check
  # Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/20240412141436.828666-7-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Using "with" on an entire driver test env is supported already, but it's also useful to use "with" on an individual nsim. Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/20240412141436.828666-6-kuba@kernel.orgSigned-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Instead of a summary line print the full exception. This makes debugging Python tests much easier. Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/20240412141436.828666-5-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Developing Python tests is a bit annoying because when a test fails we only print the fail message and no info about which exact check led to it. Print the location (the first line of this example is new):
  # At /root/ksft-net-drv/./net/nl_netdev.py line 38:
  # Check failed 0 != 10
  not ok 3 nl_netdev.page_pool_check
Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/20240412141436.828666-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
YNL currently reports None for an empty dump:
  $ cli.py ...netdev.yaml --dump page-pool-get
  None
This doesn't matter for the CLI, but when writing YNL based tests having to deal with either a list or None is annoying. Limit the None conversion to non-dump ops:
  $ cli.py ...netdev.yaml --dump page-pool-get
  []
Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://lore.kernel.org/r/20240412141436.828666-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Add very basic page pool use so that we can exercise the netlink uAPI in a selftest. The page pool gets created on open and destroyed on close, but we control the allocation of a single page through debugfs. This page may survive past the page pool itself so that we can test orphaned page pools. Link: https://lore.kernel.org/r/20240412141436.828666-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
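A hedged sketch of the open/close half of that idea, not the netdevsim patch itself: the priv structure and function names are made up, while page_pool_create()/page_pool_destroy() are the real core API.

```c
#include <linux/netdevice.h>
#include <linux/numa.h>
#include <linux/err.h>
#include <net/page_pool/types.h>

struct example_priv {			/* hypothetical driver private data */
	struct page_pool *pp;
};

static int example_open(struct net_device *dev)		/* .ndo_open */
{
	struct example_priv *priv = netdev_priv(dev);
	struct page_pool_params params = {
		.order		= 0,
		.pool_size	= 128,
		.nid		= NUMA_NO_NODE,
	};

	priv->pp = page_pool_create(&params);
	return PTR_ERR_OR_ZERO(priv->pp);
}

static int example_stop(struct net_device *dev)		/* .ndo_stop */
{
	struct example_priv *priv = netdev_priv(dev);

	/* a page still held elsewhere keeps the pool alive as "orphaned" */
	page_pool_destroy(priv->pp);
	return 0;
}
```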
-
Jakub Kicinski authored
Breno Leitao says: ==================== net: dqs: optimize if stall threshold is not set Here are four patches aimed at enhancing the Dynamic Queue Limit (DQL) subsystem within the networking stack. The first two commits involve code refactoring, while the third patch introduces the actual change. The fourth patch just improves the cache locality. Typically, when DQL is enabled, stall information is always populated through dql_queue_stall(). However, this information is only necessary if a stall threshold is set, which is stored in struct dql->stall_thrs. Although dql_queue_stall() is relatively inexpensive, it is not entirely free due to memory barriers and similar overheads. To optimize performance, refrain from calling dql_queue_stall() when no stall threshold is set, thus avoiding the processing of unnecessary information. v1: https://lore.kernel.org/all/20240404145939.3601097-1-leitao@debian.org/ ==================== Link: https://lore.kernel.org/r/20240411192241.2498631-1-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Breno Leitao authored
With the previous change, dql->stall_thrs will be in the hot path (at queue side), even if DQS is disabled. The other fields accessed in this function (last_obj_cnt and num_queued) are in the first cache line; let's move this field (stall_thrs) to the very first cache line as well, since there is a hole there. This does not change the structure size, since it moves a short (2 bytes) into a 4-byte hole in the first cache line. This is the new structure format now:
  struct dql {
          unsigned int            num_queued;
          unsigned int            last_obj_cnt;
          ...
          short unsigned int      stall_thrs;
          /* XXX 2 bytes hole, try to pack */
          ...
          /* --- cacheline 1 boundary (64 bytes) --- */
          ...
          /* Longest stall detected, reported to user */
          short unsigned int      stall_max;
          /* XXX 2 bytes hole, try to pack */
  };
Also, read the stall_thrs (now in the very first cache line) earlier, together with dql->num_queued (also in the first cache line). Suggested-by: Jakub Kicinski <kuba@kernel.org> Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://lore.kernel.org/r/20240411192241.2498631-5-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
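A compilable, userspace-only illustration of the placement argument (the struct here is simplified and not struct dql; the field sizes are chosen so the asserts hold):

```c
#include <stddef.h>
#include <assert.h>

struct dql_like {
	unsigned int	num_queued;		/* written on the queue side */
	unsigned int	last_obj_cnt;		/* written on the queue side */
	unsigned short	stall_thrs;		/* moved into the 4-byte hole */
	/* 2-byte hole */
	unsigned int	other_hot_fields[13];	/* stand-in for the rest of the line */
	/* --- cacheline 1 boundary (64 bytes) --- */
	unsigned short	stall_max;		/* rarely touched, stays later */
};

/* The threshold read on the hot path shares the first cache line... */
static_assert(offsetof(struct dql_like, stall_thrs) < 64, "stall_thrs in cacheline 0");
/* ...while the rarely-written field stays out of it. */
static_assert(offsetof(struct dql_like, stall_max) >= 64, "stall_max in cacheline 1");
```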
-
Breno Leitao authored
When Dynamic Queue Limit (DQL) is set, it always populates stall information through dql_queue_stall(). However, this information is only necessary if a stall threshold is set, stored in struct dql->stall_thrs. dql_queue_stall() is cheap, but not free, since it does have memory barriers and so forth. Do not call dql_queue_stall() if there is no stall threshold set, and save some CPU cycles. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://lore.kernel.org/r/20240411192241.2498631-4-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
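The shape of the change, as a minimal sketch with simplified names (not the kernel code):

```c
struct dql_min {
	unsigned int	num_queued;
	unsigned short	stall_thrs;	/* 0 means stall reporting is disabled */
};

static void queue_stall(struct dql_min *dql)
{
	/* memory barriers, timestamps and the stall-history bookkeeping */
}

static void dql_queued_like(struct dql_min *dql, unsigned int count)
{
	dql->num_queued += count;

	if (dql->stall_thrs)		/* skip the bookkeeping entirely when unset */
		queue_stall(dql);
}
```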
-
Breno Leitao authored
The dql_queued() function currently handles both queuing object counts and populating bitmaps for reporting stalls. This commit splits the bitmap population into a separate function, allowing for conditional invocation in scenarios where the feature is disabled. This refactor maintains functionality while improving code organization. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://lore.kernel.org/r/20240411192241.2498631-3-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Breno Leitao authored
If the dql_queued() function receives an invalid argument, WARN about it and continue, instead of crashing the kernel. This was raised by checkpatch while refactoring this code (see the following patch/commit): WARNING: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://lore.kernel.org/r/20240411192241.2498631-2-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
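The check presumably ends up looking roughly like the sketch below (not the exact diff); DQL_MAX_OBJECT is the existing sanity bound the old BUG_ON() used.

```c
#include <linux/bug.h>
#include <linux/dynamic_queue_limits.h>

static inline void dql_queued_checked(struct dql *dql, unsigned int count)
{
	if (WARN_ON_ONCE(count > DQL_MAX_OBJECT))
		return;			/* warn once and recover instead of BUG() */

	/* normal dql_queued() accounting continues here */
}
```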
-
David S. Miller authored
Julien Panis says: ==================== Add minimal XDP support to TI AM65 CPSW Ethernet driver This patch adds XDP support to TI AM65 CPSW Ethernet driver. The following features are implemented: NETDEV_XDP_ACT_BASIC, NETDEV_XDP_ACT_REDIRECT, and NETDEV_XDP_ACT_NDO_XMIT. Zero-copy and non-linear XDP buffer support is NOT implemented. Besides, the page pool memory model is used to get better performance. ==================== Signed-off-by: Julien Panis <jpanis@baylibre.com>
-
Julien Panis authored
This patch adds XDP (eXpress Data Path) support to TI AM65 CPSW Ethernet driver. The following features are implemented:
  - NETDEV_XDP_ACT_BASIC (XDP_PASS, XDP_TX, XDP_DROP, XDP_ABORTED)
  - NETDEV_XDP_ACT_REDIRECT (XDP_REDIRECT)
  - NETDEV_XDP_ACT_NDO_XMIT (ndo_xdp_xmit callback)
The page pool memory model is used to get better performance. Below are benchmark results obtained for the receiver with iperf3 default parameters:
  - Without page pool: 495 Mbits/sec
  - With page pool: 605 Mbits/sec (actually 610 Mbits/sec, with a 5 Mbits/sec loss due to extra processing in the hot path to handle XDP)
Signed-off-by: Julien Panis <jpanis@baylibre.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
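For orientation, the RX-side verdict handling in such a driver usually follows the pattern sketched below. This is schematic, not the am65-cpsw code: example_xmit_xdpf() and example_recycle() are hypothetical placeholders, while bpf_prog_run_xdp(), xdp_convert_buff_to_frame(), xdp_do_redirect(), trace_xdp_exception() and bpf_warn_invalid_xdp_action() are the standard core helpers.

```c
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

/* Hypothetical driver helpers, only here so the sketch reads end to end. */
static int example_xmit_xdpf(struct net_device *ndev, struct xdp_frame *xdpf);
static void example_recycle(struct net_device *ndev, struct xdp_buff *xdp);

/* Schematic RX verdict handling for an XDP-capable driver. */
static int example_run_xdp(struct net_device *ndev, struct bpf_prog *prog,
			   struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		return act;				/* build an skb and pass it up */
	case XDP_TX: {
		struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);

		if (!xdpf || example_xmit_xdpf(ndev, xdpf))
			goto drop;
		return act;
	}
	case XDP_REDIRECT:
		if (xdp_do_redirect(ndev, xdp, prog))
			goto drop;
		return act;
	default:
		bpf_warn_invalid_xdp_action(ndev, prog, act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(ndev, prog, act);
		fallthrough;
	case XDP_DROP:
drop:
		example_recycle(ndev, xdp);		/* page goes back to the pool */
		return XDP_DROP;
	}
}
```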
-
Julien Panis authored
This patch introduces a member and the related accessors which can be used to store descriptor-specific additional information. This member can store, for instance, an ID to differentiate an skb TX buffer type from an xdpf TX buffer type. Signed-off-by: Julien Panis <jpanis@baylibre.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julien Panis authored
This patch adds accessors for desc_size and cpumem members. They may be used, for instance, to compute a descriptor index. Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Julien Panis <jpanis@baylibre.com> Signed-off-by: David S. Miller <davem@davemloft.net>
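As a rough, userspace-compilable sketch of the kind of computation those accessors enable (the pool and field names are made up, not the driver's descriptor-pool API):

```c
#include <stdint.h>
#include <stddef.h>

struct toy_desc_pool {
	void	*cpumem;	/* base address of the descriptor memory */
	size_t	 desc_size;	/* size of one hardware descriptor */
};

/* Descriptor index = (descriptor address - pool base) / descriptor size. */
static size_t toy_desc_index(const struct toy_desc_pool *pool, const void *desc)
{
	return ((uintptr_t)desc - (uintptr_t)pool->cpumem) / pool->desc_size;
}
```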
-
Gabriel Krisman Bertazi authored
We've observed a 7-12% performance regression in iperf3 UDP ipv4 and ipv6 tests with multiple sockets on Zen3 cpus, which we traced back to commit f0ea27e7 ("udp: re-score reuseport groups when connected sockets are present"). The failing tests were those that would spawn UDP sockets per-cpu on systems that have a high number of cpus. Unsurprisingly, it is not caused by the extra re-scoring of the reused socket, but by the compiler no longer inlining compute_score, once it has the extra call site in udp4_lib_lookup2. This is augmented by the "Safe RET" mitigation for SRSO, needed in our Zen3 cpus. We could just explicitly inline it, but compute_score() is quite a large function, around 300b. Inlining in two sites would almost double udp4_lib_lookup2, which is a silly thing to do just to work around a mitigation. Instead, this patch shuffles the code a bit to avoid the multiple calls to compute_score. Since it is a static function used in one spot, the compiler can safely fold it in, as it did before, without increasing the text size. With this patch applied I ran my original iperf3 testcases. The failing cases all looked like this (ipv4):
  iperf3 -c 127.0.0.1 --udp -4 -f K -b $R -l 8920 -t 30 -i 5 -P 64 -O 2
where $R is either 1G/10G/0 (max, unlimited). I ran 3 times each. baseline is v6.9-rc3. harmean == harmonic mean; CV == coefficient of variation.
  ipv4:               1G                   10G                  MAX
               HARMEAN (CV)          HARMEAN (CV)         HARMEAN (CV)
    baseline   1743852.66(0.0208)    1725933.02(0.0167)   1705203.78(0.0386)
    patched    1968727.61(0.0035)    1962283.22(0.0195)   1923853.50(0.0256)
  ipv6:               1G                   10G                  MAX
               HARMEAN (CV)          HARMEAN (CV)         HARMEAN (CV)
    baseline   1729020.03(0.0028)    1691704.49(0.0243)   1692251.34(0.0083)
    patched    1900422.19(0.0067)    1900968.01(0.0067)   1568532.72(0.1519)
This restores the performance we had before the change above with this benchmark. We obviously don't expect any real impact when mitigations are disabled, but just to be sure it also doesn't regress:
  mitigations=off ipv4:  1G                  10G                  MAX
               HARMEAN (CV)          HARMEAN (CV)         HARMEAN (CV)
    baseline   3230279.97(0.0066)    3229320.91(0.0060)   2605693.19(0.0697)
    patched    3242802.36(0.0073)    3239310.71(0.0035)   2502427.19(0.0882)
Cc: Lorenz Bauer <lmb@isovalent.com> Fixes: f0ea27e7 ("udp: re-score reuseport groups when connected sockets are present") Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Breno Leitao authored
Commit 3e2f544d ("net: get stats64 if device if driver is configured") moved the callback to dev_get_tstats64() to net core, so, unless the driver is doing some custom stats collection, it does not need to set .ndo_get_stats64. Since this driver now relies on NETDEV_PCPU_STAT_TSTATS, it doesn't need to set the generic dev_get_tstats64() as its .ndo_get_stats64 function pointer. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Breno Leitao authored
With commit 34d21de9 ("net: Move {l,t,d}stats allocation to core and convert veth & vrf"), stats allocation can be done by the net core instead of in this driver. With this new approach, the driver doesn't have to bother with error handling (allocation failure checking, making sure free happens in the right spot, etc). This is the core's responsibility now. Remove the allocation in ip6_gre and leverage the network core allocation instead. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
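Both of these cleanups lean on the same core mechanism; below is a hedged sketch of the driver-side result (the setup function name is hypothetical; the pcpu_stat_type field and the core's dev_get_tstats64() fallback are the pieces the commit messages above describe).

```c
#include <linux/netdevice.h>

static void example_tunnel_setup(struct net_device *dev)	/* hypothetical */
{
	/* The core allocates dev->tstats for us and frees it on unregister. */
	dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;

	/* No alloc_percpu()/free_percpu() in the driver, and no
	 * .ndo_get_stats64 = dev_get_tstats64 entry in the netdev_ops:
	 * dev_get_stats() falls back to the tstats counters itself. */
}
```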
-