- 11 Aug, 2024 4 commits
-
-
James Chapman authored
To handle colliding l2tpv3 session IDs, l2tp_v3_session_get searches a hashed list keyed by ID and sk. Although unlikely, if hash keys collide, it is possible that hash_for_each_possible loops over a session which doesn't have the ID that we are searching for. So check for session ID match when looping over possible hash key matches. Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
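A minimal sketch of the fixed lookup, assuming invented table and field names (the real l2tp code derives the hash key from both the ID and the sk, and differs in detail):

  #include <linux/hashtable.h>
  #include <net/sock.h>

  static DEFINE_HASHTABLE(session_hlist, 8);

  struct session_stub {
          struct hlist_node hlist;
          struct sock *sk;
          u32 session_id;
  };

  static struct session_stub *session_get(const struct sock *sk, u32 session_id)
  {
          struct session_stub *session;

          /* Hash keys can collide, so a bucket may hold sessions with other
           * IDs; match on both sk and session ID before returning.
           */
          hash_for_each_possible(session_hlist, session, hlist, session_id) {
                  if (session->sk == sk && session->session_id == session_id)
                          return session;
          }
          return NULL;
  }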
-
James Chapman authored
l2tp_ip[6] have always used global socket tables. It is therefore not possible to create l2tpip sockets in different namespaces with the same socket address. To support this, move l2tpip socket tables to pernet data. Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
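A sketch of the pernet move, with an invented l2tp_ip_net layout; only the register_pernet mechanics are meant to be accurate:

  #include <net/net_namespace.h>
  #include <net/netns/generic.h>

  static unsigned int l2tp_ip_net_id __read_mostly;

  struct l2tp_ip_net {
          rwlock_t lock;
          struct hlist_head bind_table;   /* was a global before */
  };

  static struct l2tp_ip_net *l2tp_ip_pernet(const struct net *net)
  {
          return net_generic(net, l2tp_ip_net_id);
  }

  static __net_init int l2tp_ip_net_init(struct net *net)
  {
          struct l2tp_ip_net *pn = l2tp_ip_pernet(net);

          rwlock_init(&pn->lock);
          INIT_HLIST_HEAD(&pn->bind_table);
          return 0;
  }

  static struct pernet_operations l2tp_ip_net_ops = {
          .init = l2tp_ip_net_init,
          .id   = &l2tp_ip_net_id,
          .size = sizeof(struct l2tp_ip_net),
  };

  /* registered once at module init, e.g. register_pernet_device(&l2tp_ip_net_ops) */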
-
James Chapman authored
Update l2tp to remove the inline keyword from several functions in C sources, since this is now discouraged. Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
James Chapman authored
l2tp no longer uses sk_user_data in tunnel sockets and now manages tunnel/session lifetimes slightly differently. Update docs to cover this. CC: linux-doc@vger.kernel.org CC: corbet@lwn.net Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Aug, 2024 24 commits
-
-
Jakub Kicinski authored
Tariq Toukan says: ==================== mlx5 misc patches 2024-08-08 This patchset contains multiple enhancements from the team to the mlx5 core and Eth drivers. Patch #1 by Chris bumps a defined value to permit more devices doing TC offloads. Patch #2 by Jianbo adds an IPsec fast-path optimization to replace the slow async handling. Patches #3 and #4 by Jianbo add TC offload support for complicated rules to overcome a firmware limitation. Patch #5 by Gal unifies the access macros for advertised/supported link modes. Patches #6 to #9 by Gal add extack messages in ethtool ops to replace prints to the kernel log. Patch #10 by Cosmin switches to the verb 'update' instead of 'replace' to better reflect the operation. Patch #11 by Cosmin exposes an update connection tracking operation to replace the assumed delete+add implementation. ==================== Link: https://patch.msgid.link/20240808055927.2059700-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Cosmin Ratiu authored
Previously, replacing a connection tracking steering entry was done by adding a new rule (with the same tag but possibly different mod hdr actions/labels) then removing the old rule. This approach doesn't work in hardware steering because two steering entries with the same tag cannot coexist in a hardware steering table. This commit prepares for that by adding a new ct_rule_update operation on the ct_fs_ops struct which is used instead of add+delete. Implementations for both dmfs (firmware steering) and smfs (software steering) are provided, which simply add the new rule and delete the old one. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-12-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
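A rough sketch of the ops-table shape this describes; everything except the ct_rule_update idea (argument types, other members) is assumed:

  /* simplified stand-in for mlx5's ct_fs_ops */
  struct ct_fs_ops_stub {
          void *(*ct_rule_add)(void *fs, void *spec, void *attr);
          void  (*ct_rule_del)(void *fs, void *rule);
          /* new: update a rule with the same tag in one operation, since
           * hardware steering cannot hold two entries with the same tag;
           * the dmfs/smfs implementations do add-new then delete-old.
           */
          void *(*ct_rule_update)(void *fs, void *rule, void *spec, void *attr);
  };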
-
Cosmin Ratiu authored
Offloaded rules can be updated with a new modify header action containing a changed restore cookie. This was done using the verb 'replace', while in some configurations 'update' is a better fit. This commit renames the functions used to reflect that. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-11-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
In case of errors in get module eeprom by page, reflect them through extack instead of a dmesg print. While at it, make the messages more human-friendly. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-10-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
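Illustrative shape of an extack-based report in this ethtool op (not the mlx5 code; the address check is invented):

  #include <linux/ethtool.h>
  #include <linux/netlink.h>

  static int example_get_module_eeprom_by_page(struct net_device *dev,
                                               const struct ethtool_module_eeprom *page,
                                               struct netlink_ext_ack *extack)
  {
          if (page->i2c_address != 0x50 && page->i2c_address != 0x51) {
                  NL_SET_ERR_MSG_MOD(extack, "EEPROM I2C address is not supported");
                  return -EINVAL;
          }
          /* ... read the page into page->data ... */
          return page->length;      /* bytes read on success */
  }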
-
Gal Pressman authored
In case of errors in set coalesce, reflect them through extack instead of a dmesg print. While at it, make the messages more human-friendly. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-9-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
In case of errors in get coalesce, reflect them through extack instead of a dmesg print. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-8-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
In case of errors in set ringparams, reflect them through extack instead of a dmesg print. While at it, make the messages more human-friendly and remove two redundant checks that are already validated by the core. Signed-off-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-7-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
Use the bitmap operations when accessing the advertised/supported link modes and remove places that access them as arrays of unsigned longs (the underlying implementation of the bitmap); this makes the code much more readable and clear. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
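A sketch of the readability win, using the in-tree linkmode helpers on an invented caps function:

  #include <linux/ethtool.h>
  #include <linux/linkmode.h>

  static void example_caps(struct ethtool_link_ksettings *ks)
  {
          /* before: poking ks->link_modes.supported[0] with shifts; after: */
          linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
                           ks->link_modes.supported);
          linkmode_copy(ks->link_modes.advertising, ks->link_modes.supported);
  }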
-
Jianbo Liu authored
Firmware has the limitation that it cannot offload a rule with rewrite and mirror to internal and external destinations simultaneously. This patch adds a workaround for this issue. Here the destination array is split again, just as in the previous commit, but after the action indexed by split_count - 1. An extra rule is added for the leftover destinations. Such a rule can be offloaded even if there are both internal and external destinations, because the header rewrite is left in the original FTE. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jianbo Liu authored
To offload the encap rule when the tunnel IP is configured on an openvswitch internal port, the driver needs to overwrite the vport metadata in reg_c0 with the value assigned to the internal port, then forward packets to the root table to be processed again by the rules matching on the metadata for such internal ports. When such a rule is combined with header rewrite and mirror, openvswitch generates a rule like the following, because it resets the mirror after packets are modified: in_port(enp8s0f0npf0sf1),.., actions:enp8s0f0npf0sf2,set(tunnel(...)),set(ipv4(...)),vxlan_sys_4789,enp8s0f0npf0sf2 The split_count was introduced before to support rewrite and mirror: the driver splits the rule into two different hardware rules in order to offload it. But that is not enough to offload the above complicated rule, because of limitations in both the driver and the firmware. To resolve this issue, the destination array is split again after the destination indexed by split_count. An extra rule is added for the leftover destinations (in the above example, enp8s0f0npf0sf2) and is inserted into the post_act table. An extra destination is added to the original rule to forward to the post_act table, so the extra mirror is done there. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jianbo Liu authored
In commit a2a73ea1 ("net/mlx5e: Don't listen to remove flows event"), the remove_flow_enable event was removed, and the hard limit usually relies on the software mechanism added in commit b2f7b01d ("net/mlx5e: Simulate missing IPsec TX limits hardware functionality"). But the delayed work is rescheduled every second, which is slow for fast traffic. As a result, traffic can't be blocked even when it reaches the hard limit, which usually happens when the soft and hard limits are very close. In reality this won't happen, because the soft limit is much lower than the hard limit. But, as an optimization for RX to block traffic when the hard limit is reached, remove_flow_enable needs to be set. When remove flow is enabled, the IPSEC HARD_LIFETIME ASO syndrome will be set in the metadata defined in the ASO return register if packets reach the hard lifetime threshold, and those packets are dropped immediately by the steering table. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Chris Mi authored
Currently MLX5E_TC_MAX_INT_PORT_NUM is 8. Usually an int port has one ingress rule and one egress rule. But sometimes a temporary rule can be offloaded as well, eg: recirc_id(0),in_port(br-phy),eth(src=10:70:fd:87:57:c0,dst=33:33:00:00:00:16), eth_type(0x86dd),ipv6(frag=no), packets:2, bytes:180, used:0.060s, actions:enp8s0f0 If each int port device offloads 3 rules, only 2 devices can offload; any further devices will hit the limit and fail to offload. This is insufficient for customers, so increase the number to 32. Signed-off-by: Chris Mi <cmi@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20240808055927.2059700-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Rosen Penev authored
f1294617 removed the custom function for ndo_eth_ioctl and used the standard phy_do_ioctl, which calls phy_mii_ioctl. However, this driver has since been ported to phylink, where it makes more sense to call phylink_mii_ioctl. Bring back a custom function that calls phylink_mii_ioctl. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20240807215834.33980-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
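Roughly the shape of the restored callback; the private struct layout is assumed:

  #include <linux/netdevice.h>
  #include <linux/phylink.h>

  struct example_priv {
          struct phylink *phylink;
  };

  static int example_eth_ioctl(struct net_device *ndev, struct ifreq *ifr, int cmd)
  {
          struct example_priv *priv = netdev_priv(ndev);

          if (!netif_running(ndev))
                  return -EINVAL;
          return phylink_mii_ioctl(priv->phylink, ifr, cmd);
  }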
-
Jakub Kicinski authored
Nick Child says: ==================== ibmvnic: ibmvnic rr patchset v1 - https://lore.kernel.org/netdev/20240801212340.132607-1-nnac123@linux.ibm.com/ v2 - https://lore.kernel.org/netdev/20240806193706.998148-1-nnac123@linux.ibm.com/ ==================== Link: https://patch.msgid.link/20240807211809.1259563-1-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nick Child authored
During initialization with the vnic server, a bitstring is communicated to the client regarding header info needed during CSO (see "VNIC Capabilities" in PAPR). Most of the time, to be safe, the vnic server requests header info for CSO. When header info is needed, multiple TX descriptors are required per skb; this limits the driver to using send_subcrq_indirect instead of send_subcrq_direct. Previously, the vnic server's request for header info was ignored. This allowed the use of send_subcrq_direct. Transmissions were successful because the bitstring returned by the vnic server is broad and overcautious. It was observed that mlx backing devices could actually transmit and handle CSO packets without the vnic server receiving header info (despite the fact that the bitstring requested it). There was a trust issue: the bitstring was overcautious. This extra precaution (requesting header info when the backing device may not use it) comes at the cost of performance (using direct vs indirect hcalls has a 30% delta in small-packet RR transaction rate). So the vnic server team has been asked to ensure that the bitstring is more exact. In the meantime, disable CSO when it is possible to use the skb in the send_subcrq_direct path. In other words, calculate the checksum before handing the packet to FW when the packet is not segmented and xmit_more is false. Since the code path is only possible if the skb is non-GSO and xmit_more is false, the cost of doing the checksum in the send_subcrq_direct path is minimal: any large segmented skb will have xmit_more set to true more frequently, and it is inexpensive to checksum a small skb. The worst-case workload is a 9000 MTU TCP_RR test with close-to-MTU-sized packets (and TSO off), which allows xmit_more to be false more frequently and opens the code path up to send_subcrq_direct. Observing trace data (graph-time = 1) and packet rate with this workload shows minimal performance degradation:

1. NIC does checksum with headers, safely use send_subcrq_indirect:
   - Packet rate: 631k txs
   - Trace data:
     ibmvnic_xmit = 44344685.87 us / 6234576 hits = AVG 7.11 us
     skb_checksum_help = 4.07 us / 2 hits = AVG 2.04 us
       ^ notice hits, tracing this just for reassurance
     ibmvnic_tx_scrq_flush = 33040649.69 us / 5638441 hits = AVG 5.86 us
     send_subcrq_indirect = 37438922.24 us / 6030859 hits = AVG 6.21 us

2. NIC does checksum without headers, dangerously use send_subcrq_direct:
   - Packet rate: 831k txs
   - Trace data:
     ibmvnic_xmit = 48940092.29 us / 8187630 hits = AVG 5.98 us
     skb_checksum_help = 2.03 us / 1 hits = AVG 2.03 us
     ibmvnic_tx_scrq_flush = 31141879.57 us / 7948960 hits = AVG 3.92 us
     send_subcrq_indirect = 8412506.03 us / 728781 hits = AVG 11.54 us
       ^ notice hits is much lower b/c send_subcrq_direct was called
         (send_subcrq_direct itself wasn't traceable)

3. Driver does checksum, safely use send_subcrq_direct (THIS PATCH):
   - Packet rate: 829k txs
   - Trace data:
     ibmvnic_xmit = 56696077.63 us / 8066168 hits = AVG 7.03 us
     skb_checksum_help = 8587456.16 us / 7526072 hits = AVG 1.14 us
     ibmvnic_tx_scrq_flush = 30219545.55 us / 7782409 hits = AVG 3.88 us
     send_subcrq_indirect = 8638326.44 us / 763693 hits = AVG 11.31 us

When the bitstring ever specifies that CSO does not require headers (dependent on VIOS vnic server changes), this patch should be removed and replaced with one that investigates the bitstring before using send_subcrq_direct.
Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-8-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
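A sketch of the gating logic described above; function and label names are assumed and the descriptor handling is elided:

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          bool use_direct = !skb_is_gso(skb) && !netdev_xmit_more();

          /* resolve the checksum in software only when the cheaper direct
           * hcall will be used and FW therefore won't see header info */
          if (use_direct && skb->ip_summed == CHECKSUM_PARTIAL &&
              skb_checksum_help(skb))
                  goto drop;

          /* ... build descriptors; use_direct selects H_SEND_SUB_CRQ ... */
          return NETDEV_TX_OK;
  drop:
          dev_kfree_skb_any(skb);
          return NETDEV_TX_OK;
  }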
-
Nick Child authored
Byte Queue Limits depends on dql_completed being called once per tx completion round in order to adjust its algorithm appropriately. The dql->limit value is an approximation of the amount of bytes that the NIC can consume per irq interval. If this approximation is too high then the NIC will become over-saturated. Too low and the NIC will starve. The dql->limit depends on dql->prev-* stats to calculate an optimal value. If dql_completed() is called more than once per irq handler then those prev-* values become unreliable (because they are not an accurate representation of the previous state of the NIC), resulting in a sub-optimal limit value. Therefore, move the call to netdev_tx_completed_queue() to the end of ibmvnic_complete_tx(). When performing 150 sessions of TCP rr (request-response 1 byte packets) workloads, one could observe:

Previously:
  - limit and inflight values hovering around 130
  - transaction rate of around 750k pps

Now:
  - limit rises and falls in response to inflight (130-900)
  - transaction rate of around 1M pps (33% improvement)

Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-7-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
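A sketch of the "once per completion round" rule (driver details elided, names assumed):

  #include <linux/netdevice.h>

  static void example_complete_tx(struct net_device *dev, int queue)
  {
          struct netdev_queue *txq = netdev_get_tx_queue(dev, queue);
          unsigned int pkts = 0, bytes = 0;

          /* walk all descriptors completed in this round, freeing skbs
           * and accumulating pkts++ / bytes += skb->len as we go ...
           */

          /* report once, so dql sees one consistent snapshot per round */
          netdev_tx_completed_queue(txq, pkts, bytes);
  }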
-
Nick Child authored
Firmware supports two hcalls to send a sub-crq request: H_SEND_SUB_CRQ_INDIRECT and H_SEND_SUB_CRQ. The indirect hcall allows for submission of batched messages while the other hcall is limited to only one message. This protocol is defined in PAPR section 17.2.3.3. Previously, the ibmvnic xmit function only used the indirect hcall. This allowed the driver to batch its skbs. A single skb can occupy a few entries per hcall depending on whether FW requires skb header information. The FW only needs header information if the packet is segmented. By this logic, if an skb is not GSO then it can fit in one sub-crq message and is therefore a candidate for H_SEND_SUB_CRQ. Batching skb transmission is only useful when there are more packets coming down the line (ie netdev_xmit_more is true). As it turns out, H_SEND_SUB_CRQ induces less latency than H_SEND_SUB_CRQ_INDIRECT. Therefore, use H_SEND_SUB_CRQ where appropriate. Small latency gains were seen when doing TCP_RR_150 (request/response workload). Ftrace results (graph-time=1):

Previous:
  ibmvnic_xmit          = 29618270.83 us / 8860058.0 hits = AVG 3.34
  ibmvnic_tx_scrq_flush = 21972231.02 us / 6553972.0 hits = AVG 3.35

Now:
  ibmvnic_xmit          = 22153350.96 us / 8438942.0 hits = AVG 2.63
  ibmvnic_tx_scrq_flush = 15858922.4 us / 6244076.0 hits = AVG 2.54

Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-6-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
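The selection reduces to something like this sketch; the wrapper names echo the commit's wording but the signatures are invented:

  #include <linux/types.h>

  struct example_adapter;
  int send_subcrq_direct(struct example_adapter *a, u64 handle, u64 *entry);
  int send_subcrq_indirect(struct example_adapter *a, u64 handle,
                           u64 *entries, int n);

  static int example_send(struct example_adapter *a, u64 handle, u64 *entries,
                          int num_entries, bool xmit_more)
  {
          /* a lone non-GSO descriptor with nothing queued behind it fits
           * the cheaper single-message hcall */
          if (num_entries == 1 && !xmit_more)
                  return send_subcrq_direct(a, handle, entries);  /* H_SEND_SUB_CRQ */

          return send_subcrq_indirect(a, handle, entries,
                                      num_entries);  /* H_SEND_SUB_CRQ_INDIRECT */
  }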
-
Nick Child authored
send_subcrq_[in]direct() already has a dma memory barrier. Remove the earlier one. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-5-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nick Child authored
Previously when creating the header descriptors, the driver would:
  1. allocate a temporary buffer on the stack (in build_hdr_descs_arr)
  2. memcpy the header info into the temporary buffer (in build_hdr_data)
  3. memcpy the temp buffer into a local variable (in create_hdr_descs)
  4. copy the local variable into the return buffer (in create_hdr_descs)

Since there is no opportunity for errors during this process, the temp buffer is not needed and the work can be done on the return buffer directly. Repurpose build_hdr_data() to only calculate the header lengths; rename it to get_hdr_lens(). Edit create_hdr_descs() to read from the skb directly and copy directly into the return buffer. The process now involves fewer memory and write operations while also being more readable. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-4-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nick Child authored
Use the header length helper functions rather than calculating the lengths within the driver. Defined functions exist for the mac and network headers (skb_mac_header_len and skb_network_header_len), but no such function exists for the transport header length. Also, hdr_data was already zeroed during allocation, so there is no need to memset it again. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-3-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
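A sketch using those helpers; deriving the transport header length per protocol (TCP here) is the part the driver still does by hand:

  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/tcp.h>

  static unsigned int example_hdr_len(const struct sk_buff *skb)
  {
          unsigned int len = skb_mac_header_len(skb) +
                             skb_network_header_len(skb);

          if (skb->protocol == htons(ETH_P_IP) &&
              ip_hdr(skb)->protocol == IPPROTO_TCP)
                  len += tcp_hdrlen(skb);  /* no generic transport helper */
          return len;
  }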
-
Nick Child authored
Previously, the driver would replenish the rx pool if the polling function consumed less than the budget. The logic being that the driver did not exhaust its budget, so that must mean the driver is not busy and has cycles to spare for replenishing the pool. So pool replenishment happened on every poll which did not consume the budget. This can be very costly during request-response tests. In fact, an extra ~100pps can be seen in TCP_RR_150 tests when we remove this conditional. Trace results (ftrace, graph-time=1) for the poll function are below:

Previous results:
  ibmvnic_poll      = 64951846.0 us / 4167628.0 hits = AVG 15.58
  replenish_rx_pool = 17602846.0 us / 4710437.0 hits = AVG 3.74

Now:
  ibmvnic_poll      = 57673941.0 us / 4791737.0 hits = AVG 12.04
  replenish_rx_pool = 3938171.6 us / 4314.0 hits = AVG 912.88

While the replenish function takes longer, it is hit far less frequently, meaning the ibmvnic_poll function is, on average, faster. Furthermore, this change does not have a negative effect on performance bandwidth/latency measurements. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Link: https://patch.msgid.link/20240807211809.1259563-2-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christophe Leroy authored
Building fs_enet on powerpc e500 leads to the following warning:

  CC drivers/net/ethernet/freescale/fs_enet/mac-scc.o
  In file included from ./include/linux/build_bug.h:5,
                   from ./include/linux/container_of.h:5,
                   from ./include/linux/list.h:5,
                   from ./include/linux/module.h:12,
                   from drivers/net/ethernet/freescale/fs_enet/mac-scc.c:15:
  drivers/net/ethernet/freescale/fs_enet/mac-scc.c: In function 'allocate_bd':
  ./include/linux/err.h:28:49: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     28 | #define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
        |                                                 ^
  ./include/linux/compiler.h:77:45: note: in definition of macro 'unlikely'
     77 | # define unlikely(x) __builtin_expect(!!(x), 0)
        |                                             ^
  drivers/net/ethernet/freescale/fs_enet/mac-scc.c:138:13: note: in expansion of macro 'IS_ERR_VALUE'
    138 |         if (IS_ERR_VALUE(fep->ring_mem_addr))
        |             ^~~~~~~~~~~~

This is due to fep->ring_mem_addr not being a pointer but a DMA address, which is 64 bits on that platform while pointers are 32 bits, as this is a 32-bit platform with a wider physical bus. However, using fep->ring_mem_addr here is just wrong, because cpm_muram_alloc() returns an offset within the muram and not a physical address directly. So use fpi->dpram_offset instead. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/ec67ea3a3bef7e58b8dc959f7c17d405af0d27e4.1723101144.git.christophe.leroy@csgroup.eu Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
zhangxiangqian authored
The usbnet_link_change function is not called if the link has not changed.

  ...
  [16913.807393][ 3] cdc_ether 1-2:2.0 enx00e0995fd1ac: kevent 12 may have been dropped
  [16913.822266][ 2] cdc_ether 1-2:2.0 enx00e0995fd1ac: kevent 12 may have been dropped
  [16913.826296][ 2] cdc_ether 1-2:2.0 enx00e0995fd1ac: kevent 11 may have been dropped
  ...

kevent 11 is scheduled too frequently and may affect other event schedules. Signed-off-by: zhangxiangqian <zhangxiangqian@kylinos.cn> Acked-by: Oliver Neukum <oneukum@suse.com> Link: https://patch.msgid.link/1723109985-11996-1-git-send-email-zhangxiangqian@kylinos.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
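A sketch of the guard this implies, checking carrier state before scheduling the kevent (illustrative only, not the actual cdc_ether/usbnet diff):

  #include <linux/usb/usbnet.h>

  static void example_status(struct usbnet *dev, bool link)
  {
          /* only schedule the link-change kevent on an actual flip */
          if (netif_carrier_ok(dev->net) != link)
                  usbnet_link_change(dev, link, 0);
  }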
-
Mina Almasry authored
Currently ethtool_set_channel calls separate functions to check whether the new channel number violates rss configuration or flow steering configuration. Very soon we need to check whether the new channel number violates memory provider configuration as well. To do all 3 checks cleanly, add a wrapper around ethtool_get_max_rxnfc_channel() and ethtool_get_max_rxfh_channel(), which does both checks. We can later extend this wrapper to add the memory provider check in one place. Note that in the current code, we put a descriptive genl error message when we run into issues. To preserve the error message, we pass the genl_info* to the common helper. The ioctl calls can pass NULL instead. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Mina Almasry <almasrymina@google.com> Link: https://patch.msgid.link/20240808205345.2141858-1-almasrymina@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
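Shape of the combined check, with assumed signatures and semantics for the two helpers (assume each returns the highest RX channel its feature currently uses):

  #include <linux/ethtool.h>
  #include <net/genetlink.h>

  u32 get_max_rxnfc_channel_stub(struct net_device *dev);  /* flow steering */
  u32 get_max_rxfh_channel_stub(struct net_device *dev);   /* RSS */

  static int ethtool_check_max_channel_stub(struct net_device *dev,
                                            struct ethtool_channels ch,
                                            struct genl_info *info)
  {
          u32 max_rx = ch.combined_count + ch.rx_count;

          if (get_max_rxnfc_channel_stub(dev) >= max_rx) {
                  if (info)   /* ioctl callers pass NULL */
                          GENL_SET_ERR_MSG(info, "requested channel counts are too low to satisfy existing ntuple filters");
                  return -EINVAL;
          }
          if (get_max_rxfh_channel_stub(dev) >= max_rx) {
                  if (info)
                          GENL_SET_ERR_MSG(info, "requested channel counts are too low for the existing indirection table");
                  return -EINVAL;
          }
          /* a memory-provider check can be added here later */
          return 0;
  }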
-
- 09 Aug, 2024 11 commits
-
-
David S. Miller authored
Allison Henderson says: ==================== selftests: rds selftest This series is a new selftest that Vegard, Chuck and myself have been working on to provide some test coverage for rds. I've modified the scripts to include the feedback from the last version, but let me know if there's anything missed. Questions and comments appreciated. Thanks everyone! Allison

Changes in v2:
  - Removed qemu vm creation and related code
  - Updated README.txt with examples of running the test with virtme
  - Removed init.sh. run.sh now directly calls test.py
  - Some clean up done with the return code handling since there is no vm between the scripts anymore
  - Imported ip python function in tools/testing/selftests/net/lib/py/utils.py into test.py
  - Adapted test.py to use the imported ip function, and removed the local ip wrapper
  - Some line wrap clean up
  - Link to v1: https://lore.kernel.org/netdev/20240626012834.5678-3-allison.henderson@oracle.com/T

==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vegard Nossum authored
This adds some basic self-testing infrastructure for RDS-TCP. Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Allison Henderson <allison.henderson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vegard Nossum authored
To improve our unit tests, we need code coverage to be part of the kernel. This patch borrows heavily from how CONFIG_GCOV_PROFILE_FTRACE is implemented. Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Allison Henderson <allison.henderson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vegard Nossum authored
These files contain the runtime coverage data generated by gcov. Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Allison Henderson <allison.henderson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Simon Horman authored
Jiri Slaby advises me that the preferred mechanism for declaring string constants is static char arrays, so use that here. This mostly reverts commit 1692b977 ("net: stmmac: xgmac: use #define for string constants"). That commit was a fix for commit 46eba193 ("net: stmmac: xgmac: fix handling of DPP safety error for DMA channels"); the fix replaced const char * with #defines in order to address compilation failures observed on GCC 6 through 10. Compile tested only. No functional change intended. Suggested-by: Jiri Slaby <jirislaby@kernel.org> Link: https://lore.kernel.org/netdev/485dbc5a-a04b-40c2-9481-955eaa5ce2e2@kernel.org/ Signed-off-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
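The two styles side by side (the string itself is illustrative):

  /* reverted style: */
  #define DPP_RX_ERR "dpp rx data error"

  /* preferred: a single, typed definition the compiler can check */
  static const char dpp_rx_err[] = "dpp rx data error";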
-
Pawel Dembicki authored
This commit changes magic numbers in phy operations. Some shifted register accesses were replaced with bitfield macros. No functional changes done. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
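The flavor of the change, with an invented register field, using the kernel's bitfield macros:

  #include <linux/bitfield.h>
  #include <linux/bits.h>

  #define EXAMPLE_PHY_CTRL_SPEED  GENMASK(3, 2)   /* was open-coded as 0xc */

  static u16 example_encode_speed(u16 val, u16 speed)
  {
          /* before: val = (val & ~0xc) | ((speed << 2) & 0xc); */
          val &= ~EXAMPLE_PHY_CTRL_SPEED;
          val |= FIELD_PREP(EXAMPLE_PHY_CTRL_SPEED, speed);
          return val;
  }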
-
Matthias Schiffer authored
Allow of_find_net_device_by_node() to find icssg_prueth ports and make the individual ports' of_nodes available in sysfs. Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com> Reviewed-by: MD Danish Anwar <danishanwar@ti.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20240807121215.3178964-1-matthias.schiffer@ew.tq-group.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christophe JAILLET authored
'struct mii_phy_def' instances are not modified in this driver. Constifying these structures moves some data to a read-only section, increasing overall security. While at it, fix the checkpatch warnings related to this patch (some missing newlines and spaces around *). On x86_64, with allmodconfig:

Before:
     text    data     bss     dec     hex filename
    27709     928       0   28637    6fdd drivers/net/sungem_phy.o

After:
     text    data     bss     dec     hex filename
    28157     476       0   28633    6fd9 drivers/net/sungem_phy.o

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/54c3b30930f80f4895e6fa2f4234714fdea4ef4e.1723033266.git.christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Edward Cree authored
A value of 1 doesn't make sense, as it implies the only allowed context ID is 0, which is reserved for the default context - in which case the driver should just not claim to support custom RSS contexts at all. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/c07725b3a3d0b0a63b85e230f9c77af59d4d07f8.1723045898.git.ecree.xilinx@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Rosen Penev authored
Allows simplifying get_strings and avoids manual pointer manipulation. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20240807190303.6143-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
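Presumably this is the ethtool_puts() helper; a sketch with invented stat names:

  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  static const char example_stat_names[][ETH_GSTRING_LEN] = {
          "rx_packets",
          "tx_packets",
  };

  static void example_get_strings(struct net_device *dev, u32 sset, u8 *data)
  {
          int i;

          if (sset != ETH_SS_STATS)
                  return;
          for (i = 0; i < ARRAY_SIZE(example_stat_names); i++)
                  ethtool_puts(&data, example_stat_names[i]);  /* advances data */
  }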
-
Simon Horman authored
Provide a declaration of dmae_reg_go_c in a header. This symbol is defined in bnx2x_main.c and used in that file and in bnx2x_stats.c. However, Sparse complains that the symbol has no declaration and is not static: .../bnx2x_main.c:291:11: warning: symbol 'dmae_reg_go_c' was not declared. Should it be static? Address this by moving the declaration from bnx2x_stats.c to bnx2x_reg.h. No functional change intended. Compile tested only. Signed-off-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20240806-bnx2x-dec-v1-1-ae844ec785e4@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
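The usual shape of such a fix (qualifiers and array contents assumed):

  /* bnx2x_reg.h: one declaration both C files see */
  extern u32 dmae_reg_go_c[];

  /* bnx2x_main.c keeps the definition it already had, e.g.
   * u32 dmae_reg_go_c[] = { DMAE_REG_GO_C0, DMAE_REG_GO_C1, ... };
   */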
-
- 08 Aug, 2024 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski authored
Cross-merge networking fixes after downstream PR. No conflicts or adjacent changes. Link: https://patch.msgid.link/20240808170148.3629934-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-