- 29 Jul, 2022 38 commits
-
Christian Marangi authored
The same MIB function is used by drivers based on the qca8k family switch. Move it to common code to make it accessible to other drivers as well. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Marangi authored
The same ATU functions are used by drivers based on the qca8k family switch. Move the bulk read/write helpers to common code so these shared ATU functions can be declared there. These helpers will be dropped once regmap correctly supports bulk read/write. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Marangi authored
The same reg table and read/write/rmw functions are used by drivers based on the qca8k family switch. Move them to common code to make them accessible to other drivers as well. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Marangi authored
The same MIB struct is used by drivers based on the qca8k family switch. Move it to common code to make it accessible to other drivers as well. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Marangi authored
Some switches may not support the MIB autocast feature and require the legacy way of reading the regs directly. Make the MIB autocast feature optional and allow declaring support for it through the match_data struct, using a dedicated qca8k_info_ops struct. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
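For illustration, a minimal sketch of how such a per-family capability flag can be carried through match data; the struct layout, field and helper names below are assumptions for the sketch, not taken from the patch:

    /* sketch only: names are assumed */
    struct qca8k_info_ops;

    struct qca8k_match_data {
            u8 id;
            bool autocast_mib;              /* false: read MIB counters via plain register reads */
            const struct qca8k_info_ops *ops;
    };

    static void qca8k_mib_update(struct qca8k_priv *priv, int port)
    {
            if (priv->info->autocast_mib)
                    qca8k_mib_autocast(priv, port);         /* switch pushes MIB data to the host */
            else
                    qca8k_mib_read_legacy(priv, port);      /* legacy direct register reads */
    }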
-
Christian Marangi authored
Calling of_device_get_match_data() is expensive. Cache the match data to speed up access and rework users of the match data to use the new cached value. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
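The caching itself is the usual probe-time pattern; a short sketch under an assumed priv->info field:

    /* probe(): resolve the OF match data once and cache the pointer */
    priv->info = of_device_get_match_data(priv->dev);
    if (!priv->info)
            return -ENODEV;

    /* hot paths then use the cached pointer instead of calling
     * of_device_get_match_data() again:
     */
    mib_count = priv->info->mib_count;      /* field name assumed */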
-
Andy Shevchenko authored
Since we have proper endianness converters for BE 48-bit data, use them. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220726144906.5217-1-andriy.shevchenko@linux.intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ruffalo Lavoisier authored
Correct the spelling of 'non-existent' in a comment. Signed-off-by: Ruffalo Lavoisier <RuffaloLavoisier@gmail.com> Link: https://lore.kernel.org/r/20220728032854.151180-1-RuffaloLavoisier@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yang Li authored
Fix the following includecheck warning: ./drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c: linux/err.h is included more than once. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/20220727233801.23781-1-yang.lee@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Martin Blumenstingl authored
Add a Makefile which takes care of installing the selftests in tools/testing/selftests/drivers/net/dsa. This can be used to install all DSA specific selftests and forwarding.config using the same approach as for the selftests in tools/testing/selftests/net/forwarding. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20220727191642.480279-1-martin.blumenstingl@googlemail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Moshe Shemesh says: ==================== Take devlink lock on mlx4 and mlx5 callbacks Prepare mlx4 and mlx5 drivers to have all devlink callbacks called with devlink instance locked. Change mlx4 driver to use devl_ API where needed to have devlink reload callbacks locked. Change mlx5 driver to use devl_ API where needed to have devlink reload and devlink health callbacks locked. As mlx5 is the only driver which needed changes to enable calling health callbacks with devlink instance locked, this patchset also removes DEVLINK_NL_FLAG_NO_LOCK flag from devlink health callbacks. This patchset will be followed by a patchset that will remove DEVLINK_NL_FLAG_NO_LOCK flag from devlink and will remove devlink_mutex. ==================== Link: https://lore.kernel.org/r/1659023630-32006-1-git-send-email-moshe@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Moshe Shemesh authored
Let the core take the devlink instance lock around health callbacks and remove the now redundant locking in the drivers. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Moshe Shemesh authored
Change devlink instance locks in mlx5 driver to have devlink health recovery callback locked, while keeping all driver paths which lead to devl_ API functions called by the driver locked. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Moshe Shemesh authored
Change devlink instance locks in mlx4 driver to have the devlink reload callback locked, while keeping all driver paths which lead to devl_ API functions called by the mlx4 driver locked. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Moshe Shemesh authored
Use the devl_ API to call devl_port_register() and devl_port_unregister() instead of devlink_port_register() and devlink_port_unregister(). Take the devlink instance lock in the mlx4 driver paths that lead to these functions. This will be used by a downstream patch to invoke the mlx4 devlink reload callbacks with the devlink lock held. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
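With the devl_ variants the caller is expected to hold the devlink instance lock, so the converted call sites end up looking roughly like this (sketch; variable names approximate):

    devl_lock(devlink);
    err = devl_port_register(devlink, &info->devlink_port, port);
    devl_unlock(devlink);
    if (err)
            return err;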
-
Moshe Shemesh authored
Use the devl_ API to call devl_region_create() and devl_region_destroy() instead of devlink_region_create() and devlink_region_destroy(). Take the devlink instance lock in the mlx4 driver paths that lead to these functions. This will be used by a downstream patch to invoke the mlx4 devlink reload callbacks with the devlink lock held. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Moshe Shemesh authored
Change devlink instance locks in mlx5 driver to have devlink reload callbacks locked, while keeping all driver paths which lead to devl_ API functions called by the driver locked. Add mlx5_load_one_devl_locked() and mlx5_unload_one_devl_locked(), which are used by paths that are already locked, such as the devlink reload callbacks. This patch also makes the driver use the devl_ API for trap registration, as these functions are called from driver paths parallel to reload, which now require locking. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
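The locked/unlocked split described here is typically a thin wrapper, roughly like the sketch below (arguments simplified for illustration):

    int mlx5_load_one(struct mlx5_core_dev *dev)
    {
            struct devlink *devlink = priv_to_devlink(dev);
            int ret;

            devl_lock(devlink);
            ret = mlx5_load_one_devl_locked(dev);
            devl_unlock(devlink);
            return ret;
    }

Paths that already hold the instance lock, such as the devlink reload callbacks, call the _devl_locked variant directly.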
-
Moshe Shemesh authored
Refactor the FW reset code to have the driver-unload part done in mlx5_fw_reset_complete_reload(): if it was called by the PF which initiated the reload FW activate flow, the unload part is handled by the mlx5_devlink_reload_fw_activate() callback itself and not by the reset event work. This will be used by a downstream patch to invoke devlink reload callbacks with the devlink lock held. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
After the mlx4 driver is converted to do locked reload, devlink_region_snapshot_create() may be called from both locked and unlocked context. Note that in mlx4, region snapshots may be created on any command failure. That can happen in any flow that involves commands to FW, which means most of the driver flows. Resolve this by removing the dependency on devlink->lock for region snapshot list consistency and introducing a new mutex to ensure it. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
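Conceptually, the snapshot list gets its own lock so snapshot creation no longer depends on devlink->lock; a rough sketch (field names assumed):

    struct devlink_region {
            /* ... */
            struct mutex snapshot_lock;     /* protects ->snapshot_list */
            struct list_head snapshot_list;
    };

    /* snapshot creation, callable from both locked and unlocked contexts */
    mutex_lock(&region->snapshot_lock);
    list_add_tail(&snapshot->list, &region->snapshot_list);
    mutex_unlock(&region->snapshot_lock);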
-
Jiri Pirko authored
After the mlx4 driver is converted to do locked reload, the functions to get/put region snapshot IDs may be called from both locked and unlocked context. Resolve this by removing the dependency on devlink->lock for region snapshot ID tracking and using the internal xa_lock() to maintain snapshot_ids xarray consistency. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
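One way the xarray's internal lock can replace devlink->lock for a read-modify-write on a stored ID refcount, sketched; the real refcount handling is more involved and this is only an illustration of the locking idea:

    /* bump the refcount stored as an xarray value; xa_lock() makes the
     * load + store atomic without holding devlink->lock
     */
    xa_lock(&devlink->snapshot_ids);
    entry = xa_load(&devlink->snapshot_ids, id);
    if (xa_is_value(entry))
            __xa_store(&devlink->snapshot_ids, id,
                       xa_mk_value(xa_to_value(entry) + 1), GFP_ATOMIC);
    xa_unlock(&devlink->snapshot_ids);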
-
Jakub Kicinski authored
Vikas Gupta says: ==================== add framework for selftests in devlink Add support for selftests in the devlink framework. Add .selftests_check and .selftests_run callbacks to devlink_ops. Users can request a test suite, which is subsequently passed to the driver, and the driver can opt to run particular tests based on its capabilities. The patchset adds a flash-based test for the bnxt_en driver. ==================== Link: https://lore.kernel.org/r/20220727165721.37959-1-vikas.gupta@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
vikas authored
Add the .selftest_check and .selftest_run callbacks: .selftest_check returns true for the flash selftest, .selftest_run runs a flash selftest. Also, refactor the NVM APIs so that they can be used by both devlink and ethtool. Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com> Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
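A sketch of what the driver-side wiring can look like; the callback and enum names follow the devlink selftest series as described, while the bnxt helper used here is a hypothetical placeholder:

    static bool bnxt_dl_selftest_check(struct devlink *dl, unsigned int id,
                                       struct netlink_ext_ack *extack)
    {
            return id == DEVLINK_ATTR_SELFTEST_ID_FLASH;
    }

    static enum devlink_selftest_status
    bnxt_dl_selftest_run(struct devlink *dl, unsigned int id,
                         struct netlink_ext_ack *extack)
    {
            if (id != DEVLINK_ATTR_SELFTEST_ID_FLASH)
                    return DEVLINK_SELFTEST_STATUS_SKIP;

            /* bnxt_dl_flash_test() stands in for the driver's NVM test */
            return bnxt_dl_flash_test(dl) ? DEVLINK_SELFTEST_STATUS_FAIL :
                                            DEVLINK_SELFTEST_STATUS_PASS;
    }

    static const struct devlink_ops bnxt_dl_ops = {
            /* ... existing ops ... */
            .selftest_check = bnxt_dl_selftest_check,
            .selftest_run   = bnxt_dl_selftest_run,
    };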
-
Vikas Gupta authored
Add a framework for running selftests. Framework exposes devlink commands and test suite(s) to the user to execute and query the supported tests by the driver. Below are new entries in devlink_nl_ops devlink_nl_cmd_selftests_show_doit/dumpit: To query the supported selftests by the drivers. devlink_nl_cmd_selftests_run: To execute selftests. Users can provide a test mask for executing group tests or standalone tests. Documentation/networking/devlink/ path is already part of MAINTAINERS & the new files come under this path. Hence no update needed to the MAINTAINERS Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com> Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Tariq Toukan says: ==================== mlx5e use TLS TX pool to improve connection rate To offload encryption operations, the mlx5 device maintains state and keeps track of every kTLS device-offloaded connection. Two HW objects are used per TX context of a kTLS offloaded connection: a. Transport interface send (TIS) object, to reach the HW context. b. Data Encryption Key (DEK) to perform the crypto operations. These two objects are created and destroyed per TLS TX context, via FW commands. In total, 4 FW commands are issued per TLS TX context, which seriously limits the connection rate. In this series, we aim to save the creation and destruction of TIS objects by recycling them. Upon recycling of a TIS, the HW still needs to be notified of the re-mapping between a TIS and a context. This is done by posting WQEs via an SQ, a significantly faster API than the FW command interface. A pool is used for recycling. The pool dynamically adapts to the load and connection rate, growing and shrinking accordingly. Saving the TIS FW commands per context increases connection rate by ~42%, from 11.6K to 16.5K connections per sec. Connection rate is still limited by a FW bottleneck due to the remaining per-context FW commands (DEK create/destroy). This will soon be addressed in a followup series. By combining the two series, the FW bottleneck will be released, and a significantly higher (about 100K connections per sec) kTLS TX device-offloaded connection rate is reached. ==================== Link: https://lore.kernel.org/r/20220727094346.10540-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
Let the TLS TX recycle pool be more flexible in size, by continuously and dynamically allocating and releasing HW resources in response to changes in the connections rate and load. Allocate and release pool entries in bulks (16). Use a workqueue to release/allocate in the background. Allocate a new bulk when the pool size goes lower than the low threshold (1K). Symmetric operation is done when the pool size gets greater than the upper threshold (4K). Every idle pool entry holds: 1 TIS, 1 DEK (HW resources), in addition to ~100 bytes in host memory. Start with an empty pool to minimize memory and HW resources waste for non-TLS users that have the device-offload TLS enabled. Upon a new request, in case the pool is empty, do not wait for a whole bulk allocation to complete. Instead, trigger an instant allocation of a single resource to reduce latency. Performance tests: Before: 11,684 CPS After: 16,556 CPS Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
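A rough sketch of the threshold logic described above; the bulk size and watermarks are the numbers quoted in the message, while the struct and helper names are made up for the sketch:

    #define TLS_TX_POOL_BULK        16
    #define TLS_TX_POOL_LOW         1024
    #define TLS_TX_POOL_HIGH        4096

    static void tls_tx_pool_resize_work(struct work_struct *work)
    {
            struct tls_tx_pool *pool =
                    container_of(work, struct tls_tx_pool, resize_work);

            if (pool->size < TLS_TX_POOL_LOW)
                    tls_tx_pool_bulk_alloc(pool, TLS_TX_POOL_BULK); /* create TIS + DEK entries */
            else if (pool->size > TLS_TX_POOL_HIGH)
                    tls_tx_pool_bulk_free(pool, TLS_TX_POOL_BULK);  /* release HW resources */
    }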
-
Tariq Toukan authored
The transport interface send (TIS) object is responsible for performing all transport-related operations of the transmit side. The ConnectX HW uses a TIS object to save and access the TLS crypto information and state of an offloaded TX kTLS connection. Before this patch, we used to create a new TIS per connection and destroy it once it’s closed. Every create and destroy of a TIS is a FW command. The same applies to the private TLS context, where we used to dynamically allocate and free it per connection. Resource recycling reduces the impact of the allocation/free operations and helps speed up the connection rate. In this feature we maintain a pool of TX objects and use it to recycle the resources instead of re-creating them per connection. A cached TIS popped from the pool is updated to serve the new connection via the fast-path HW interface, updating the tls static and progress params. This is a very fast operation, significantly faster than FW commands. On recycling, a WQE fence is required after the context params change. This guarantees that the data is sent after the context has been successfully updated in hardware, and that the context modification doesn't interfere with existing traffic. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
Let the caller of mlx5e_ktls_tx_handle_ooo() take care of updating the stats, according to the returned value. As the switch/case blocks are already there, this change saves unnecessary branches in the handler. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
TLS TIS objects have a defined role in mapping and reaching the HW TLS contexts. Some standard TIS attributes (like LAG port affinity) are not relevant for them. Use a dedicated TLS TIS create function instead of the generic mlx5e_create_tis. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
Multiple TLS device-offloaded contexts can be added in parallel via concurrent calls to .tls_dev_add, while calls to .tls_dev_del are sequential in tls_device_gc_task. This is not a sustainable behavior. This creates a rate gap between add and del operations (addition rate outperforms the deletion rate). When running for enough time, the TLS device resources could get exhausted, failing to offload new connections. Replace the single-threaded garbage collector work with a per-context alternative, so they can be handled on several cores in parallel. Use a new dedicated destruct workqueue for this. Tested with mlx5 device: Before: 22141 add/sec, 103 del/sec After: 11684 add/sec, 11684 del/sec Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
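The shape of the change, sketched with approximate names: each offloaded context carries its own work item, queued on a dedicated workqueue so deletions can run on several cores in parallel:

    struct tls_offload_context_tx {
            /* ... */
            struct work_struct destruct_work;       /* per-context, replaces the global gc list */
    };

    static void tls_device_tx_del_task(struct work_struct *work)
    {
            struct tls_offload_context_tx *offload_ctx =
                    container_of(work, struct tls_offload_context_tx, destruct_work);

            /* may sleep: ends up calling the driver's .tls_dev_del and
             * freeing the context
             */
            tls_device_ctx_destroy(offload_ctx);    /* helper name assumed */
    }

    /* on teardown */
    queue_work(destruct_wq, &ctx->destruct_work);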
-
Tariq Toukan authored
TLS context destructor can be run in atomic context. Cleanup operations for device-offloaded contexts could require access and interaction with the device callbacks, which might sleep. Hence, the cleanup of such contexts must be deferred and completed inside an async work. For all others, this is not necessary, as cleanup is atomic. Invoke cleanup immediately for them, avoiding queueing redundant gc work. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yang Li authored
The return from the call to tls_rx_msg_size() is int and can be a negative error code; however, it is being assigned to an unsigned long variable 'sz', so make 'sz' an int. This eliminates the following coccicheck warning: ./net/tls/tls_strp.c:211:6-8: WARNING: Unsigned expression compared with zero: sz < 0 Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Link: https://lore.kernel.org/r/20220728031019.32838-1-yang.lee@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
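The fix is essentially just the declaration type; a sketch of the relevant lines, with the surrounding error handling elided:

    int sz;         /* was: unsigned long, so the "sz < 0" check could never fire */

    sz = tls_rx_msg_size(strp, strp->anchor);
    if (sz < 0)
            goto err;       /* negative error codes now actually take the error path */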
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== tls: rx: follow ups to rx work A selection of unrelated changes. First some selftest polishing. Next a change to rcvtimeo handling for locking based on an exchange with Eric. Follow up to Paolo's comments from yesterday. Last but not least a fix to a false positive warning, turns out I've been testing with DEBUG_NET=n this whole time. ==================== Link: https://lore.kernel.org/r/20220727031524.358216-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
I went too far in the accessor conversion, we can't use tls_strp_msg() after decryption because the message may not be ready. What we care about on this path is that the output skb is detached, i.e. we didn't somehow just turn around and used the input skb with its TCP data still attached. So look at the anchor directly. Fixes: 84c61fe1 ("tls: rx: do not use the standard strparser") Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Paolo points out that there seems to be no strong reason strparser uses a single threaded workqueue. Perhaps there were some performance or pinning considerations? Since we don't know (and it's the slow path) let's default to the most natural, multi-threaded choice. Also rename the workqueue to "tls-". Suggested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Eric indicates that restarting rcvtimeo on every wait may be fine. I thought that we should consider it cumulative, and made tls_rx_reader_lock() return the remaining timeo after acquiring the reader lock. tls_rx_rec_wait() gets its timeout passed in by value so it does not keep track of time previously spent. Make the lock waiting consistent with tls_rx_rec_wait() - don't keep track of time spent. Read the timeo fresh in tls_rx_rec_wait(). It's unclear to me why callers are supposed to cache the value. Link: https://lore.kernel.org/all/CANn89iKcmSfWgvZjzNGbsrndmCch2HC_EPZ7qmGboDNaWoviNQ@mail.gmail.com/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Add a handful of memory randomizations and precise length checks. Nothing is really broken here, I did this to increase confidence when debugging. It does fix a GCC warning, tho. Apparently GCC recognizes that memory needs to be initialized for send() but does not recognize that for write(). Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Xie Shaowen authored
Delete an extra space and tab in a blank line; there is no functional change. Signed-off-by: Xie Shaowen <studentxswpy@163.com> Link: https://lore.kernel.org/r/20220727081253.3043941-1-studentxswpy@163.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski authored
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 28 Jul, 2022 2 commits
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from Jakub Kicinski: "Including fixes from bluetooth and netfilter, no known blockers for the release. Current release - regressions: - wifi: mac80211: do not abuse fq.lock in ieee80211_do_stop(), fix taking the lock before its initialized - Bluetooth: mgmt: fix double free on error path Current release - new code bugs: - eth: ice: fix tunnel checksum offload with fragmented traffic Previous releases - regressions: - tcp: md5: fix IPv4-mapped support after refactoring, don't take the pure v6 path - Revert "tcp: change pingpong threshold to 3", improving detection of interactive sessions - mld: fix netdev refcount leak in mld_{query | report}_work() due to a race - Bluetooth: - always set event mask on suspend, avoid early wake ups - L2CAP: fix use-after-free caused by l2cap_chan_put - bridge: do not send empty IFLA_AF_SPEC attribute Previous releases - always broken: - ping6: fix memleak in ipv6_renew_options() - sctp: prevent null-deref caused by over-eager error paths - virtio-net: fix the race between refill work and close, resulting in NAPI scheduled after close and a BUG() - macsec: - fix three netlink parsing bugs - avoid breaking the device state on invalid change requests - fix a memleak in another error path Misc: - dt-bindings: net: ethernet-controller: rework 'fixed-link' schema - two more batches of sysctl data race adornment" * tag 'net-5.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (67 commits) stmmac: dwmac-mediatek: fix resource leak in probe ipv6/addrconf: fix a null-ptr-deref bug for ip6_ptr net: ping6: Fix memleak in ipv6_renew_options(). net/funeth: Fix fun_xdp_tx() and XDP packet reclaim sctp: leave the err path free in sctp_stream_init to sctp_stream_free sfc: disable softirqs for ptp TX ptp: ocp: Select CRC16 in the Kconfig. tcp: md5: fix IPv4-mapped support virtio-net: fix the race between refill work and close mptcp: Do not return EINPROGRESS when subflow creation succeeds Bluetooth: L2CAP: Fix use-after-free caused by l2cap_chan_put Bluetooth: Always set event mask on suspend Bluetooth: mgmt: Fix double free on error path wifi: mac80211: do not abuse fq.lock in ieee80211_do_stop() ice: do not setup vlan for loopback VSI ice: check (DD | EOF) bits on Rx descriptor rather than (EOP | RS) ice: Fix VSIs unable to share unicast MAC ice: Fix tunnel checksum offload with fragmented traffic ice: Fix max VLANs available for VF netfilter: nft_queue: only allow supported familes and hooks ...
-
Dan Carpenter authored
If mediatek_dwmac_clks_config() fails, then call stmmac_remove_config_dt() before returning. Otherwise it is a resource leak. Fixes: fa4b3ca6 ("stmmac: dwmac-mediatek: fix clock issue") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Link: https://lore.kernel.org/r/YuJ4aZyMUlG6yGGa@kili Signed-off-by: Jakub Kicinski <kuba@kernel.org>
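The missing-cleanup pattern being fixed looks roughly like this (sketch; only the two functions are named in the commit message, the variable names and surrounding probe code are approximated):

    ret = mediatek_dwmac_clks_config(priv_plat, true);
    if (ret) {
            stmmac_remove_config_dt(pdev, plat_dat);        /* cleanup that was missing */
            return ret;
    }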
-