- 08 Dec, 2022 6 commits
-
-
Leon Romanovsky authored
Flow steering API separates newly created rules based on their match criteria. Right now, all IPsec tables are created with one group and suffer from suboptimal FS performance. Count the number of different match criteria for the relevant tables, and set the proper value at table creation. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
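A hedged sketch of the idea, not the driver's actual code: when creating an auto-grouped flow table, autogroup.max_num_groups tells flow steering how many distinct match criteria to expect, so rules sharing criteria can share a group. The helper name and NUM_IPSEC_FTE value are assumptions for illustration.

#include <linux/mlx5/fs.h>

#define NUM_IPSEC_FTE BIT(15)	/* assumed table size */

static struct mlx5_flow_table *
ipsec_ft_create(struct mlx5_flow_namespace *ns, int level, int prio,
		int max_num_groups)
{
	struct mlx5_flow_table_attr ft_attr = {};

	/* One group per distinct match criteria used in this table. */
	ft_attr.autogroup.max_num_groups = max_num_groups;
	ft_attr.max_fte = NUM_IPSEC_FTE;
	ft_attr.level = level;
	ft_attr.prio = prio;

	return mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
}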
-
Leon Romanovsky authored
In packet offload mode, the HW is responsible for handling ESP headers, SPI numbers and trailers (ICV), with different logic for the RX and TX paths. In order to support packet offload mode, special logic is added to the flow steering rules. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Remove an intermediate variable in favor of a similar coding style for the Rx and Tx add-rule functions. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Implement the mlx5 flow steering logic and mlx5 IPsec code to support XFRM policy offload. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Add an empty table to be used for IPsec policy offload. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
Leon Romanovsky says: ============ This series follows the previously sent "Extend XFRM core to allow packet offload configuration" series [1]. It is the first part, containing mlx5 refactoring that allows us to natively extend the mlx5 IPsec logic to support both crypto and packet offloads. ============ Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
- 06 Dec, 2022 17 commits
-
-
Leon Romanovsky authored
Create a general function that sets a miss group and rule to forward all unmatched traffic to the next table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
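A hedged reconstruction of the pattern (names and error handling simplified; the small miss struct is illustrative): occupy the last flow-table entry with a match-all group and a rule that forwards anything unmatched to the next table.

struct ipsec_miss {
	struct mlx5_flow_group *group;
	struct mlx5_flow_handle *rule;
};

static int ipsec_miss_create(struct mlx5_flow_table *ft,
			     struct ipsec_miss *miss,
			     struct mlx5_flow_destination *dest)
{
	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
	struct mlx5_flow_act flow_act = {};
	struct mlx5_flow_spec *spec;
	u32 *flow_group_in;
	int err = 0;

	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
	if (!flow_group_in || !spec) {
		err = -ENOMEM;
		goto out;
	}

	/* Match-all group occupying the last entry of the table. */
	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index,
		 ft->max_fte - 1);
	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
		 ft->max_fte - 1);
	miss->group = mlx5_create_flow_group(ft, flow_group_in);
	if (IS_ERR(miss->group)) {
		err = PTR_ERR(miss->group);
		goto out;
	}

	/* An empty spec matches everything: forward to the next table. */
	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
	miss->rule = mlx5_add_flow_rules(ft, spec, &flow_act, dest, 1);
	if (IS_ERR(miss->rule)) {
		mlx5_destroy_flow_group(miss->group);
		err = PTR_ERR(miss->rule);
	}
out:
	kvfree(flow_group_in);
	kvfree(spec);
	return err;
}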
-
Leon Romanovsky authored
Move the miss handles into a dedicated struct, so we can reuse it in the next patch when creating the IPsec policy flow table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Reuse the existing struct that holds all information about the modify-header pointer and rule. This helps reduce the ambiguity of the name _err_, which doesn't describe the real purpose of that flow table, rule and function: to copy the status result from the HW to the stack. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Rewrite the IPsec RX add-rule path to be less convoluted and not rely on pre-initialized variables. The code now has a clean linear flow with clear separation between error and success paths. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
The policy offload logic needs to set a flow steering rule that matches on saddr and daddr too, so factor this code out into separate functions, together with aligning the code to the netdev coding pattern of relying on the family type. As part of this change, separate more logic out of setup_fte_common to make sure the function names describe what is done better than the generic *common* name. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
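As a hedged illustration of the factoring, the function below mirrors the shape of a per-family address matcher like the series' setup_fte_addr4(); the exact match-parameter paths are assumptions to be checked against the driver.

static void setup_fte_addr4(struct mlx5_flow_spec *spec,
			    const __be32 *saddr, const __be32 *daddr)
{
	spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;

	/* Match only IPv4 packets ... */
	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
			 outer_headers.ip_version);
	MLX5_SET(fte_match_param, spec->match_value,
		 outer_headers.ip_version, 4);

	/* ... with the given source and destination addresses. */
	memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
			    outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4),
	       saddr, 4);
	memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
			    outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
	       daddr, 4);
	memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			    outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4),
	       0xff, 4);
	memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			    outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
	       0xff, 4);
}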
-
Leon Romanovsky authored
Even now, to support IPsec crypto, the RX and TX paths use the same logic to create flow tables. In the following patches, we will add more tables to support IPsec packet offload. So reuse the existing code and rewrite it to support IPsec packet offload from the beginning. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Create the initial hardware IPsec packet offload object and connect it to the advanced steering operation (ASO) context and queue, so the data path can communicate with the stack. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Set up the ASO (Advanced Steering Operation) object that is needed for IPsec to interact with the SW stack about various fast-changing events: replay window, lifetime limits, etc. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
The mlx5 priv structure is the driver's main structure that holds high-level data. That information is not needed for the IPsec flow steering logic, and the pointer to mlx5e_priv was not supposed to be passed in the first place. This change "cleans" the logic to rely on IPsec-internal structures without touching the global mlx5e_priv. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Low-level mlx5 code needs to use the mlx5_core print routines and not the netdev ones, as the failures are relevant to the HW itself and not to its netdev. This change allows us to remove access to the mlx5 priv structure, which holds high-level driver data that isn't needed for the mlx5 IPsec code. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Remove AF family obfuscation by creating symmetric structs for the RX and TX IPsec flow steering chains. This simplifies the low-level IPsec FS creation logic without the need to dig into multiple levels of structs. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Instead of redefining XFRM core defines to the same values with an MLX5_* prefix, cache the input values as-is by making sure that the proper storage objects are used. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
As a preparation for the future extension of the IPsec hardware object to allow configuration of packet offload mode, extend the XFRM validator to check replay window values. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
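A hedged sketch of the kind of check added to the validator; the 32/64/128/256 window set reflects what the HW ASO context supports, while the function shape is illustrative.

static int sample_validate_replay_window(struct xfrm_state *x)
{
	if (!x->replay_esn)
		return 0;

	switch (x->replay_esn->replay_window) {
	case 32:
	case 64:
	case 128:
	case 256:
		return 0;
	default:
		return -EINVAL;	/* window size unsupported by the HW */
	}
}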
-
Leon Romanovsky authored
Add the needed capability checks to determine if the device supports IPsec packet offload mode. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Add all needed bits to support IPsec packet offload mode. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
There is no need to hide the returned ASO WQE type behind a void*; use the real type instead. Do it together with zeroing that memory, so the ASO WQE is ready to use immediately. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
Leon Romanovsky says:

============
The following series extends the XFRM core code to handle a new type of IPsec offload - packet offload. In this mode, the HW is going to be responsible for the whole data path, so both policy and state should be offloaded.

IPsec packet offload is an improved version of IPsec crypto mode. In packet mode, the HW is responsible for trimming/adding headers in addition to decryption/encryption. In this mode, the packet arrives to the stack already decrypted, and vice versa for TX (it exits to the HW unencrypted).

Devices that implement IPsec packet offload mode offload policies too. In the RX path, this creates a situation where the HW can't effectively handle mixed SW and HW priorities unless users make sure that HW-offloaded policies have higher priorities. It means that we don't need to perform any search of inexact policies and/or priority checks if a HW policy was discovered. In such a situation, the HW will catch the packets anyway, and the HW can still implement inexact lookups. In case a specific policy is not found, we will continue with the packet lookup and check for the existence of HW policies in the inexact list.

HW policies are added to the head of the SPD to ensure fast lookup, as XFRM iterates over all policies in the loop. This simple solution allows us to achieve the same benefits as separate HW/SW policy databases without over-engineering the code to iterate and manage two databases on the same path. To not over-engineer the code, HW policies are treated as SW ones and don't take the netdev into account, which allows reuse of the same priorities for different devices.

Current limitations:
 * No software fallback
 * Fragments are dropped, both in RX and TX
 * No sockets policies
 * Only IPsec transport mode is implemented

================================================================================
Rekeying:

In order to support rekeying, as the XFRM core is skipped, the HW/driver should do the following:
 * Count the handled packets
 * Raise an event when limits are reached
 * Drop packets once the hard limit is reached

The XFRM core calls the newly introduced xfrm_dev_state_update_curlft() function in order to sync device statistics with the internal structures. On a HW limit event, the driver calls xfrm_state_check_expire() to allow the XFRM core to take the relevant decisions. This separation between control logic (in XFRM) and data plane allows us to reuse the SW stack.

================================================================================
Configuration:

iproute2: https://lore.kernel.org/netdev/cover.1652179360.git.leonro@nvidia.com/

Packet offload mode:
  ip xfrm state offload packet dev <if-name> dir <in|out>
  ip xfrm policy .... offload packet dev <if-name>

Crypto offload mode:
  ip xfrm state offload crypto dev <if-name> dir <in|out>
or (backward compatibility)
  ip xfrm state offload dev <if-name> dir <in|out>

================================================================================
Performance results:

TCP multi-stream, using one iperf3 instance per CPU.
+----------------------+--------+--------+--------+--------+---------+---------+
|                      | 1 CPU  | 2 CPUs | 4 CPUs | 8 CPUs | 16 CPUs | 32 CPUs |
|                      +--------+--------+--------+--------+---------+---------+
|                      |                       BW (Gbps)                       |
+----------------------+--------+--------+--------+--------+---------+---------+
| Baseline             | 27.9   | 59     | 93.1   | 92.8   | 93.7    | 94.4    |
+----------------------+--------+--------+--------+--------+---------+---------+
| Software IPsec       | 6      | 11.9   | 23.3   | 45.9   | 83.8    | 91.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec crypto offload | 15     | 29.7   | 58.5   | 89.6   | 90.4    | 90.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec packet offload | 28     | 57     | 90.7   | 91     | 91.3    | 91.9    |
+----------------------+--------+--------+--------+--------+---------+---------+

IPsec packet offload mode behaves like the baseline and reaches line rate with the same number of CPUs.

Setup details (similar for both sides):
 * NIC: ConnectX6-DX dual port, 100 Gbps each. Single port used in the tests.
 * CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz

================================================================================
Series together with mlx5 part: https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next

================================================================================
Changelog:

v10:
 * Added forgotten xdo_dev_state_del. Patch #4.
 * Moved changelog in cover letter to the end.
 * Added "if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {" line to newly added netronome IPsec support. Patch #2.
v9: https://lore.kernel.org/all/cover.1669547603.git.leonro@nvidia.com
 * Added acquire support
v8: https://lore.kernel.org/all/cover.1668753030.git.leonro@nvidia.com
 * Removed unrelated blank line
 * Fixed typos in documentation
v7: https://lore.kernel.org/all/cover.1667997522.git.leonro@nvidia.com
As was discussed in the IPsec workshop:
 * Renamed "full offload" to "packet offload".
 * Added a check that offloaded SA and policy have the same device while sending a packet
 * Added to SAD the same optimization as was done for SPD to speed up lookups.
v6: https://lore.kernel.org/all/cover.1666692948.git.leonro@nvidia.com
 * Fixed misplaced "!" in sixth patch.
v5: https://lore.kernel.org/all/cover.1666525321.git.leonro@nvidia.com
 * Rebased to latest ipsec-next.
 * Replaced the HW priority patch with a solution that mimics separated SPDs for SW and HW. See more description in this cover letter.
 * Dropped the RFC tag; use case, API and implementation are clear.
v4: https://lore.kernel.org/all/cover.1662295929.git.leonro@nvidia.com
 * Changed title from "PATCH" to "PATCH RFC" per request.
 * Added two new patches: one to update hard/soft limits and another initial take on documentation.
 * Added more info about the lifetime/rekeying flow to the cover letter, see the relevant section.
 * perf traces for crypto mode will come later.
v3: https://lore.kernel.org/all/cover.1661260787.git.leonro@nvidia.com
 * I didn't hear any suggestion what term to use instead of "packet offload", so left it as is. It is used in commit messages and documentation only and is easy to rename.
 * Added performance data and background info to the cover letter
 * Reused the xfrm_output_resume() function to support multiple XFRM transformations
 * Added a PMTU check in addition to driver .xdo_dev_offload_ok validation
 * Documentation is in progress, but not part of this series yet.
v2: https://lore.kernel.org/all/cover.1660639789.git.leonro@nvidia.com
 * Rebased to latest 6.0-rc1
 * Added an extra check in the TX datapath patch to validate packets before forwarding to HW.
 * Added policy cleanup logic in case of a netdev down event
v1: https://lore.kernel.org/all/cover.1652851393.git.leonro@nvidia.com
 * Moved comment to be before if (...) in third patch.
v0: https://lore.kernel.org/all/cover.1652176932.git.leonro@nvidia.com
============

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
- 05 Dec, 2022 8 commits
-
-
Leon Romanovsky authored
Extend the XFRM device offload API description with the newly added packet offload mode. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Both in RX and TX, traffic that goes through the IPsec packet offload transformation is accounted by the HW. This is needed to properly handle hard limits that require dropping the packet. It means the XFRM core needs to update its internal counters with the ones accounted by the HW, so new callbacks are introduced in this patch. When a soft or hard limit occurs, the driver should call xfrm_state_check_expire(), which will perform key rekeying exactly as done by the XFRM core. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
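A hedged sketch of the contract these callbacks establish; the sample_* names are illustrative stand-ins, not mlx5's implementation.

#include <net/xfrm.h>

/* Stand-in for reading the HW (e.g. ASO) packet counter. */
static u64 sample_read_hw_counter(struct xfrm_state *x)
{
	return 0;
}

/* Core -> driver (wired up as .xdo_dev_state_update_curlft): sync HW
 * counters into x->curlft so limits are checked against the numbers
 * the HW actually saw. */
static void sample_update_curlft(struct xfrm_state *x)
{
	x->curlft.packets = sample_read_hw_counter(x);
}

/* Driver -> core: on a HW soft/hard limit event, let the XFRM core
 * decide (expire event toward userspace for rekeying, or state expiry). */
static void sample_hw_limit_event(struct xfrm_state *x)
{
	xfrm_state_check_expire(x);
}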
-
Leon Romanovsky authored
Devices that implement IPsec packet offload mode should offload SAs and policies too. In the RX path, this leads to the situation that the HW will always have higher priority than any SW policies. It means that we don't need to perform any search of inexact policies and/or priority checks if a HW policy was discovered. In such a situation, the HW will catch the packets anyway, and the HW can still implement inexact lookups. In case a specific policy is not found, we will continue with the packet lookup and check for the existence of HW policies in the inexact list. HW policies are added to the head of the SPD to ensure fast lookup, as XFRM iterates over all policies in the loop. The same solution of adding HW SAs at the beginning of the list is applied to the SA database too. However, we don't need to change lookups there, as they are sorted by insertion order and not priority. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
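An illustrative sketch of the insertion rule described above, simplified from the real xfrm_policy.c chain handling: packet-offload policies go to the head of the chain so they are found first; everything else keeps priority-ordered insertion.

static void spd_insert(struct hlist_head *chain, struct xfrm_policy *policy)
{
	struct xfrm_policy *pol;
	struct hlist_node *newpos = NULL;

	/* HW (packet offload) policies always go first. */
	if (policy->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
		hlist_add_head_rcu(&policy->bydst, chain);
		return;
	}

	/* SW policies keep the priority-sorted order. (The real code
	 * also keeps SW entries behind any HW ones already at the head;
	 * omitted here for brevity.) */
	hlist_for_each_entry(pol, chain, bydst) {
		if (pol->priority > policy->priority)
			break;
		newpos = &pol->bydst;
	}
	if (newpos)
		hlist_add_behind_rcu(&policy->bydst, newpos);
	else
		hlist_add_head_rcu(&policy->bydst, chain);
}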
-
Leon Romanovsky authored
Traffic received by a device with IPsec packet offload enabled should be forwarded to the stack only after decryption, with packet headers and trailers removed. Such packets are expected to be seen as normal (non-XFRM) ones, while unsupported packets should be dropped by the HW. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
In IPsec packet mode, the device is going to encrypt and encapsulate packets that are associated with an offloaded policy. After a successful policy lookup indicating that packets should be offloaded, the stack forwards them to the device to do the magic. Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Huy Nguyen <huyn@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Extend the netlink interface to add and delete XFRM policy from the device. This functionality is the first step in implementing the packet IPsec offload solution. Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Leon Romanovsky authored
Allow users to configure xfrm states with packet offload mode. The packet mode must be requested both for policy and state, and this requires us not to implement a fallback. We explicitly return an error if the requested packet mode can't be configured. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
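A hedged sketch of the no-fallback rule, loosely following xfrm_dev_state_add() with error paths simplified; the function name is illustrative.

static int sample_state_offload(struct xfrm_state *x, struct net_device *dev,
				struct xfrm_user_offload *xuo)
{
	struct xfrm_dev_offload *xso = &x->xso;
	int err;

	xso->dev = dev;
	xso->type = (xuo->flags & XFRM_OFFLOAD_PACKET) ?
		    XFRM_DEV_OFFLOAD_PACKET : XFRM_DEV_OFFLOAD_CRYPTO;

	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
	if (err) {
		xso->dev = NULL;
		/* Packet mode must fail hard: no SW fallback and no
		 * silent downgrade to crypto mode. */
		return err;
	}

	return 0;
}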
-
Leon Romanovsky authored
In the next patches, the xfrm core code will be extended to support a new type of offload - packet offload. In that mode, both policy and state should be specially configured in order to perform the whole offloaded data path. Full offload takes care of encryption, decryption, encapsulation and other operations with headers. As this mode is new for the XFRM policy flow, we can "start fresh" with the flag bits and release the first and second bits for future use. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
- 03 Dec, 2022 3 commits
-
-
Lorenzo Bianconi authored
In order to fix the following sleep-while-atomic bug, always allocate pages with GFP_ATOMIC in mtk_wed_wo_queue_refill, since page_frag_alloc runs in a spin_lock critical section.

[ 9.049719] Hardware name: MediaTek MT7986a RFB (DT)
[ 9.054665] Call trace:
[ 9.057096] dump_backtrace+0x0/0x154
[ 9.060751] show_stack+0x14/0x1c
[ 9.064052] dump_stack_lvl+0x64/0x7c
[ 9.067702] dump_stack+0x14/0x2c
[ 9.071001] ___might_sleep+0xec/0x120
[ 9.074736] __might_sleep+0x4c/0x9c
[ 9.078296] __alloc_pages+0x184/0x2e4
[ 9.082030] page_frag_alloc_align+0x98/0x1ac
[ 9.086369] mtk_wed_wo_queue_refill+0x134/0x234
[ 9.090974] mtk_wed_wo_init+0x174/0x2c0
[ 9.094881] mtk_wed_attach+0x7c8/0x7e0
[ 9.098701] mt7915_mmio_wed_init+0x1f0/0x3a0 [mt7915e]
[ 9.103940] mt7915_pci_probe+0xec/0x3bc [mt7915e]
[ 9.108727] pci_device_probe+0xac/0x13c
[ 9.112638] really_probe.part.0+0x98/0x2f4
[ 9.116807] __driver_probe_device+0x94/0x13c
[ 9.121147] driver_probe_device+0x40/0x114
[ 9.125314] __driver_attach+0x7c/0x180
[ 9.129133] bus_for_each_dev+0x5c/0x90
[ 9.132953] driver_attach+0x20/0x2c
[ 9.136513] bus_add_driver+0x104/0x1fc
[ 9.140333] driver_register+0x74/0x120
[ 9.144153] __pci_register_driver+0x40/0x50
[ 9.148407] mt7915_init+0x5c/0x1000 [mt7915e]
[ 9.152848] do_one_initcall+0x40/0x25c
[ 9.156669] do_init_module+0x44/0x230
[ 9.160403] load_module+0x1f30/0x2750
[ 9.164135] __do_sys_init_module+0x150/0x200
[ 9.168475] __arm64_sys_init_module+0x18/0x20
[ 9.172901] invoke_syscall.constprop.0+0x4c/0xe0
[ 9.177589] do_el0_svc+0x48/0xe0
[ 9.180889] el0_svc+0x14/0x50
[ 9.183929] el0t_64_sync_handler+0x9c/0x120
[ 9.188183] el0t_64_sync+0x158/0x15c

Fixes: 79968444 ("net: ethernet: mtk_wed: introduce wed wo support")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Link: https://lore.kernel.org/r/67ca94bdd3d9eaeb86e52b3050fbca0bcf7bb02f.1669908312.git.lorenzo@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
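A minimal illustration of the fix, simplified from mtk_wed_wo_queue_refill() (field names follow the driver loosely): the refill loop runs under a spinlock, so the page-frag allocation must use GFP_ATOMIC rather than a sleeping GFP_KERNEL allocation.

static int wo_queue_refill(struct mtk_wed_wo_queue *q)
{
	int filled = 0;

	spin_lock_bh(&q->lock);
	while (q->queued < q->n_desc) {
		/* GFP_ATOMIC: may fail, but never sleeps here. */
		void *buf = page_frag_alloc(&q->cache, q->buf_size,
					    GFP_ATOMIC);
		if (!buf)
			break;

		/* ... DMA-map buf and fill the RX descriptor ... */
		q->queued++;
		filled++;
	}
	spin_unlock_bh(&q->lock);

	return filled;
}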
-
Eric Dumazet authored
kfree_rcu(1-arg) should be avoided as much as possible, since it is only possible from sleepable contexts and incurs extra RCU barriers. I wish the 1-arg variant of kfree_rcu() would get a distinct name, like kfree_rcu_slow(), to avoid it being abused. Fixes: 459837b5 ("net/tcp: Disable TCP-MD5 static key on tcp_md5sig_info destruction") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20221202052847.2623997-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
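A generic illustration (not the tcp_md5 code itself): embed an rcu_head in the object and use the two-argument kfree_rcu(), which queues the free on that rcu_head, never sleeps, and avoids the extra barriers of the one-argument form.

#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

static void foo_release(struct foo *f)
{
	kfree_rcu(f, rcu);	/* 2-arg form: safe from atomic context */
}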
-
Jakub Kicinski authored
Merge tag 'wireless-next-2022-12-02' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Kalle Valo says:

====================
wireless-next patches for v6.2

Third set of patches for v6.2. mt76 has a new driver for mt7996 Wi-Fi 7 devices and iwlwifi also got initial Wi-Fi 7 support. Otherwise smaller features and fixes.

Major changes:

ath10k
 - store WLAN firmware version in SMEM image table

mt76
 - mt7996: new driver for MediaTek Wi-Fi 7 (802.11be) devices
 - mt7986, mt7915: enable Wireless Ethernet Dispatch (WED) offload support
 - mt7915: add ack signal support
 - mt7915: enable coredump support
 - mt7921: remain_on_channel support
 - mt7921: channel context support

iwlwifi
 - enable Wi-Fi 7 Extremely High Throughput (EHT) PHY capabilities
 - 320 MHz channels support

* tag 'wireless-next-2022-12-02' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (144 commits)
  wifi: ath10k: fix QCOM_SMEM dependency
  wifi: mt76: mt7921e: add pci .shutdown() support
  wifi: mt76: mt7915: mmio: fix naming convention
  wifi: mt76: mt7996: add support to configure spatial reuse parameter set
  wifi: mt76: mt7996: enable ack signal support
  wifi: mt76: mt7996: enable use_cts_prot support
  wifi: mt76: mt7915: rely on band_idx of mt76_phy
  wifi: mt76: mt7915: enable per bandwidth power limit support
  wifi: mt76: mt7915: introduce mt7915_get_power_bound()
  mt76: mt7915: Fix PCI device refcount leak in mt7915_pci_init_hif2()
  wifi: mt76: do not send firmware FW_FEATURE_NON_DL region
  wifi: mt76: mt7921: Add missing __packed annotation of struct mt7921_clc
  wifi: mt76: fix coverity overrun-call in mt76_get_txpower()
  wifi: mt76: mt7996: add driver for MediaTek Wi-Fi 7 (802.11be) devices
  wifi: mt76: mt76x0: remove dead code in mt76x0_phy_get_target_power
  wifi: mt76: mt7915: fix band_idx usage
  wifi: mt76: mt7915: enable .sta_set_txpwr support
  wifi: mt76: mt7915: add basedband Txpower info into debugfs
  wifi: mt76: mt7915: add support to configure spatial reuse parameter set
  wifi: mt76: mt7915: add missing MODULE_PARM_DESC
  ...
====================

Link: https://lore.kernel.org/r/20221202214254.D0D3DC433C1@smtp.kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 02 Dec, 2022 6 commits
-
-
Kalle Valo authored
Nathan noticed that when HWSPINLOCK is disabled there's a Kconfig warning:

WARNING: unmet direct dependencies detected for QCOM_SMEM
  Depends on [n]: (ARCH_QCOM [=y] || COMPILE_TEST [=n]) && HWSPINLOCK [=n]
  Selected by [m]:
  - ATH10K_SNOC [=m] && NETDEVICES [=y] && WLAN [=y] && WLAN_VENDOR_ATH [=y] && ATH10K [=m] && (ARCH_QCOM [=y] || COMPILE_TEST [=n])

The problem here is that QCOM_SMEM depends on HWSPINLOCK, so we cannot select QCOM_SMEM; instead we need to use 'depends on'.

Reported-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/all/Y4YsyaIW+CPdHWv3@dev-arch.thelio-3990X/
Fixes: 4d79f6f3 ("wifi: ath10k: Store WLAN firmware version in SMEM image table")
Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com>
Signed-off-by: Kalle Valo <kvalo@kernel.org>
Link: https://lore.kernel.org/r/20221202103027.25974-1-kvalo@kernel.org
-
Gerhard Engleder authored
Refill the RX queue in batches of descriptors to improve performance. Refill is allowed to fail as long as a minimum number of descriptors is active. Thus, a limited number of failed RX buffer allocations is now tolerated during normal operation. Previously, every failed allocation resulted in a dropped frame. If the minimum number of active descriptors is reached, then RX buffers are still reused and frames are dropped. This ensures that the RX queue never runs empty and always continues to operate. Prework for future XDP support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
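A hedged sketch of the policy described above; the sample_* names and the batch/minimum constants are illustrative, not the actual tsnep code.

#define RX_REFILL_BATCH	16	/* assumed batch size */
#define RX_MIN_ACTIVE	8	/* assumed minimum of active descriptors */

struct sample_rx_ring {
	int active;	/* descriptors currently owned by the HW */
	int size;	/* total descriptors in the ring */
};

/* Stand-in for allocating and mapping one RX buffer; may fail. */
static int sample_alloc_rx_buffer(struct sample_rx_ring *rx)
{
	return 0;
}

static void sample_rx_refill(struct sample_rx_ring *rx)
{
	int i;

	for (i = 0; i < RX_REFILL_BATCH && rx->active < rx->size; i++) {
		/* A failed allocation is tolerated: the ring keeps
		 * operating as long as at least RX_MIN_ACTIVE descriptors
		 * stay active; below that, buffers are reused and the
		 * corresponding frames are dropped. */
		if (sample_alloc_rx_buffer(rx))
			break;
		rx->active++;
	}
}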
-
Gerhard Engleder authored
Without interrupt throttling, iperf server mode generates a CPU load of 100% (A53 1.2GHz). The throughput also suffers, with less than 900Mbit/s on a 1Gbit/s link. The reason is a high interrupt load, with interrupts every ~20us. Reduce the interrupt load by throttling interrupts. The default interrupt delay is 64us. For iperf server mode, the CPU load is significantly reduced to ~20% and the throughput reaches the maximum of 941MBit/s. Interrupts are generated every ~140us. RX and TX coalesce can be configured with ethtool. RX coalesce has priority over TX coalesce if the same interrupt is used. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
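A hedged sketch of the coalescing knobs this exposes (the ethtool_ops fields are real; the sample_* driver pieces are illustrative): one delay value per interrupt, with RX winning over TX when both share the interrupt.

struct sample_priv {
	u32 irq_delay_usecs;
};

/* Stand-in for programming the per-interrupt delay register. */
static int sample_hw_set_irq_delay(struct sample_priv *priv, u32 usecs)
{
	priv->irq_delay_usecs = usecs;
	return 0;
}

static int sample_set_coalesce(struct net_device *netdev,
			       struct ethtool_coalesce *ec,
			       struct kernel_ethtool_coalesce *kernel_coal,
			       struct netlink_ext_ack *extack)
{
	/* RX coalesce has priority over TX on a shared interrupt. */
	u32 usecs = ec->rx_coalesce_usecs ? ec->rx_coalesce_usecs
					  : ec->tx_coalesce_usecs;

	return sample_hw_set_irq_delay(netdev_priv(netdev), usecs);
}

static const struct ethtool_ops sample_ethtool_ops = {
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS,
	.set_coalesce = sample_set_coalesce,
};

Configured from userspace with, e.g., "ethtool -C eth0 rx-usecs 64".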
-
Gerhard Engleder authored
Allow user space to read the number of TX and RX queues. This is useful for device-dependent qdisc configurations like TAPRIO with hardware offload. Also, ethtool::get_per_queue_coalesce / set_per_queue_coalesce requires that interface. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
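A hedged sketch of wiring queue counts into ethtool (the ethtool_channels fields are real; the adapter struct is illustrative); "ethtool -l <if>" then reports these values.

struct sample_adapter {
	int num_tx_queues;
	int num_rx_queues;
};

static void sample_get_channels(struct net_device *netdev,
				struct ethtool_channels *ch)
{
	struct sample_adapter *adapter = netdev_priv(netdev);

	ch->max_rx = adapter->num_rx_queues;
	ch->max_tx = adapter->num_tx_queues;
	ch->rx_count = adapter->num_rx_queues;
	ch->tx_count = adapter->num_tx_queues;
}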
-
Gerhard Engleder authored
Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jonathan Toppins authored
Correct xmit hash steps for layer3+4 as introduced by commit 49aefd13 ("bonding: do not discard lowest hash bit for non layer3+4 hashing"). Signed-off-by: Jonathan Toppins <jtoppins@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
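A hedged, simplified reconstruction of the folding step at issue (based on bond_ip_hash() after commit 49aefd13; not a verbatim copy): the lowest hash bit is discarded only for layer3+4 policies, where L4 ports keep the hash well distributed.

static u32 bond_fold_hash(u32 hash, const struct flow_keys *flow,
			  int xmit_policy)
{
	hash ^= (__force u32)flow_get_u32_dst(flow) ^
		(__force u32)flow_get_u32_src(flow);
	hash ^= (hash >> 16);
	hash ^= (hash >> 8);

	/* Discard the lowest bit only when L4 ports fed the hash. */
	if (xmit_policy == BOND_XMIT_POLICY_LAYER34 ||
	    xmit_policy == BOND_XMIT_POLICY_ENCAP34)
		return hash >> 1;

	return hash;
}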
-