- 24 Aug, 2023 8 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Paolo Abeni authored
Saeed Mahameed says:
====================
mlx5-updates-2023-08-22

1) Patches #1..#13 from Jiri: The goal of this patchset is to make the SF code cleaner. Benefit from the previously introduced devlink_port struct containerization to avoid unnecessary lookups in devlink port ops. Also, benefit from the devlink locking changes and avoid unnecessary reference counting.

2) Patches #14, #15: Add the ability to configure both UDP and TCP proto selectors in the RX and TX directions.

* tag 'mlx5-updates-2023-08-22' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5e: Support IPsec upper TCP protocol selector
  net/mlx5e: Support IPsec upper protocol selector field offload for RX
  net/mlx5: Store vport in struct mlx5_devlink_port and use it in port ops
  net/mlx5: Check vhca_resource_manager capability in each op and add extack msg
  net/mlx5: Relax mlx5_devlink_eswitch_get() return value checking
  net/mlx5: Return -EOPNOTSUPP in mlx5_devlink_port_fn_migratable_set() directly
  net/mlx5: Reduce number of vport lookups passing vport pointer instead of index
  net/mlx5: Embed struct devlink_port into driver structure
  net/mlx5: Don't register ops for non-PF/VF/SF port and avoid checks in ops
  net/mlx5: Remove no longer used mlx5_esw_offloads_sf_vport_enable/disable()
  net/mlx5: Introduce mlx5_eswitch_load/unload_sf_vport() and use it from SF code
  net/mlx5: Allow mlx5_esw_offloads_devlink_port_register() to register SFs
  net/mlx5: Push devlink port PF/VF init/cleanup calls out of devlink_port_register/unregister()
  net/mlx5: Push out SF devlink port init and cleanup code to separate helpers
  net/mlx5: Rework devlink port alloc/free into init/cleanup
====================

Link: https://lore.kernel.org/all/20230823051012.162483-1-saeed@kernel.org/
Signed-off-by:
Paolo Abeni <pabeni@redhat.com>
-
Jakub Kicinski authored
Daniel Golle says:
====================
net: ethernet: mtk_eth_soc: improve support for MT7988

This series fixes and completes commit 445eb644 ("net: ethernet: mtk_eth_soc: add basic support for MT7988 SoC") and also adds support for using the in-SoC SRAM on the earlier MT7986 and MT7981 SoCs.
====================

Link: https://lore.kernel.org/r/cover.1692721443.git.daniel@makrotopia.org
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Daniel Golle authored
Systems with 4 GiB of RAM or more require DMA addressing beyond the current 32-bit limit. Starting from MT7988 the hardware supports 36-bit DMA addressing; use that new capability in the driver to avoid running into swiotlb on such systems. Signed-off-by:
Daniel Golle <daniel@makrotopia.org> Link: https://lore.kernel.org/r/95b919c98876c9e49761e44662e7c937479eecb8.1692721443.git.daniel@makrotopia.org
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
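For reference, a minimal sketch of the driver-side idea: request a wider DMA mask where the SoC supports it and fall back to 32 bits otherwise. The capability flag and function name are hypothetical, not the actual mtk_eth_soc code.

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hedged sketch: 'supports_36bit_dma' is a hypothetical capability flag. */
static int example_set_dma_mask(struct device *dev, bool supports_36bit_dma)
{
        if (supports_36bit_dma &&
            !dma_set_mask_and_coherent(dev, DMA_BIT_MASK(36)))
                return 0;       /* buffers above 4 GiB no longer bounce via swiotlb */

        /* Older hardware keeps the 32-bit mask. */
        return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}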
-
Daniel Golle authored
MT7981, MT7986 and MT7988 come with in-SoC SRAM dedicated for Ethernet DMA rings. Support using the SRAM without breaking existing device tree bindings, i.e. only new SoCs starting from MT7988 will have the SRAM declared as an additional resource in the device tree. For MT7981 and MT7986 an offset on top of the main I/O base is used. Signed-off-by:
Daniel Golle <daniel@makrotopia.org> Link: https://lore.kernel.org/r/e45e0f230c63ad58869e8fe35b95a2fb8925b625.1692721443.git.daniel@makrotopia.org
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
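A rough sketch of the fallback described above, assuming a hypothetical resource name, offset macro and function name rather than the actual mtk_eth_soc code:

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

#define EXAMPLE_SRAM_OFFSET 0x40000     /* hypothetical offset for older SoCs */

static void __iomem *example_map_sram(struct platform_device *pdev,
                                      void __iomem *base)
{
        struct resource *res;

        /* Newer SoCs (MT7988 and later) declare the SRAM as its own
         * device tree resource ("sram" here is an assumed name). */
        res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "sram");
        if (res)
                return devm_ioremap_resource(&pdev->dev, res);

        /* Older SoCs (MT7981/MT7986): the SRAM sits at a fixed offset
         * on top of the main I/O base. */
        return base + EXAMPLE_SRAM_OFFSET;
}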
-
Daniel Golle authored
Add bits needed to reset the frame engine on MT7988. Fixes: 445eb644 ("net: ethernet: mtk_eth_soc: add basic support for MT7988 SoC") Signed-off-by:
Daniel Golle <daniel@makrotopia.org> Link: https://lore.kernel.org/r/89b6c38380e7a3800c1362aa7575600717bc7543.1692721443.git.daniel@makrotopia.org
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Daniel Golle authored
More register macros need to be adjusted for the 3rd GMAC on MT7988. Account for added bit in SYSCFG0_SGMII_MASK. Fixes: 445eb644 ("net: ethernet: mtk_eth_soc: add basic support for MT7988 SoC") Signed-off-by:
Daniel Golle <daniel@makrotopia.org> Reviewed-by:
Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/1c8da012e2ca80939906d85f314138c552139f0f.1692721443.git.daniel@makrotopia.org
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Wei Fang authored
Since exception tracing has already been added for XDP_TX, add it for the other XDP actions as well, such as XDP_REDIRECT, XDP_ABORTED and unknown error actions. Signed-off-by:
Wei Fang <wei.fang@nxp.com> Suggested-by:
Jakub Kicinski <kuba@kernel.org> Reviewed-by:
Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20230822065255.606739-1-wei.fang@nxp.com
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
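For context, a generic sketch of how a driver commonly reports these outcomes through the xdp_exception tracepoint; apart from the kernel helpers, the names are illustrative and not the driver's actual code.

#include <linux/bpf_trace.h>
#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

static u32 example_handle_xdp_act(struct net_device *ndev,
                                  struct bpf_prog *prog,
                                  struct xdp_buff *xdp, u32 act)
{
        switch (act) {
        case XDP_PASS:
        case XDP_TX:
                break;
        case XDP_REDIRECT:
                if (xdp_do_redirect(ndev, xdp, prog) < 0) {
                        /* Failed redirects show up as xdp_exception events. */
                        trace_xdp_exception(ndev, prog, act);
                        act = XDP_DROP;
                }
                break;
        default:
                bpf_warn_invalid_xdp_action(ndev, prog, act);
                fallthrough;
        case XDP_ABORTED:
                /* Aborted and unknown actions are traced before dropping. */
                trace_xdp_exception(ndev, prog, act);
                fallthrough;
        case XDP_DROP:
                break;
        }
        return act;
}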
-
Yu Liao authored
Use the standard error pointer macro to shorten and simplify the code. Signed-off-by:
Yu Liao <liaoyu15@huawei.com> Link: https://lore.kernel.org/r/20230822021455.205101-2-liaoyu15@huawei.com
Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
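The patch body isn't quoted here, but the standard error-pointer idiom it refers to looks roughly like this sketch (struct foo and the function names are illustrative):

#include <linux/err.h>
#include <linux/slab.h>

struct foo { int val; };        /* illustrative type */

/* Return an encoded error pointer instead of NULL plus a status argument. */
static struct foo *example_alloc(void)
{
        struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

        if (!f)
                return ERR_PTR(-ENOMEM);
        return f;
}

static int example_use(void)
{
        struct foo *f = example_alloc();

        if (IS_ERR(f))
                return PTR_ERR(f);      /* propagate the encoded errno */
        kfree(f);
        return 0;
}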
-
- 23 Aug, 2023 32 commits
-
-
Jakub Kicinski authored
All callers of build_skb() (*) in bnxt are in NAPI context. The budget checking is somewhat convoluted because in the shared completion queue cases Rx packets are discarded by netpoll by forcing an error (E). But that happens before skb allocation. Only a call chain starting at __bnxt_poll_work() can lead to an skb allocation and it checks budget (b).

  * bnxt_rx_multi_page_skb
  * bnxt_rx_skb
    ` bp->rx_skb_func
  * bnxt_tpa_end
    ` bnxt_rx_pkt
      E bnxt_force_rx_discard
      E bnxt_poll_nitroa0
      b __bnxt_poll_work

Use napi_build_skb() to take advantage of the skb cache. In iperf tests with HW-GRO enabled it barely makes a difference but in cases where HW-GRO is not as effective (or disabled) it can give even a >10% boost (20.7Gbps -> 23.1Gbps). Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Reviewed-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
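A minimal sketch of the substitution, with 'data' and 'frag_size' standing in for the driver's receive buffer:

#include <linux/skbuff.h>

/* napi_build_skb() has the same contract as build_skb() but takes the
 * skb head from the per-CPU NAPI cache, so it must only be called from
 * NAPI/softirq context. */
static struct sk_buff *example_rx_build_skb(void *data, unsigned int frag_size)
{
        return napi_build_skb(data, frag_size);
}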
-
Alexis Lothoré authored
Remove debug logs in port VLAN management, since there are already multiple tracepoints defined for those operations in DSA. Signed-off-by:
Alexis Lothoré <alexis.lothore@bootlin.com> Reviewed-by:
Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Jordan Rife authored
BPF programs that run on connect can rewrite the connect address. For the connect system call this isn't a problem, because a copy of the address is made when it is moved into kernel space. However, kernel_connect simply passes through the address it is given, so the caller may observe its address value unexpectedly change.

A practical example where this is problematic is when NFS is combined with a system such as Cilium which implements BPF-based load balancing. A common pattern in software-defined storage systems is to have an NFS mount that connects to a persistent virtual IP which in turn maps to an ephemeral server IP. This is usually done to achieve high availability: if your server goes down you can quickly spin up a replacement and remap the virtual IP to that endpoint. With BPF-based load balancing, mounts will forget the virtual IP address when the address rewrite occurs because a pointer to the only copy of that address is passed down the stack. Server failover then breaks, because clients have forgotten the virtual IP address. Reconnects fail and mounts remain broken.

This patch was tested by setting up a scenario like this and ensuring that NFS reconnects worked after applying the patch. Signed-off-by:
Jordan Rife <jrife@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
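A simplified sketch of the approach (not the exact upstream patch): copy the caller-provided address into kernel-owned storage so a BPF connect hook can only rewrite the copy, never the caller's buffer.

#include <linux/net.h>
#include <linux/socket.h>
#include <linux/string.h>

static int example_kernel_connect(struct socket *sock, struct sockaddr *addr,
                                  int addrlen, int flags)
{
        struct sockaddr_storage address;

        if (addrlen < 0 || addrlen > sizeof(address))
                return -EINVAL;

        /* Any BPF connect hook invoked below now only ever sees this
         * kernel-side copy of the address. */
        memcpy(&address, addr, addrlen);

        return sock->ops->connect(sock, (struct sockaddr *)&address,
                                  addrlen, flags);
}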
-
Feng Liu authored
The virtio_net driver currently deals with different versions and types of virtio net headers, such as virtio_net_hdr_mrg_rxbuf, virtio_net_hdr_v1_hash, etc. Due to these variations, the code relies on multiple type casts to convert memory between different structures, potentially leading to bugs when there are changes in these structures.

Introduce "struct skb_vnet_common_hdr" as a unifying header structure using a union. With this approach, various virtio net header structures can be converted by accessing different members of this structure, thus eliminating the need for type casting and reducing the risk of potential bugs. For example, consider the following code:

static struct sk_buff *page_to_skb(struct virtnet_info *vi,
                                   struct receive_queue *rq,
                                   struct page *page, unsigned int offset,
                                   unsigned int len, unsigned int truesize,
                                   unsigned int headroom)
{
        [...]
        struct virtio_net_hdr_mrg_rxbuf *hdr;
        [...]
        hdr_len = vi->hdr_len;
        [...]
ok:
        hdr = skb_vnet_hdr(skb);
        memcpy(hdr, hdr_p, hdr_len);
        [...]
}

When the VIRTIO_NET_F_HASH_REPORT feature is enabled, hdr_len = 20, but sizeof(*hdr) is 12, so memcpy(hdr, hdr_p, hdr_len); copies 20 bytes into hdr, which creates a potential risk of a bug. This risk can be avoided by introducing struct skb_vnet_common_hdr.

Change log:
v1->v2
feedback from Willem de Bruijn <willemdebruijn.kernel@gmail.com>
feedback from Simon Horman <horms@kernel.org>
1. change to use net-next tree.
2. move skb_vnet_common_hdr inside kernel file instead of the UAPI header.
v2->v3
feedback from Willem de Bruijn <willemdebruijn.kernel@gmail.com>
1. fix typo in commit message.
2. add original struct virtio_net_hdr into union
3. remove virtio_net_hdr_mrg_rxbuf variable in receive_buf;
Signed-off-by:
Feng Liu <feliu@nvidia.com> Reviewed-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Willem de Bruijn <willemb@google.com> Acked-by:
Jason Wang <jasowang@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
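A minimal sketch of the kind of unifying header described above (member names are assumptions, not necessarily the merged definition). Because the union is as large as its biggest member, a memcpy() of vi->hdr_len bytes into it stays within bounds for every negotiated feature set:

#include <linux/virtio_net.h>

struct skb_vnet_common_hdr {
        union {
                struct virtio_net_hdr           hdr;
                struct virtio_net_hdr_mrg_rxbuf mrg_hdr;
                struct virtio_net_hdr_v1_hash   hash_v1_hdr;
        };
};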
-
Jinjie Ruan authored
Convert list_for_each() to list_for_each_entry() where applicable. No functional change. Signed-off-by:
Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
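A before/after sketch of the conversion, with a hypothetical struct foo_entry standing in for the driver's list element:

#include <linux/list.h>

struct foo_entry {
        struct list_head node;
        int value;
};

/* Before: iterate raw list nodes and convert each one manually. */
static int example_sum_old(struct list_head *head)
{
        struct list_head *pos;
        int sum = 0;

        list_for_each(pos, head) {
                struct foo_entry *e = list_entry(pos, struct foo_entry, node);

                sum += e->value;
        }
        return sum;
}

/* After: the cursor is already the typed entry. */
static int example_sum_new(struct list_head *head)
{
        struct foo_entry *e;
        int sum = 0;

        list_for_each_entry(e, head, node)
                sum += e->value;
        return sum;
}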
-
David S. Miller authored
Petr Pavlu says:
====================
Convert mlx4 to use auxiliary bus

This series converts the mlx4 drivers to use auxiliary bus, similarly to how mlx5 was converted [1]. The first 6 patches are preparatory changes, the remaining 4 are the final conversion.

Initial motivation for this change was to address a problem related to loading mlx4_en/mlx4_ib by mlx4_core using request_module_nowait(). When doing such a load in initrd, the operation is asynchronous to any init control and can get unexpectedly affected/interrupted by an eventual root switch. Using an auxiliary bus leaves these module loads to udevd which better integrates with systemd processing. [2]

General benefit is to get rid of custom interface logic and instead use a common facility available for this task. An obvious risk is that some new bug is introduced by the conversion. Leon Romanovsky was kind enough to check for me that the series passes their verification tests.

Changes since v2 [3]:
* Use 'void *' as the event param of mlx4_dispatch_event().

Changes since v1 [4]:
* Fix a missing definition of the err variable in mlx4_en_add().
* Remove not needed comments about the event type in mlx4_en_event() and mlx4_ib_event().

[1] https://lore.kernel.org/netdev/20201101201542.2027568-1-leon@kernel.org/
[2] https://lore.kernel.org/netdev/0a361ac2-c6bd-2b18-4841-b1b991f0635e@suse.com/
[3] https://lore.kernel.org/netdev/20230813145127.10653-1-petr.pavlu@suse.com/
[4] https://lore.kernel.org/netdev/20230804150527.6117-1-petr.pavlu@suse.com/
====================

Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
After the conversion to use the auxiliary bus, the custom device management is not needed anymore and can be deleted. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Use the auxiliary bus to perform device management of the infiniband part of the mlx4 driver. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Use the auxiliary bus to perform device management of the ethernet part of the mlx4 driver. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Add an auxiliary virtual bus to model the mlx4 driver structure. The code is added alongside the current custom device management logic. Subsequent patches switch mlx4_en and mlx4_ib to the auxiliary bus and the old interface is then removed.

Structure mlx4_priv gains a new adev dynamic array to keep track of its auxiliary devices. Access to the array is protected by the global mlx4_intf mutex. Functions mlx4_register_device() and mlx4_unregister_device() are updated to expose auxiliary devices on the bus in order to load mlx4_en and/or mlx4_ib. Functions mlx4_register_auxiliary_driver() and mlx4_unregister_auxiliary_driver() are added to substitute mlx4_register_interface() and mlx4_unregister_interface(), respectively. Function mlx4_do_bond() is adjusted to walk over the adev array and re-add a specific auxiliary device if its driver sets the MLX4_INTFF_BONDING flag. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
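For context, a skeleton of what a sub-driver on the auxiliary bus looks like; the names, including the "mlx4_core.eth" match string, are illustrative assumptions rather than the final mlx4_en code.

#include <linux/auxiliary_bus.h>
#include <linux/module.h>

static int example_probe(struct auxiliary_device *adev,
                         const struct auxiliary_device_id *id)
{
        /* Bind sub-driver state to the auxiliary device here. */
        dev_info(&adev->dev, "bound to %s\n", id->name);
        return 0;
}

static void example_remove(struct auxiliary_device *adev)
{
        /* Tear down the sub-driver state. */
}

static const struct auxiliary_device_id example_id_table[] = {
        { .name = "mlx4_core.eth" },    /* "<parent module>.<device name>" */
        {}
};
MODULE_DEVICE_TABLE(auxiliary, example_id_table);

static struct auxiliary_driver example_adrv = {
        .probe = example_probe,
        .remove = example_remove,
        .id_table = example_id_table,
};
module_auxiliary_driver(example_adrv);

MODULE_LICENSE("GPL");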
-
Petr Pavlu authored
The mlx4_core driver has logic that allows a sub-driver to set the MLX4_INTFF_BONDING flag, which then causes mlx4_do_bond() to ask the sub-driver to fully re-probe a device when its bonding configuration changes. Performing this operation is disallowed in mlx4_register_interface() when it is detected that any mlx4 device is multifunction (SRIOV). The code then resets MLX4_INTFF_BONDING in the driver flags. Move this check directly into mlx4_do_bond(). It provides a better separation as mlx4_core no longer directly modifies the sub-driver flags, and it will make it possible to get rid of explicitly keeping track of all mlx4 devices in the intf.c code when it is switched to an auxiliary bus. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Function mlx4_en_queue_bond_work() is used in mlx4_en to start a bond reconfiguration. It gathers data about a new port map setting, takes a reference on the netdev that triggered the change and queues a work object on mlx4_en_priv.mdev.workqueue to perform the operation. The scheduled work is mlx4_en_bond_work() which calls mlx4_bond()/mlx4_unbond() and consequently mlx4_do_bond().

At the same time, function mlx4_change_port_types() in mlx4_core might be invoked to change the port type configuration. As part of its logic, it re-registers the whole device by calling mlx4_unregister_device(), followed by mlx4_register_device().

The two operations can result in concurrent access to the data about currently active interfaces on the device. Functions mlx4_register_device() and mlx4_unregister_device() lock the intf_mutex to gain exclusive access to this data. The current implementation of mlx4_do_bond() doesn't do that, which could result in unexpected behavior. An updated version of mlx4_do_bond() for use with an auxiliary bus goes and locks the intf_mutex when accessing a new auxiliary device array. However, doing so can then result in the following deadlock:

* A two-port mlx4 device is configured as an Ethernet bond.
* One of the ports is changed from eth to ib, for instance, by writing into a mlx4_port<x> sysfs attribute file.
* mlx4_change_port_types() is called to update port types. It invokes mlx4_unregister_device() to unregister the device which locks the intf_mutex and starts removing all associated interfaces.
* Function mlx4_en_remove() gets invoked and starts destroying its first netdev. This triggers mlx4_en_netdev_event() which recognizes that the configured bond is broken. It runs mlx4_en_queue_bond_work() which takes a reference on the netdev. Removing the netdev now cannot proceed until the work is completed.
* Work function mlx4_en_bond_work() gets scheduled. It calls mlx4_unbond() -> mlx4_do_bond(). The latter function tries to lock the intf_mutex but that is not possible because it is held already by mlx4_unregister_device().

This particular case could possibly be solved by unregistering the mlx4_en_netdev_event() notifier in mlx4_en_remove() earlier, but it seems better to decouple mlx4_en more and break this reference order.

Avoid this scenario by recognizing that the bond reconfiguration operates only on a mlx4_dev. The logic to queue and execute the bond work can be moved into the mlx4_core driver. Only a reference on the respective mlx4_dev object needs to be taken during the work's lifetime. This removes a call from mlx4_en that can directly result in needing to lock the intf_mutex; it remains a privilege of the core driver. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
The mlx4_interface.activate callback was introduced in commit 79857cd3 ("net/mlx4: Postpone the registration of net_device"). It dealt with a situation when a netdev notifier received a NETDEV_REGISTER event for a new net_device created by mlx4_en but the same device was not yet visible to mlx4_get_protocol_dev(). The callback can be removed now that mlx4_get_protocol_dev() is gone. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Use a notifier to implement mlx4_dispatch_event() in preparation to switch mlx4_en and mlx4_ib to be auxiliary devices. A problem is that if the mlx4_interface.event callback was replaced with something such as mlx4_adrv.event, then the implementation of mlx4_dispatch_event() would need to acquire a lock on a given device before executing this callback. That is necessary because otherwise there is no guarantee that the associated driver cannot get unbound while the callback is running. However, taking this lock is not possible because mlx4_dispatch_event() can be invoked from hardirq context. Using an atomic notifier allows the driver to accurately record when it wants to receive these events and solves this problem. A handler registration is done by both mlx4_en and mlx4_ib at the end of their mlx4_interface.add callback. This matches the current situation when mlx4_add_device() would enable events for a given device immediately after this callback, by adding the device on the mlx4_priv.list. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
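A generic sketch of the atomic-notifier pattern described above (the names are illustrative, not the mlx4 symbols):

#include <linux/notifier.h>

static ATOMIC_NOTIFIER_HEAD(example_event_nh);

/* Core side: safe to call from hardirq context. */
void example_dispatch_event(unsigned long event, void *param)
{
        atomic_notifier_call_chain(&example_event_nh, event, param);
}

/* Sub-driver side: handler plus registration at the end of probe. */
static int example_event_handler(struct notifier_block *nb,
                                 unsigned long event, void *param)
{
        /* React to the event; 'param' carries the event payload. */
        return NOTIFY_DONE;
}

static struct notifier_block example_nb = {
        .notifier_call = example_event_handler,
};

int example_register_events(void)
{
        return atomic_notifier_chain_register(&example_event_nh, &example_nb);
}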
-
Petr Pavlu authored
Function mlx4_dispatch_event() takes an 'unsigned long' as its event parameter. The actual value is none (MLX4_DEV_EVENT_CATASTROPHIC_ERROR), a pointer to mlx4_eqe (MLX4_DEV_EVENT_PORT_MGMT_CHANGE), or a 32-bit integer (remaining events). In preparation to switch mlx4_en and mlx4_ib to be an auxiliary device, the mlx4_interface.event callback is replaced with a notifier and function mlx4_dispatch_event() gets updated to invoke atomic_notifier_call_chain(). This requires forwarding the input 'param' value from the former function to the latter. A problem is that the notifier call takes 'void *' as its 'param' value, compared to 'unsigned long' used by mlx4_dispatch_event(). Re-passing the value would need either punning it to 'void *' or passing down the address of the input 'param'. Both approaches create a number of unnecessary casts. Change instead the input 'param' of mlx4_dispatch_event() from 'unsigned long' to 'void *'. A mlx4_eqe pointer can be passed directly, callers using an int value are adjusted to pass its address. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Rename the mlx4_en_dev.nb notifier_block member to netdev_nb in preparation to add a mlx4 core notifier_block. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Petr Pavlu authored
Simplify the mlx4 driver interface by removing mlx4_get_protocol_dev() and the associated mlx4_interface.get_dev callbacks. This is done in preparation to use an auxiliary bus to model the mlx4 driver structure.

The change is motivated by the following situation:
* The mlx4_en interface is being initialized by mlx4_en_add() and mlx4_en_activate().
* The latter activate function calls mlx4_en_init_netdev() -> register_netdev() to register a new net_device.
* A netdev event NETDEV_REGISTER is raised for the device.
* The netdev notifier mlx4_ib_netdev_event() is called and it invokes mlx4_ib_scan_netdevs() -> mlx4_get_protocol_dev() -> mlx4_en_get_netdev() [via mlx4_interface.get_dev].

This chain creates a problem when mlx4_en gets switched to be an auxiliary driver. It contains two device calls which would both need to take a respective device lock. Avoid this situation by updating mlx4_ib_scan_netdevs() to no longer call mlx4_get_protocol_dev() but instead to utilize the information passed in net_device.parent and net_device.dev_port. This data is sufficient to determine that an updated port is one that the mlx4_ib driver should take care of and to keep mlx4_ib_dev.iboe.netdevs up to date. Following that, update mlx4_ib_get_netdev() to also not call mlx4_get_protocol_dev() and instead scan all current netdevs to find a matching one. Note that mlx4_ib_get_netdev() is called early from ib_register_device() and cannot use data tracked in mlx4_ib_dev.iboe.netdevs, which is not yet set at that point. Finally, remove function mlx4_get_protocol_dev() and the mlx4_interface.get_dev callbacks (only mlx4_en_get_netdev()) as they became unused. Signed-off-by:
Petr Pavlu <petr.pavlu@suse.com> Tested-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Acked-by:
Tariq Toukan <tariqt@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Yue Haibing authored
Commit 8cd160a2 ("qede: convert to new udp_tunnel_nic infra") removed qede_udp_tunnel_{add,del}() but not the declarations. Commit 0ebcebbe ("qed: Read device port count from the shmem") removed qed_device_num_engines() but not its declaration. Commit 1e128c81 ("qed: Add support for hardware offloaded FCoE.") declared but never implemented qed_fcoe_set_pf_params(). Signed-off-by:
Yue Haibing <yuehaibing@huawei.com> Reviewed-by:
Simon Horman <horms@kernel.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Sai Krishna authored
Some of the newer silicon versions in the CN10K series support a feature wherein the current PTP timestamp in HW can be updated atomically without losing any CPU cycles, unlike a read/modify/write register sequence. This patch uses this feature so that PTP accuracy can be improved while adjusting the master offset in HW. There is no need for a SW timecounter when using this feature, so references to the SW timecounter are removed wherever appropriate. Signed-off-by:
Sai Krishna <saikrishnag@marvell.com> Signed-off-by:
Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by:
Sunil Kovvuri Goutham <sgoutham@marvell.com> Reviewed-by:
Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Reviewed-by:
Simon Horman <horms@kernel.org> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Leon Romanovsky authored
Support TCP as protocol selector for policy and state in IPsec packet offload mode. Example of state configuration is as follows:

  ip xfrm state add src 192.168.25.3 dst 192.168.25.1 \
     proto esp spi 1001 reqid 10001 aead 'rfc4106(gcm(aes))' \
     0x54a7588d36873b031e4bd46301be5a86b3a53879 128 mode transport \
     offload packet dev re0 dir in sel src 192.168.25.3 dst 192.168.25.1 \
     proto tcp dport 9003

Acked-by:
Raed Salem <raeds@nvidia.com> Reviewed-by:
Simon Horman <horms@kernel.org> Signed-off-by:
Leon Romanovsky <leonro@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Emeel Hakim authored
Support RX policy/state upper protocol selector field offload, to enable selecting RX traffic for IPsec operation based on the L4 protocol (UDP) with a specific source/destination port. Signed-off-by:
Emeel Hakim <ehakim@nvidia.com> Reviewed-by:
Raed Salem <raeds@nvidia.com> Reviewed-by:
Simon Horman <horms@kernel.org> Signed-off-by:
Leon Romanovsky <leonro@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Instead of using the internal devlink_port->index to perform a vport lookup in every devlink port op, store the vport pointer in the container struct mlx5_devlink_port and use it directly in port ops. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
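A hedged sketch of the container pattern this and the related patches rely on; the struct layout is illustrative, not the exact mlx5 definition:

#include <linux/container_of.h>
#include <net/devlink.h>

struct mlx5_vport;      /* opaque driver context for the sketch */

struct mlx5_devlink_port {
        struct devlink_port dl_port;    /* embedded, registered with devlink */
        struct mlx5_vport *vport;       /* stored once, reused by every port op */
};

/* In a port op, devlink hands back the embedded devlink_port; recover
 * the container (and thus the vport) without any index-based lookup. */
static inline struct mlx5_devlink_port *
to_mlx5_devlink_port(struct devlink_port *dl_port)
{
        return container_of(dl_port, struct mlx5_devlink_port, dl_port);
}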
-
Jiri Pirko authored
Since the follow-up patch is going to remove mlx5_devlink_port_fn_get_vport() entirely, move the vhca_resource_manager capability checking to individual ops. Add proper extack message on the way. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
When called from port ops, there is no need to perform the checks in mlx5_devlink_eswitch_get(), because the devlink port would not be registered if the checks did not hold. Introduce a relaxed version, mlx5_devlink_eswitch_nocheck_get(), and use it in port ops. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Instead of initializing "err" variable, just return "-EOPNOTSUPP" directly where it is needed. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
During devlink port init/cleanup and register/unregister calls, there are many lookups of vport. Instead of passing vport_num as argument to functions, pass the vport struct pointer directly and avoid repeated lookups. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Struct devlink_port is usually embedded in a driver-specific struct, which allows driver context to be carried to devlink port ops. Introduce a container struct to include the devlink_port struct, in preparation to also include driver context for devlink port ops. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Currently, each PF/VF/SF devlink port op that calls into mlx5 code calls is_port_function_supported() to check whether the port is a PF, VF or SF. Instead, make sure that the ops are registered with the devlink port only for those port types and avoid the is_port_function_supported() checks in the ops. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Since the previous patch removed the only users of these functions, remove them. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Similar to the PF/VF helpers, introduce a set of load/unload helpers for SF vports. From there, call mlx5_eswitch_load/unload_vport() which are common for PFs/VFs and newly introduced SF helpers. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
Currently there is a separate set of functions used to register/unregister the SF. The only difference is currently the ops struct. Move the struct up and use it for SFs in mlx5_esw_offloads_devlink_port_register(). Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-
Jiri Pirko authored
In order to prepare for mlx5_esw_offloads_devlink_port_register/unregister() to be used for SFs as well, push the PF/VF specific init/cleanup calls outside. Introduce mlx5_eswitch_load/unload_pf_vf_vport() and call them from there. Use these new helpers for PF/VF loading and make mlx5_eswitch_load/unload_vport() reusable for SFs. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Reviewed-by:
Shay Drory <shayd@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
-