- 15 Jan, 2021 22 commits
-
-
Tobias Waldekranz authored
Packets ingressing on a LAG that egress on the CPU port, and which are not classified as management, will have a FORWARD tag that does not contain the normal source device/port tuple. Instead the trunk bit will be set, and the port field holds the LAG id. Since the exact source port information is not available in the tag, frames are injected directly on the LAG interface and thus never pass through any DSA port interface on ingress. Management frames (TO_CPU) are not affected and will pass through the DSA port interface as usual. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Support offloading of LAGs to hardware. LAGs may be attached to a bridge in which case VLANs, multicast groups, etc. are also offloaded as usual. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Monitor the following events and notify the driver when:

- A DSA port joins/leaves a LAG.
- A LAG, made up of DSA ports, joins/leaves a bridge.
- A DSA port in a LAG is enabled/disabled (enabled meaning "distributing" in 802.3ad LACP terms).

When a LAG joins a bridge, the DSA subsystem will treat that as each individual port joining the bridge. The driver may look at the port's LAG device pointer to see if it is associated with any LAG, if that is required. This is analogous to how switchdev events are replicated out to all lower devices when reaching e.g. a LAG.

Drivers can optionally request that DSA maintain a linear mapping from a LAG ID to the corresponding netdev by setting ds->num_lag_ids to the desired size.

In the event that the hardware is not capable of offloading a particular LAG for any reason (the typical case being use of exotic modes like broadcast), DSA will take a hands-off approach, allowing the LAG to be formed as a pure software construct. This is reported back through the extended ACK, but is otherwise transparent to the user. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Tested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
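For illustration only, a minimal plain-C sketch of the kind of fixed-size, refcounted LAG ID table the changelog describes; the struct and helper names here are hypothetical and are not the DSA API:

/* Each slot maps a hardware LAG ID to an opaque LAG handle (in DSA this
 * would be the bond/team netdev) and counts how many ports use it.
 */
#include <stddef.h>

struct lag_slot {
	void *lag;		/* NULL means the ID is free */
	int refcount;
};

/* Return the ID already used by @lag, or claim a free one; -1 if the
 * table (sized by the driver, cf. ds->num_lag_ids) is exhausted.
 */
static int lag_id_get(struct lag_slot *table, size_t num_ids, void *lag)
{
	size_t i, free_slot = num_ids;

	for (i = 0; i < num_ids; i++) {
		if (table[i].lag == lag) {
			table[i].refcount++;
			return (int)i;
		}
		if (!table[i].lag && free_slot == num_ids)
			free_slot = i;
	}
	if (free_slot == num_ids)
		return -1;	/* no hardware LAG ID left */

	table[free_slot].lag = lag;
	table[free_slot].refcount = 1;
	return (int)free_slot;
}

static void lag_id_put(struct lag_slot *table, int id)
{
	if (--table[id].refcount == 0)
		table[id].lag = NULL;
}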
-
Tobias Waldekranz authored
In a situation where a standalone port is indirectly attached to a bridge (e.g. via a LAG) which is not offloaded, do not offload any port attributes either. The port should behave as a standard NIC. Previously, on mv88e6xxx, this meant that in the following setup:

      br0
       |
     team0
     /    \
  swp0    swp1

If VLAN filtering was enabled on br0, swp0's and swp1's QMode was set to "secure". This caused all untagged packets to be dropped, as their default VID (0) was not loaded into the VTU. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
When creating a static bond (e.g. balance-xor), all ports will always be enabled. This is set, and the corresponding notification is sent out, before the port is linked to the bond upper. In the offloaded case, this ordering is hard to deal with. The lower will first see a notification that it can not associate with any bond. Then the bond is joined. After that point no more notifications are sent, so all ports remain disabled. This change simply sends an extra notification once the port has been linked to the upper to synchronize the initial state. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Acked-by: Jay Vosburgh <jay.vosburgh@canonical.com> Tested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'mlxsw_pci_queue_init()' and 'mlxsw_pci_fw_area_init()', GFP_KERNEL can be used because both functions are already using this flag and no lock is acquired.

When memory is allocated in 'mlxsw_pci_mbox_alloc()', GFP_KERNEL can be used because it is only called from the probe function and no lock is acquired in between. The call chain is:
    mlxsw_pci_probe()
      --> mlxsw_pci_cmd_init()
        --> mlxsw_pci_mbox_alloc()

While at it, also replace the 'dma_set_mask()/dma_set_coherent_mask()' sequence with a less verbose 'dma_set_mask_and_coherent()' call.

@@
@@
-  PCI_DMA_BIDIRECTIONAL
+  DMA_BIDIRECTIONAL

@@
@@
-  PCI_DMA_TODEVICE
+  DMA_TO_DEVICE

@@
@@
-  PCI_DMA_FROMDEVICE
+  DMA_FROM_DEVICE

@@
@@
-  PCI_DMA_NONE
+  DMA_NONE

@@
expression e1, e2, e3;
@@
-  pci_alloc_consistent(e1, e2, e3)
+  dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-  pci_zalloc_consistent(e1, e2, e3)
+  dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-  pci_free_consistent(e1, e2, e3, e4)
+  dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_map_single(e1, e2, e3, e4)
+  dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_unmap_single(e1, e2, e3, e4)
+  dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-  pci_map_page(e1, e2, e3, e4, e5)
+  dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-  pci_unmap_page(e1, e2, e3, e4)
+  dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_map_sg(e1, e2, e3, e4)
+  dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_unmap_sg(e1, e2, e3, e4)
+  dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+  dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_dma_sync_single_for_device(e1, e2, e3, e4)
+  dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+  dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-  pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+  dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-  pci_dma_mapping_error(e1, e2)
+  dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-  pci_set_dma_mask(e1, e2)
+  dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-  pci_set_consistent_dma_mask(e1, e2)
+  dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/20210114084757.490540-1-christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
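For context, a minimal sketch of what such a conversion looks like at a call site; this is illustrative driver code, not taken from mlxsw, and the function and buffer names are made up:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* Legacy PCI wrapper (being removed):
 *   dma = pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);
 *
 * Generic DMA API equivalent produced by the coccinelle rules above:
 */
static int example_map_rx_buf(struct pci_dev *pdev, void *buf, size_t len,
			      dma_addr_t *dma)
{
	*dma = dma_map_single(&pdev->dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(&pdev->dev, *dma))
		return -ENOMEM;
	return 0;
}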
-
Vladimir Oltean authored
prestera_bridge_port_vlan_add should have been called with vlan->vid, however this was masked by the presence of the local vid variable and I did not notice the build warning. Reported-by: kernel test robot <lkp@intel.com> Fixes: b7a9e0da ("net: switchdev: remove vid_begin -> vid_end range from VLAN objects") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Taras Chornyi <tchornyi@marvell.com> Link: https://lore.kernel.org/r/20210114083556.2274440-1-olteanv@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eelco Chaudron authored
As requested by upstream OVS, added some error messages in the validate_and_copy_dec_ttl function. Includes a small cleanup, which removes an unnecessary parameter from the dec_ttl_exception_handler() function. Reported-by: Flavio Leitner <fbl@sysclose.org> Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Acked-by: Flavio Leitner <fbl@sysclose.org> Link: https://lore.kernel.org/r/161054576573.26637.18396634650212670580.stgit@ebuild Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
David Ahern says:

====================
selftests: Updates to allow single instance of nettest for client and server

Update nettest to handle namespace change internally to allow a single instance to run both client and server modes. Device validation needs to be moved after the namespace change, and a few run-time options need to be split to allow separate values for client and server.

v4
- really fix the memory leak with stdout/stderr buffers

v3
- send proper status in do_server for UDP sockets
- fix memory leak with stdout/stderr buffers
- new patch with separate option for address binding
- new patch to remove unnecessary newline

v2
- fix checkpatch warnings
====================

Link: https://lore.kernel.org/r/20210114030949.54425-1-dsahern@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
Add separate option to nettest to specify local address binding in client mode. Signed-off-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
Signed-off-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
Add new options to nettest to specify device binding and expected device binding for server mode, and update fcnal-test script. This is needed to allow a single instance of nettest running both server and client modes to use different device bindings. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
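As a hedged sketch of the underlying mechanism (generic socket code, not nettest's option parsing), device binding for a test socket boils down to SO_BINDTODEVICE:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Bind a socket to a specific network device, e.g. "eth0".
 * Typically requires CAP_NET_RAW / root.
 */
static int bind_to_device(int sd, const char *ifname)
{
	return setsockopt(sd, SOL_SOCKET, SO_BINDTODEVICE,
			  ifname, strlen(ifname) + 1);
}

int main(void)
{
	int sd = socket(AF_INET, SOCK_STREAM, 0);

	if (sd < 0 || bind_to_device(sd, "lo") < 0) {
		perror("bind_to_device");
		return 1;
	}
	printf("socket bound to device lo\n");
	close(sd);
	return 0;
}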
-
David Ahern authored
Add new option to nettest to specify MD5 password to use for client side. Update fcnal-test script. This is needed for a single instance running both server and client modes to test password mismatches. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
nettest started with -r as the remote address for MD5 passwords. The -m argument was added to use prefixes with a length when that feature was added to the kernel. Since -r is used to specify remote address for client mode, change nettest to only use -m for MD5 passwords and update fcnal-test script. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
When a single instance of nettest is used for client and server make sure address validation is only done for client mode. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
A few logging lines are missing the newline, or need it moved up for cleaner logging. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
When a single instance of nettest is doing both client and server modes, stdout and stderr messages can get interlaced and become unreadable. Allocate a new set of buffers for the child process handling server mode. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
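A minimal sketch of the idea, assuming a fork-based client/server split like the one described; the buffer sizes and helper name are illustrative, not nettest's actual code:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Give the child its own stdio buffers so client (parent) and server (child)
 * output is not interleaved mid-line when both write to the same terminal.
 */
static void use_private_stdio_buffers(void)
{
	static char outbuf[4096], errbuf[4096];

	setvbuf(stdout, outbuf, _IOLBF, sizeof(outbuf));	/* line buffered */
	setvbuf(stderr, errbuf, _IOLBF, sizeof(errbuf));
}

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {			/* child: server role */
		use_private_stdio_buffers();
		printf("server: ready\n");
		_exit(0);
	}
	printf("client: started server pid %d\n", pid);
	return 0;
}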
-
David Ahern authored
Add option to nettest to run both client and server within a single instance. Client forks a child process to run the server code. A pipe is used for the server to tell the client it has initialized and is ready or had an error. This avoids the unnecessary sleeps otherwise needed to handle the race when the two commands are launched separately. Signed-off-by: Seth David Schoen <schoen@loyalty.org> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
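A self-contained sketch of such a pipe handshake, under the assumption that the server simply writes one status byte once its socket is set up; the names are illustrative, not nettest's:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fds[2];
	char status;

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {		/* child: server role */
		close(fds[0]);		/* keep the write end only */
		/* ... bind()/listen() would happen here ... */
		status = 0;		/* 0 = ready, nonzero = error */
		if (write(fds[1], &status, 1) != 1)
			_exit(1);
		close(fds[1]);
		_exit(0);
	}

	/* parent: client role, blocks until the server reports its state */
	close(fds[1]);
	if (read(fds[0], &status, 1) != 1 || status != 0) {
		fprintf(stderr, "server failed to start\n");
		return 1;
	}
	close(fds[0]);
	printf("server ready, starting client\n");
	return 0;
}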
-
David Ahern authored
Add options to specify server and client network namespace to use before running respective functions. Signed-off-by: Seth David Schoen <schoen@loyalty.org> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
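A minimal sketch of switching to a named network namespace before running either role, assuming iproute2-style namespaces under /var/run/netns; the path handling and error reporting are illustrative:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Enter a network namespace created with e.g. "ip netns add <name>". */
static int switch_netns(const char *name)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "/var/run/netns/%s", name);
	fd = open(path, O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		perror("open netns");
		return -1;
	}
	if (setns(fd, CLONE_NEWNET) < 0) {	/* requires CAP_SYS_ADMIN */
		perror("setns");
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}

int main(int argc, char *argv[])
{
	if (argc < 2 || switch_netns(argv[1]))
		return 1;
	printf("now running in netns %s\n", argv[1]);
	return 0;
}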
-
David Ahern authored
IPv6 addresses can have a device name to declare a scope (e.g., fe80::5054:ff:fe12:3456%eth0). The next patch adds support to switch network namespace before running client or server code (or both), so move the address validation to the server and client functions. IPv4 multicast groups do not have the device scope in the address specification, so they can be validated inline with option parsing. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
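A small sketch of validating an address like fe80::5054:ff:fe12:3456%eth0, where the %<device> suffix has to be resolved after any namespace switch; the helper name is made up:

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

/* Split off an optional "%dev" scope suffix, resolve it to an ifindex in
 * the current network namespace, and validate the address itself.
 */
static int parse_scoped_ipv6(const char *str, struct sockaddr_in6 *sin6)
{
	char buf[INET6_ADDRSTRLEN + IFNAMSIZ + 2];
	char *pct;

	if (strlen(str) >= sizeof(buf))
		return -1;
	strcpy(buf, str);

	memset(sin6, 0, sizeof(*sin6));
	sin6->sin6_family = AF_INET6;

	pct = strchr(buf, '%');
	if (pct) {
		*pct = '\0';
		sin6->sin6_scope_id = if_nametoindex(pct + 1);
		if (!sin6->sin6_scope_id)
			return -1;	/* unknown device in this namespace */
	}
	return inet_pton(AF_INET6, buf, &sin6->sin6_addr) == 1 ? 0 : -1;
}

int main(void)
{
	struct sockaddr_in6 sin6;

	if (parse_scoped_ipv6("fe80::5054:ff:fe12:3456%lo", &sin6))
		fprintf(stderr, "invalid address or device\n");
	else
		printf("scope id %u\n", sin6.sin6_scope_id);
	return 0;
}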
-
David Ahern authored
convert_addr needs to be invoked in a different location. Move the code up to avoid a forward declaration. Code move only. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
A later patch adds support for switching network namespaces before running the client, the server, or both. Device validations need to be done after the network namespace switch, so add a helper to do it and invoke it from the server and client code instead of inline with argument parsing. Move related argument checks as well. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 14 Jan, 2021 9 commits
-
-
Jakub Kicinski authored
Bjarni Jonasson says:

====================
Add 100 base-x mode

Adding support for 100 base-x in phylink. The Sparx5 switch supports a 100 base-x PCS (IEEE 802.3 Clause 24), 4b5b encoded. These patches add phylink support for that mode.

Tested in Sparx5, using sfp modules:
 Axcen 100fx AXFE-1314-0521 (base-fx)
 Axcen 100lx AXFE-1314-0551 (base-lx)
 HP SFP 100FX J9054C (bx-10)
 Excom SFP-SX-M1002 (base-lx)

v1 -> v2:
 Added description to Documentation/networking/phy.rst
 Moved PHY_INTERFACE_MODE_100BASEX to above 1000BASEX
 Patching against net-next
====================

Link: https://lore.kernel.org/r/20210113115626.17381-1-bjarni.jonasson@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Bjarni Jonasson authored
Add support for 100Base-FX, 100Base-LX, 100Base-PX and 100Base-BX10 modules This is needed for Sparx-5 switch. Signed-off-by: Bjarni Jonasson <bjarni.jonasson@microchip.com> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Bjarni Jonasson authored
Sparx-5 supports this mode and it is missing in the PHY core. Signed-off-by: Bjarni Jonasson <bjarni.jonasson@microchip.com> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King authored
This bit is enabled by default and advertises extended next page (XNP) support. XNP is only needed for 10GBase-T and MultiGig support, neither of which is supported here. Additionally, Cisco MultiGig switches will read this bit and attempt 10Gb negotiation even though Next Page support is disabled. This will cause timeouts when the interface is forced to 100Mbps and auto-negotiation will fail. The interfaces are only 1000Base-T, and supporting auto-negotiation for this only requires the Next Page bit to be set. Taken from: https://github.com/SolidRun/linux-stable/commit/7406c5244b7ea6bc17a2afe8568277a8c4b126a9 and adapted to mainline kernels by rmk. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/E1kzSdb-000417-FJ@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yuchung Cheng authored
Move skb_set_hash_from_sk() so that it is called after, instead of before, tcp_event_data_sent(). This enables congestion control modules to change the socket hash right before restarting from idle (via the TX_START congestion event). Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Link: https://lore.kernel.org/r/20210111230552.2704579-1-ycheng@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
Check if the interface is indeed connected to a MAC before trying to close the DPMAC object representing it. Without this check we end up working with a NULL pointer. Fixes: d87e6063 ("dpaa2-mac: export MAC counters even when in TYPE_FIXED") Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/20210111171802.1826324-1-ciorneiioana@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ionut-robert Aron authored
Declare Rx VLAN filtering as supported and user-changeable only when there are VLAN filtering entries available on the DPNI object. Even then, rx-vlan-filtering is by default disabled. Also, populate the .ndo_vlan_rx_add_vid() and .ndo_vlan_rx_kill_vid() callbacks for adding and removing a specific VLAN from the VLAN table. Signed-off-by: Ionut-robert Aron <ionut-robert.aron@nxp.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/20210111170725.1818218-1-ciorneiioana@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
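For reference, a hedged sketch of the general shape these callbacks take in a driver; only the net_device_ops callback signatures are the kernel's, while the "example_" helper and its body are invented, not the dpaa2-eth implementation:

#include <linux/netdevice.h>

/* Illustrative VLAN-table helper standing in for the driver's own. */
static int example_hw_vlan_set(struct net_device *dev, u16 vid, bool add)
{
	/* program the device's VLAN filter table here */
	return 0;
}

static int example_vlan_rx_add_vid(struct net_device *dev,
				   __be16 proto, u16 vid)
{
	return example_hw_vlan_set(dev, vid, true);
}

static int example_vlan_rx_kill_vid(struct net_device *dev,
				    __be16 proto, u16 vid)
{
	return example_hw_vlan_set(dev, vid, false);
}

static const struct net_device_ops example_netdev_ops = {
	.ndo_vlan_rx_add_vid	= example_vlan_rx_add_vid,
	.ndo_vlan_rx_kill_vid	= example_vlan_rx_kill_vid,
	/* ... the rest of the driver's ops ... */
};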
-
Naveen Mamindlapalli authored
This patch adds support to install flow rules using the ipv4 proto or ipv6 next header field to distinguish between tcp/udp/sctp/esp/ah flows when the user doesn't specify other match criteria related to the flow, such as tcp/udp/sctp source and destination ports or the ah/esp spi value. This is achieved by matching the layer type extracted by the NPC HW into the search key. Modified the driver to use Ethertype as the match criteria when the user doesn't specify source and destination IP addresses. The esp/ah flow rule matching using the security parameter index (spi) is not supported as of now, since the field is not extracted into the MCAM search key. Modified the npc_check_field function to return bool. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Link: https://lore.kernel.org/r/20210111112537.3277-1-naveenm@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Menglong Dong authored
Replace the check for ETH_P_8021Q and ETH_P_8021AD in __netif_receive_skb_core with eth_type_vlan. Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/20210111104221.3451-1-dong.menglong@zte.com.cnSigned-off-by: Jakub Kicinski <kuba@kernel.org>
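The helper centralizes the two-ethertype comparison; a stand-alone C sketch of the same check (an illustration, not the kernel's eth_type_vlan() source):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* ntohs()/htons() */

#define ETH_P_8021Q	0x8100	/* 802.1Q C-VLAN */
#define ETH_P_8021AD	0x88A8	/* 802.1ad S-VLAN */

/* Equivalent of open-coding "proto == 8021Q || proto == 8021AD" at every
 * call site: take the on-wire (big-endian) ethertype and test both tags.
 */
static bool is_vlan_ethertype(uint16_t be_proto)
{
	uint16_t proto = ntohs(be_proto);

	return proto == ETH_P_8021Q || proto == ETH_P_8021AD;
}

int main(void)
{
	printf("%d %d\n", is_vlan_ethertype(htons(0x8100)),
	       is_vlan_ethertype(htons(0x0800)));	/* prints: 1 0 */
	return 0;
}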
-
- 13 Jan, 2021 9 commits
-
-
Jakub Kicinski authored
Saeed Mahameed says:

====================
mlx5 updates 2021-01-07

Misc updates series for mlx5 driver:

1) From Eli and Jianbo, E-Switch cleanups and usage of new FW capability for mpls over udp
2) Paul Blakey, Adds support for mirroring with Connection tracking by splitting rules to pre and post Connection tracking to perform the mirroring.
3) Roi Dayan, cleanups for connection tracking
4) From Tariq, Cleanups and improvements to IPSec
====================

Link: https://lore.kernel.org/r/20210112070534.136841-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
MLX5_IPSEC_DEV() is always defined, no need to protect it under config flag CONFIG_MLX5_EN_IPSEC, especially in slow path. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Huy Nguyen <huyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
Feature check functions are in the TX fast-path of all SKBs, not only IPsec traffic. Move the IPsec feature check function into a header and turn it inline. Use a stub and clean the config flag condition in Eth main driver file. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Huy Nguyen <huyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
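The general pattern being applied here is to keep the real check inline under the Kconfig symbol and supply a trivial stub in the #else branch, so fast-path callers need no #ifdefs. A generic sketch of that pattern (all names below are invented, not mlx5's):

#include <stdbool.h>

/* Illustrative private state; stands in for a driver's priv struct. */
struct example_priv {
	bool feature_enabled;
};

#ifdef CONFIG_EXAMPLE_FEATURE

/* Real check: cheap enough to stay inline in the TX fast path. */
static inline bool example_feature_offload_ok(const struct example_priv *priv)
{
	return priv->feature_enabled;
}

#else /* !CONFIG_EXAMPLE_FEATURE */

/* Stub: the compiler folds callers' branches away, and no #ifdef is
 * needed at the call sites in the hot path.
 */
static inline bool example_feature_offload_ok(const struct example_priv *priv)
{
	return false;
}

#endif /* CONFIG_EXAMPLE_FEATURE */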
-
Tariq Toukan authored
Simple code improvement, move default return operation under the #else block. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Huy Nguyen <huyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tariq Toukan authored
All IPsec logic should be wrapped under the compile flag, including its checksum logic. Introduce an inline function in ipsec datapath header, with a corresponding stub. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Huy Nguyen <huyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Roi Dayan authored
The zone member is of type u16 so there is no reason to apply the zone mask on it. This is also matching the call to set a match in other places which don't need and don't apply the mask. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Roi Dayan authored
miss_rule and prio_s args are not being referenced before assigned so there is no need to init them. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Roi Dayan authored
No need to pass zero spec to mlx5_add_flow_rules() as the function can handle null spec. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jianbo Liu authored
net/mlx5e: E-Switch, Offload all chain 0 priorities when modify header and forward action is not supported

Miss path handling of tc multi chain filters (i.e. filters that are defined on chain > 0) requires the hardware to communicate to the driver the last chain that was processed. This is possible only when the hardware is capable of performing the combination of modify header and forward to table actions. Currently, if the hardware is missing this capability then the driver only offloads rules that are defined on tc chain 0 prio 1. However, this restriction can be relaxed because packets that miss from chain 0 are processed through all the priorities by tc software. Allow the offload of all the supported priorities for chain 0 even when the hardware is not capable of performing modify header and goto table actions. Signed-off-by: Jianbo Liu <jianbol@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-