- 15 Dec, 2021 40 commits
-
-
Jacob Keller authored
The ice_devlink_flash_update function performs a few checks and then calls ice_flash_pldm_image. One of these checks is to call ice_check_for_pending_update. This function checks if the device has a pending update, and cancels it if so. This is necessary to allow a new flash update to proceed. We want to refactor the ice code to eliminate ice_devlink_flash_update, moving its checks into ice_flash_pldm_image. To do this, ice_check_for_pending_update will become static, and only called by ice_flash_pldm_image. To make this change easier to review, first just move the function up within the ice_fw_update.c file. While at it, note that the function has a misleading name. Its primary action is to cancel a pending update. Using the verb "check" does not imply this. Rename it to ice_cancel_pending_update. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Jacob Keller authored
We have a region for reading the contents of the NVM flash as a snapshot. This region does not allow reading the Shadow RAM, as it always passes the FLASH_ONLY bit to the low level firmware interface. Add a separate shadow-ram region which will allow snapshot of the current contents of the Shadow RAM. This data is built from the NVM contents but is distinct as the device builds up the Shadow RAM during initialization, so being able to snapshot its contents can be useful when attempting to debug flash related issues. Fix the comment description of the nvm-flash region which incorrectly stated that it filled the shadow-ram region, and add a comment explaining that the nvm-flash region does not actually read the Shadow RAM. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
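For reference, a minimal sketch of how a driver registers such a devlink region. The region name, snapshot helper and size are illustrative; devlink_region_create() and struct devlink_region_ops are the real devlink API, and the snapshot callback is shown in the three-argument form used by older kernels (newer kernels also pass the region ops):

#include <linux/errno.h>
#include <linux/vmalloc.h>
#include <net/devlink.h>

static int sample_sram_snapshot(struct devlink *devlink,
                                struct netlink_ext_ack *extack, u8 **data)
{
        /* Read the Shadow RAM into a vmalloc'ed buffer and hand it to
         * devlink via *data; devlink later frees it with .destructor.
         * The actual read helper is device specific and omitted here. */
        return -EOPNOTSUPP;
}

static const struct devlink_region_ops sample_sram_region_ops = {
        .name = "shadow-ram",
        .destructor = vfree,
        .snapshot = sample_sram_snapshot,
};

/* At probe time, next to the existing nvm-flash region (sram_size assumed):
 *   region = devlink_region_create(devlink, &sample_sram_region_ops, 1, sram_size);
 */

A snapshot of the region can then be requested and read from userspace with the devlink CLI (devlink region new / devlink region dump).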
-
Jakub Kicinski authored
Commit 0976b888 ("ethtool: fix null-ptr-deref on ref tracker") made the write to req_info.dev conditional, but as Eric points out in a different follow-up, the structure is often allocated on the stack and not kzalloc()'d, so it seems safer to always write the dev in case it contains garbage on input. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
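A tiny standalone C illustration of the hazard (names are made up, not the ethtool code): with an on-stack, non-zeroed structure, only an unconditional write guarantees the field is not left as stack garbage.

#include <stdio.h>

struct req_info {
        void *dev;      /* may hold stack garbage if never written */
};

/* Model of the parse helper: always set ->dev, even to NULL. */
static int parse_header(struct req_info *req, void *dev_or_null)
{
        req->dev = dev_or_null;        /* unconditional write */
        return 0;
}

int main(void)
{
        struct req_info req;           /* on stack, deliberately not zeroed */

        parse_header(&req, NULL);
        printf("dev = %p\n", req.dev); /* guaranteed NULL, never garbage */
        return 0;
}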
-
Eric Dumazet authored
Most notable changes are in af_packet; the tipc ones are trivial. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jon Maloy <jmaloy@redhat.com> Cc: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Add support for VxLAN with IPv6 underlay So far, mlxsw only supported VxLAN with IPv4 underlay. This patchset extends mlxsw to also support VxLAN with IPv6 underlay. The main difference is related to the way IPv6 addresses are handled by the device. See patch #1 for a detailed explanation. Patch #1 creates a common hash table to store the mapping from IPv6 addresses to KVDL indexes. This table is useful for both IP-in-IP and VxLAN tunnels with an IPv6 underlay. Patch #2 converts the IP-in-IP code to use the new hash table. Patches #3-#6 are preparations. Patch #7 finally adds support for VxLAN with IPv6 underlay. Patch #8 removes a test case that checked that VxLAN configurations with IPv6 underlay are vetoed by the driver. A follow-up patchset will add forwarding selftests. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Currently, there is a test case to verify that VxLAN with IPv6 underlay is forbidden. Remove this test case as support for VxLAN with IPv6 underlay was added by the previous patch. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Currently, the mlxsw driver supports VxLAN with an IPv4 underlay only. Add support for an IPv6 underlay. The main differences are: * Learning is not supported for IPv6 FDB entries; use static entries and do not allow the 'learning' flag for IPv6 VxLAN. * IPv6 addresses for FDB entries should be saved as part of the KVDL. Use the new API to allocate and release entries for IPv6 addresses. * Spectrum ASICs do not fill the UDP checksum, while in software IPv6 UDP packets with a zero checksum are dropped. Force the relevant flags which allow the VxLAN device to generate UDP packets with a zero checksum and also receive them. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
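A hedged sketch of the kind of flag check this implies for the offload path. The VXLAN_F_* flags come from include/net/vxlan.h, while the function name and the exact rule set are assumptions rather than the driver's code:

#include <linux/types.h>
#include <net/vxlan.h>

static bool sample_vxlan_ipv6_flags_ok(const struct vxlan_config *cfg)
{
        /* Learning is not offloaded for IPv6 FDB entries. */
        if (cfg->flags & VXLAN_F_LEARN)
                return false;

        /* The ASIC emits zero UDP checksums, so the VxLAN device must be
         * allowed to send and to accept zero-checksum IPv6 UDP packets. */
        if (!(cfg->flags & VXLAN_F_UDP_ZERO_CSUM6_TX) ||
            !(cfg->flags & VXLAN_F_UDP_ZERO_CSUM6_RX))
                return false;

        return true;
}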
-
Amit Cohen authored
FDB entries that perform VxLAN encapsulation with an IPv6 underlay hold a reference on a resource. Namely, the KVDL entry where the IPv6 underlay destination IP is stored. When such an FDB entry is deleted, it needs to drop the reference from the corresponding KVDL entry. To that end, maintain a hash table that maps an FDB entry (i.e., {MAC, FID}) to the IPv6 address used by it. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
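As a rough illustration of such a mapping (symbol names here are invented, not the mlxsw ones), a kernel rhashtable keyed by {MAC, FID} could look like this, letting the delete path find the IPv6 address whose KVDL entry must be released:

#include <linux/etherdevice.h>
#include <linux/in6.h>
#include <linux/rhashtable.h>

struct sample_fdb_ipv6_key {
        u8  mac[ETH_ALEN];
        u16 fid_index;
};

struct sample_fdb_ipv6_node {
        struct sample_fdb_ipv6_key key;      /* hash table key: {MAC, FID} */
        struct in6_addr addr;                /* IPv6 underlay address in use */
        struct rhash_head ht_node;
};

static const struct rhashtable_params sample_fdb_ipv6_ht_params = {
        .key_len = sizeof(struct sample_fdb_ipv6_key),
        .key_offset = offsetof(struct sample_fdb_ipv6_node, key),
        .head_offset = offsetof(struct sample_fdb_ipv6_node, ht_node),
};

On FDB entry removal, a lookup by {MAC, FID} yields the address, whose KVDL reference can then be dropped.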
-
Amit Cohen authored
Add a function to fill IPv6 unicast FDB entries. Use the common function for the common fields. Unlike for IPv4 entries, the underlay IP address is not filled in the register payload; instead, a pointer into the KVDL is used. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Currently, the function which adds/removes unicast tunnel FDB entries is shared between IPv4 and IPv6, but for IPv6 it only warns because there is no support for it yet. The code for IPv6 will be more complicated because it needs to allocate/release a KVDL pointer for the underlay IPv6 address. As a preparation for IPv6 underlay support, split the code according to address family. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
As part of 'can_offload' checks, there is a check of VxLAN flags. The supported flags for IPv6 VxLAN will be different from the existing flags because of some limitations. As preparation for IPv6 underlay support, make this check per address family. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Use the common hash table introduced by the previous patch instead of the IP-in-IP specific implementation. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
The device supports forwarding entries such as routes and FDBs that perform tunnel (e.g., VXLAN, IP-in-IP) encapsulation or decapsulation. When the underlay is IPv6, these entries do not encode the 128 bit IPv6 address used for encapsulation / decapsulation. Instead, these entries encode a 24 bit pointer to an array called KVDL where the IPv6 address is stored. Currently, only IP-in-IP with IPv6 underlay is supported, but subsequent patches will add support for VxLAN with IPv6 underlay. To avoid duplicating the logic required to store and retrieve these IPv6 addresses, introduce a hash table that will store the mapping between IPv6 addresses and their KVDL index. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
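As a conceptual model only (plain userspace C, not driver code), the table behaves like a reference-counted map from an IPv6 address to an allocated KVDL index, so multiple forwarding entries using the same underlay address share one KVDL slot:

#include <stdio.h>
#include <string.h>

#define MAP_SIZE 8

struct ipv6_kvdl_entry {
        unsigned char addr[16];   /* IPv6 address (the key) */
        unsigned int  kvdl_index; /* 24-bit pointer into the device's KVDL */
        unsigned int  refcount;   /* number of forwarding entries using it */
        int           in_use;
};

static struct ipv6_kvdl_entry map[MAP_SIZE];
static unsigned int next_kvdl_index = 0x100;   /* fake allocator */

static struct ipv6_kvdl_entry *ipv6_kvdl_get(const unsigned char addr[16])
{
        int i, free_slot = -1;

        for (i = 0; i < MAP_SIZE; i++) {
                if (map[i].in_use && !memcmp(map[i].addr, addr, 16)) {
                        map[i].refcount++;     /* address already programmed */
                        return &map[i];
                }
                if (!map[i].in_use && free_slot < 0)
                        free_slot = i;
        }
        if (free_slot < 0)
                return NULL;

        memcpy(map[free_slot].addr, addr, 16);
        map[free_slot].kvdl_index = next_kvdl_index++;
        map[free_slot].refcount = 1;
        map[free_slot].in_use = 1;
        return &map[free_slot];
}

static void ipv6_kvdl_put(struct ipv6_kvdl_entry *e)
{
        if (--e->refcount == 0)
                e->in_use = 0;   /* last user gone: release the KVDL slot */
}

int main(void)
{
        unsigned char a[16] = { 0x20, 0x01, 0x0d, 0xb8 };
        struct ipv6_kvdl_entry *e1 = ipv6_kvdl_get(a);
        struct ipv6_kvdl_entry *e2 = ipv6_kvdl_get(a);

        printf("shared KVDL index: %u == %u\n", e1->kvdl_index, e2->kvdl_index);
        ipv6_kvdl_put(e2);
        ipv6_kvdl_put(e1);
        return 0;
}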
-
David S. Miller authored (pulled from git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux)
Saeed Mahameed says: ==================== mlx5-updates-2021-12-14 Parsing Infrastructure for TC actions: This series introduces a TC action infrastructure to help parse TC actions in a generic way for both FDB and NIC rules. To make the TC action parsing code easier to maintain, we split it into a per-action parser per TC action type in separate files, instead of one big switch-case loop duplicated between the FDB and NIC parsers as before this patchset. Each TC flow_action->id is represented by a dedicated mlx5e_tc_act handler which has callbacks to check if the specific action is supported for offload and to parse it. We move the handling of each case (TC action) into the specific handler, which is responsible for parsing and determining whether the action is supported. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored (pulled from git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue)
Tony Nguyen says: ==================== 100GbE Intel Wired LAN Driver Updates 2021-12-14 This series contains updates to the ice driver only. Haiyue adds support to query the hardware for supported PTYPEs. Jeff changes PTYPE validation to utilize the capabilities queried from the hardware instead of maintaining a per-DDP support list. Brett refactors promiscuous functions to provide common and clear interfaces to call for configuration. Wojciech modifies DDP package load to simplify determining the final state of the load. Tony removes the use of ice_status from the driver. This involves removing string conversion functions, converting variables and values to standard errors, and cleaning up. He also removes an unused define. Dan Carpenter removes unneeded casts. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joakim Zhang authored
1. During the normal suspend (WoL not enabled) process, the system may hang. The root cause is a TXF interrupt arriving after the clocks are disabled; the system hangs when the interrupt handler accesses registers. To fix this issue, disable all interrupts when the system suspends. 2. The system may also hang with WoL enabled during suspend: after entering stop mode, if a magic pattern arrives after the clocks are disabled, the system is woken up, the interrupt handler is called, and the system hangs when accessing registers. To fix this issue, disable the wakeup irq in .suspend() and enable it in .resume(). Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
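A hedged sketch of the resulting suspend/resume pairing. The struct and the clock steps are placeholders, and the real driver masks its interrupts through its own registers rather than via disable_irq(), but the ordering is the point:

#include <linux/device.h>
#include <linux/interrupt.h>

struct sample_fec_priv {
        int  irq;          /* normal MAC interrupt */
        int  wakeup_irq;   /* interrupt used for WoL wake-up */
        bool wol_enabled;
};

static int sample_fec_suspend(struct device *dev)
{
        struct sample_fec_priv *priv = dev_get_drvdata(dev);

        if (priv->wol_enabled) {
                /* A magic packet arriving after the clocks are gated must
                 * not invoke the handler; re-enabled in resume() below. */
                disable_irq(priv->wakeup_irq);
        } else {
                /* Normal suspend: a late TXF interrupt must not fire once
                 * register access is no longer possible. */
                disable_irq(priv->irq);
        }

        /* ... gate the clocks here ... */
        return 0;
}

static int sample_fec_resume(struct device *dev)
{
        struct sample_fec_priv *priv = dev_get_drvdata(dev);

        /* ... ungate the clocks here ... */

        if (priv->wol_enabled)
                enable_irq(priv->wakeup_irq);
        else
                enable_irq(priv->irq);
        return 0;
}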
-
Clément Léger authored
Add support for getting the MAC address from the device tree using of_get_ethdev_address(). Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Conley Lee authored
In the current implementation of emac_rx(), every arrived packet is processed in the while loop, so no packet is left over from the previous run. The skb_last field and the branch dealing with it are therefore unnecessary. Signed-off-by: Conley Lee <conleylee@foxmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
It seems I missed that most ethnl_parse_header_dev_get() callers declare an on-stack struct ethnl_req_info, and that they simply call dev_put(req_info.dev) when about to return. Add ethnl_parse_header_dev_put() helper to properly untrack reference taken by ethnl_parse_header_dev_get(). Fixes: e4b89540 ("netlink: add net device refcount tracker to struct ethnl_req_info") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
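A minimal sketch of what the new helper amounts to, assuming the ethnl_req_info fields added by the referenced tracker commit (a net_device pointer plus a netdevice_tracker) and the dev_put_track() primitive of that era:

#include <linux/netdevice.h>

/* Stand-in for the real struct in net/ethtool/netlink.h. */
struct sample_ethnl_req_info {
        struct net_device *dev;
        netdevice_tracker  dev_tracker;
        /* ... other request header fields elided ... */
};

/* Pairs with ethnl_parse_header_dev_get(): drop the tracked reference
 * instead of a bare dev_put(), keeping the ref tracker balanced. */
static inline void sample_ethnl_parse_header_dev_put(struct sample_ethnl_req_info *req_info)
{
        dev_put_track(req_info->dev, &req_info->dev_tracker);
}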
-
Roi Dayan authored
Move goto action checks from parse nic/fdb funcs into the tc action infra goto post parse op. While moving this part also use NL_SET_ERR_MSG_MOD() instead of NL_SET_ERR_MSG(). Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Move vlan prio tag rewrite handling into tc action infra vlan post parse op. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
The post_parse() op should be called after the parse op has been called for all actions, since an action's state may depend on other actions. In the new op, an action can fail the parse if its state is no longer valid. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
There is no reason to postpone the kmalloc until after parsing all the other actions; a failure could still occur later, before offloading the rule. So allocate the memory while parsing. The memory is released in mlx5e_flow_put(), which is also called on the error path. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Introduce a common function to implement the generic parsing loop. The same function can be used for parsing NIC and FDB (Switchdev mode) flows. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
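A minimal sketch of such a shared loop (symbol names are invented; the flow_offload types and flow_action_for_each() are the real kernel API). Both parse_tc_nic_actions() and parse_tc_fdb_actions() could then call it with their own parse state:

#include <linux/errno.h>
#include <net/flow_offload.h>

struct sample_parse_state;   /* per-flow parse context, assumed */

struct sample_tc_act {
        bool (*can_offload)(struct sample_parse_state *state,
                            const struct flow_action_entry *act, int act_index);
        int (*parse_action)(struct sample_parse_state *state,
                            const struct flow_action_entry *act);
};

/* Lookup of the handler registered for a given flow_action id (assumed). */
const struct sample_tc_act *sample_tc_act_get(enum flow_action_id id);

static int sample_parse_tc_actions(struct sample_parse_state *state,
                                   struct flow_action *flow_action)
{
        const struct sample_tc_act *handler;
        struct flow_action_entry *act;
        int err, i;

        flow_action_for_each(i, act, flow_action) {
                handler = sample_tc_act_get(act->id);
                if (!handler || !handler->can_offload(state, act, i))
                        return -EOPNOTSUPP;     /* unsupported for offload */

                err = handler->parse_action(state, act);
                if (err)
                        return err;
        }
        return 0;
}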
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add parsing support by implementing struct mlx5e_tc_act for this action. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Add an infrastructure to help parse tc actions in a generic way. Supporting an action parser means implementing struct mlx5e_tc_act for that action. The infrastructure makes it possible to be generic when parsing tc actions, i.e. in parse_tc_nic_actions() and parse_tc_fdb_actions(). To parse tc actions, a user allocates a parse_state instance and passes it when iterating over the tc action parsers. If a parser doesn't exist, the user can treat the action as unsupported. To add an action parser, a user needs to implement two callbacks: the can_offload() callback to quickly check if an action can be offloaded, and the parse_action() callback to do the actual parsing and prepare for offload. Add implementations for the drop, trap, mark and accept action parsers with this commit to act as examples and to exercise the new infrastructure for those actions. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
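A hedged sketch of what implementing one such handler looks like. The parse-state type and exact callback signatures are assumptions; FLOW_ACTION_DROP is the real flow_offload action id:

#include <linux/types.h>
#include <net/flow_offload.h>

struct sample_tc_act_parse_state;   /* per-flow parse context, assumed */

struct sample_tc_act {
        bool (*can_offload)(struct sample_tc_act_parse_state *parse_state,
                            const struct flow_action_entry *act, int act_index);
        int (*parse_action)(struct sample_tc_act_parse_state *parse_state,
                            const struct flow_action_entry *act);
};

static bool sample_tc_act_can_offload_drop(struct sample_tc_act_parse_state *parse_state,
                                           const struct flow_action_entry *act,
                                           int act_index)
{
        return true;    /* drop can always be offloaded */
}

static int sample_tc_act_parse_drop(struct sample_tc_act_parse_state *parse_state,
                                    const struct flow_action_entry *act)
{
        /* Record the drop in the flow attributes, e.g. by setting a drop
         * action bit in the parse state (details assumed). */
        return 0;
}

/* The handler would live in a table indexed by flow_action id,
 * e.g. [FLOW_ACTION_DROP] = &sample_tc_act_drop, consulted by the
 * generic parsing loop. */
static const struct sample_tc_act sample_tc_act_drop = {
        .can_offload = sample_tc_act_can_offload_drop,
        .parse_action = sample_tc_act_parse_drop,
};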
-
Jakub Kicinski authored
Kurt Kanzenbach says: ==================== net: dsa: hellcreek: Fix handling of MGMT protocols This series fixes some minor issues with regard to management protocols such as PTP and STP in the hellcreek DSA driver. Configure static FDB entries for these protocols. The end result is:
|root@tsn:~# mv88e6xxx_dump --atu
|Using device <platform/ff240000.switch>
|ATU:
|FID MAC 0123 Age OBT Pass Static Reprio Prio
| 0 01:1b:19:00:00:00 1100 1 X X 6
| 1 01:00:5e:00:01:81 1100 1 X X 6
| 2 33:33:00:00:01:81 1100 1 X X 6
| 3 01:80:c2:00:00:0e 1100 1 X X X 6
| 4 01:00:5e:00:00:6b 1100 1 X X X 6
| 5 33:33:00:00:00:6b 1100 1 X X X 6
| 6 01:80:c2:00:00:00 1100 1 X X X 6
Previous version: * https://lore.kernel.org/r/20211213101810.121553-1-kurt@linutronix.de/ Changes since v1: * Target net-next, as this never worked correctly and is not critical * Add STP and PTP over UDP rules * Use pass_blocked for PDelay messages only (Richard Cochran) ==================== Link: https://lore.kernel.org/r/20211214134508.57806-1-kurt@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kurt Kanzenbach authored
The switch supports PTP for UDP transport too. Therefore, add the missing static FDB entries to ensure correct forwarding of these packets. Fixes: ddd56dfe ("net: dsa: hellcreek: Add PTP clock support") Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kurt Kanzenbach authored
Allow PTP peer delay measurements on ports blocked by STP. In case of topology changes, the PTP stack can then directly start with the correct delays. Fixes: ddd56dfe ("net: dsa: hellcreek: Add PTP clock support") Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kurt Kanzenbach authored
Treat STP as management traffic. STP traffic is designated for the CPU port only. In addition, STP traffic has to pass blocked ports. Fixes: e4b27ebc ("net: dsa: Add DSA driver for Hirschmann Hellcreek switches") Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
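Conceptually (this is not the hellcreek register layout), the static entry installed for the STP destination address looks like the following, where pass_blocked lets the frame traverse ports that STP has blocked:

struct sample_static_fdb_entry {
        unsigned char mac[6];
        unsigned int  port_mask;     /* bitmask of destination ports */
        int           is_static;
        int           pass_blocked;  /* forward even on STP-blocked ports */
};

/* 01:80:C2:00:00:00 is the STP/bridge group address; port 0 is assumed
 * to be the CPU port here. */
static const struct sample_static_fdb_entry sample_stp_fdb_entry = {
        .mac          = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x00 },
        .port_mask    = 1u << 0,
        .is_static    = 1,
        .pass_blocked = 1,
};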
-
Kurt Kanzenbach authored
The insertion of static FDB entries ignores the pass_blocked bit. That bit is evaluated with regard to STP. Add the missing functionality. Fixes: e4b27ebc ("net: dsa: Add DSA driver for Hirschmann Hellcreek switches") Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-