Commit ca7609ff authored by Petr Machata, committed by David S. Miller

mlxsw: spectrum: Wipe xstats.backlog of down ports

The per-port counter cache used by Qdiscs is updated periodically, unless the
port is down. The fact that the cache is not updated for down ports is no
problem for most counters, which are relative in nature. However, backlog is
absolute in nature, and if there is a non-zero value in the cache around the
time that the port goes down, that value just stays there. The stale value then
leaks to offloaded Qdiscs, which report a non-zero backlog even though there
(obviously) is no traffic.

The HW does not keep backlog of a downed port, so do likewise: as the port
goes down, wipe the backlog value from xstats.

Fixes: 075ab8ad ("mlxsw: spectrum: Collect tclass related stats periodically")
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent fef6d670
@@ -1209,6 +1209,9 @@ static void update_stats_cache(struct work_struct *work)
 			     periodic_hw_stats.update_dw.work);
 
 	if (!netif_carrier_ok(mlxsw_sp_port->dev))
+		/* Note: mlxsw_sp_port_down_wipe_counters() clears the cache as
+		 * necessary when port goes down.
+		 */
 		goto out;
 
 	mlxsw_sp_port_get_hw_stats(mlxsw_sp_port->dev,
@@ -4318,6 +4321,15 @@ static int mlxsw_sp_port_unsplit(struct mlxsw_core *mlxsw_core, u8 local_port,
 	return 0;
 }
 
+static void
+mlxsw_sp_port_down_wipe_counters(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+	int i;
+
+	for (i = 0; i < TC_MAX_QUEUE; i++)
+		mlxsw_sp_port->periodic_hw_stats.xstats.backlog[i] = 0;
+}
+
 static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
 				     char *pude_pl, void *priv)
 {
@@ -4339,6 +4351,7 @@ static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
 	} else {
 		netdev_info(mlxsw_sp_port->dev, "link down\n");
 		netif_carrier_off(mlxsw_sp_port->dev);
+		mlxsw_sp_port_down_wipe_counters(mlxsw_sp_port);
 	}
 }
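
Below is a minimal, self-contained C sketch (not part of the patch) that models the mechanism described in the commit message: a periodically refreshed counter cache whose update is skipped while the carrier is down, so an absolute backlog value goes stale until it is explicitly wiped on link-down. The names struct port_stats, cache_update() and port_down_wipe_counters() are hypothetical stand-ins for illustration only; the real driver keeps the values in mlxsw_sp_port->periodic_hw_stats.xstats.backlog[] and refreshes them from update_stats_cache().

/*
 * Illustrative sketch of the stale-backlog problem and its fix.
 * All identifiers here are hypothetical; only the behavior mirrors
 * what the commit message describes.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define TC_MAX_QUEUE 16

struct port_stats {
	bool carrier_ok;			/* analogue of netif_carrier_ok() */
	unsigned long hw_backlog[TC_MAX_QUEUE];	/* what the HW would report */
	unsigned long cache_backlog[TC_MAX_QUEUE]; /* periodically updated cache */
};

/* Periodic refresh: skipped while the port is down, like update_stats_cache(). */
static void cache_update(struct port_stats *ps)
{
	if (!ps->carrier_ok)
		return;		/* down port: cache is left untouched */
	memcpy(ps->cache_backlog, ps->hw_backlog, sizeof(ps->cache_backlog));
}

/* The fix: wipe the absolute backlog values when the port goes down. */
static void port_down_wipe_counters(struct port_stats *ps)
{
	memset(ps->cache_backlog, 0, sizeof(ps->cache_backlog));
}

int main(void)
{
	struct port_stats ps = { .carrier_ok = true };

	ps.hw_backlog[0] = 1000;	/* traffic queued on tclass 0 */
	cache_update(&ps);		/* cache now holds 1000 */

	ps.carrier_ok = false;		/* link goes down; HW forgets its backlog */
	ps.hw_backlog[0] = 0;

	cache_update(&ps);		/* skipped: stale 1000 stays in the cache */
	printf("without wipe: backlog[0] = %lu\n", ps.cache_backlog[0]);

	port_down_wipe_counters(&ps);	/* what the patch adds on link-down */
	printf("with wipe:    backlog[0] = %lu\n", ps.cache_backlog[0]);
	return 0;
}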