Commit aa866ee4 authored by Jakub Kicinski

Merge tag 'mlx5-fixes-2023-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5 fixes 2023-05-24

This series includes bug fixes for the mlx5 driver.

* tag 'mlx5-fixes-2023-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  Documentation: net/mlx5: Wrap notes in admonition blocks
  Documentation: net/mlx5: Add blank line separator before numbered lists
  Documentation: net/mlx5: Use bullet and definition lists for vnic counters description
  Documentation: net/mlx5: Wrap vnic reporter devlink commands in code blocks
  net/mlx5: Fix check for allocation failure in comp_irqs_request_pci()
  net/mlx5: DR, Add missing mutex init/destroy in pattern manager
  net/mlx5e: Move Ethernet driver debugfs to profile init callback
  net/mlx5e: Don't attach netdev profile while handling internal error
  net/mlx5: Fix post parse infra to only parse every action once
  net/mlx5e: Use query_special_contexts cmd only once per mdev
  net/mlx5: fw_tracer, Fix event handling
  net/mlx5: SF, Drain health before removing device
  net/mlx5: Drain health before unregistering devlink
  net/mlx5e: Do not update SBCM when prio2buffer command is invalid
  net/mlx5e: Consider internal buffers size in port buffer calculations
  net/mlx5e: Prevent encap offload when neigh update is running
  net/mlx5e: Extract remaining tunnel encap code to dedicated file
====================

Link: https://lore.kernel.org/r/20230525034847.99268-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 822b5a1c bb72b94c
@@ -40,6 +40,7 @@ flow_steering_mode: Device flow steering mode
---------------------------------------------
The flow steering mode parameter controls the flow steering mode of the driver.
Two modes are supported:
1. 'dmfs' - Device managed flow steering.
2. 'smfs' - Software/Driver managed flow steering.
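
The mode is exposed as a devlink runtime parameter; a hedged example in the document's own user-command style (the PCI address is illustrative)::

    $ devlink dev param set pci/0000:06:00.0 name flow_steering_mode value "smfs" cmode runtime
    $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode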
@@ -99,6 +100,7 @@ between representors and stacked devices.
By default metadata is enabled on the supported devices in E-switch.
Metadata is applicable only for E-switch in switchdev mode and
users may disable it when NONE of the below use cases will be in use:
1. HCA is in Dual/multi-port RoCE mode.
2. VF/SF representor bonding (Usually used for Live migration)
3. Stacked devices
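
When none of these use cases applies, metadata can be disabled through the esw_port_metadata devlink parameter this section describes; a hedged example (PCI address illustrative, exact cmode assumed)::

    $ devlink dev param set pci/0000:06:00.0 name esw_port_metadata value false cmode runtime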
@@ -180,7 +182,8 @@ User commands examples:
$ devlink health diagnose pci/0000:82:00.0 reporter tx
NOTE: This command has valid output only when interface is up, otherwise the command has empty output.
.. note::
This command has valid output only when interface is up, otherwise the command has empty output.
- Show number of tx errors indicated, number of recover flows ended successfully,
is autorecover enabled and graceful period from last recover::
@@ -232,8 +235,9 @@ User commands examples:
$ devlink health dump show pci/0000:82:00.0 reporter fw
NOTE: This command can run only on the PF which has fw tracer ownership,
running it on other PF or any VF will return "Operation not permitted".
.. note::
This command can run only on the PF which has fw tracer ownership,
running it on other PF or any VF will return "Operation not permitted".
fw fatal reporter
-----------------
@@ -256,7 +260,8 @@ User commands examples:
$ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
NOTE: This command can run only on PF.
.. note::
This command can run only on PF.
vnic reporter
-------------
@@ -265,28 +270,37 @@ It is responsible for querying the vnic diagnostic counters from fw and displayi
them in realtime.
Description of the vnic counters:
total_q_under_processor_handle: number of queues in an error state due to
an async error or errored command.
send_queue_priority_update_flow: number of QP/SQ priority/SL update
events.
cq_overrun: number of times CQ entered an error state due to an
overflow.
async_eq_overrun: number of times an EQ mapped to async events was
overrun.
comp_eq_overrun: number of times an EQ mapped to completion events was
overrun.
quota_exceeded_command: number of commands issued and failed due to quota
exceeded.
invalid_command: number of commands issued and failed dues to any reason
other than quota exceeded.
nic_receive_steering_discard: number of packets that completed RX flow
steering but were discarded due to a mismatch in flow table.
- total_q_under_processor_handle
number of queues in an error state due to
an async error or errored command.
- send_queue_priority_update_flow
number of QP/SQ priority/SL update events.
- cq_overrun
number of times CQ entered an error state due to an overflow.
- async_eq_overrun
number of times an EQ mapped to async events was overrun.
- comp_eq_overrun
  number of times an EQ mapped to completion events was overrun.
- quota_exceeded_command
number of commands issued and failed due to quota exceeded.
- invalid_command
  number of commands issued and failed due to any reason other than quota
exceeded.
- nic_receive_steering_discard
number of packets that completed RX flow
steering but were discarded due to a mismatch in flow table.
User commands examples:
- Diagnose PF/VF vnic counters
- Diagnose PF/VF vnic counters::
$ devlink health diagnose pci/0000:82:00.1 reporter vnic
- Diagnose representor vnic counters (performed by supplying devlink port of the
representor, which can be obtained via devlink port command)
representor, which can be obtained via devlink port command)::
$ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
NOTE: This command can run over all interfaces such as PF/VF and representor ports.
.. note::
This command can run over all interfaces such as PF/VF and representor ports.
@@ -490,7 +490,7 @@ static void poll_trace(struct mlx5_fw_tracer *tracer,
(u64)timestamp_low;
break;
default:
if (tracer_event->event_id >= tracer->str_db.first_string_trace ||
if (tracer_event->event_id >= tracer->str_db.first_string_trace &&
tracer_event->event_id <= tracer->str_db.first_string_trace +
tracer->str_db.num_string_trace) {
tracer_event->type = TRACER_EVENT_TYPE_STRING;
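
The one-character fix above ('||' to '&&') turns a test that matched every id at or above first_string_trace into a proper inclusive range check; a minimal standalone C sketch of the corrected predicate (names assumed from the hunk):

    #include <stdbool.h>
    #include <stdint.h>

    /* An event id denotes a string trace only when it falls inside
     * [first, first + num]; with '||' any id >= first matched, even
     * ids beyond the string-trace window. */
    static bool is_string_trace(uint32_t event_id, uint32_t first, uint32_t num)
    {
        return event_id >= first && event_id <= first + num;
    }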
@@ -327,6 +327,7 @@ struct mlx5e_params {
unsigned int sw_mtu;
int hard_mtu;
bool ptp_rx;
__be32 terminate_lkey_be;
};
static inline u8 mlx5e_get_dcb_num_tc(struct mlx5e_params *params)
@@ -51,7 +51,7 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
if (err)
goto out;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]);
port_buffer->buffer[i].lossy =
MLX5_GET(bufferx_reg, buffer, lossy);
@@ -73,14 +73,24 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
port_buffer->buffer[i].lossy);
}
port_buffer->headroom_size = total_used;
port_buffer->internal_buffers_size = 0;
for (i = MLX5E_MAX_NETWORK_BUFFER; i < MLX5E_TOTAL_BUFFERS; i++) {
buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]);
port_buffer->internal_buffers_size +=
MLX5_GET(bufferx_reg, buffer, size) * port_buff_cell_sz;
}
port_buffer->port_buffer_size =
MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz;
port_buffer->spare_buffer_size =
port_buffer->port_buffer_size - total_used;
mlx5e_dbg(HW, priv, "total buffer size=%d, spare buffer size=%d\n",
port_buffer->port_buffer_size,
port_buffer->headroom_size = total_used;
port_buffer->spare_buffer_size = port_buffer->port_buffer_size -
port_buffer->internal_buffers_size -
port_buffer->headroom_size;
mlx5e_dbg(HW, priv,
"total buffer size=%u, headroom buffer size=%u, internal buffers size=%u, spare buffer size=%u\n",
port_buffer->port_buffer_size, port_buffer->headroom_size,
port_buffer->internal_buffers_size,
port_buffer->spare_buffer_size);
out:
kfree(out);
@@ -206,11 +216,11 @@ static int port_update_pool_cfg(struct mlx5_core_dev *mdev,
if (!MLX5_CAP_GEN(mdev, sbcam_reg))
return 0;
for (i = 0; i < MLX5E_MAX_BUFFER; i++)
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
lossless_buff_count += ((port_buffer->buffer[i].size) &&
(!(port_buffer->buffer[i].lossy)));
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count);
err = mlx5e_port_set_sbcm(mdev, 0, i,
MLX5_INGRESS_DIR,
@@ -293,7 +303,7 @@ static int port_set_buffer(struct mlx5e_priv *priv,
if (err)
goto out;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
u64 size = port_buffer->buffer[i].size;
u64 xoff = port_buffer->buffer[i].xoff;
@@ -351,7 +361,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
{
int i;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
if (port_buffer->buffer[i].lossy) {
port_buffer->buffer[i].xoff = 0;
port_buffer->buffer[i].xon = 0;
@@ -408,7 +418,7 @@ static int update_buffer_lossy(struct mlx5_core_dev *mdev,
int err;
int i;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
prio_count = 0;
lossy_count = 0;
@@ -432,11 +442,11 @@ static int update_buffer_lossy(struct mlx5_core_dev *mdev,
}
if (changed) {
err = port_update_pool_cfg(mdev, port_buffer);
err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
err = port_update_pool_cfg(mdev, port_buffer);
if (err)
return err;
@@ -515,7 +525,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) {
update_prio2buffer = true;
for (i = 0; i < MLX5E_MAX_BUFFER; i++)
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n",
__func__, i, prio2buffer[i]);
@@ -530,7 +540,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
}
if (change & MLX5E_PORT_BUFFER_SIZE) {
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
mlx5e_dbg(HW, priv, "%s: buffer[%d]=%d\n", __func__, i, buffer_size[i]);
if (!port_buffer.buffer[i].lossy && !buffer_size[i]) {
mlx5e_dbg(HW, priv, "%s: lossless buffer[%d] size cannot be zero\n",
@@ -544,7 +554,9 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
mlx5e_dbg(HW, priv, "%s: total buffer requested=%d\n", __func__, total_used);
if (total_used > port_buffer.port_buffer_size)
if (total_used > port_buffer.headroom_size &&
(total_used - port_buffer.headroom_size) >
port_buffer.spare_buffer_size)
return -EINVAL;
update_buffer = true;
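
Taken together, the port-buffer hunks split the ten hardware buffers into eight network buffers (0-7) and two internal buffers (8-9), and make spare-space accounting subtract both; a minimal sketch of the new arithmetic, using the field names the patch introduces:

    /* Sketch: spare = total - internal buffers (8-9) - headroom (0-7).
     * The old code subtracted only the headroom, overstating spare space. */
    static u32 spare_buffer_size(const struct mlx5e_port_buffer *pb)
    {
        return pb->port_buffer_size -
               pb->internal_buffers_size -
               pb->headroom_size;
    }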
@@ -35,7 +35,8 @@
#include "en.h"
#include "port.h"
#define MLX5E_MAX_BUFFER 8
#define MLX5E_MAX_NETWORK_BUFFER 8
#define MLX5E_TOTAL_BUFFERS 10
#define MLX5E_DEFAULT_CABLE_LEN 7 /* 7 meters */
#define MLX5_BUFFER_SUPPORTED(mdev) (MLX5_CAP_GEN(mdev, pcam_reg) && \
@@ -60,8 +61,9 @@ struct mlx5e_bufferx_reg {
struct mlx5e_port_buffer {
u32 port_buffer_size;
u32 spare_buffer_size;
u32 headroom_size;
struct mlx5e_bufferx_reg buffer[MLX5E_MAX_BUFFER];
u32 headroom_size; /* Buffers 0-7 */
u32 internal_buffers_size; /* Buffers 8-9 */
struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER];
};
int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
@@ -84,7 +84,7 @@ mlx5e_tc_act_init_parse_state(struct mlx5e_tc_act_parse_state *parse_state,
int
mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
struct flow_action *flow_action,
struct flow_action *flow_action, int from, int to,
struct mlx5_flow_attr *attr,
enum mlx5_flow_namespace_type ns_type)
{
@@ -96,6 +96,11 @@ mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
priv = parse_state->flow->priv;
flow_action_for_each(i, act, flow_action) {
if (i < from)
continue;
else if (i > to)
break;
tc_act = mlx5e_tc_act_get(act->id, ns_type);
if (!tc_act || !tc_act->post_parse)
continue;
@@ -112,7 +112,7 @@ mlx5e_tc_act_init_parse_state(struct mlx5e_tc_act_parse_state *parse_state,
int
mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state,
struct flow_action *flow_action,
struct flow_action *flow_action, int from, int to,
struct mlx5_flow_attr *attr,
enum mlx5_flow_namespace_type ns_type);
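
With the new from/to bounds, parse_tc_actions() can hand each attribute exactly its own slice of the action list, so no action is post-parsed twice; a condensed sketch of the bounded walk (simplified from the hunk above):

    /* Visit only entries in [from, to]: earlier entries were already
     * post-parsed for a previous attribute, later ones belong to the
     * next split. */
    flow_action_for_each(i, act, flow_action) {
        if (i < from)
            continue;
        else if (i > to)
            break;
        /* ... look up tc_act and run its post_parse callback ... */
    }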
@@ -492,6 +492,19 @@ void mlx5e_encap_put(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e)
mlx5e_encap_dealloc(priv, e);
}
static void mlx5e_encap_put_locked(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
lockdep_assert_held(&esw->offloads.encap_tbl_lock);
if (!refcount_dec_and_test(&e->refcnt))
return;
list_del(&e->route_list);
hash_del_rcu(&e->encap_hlist);
mlx5e_encap_dealloc(priv, e);
}
static void mlx5e_decap_put(struct mlx5e_priv *priv, struct mlx5e_decap_entry *d)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
@@ -816,6 +829,8 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
uintptr_t hash_key;
int err = 0;
lockdep_assert_held(&esw->offloads.encap_tbl_lock);
parse_attr = attr->parse_attr;
tun_info = parse_attr->tun_info[out_index];
mpls_info = &parse_attr->mpls_info[out_index];
@@ -829,7 +844,6 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
hash_key = hash_encap_info(&key);
mutex_lock(&esw->offloads.encap_tbl_lock);
e = mlx5e_encap_get(priv, &key, hash_key);
/* must verify if encap is valid or not */
@@ -840,15 +854,6 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
goto out_err;
}
mutex_unlock(&esw->offloads.encap_tbl_lock);
wait_for_completion(&e->res_ready);
/* Protect against concurrent neigh update. */
mutex_lock(&esw->offloads.encap_tbl_lock);
if (e->compl_result < 0) {
err = -EREMOTEIO;
goto out_err;
}
goto attach_flow;
}
@@ -877,15 +882,12 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
INIT_LIST_HEAD(&e->flows);
hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key);
tbl_time_before = mlx5e_route_tbl_get_last_update(priv);
mutex_unlock(&esw->offloads.encap_tbl_lock);
if (family == AF_INET)
err = mlx5e_tc_tun_create_header_ipv4(priv, mirred_dev, e);
else if (family == AF_INET6)
err = mlx5e_tc_tun_create_header_ipv6(priv, mirred_dev, e);
/* Protect against concurrent neigh update. */
mutex_lock(&esw->offloads.encap_tbl_lock);
complete_all(&e->res_ready);
if (err) {
e->compl_result = err;
@@ -920,18 +922,15 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
} else {
flow_flag_set(flow, SLOW);
}
mutex_unlock(&esw->offloads.encap_tbl_lock);
return err;
out_err:
mutex_unlock(&esw->offloads.encap_tbl_lock);
if (e)
mlx5e_encap_put(priv, e);
mlx5e_encap_put_locked(priv, e);
return err;
out_err_init:
mutex_unlock(&esw->offloads.encap_tbl_lock);
kfree(tun_info);
kfree(e);
return err;
@@ -1016,6 +1015,93 @@ int mlx5e_attach_decap(struct mlx5e_priv *priv,
return err;
}
int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr,
struct netlink_ext_ack *extack,
bool *vf_tun)
{
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5_esw_flow_attr *esw_attr;
struct net_device *encap_dev = NULL;
struct mlx5e_rep_priv *rpriv;
struct mlx5e_priv *out_priv;
struct mlx5_eswitch *esw;
int out_index;
int err = 0;
if (!mlx5e_is_eswitch_flow(flow))
return 0;
parse_attr = attr->parse_attr;
esw_attr = attr->esw_attr;
*vf_tun = false;
esw = priv->mdev->priv.eswitch;
mutex_lock(&esw->offloads.encap_tbl_lock);
for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
struct net_device *out_dev;
int mirred_ifindex;
if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
continue;
mirred_ifindex = parse_attr->mirred_ifindex[out_index];
out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex);
if (!out_dev) {
NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found");
err = -ENODEV;
goto out;
}
err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index,
extack, &encap_dev);
dev_put(out_dev);
if (err)
goto out;
if (esw_attr->dests[out_index].flags &
MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE &&
!esw_attr->dest_int_port)
*vf_tun = true;
out_priv = netdev_priv(encap_dev);
rpriv = out_priv->ppriv;
esw_attr->dests[out_index].rep = rpriv->rep;
esw_attr->dests[out_index].mdev = out_priv->mdev;
}
if (*vf_tun && esw_attr->out_count > 1) {
NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported");
err = -EOPNOTSUPP;
goto out;
}
out:
mutex_unlock(&esw->offloads.encap_tbl_lock);
return err;
}
void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr)
{
struct mlx5_esw_flow_attr *esw_attr;
int out_index;
if (!mlx5e_is_eswitch_flow(flow))
return;
esw_attr = attr->esw_attr;
for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
continue;
mlx5e_detach_encap(flow->priv, flow, attr, out_index);
kfree(attr->parse_attr->tun_info[out_index]);
}
}
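
The net effect of the refactor above is that encap_tbl_lock is taken once around the whole destination-setup loop rather than dropped and re-taken inside mlx5e_attach_encap(), so a concurrent neigh update can no longer observe a half-built entry; a condensed sketch of the new locking shape (assumed from the hunks):

    mutex_lock(&esw->offloads.encap_tbl_lock);
    for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
        /* mlx5e_attach_encap() now lockdep-asserts the lock is held
         * instead of locking internally. */
        err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index,
                                 extack, &encap_dev);
        if (err)
            goto out;
    }
    out:
    mutex_unlock(&esw->offloads.encap_tbl_lock);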
static int cmp_route_info(struct mlx5e_route_key *a,
struct mlx5e_route_key *b)
{
@@ -30,6 +30,15 @@ int mlx5e_attach_decap_route(struct mlx5e_priv *priv,
void mlx5e_detach_decap_route(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow);
int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr,
struct netlink_ext_ack *extack,
bool *vf_tun);
void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr);
struct ip_tunnel_info *mlx5e_dup_tun_info(const struct ip_tunnel_info *tun_info);
int mlx5e_tc_set_attr_rx_tun(struct mlx5e_tc_flow *flow,
@@ -926,9 +926,10 @@ static int mlx5e_dcbnl_getbuffer(struct net_device *dev,
if (err)
return err;
for (i = 0; i < MLX5E_MAX_BUFFER; i++)
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)
dcb_buffer->buffer_size[i] = port_buffer.buffer[i].size;
dcb_buffer->total_size = port_buffer.port_buffer_size;
dcb_buffer->total_size = port_buffer.port_buffer_size -
port_buffer.internal_buffers_size;
return 0;
}
@@ -970,7 +971,7 @@ static int mlx5e_dcbnl_setbuffer(struct net_device *dev,
if (err)
return err;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
if (port_buffer.buffer[i].size != dcb_buffer->buffer_size[i]) {
changed |= MLX5E_PORT_BUFFER_SIZE;
buffer_size = dcb_buffer->buffer_size;
@@ -727,26 +727,6 @@ static void mlx5e_rq_free_shampo(struct mlx5e_rq *rq)
mlx5e_rq_shampo_hd_free(rq);
}
static __be32 mlx5e_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev)
{
u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {};
u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {};
int res;
if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey))
return MLX5_TERMINATE_SCATTER_LIST_LKEY;
MLX5_SET(query_special_contexts_in, in, opcode,
MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS);
res = mlx5_cmd_exec_inout(dev, query_special_contexts, in, out);
if (res)
return MLX5_TERMINATE_SCATTER_LIST_LKEY;
res = MLX5_GET(query_special_contexts_out, out,
terminate_scatter_list_mkey);
return cpu_to_be32(res);
}
static int mlx5e_alloc_rq(struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk,
struct mlx5e_rq_param *rqp,
@@ -908,7 +888,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
/* check if num_frags is not a pow of two */
if (rq->wqe.info.num_frags < (1 << rq->wqe.info.log_num_frags)) {
wqe->data[f].byte_count = 0;
wqe->data[f].lkey = mlx5e_get_terminate_scatter_list_mkey(mdev);
wqe->data[f].lkey = params->terminate_lkey_be;
wqe->data[f].addr = 0;
}
}
@@ -5007,6 +4987,8 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
/* RQ */
mlx5e_build_rq_params(mdev, params);
params->terminate_lkey_be = mlx5_core_get_terminate_scatter_list_mkey(mdev);
params->packet_merge.timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);
/* CQ moderation params */
@@ -5279,12 +5261,16 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
mlx5e_timestamp_init(priv);
priv->dfs_root = debugfs_create_dir("nic",
mlx5_debugfs_get_dev_root(mdev));
fs = mlx5e_fs_init(priv->profile, mdev,
!test_bit(MLX5E_STATE_DESTROYING, &priv->state),
priv->dfs_root);
if (!fs) {
err = -ENOMEM;
mlx5_core_err(mdev, "FS initialization failed, %d\n", err);
debugfs_remove_recursive(priv->dfs_root);
return err;
}
priv->fs = fs;
@@ -5305,6 +5291,7 @@ static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
mlx5e_health_destroy_reporters(priv);
mlx5e_ktls_cleanup(priv);
mlx5e_fs_cleanup(priv->fs);
debugfs_remove_recursive(priv->dfs_root);
priv->fs = NULL;
}
@@ -5851,8 +5838,8 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv)
}
static int
mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
const struct mlx5e_profile *new_profile, void *new_ppriv)
mlx5e_netdev_init_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
const struct mlx5e_profile *new_profile, void *new_ppriv)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
int err;
@@ -5868,6 +5855,25 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
err = new_profile->init(priv->mdev, priv->netdev);
if (err)
goto priv_cleanup;
return 0;
priv_cleanup:
mlx5e_priv_cleanup(priv);
return err;
}
static int
mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
const struct mlx5e_profile *new_profile, void *new_ppriv)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
int err;
err = mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);
if (err)
return err;
err = mlx5e_attach_netdev(priv);
if (err)
goto profile_cleanup;
@@ -5875,7 +5881,6 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
profile_cleanup:
new_profile->cleanup(priv);
priv_cleanup:
mlx5e_priv_cleanup(priv);
return err;
}
@@ -5894,6 +5899,12 @@ int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
priv->profile->cleanup(priv);
mlx5e_priv_cleanup(priv);
if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);
set_bit(MLX5E_STATE_DESTROYING, &priv->state);
return -EIO;
}
err = mlx5e_netdev_attach_profile(netdev, mdev, new_profile, new_ppriv);
if (err) { /* roll back to original profile */
netdev_warn(netdev, "%s: new profile init failed, %d\n", __func__, err);
@@ -5955,8 +5966,11 @@ static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state)
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = priv->mdev;
if (!netif_device_present(netdev))
if (!netif_device_present(netdev)) {
if (test_bit(MLX5E_STATE_DESTROYING, &priv->state))
mlx5e_destroy_mdev_resources(mdev);
return -ENODEV;
}
mlx5e_detach_netdev(priv);
mlx5e_destroy_mdev_resources(mdev);
@@ -6002,9 +6016,6 @@ static int mlx5e_probe(struct auxiliary_device *adev,
priv->profile = profile;
priv->ppriv = NULL;
priv->dfs_root = debugfs_create_dir("nic",
mlx5_debugfs_get_dev_root(priv->mdev));
err = profile->init(mdev, netdev);
if (err) {
mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err);
@@ -6033,7 +6044,6 @@ static int mlx5e_probe(struct auxiliary_device *adev,
err_profile_cleanup:
profile->cleanup(priv);
err_destroy_netdev:
debugfs_remove_recursive(priv->dfs_root);
mlx5e_destroy_netdev(priv);
err_devlink_port_unregister:
mlx5e_devlink_port_unregister(mlx5e_dev);
@@ -6053,7 +6063,6 @@ static void mlx5e_remove(struct auxiliary_device *adev)
unregister_netdev(priv->netdev);
mlx5e_suspend(adev, state);
priv->profile->cleanup(priv);
debugfs_remove_recursive(priv->dfs_root);
mlx5e_destroy_netdev(priv);
mlx5e_devlink_port_unregister(mlx5e_dev);
mlx5e_destroy_devlink(mlx5e_dev);
@@ -30,6 +30,7 @@
* SOFTWARE.
*/
#include <linux/debugfs.h>
#include <linux/mlx5/fs.h>
#include <net/switchdev.h>
#include <net/pkt_cls.h>
@@ -812,11 +813,15 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
{
struct mlx5e_priv *priv = netdev_priv(netdev);
priv->dfs_root = debugfs_create_dir("nic",
mlx5_debugfs_get_dev_root(mdev));
priv->fs = mlx5e_fs_init(priv->profile, mdev,
!test_bit(MLX5E_STATE_DESTROYING, &priv->state),
priv->dfs_root);
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
debugfs_remove_recursive(priv->dfs_root);
return -ENOMEM;
}
@@ -829,6 +834,7 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
static void mlx5e_cleanup_rep(struct mlx5e_priv *priv)
{
mlx5e_fs_cleanup(priv->fs);
debugfs_remove_recursive(priv->dfs_root);
priv->fs = NULL;
}
@@ -1699,91 +1699,6 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
return mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
}
static int
set_encap_dests(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr,
struct netlink_ext_ack *extack,
bool *vf_tun)
{
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5_esw_flow_attr *esw_attr;
struct net_device *encap_dev = NULL;
struct mlx5e_rep_priv *rpriv;
struct mlx5e_priv *out_priv;
int out_index;
int err = 0;
if (!mlx5e_is_eswitch_flow(flow))
return 0;
parse_attr = attr->parse_attr;
esw_attr = attr->esw_attr;
*vf_tun = false;
for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
struct net_device *out_dev;
int mirred_ifindex;
if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
continue;
mirred_ifindex = parse_attr->mirred_ifindex[out_index];
out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex);
if (!out_dev) {
NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found");
err = -ENODEV;
goto out;
}
err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index,
extack, &encap_dev);
dev_put(out_dev);
if (err)
goto out;
if (esw_attr->dests[out_index].flags &
MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE &&
!esw_attr->dest_int_port)
*vf_tun = true;
out_priv = netdev_priv(encap_dev);
rpriv = out_priv->ppriv;
esw_attr->dests[out_index].rep = rpriv->rep;
esw_attr->dests[out_index].mdev = out_priv->mdev;
}
if (*vf_tun && esw_attr->out_count > 1) {
NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported");
err = -EOPNOTSUPP;
goto out;
}
out:
return err;
}
static void
clean_encap_dests(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr)
{
struct mlx5_esw_flow_attr *esw_attr;
int out_index;
if (!mlx5e_is_eswitch_flow(flow))
return;
esw_attr = attr->esw_attr;
for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) {
if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP))
continue;
mlx5e_detach_encap(priv, flow, attr, out_index);
kfree(attr->parse_attr->tun_info[out_index]);
}
}
static int
verify_attr_actions(u32 actions, struct netlink_ext_ack *extack)
{
@@ -1820,7 +1735,7 @@ post_process_attr(struct mlx5e_tc_flow *flow,
if (err)
goto err_out;
err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun);
err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun);
if (err)
goto err_out;
@@ -3944,8 +3859,8 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
struct mlx5_flow_attr *prev_attr;
struct flow_action_entry *act;
struct mlx5e_tc_act *tc_act;
int err, i, i_split = 0;
bool is_missable;
int err, i;
ns_type = mlx5e_get_flow_namespace(flow);
list_add(&attr->list, &flow->attrs);
@@ -3986,7 +3901,8 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
i < flow_action->num_entries - 1)) {
is_missable = tc_act->is_missable ? tc_act->is_missable(act) : false;
err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type);
err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr,
ns_type);
if (err)
goto out_free_post_acts;
@@ -3996,6 +3912,7 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
goto out_free_post_acts;
}
i_split = i + 1;
list_add(&attr->list, &flow->attrs);
}
@@ -4010,7 +3927,7 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
}
}
err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type);
err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, ns_type);
if (err)
goto out_free_post_acts;
@@ -4324,7 +4241,7 @@ mlx5_free_flow_attr_actions(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *a
if (attr->post_act_handle)
mlx5e_tc_post_act_del(get_post_action(flow->priv), attr->post_act_handle);
clean_encap_dests(flow->priv, flow, attr);
mlx5e_tc_tun_encap_dests_unset(flow->priv, flow, attr);
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
mlx5_fc_destroy(counter_dev, attr->counter);
@@ -824,7 +824,7 @@ static int comp_irqs_request_pci(struct mlx5_core_dev *dev)
ncomp_eqs = table->num_comp_eqs;
cpus = kcalloc(ncomp_eqs, sizeof(*cpus), GFP_KERNEL);
if (!cpus)
ret = -ENOMEM;
return -ENOMEM;
i = 0;
rcu_read_lock();
@@ -1802,15 +1802,16 @@ static void remove_one(struct pci_dev *pdev)
struct devlink *devlink = priv_to_devlink(dev);
set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
/* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain
* fw_reset before unregistering the devlink.
/* mlx5_drain_fw_reset() and mlx5_drain_health_wq() are using
* devlink notify APIs.
* Hence, we must drain them before unregistering the devlink.
*/
mlx5_drain_fw_reset(dev);
mlx5_drain_health_wq(dev);
devlink_unregister(devlink);
mlx5_sriov_disable(pdev);
mlx5_thermal_uninit(dev);
mlx5_crdump_disable(dev);
mlx5_drain_health_wq(dev);
mlx5_uninit_one(dev);
mlx5_pci_close(dev);
mlx5_mdev_uninit(dev);
@@ -32,6 +32,7 @@
#include <linux/kernel.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/qp.h>
#include "mlx5_core.h"
int mlx5_core_create_mkey(struct mlx5_core_dev *dev, u32 *mkey, u32 *in,
@@ -122,3 +123,23 @@ int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num)
return mlx5_cmd_exec_in(dev, destroy_psv, in);
}
EXPORT_SYMBOL(mlx5_core_destroy_psv);
__be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev)
{
u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {};
u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {};
u32 mkey;
if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey))
return MLX5_TERMINATE_SCATTER_LIST_LKEY;
MLX5_SET(query_special_contexts_in, in, opcode,
MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS);
if (mlx5_cmd_exec_inout(dev, query_special_contexts, in, out))
return MLX5_TERMINATE_SCATTER_LIST_LKEY;
mkey = MLX5_GET(query_special_contexts_out, out,
terminate_scatter_list_mkey);
return cpu_to_be32(mkey);
}
EXPORT_SYMBOL(mlx5_core_get_terminate_scatter_list_mkey);
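
Usage note: the mlx5e hunks earlier in this series call this helper once per device when building NIC params and then stamp the cached value into unused WQE fragments, replacing a QUERY_SPECIAL_CONTEXTS firmware command per RQ; roughly:

    /* once per mdev, in mlx5e_build_nic_params(): */
    params->terminate_lkey_be = mlx5_core_get_terminate_scatter_list_mkey(mdev);

    /* per RQ, in mlx5e_alloc_rq(): */
    wqe->data[f].lkey = params->terminate_lkey_be;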
@@ -63,6 +63,7 @@ static void mlx5_sf_dev_remove(struct auxiliary_device *adev)
struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev);
struct devlink *devlink = priv_to_devlink(sf_dev->mdev);
mlx5_drain_health_wq(sf_dev->mdev);
devlink_unregister(devlink);
mlx5_uninit_one(sf_dev->mdev);
iounmap(sf_dev->mdev->iseg);
@@ -213,6 +213,8 @@ struct mlx5dr_ptrn_mgr *mlx5dr_ptrn_mgr_create(struct mlx5dr_domain *dmn)
}
INIT_LIST_HEAD(&mgr->ptrn_list);
mutex_init(&mgr->modify_hdr_mutex);
return mgr;
free_mgr:
@@ -237,5 +239,6 @@ void mlx5dr_ptrn_mgr_destroy(struct mlx5dr_ptrn_mgr *mgr)
}
mlx5dr_icm_pool_destroy(mgr->ptrn_icm_pool);
mutex_destroy(&mgr->modify_hdr_mutex);
kfree(mgr);
}
@@ -1093,6 +1093,7 @@ void mlx5_cmdif_debugfs_cleanup(struct mlx5_core_dev *dev);
int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn,
int npsvs, u32 *sig_index);
int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num);
__be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev);
void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common);
int mlx5_query_odp_caps(struct mlx5_core_dev *dev,
struct mlx5_odp_caps *odp_caps);