Commit ccd06f50 authored by Jakub Kicinski

Merge tag 'mlx5-updates-2023-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-08-14

1) Handle PTP out of order CQEs issue
2) Check FW status before determining reset successful
3) Expose maximum supported SFs via devlink resource
4) MISC cleanups

* tag 'mlx5-updates-2023-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: Don't query MAX caps twice
  net/mlx5: Remove unused MAX HCA capabilities
  net/mlx5: Remove unused CAPs
  net/mlx5: Fix error message in mlx5_sf_dev_state_change_handler()
  net/mlx5: Remove redundant check of mlx5_vhca_event_supported()
  net/mlx5: Use mlx5_sf_start_function_id() helper instead of directly calling MLX5_CAP_GEN()
  net/mlx5: Remove redundant SF supported check from mlx5_sf_hw_table_init()
  net/mlx5: Use auxiliary_device_uninit() instead of device_put()
  net/mlx5: E-switch, Add checking for flow rule destinations
  net/mlx5: Check with FW that sync reset completed successfully
  net/mlx5: Expose max possible SFs via devlink resource
  net/mlx5e: Add recovery flow for tx devlink health reporter for unhealthy PTP SQ
  net/mlx5e: Make tx_port_ts logic resilient to out-of-order CQEs
  net/mlx5: Consolidate devlink documentation in devlink/mlx5.rst
====================

Link: https://lore.kernel.org/r/20230814214144.159464-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 9cf3db3c bd3a2f77
......@@ -683,6 +683,12 @@ the software port.
time protocol.
- Error
* - `ptp_cq[i]_late_cqe`
- Number of times a CQE was delivered on the PTP timestamping CQ later than
expected, i.e. after the time window within which the device normally posts
the CQE had already elapsed.
- Error
.. [#ring_global] The corresponding ring and global counters do not share the
same name (i.e. do not follow the common naming scheme).
......
......@@ -13,7 +13,6 @@ Contents:
:maxdepth: 2
kconfig
devlink
switchdev
tracepoints
counters
......
......@@ -18,6 +18,11 @@ Parameters
* - ``enable_roce``
- driverinit
- Type: Boolean
If the device supports RoCE disablement, the RoCE enablement state controls
device support for the RoCE capability. Otherwise, the control occurs in the
driver stack. When RoCE is disabled at the driver level, only raw
Ethernet QPs are supported.
* - ``io_eq_size``
- driverinit
- The range is between 64 and 4096.
......@@ -48,6 +53,9 @@ parameters.
* ``smfs`` Software managed flow steering. In SMFS mode, the HW
steering entities are created and managed through the driver without
firmware intervention.
SMFS mode is faster and provides a better rule insertion rate than the
default DMFS mode.
* - ``fdb_large_groups``
- u32
- driverinit
......@@ -71,7 +79,24 @@ parameters.
deprecated.
Default: disabled
* - ``esw_port_metadata``
- Boolean
- runtime
- When applicable, disabling eswitch metadata can increase packet rate by up
to 20%, depending on the use case and packet sizes.
Eswitch port metadata state controls whether packets are internally tagged
with metadata. Metadata tagging must be enabled for multi-port RoCE,
failover between representors, and stacked devices. By default, metadata is
enabled on the supported devices in E-switch. Metadata is applicable only
to E-switch in switchdev mode, and users may disable it when NONE of the
below use cases will be in use:
1. HCA is in dual/multi-port RoCE mode.
2. VF/SF representor bonding (usually used for live migration)
3. Stacked devices
When metadata is disabled, the above use cases will fail to initialize if
users try to enable them.
* - ``hairpin_num_queues``
- u32
- driverinit
......@@ -104,3 +129,160 @@ The ``mlx5`` driver reports the following versions
* - ``fw.version``
- stored, running
- Three digit major.minor.subminor firmware version number.
Health reporters
================
tx reporter
-----------
The tx reporter is responsible for reporting and recovering from the following three error scenarios:
- tx timeout
Report on kernel tx timeout detection.
Recover by searching for lost interrupts.
- tx error completion
Report on tx error completion.
Recover by flushing the tx queue and resetting it.
- tx PTP port timestamping CQ unhealthy
Report when too many CQEs were never delivered on the port TS CQ.
Recover by flushing and re-creating all PTP channels.
The tx reporter also supports an on-demand diagnose callback, through which it
provides real-time information about the status of its send queues.
User command examples:
- Diagnose send queues status::
$ devlink health diagnose pci/0000:82:00.0 reporter tx
.. note::
This command has valid output only when the interface is up; otherwise, the output is empty.
- Show the number of tx errors indicated, the number of recover flows that ended
successfully, whether autorecover is enabled, and the grace period since the last recover::
$ devlink health show pci/0000:82:00.0 reporter tx
rx reporter
-----------
The rx reporter is responsible for reporting and recovering from the following two error scenarios:
- rx queues' initialization (population) timeout
Population of rx queues' descriptors on ring initialization is done
in napi context by triggering an irq. If the minimum number of
descriptors cannot be obtained, a timeout occurs, and the
descriptors can be recovered by polling the EQ (Event Queue).
- rx completions with errors (reported by HW in interrupt context)
Report on rx completion error.
Recover (if needed) by flushing the related queue and resetting it.
The rx reporter also supports an on-demand diagnose callback, through which it
provides real-time information about the status of its receive queues.
- Diagnose rx queues' status and corresponding completion queue::
$ devlink health diagnose pci/0000:82:00.0 reporter rx
.. note::
This command has valid output only when the interface is up; otherwise, the output is empty.
- Show the number of rx errors indicated, the number of recover flows that ended
successfully, whether autorecover is enabled, and the grace period since the last recover::
$ devlink health show pci/0000:82:00.0 reporter rx
fw reporter
-----------
The fw reporter implements `diagnose` and `dump` callbacks.
It follows symptoms of a fw error, such as a fw syndrome, by triggering
a fw core dump and storing it in the dump buffer.
The fw reporter diagnose command can be triggered at any time by the user to check
the current fw status.
User command examples:
- Check fw health status::
$ devlink health diagnose pci/0000:82:00.0 reporter fw
- Read FW core dump if already stored or trigger new one::
$ devlink health dump show pci/0000:82:00.0 reporter fw
.. note::
This command can run only on the PF which has fw tracer ownership;
running it on another PF or any VF will return "Operation not permitted".
fw fatal reporter
-----------------
The fw fatal reporter implements `dump` and `recover` callbacks.
It follows fatal error indications with a CR-space dump and a recover flow.
The CR-space dump uses the vsc interface, which remains valid even when the FW
command interface is not functional, as is the case for most FW fatal errors.
The recover function runs the recover flow, which reloads the driver and
triggers a fw reset if needed.
On firmware error, the health buffer is dumped into dmesg. The log
level is derived from the error's severity (given in the health buffer).
User command examples:
- Run fw recover flow manually::
$ devlink health recover pci/0000:82:00.0 reporter fw_fatal
- Read FW CR-space dump if already stored or trigger new one::
$ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
.. note::
This command can run only on the PF.
vnic reporter
-------------
The vnic reporter implements only the `diagnose` callback.
It is responsible for querying the vnic diagnostic counters from fw and displaying
them in real time.
Description of the vnic counters:
- total_q_under_processor_handle
number of queues in an error state due to
an async error or errored command.
- send_queue_priority_update_flow
number of QP/SQ priority/SL update events.
- cq_overrun
number of times CQ entered an error state due to an overflow.
- async_eq_overrun
number of times an EQ mapped to async events was overrun.
- comp_eq_overrun
number of times an EQ mapped to completion events was overrun.
- quota_exceeded_command
number of commands issued and failed due to quota exceeded.
- invalid_command
number of commands issued and failed due to any reason other than quota
exceeded.
- nic_receive_steering_discard
number of packets that completed RX flow
steering but were discarded due to a mismatch in the flow table.
- generated_pkt_steering_fail
number of packets generated by the VNIC experiencing unexpected steering
failure (at any point in steering flow).
- handled_pkt_steering_fail
number of packets handled by the VNIC experiencing unexpected steering
failure (at any point in steering flow owned by the VNIC, including the FDB
for the eswitch owner).
User command examples:
- Diagnose PF/VF vnic counters::
$ devlink health diagnose pci/0000:82:00.1 reporter vnic
- Diagnose representor vnic counters (performed by supplying devlink port of the
representor, which can be obtained via devlink port command)::
$ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
.. note::
This command can be run on all interfaces, such as PF/VF and representor ports.
......@@ -212,6 +212,9 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
/* On fw_activate action, also driver is reloaded and reinit performed */
*actions_performed |= BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
ret = mlx5_load_one_devl_locked(dev, true);
if (ret)
return ret;
ret = mlx5_fw_reset_verify_fw_complete(dev, extack);
break;
default:
/* Unsupported action should not get to this function */
......
......@@ -6,6 +6,14 @@
#include <net/devlink.h>
enum mlx5_devlink_resource_id {
MLX5_DL_RES_MAX_LOCAL_SFS = 1,
MLX5_DL_RES_MAX_EXTERNAL_SFS,
__MLX5_ID_RES_MAX,
MLX5_ID_RES_MAX = __MLX5_ID_RES_MAX - 1,
};
enum mlx5_devlink_param_id {
MLX5_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
......
......@@ -18,6 +18,7 @@ void mlx5e_reporter_tx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
void mlx5e_reporter_tx_ptpsq_unhealthy(struct mlx5e_ptpsq *ptpsq);
int mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
int mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
......
......@@ -7,18 +7,38 @@
#include "en.h"
#include "en_stats.h"
#include "en/txrx.h"
#include <linux/ktime.h>
#include <linux/ptp_classify.h>
#include <linux/time64.h>
#include <linux/workqueue.h>
#define MLX5E_PTP_CHANNEL_IX 0
#define MLX5E_PTP_MAX_LOG_SQ_SIZE (8U)
#define MLX5E_PTP_TS_CQE_UNDELIVERED_TIMEOUT (1 * NSEC_PER_SEC)
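/* Free-running FIFO of WQE metadata indices currently available for new
 * PTP-timestamped SKBs.
 */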
struct mlx5e_ptp_metadata_fifo {
u8 cc;
u8 pc;
u8 mask;
u8 *data;
};
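/* Maps an in-flight metadata index to the SKB waiting for its port
 * timestamping CQE; undelivered_counter tracks CQEs that never arrived.
 */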
struct mlx5e_ptp_metadata_map {
u16 undelivered_counter;
u16 capacity;
struct sk_buff **data;
};
struct mlx5e_ptpsq {
struct mlx5e_txqsq txqsq;
struct mlx5e_cq ts_cq;
u16 skb_fifo_cc;
u16 skb_fifo_pc;
struct mlx5e_skb_fifo skb_fifo;
struct mlx5e_ptp_cq_stats *cq_stats;
u16 ts_cqe_ctr_mask;
struct work_struct report_unhealthy_work;
struct mlx5e_ptp_port_ts_cqe_list *ts_cqe_pending_list;
struct mlx5e_ptp_metadata_fifo metadata_freelist;
struct mlx5e_ptp_metadata_map metadata_map;
};
enum {
......@@ -69,12 +89,35 @@ static inline bool mlx5e_use_ptpsq(struct sk_buff *skb)
fk.ports.dst == htons(PTP_EV_PORT));
}
static inline bool mlx5e_ptpsq_fifo_has_room(struct mlx5e_txqsq *sq)
static inline void mlx5e_ptp_metadata_fifo_push(struct mlx5e_ptp_metadata_fifo *fifo, u8 metadata)
{
if (!sq->ptpsq)
return true;
fifo->data[fifo->mask & fifo->pc++] = metadata;
}
static inline u8
mlx5e_ptp_metadata_fifo_pop(struct mlx5e_ptp_metadata_fifo *fifo)
{
return fifo->data[fifo->mask & fifo->cc++];
}
return mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo);
static inline void
mlx5e_ptp_metadata_map_put(struct mlx5e_ptp_metadata_map *map,
struct sk_buff *skb, u8 metadata)
{
WARN_ON_ONCE(map->data[metadata]);
map->data[metadata] = skb;
}
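/* True when no free metadata entries remain (pc == cc); the SQ is then
 * stopped until port TS CQEs return entries to the freelist.
 */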
static inline bool mlx5e_ptpsq_metadata_freelist_empty(struct mlx5e_ptpsq *ptpsq)
{
struct mlx5e_ptp_metadata_fifo *freelist;
if (likely(!ptpsq))
return false;
freelist = &ptpsq->metadata_freelist;
return freelist->pc == freelist->cc;
}
int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params,
......@@ -89,6 +132,8 @@ void mlx5e_ptp_free_rx_fs(struct mlx5e_flow_steering *fs,
const struct mlx5e_profile *profile);
int mlx5e_ptp_rx_manage_fs(struct mlx5e_priv *priv, bool set);
void mlx5e_ptpsq_track_metadata(struct mlx5e_ptpsq *ptpsq, u8 metadata);
enum {
MLX5E_SKB_CB_CQE_HWTSTAMP = BIT(0),
MLX5E_SKB_CB_PORT_HWTSTAMP = BIT(1),
......
......@@ -164,6 +164,43 @@ static int mlx5e_tx_reporter_timeout_recover(void *ctx)
return err;
}
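/* Recover an unhealthy PTP SQ by deactivating the channels, closing and
 * re-opening the PTP channel, then reactivating the channels.
 */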
static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx)
{
struct mlx5e_ptpsq *ptpsq = ctx;
struct mlx5e_channels *chs;
struct net_device *netdev;
struct mlx5e_priv *priv;
int carrier_ok;
int err;
if (!test_bit(MLX5E_SQ_STATE_RECOVERING, &ptpsq->txqsq.state))
return 0;
priv = ptpsq->txqsq.priv;
mutex_lock(&priv->state_lock);
chs = &priv->channels;
netdev = priv->netdev;
carrier_ok = netif_carrier_ok(netdev);
netif_carrier_off(netdev);
mlx5e_deactivate_priv_channels(priv);
mlx5e_ptp_close(chs->ptp);
err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp);
mlx5e_activate_priv_channels(priv);
/* return carrier back if needed */
if (carrier_ok)
netif_carrier_on(netdev);
mutex_unlock(&priv->state_lock);
return err;
}
/* state lock cannot be grabbed within this function.
* It can cause a dead lock or a read-after-free.
*/
......@@ -516,6 +553,15 @@ static int mlx5e_tx_reporter_timeout_dump(struct mlx5e_priv *priv, struct devlin
return mlx5e_tx_reporter_dump_sq(priv, fmsg, to_ctx->sq);
}
static int mlx5e_tx_reporter_ptpsq_unhealthy_dump(struct mlx5e_priv *priv,
struct devlink_fmsg *fmsg,
void *ctx)
{
struct mlx5e_ptpsq *ptpsq = ctx;
return mlx5e_tx_reporter_dump_sq(priv, fmsg, &ptpsq->txqsq);
}
static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
struct devlink_fmsg *fmsg)
{
......@@ -621,6 +667,25 @@ int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq)
return to_ctx.status;
}
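/* Report an unhealthy PTP port TS CQ to the tx health reporter; this kicks
 * off the ptpsq unhealthy recover and dump callbacks above.
 */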
void mlx5e_reporter_tx_ptpsq_unhealthy(struct mlx5e_ptpsq *ptpsq)
{
struct mlx5e_ptp_metadata_map *map = &ptpsq->metadata_map;
char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
struct mlx5e_txqsq *txqsq = &ptpsq->txqsq;
struct mlx5e_cq *ts_cq = &ptpsq->ts_cq;
struct mlx5e_priv *priv = txqsq->priv;
struct mlx5e_err_ctx err_ctx = {};
err_ctx.ctx = ptpsq;
err_ctx.recover = mlx5e_tx_reporter_ptpsq_unhealthy_recover;
err_ctx.dump = mlx5e_tx_reporter_ptpsq_unhealthy_dump;
snprintf(err_str, sizeof(err_str),
"Unhealthy TX port TS queue: %d, SQ: 0x%x, CQ: 0x%x, Undelivered CQEs: %u Map Capacity: %u",
txqsq->ch_ix, txqsq->sqn, ts_cq->mcq.cqn, map->undelivered_counter, map->capacity);
mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx);
}
static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = {
.name = "tx",
.recover = mlx5e_tx_reporter_recover,
......
......@@ -2061,7 +2061,8 @@ static int set_pflag_tx_port_ts(struct net_device *netdev, bool enable)
struct mlx5e_params new_params;
int err;
if (!MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn))
if (!MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn) ||
!MLX5_CAP_GEN_2(mdev, ts_cqe_metadata_size2wqe_counter))
return -EOPNOTSUPP;
/* Don't allow changing the PTP state if HTB offload is active, because
......
......@@ -2142,9 +2142,7 @@ static const struct counter_desc ptp_cq_stats_desc[] = {
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, err_cqe) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort_abs_diff_ns) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_cqe) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_event) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, ooo_cqe_drop) },
{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, late_cqe) },
};
static const struct counter_desc ptp_rq_stats_desc[] = {
......
......@@ -449,9 +449,7 @@ struct mlx5e_ptp_cq_stats {
u64 err_cqe;
u64 abort;
u64 abort_abs_diff_ns;
u64 resync_cqe;
u64 resync_event;
u64 ooo_cqe_drop;
u64 late_cqe;
};
struct mlx5e_rep_stats {
......
......@@ -372,7 +372,7 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
const struct mlx5e_tx_attr *attr,
const struct mlx5e_tx_wqe_attr *wqe_attr, u8 num_dma,
struct mlx5e_tx_wqe_info *wi, struct mlx5_wqe_ctrl_seg *cseg,
bool xmit_more)
struct mlx5_wqe_eth_seg *eseg, bool xmit_more)
{
struct mlx5_wq_cyc *wq = &sq->wq;
bool send_doorbell;
......@@ -394,11 +394,16 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
mlx5e_tx_check_stop(sq);
if (unlikely(sq->ptpsq)) {
if (unlikely(sq->ptpsq &&
(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) {
u8 metadata_index = be32_to_cpu(eseg->flow_table_metadata);
mlx5e_skb_cb_hwtstamp_init(skb);
mlx5e_skb_fifo_push(&sq->ptpsq->skb_fifo, skb);
mlx5e_ptpsq_track_metadata(sq->ptpsq, metadata_index);
mlx5e_ptp_metadata_map_put(&sq->ptpsq->metadata_map, skb,
metadata_index);
if (!netif_tx_queue_stopped(sq->txq) &&
!mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo)) {
mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq)) {
netif_tx_stop_queue(sq->txq);
sq->stats->stopped++;
}
......@@ -483,13 +488,16 @@ mlx5e_sq_xmit_wqe(struct mlx5e_txqsq *sq, struct sk_buff *skb,
if (unlikely(num_dma < 0))
goto err_drop;
mlx5e_txwqe_complete(sq, skb, attr, wqe_attr, num_dma, wi, cseg, xmit_more);
mlx5e_txwqe_complete(sq, skb, attr, wqe_attr, num_dma, wi, cseg, eseg, xmit_more);
return;
err_drop:
stats->dropped++;
dev_kfree_skb_any(skb);
if (unlikely(sq->ptpsq && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
mlx5e_ptp_metadata_fifo_push(&sq->ptpsq->metadata_freelist,
be32_to_cpu(eseg->flow_table_metadata));
mlx5e_tx_flush(sq);
}
......@@ -645,9 +653,9 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq)
static void mlx5e_cqe_ts_id_eseg(struct mlx5e_ptpsq *ptpsq, struct sk_buff *skb,
struct mlx5_wqe_eth_seg *eseg)
{
if (ptpsq->ts_cqe_ctr_mask && unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
eseg->flow_table_metadata = cpu_to_be32(ptpsq->skb_fifo_pc &
ptpsq->ts_cqe_ctr_mask);
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
eseg->flow_table_metadata =
cpu_to_be32(mlx5e_ptp_metadata_fifo_pop(&ptpsq->metadata_freelist));
}
static void mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
......@@ -766,7 +774,7 @@ void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq)
{
if (netif_tx_queue_stopped(sq->txq) &&
mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
mlx5e_ptpsq_fifo_has_room(sq) &&
!mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq) &&
!test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
netif_tx_wake_queue(sq->txq);
sq->stats->wake++;
......@@ -1031,7 +1039,7 @@ void mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
if (unlikely(num_dma < 0))
goto err_drop;
mlx5e_txwqe_complete(sq, skb, &attr, &wqe_attr, num_dma, wi, cseg, xmit_more);
mlx5e_txwqe_complete(sq, skb, &attr, &wqe_attr, num_dma, wi, cseg, eseg, xmit_more);
return;
......
......@@ -535,6 +535,28 @@ esw_src_port_rewrite_supported(struct mlx5_eswitch *esw)
MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level);
}
static bool
esw_dests_to_vf_pf_vports(struct mlx5_flow_destination *dests, int max_dest)
{
bool vf_dest = false, pf_dest = false;
int i;
for (i = 0; i < max_dest; i++) {
if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
continue;
if (dests[i].vport.num == MLX5_VPORT_UPLINK)
pf_dest = true;
else
vf_dest = true;
if (vf_dest && pf_dest)
return true;
}
return false;
}
static int
esw_setup_dests(struct mlx5_flow_destination *dest,
struct mlx5_flow_act *flow_act,
......@@ -671,6 +693,15 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
rule = ERR_PTR(err);
goto err_create_goto_table;
}
/* Header rewrite with combined wire+loopback in FDB is not allowed */
if ((flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) &&
esw_dests_to_vf_pf_vports(dest, i)) {
esw_warn(esw->dev,
"FDB: Header rewrite with forwarding to both PF and VF is not allowed\n");
rule = ERR_PTR(-EINVAL);
goto err_esw_get;
}
}
if (esw_attr->decap_pkt_reformat)
......
......@@ -143,90 +143,86 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
{
int err;
err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_GENERAL, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
if (MLX5_CAP_GEN(dev, port_selection_cap)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_PORT_SELECTION);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_PORT_SELECTION, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, hca_cap_2)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL_2);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_GENERAL_2, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, eth_net_offloads)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ETHERNET_OFFLOADS);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ETHERNET_OFFLOADS,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_IPOIB_ENHANCED_OFFLOADS);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_IPOIB_ENHANCED_OFFLOADS,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, pg)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ODP);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ODP, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, atomic)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ATOMIC);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ATOMIC, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, roce)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ROCE);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ROCE, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, nic_flow_table) ||
MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_FLOW_TABLE, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_ESWITCH_MANAGER(dev)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH_FLOW_TABLE);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ESWITCH_FLOW_TABLE,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, vector_calc)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_VECTOR_CALC);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ESWITCH, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, qos)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_QOS);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_QOS, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, debug))
mlx5_core_get_caps(dev, MLX5_CAP_DEBUG);
mlx5_core_get_caps_mode(dev, MLX5_CAP_DEBUG, HCA_CAP_OPMOD_GET_CUR);
if (MLX5_CAP_GEN(dev, pcam_reg))
mlx5_get_pcam_reg(dev);
if (MLX5_CAP_GEN(dev, mcam_reg)) {
mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_FIRST_128);
mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9080_0x90FF);
mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9100_0x917F);
}
......@@ -234,57 +230,52 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
mlx5_get_qcam_reg(dev);
if (MLX5_CAP_GEN(dev, device_memory)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_MEM);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_DEV_MEM, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, event_cap)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_EVENT);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_DEV_EVENT, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, tls_tx) || MLX5_CAP_GEN(dev, tls_rx)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_TLS);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_TLS, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN_64(dev, general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_VDPA_EMULATION, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, ipsec_offload)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_IPSEC);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_IPSEC, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, crypto)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_CRYPTO);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, shampo)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_SHAMPO);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_CRYPTO, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN_64(dev, general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_MACSEC_OFFLOAD) {
err = mlx5_core_get_caps(dev, MLX5_CAP_MACSEC);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_MACSEC, HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, adv_virtualization)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ADV_VIRTUALIZATION);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ADV_VIRTUALIZATION,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
......
......@@ -127,17 +127,23 @@ static int mlx5_fw_reset_get_reset_state_err(struct mlx5_core_dev *dev,
if (mlx5_reg_mfrl_query(dev, NULL, NULL, &reset_state))
goto out;
if (!reset_state)
return 0;
switch (reset_state) {
case MLX5_MFRL_REG_RESET_STATE_IN_NEGOTIATION:
case MLX5_MFRL_REG_RESET_STATE_RESET_IN_PROGRESS:
NL_SET_ERR_MSG_MOD(extack, "Sync reset was already triggered");
NL_SET_ERR_MSG_MOD(extack, "Sync reset still in progress");
return -EBUSY;
case MLX5_MFRL_REG_RESET_STATE_TIMEOUT:
NL_SET_ERR_MSG_MOD(extack, "Sync reset got timeout");
case MLX5_MFRL_REG_RESET_STATE_NEG_TIMEOUT:
NL_SET_ERR_MSG_MOD(extack, "Sync reset negotiation timeout");
return -ETIMEDOUT;
case MLX5_MFRL_REG_RESET_STATE_NACK:
NL_SET_ERR_MSG_MOD(extack, "One of the hosts disabled reset");
return -EPERM;
case MLX5_MFRL_REG_RESET_STATE_UNLOAD_TIMEOUT:
NL_SET_ERR_MSG_MOD(extack, "Sync reset unload timeout");
return -ETIMEDOUT;
}
out:
......@@ -151,7 +157,7 @@ int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
u32 out[MLX5_ST_SZ_DW(mfrl_reg)] = {};
u32 in[MLX5_ST_SZ_DW(mfrl_reg)] = {};
int err;
int err, rst_res;
set_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags);
......@@ -164,13 +170,34 @@ int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
return 0;
clear_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags);
if (err == -EREMOTEIO && MLX5_CAP_MCAM_FEATURE(dev, reset_state))
return mlx5_fw_reset_get_reset_state_err(dev, extack);
if (err == -EREMOTEIO && MLX5_CAP_MCAM_FEATURE(dev, reset_state)) {
rst_res = mlx5_fw_reset_get_reset_state_err(dev, extack);
return rst_res ? rst_res : err;
}
NL_SET_ERR_MSG_MOD(extack, "Sync reset command failed");
return mlx5_cmd_check(dev, err, in, out);
}
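/* After a sync reset, verify with FW that the reset flow really completed;
 * returns 0 on success, otherwise an error or the non-zero FW reset state.
 */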
int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev,
struct netlink_ext_ack *extack)
{
u8 rst_state;
int err;
err = mlx5_fw_reset_get_reset_state_err(dev, extack);
if (err)
return err;
rst_state = mlx5_get_fw_rst_state(dev);
if (!rst_state)
return 0;
mlx5_core_err(dev, "Sync reset did not complete, state=%d\n", rst_state);
NL_SET_ERR_MSG_MOD(extack, "Sync reset did not complete successfully");
return rst_state;
}
int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev)
{
return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL0, 0, 0, false);
......
......@@ -12,6 +12,8 @@ int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev);
int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev);
int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev,
struct netlink_ext_ack *extack);
void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);
void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev);
void mlx5_drain_fw_reset(struct mlx5_core_dev *dev);
......
......@@ -361,9 +361,8 @@ void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *dev)
}
EXPORT_SYMBOL(mlx5_core_uplink_netdev_event_replay);
static int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev,
enum mlx5_cap_type cap_type,
enum mlx5_cap_mode cap_mode)
int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type,
enum mlx5_cap_mode cap_mode)
{
u8 in[MLX5_ST_SZ_BYTES(query_hca_cap_in)];
int out_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
......@@ -1620,21 +1619,24 @@ static int mlx5_query_hca_caps_light(struct mlx5_core_dev *dev)
return err;
if (MLX5_CAP_GEN(dev, eth_net_offloads)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_ETHERNET_OFFLOADS);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ETHERNET_OFFLOADS,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN(dev, nic_flow_table) ||
MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_FLOW_TABLE,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
if (MLX5_CAP_GEN_64(dev, general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION);
err = mlx5_core_get_caps_mode(dev, MLX5_CAP_VDPA_EMULATION,
HCA_CAP_OPMOD_GET_CUR);
if (err)
return err;
}
......@@ -1714,7 +1716,6 @@ static const int types[] = {
MLX5_CAP_FLOW_TABLE,
MLX5_CAP_ESWITCH_FLOW_TABLE,
MLX5_CAP_ESWITCH,
MLX5_CAP_VECTOR_CALC,
MLX5_CAP_QOS,
MLX5_CAP_DEBUG,
MLX5_CAP_DEV_MEM,
......@@ -1723,7 +1724,6 @@ static const int types[] = {
MLX5_CAP_VDPA_EMULATION,
MLX5_CAP_IPSEC,
MLX5_CAP_PORT_SELECTION,
MLX5_CAP_DEV_SHAMPO,
MLX5_CAP_MACSEC,
MLX5_CAP_ADV_VIRTUALIZATION,
MLX5_CAP_CRYPTO,
......
......@@ -174,6 +174,9 @@ static inline int mlx5_flexible_inlen(struct mlx5_core_dev *dev, size_t fixed,
#define MLX5_FLEXIBLE_INLEN(dev, fixed, item_size, num_items) \
mlx5_flexible_inlen(dev, fixed, item_size, num_items, __func__, __LINE__)
int mlx5_core_get_caps(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type);
int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type,
enum mlx5_cap_mode cap_mode);
int mlx5_query_hca_caps(struct mlx5_core_dev *dev);
int mlx5_query_board_id(struct mlx5_core_dev *dev);
int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num);
......
......@@ -129,7 +129,7 @@ static void mlx5_sf_dev_add(struct mlx5_core_dev *dev, u16 sf_index, u16 fn_id,
err = auxiliary_device_add(&sf_dev->adev);
if (err) {
put_device(&sf_dev->adev.dev);
auxiliary_device_uninit(&sf_dev->adev);
goto add_err;
}
......@@ -167,7 +167,7 @@ mlx5_sf_dev_state_change_handler(struct notifier_block *nb, unsigned long event_
if (!max_functions)
return 0;
base_id = MLX5_CAP_GEN(table->dev, sf_base_id);
base_id = mlx5_sf_start_function_id(table->dev);
if (event->function_id < base_id || event->function_id >= (base_id + max_functions))
return 0;
......@@ -185,7 +185,7 @@ mlx5_sf_dev_state_change_handler(struct notifier_block *nb, unsigned long event_
mlx5_sf_dev_del(table->dev, sf_dev, sf_index);
else
mlx5_core_err(table->dev,
"SF DEV: teardown state for invalid dev index=%d fn_id=0x%x\n",
"SF DEV: teardown state for invalid dev index=%d sfnum=0x%x\n",
sf_index, event->sw_function_id);
break;
case MLX5_VHCA_STATE_ACTIVE:
......@@ -209,7 +209,7 @@ static int mlx5_sf_dev_vhca_arm_all(struct mlx5_sf_dev_table *table)
int i;
max_functions = mlx5_sf_max_functions(dev);
function_id = MLX5_CAP_GEN(dev, sf_base_id);
function_id = mlx5_sf_start_function_id(dev);
/* Arm the vhca context as the vhca event notifier */
for (i = 0; i < max_functions; i++) {
err = mlx5_vhca_event_arm(dev, function_id);
......@@ -234,7 +234,7 @@ static void mlx5_sf_dev_add_active_work(struct work_struct *work)
int i;
max_functions = mlx5_sf_max_functions(dev);
function_id = MLX5_CAP_GEN(dev, sf_base_id);
function_id = mlx5_sf_start_function_id(dev);
for (i = 0; i < max_functions; i++, function_id++) {
if (table->stop_active_wq)
return;
......@@ -299,7 +299,7 @@ void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev)
unsigned int max_sfs;
int err;
if (!mlx5_sf_dev_supported(dev) || !mlx5_vhca_event_supported(dev))
if (!mlx5_sf_dev_supported(dev))
return;
table = kzalloc(sizeof(*table), GFP_KERNEL);
......
......@@ -9,6 +9,7 @@
#include "mlx5_core.h"
#include "eswitch.h"
#include "diag/sf_tracepoint.h"
#include "devlink.h"
struct mlx5_sf_hw {
u32 usr_sfnum;
......@@ -243,31 +244,61 @@ static void mlx5_sf_hw_table_hwc_cleanup(struct mlx5_sf_hwc_table *hwc)
kfree(hwc->sfs);
}
static void mlx5_sf_hw_table_res_unregister(struct mlx5_core_dev *dev)
{
devl_resources_unregister(priv_to_devlink(dev));
}
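/* Expose the maximum possible local and external SF counts as devlink
 * resources so they can be queried via 'devlink resource show'.
 */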
static int mlx5_sf_hw_table_res_register(struct mlx5_core_dev *dev, u16 max_fn,
u16 max_ext_fn)
{
struct devlink_resource_size_params size_params;
struct devlink *devlink = priv_to_devlink(dev);
int err;
devlink_resource_size_params_init(&size_params, max_fn, max_fn, 1,
DEVLINK_RESOURCE_UNIT_ENTRY);
err = devl_resource_register(devlink, "max_local_SFs", max_fn, MLX5_DL_RES_MAX_LOCAL_SFS,
DEVLINK_RESOURCE_ID_PARENT_TOP, &size_params);
if (err)
return err;
devlink_resource_size_params_init(&size_params, max_ext_fn, max_ext_fn, 1,
DEVLINK_RESOURCE_UNIT_ENTRY);
return devl_resource_register(devlink, "max_external_SFs", max_ext_fn,
MLX5_DL_RES_MAX_EXTERNAL_SFS, DEVLINK_RESOURCE_ID_PARENT_TOP,
&size_params);
}
int mlx5_sf_hw_table_init(struct mlx5_core_dev *dev)
{
struct mlx5_sf_hw_table *table;
u16 max_ext_fn = 0;
u16 ext_base_id = 0;
u16 max_fn = 0;
u16 base_id;
u16 max_fn;
int err;
if (!mlx5_vhca_event_supported(dev))
return 0;
if (mlx5_sf_supported(dev))
max_fn = mlx5_sf_max_functions(dev);
max_fn = mlx5_sf_max_functions(dev);
err = mlx5_esw_sf_max_hpf_functions(dev, &max_ext_fn, &ext_base_id);
if (err)
return err;
if (mlx5_sf_hw_table_res_register(dev, max_fn, max_ext_fn))
mlx5_core_dbg(dev, "failed to register max SFs resources");
if (!max_fn && !max_ext_fn)
return 0;
table = kzalloc(sizeof(*table), GFP_KERNEL);
if (!table)
return -ENOMEM;
if (!table) {
err = -ENOMEM;
goto alloc_err;
}
mutex_init(&table->table_lock);
table->dev = dev;
......@@ -291,6 +322,8 @@ int mlx5_sf_hw_table_init(struct mlx5_core_dev *dev)
table_err:
mutex_destroy(&table->table_lock);
kfree(table);
alloc_err:
mlx5_sf_hw_table_res_unregister(dev);
return err;
}
......@@ -299,12 +332,14 @@ void mlx5_sf_hw_table_cleanup(struct mlx5_core_dev *dev)
struct mlx5_sf_hw_table *table = dev->priv.sf_hw_table;
if (!table)
return;
goto res_unregister;
mutex_destroy(&table->table_lock);
mlx5_sf_hw_table_hwc_cleanup(&table->hwc[MLX5_SF_HWC_EXTERNAL]);
mlx5_sf_hw_table_hwc_cleanup(&table->hwc[MLX5_SF_HWC_LOCAL]);
mutex_destroy(&table->table_lock);
kfree(table);
res_unregister:
mlx5_sf_hw_table_res_unregister(dev);
}
static int mlx5_sf_hw_vhca_event(struct notifier_block *nb, unsigned long opcode, void *data)
......
......@@ -1208,9 +1208,7 @@ enum mlx5_cap_type {
MLX5_CAP_FLOW_TABLE,
MLX5_CAP_ESWITCH_FLOW_TABLE,
MLX5_CAP_ESWITCH,
MLX5_CAP_RESERVED,
MLX5_CAP_VECTOR_CALC,
MLX5_CAP_QOS,
MLX5_CAP_QOS = 0xc,
MLX5_CAP_DEBUG,
MLX5_CAP_RESERVED_14,
MLX5_CAP_DEV_MEM,
......@@ -1220,7 +1218,6 @@ enum mlx5_cap_type {
MLX5_CAP_DEV_EVENT = 0x14,
MLX5_CAP_IPSEC,
MLX5_CAP_CRYPTO = 0x1a,
MLX5_CAP_DEV_SHAMPO = 0x1d,
MLX5_CAP_MACSEC = 0x1f,
MLX5_CAP_GENERAL_2 = 0x20,
MLX5_CAP_PORT_SELECTION = 0x25,
......@@ -1239,7 +1236,6 @@ enum mlx5_pcam_feature_groups {
enum mlx5_mcam_reg_groups {
MLX5_MCAM_REGS_FIRST_128 = 0x0,
MLX5_MCAM_REGS_0x9080_0x90FF = 0x1,
MLX5_MCAM_REGS_0x9100_0x917F = 0x2,
MLX5_MCAM_REGS_NUM = 0x3,
};
......@@ -1279,10 +1275,6 @@ enum mlx5_qcam_feature_groups {
MLX5_GET(per_protocol_networking_offload_caps,\
mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS]->cur, cap)
#define MLX5_CAP_ETH_MAX(mdev, cap) \
MLX5_GET(per_protocol_networking_offload_caps,\
mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS]->max, cap)
#define MLX5_CAP_IPOIB_ENHANCED(mdev, cap) \
MLX5_GET(per_protocol_networking_offload_caps,\
mdev->caps.hca[MLX5_CAP_IPOIB_ENHANCED_OFFLOADS]->cur, cap)
......@@ -1305,77 +1297,40 @@ enum mlx5_qcam_feature_groups {
#define MLX5_CAP64_FLOWTABLE(mdev, cap) \
MLX5_GET64(flow_table_nic_cap, (mdev)->caps.hca[MLX5_CAP_FLOW_TABLE]->cur, cap)
#define MLX5_CAP_FLOWTABLE_MAX(mdev, cap) \
MLX5_GET(flow_table_nic_cap, mdev->caps.hca[MLX5_CAP_FLOW_TABLE]->max, cap)
#define MLX5_CAP_FLOWTABLE_NIC_RX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.cap)
#define MLX5_CAP_FLOWTABLE_NIC_RX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive.cap)
#define MLX5_CAP_FLOWTABLE_NIC_TX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit.cap)
#define MLX5_CAP_FLOWTABLE_NIC_TX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit.cap)
#define MLX5_CAP_FLOWTABLE_SNIFFER_RX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive_sniffer.cap)
#define MLX5_CAP_FLOWTABLE_SNIFFER_RX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive_sniffer.cap)
#define MLX5_CAP_FLOWTABLE_SNIFFER_TX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit_sniffer.cap)
#define MLX5_CAP_FLOWTABLE_SNIFFER_TX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit_sniffer.cap)
#define MLX5_CAP_FLOWTABLE_RDMA_RX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive_rdma.cap)
#define MLX5_CAP_FLOWTABLE_RDMA_RX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive_rdma.cap)
#define MLX5_CAP_FLOWTABLE_RDMA_TX(mdev, cap) \
MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit_rdma.cap)
#define MLX5_CAP_FLOWTABLE_RDMA_TX_MAX(mdev, cap) \
MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit_rdma.cap)
#define MLX5_CAP_ESW_FLOWTABLE(mdev, cap) \
MLX5_GET(flow_table_eswitch_cap, \
mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->cur, cap)
#define MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, cap) \
MLX5_GET(flow_table_eswitch_cap, \
mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->max, cap)
#define MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_nic_esw_fdb.cap)
#define MLX5_CAP_ESW_FLOWTABLE_FDB_MAX(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_nic_esw_fdb.cap)
#define MLX5_CAP_ESW_EGRESS_ACL(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_egress.cap)
#define MLX5_CAP_ESW_EGRESS_ACL_MAX(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_egress.cap)
#define MLX5_CAP_ESW_INGRESS_ACL(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_ingress.cap)
#define MLX5_CAP_ESW_INGRESS_ACL_MAX(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_ingress.cap)
#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE(mdev, ft_field_support_2_esw_fdb.cap)
#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2_MAX(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, ft_field_support_2_esw_fdb.cap)
#define MLX5_CAP_ESW(mdev, cap) \
MLX5_GET(e_switch_cap, \
mdev->caps.hca[MLX5_CAP_ESWITCH]->cur, cap)
......@@ -1384,10 +1339,6 @@ enum mlx5_qcam_feature_groups {
MLX5_GET64(flow_table_eswitch_cap, \
(mdev)->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->cur, cap)
#define MLX5_CAP_ESW_MAX(mdev, cap) \
MLX5_GET(e_switch_cap, \
mdev->caps.hca[MLX5_CAP_ESWITCH]->max, cap)
#define MLX5_CAP_PORT_SELECTION(mdev, cap) \
MLX5_GET(port_selection_cap, \
mdev->caps.hca[MLX5_CAP_PORT_SELECTION]->cur, cap)
......@@ -1400,26 +1351,15 @@ enum mlx5_qcam_feature_groups {
MLX5_GET(adv_virtualization_cap, \
mdev->caps.hca[MLX5_CAP_ADV_VIRTUALIZATION]->cur, cap)
#define MLX5_CAP_ADV_VIRTUALIZATION_MAX(mdev, cap) \
MLX5_GET(adv_virtualization_cap, \
mdev->caps.hca[MLX5_CAP_ADV_VIRTUALIZATION]->max, cap)
#define MLX5_CAP_FLOWTABLE_PORT_SELECTION(mdev, cap) \
MLX5_CAP_PORT_SELECTION(mdev, flow_table_properties_port_selection.cap)
#define MLX5_CAP_FLOWTABLE_PORT_SELECTION_MAX(mdev, cap) \
MLX5_CAP_PORT_SELECTION_MAX(mdev, flow_table_properties_port_selection.cap)
#define MLX5_CAP_ODP(mdev, cap)\
MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP]->cur, cap)
#define MLX5_CAP_ODP_MAX(mdev, cap)\
MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP]->max, cap)
#define MLX5_CAP_VECTOR_CALC(mdev, cap) \
MLX5_GET(vector_calc_cap, \
mdev->caps.hca[MLX5_CAP_VECTOR_CALC]->cur, cap)
#define MLX5_CAP_QOS(mdev, cap)\
MLX5_GET(qos_cap, mdev->caps.hca[MLX5_CAP_QOS]->cur, cap)
......@@ -1436,10 +1376,6 @@ enum mlx5_qcam_feature_groups {
MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_FIRST_128], \
mng_access_reg_cap_mask.access_regs.reg)
#define MLX5_CAP_MCAM_REG1(mdev, reg) \
MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9080_0x90FF], \
mng_access_reg_cap_mask.access_regs1.reg)
#define MLX5_CAP_MCAM_REG2(mdev, reg) \
MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9100_0x917F], \
mng_access_reg_cap_mask.access_regs2.reg)
......@@ -1485,9 +1421,6 @@ enum mlx5_qcam_feature_groups {
#define MLX5_CAP_CRYPTO(mdev, cap)\
MLX5_GET(crypto_cap, (mdev)->caps.hca[MLX5_CAP_CRYPTO]->cur, cap)
#define MLX5_CAP_DEV_SHAMPO(mdev, cap)\
MLX5_GET(shampo_cap, mdev->caps.hca_cur[MLX5_CAP_DEV_SHAMPO], cap)
#define MLX5_CAP_MACSEC(mdev, cap)\
MLX5_GET(macsec_cap, (mdev)->caps.hca[MLX5_CAP_MACSEC]->cur, cap)
......
......@@ -1022,7 +1022,6 @@ bool mlx5_cmd_is_down(struct mlx5_core_dev *dev);
void mlx5_core_uplink_netdev_set(struct mlx5_core_dev *mdev, struct net_device *netdev);
void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *mdev);
int mlx5_core_get_caps(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type);
void mlx5_health_cleanup(struct mlx5_core_dev *dev);
int mlx5_health_init(struct mlx5_core_dev *dev);
void mlx5_start_health_poll(struct mlx5_core_dev *dev);
......
......@@ -1314,33 +1314,6 @@ struct mlx5_ifc_odp_cap_bits {
u8 reserved_at_120[0x6E0];
};
struct mlx5_ifc_calc_op {
u8 reserved_at_0[0x10];
u8 reserved_at_10[0x9];
u8 op_swap_endianness[0x1];
u8 op_min[0x1];
u8 op_xor[0x1];
u8 op_or[0x1];
u8 op_and[0x1];
u8 op_max[0x1];
u8 op_add[0x1];
};
struct mlx5_ifc_vector_calc_cap_bits {
u8 calc_matrix[0x1];
u8 reserved_at_1[0x1f];
u8 reserved_at_20[0x8];
u8 max_vec_count[0x8];
u8 reserved_at_30[0xd];
u8 max_chunk_size[0x3];
struct mlx5_ifc_calc_op calc0;
struct mlx5_ifc_calc_op calc1;
struct mlx5_ifc_calc_op calc2;
struct mlx5_ifc_calc_op calc3;
u8 reserved_at_c0[0x720];
};
struct mlx5_ifc_tls_cap_bits {
u8 tls_1_2_aes_gcm_128[0x1];
u8 tls_1_3_aes_gcm_128[0x1];
......@@ -3435,20 +3408,6 @@ struct mlx5_ifc_roce_addr_layout_bits {
u8 reserved_at_e0[0x20];
};
struct mlx5_ifc_shampo_cap_bits {
u8 reserved_at_0[0x3];
u8 shampo_log_max_reservation_size[0x5];
u8 reserved_at_8[0x3];
u8 shampo_log_min_reservation_size[0x5];
u8 shampo_min_mss_size[0x10];
u8 reserved_at_20[0x3];
u8 shampo_max_log_headers_entry_size[0x5];
u8 reserved_at_28[0x18];
u8 reserved_at_40[0x7c0];
};
struct mlx5_ifc_crypto_cap_bits {
u8 reserved_at_0[0x3];
u8 synchronize_dek[0x1];
......@@ -3484,14 +3443,12 @@ union mlx5_ifc_hca_cap_union_bits {
struct mlx5_ifc_flow_table_eswitch_cap_bits flow_table_eswitch_cap;
struct mlx5_ifc_e_switch_cap_bits e_switch_cap;
struct mlx5_ifc_port_selection_cap_bits port_selection_cap;
struct mlx5_ifc_vector_calc_cap_bits vector_calc_cap;
struct mlx5_ifc_qos_cap_bits qos_cap;
struct mlx5_ifc_debug_cap_bits debug_cap;
struct mlx5_ifc_fpga_cap_bits fpga_cap;
struct mlx5_ifc_tls_cap_bits tls_cap;
struct mlx5_ifc_device_mem_cap_bits device_mem_cap;
struct mlx5_ifc_virtio_emulation_cap_bits virtio_emulation_cap;
struct mlx5_ifc_shampo_cap_bits shampo_cap;
struct mlx5_ifc_macsec_cap_bits macsec_cap;
struct mlx5_ifc_crypto_cap_bits crypto_cap;
u8 reserved_at_0[0x8000];
......@@ -10858,8 +10815,9 @@ enum {
MLX5_MFRL_REG_RESET_STATE_IDLE = 0,
MLX5_MFRL_REG_RESET_STATE_IN_NEGOTIATION = 1,
MLX5_MFRL_REG_RESET_STATE_RESET_IN_PROGRESS = 2,
MLX5_MFRL_REG_RESET_STATE_TIMEOUT = 3,
MLX5_MFRL_REG_RESET_STATE_NEG_TIMEOUT = 3,
MLX5_MFRL_REG_RESET_STATE_NACK = 4,
MLX5_MFRL_REG_RESET_STATE_UNLOAD_TIMEOUT = 5,
};
enum {
......