Commit 474bb1aa authored by Jakub Kicinski

Merge tag 'mlx5-updates-2024-09-02' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2024-08-29

HW-Managed Flow Steering in mlx5 driver

Yevgeny Kliteynik says:
=======================

1. Overview
-----------

ConnectX devices support packet matching, modification, and redirection.
This functionality is referred to as Flow Steering.
To configure a steering rule, the rule is written to device-owned
memory. This memory is accessed and cached by the device when processing
a packet.

The first implementation of Flow Steering was done in FW, and it is
referred to in the mlx5 driver as Device-Managed Flow Steering (DMFS).
Later we introduced SW-managed Flow Steering (SWS or SMFS), where the
driver writes directly to the device's configuration memory (ICM)
through an RC QP using RDMA operations (RDMA-read and RDMA-write), thus
achieving higher rates of rule insertion/deletion.

Now we introduce a new flow steering implementation: HW-Managed Flow
Steering (HWS or HMFS).

In this new approach, the driver configures steering rules directly
in HW using WQs with a special new type of WQE. This way we can
reach a higher rule insertion/deletion rate with much lower CPU
utilization compared to SWS.

The key benefits of HWS as opposed to SWS:
+ HW manages the steering decision tree
   - HW calculates CRC for each entry
   - HW handles tree hash collisions
   - HW & FW manage objects refcount
+ HW keeps cache coherency:
   - HW provides tree access locking and synchronization
   - HW provides notification on completion
+ Insertion rate isn’t affected by background traffic
   - Dedicated HW components that handle insertion

2. Performance
--------------

Measuring Connection Tracking with simple IPv4 flows w/o NAT, we
are able to get ~5 times more flows offloaded per second using HWS.

3. Configuration
----------------

Enabling HWS mode in the eswitch manager is done using the same
devlink param that is already used for switching between FW-managed
steering and SW-managed steering modes:

  # devlink dev param set pci/<PCI_ID> name flow_steering_mode cmode runtime value hmfs
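
The currently configured mode can be read back through the same param
(shown here for illustration; the actual <PCI_ID> depends on the setup):

  # devlink dev param show pci/<PCI_ID> name flow_steering_mode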

4. Upstream Submission
----------------------

HWS support consists of 3 main components:
+ Steering:
   - The lower layer that exposes HWS API to upper layers and implements
     all the management of flow steering building blocks
+ FS-Core
   - Implementation of the fs_hws layer to enable fs_core to use HWS instead
     of FW or SW steering
   - Create HW steering action pools to utilize the ability of HWS to
     share steering actions among different rules
   - Add support for configuring HWS mode through devlink command,
     similar to configuring SWS mode
+ Connection Tracking
   - Implementation of CT support for HW steering
   - Hooks up the CT ops for the new steering mode and uses the HWS API
     to implement connection tracking.

Because of the large number of patches, we need to perform the submission
in several separate patch series. This series is the first submission that
lays the groundwork for the next submissions, where an actual user of HWS
will be added.

5. Patches in this series
-------------------------

This patch series contains the implementation of the first bullet above.

=======================

* tag 'mlx5-updates-2024-09-02' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: HWS, added API and enabled HWS support
  net/mlx5: HWS, added send engine and context handling
  net/mlx5: HWS, added debug dump and internal headers
  net/mlx5: HWS, added backward-compatible API handling
  net/mlx5: HWS, added memory management handling
  net/mlx5: HWS, added vport handling
  net/mlx5: HWS, added modify header pattern and args handling
  net/mlx5: HWS, added FW commands handling
  net/mlx5: HWS, added matchers functionality
  net/mlx5: HWS, added definers handling
  net/mlx5: HWS, added rules handling
  net/mlx5: HWS, added tables handling
  net/mlx5: HWS, added actions handling
  net/mlx5: Added missing definitions in preparation for HW Steering
  net/mlx5: Added missing mlx5_ifc definition for HW Steering
====================

Link: https://patch.msgid.link/20240909181250.41596-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents ea403549 510f9f61
@@ -130,6 +130,9 @@ Enabling the driver and kconfig options
| Build support for software-managed steering in the NIC.
**CONFIG_MLX5_HW_STEERING=(y/n)**
| Build support for hardware-managed steering in the NIC.
**CONFIG_MLX5_TC_CT=(y/n)**
......
@@ -172,6 +172,16 @@ config MLX5_SW_STEERING
help
Build support for software-managed steering in the NIC.
config MLX5_HW_STEERING
bool "Mellanox Technologies hardware-managed steering"
depends on MLX5_CORE_EN && MLX5_ESWITCH
default y
help
Build support for Hardware-Managed Flow Steering (HMFS) in the NIC.
HMFS is a new approach to managing steering rules where STEs are
written to ICM by HW (as opposed to SW in software-managed steering),
which allows a higher rate of rule insertion.
config MLX5_SF
bool "Mellanox Technologies subfunction device support using auxiliary device"
depends on MLX5_CORE && MLX5_CORE_EN
......
@@ -119,6 +119,27 @@ mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o
steering/dr_action.o steering/fs_dr.o \
steering/dr_definer.o steering/dr_ptrn.o \
steering/dr_arg.o steering/dr_dbg.o lib/smfs.o
#
# HW Steering
#
mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/mlx5hws_cmd.o \
steering/hws/mlx5hws_context.o \
steering/hws/mlx5hws_pat_arg.o \
steering/hws/mlx5hws_buddy.o \
steering/hws/mlx5hws_pool.o \
steering/hws/mlx5hws_table.o \
steering/hws/mlx5hws_action.o \
steering/hws/mlx5hws_rule.o \
steering/hws/mlx5hws_matcher.o \
steering/hws/mlx5hws_send.o \
steering/hws/mlx5hws_definer.o \
steering/hws/mlx5hws_bwc.o \
steering/hws/mlx5hws_debug.o \
steering/hws/mlx5hws_vport.o \
steering/hws/mlx5hws_bwc_complex.o
#
# SF device
#
......
@@ -110,7 +110,9 @@ enum fs_flow_table_type {
FS_FT_RDMA_RX = 0X7,
FS_FT_RDMA_TX = 0X8,
FS_FT_PORT_SEL = 0X9,
FS_FT_MAX_TYPE = FS_FT_PORT_SEL,
FS_FT_FDB_RX = 0xa,
FS_FT_FDB_TX = 0xb,
FS_FT_MAX_TYPE = FS_FT_FDB_TX,
};
enum fs_flow_table_op_mod {
@@ -368,7 +370,9 @@ struct mlx5_flow_root_namespace *find_root(struct fs_node *node);
(type == FS_FT_RDMA_RX) ? MLX5_CAP_FLOWTABLE_RDMA_RX(mdev, cap) : \
(type == FS_FT_RDMA_TX) ? MLX5_CAP_FLOWTABLE_RDMA_TX(mdev, cap) : \
(type == FS_FT_PORT_SEL) ? MLX5_CAP_FLOWTABLE_PORT_SELECTION(mdev, cap) : \
(BUILD_BUG_ON_ZERO(FS_FT_PORT_SEL != FS_FT_MAX_TYPE))\
(type == FS_FT_FDB_RX) ? MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) : \
(type == FS_FT_FDB_TX) ? MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) : \
(BUILD_BUG_ON_ZERO(FS_FT_FDB_TX != FS_FT_MAX_TYPE))\
)
#endif
@@ -251,9 +251,9 @@ int mlx5dr_cmd_query_flow_table(struct mlx5_core_dev *dev,
output->level = MLX5_GET(query_flow_table_out, out, flow_table_context.level);
output->sw_owner_icm_root_1 = MLX5_GET64(query_flow_table_out, out,
flow_table_context.sw_owner_icm_root_1);
flow_table_context.sws.sw_owner_icm_root_1);
output->sw_owner_icm_root_0 = MLX5_GET64(query_flow_table_out, out,
flow_table_context.sw_owner_icm_root_0);
flow_table_context.sws.sw_owner_icm_root_0);
return 0;
}
@@ -480,15 +480,15 @@ int mlx5dr_cmd_create_flow_table(struct mlx5_core_dev *mdev,
*/
if (attr->table_type == MLX5_FLOW_TABLE_TYPE_NIC_RX) {
MLX5_SET64(flow_table_context, ft_mdev,
sw_owner_icm_root_0, attr->icm_addr_rx);
sws.sw_owner_icm_root_0, attr->icm_addr_rx);
} else if (attr->table_type == MLX5_FLOW_TABLE_TYPE_NIC_TX) {
MLX5_SET64(flow_table_context, ft_mdev,
sw_owner_icm_root_0, attr->icm_addr_tx);
sws.sw_owner_icm_root_0, attr->icm_addr_tx);
} else if (attr->table_type == MLX5_FLOW_TABLE_TYPE_FDB) {
MLX5_SET64(flow_table_context, ft_mdev,
sw_owner_icm_root_0, attr->icm_addr_rx);
sws.sw_owner_icm_root_0, attr->icm_addr_rx);
MLX5_SET64(flow_table_context, ft_mdev,
sw_owner_icm_root_1, attr->icm_addr_tx);
sws.sw_owner_icm_root_1, attr->icm_addr_tx);
}
}
......
# SPDX-License-Identifier: GPL-2.0-only
subdir-ccflags-y += -I$(src)/..
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_ACTION_H_
#define MLX5HWS_ACTION_H_
/* Max number of STEs needed for a rule (including match) */
#define MLX5HWS_ACTION_MAX_STE 20
/* Max number of internal subactions of ipv6_ext */
#define MLX5HWS_ACTION_IPV6_EXT_MAX_SA 4
enum mlx5hws_action_stc_idx {
MLX5HWS_ACTION_STC_IDX_CTRL = 0,
MLX5HWS_ACTION_STC_IDX_HIT = 1,
MLX5HWS_ACTION_STC_IDX_DW5 = 2,
MLX5HWS_ACTION_STC_IDX_DW6 = 3,
MLX5HWS_ACTION_STC_IDX_DW7 = 4,
MLX5HWS_ACTION_STC_IDX_MAX = 5,
/* STC Jumbo STE combo: CTR, Hit */
MLX5HWS_ACTION_STC_IDX_LAST_JUMBO_STE = 1,
/* STC combo1: CTR, SINGLE, DOUBLE, Hit */
MLX5HWS_ACTION_STC_IDX_LAST_COMBO1 = 3,
/* STC combo2: CTR, 3 x SINGLE, Hit */
MLX5HWS_ACTION_STC_IDX_LAST_COMBO2 = 4,
/* STC combo3: CTR, TRIPLE, Hit */
MLX5HWS_ACTION_STC_IDX_LAST_COMBO3 = 2,
};
enum mlx5hws_action_offset {
MLX5HWS_ACTION_OFFSET_DW0 = 0,
MLX5HWS_ACTION_OFFSET_DW5 = 5,
MLX5HWS_ACTION_OFFSET_DW6 = 6,
MLX5HWS_ACTION_OFFSET_DW7 = 7,
MLX5HWS_ACTION_OFFSET_HIT = 3,
MLX5HWS_ACTION_OFFSET_HIT_LSB = 4,
};
enum {
MLX5HWS_ACTION_DOUBLE_SIZE = 8,
MLX5HWS_ACTION_INLINE_DATA_SIZE = 4,
MLX5HWS_ACTION_HDR_LEN_L2_MACS = 12,
MLX5HWS_ACTION_HDR_LEN_L2_VLAN = 4,
MLX5HWS_ACTION_HDR_LEN_L2_ETHER = 2,
MLX5HWS_ACTION_HDR_LEN_L2 = (MLX5HWS_ACTION_HDR_LEN_L2_MACS +
MLX5HWS_ACTION_HDR_LEN_L2_ETHER),
MLX5HWS_ACTION_HDR_LEN_L2_W_VLAN = (MLX5HWS_ACTION_HDR_LEN_L2 +
MLX5HWS_ACTION_HDR_LEN_L2_VLAN),
MLX5HWS_ACTION_REFORMAT_DATA_SIZE = 64,
DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6,
DECAP_L3_NUM_ACTIONS_W_VLAN = 7,
};
enum mlx5hws_action_setter_flag {
ASF_SINGLE1 = 1 << 0,
ASF_SINGLE2 = 1 << 1,
ASF_SINGLE3 = 1 << 2,
ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3,
ASF_TRIPLE = ASF_SINGLE1 | ASF_DOUBLE,
ASF_INSERT = 1 << 3,
ASF_REMOVE = 1 << 4,
ASF_MODIFY = 1 << 5,
ASF_CTR = 1 << 6,
ASF_HIT = 1 << 7,
};
struct mlx5hws_action_default_stc {
struct mlx5hws_pool_chunk nop_ctr;
struct mlx5hws_pool_chunk nop_dw5;
struct mlx5hws_pool_chunk nop_dw6;
struct mlx5hws_pool_chunk nop_dw7;
struct mlx5hws_pool_chunk default_hit;
u32 refcount;
};
struct mlx5hws_action_shared_stc {
struct mlx5hws_pool_chunk stc_chunk;
u32 refcount;
};
struct mlx5hws_actions_apply_data {
struct mlx5hws_send_engine *queue;
struct mlx5hws_rule_action *rule_action;
__be32 *wqe_data;
struct mlx5hws_wqe_gta_ctrl_seg *wqe_ctrl;
u32 jump_to_action_stc;
struct mlx5hws_context_common_res *common_res;
enum mlx5hws_table_type tbl_type;
u32 next_direct_idx;
u8 require_dep;
};
struct mlx5hws_actions_wqe_setter;
typedef void (*mlx5hws_action_setter_fp)(struct mlx5hws_actions_apply_data *apply,
struct mlx5hws_actions_wqe_setter *setter);
struct mlx5hws_actions_wqe_setter {
mlx5hws_action_setter_fp set_single;
mlx5hws_action_setter_fp set_double;
mlx5hws_action_setter_fp set_triple;
mlx5hws_action_setter_fp set_hit;
mlx5hws_action_setter_fp set_ctr;
u8 idx_single;
u8 idx_double;
u8 idx_triple;
u8 idx_ctr;
u8 idx_hit;
u8 stage_idx;
u8 flags;
};
struct mlx5hws_action_template {
struct mlx5hws_actions_wqe_setter setters[MLX5HWS_ACTION_MAX_STE];
enum mlx5hws_action_type *action_type_arr;
u8 num_of_action_stes;
u8 num_actions;
u8 only_term;
};
struct mlx5hws_action {
u8 type;
u8 flags;
struct mlx5hws_context *ctx;
union {
struct {
struct mlx5hws_pool_chunk stc[MLX5HWS_TABLE_TYPE_MAX];
union {
struct {
u32 pat_id;
u32 arg_id;
__be64 single_action;
u32 nope_locations;
u8 num_of_patterns;
u8 single_action_type;
u8 num_of_actions;
u8 max_num_of_actions;
u8 require_reparse;
} modify_header;
struct {
u32 arg_id;
u32 header_size;
u16 max_hdr_sz;
u8 num_of_hdrs;
u8 anchor;
u8 e_anchor;
u8 offset;
bool encap;
u8 require_reparse;
} reformat;
struct {
u32 obj_id;
u8 return_reg_id;
} aso;
struct {
u16 vport_num;
u16 esw_owner_vhca_id;
bool esw_owner_vhca_id_valid;
} vport;
struct {
u32 obj_id;
} dest_obj;
struct {
struct mlx5hws_cmd_forward_tbl *fw_island;
size_t num_dest;
struct mlx5hws_cmd_set_fte_dest *dest_list;
} dest_array;
struct {
u8 type;
u8 start_anchor;
u8 end_anchor;
u8 num_of_words;
bool decap;
} insert_hdr;
struct {
/* PRM start anchor from which header will be removed */
u8 anchor;
/* Header remove offset in bytes, from the start
* anchor to the location where remove header starts.
*/
u8 offset;
/* Indicates the removed header size in bytes */
size_t size;
} remove_header;
struct {
struct mlx5hws_matcher_action_ste *table_ste;
struct mlx5hws_action *hit_ft_action;
struct mlx5hws_definer *definer;
} range;
};
};
struct ibv_flow_action *flow_action;
u32 obj_id;
struct ibv_qp *qp;
};
};
const char *mlx5hws_action_type_to_str(enum mlx5hws_action_type action_type);
int mlx5hws_action_get_default_stc(struct mlx5hws_context *ctx,
u8 tbl_type);
void mlx5hws_action_put_default_stc(struct mlx5hws_context *ctx,
u8 tbl_type);
void mlx5hws_action_prepare_decap_l3_data(u8 *src, u8 *dst,
u16 num_of_actions);
int mlx5hws_action_template_process(struct mlx5hws_action_template *at);
bool mlx5hws_action_check_combo(struct mlx5hws_context *ctx,
enum mlx5hws_action_type *user_actions,
enum mlx5hws_table_type table_type);
int mlx5hws_action_alloc_single_stc(struct mlx5hws_context *ctx,
struct mlx5hws_cmd_stc_modify_attr *stc_attr,
u32 table_type,
struct mlx5hws_pool_chunk *stc);
void mlx5hws_action_free_single_stc(struct mlx5hws_context *ctx,
u32 table_type,
struct mlx5hws_pool_chunk *stc);
static inline void
mlx5hws_action_setter_default_single(struct mlx5hws_actions_apply_data *apply,
struct mlx5hws_actions_wqe_setter *setter)
{
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW5] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW5] =
htonl(apply->common_res->default_stc->nop_dw5.offset);
}
static inline void
mlx5hws_action_setter_default_double(struct mlx5hws_actions_apply_data *apply,
struct mlx5hws_actions_wqe_setter *setter)
{
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW6] = 0;
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW7] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW6] =
htonl(apply->common_res->default_stc->nop_dw6.offset);
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW7] =
htonl(apply->common_res->default_stc->nop_dw7.offset);
}
static inline void
mlx5hws_action_setter_default_ctr(struct mlx5hws_actions_apply_data *apply,
struct mlx5hws_actions_wqe_setter *setter)
{
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW0] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_CTRL] =
htonl(apply->common_res->default_stc->nop_ctr.offset);
}
static inline void
mlx5hws_action_apply_setter(struct mlx5hws_actions_apply_data *apply,
struct mlx5hws_actions_wqe_setter *setter,
bool is_jumbo)
{
u8 num_of_actions;
/* Set control counter */
if (setter->set_ctr)
setter->set_ctr(apply, setter);
else
mlx5hws_action_setter_default_ctr(apply, setter);
if (!is_jumbo) {
if (unlikely(setter->set_triple)) {
/* Set triple on match */
setter->set_triple(apply, setter);
num_of_actions = MLX5HWS_ACTION_STC_IDX_LAST_COMBO3;
} else {
/* Set single and double on match */
if (setter->set_single)
setter->set_single(apply, setter);
else
mlx5hws_action_setter_default_single(apply, setter);
if (setter->set_double)
setter->set_double(apply, setter);
else
mlx5hws_action_setter_default_double(apply, setter);
num_of_actions = setter->set_double ?
MLX5HWS_ACTION_STC_IDX_LAST_COMBO1 :
MLX5HWS_ACTION_STC_IDX_LAST_COMBO2;
}
} else {
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW5] = 0;
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW6] = 0;
apply->wqe_data[MLX5HWS_ACTION_OFFSET_DW7] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW5] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW6] = 0;
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_DW7] = 0;
num_of_actions = MLX5HWS_ACTION_STC_IDX_LAST_JUMBO_STE;
}
/* Set next/final hit action */
setter->set_hit(apply, setter);
/* Set number of actions */
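/* num_of_actions is carried in the top 3 bits of the CTRL STC index word */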
apply->wqe_ctrl->stc_ix[MLX5HWS_ACTION_STC_IDX_CTRL] |=
htonl(num_of_actions << 29);
}
#endif /* MLX5HWS_ACTION_H_ */
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#include "mlx5hws_internal.h"
#include "mlx5hws_buddy.h"
static int hws_buddy_init(struct mlx5hws_buddy_mem *buddy, u32 max_order)
{
int i, s, ret = 0;
buddy->max_order = max_order;
buddy->bitmap = kcalloc(buddy->max_order + 1,
sizeof(*buddy->bitmap),
GFP_KERNEL);
if (!buddy->bitmap)
return -ENOMEM;
buddy->num_free = kcalloc(buddy->max_order + 1,
sizeof(*buddy->num_free),
GFP_KERNEL);
if (!buddy->num_free) {
ret = -ENOMEM;
goto err_out_free_bits;
}
for (i = 0; i <= (int)buddy->max_order; ++i) {
s = 1 << (buddy->max_order - i);
buddy->bitmap[i] = bitmap_zalloc(s, GFP_KERNEL);
if (!buddy->bitmap[i]) {
ret = -ENOMEM;
goto err_out_free_num_free;
}
}
bitmap_set(buddy->bitmap[buddy->max_order], 0, 1);
buddy->num_free[buddy->max_order] = 1;
return 0;
err_out_free_num_free:
for (i = 0; i <= (int)buddy->max_order; ++i)
bitmap_free(buddy->bitmap[i]);
kfree(buddy->num_free);
err_out_free_bits:
kfree(buddy->bitmap);
return ret;
}
struct mlx5hws_buddy_mem *mlx5hws_buddy_create(u32 max_order)
{
struct mlx5hws_buddy_mem *buddy;
buddy = kzalloc(sizeof(*buddy), GFP_KERNEL);
if (!buddy)
return NULL;
if (hws_buddy_init(buddy, max_order))
goto free_buddy;
return buddy;
free_buddy:
kfree(buddy);
return NULL;
}
void mlx5hws_buddy_cleanup(struct mlx5hws_buddy_mem *buddy)
{
int i;
for (i = 0; i <= (int)buddy->max_order; ++i)
bitmap_free(buddy->bitmap[i]);
kfree(buddy->num_free);
kfree(buddy->bitmap);
}
static int hws_buddy_find_free_seg(struct mlx5hws_buddy_mem *buddy,
u32 start_order,
u32 *segment,
u32 *order)
{
unsigned int seg, order_iter, m;
for (order_iter = start_order;
order_iter <= buddy->max_order; ++order_iter) {
if (!buddy->num_free[order_iter])
continue;
m = 1 << (buddy->max_order - order_iter);
seg = find_first_bit(buddy->bitmap[order_iter], m);
if (WARN(seg >= m,
"ICM Buddy: failed finding free mem for order %d\n",
order_iter))
return -ENOMEM;
break;
}
if (order_iter > buddy->max_order)
return -ENOMEM;
*segment = seg;
*order = order_iter;
return 0;
}
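/* Allocate 2^order contiguous units: take the smallest suitable free
 * segment, split it down to the requested order, and return the segment
 * offset in order-0 units (or a negative errno on failure).
 */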
int mlx5hws_buddy_alloc_mem(struct mlx5hws_buddy_mem *buddy, u32 order)
{
u32 seg, order_iter, err;
err = hws_buddy_find_free_seg(buddy, order, &seg, &order_iter);
if (err)
return err;
bitmap_clear(buddy->bitmap[order_iter], seg, 1);
--buddy->num_free[order_iter];
while (order_iter > order) {
--order_iter;
seg <<= 1;
bitmap_set(buddy->bitmap[order_iter], seg ^ 1, 1);
++buddy->num_free[order_iter];
}
seg <<= order;
return seg;
}
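/* Free a segment of the given order and iteratively merge it with its
 * buddy (seg ^ 1) while that buddy is also free, promoting the merged
 * block to the next higher order.
 */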
void mlx5hws_buddy_free_mem(struct mlx5hws_buddy_mem *buddy, u32 seg, u32 order)
{
seg >>= order;
while (test_bit(seg ^ 1, buddy->bitmap[order])) {
bitmap_clear(buddy->bitmap[order], seg ^ 1, 1);
--buddy->num_free[order];
seg >>= 1;
++order;
}
bitmap_set(buddy->bitmap[order], seg, 1);
++buddy->num_free[order];
}
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_BUDDY_H_
#define MLX5HWS_BUDDY_H_
struct mlx5hws_buddy_mem {
unsigned long **bitmap;
unsigned int *num_free;
u32 max_order;
};
struct mlx5hws_buddy_mem *mlx5hws_buddy_create(u32 max_order);
void mlx5hws_buddy_cleanup(struct mlx5hws_buddy_mem *buddy);
int mlx5hws_buddy_alloc_mem(struct mlx5hws_buddy_mem *buddy, u32 order);
void mlx5hws_buddy_free_mem(struct mlx5hws_buddy_mem *buddy, u32 seg, u32 order);
#endif /* MLX5HWS_BUDDY_H_ */
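The following is a minimal usage sketch of the buddy allocator API declared
above. It is not part of the driver sources; the chosen orders and the error
handling are assumptions for illustration only, and the usual kernel/driver
includes (e.g. for kfree()) are taken as given.

/* Illustrative: a buddy of max_order 8 manages 2^8 = 256 minimal units.
 * Allocations return an offset in minimal units (or a negative errno);
 * frees must pass back the same order. mlx5hws_buddy_cleanup() releases
 * the internal bitmaps only, so the caller frees the struct itself, which
 * mlx5hws_buddy_create() allocated with kzalloc().
 */
static int example_buddy_usage(void)
{
	struct mlx5hws_buddy_mem *buddy;
	int seg, ret = 0;

	buddy = mlx5hws_buddy_create(8);
	if (!buddy)
		return -ENOMEM;

	seg = mlx5hws_buddy_alloc_mem(buddy, 3); /* 2^3 = 8 contiguous units */
	if (seg < 0) {
		ret = seg;
		goto cleanup;
	}

	/* ... use units [seg, seg + 8) ... */

	mlx5hws_buddy_free_mem(buddy, seg, 3);
cleanup:
	mlx5hws_buddy_cleanup(buddy);
	kfree(buddy);
	return ret;
}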
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_BWC_H_
#define MLX5HWS_BWC_H_
#define MLX5HWS_BWC_MATCHER_INIT_SIZE_LOG 1
#define MLX5HWS_BWC_MATCHER_SIZE_LOG_STEP 1
#define MLX5HWS_BWC_MATCHER_REHASH_PERCENT_TH 70
#define MLX5HWS_BWC_MATCHER_REHASH_BURST_TH 32
#define MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM 255
#define MLX5HWS_BWC_MAX_ACTS 16
struct mlx5hws_bwc_matcher {
struct mlx5hws_matcher *matcher;
struct mlx5hws_match_template *mt;
struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM];
u8 num_of_at;
u16 priority;
u8 size_log;
u32 num_of_rules; /* atomically accessed */
struct list_head *rules;
};
struct mlx5hws_bwc_rule {
struct mlx5hws_bwc_matcher *bwc_matcher;
struct mlx5hws_rule *rule;
u16 bwc_queue_idx;
struct list_head list_node;
};
int
mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher,
struct mlx5hws_table *table,
u32 priority,
u8 match_criteria_enable,
struct mlx5hws_match_parameters *mask,
enum mlx5hws_action_type action_types[]);
int mlx5hws_bwc_matcher_destroy_simple(struct mlx5hws_bwc_matcher *bwc_matcher);
struct mlx5hws_bwc_rule *mlx5hws_bwc_rule_alloc(struct mlx5hws_bwc_matcher *bwc_matcher);
void mlx5hws_bwc_rule_free(struct mlx5hws_bwc_rule *bwc_rule);
int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
u32 *match_param,
struct mlx5hws_rule_action rule_actions[],
u32 flow_source,
u16 bwc_queue_idx);
int mlx5hws_bwc_rule_destroy_simple(struct mlx5hws_bwc_rule *bwc_rule);
void mlx5hws_bwc_rule_fill_attr(struct mlx5hws_bwc_matcher *bwc_matcher,
u16 bwc_queue_idx,
u32 flow_source,
struct mlx5hws_rule_attr *rule_attr);
static inline u16 mlx5hws_bwc_queues(struct mlx5hws_context *ctx)
{
/* Besides the control queue, half of the queues are
* regular HWS queues, and the other half are BWC queues.
*/
return (ctx->queues - 1) / 2;
}
static inline u16 mlx5hws_bwc_get_queue_id(struct mlx5hws_context *ctx, u16 idx)
{
return idx + mlx5hws_bwc_queues(ctx);
}
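/* Illustrative example (not from the driver sources): with ctx->queues == 9,
 * mlx5hws_bwc_queues() returns (9 - 1) / 2 == 4, so BWC queue indexes
 * 0..3 map to send queue IDs 4..7 via mlx5hws_bwc_get_queue_id().
 */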
#endif /* MLX5HWS_BWC_H_ */
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#include "mlx5hws_internal.h"
bool mlx5hws_bwc_match_params_is_complex(struct mlx5hws_context *ctx,
u8 match_criteria_enable,
struct mlx5hws_match_parameters *mask)
{
struct mlx5hws_definer match_layout = {0};
struct mlx5hws_match_template *mt;
bool is_complex = false;
int ret;
if (!match_criteria_enable)
return false; /* empty matcher */
mt = mlx5hws_match_template_create(ctx,
mask->match_buf,
mask->match_sz,
match_criteria_enable);
if (!mt) {
mlx5hws_err(ctx, "BWC: failed creating match template\n");
return false;
}
ret = mlx5hws_definer_calc_layout(ctx, mt, &match_layout);
if (ret) {
/* The only case that we're interested in is E2BIG,
* which means that the match parameters need to be
* split into a complex matcher.
* For all other cases (good or bad) - just return false
* and let the usual match creation path handle it,
* both for good and bad flows.
*/
if (ret == -E2BIG) {
is_complex = true;
mlx5hws_dbg(ctx, "Matcher definer layout: need complex matcher\n");
} else {
mlx5hws_err(ctx, "Failed to calculate matcher definer layout\n");
}
}
mlx5hws_match_template_destroy(mt);
return is_complex;
}
int mlx5hws_bwc_matcher_create_complex(struct mlx5hws_bwc_matcher *bwc_matcher,
struct mlx5hws_table *table,
u32 priority,
u8 match_criteria_enable,
struct mlx5hws_match_parameters *mask)
{
mlx5hws_err(table->ctx, "Complex matcher is not supported yet\n");
return -EOPNOTSUPP;
}
void
mlx5hws_bwc_matcher_destroy_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
{
/* nothing to do here */
}
int mlx5hws_bwc_rule_create_complex(struct mlx5hws_bwc_rule *bwc_rule,
struct mlx5hws_match_parameters *params,
u32 flow_source,
struct mlx5hws_rule_action rule_actions[],
u16 bwc_queue_idx)
{
mlx5hws_err(bwc_rule->bwc_matcher->matcher->tbl->ctx,
"Complex rule is not supported yet\n");
return -EOPNOTSUPP;
}
int mlx5hws_bwc_rule_destroy_complex(struct mlx5hws_bwc_rule *bwc_rule)
{
return 0;
}
int mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
{
mlx5hws_err(bwc_matcher->matcher->tbl->ctx,
"Moving complex rule is not supported yet\n");
return -EOPNOTSUPP;
}
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_BWC_COMPLEX_H_
#define MLX5HWS_BWC_COMPLEX_H_
bool mlx5hws_bwc_match_params_is_complex(struct mlx5hws_context *ctx,
u8 match_criteria_enable,
struct mlx5hws_match_parameters *mask);
int mlx5hws_bwc_matcher_create_complex(struct mlx5hws_bwc_matcher *bwc_matcher,
struct mlx5hws_table *table,
u32 priority,
u8 match_criteria_enable,
struct mlx5hws_match_parameters *mask);
void mlx5hws_bwc_matcher_destroy_complex(struct mlx5hws_bwc_matcher *bwc_matcher);
int mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher);
int mlx5hws_bwc_rule_create_complex(struct mlx5hws_bwc_rule *bwc_rule,
struct mlx5hws_match_parameters *params,
u32 flow_source,
struct mlx5hws_rule_action rule_actions[],
u16 bwc_queue_idx);
int mlx5hws_bwc_rule_destroy_complex(struct mlx5hws_bwc_rule *bwc_rule);
#endif /* MLX5HWS_BWC_COMPLEX_H_ */
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_CMD_H_
#define MLX5HWS_CMD_H_
#define WIRE_PORT 0xFFFF
#define ACCESS_KEY_LEN 32
enum mlx5hws_cmd_ext_dest_flags {
MLX5HWS_CMD_EXT_DEST_REFORMAT = 1 << 0,
MLX5HWS_CMD_EXT_DEST_ESW_OWNER_VHCA_ID = 1 << 1,
};
struct mlx5hws_cmd_set_fte_dest {
u8 destination_type;
u32 destination_id;
enum mlx5hws_cmd_ext_dest_flags ext_flags;
u32 ext_reformat_id;
u16 esw_owner_vhca_id;
};
struct mlx5hws_cmd_set_fte_attr {
u32 action_flags;
bool ignore_flow_level;
u8 flow_source;
u8 extended_dest;
u8 encrypt_decrypt_type;
u32 encrypt_decrypt_obj_id;
u32 packet_reformat_id;
u32 dests_num;
struct mlx5hws_cmd_set_fte_dest *dests;
};
struct mlx5hws_cmd_ft_create_attr {
u8 type;
u8 level;
bool rtc_valid;
bool decap_en;
bool reformat_en;
};
struct mlx5hws_cmd_ft_modify_attr {
u8 type;
u32 rtc_id_0;
u32 rtc_id_1;
u32 table_miss_id;
u8 table_miss_action;
u64 modify_fs;
};
struct mlx5hws_cmd_ft_query_attr {
u8 type;
};
struct mlx5hws_cmd_fg_attr {
u32 table_id;
u32 table_type;
};
struct mlx5hws_cmd_forward_tbl {
u8 type;
u32 ft_id;
u32 fg_id;
u32 refcount;
};
struct mlx5hws_cmd_rtc_create_attr {
u32 pd;
u32 stc_base;
u32 ste_base;
u32 ste_offset;
u32 miss_ft_id;
bool fw_gen_wqe;
u8 update_index_mode;
u8 access_index_mode;
u8 num_hash_definer;
u8 log_depth;
u8 log_size;
u8 table_type;
u8 match_definer_0;
u8 match_definer_1;
u8 reparse_mode;
bool is_frst_jumbo;
bool is_scnd_range;
};
struct mlx5hws_cmd_alias_obj_create_attr {
u32 obj_id;
u16 vhca_id;
u16 obj_type;
u8 access_key[ACCESS_KEY_LEN];
};
struct mlx5hws_cmd_stc_create_attr {
u8 log_obj_range;
u8 table_type;
};
struct mlx5hws_cmd_stc_modify_attr {
u32 stc_offset;
u8 action_offset;
u8 reparse_mode;
enum mlx5_ifc_stc_action_type action_type;
union {
u32 id; /* TIRN, TAG, FT ID, STE ID, CRYPTO */
struct {
u8 decap;
u16 start_anchor;
u16 end_anchor;
} remove_header;
struct {
u32 arg_id;
u32 pattern_id;
} modify_header;
struct {
__be64 data;
} modify_action;
struct {
u32 arg_id;
u32 header_size;
u8 is_inline;
u8 encap;
u16 insert_anchor;
u16 insert_offset;
} insert_header;
struct {
u8 aso_type;
u32 devx_obj_id;
u8 return_reg_id;
} aso;
struct {
u16 vport_num;
u16 esw_owner_vhca_id;
u8 eswitch_owner_vhca_id_valid;
} vport;
struct {
struct mlx5hws_pool_chunk ste;
struct mlx5hws_pool *ste_pool;
u32 ste_obj_id; /* Internal */
u32 match_definer_id;
u8 log_hash_size;
bool ignore_tx;
} ste_table;
struct {
u16 start_anchor;
u16 num_of_words;
} remove_words;
struct {
u8 type;
u8 op;
u8 size;
} reformat_trailer;
u32 dest_table_id;
u32 dest_tir_num;
};
};
struct mlx5hws_cmd_ste_create_attr {
u8 log_obj_range;
u8 table_type;
};
struct mlx5hws_cmd_definer_create_attr {
u8 *dw_selector;
u8 *byte_selector;
u8 *match_mask;
};
struct mlx5hws_cmd_allow_other_vhca_access_attr {
u16 obj_type;
u32 obj_id;
u8 access_key[ACCESS_KEY_LEN];
};
struct mlx5hws_cmd_packet_reformat_create_attr {
u8 type;
size_t data_sz;
void *data;
u8 reformat_param_0;
};
struct mlx5hws_cmd_query_ft_caps {
u8 max_level;
u8 reparse;
u8 ignore_flow_level_rtc_valid;
};
struct mlx5hws_cmd_generate_wqe_attr {
u8 *wqe_ctrl;
u8 *gta_ctrl;
u8 *gta_data_0;
u8 *gta_data_1;
u32 pdn;
};
struct mlx5hws_cmd_query_caps {
u32 flex_protocols;
u8 wqe_based_update;
u8 rtc_reparse_mode;
u16 ste_format;
u8 rtc_index_mode;
u8 ste_alloc_log_max;
u8 ste_alloc_log_gran;
u8 stc_alloc_log_max;
u8 stc_alloc_log_gran;
u8 rtc_log_depth_max;
u8 format_select_gtpu_dw_0;
u8 format_select_gtpu_dw_1;
u8 flow_table_hash_type;
u8 format_select_gtpu_dw_2;
u8 format_select_gtpu_ext_dw_0;
u8 access_index_mode;
u32 linear_match_definer;
bool full_dw_jumbo_support;
bool rtc_hash_split_table;
bool rtc_linear_lookup_table;
u32 supp_type_gen_wqe;
u8 rtc_max_hash_def_gen_wqe;
u16 supp_ste_format_gen_wqe;
struct mlx5hws_cmd_query_ft_caps nic_ft;
struct mlx5hws_cmd_query_ft_caps fdb_ft;
bool eswitch_manager;
bool merged_eswitch;
u32 eswitch_manager_vport_number;
u8 log_header_modify_argument_granularity;
u8 log_header_modify_argument_max_alloc;
u8 sq_ts_format;
u8 fdb_tir_stc;
u64 definer_format_sup;
u32 trivial_match_definer;
u32 vhca_id;
u32 shared_vhca_id;
char fw_ver[64];
bool ipsec_offload;
bool is_ecpf;
u8 flex_parser_ok_bits_supp;
u8 flex_parser_id_geneve_tlv_option_0;
u8 flex_parser_id_mpls_over_gre;
u8 flex_parser_id_mpls_over_udp;
};
int mlx5hws_cmd_flow_table_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_ft_create_attr *ft_attr,
u32 *table_id);
int mlx5hws_cmd_flow_table_modify(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_ft_modify_attr *ft_attr,
u32 table_id);
int mlx5hws_cmd_flow_table_query(struct mlx5_core_dev *mdev,
u32 obj_id,
struct mlx5hws_cmd_ft_query_attr *ft_attr,
u64 *icm_addr_0, u64 *icm_addr_1);
int mlx5hws_cmd_flow_table_destroy(struct mlx5_core_dev *mdev,
u8 fw_ft_type, u32 table_id);
void mlx5hws_cmd_alias_flow_table_destroy(struct mlx5_core_dev *mdev,
u32 table_id);
int mlx5hws_cmd_rtc_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_rtc_create_attr *rtc_attr,
u32 *rtc_id);
void mlx5hws_cmd_rtc_destroy(struct mlx5_core_dev *mdev, u32 rtc_id);
int mlx5hws_cmd_stc_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_stc_create_attr *stc_attr,
u32 *stc_id);
int mlx5hws_cmd_stc_modify(struct mlx5_core_dev *mdev,
u32 stc_id,
struct mlx5hws_cmd_stc_modify_attr *stc_attr);
void mlx5hws_cmd_stc_destroy(struct mlx5_core_dev *mdev, u32 stc_id);
int mlx5hws_cmd_generate_wqe(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_generate_wqe_attr *attr,
struct mlx5_cqe64 *ret_cqe);
int mlx5hws_cmd_ste_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_ste_create_attr *ste_attr,
u32 *ste_id);
void mlx5hws_cmd_ste_destroy(struct mlx5_core_dev *mdev, u32 ste_id);
int mlx5hws_cmd_definer_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_definer_create_attr *def_attr,
u32 *definer_id);
void mlx5hws_cmd_definer_destroy(struct mlx5_core_dev *mdev,
u32 definer_id);
int mlx5hws_cmd_arg_create(struct mlx5_core_dev *mdev,
u16 log_obj_range,
u32 pd,
u32 *arg_id);
void mlx5hws_cmd_arg_destroy(struct mlx5_core_dev *mdev,
u32 arg_id);
int mlx5hws_cmd_header_modify_pattern_create(struct mlx5_core_dev *mdev,
u32 pattern_length,
u8 *actions,
u32 *ptrn_id);
void mlx5hws_cmd_header_modify_pattern_destroy(struct mlx5_core_dev *mdev,
u32 ptrn_id);
int mlx5hws_cmd_packet_reformat_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_packet_reformat_create_attr *attr,
u32 *reformat_id);
int mlx5hws_cmd_packet_reformat_destroy(struct mlx5_core_dev *mdev,
u32 reformat_id);
int mlx5hws_cmd_set_fte(struct mlx5_core_dev *mdev,
u32 table_type,
u32 table_id,
u32 group_id,
struct mlx5hws_cmd_set_fte_attr *fte_attr);
int mlx5hws_cmd_delete_fte(struct mlx5_core_dev *mdev,
u32 table_type, u32 table_id);
struct mlx5hws_cmd_forward_tbl *
mlx5hws_cmd_forward_tbl_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_ft_create_attr *ft_attr,
struct mlx5hws_cmd_set_fte_attr *fte_attr);
void mlx5hws_cmd_forward_tbl_destroy(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_forward_tbl *tbl);
int mlx5hws_cmd_alias_obj_create(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_alias_obj_create_attr *alias_attr,
u32 *obj_id);
int mlx5hws_cmd_alias_obj_destroy(struct mlx5_core_dev *mdev,
u16 obj_type,
u32 obj_id);
int mlx5hws_cmd_sq_modify_rdy(struct mlx5_core_dev *mdev, u32 sqn);
int mlx5hws_cmd_query_caps(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_query_caps *caps);
void mlx5hws_cmd_set_attr_connect_miss_tbl(struct mlx5hws_context *ctx,
u32 fw_ft_type,
enum mlx5hws_table_type type,
struct mlx5hws_cmd_ft_modify_attr *ft_attr);
int mlx5hws_cmd_allow_other_vhca_access(struct mlx5_core_dev *mdev,
struct mlx5hws_cmd_allow_other_vhca_access_attr *attr);
int mlx5hws_cmd_query_gvmi(struct mlx5_core_dev *mdev, bool other_function,
u16 vport_number, u16 *gvmi);
#endif /* MLX5HWS_CMD_H_ */
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2024 NVIDIA CORPORATION. All rights reserved. */
#include "mlx5hws_internal.h"
bool mlx5hws_context_cap_dynamic_reparse(struct mlx5hws_context *ctx)
{
return IS_BIT_SET(ctx->caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_BY_STC);
}
u8 mlx5hws_context_get_reparse_mode(struct mlx5hws_context *ctx)
{
/* Prefer to use dynamic reparse, reparse only specific actions */
if (mlx5hws_context_cap_dynamic_reparse(ctx))
return MLX5_IFC_RTC_REPARSE_NEVER;
/* Otherwise use less efficient static */
return MLX5_IFC_RTC_REPARSE_ALWAYS;
}
static int hws_context_pools_init(struct mlx5hws_context *ctx)
{
struct mlx5hws_pool_attr pool_attr = {0};
u8 max_log_sz;
int ret;
int i;
ret = mlx5hws_pat_init_pattern_cache(&ctx->pattern_cache);
if (ret)
return ret;
ret = mlx5hws_definer_init_cache(&ctx->definer_cache);
if (ret)
goto uninit_pat_cache;
/* Create an STC pool per FT type */
pool_attr.pool_type = MLX5HWS_POOL_TYPE_STC;
pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_STC_POOL;
max_log_sz = min(MLX5HWS_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max);
pool_attr.alloc_log_sz = max(max_log_sz, ctx->caps->stc_alloc_log_gran);
for (i = 0; i < MLX5HWS_TABLE_TYPE_MAX; i++) {
pool_attr.table_type = i;
ctx->stc_pool[i] = mlx5hws_pool_create(ctx, &pool_attr);
if (!ctx->stc_pool[i]) {
mlx5hws_err(ctx, "Failed to allocate STC pool [%d]", i);
ret = -ENOMEM;
goto free_stc_pools;
}
}
return 0;
free_stc_pools:
for (i = 0; i < MLX5HWS_TABLE_TYPE_MAX; i++)
if (ctx->stc_pool[i])
mlx5hws_pool_destroy(ctx->stc_pool[i]);
mlx5hws_definer_uninit_cache(ctx->definer_cache);
uninit_pat_cache:
mlx5hws_pat_uninit_pattern_cache(ctx->pattern_cache);
return ret;
}
static void hws_context_pools_uninit(struct mlx5hws_context *ctx)
{
int i;
for (i = 0; i < MLX5HWS_TABLE_TYPE_MAX; i++) {
if (ctx->stc_pool[i])
mlx5hws_pool_destroy(ctx->stc_pool[i]);
}
mlx5hws_definer_uninit_cache(ctx->definer_cache);
mlx5hws_pat_uninit_pattern_cache(ctx->pattern_cache);
}
static int hws_context_init_pd(struct mlx5hws_context *ctx)
{
int ret = 0;
ret = mlx5_core_alloc_pd(ctx->mdev, &ctx->pd_num);
if (ret) {
mlx5hws_err(ctx, "Failed to allocate PD\n");
return ret;
}
ctx->flags |= MLX5HWS_CONTEXT_FLAG_PRIVATE_PD;
return 0;
}
static int hws_context_uninit_pd(struct mlx5hws_context *ctx)
{
if (ctx->flags & MLX5HWS_CONTEXT_FLAG_PRIVATE_PD)
mlx5_core_dealloc_pd(ctx->mdev, ctx->pd_num);
return 0;
}
static void hws_context_check_hws_supp(struct mlx5hws_context *ctx)
{
struct mlx5hws_cmd_query_caps *caps = ctx->caps;
/* HWS not supported on device / FW */
if (!caps->wqe_based_update) {
mlx5hws_err(ctx, "Required HWS WQE based insertion cap not supported\n");
return;
}
if (!caps->eswitch_manager) {
mlx5hws_err(ctx, "HWS is not supported for non eswitch manager port\n");
return;
}
/* Current solution requires all rules to set reparse bit */
if ((!caps->nic_ft.reparse ||
(!caps->fdb_ft.reparse && caps->eswitch_manager)) ||
!IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) {
mlx5hws_err(ctx, "Required HWS reparse cap not supported\n");
return;
}
/* FW/HW must support 8DW STE */
if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) {
mlx5hws_err(ctx, "Required HWS STE format not supported\n");
return;
}
/* Adding rules by hash and by offset are requirements */
if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH) ||
!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET)) {
mlx5hws_err(ctx, "Required HWS RTC update mode not supported\n");
return;
}
/* Support for SELECT definer ID is required */
if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) {
mlx5hws_err(ctx, "Required HWS Dynamic definer not supported\n");
return;
}
ctx->flags |= MLX5HWS_CONTEXT_FLAG_HWS_SUPPORT;
}
static int hws_context_init_hws(struct mlx5hws_context *ctx,
struct mlx5hws_context_attr *attr)
{
int ret;
hws_context_check_hws_supp(ctx);
if (!(ctx->flags & MLX5HWS_CONTEXT_FLAG_HWS_SUPPORT))
return 0;
ret = hws_context_init_pd(ctx);
if (ret)
return ret;
ret = hws_context_pools_init(ctx);
if (ret)
goto uninit_pd;
if (attr->bwc)
ctx->flags |= MLX5HWS_CONTEXT_FLAG_BWC_SUPPORT;
ret = mlx5hws_send_queues_open(ctx, attr->queues, attr->queue_size);
if (ret)
goto pools_uninit;
INIT_LIST_HEAD(&ctx->tbl_list);
return 0;
pools_uninit:
hws_context_pools_uninit(ctx);
uninit_pd:
hws_context_uninit_pd(ctx);
return ret;
}
static void hws_context_uninit_hws(struct mlx5hws_context *ctx)
{
if (!(ctx->flags & MLX5HWS_CONTEXT_FLAG_HWS_SUPPORT))
return;
mlx5hws_send_queues_close(ctx);
hws_context_pools_uninit(ctx);
hws_context_uninit_pd(ctx);
}
struct mlx5hws_context *mlx5hws_context_open(struct mlx5_core_dev *mdev,
struct mlx5hws_context_attr *attr)
{
struct mlx5hws_context *ctx;
int ret;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return NULL;
ctx->mdev = mdev;
mutex_init(&ctx->ctrl_lock);
xa_init(&ctx->peer_ctx_xa);
ctx->caps = kzalloc(sizeof(*ctx->caps), GFP_KERNEL);
if (!ctx->caps)
goto free_ctx;
ret = mlx5hws_cmd_query_caps(mdev, ctx->caps);
if (ret)
goto free_caps;
ret = mlx5hws_vport_init_vports(ctx);
if (ret)
goto free_caps;
ret = hws_context_init_hws(ctx, attr);
if (ret)
goto uninit_vports;
mlx5hws_debug_init_dump(ctx);
return ctx;
uninit_vports:
mlx5hws_vport_uninit_vports(ctx);
free_caps:
kfree(ctx->caps);
free_ctx:
xa_destroy(&ctx->peer_ctx_xa);
mutex_destroy(&ctx->ctrl_lock);
kfree(ctx);
return NULL;
}
int mlx5hws_context_close(struct mlx5hws_context *ctx)
{
mlx5hws_debug_uninit_dump(ctx);
hws_context_uninit_hws(ctx);
mlx5hws_vport_uninit_vports(ctx);
kfree(ctx->caps);
xa_destroy(&ctx->peer_ctx_xa);
mutex_destroy(&ctx->ctrl_lock);
kfree(ctx);
return 0;
}
void mlx5hws_context_set_peer(struct mlx5hws_context *ctx,
struct mlx5hws_context *peer_ctx,
u16 peer_vhca_id)
{
mutex_lock(&ctx->ctrl_lock);
if (xa_err(xa_store(&ctx->peer_ctx_xa, peer_vhca_id, peer_ctx, GFP_KERNEL)))
pr_warn("HWS: failed storing peer vhca ID in peer xarray\n");
mutex_unlock(&ctx->ctrl_lock);
}
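As an illustrative sketch (not part of the driver), opening an HWS context on
an mlx5 core device could look as follows; the attribute values are
assumptions, and only the fields referenced above (queues, queue_size, bwc)
are set. The returned context is later released with mlx5hws_context_close().

static struct mlx5hws_context *example_open_ctx(struct mlx5_core_dev *mdev)
{
	struct mlx5hws_context_attr attr = {0};

	attr.queues = 16;	/* number of send queues to open */
	attr.queue_size = 256;	/* depth of each send queue */
	attr.bwc = true;	/* enable the backward-compatible (BWC) API */

	return mlx5hws_context_open(mdev, &attr);
}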
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
#ifndef MLX5HWS_DEBUG_H_
#define MLX5HWS_DEBUG_H_
#define HWS_DEBUG_FORMAT_VERSION "1.0"
#define HWS_PTR_TO_ID(p) ((u64)(uintptr_t)(p) & 0xFFFFFFFFULL)
enum mlx5hws_debug_res_type {
MLX5HWS_DEBUG_RES_TYPE_CONTEXT = 4000,
MLX5HWS_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001,
MLX5HWS_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002,
MLX5HWS_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003,
MLX5HWS_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004,
MLX5HWS_DEBUG_RES_TYPE_CONTEXT_STC = 4005,
MLX5HWS_DEBUG_RES_TYPE_TABLE = 4100,
MLX5HWS_DEBUG_RES_TYPE_MATCHER = 4200,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_ATTR = 4201,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_MATCH_DEFINER = 4203,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_HASH_DEFINER = 4205,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_RANGE_DEFINER = 4206,
MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_COMPARE_MATCH_DEFINER = 4207,
};
static inline u64
mlx5hws_debug_icm_to_idx(u64 icm_addr)
{
return (icm_addr >> 6) & 0xffffffff;
}
void mlx5hws_debug_init_dump(struct mlx5hws_context *ctx);
void mlx5hws_debug_uninit_dump(struct mlx5hws_context *ctx);
#endif /* MLX5HWS_DEBUG_H_ */
This diff is collapsed.
......@@ -149,6 +149,7 @@ enum {
MLX5_WQE_CTRL_CQ_UPDATE = 2 << 2,
MLX5_WQE_CTRL_CQ_UPDATE_AND_EQE = 3 << 2,
MLX5_WQE_CTRL_SOLICITED = 1 << 1,
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE = 1 << 5,
};
enum {
......