Commit 94810bd3 authored by David S. Miller

Merge tag 'mlx5-updates-2019-09-01-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2019-09-01  (Software steering support)

Abstract:
--------
Mellanox ConnectX devices support packet matching, packet modification, and
redirection. These capabilities are collectively referred to as flow steering.
To configure a steering rule, the rule is written to device-owned memory;
this memory is accessed and cached by the device when processing a packet.
Steering rules are constructed from multiple steering entries (STEs).

Rules are configured using the firmware command interface. The firmware
processes each driver command, translates it to STEs, and then writes them
to the device memory of the current steering tables. This process is slow,
both due to the architecture of the command interface and due to the
processing complexity of each rule.

The highlight of this patchset is cutting out the middleman (the firmware)
and programming steering rules into the device directly from the driver,
with no firmware intervention whatsoever.

Motivation:
-----------
Software (driver-managed) steering allows for much higher rule insertion
rates than the FW steering described above. This is achieved by using
internal RDMA writes to the device-owned memory instead of the slow
command interface to program steering rules.

Software (driver-managed) steering also doesn't depend on new FW for new
steering functionality; new features can be implemented in the driver,
skipping the FW layer.

Performance:
------------
The new approach allows programming ~300K rules per second on a single
core (measured with a direct raw test against the new mlx5 SW steering
layer, without any other kernel layer involved).

Test: TC L2 rule insertion
33K rules/s with software steering (this patchset).
5K rules/s  with FW steering and the current driver.
This will improve the performance of OVS-based solutions.

Architecture and implementation details:
----------------------------------------
Software steering is dynamically selected via a devlink device
parameter. Example:
$ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
          pci/0000:06:00.0:
          name flow_steering_mode type driver-specific
          values:
             cmode runtime value smfs

The mlx5 software steering module, a.k.a. DR (Direct Rule), is implemented
and contained in the mlx5/core/steering directory and controlled by the
MLX5_SW_STEERING kconfig flag.

The mlx5 core steering layer (fs_core) already provides a shim layer for
implementing different steering mechanisms; software steering leverages
it, as seen at the end of this series.
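
As a rough illustration of that shim (names here are simplified for
illustration, not the exact upstream definitions), fs_core dispatches
through a per-namespace table of function pointers, so the same fs_core
code can drive either the FW command interface or the new SW steering
backend:

    /* Illustrative sketch only: each steering backend supplies the same
     * ops table and fs_core calls through it, unaware of the backend.
     */
    struct fs_backend_ops {
            int (*create_flow_table)(struct mlx5_flow_root_namespace *ns,
                                     struct mlx5_flow_table *ft);
            int (*create_fte)(struct mlx5_flow_root_namespace *ns,
                              struct mlx5_flow_table *ft,
                              struct fs_fte *fte);
            int (*delete_fte)(struct mlx5_flow_root_namespace *ns,
                              struct mlx5_flow_table *ft,
                              struct fs_fte *fte);
    };

    static const struct fs_backend_ops fw_cmds; /* FW command interface */
    static const struct fs_backend_ops dr_cmds; /* SW steering (DR)     */

    static const struct fs_backend_ops *
    fs_get_cmds(enum mlx5_flow_steering_mode mode)
    {
            /* SMFS selects the DR backend; DMFS keeps the FW backend */
            return mode == MLX5_FLOW_STEERING_MODE_SMFS ? &dr_cmds
                                                        : &fw_cmds;
    }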

When software steering is supported for a specific steering domain
(NIC/RDMA/Vport/ESwitch, etc.), rules targeting that domain are created
using SW steering instead of FW.

The implementation includes:
Domain - The steering domain is the object that all other objects reside
    in. It holds the memory allocator, send engine, locks, and other shared
    data needed by lower objects such as table, matcher, rule, and action.
    Each domain can contain multiple tables. A domain is equivalent to a
    namespace, e.g. NIC/RDMA/Vport/ESwitch, as currently implemented in
    mlx5_core fs_core (flow steering core). See the sketch after this list
    for how these objects compose.

Table - Table objects are used for holding multiple matchers; each table
    has a level used to prevent processing loops. Packets are directed to
    this table once it is set as the root table, which is done by fs_core
    using a FW command. A packet is processed inside the table matcher by
    matcher until a successful hit; otherwise, the packet takes the
    default action.

Matcher - Matcher objects are used to specify the field masks for
    matching when processing a packet. A matcher belongs to a table; each
    matcher can hold multiple rules, each rule with different matching
    values corresponding to the matcher mask. Each matcher has a priority
    used for rule processing order inside the table.

Action - Action objects are created to specify different steering actions
    such as count, reformat (encapsulate, decapsulate, ...), modify
    header, forward to table, and many others. When creating a rule, a
    sequence of actions can be provided to be executed on a successful
    match.

Rule - Rule objects are used to specify the exact match values for
    packets as well as the actions that should be executed on them. A
    rule belongs to a matcher.

STE - This layer holds the device-specific STE format and converts the
    requested rule to STEs. Each rule is constructed of an STE chain;
    multiple rules form a steering graph. Each node in the graph is a
    hash table containing multiple STEs. The index of each STE in the
    hash table is calculated using a CRC32 hash function.

Memory pool - Used for managing and caching device-owned memory for rule
    insertion. The memory is allocated using the DM (device memory) API.

Communication with device - A layer for standard RDMA operations using an
    RC QP to configure the device steering.

Command utility - This module holds all of the FW commands that are
    required for SW steering to function.
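
To make the object model concrete, here is a rough sketch of how the DR
objects compose. The mlx5dr_* calls follow the API added in this series
under steering/mlx5dr.h, but this is a sketch, not the exact upstream
usage; mdev, criteria_enable, mask and value are assumed to be prepared
by the caller, and error handling is omitted:

    struct mlx5dr_domain *dmn;
    struct mlx5dr_table *tbl;
    struct mlx5dr_matcher *matcher;
    struct mlx5dr_action *actions[1];
    struct mlx5dr_rule *rule;

    /* Domain: per-namespace root object (allocator, send engine, locks) */
    dmn = mlx5dr_domain_create(mdev, MLX5DR_DOMAIN_TYPE_FDB);

    /* Table: holds matchers; its level prevents processing loops */
    tbl = mlx5dr_table_create(dmn, 0 /* level */);

    /* Matcher: mask of fields to match on, plus an in-table priority */
    matcher = mlx5dr_matcher_create(tbl, 0 /* priority */,
                                    criteria_enable, &mask);

    /* Action + Rule: concrete match values and what to do on a hit.
     * The rule is lowered to an STE chain; each STE is hashed (CRC32)
     * into the hash table of its node in the steering graph.
     */
    actions[0] = mlx5dr_action_create_drop();
    rule = mlx5dr_rule_create(matcher, &value, 1, actions);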

Patch planning and files:
-------------------------
1) The first patch adds support for flow steering actions to the fs_cmd
shim layer.

2) The next 12 patches each add a file per software steering
functionality/module, as described above. (See patches with title: DR, *)

3) Add CONFIG_MLX5_SW_STEERING for software steering support and enable
building the new files.

4) The next two patches add support for software steering in the mlx5
steering shim layer:
net/mlx5: Add API to set the namespace steering mode
net/mlx5: Add direct rule fs_cmd implementation

5) The last two patches add the new devlink parameter to select the mlx5
steering mode, valid only for switchdev mode for now.
Two modes are supported:
    1. DMFS - Device managed flow steering
    2. SMFS - Software/Driver managed flow steering.

    In DMFS mode, the HW steering entities are created through the
    FW. In SMFS mode, these entities are created directly by the
    driver.

    The driver uses the devlink steering mode only if the steering
    domain supports it; for now, SMFS manages only the switchdev
    eswitch steering domain.

    User command examples:
    - Set SMFS flow steering mode::

        $ devlink dev param set pci/0000:06:00.0 name flow_steering_mode value "smfs" cmode runtime

    - Read device flow steering mode::

        $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
          pci/0000:06:00.0:
          name flow_steering_mode type driver-specific
          values:
             cmode runtime value smfs
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 67538eb5 e890acd5
@@ -11,6 +11,7 @@ Contents
 - `Enabling the driver and kconfig options`_
 - `Devlink info`_
+- `Devlink parameters`_
 - `Devlink health reporters`_
 - `mlx5 tracepoints`_
@@ -122,6 +123,38 @@ User command example::
     stored:
         fw.version 16.26.0100
 
+Devlink parameters
+==================
+
+flow_steering_mode: Device flow steering mode
+---------------------------------------------
+The flow steering mode parameter controls the flow steering mode of the driver.
+Two modes are supported:
+1. 'dmfs' - Device managed flow steering.
+2. 'smfs' - Software/Driver managed flow steering.
+
+In DMFS mode, the HW steering entities are created and managed through the
+firmware.
+In SMFS mode, the HW steering entities are created and managed by the
+driver directly in hardware, without firmware intervention.
+
+SMFS mode is faster and provides a better rule insertion rate than the
+default DMFS mode.
+
+User command examples:
+
+- Set SMFS flow steering mode::
+
+    $ devlink dev param set pci/0000:06:00.0 name flow_steering_mode value "smfs" cmode runtime
+
+- Read device flow steering mode::
+
+    $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
+      pci/0000:06:00.0:
+      name flow_steering_mode type driver-specific
+      values:
+         cmode runtime value smfs
+
 Devlink health reporters
 ========================
......
@@ -186,136 +186,6 @@ int mlx5_cmd_dealloc_memic(struct mlx5_dm *dm, phys_addr_t addr, u64 length)
 	return err;
 }
 
-int mlx5_cmd_alloc_sw_icm(struct mlx5_dm *dm, int type, u64 length,
-			  u16 uid, phys_addr_t *addr, u32 *obj_id)
-{
-	struct mlx5_core_dev *dev = dm->dev;
-	u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
-	u32 in[MLX5_ST_SZ_DW(create_sw_icm_in)] = {};
-	unsigned long *block_map;
-	u64 icm_start_addr;
-	u32 log_icm_size;
-	u32 num_blocks;
-	u32 max_blocks;
-	u64 block_idx;
-	void *sw_icm;
-	int ret;
-
-	MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
-		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
-	MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_SW_ICM);
-	MLX5_SET(general_obj_in_cmd_hdr, in, uid, uid);
-
-	switch (type) {
-	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
-		icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
-						steering_sw_icm_start_address);
-		log_icm_size = MLX5_CAP_DEV_MEM(dev, log_steering_sw_icm_size);
-		block_map = dm->steering_sw_icm_alloc_blocks;
-		break;
-	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
-		icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
-					header_modify_sw_icm_start_address);
-		log_icm_size = MLX5_CAP_DEV_MEM(dev,
-						log_header_modify_sw_icm_size);
-		block_map = dm->header_modify_sw_icm_alloc_blocks;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	num_blocks = (length + MLX5_SW_ICM_BLOCK_SIZE(dev) - 1) >>
-		     MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
-	max_blocks = BIT(log_icm_size - MLX5_LOG_SW_ICM_BLOCK_SIZE(dev));
-	spin_lock(&dm->lock);
-	block_idx = bitmap_find_next_zero_area(block_map,
-					       max_blocks,
-					       0,
-					       num_blocks, 0);
-
-	if (block_idx < max_blocks)
-		bitmap_set(block_map,
-			   block_idx, num_blocks);
-
-	spin_unlock(&dm->lock);
-
-	if (block_idx >= max_blocks)
-		return -ENOMEM;
-
-	sw_icm = MLX5_ADDR_OF(create_sw_icm_in, in, sw_icm);
-	icm_start_addr += block_idx << MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
-	MLX5_SET64(sw_icm, sw_icm, sw_icm_start_addr,
-		   icm_start_addr);
-	MLX5_SET(sw_icm, sw_icm, log_sw_icm_size, ilog2(length));
-
-	ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
-	if (ret) {
-		spin_lock(&dm->lock);
-		bitmap_clear(block_map,
-			     block_idx, num_blocks);
-		spin_unlock(&dm->lock);
-
-		return ret;
-	}
-
-	*addr = icm_start_addr;
-	*obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
-
-	return 0;
-}
-
-int mlx5_cmd_dealloc_sw_icm(struct mlx5_dm *dm, int type, u64 length,
-			    u16 uid, phys_addr_t addr, u32 obj_id)
-{
-	struct mlx5_core_dev *dev = dm->dev;
-	u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
-	u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
-	unsigned long *block_map;
-	u32 num_blocks;
-	u64 start_idx;
-	int err;
-
-	num_blocks = (length + MLX5_SW_ICM_BLOCK_SIZE(dev) - 1) >>
-		     MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
-
-	switch (type) {
-	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
-		start_idx =
-			(addr - MLX5_CAP64_DEV_MEM(
-					dev, steering_sw_icm_start_address)) >>
-			MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
-		block_map = dm->steering_sw_icm_alloc_blocks;
-		break;
-	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
-		start_idx =
-			(addr -
-			 MLX5_CAP64_DEV_MEM(
-				 dev, header_modify_sw_icm_start_address)) >>
-			MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
-		block_map = dm->header_modify_sw_icm_alloc_blocks;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
-		 MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
-	MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_SW_ICM);
-	MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
-	MLX5_SET(general_obj_in_cmd_hdr, in, uid, uid);
-
-	err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
-	if (err)
-		return err;
-
-	spin_lock(&dm->lock);
-	bitmap_clear(block_map,
-		     start_idx, num_blocks);
-	spin_unlock(&dm->lock);
-
-	return 0;
-}
-
 int mlx5_cmd_query_ext_ppcnt_counters(struct mlx5_core_dev *dev, void *out)
 {
 	u32 in[MLX5_ST_SZ_DW(ppcnt_reg)] = {};
......
@@ -65,8 +65,4 @@ int mlx5_cmd_alloc_q_counter(struct mlx5_core_dev *dev, u16 *counter_id,
 			     u16 uid);
 int mlx5_cmd_mad_ifc(struct mlx5_core_dev *dev, const void *inb, void *outb,
 		     u16 opmod, u8 port);
-int mlx5_cmd_alloc_sw_icm(struct mlx5_dm *dm, int type, u64 length,
-			  u16 uid, phys_addr_t *addr, u32 *obj_id);
-int mlx5_cmd_dealloc_sw_icm(struct mlx5_dm *dm, int type, u64 length,
-			    u16 uid, phys_addr_t addr, u32 obj_id);
 #endif /* MLX5_IB_CMD_H */
@@ -322,11 +322,11 @@ void mlx5_ib_destroy_flow_action_raw(struct mlx5_ib_flow_action *maction)
 	switch (maction->flow_action_raw.sub_type) {
 	case MLX5_IB_FLOW_ACTION_MODIFY_HEADER:
 		mlx5_modify_header_dealloc(maction->flow_action_raw.dev->mdev,
-					   maction->flow_action_raw.action_id);
+					   maction->flow_action_raw.modify_hdr);
 		break;
 	case MLX5_IB_FLOW_ACTION_PACKET_REFORMAT:
 		mlx5_packet_reformat_dealloc(maction->flow_action_raw.dev->mdev,
-					     maction->flow_action_raw.action_id);
+					     maction->flow_action_raw.pkt_reformat);
 		break;
 	case MLX5_IB_FLOW_ACTION_DECAP:
 		break;
@@ -352,10 +352,11 @@ mlx5_ib_create_modify_header(struct mlx5_ib_dev *dev,
 	if (!maction)
 		return ERR_PTR(-ENOMEM);
 
-	ret = mlx5_modify_header_alloc(dev->mdev, namespace, num_actions, in,
-				       &maction->flow_action_raw.action_id);
-	if (ret) {
+	maction->flow_action_raw.modify_hdr =
+		mlx5_modify_header_alloc(dev->mdev, namespace, num_actions, in);
+	if (IS_ERR(maction->flow_action_raw.modify_hdr)) {
+		ret = PTR_ERR(maction->flow_action_raw.modify_hdr);
 		kfree(maction);
 		return ERR_PTR(ret);
 	}
@@ -479,11 +480,13 @@ static int mlx5_ib_flow_action_create_packet_reformat_ctx(
 	if (ret)
 		return ret;
 
-	ret = mlx5_packet_reformat_alloc(dev->mdev, prm_prt, len,
-					 in, namespace,
-					 &maction->flow_action_raw.action_id);
-	if (ret)
+	maction->flow_action_raw.pkt_reformat =
+		mlx5_packet_reformat_alloc(dev->mdev, prm_prt, len,
+					   in, namespace);
+	if (IS_ERR(maction->flow_action_raw.pkt_reformat)) {
+		ret = PTR_ERR(maction->flow_action_raw.pkt_reformat);
 		return ret;
+	}
 	maction->flow_action_raw.sub_type =
 		MLX5_IB_FLOW_ACTION_PACKET_REFORMAT;
......
@@ -2280,6 +2280,7 @@ static inline int check_dm_type_support(struct mlx5_ib_dev *dev,
 			return -EOPNOTSUPP;
 		break;
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
+	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
 		if (!capable(CAP_SYS_RAWIO) ||
 		    !capable(CAP_NET_RAW))
 			return -EPERM;
@@ -2344,18 +2345,18 @@ static int handle_alloc_dm_sw_icm(struct ib_ucontext *ctx,
 				  struct uverbs_attr_bundle *attrs,
 				  int type)
 {
-	struct mlx5_dm *dm_db = &to_mdev(ctx->device)->dm;
+	struct mlx5_core_dev *dev = to_mdev(ctx->device)->mdev;
 	u64 act_size;
 	int err;
 
 	/* Allocation size must a multiple of the basic block size
 	 * and a power of 2.
 	 */
-	act_size = round_up(attr->length, MLX5_SW_ICM_BLOCK_SIZE(dm_db->dev));
+	act_size = round_up(attr->length, MLX5_SW_ICM_BLOCK_SIZE(dev));
 	act_size = roundup_pow_of_two(act_size);
 
 	dm->size = act_size;
-	err = mlx5_cmd_alloc_sw_icm(dm_db, type, act_size,
-				    to_mucontext(ctx)->devx_uid, &dm->dev_addr,
-				    &dm->icm_dm.obj_id);
+	err = mlx5_dm_sw_icm_alloc(dev, type, act_size,
+				   to_mucontext(ctx)->devx_uid, &dm->dev_addr,
+				   &dm->icm_dm.obj_id);
 	if (err)
@@ -2365,9 +2366,9 @@ static int handle_alloc_dm_sw_icm(struct ib_ucontext *ctx,
 			     MLX5_IB_ATTR_ALLOC_DM_RESP_START_OFFSET,
 			     &dm->dev_addr, sizeof(dm->dev_addr));
 	if (err)
-		mlx5_cmd_dealloc_sw_icm(dm_db, type, dm->size,
-					to_mucontext(ctx)->devx_uid,
-					dm->dev_addr, dm->icm_dm.obj_id);
+		mlx5_dm_sw_icm_dealloc(dev, type, dm->size,
+				       to_mucontext(ctx)->devx_uid, dm->dev_addr,
+				       dm->icm_dm.obj_id);
 
 	return err;
 }
@@ -2407,8 +2408,14 @@ struct ib_dm *mlx5_ib_alloc_dm(struct ib_device *ibdev,
 					    attrs);
 		break;
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
+		err = handle_alloc_dm_sw_icm(context, dm,
+					     attr, attrs,
+					     MLX5_SW_ICM_TYPE_STEERING);
+		break;
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
-		err = handle_alloc_dm_sw_icm(context, dm, attr, attrs, type);
+		err = handle_alloc_dm_sw_icm(context, dm,
+					     attr, attrs,
+					     MLX5_SW_ICM_TYPE_HEADER_MODIFY);
 		break;
 	default:
 		err = -EOPNOTSUPP;
@@ -2428,6 +2435,7 @@ int mlx5_ib_dealloc_dm(struct ib_dm *ibdm, struct uverbs_attr_bundle *attrs)
 {
 	struct mlx5_ib_ucontext *ctx = rdma_udata_to_drv_context(
 		&attrs->driver_udata, struct mlx5_ib_ucontext, ibucontext);
+	struct mlx5_core_dev *dev = to_mdev(ibdm->device)->mdev;
 	struct mlx5_dm *dm_db = &to_mdev(ibdm->device)->dm;
 	struct mlx5_ib_dm *dm = to_mdm(ibdm);
 	u32 page_idx;
@@ -2439,18 +2447,22 @@ int mlx5_ib_dealloc_dm(struct ib_dm *ibdm, struct uverbs_attr_bundle *attrs)
 		if (ret)
 			return ret;
 
-		page_idx = (dm->dev_addr -
-			    pci_resource_start(dm_db->dev->pdev, 0) -
-			    MLX5_CAP64_DEV_MEM(dm_db->dev,
-					       memic_bar_start_addr)) >>
-			   PAGE_SHIFT;
+		page_idx = (dm->dev_addr - pci_resource_start(dev->pdev, 0) -
+			    MLX5_CAP64_DEV_MEM(dev, memic_bar_start_addr)) >>
+			   PAGE_SHIFT;
 		bitmap_clear(ctx->dm_pages, page_idx,
 			     DIV_ROUND_UP(dm->size, PAGE_SIZE));
 		break;
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
+		ret = mlx5_dm_sw_icm_dealloc(dev, MLX5_SW_ICM_TYPE_STEERING,
+					     dm->size, ctx->devx_uid, dm->dev_addr,
+					     dm->icm_dm.obj_id);
+		if (ret)
+			return ret;
+		break;
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
-		ret = mlx5_cmd_dealloc_sw_icm(dm_db, dm->type, dm->size,
-					      ctx->devx_uid, dm->dev_addr,
-					      dm->icm_dm.obj_id);
+		ret = mlx5_dm_sw_icm_dealloc(dev, MLX5_SW_ICM_TYPE_HEADER_MODIFY,
+					     dm->size, ctx->devx_uid, dm->dev_addr,
+					     dm->icm_dm.obj_id);
 		if (ret)
 			return ret;
@@ -2646,7 +2658,8 @@ int parse_flow_flow_action(struct mlx5_ib_flow_action *maction,
 		if (action->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
 			return -EINVAL;
 		action->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
-		action->modify_id = maction->flow_action_raw.action_id;
+		action->modify_hdr =
+			maction->flow_action_raw.modify_hdr;
 		return 0;
 	}
 	if (maction->flow_action_raw.sub_type ==
@@ -2663,8 +2676,8 @@ int parse_flow_flow_action(struct mlx5_ib_flow_action *maction,
 			return -EINVAL;
 		action->action |=
 			MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		action->reformat_id =
-			maction->flow_action_raw.action_id;
+		action->pkt_reformat =
+			maction->flow_action_raw.pkt_reformat;
 		return 0;
 	}
 	/* fall through */
@@ -6096,8 +6109,6 @@ static struct ib_counters *mlx5_ib_create_counters(struct ib_device *device,
 static void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev)
 {
-	struct mlx5_core_dev *mdev = dev->mdev;
-
 	mlx5_ib_cleanup_multiport_master(dev);
 	if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) {
 		srcu_barrier(&dev->mr_srcu);
@@ -6105,29 +6116,11 @@ static void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev)
 	}
 
 	WARN_ON(!bitmap_empty(dev->dm.memic_alloc_pages, MLX5_MAX_MEMIC_PAGES));
-	WARN_ON(dev->dm.steering_sw_icm_alloc_blocks &&
-		!bitmap_empty(
-			dev->dm.steering_sw_icm_alloc_blocks,
-			BIT(MLX5_CAP_DEV_MEM(mdev, log_steering_sw_icm_size) -
-			    MLX5_LOG_SW_ICM_BLOCK_SIZE(mdev))));
-	kfree(dev->dm.steering_sw_icm_alloc_blocks);
-	WARN_ON(dev->dm.header_modify_sw_icm_alloc_blocks &&
-		!bitmap_empty(dev->dm.header_modify_sw_icm_alloc_blocks,
-			      BIT(MLX5_CAP_DEV_MEM(
-					  mdev, log_header_modify_sw_icm_size) -
-				  MLX5_LOG_SW_ICM_BLOCK_SIZE(mdev))));
-	kfree(dev->dm.header_modify_sw_icm_alloc_blocks);
 }
 
 static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 {
 	struct mlx5_core_dev *mdev = dev->mdev;
-	u64 header_modify_icm_blocks = 0;
-	u64 steering_icm_blocks = 0;
 	int err;
 	int i;
@@ -6174,51 +6167,17 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 	INIT_LIST_HEAD(&dev->qp_list);
 	spin_lock_init(&dev->reset_flow_resource_lock);
 
-	if (MLX5_CAP_GEN_64(mdev, general_obj_types) &
-	    MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM) {
-		if (MLX5_CAP64_DEV_MEM(mdev, steering_sw_icm_start_address)) {
-			steering_icm_blocks =
-				BIT(MLX5_CAP_DEV_MEM(mdev,
-						     log_steering_sw_icm_size) -
-				    MLX5_LOG_SW_ICM_BLOCK_SIZE(mdev));
-			dev->dm.steering_sw_icm_alloc_blocks =
-				kcalloc(BITS_TO_LONGS(steering_icm_blocks),
-					sizeof(unsigned long), GFP_KERNEL);
-			if (!dev->dm.steering_sw_icm_alloc_blocks)
-				goto err_mp;
-		}
-		if (MLX5_CAP64_DEV_MEM(mdev,
-				       header_modify_sw_icm_start_address)) {
-			header_modify_icm_blocks = BIT(
-				MLX5_CAP_DEV_MEM(
-					mdev, log_header_modify_sw_icm_size) -
-				MLX5_LOG_SW_ICM_BLOCK_SIZE(mdev));
-			dev->dm.header_modify_sw_icm_alloc_blocks =
-				kcalloc(BITS_TO_LONGS(header_modify_icm_blocks),
-					sizeof(unsigned long), GFP_KERNEL);
-			if (!dev->dm.header_modify_sw_icm_alloc_blocks)
-				goto err_dm;
-		}
-	}
-
 	spin_lock_init(&dev->dm.lock);
 	dev->dm.dev = mdev;
 
 	if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) {
 		err = init_srcu_struct(&dev->mr_srcu);
 		if (err)
-			goto err_dm;
+			goto err_mp;
 	}
 
 	return 0;
 
-err_dm:
-	kfree(dev->dm.steering_sw_icm_alloc_blocks);
-	kfree(dev->dm.header_modify_sw_icm_alloc_blocks);
-
 err_mp:
 	mlx5_ib_cleanup_multiport_master(dev);
......
@@ -868,7 +868,10 @@ struct mlx5_ib_flow_action {
 		struct {
 			struct mlx5_ib_dev *dev;
 			u32 sub_type;
-			u32 action_id;
+			union {
+				struct mlx5_modify_hdr *modify_hdr;
+				struct mlx5_pkt_reformat *pkt_reformat;
+			};
 		} flow_action_raw;
 	};
 };
@@ -881,8 +884,6 @@ struct mlx5_dm {
 	 */
 	spinlock_t lock;
 	DECLARE_BITMAP(memic_alloc_pages, MLX5_MAX_MEMIC_PAGES);
-	unsigned long *steering_sw_icm_alloc_blocks;
-	unsigned long *header_modify_sw_icm_alloc_blocks;
 };
 
 struct mlx5_read_counters_attr {
......
@@ -154,3 +154,10 @@ config MLX5_EN_TLS
 	  Build support for TLS cryptography-offload accelaration in the NIC.
 	  Note: Support for hardware with this capability needs to be selected
 	  for this option to become available.
+
+config MLX5_SW_STEERING
+	bool "Mellanox Technologies software-managed steering"
+	depends on MLX5_CORE_EN && MLX5_ESWITCH
+	default y
+	help
+	  Build support for software-managed steering in the NIC.
@@ -15,7 +15,7 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
 		health.o mcg.o cq.o alloc.o qp.o port.o mr.o pd.o \
 		transobj.o vport.o sriov.o fs_cmd.o fs_core.o pci_irq.o \
 		fs_counters.o rl.o lag.o dev.o events.o wq.o lib/gid.o \
-		lib/devcom.o lib/pci_vsc.o diag/fs_tracepoint.o \
+		lib/devcom.o lib/pci_vsc.o lib/dm.o diag/fs_tracepoint.o \
 		diag/fw_tracer.o diag/crdump.o devlink.o
 
 #
@@ -67,3 +67,10 @@ mlx5_core-$(CONFIG_MLX5_EN_IPSEC) += en_accel/ipsec.o en_accel/ipsec_rxtx.o \
 mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/tls.o en_accel/tls_rxtx.o en_accel/tls_stats.o \
 				   en_accel/ktls.o en_accel/ktls_tx.o
+
+mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o \
+					steering/dr_matcher.o steering/dr_rule.o \
+					steering/dr_icm_pool.o steering/dr_crc32.o \
+					steering/dr_ste.o steering/dr_send.o \
+					steering/dr_cmd.o steering/dr_fw.o \
+					steering/dr_action.o steering/fs_dr.o
@@ -4,6 +4,7 @@
 #include <devlink.h>
 
 #include "mlx5_core.h"
+#include "fs_core.h"
 #include "eswitch.h"
 
 static int mlx5_devlink_flash_update(struct devlink *devlink,
@@ -107,12 +108,121 @@ void mlx5_devlink_free(struct devlink *devlink)
 	devlink_free(devlink);
 }
 
+static int mlx5_devlink_fs_mode_validate(struct devlink *devlink, u32 id,
+					 union devlink_param_value val,
+					 struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	char *value = val.vstr;
+	int err = 0;
+
+	if (!strcmp(value, "dmfs")) {
+		return 0;
+	} else if (!strcmp(value, "smfs")) {
+		u8 eswitch_mode;
+		bool smfs_cap;
+
+		eswitch_mode = mlx5_eswitch_mode(dev->priv.eswitch);
+		smfs_cap = mlx5_fs_dr_is_supported(dev);
+
+		if (!smfs_cap) {
+			err = -EOPNOTSUPP;
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Software managed steering is not supported by current device");
+		} else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Software managed steering is not supported when eswitch offloads enabled.");
+			err = -EOPNOTSUPP;
+		}
+	} else {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
+		err = -EINVAL;
+	}
+
+	return err;
+}
+
+static int mlx5_devlink_fs_mode_set(struct devlink *devlink, u32 id,
+				    struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	enum mlx5_flow_steering_mode mode;
+
+	if (!strcmp(ctx->val.vstr, "smfs"))
+		mode = MLX5_FLOW_STEERING_MODE_SMFS;
+	else
+		mode = MLX5_FLOW_STEERING_MODE_DMFS;
+	dev->priv.steering->mode = mode;
+
+	return 0;
+}
+
+static int mlx5_devlink_fs_mode_get(struct devlink *devlink, u32 id,
+				    struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+	if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
+		strcpy(ctx->val.vstr, "smfs");
+	else
+		strcpy(ctx->val.vstr, "dmfs");
+	return 0;
+}
+
+enum mlx5_devlink_param_id {
+	MLX5_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+	MLX5_DEVLINK_PARAM_FLOW_STEERING_MODE,
+};
+
+static const struct devlink_param mlx5_devlink_params[] = {
+	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_FLOW_STEERING_MODE,
+			     "flow_steering_mode", DEVLINK_PARAM_TYPE_STRING,
+			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+			     mlx5_devlink_fs_mode_get, mlx5_devlink_fs_mode_set,
+			     mlx5_devlink_fs_mode_validate),
+};
+
+static void mlx5_devlink_set_params_init_values(struct devlink *devlink)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	union devlink_param_value value;
+
+	if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_DMFS)
+		strcpy(value.vstr, "dmfs");
+	else
+		strcpy(value.vstr, "smfs");
+	devlink_param_driverinit_value_set(devlink,
+					   MLX5_DEVLINK_PARAM_FLOW_STEERING_MODE,
+					   value);
+}
+
 int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
 {
-	return devlink_register(devlink, dev);
+	int err;
+
+	err = devlink_register(devlink, dev);
+	if (err)
+		return err;
+
+	err = devlink_params_register(devlink, mlx5_devlink_params,
+				      ARRAY_SIZE(mlx5_devlink_params));
+	if (err)
+		goto params_reg_err;
+	mlx5_devlink_set_params_init_values(devlink);
+	devlink_params_publish(devlink);
+	return 0;
+
+params_reg_err:
+	devlink_unregister(devlink);
+	return err;
 }
 
 void mlx5_devlink_unregister(struct devlink *devlink)
 {
+	devlink_params_unregister(devlink, mlx5_devlink_params,
+				  ARRAY_SIZE(mlx5_devlink_params));
 	devlink_unregister(devlink);
 }
} }
@@ -291,14 +291,14 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
 		 */
 		goto out;
 	}
 
-	err = mlx5_packet_reformat_alloc(priv->mdev,
-					 e->reformat_type,
-					 ipv4_encap_size, encap_header,
-					 MLX5_FLOW_NAMESPACE_FDB,
-					 &e->encap_id);
-	if (err)
+	e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev,
+						     e->reformat_type,
+						     ipv4_encap_size, encap_header,
+						     MLX5_FLOW_NAMESPACE_FDB);
+	if (IS_ERR(e->pkt_reformat)) {
+		err = PTR_ERR(e->pkt_reformat);
 		goto destroy_neigh_entry;
+	}
 
 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
@@ -407,13 +407,14 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
 		goto out;
 	}
 
-	err = mlx5_packet_reformat_alloc(priv->mdev,
-					 e->reformat_type,
-					 ipv6_encap_size, encap_header,
-					 MLX5_FLOW_NAMESPACE_FDB,
-					 &e->encap_id);
-	if (err)
+	e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev,
						     e->reformat_type,
+						     ipv6_encap_size, encap_header,
+						     MLX5_FLOW_NAMESPACE_FDB);
+	if (IS_ERR(e->pkt_reformat)) {
+		err = PTR_ERR(e->pkt_reformat);
 		goto destroy_neigh_entry;
+	}
 
 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
 	mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev));
......
@@ -161,7 +161,7 @@ struct mlx5e_encap_entry {
 	 */
 	struct hlist_node encap_hlist;
 	struct list_head flows;
-	u32 encap_id;
+	struct mlx5_pkt_reformat *pkt_reformat;
 	const struct ip_tunnel_info *tun_info;
 	unsigned char h_dest[ETH_ALEN];	/* destination eth addr	*/
......
@@ -61,7 +61,7 @@
 struct mlx5_nic_flow_attr {
 	u32 action;
 	u32 flow_tag;
-	u32 mod_hdr_id;
+	struct mlx5_modify_hdr *modify_hdr;
 	u32 hairpin_tirn;
 	u8 match_level;
 	struct mlx5_flow_table	*hairpin_ft;
@@ -201,7 +201,7 @@ struct mlx5e_mod_hdr_entry {
 	struct mod_hdr_key key;
 
-	u32 mod_hdr_id;
+	struct mlx5_modify_hdr *modify_hdr;
 
 	refcount_t refcnt;
 	struct completion res_ready;
@@ -334,7 +334,7 @@ static void mlx5e_mod_hdr_put(struct mlx5e_priv *priv,
 	WARN_ON(!list_empty(&mh->flows));
 
 	if (mh->compl_result > 0)
-		mlx5_modify_header_dealloc(priv->mdev, mh->mod_hdr_id);
+		mlx5_modify_header_dealloc(priv->mdev, mh->modify_hdr);
 
 	kfree(mh);
 }
@@ -395,11 +395,11 @@ static int mlx5e_attach_mod_hdr(struct mlx5e_priv *priv,
 	hash_add(tbl->hlist, &mh->mod_hdr_hlist, hash_key);
 	mutex_unlock(&tbl->lock);
 
-	err = mlx5_modify_header_alloc(priv->mdev, namespace,
-				       mh->key.num_actions,
-				       mh->key.actions,
-				       &mh->mod_hdr_id);
-	if (err) {
+	mh->modify_hdr = mlx5_modify_header_alloc(priv->mdev, namespace,
+						  mh->key.num_actions,
+						  mh->key.actions);
+	if (IS_ERR(mh->modify_hdr)) {
+		err = PTR_ERR(mh->modify_hdr);
 		mh->compl_result = err;
 		goto alloc_header_err;
 	}
@@ -412,9 +412,9 @@ static int mlx5e_attach_mod_hdr(struct mlx5e_priv *priv,
 	list_add(&flow->mod_hdr, &mh->flows);
 	spin_unlock(&mh->flows_lock);
 
 	if (mlx5e_is_eswitch_flow(flow))
-		flow->esw_attr->mod_hdr_id = mh->mod_hdr_id;
+		flow->esw_attr->modify_hdr = mh->modify_hdr;
 	else
-		flow->nic_attr->mod_hdr_id = mh->mod_hdr_id;
+		flow->nic_attr->modify_hdr = mh->modify_hdr;
 
 	return 0;
@@ -906,7 +906,6 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
 	struct mlx5_flow_destination dest[2] = {};
 	struct mlx5_flow_act flow_act = {
 		.action = attr->action,
-		.reformat_id = 0,
 		.flags    = FLOW_ACT_NO_APPEND,
 	};
 	struct mlx5_fc *counter = NULL;
@@ -947,7 +946,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
 	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
 		err = mlx5e_attach_mod_hdr(priv, flow, parse_attr);
-		flow_act.modify_id = attr->mod_hdr_id;
+		flow_act.modify_hdr = attr->modify_hdr;
 		kfree(parse_attr->mod_hdr_actions);
 		if (err)
 			return err;
@@ -1304,14 +1303,13 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
 	struct mlx5e_tc_flow *flow;
 	int err;
 
-	err = mlx5_packet_reformat_alloc(priv->mdev,
-					 e->reformat_type,
-					 e->encap_size, e->encap_header,
-					 MLX5_FLOW_NAMESPACE_FDB,
-					 &e->encap_id);
-	if (err) {
-		mlx5_core_warn(priv->mdev, "Failed to offload cached encapsulation header, %d\n",
-			       err);
+	e->pkt_reformat = mlx5_packet_reformat_alloc(priv->mdev,
+						     e->reformat_type,
+						     e->encap_size, e->encap_header,
+						     MLX5_FLOW_NAMESPACE_FDB);
+	if (IS_ERR(e->pkt_reformat)) {
+		mlx5_core_warn(priv->mdev, "Failed to offload cached encapsulation header, %lu\n",
+			       PTR_ERR(e->pkt_reformat));
 		return;
 	}
 	e->flags |= MLX5_ENCAP_ENTRY_VALID;
@@ -1326,7 +1324,7 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
 		esw_attr = flow->esw_attr;
 		spec = &esw_attr->parse_attr->spec;
 
-		esw_attr->dests[flow->tmp_efi_index].encap_id = e->encap_id;
+		esw_attr->dests[flow->tmp_efi_index].pkt_reformat = e->pkt_reformat;
 		esw_attr->dests[flow->tmp_efi_index].flags |= MLX5_ESW_DEST_ENCAP_VALID;
 		/* Flow can be associated with multiple encap entries.
 		 * Before offloading the flow verify that all of them have
@@ -1395,7 +1393,7 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
 	/* we know that the encap is valid */
 	e->flags &= ~MLX5_ENCAP_ENTRY_VALID;
-	mlx5_packet_reformat_dealloc(priv->mdev, e->encap_id);
+	mlx5_packet_reformat_dealloc(priv->mdev, e->pkt_reformat);
 }
 
 static struct mlx5_fc *mlx5e_tc_get_counter(struct mlx5e_tc_flow *flow)
@@ -1561,7 +1559,7 @@ static void mlx5e_encap_dealloc(struct mlx5e_priv *priv, struct mlx5e_encap_entr
 		mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e);
 
 		if (e->flags & MLX5_ENCAP_ENTRY_VALID)
-			mlx5_packet_reformat_dealloc(priv->mdev, e->encap_id);
+			mlx5_packet_reformat_dealloc(priv->mdev, e->pkt_reformat);
 	}
 
 	kfree(e->encap_header);
@@ -1896,7 +1894,10 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			*match_level = MLX5_MATCH_L2;
 		}
 	} else if (*match_level != MLX5_MATCH_NONE) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1);
+		/* cvlan_tag enabled in match criteria and
+		 * disabled in match value means both S & C tags
+		 * don't exist (untagged of both)
+		 */
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
 		*match_level = MLX5_MATCH_L2;
 	}
@@ -3045,7 +3046,7 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 	flow->encaps[out_index].index = out_index;
 	*encap_dev = e->out_dev;
 	if (e->flags & MLX5_ENCAP_ENTRY_VALID) {
-		attr->dests[out_index].encap_id = e->encap_id;
+		attr->dests[out_index].pkt_reformat = e->pkt_reformat;
 		attr->dests[out_index].flags |= MLX5_ESW_DEST_ENCAP_VALID;
 		*encap_valid = true;
 	} else {
......
@@ -69,7 +69,7 @@ struct vport_ingress {
 	struct mlx5_flow_group *allow_spoofchk_only_grp;
 	struct mlx5_flow_group *allow_untagged_only_grp;
 	struct mlx5_flow_group *drop_grp;
-	int modify_metadata_id;
+	struct mlx5_modify_hdr *modify_metadata;
 	struct mlx5_flow_handle  *modify_metadata_rule;
 	struct mlx5_flow_handle  *allow_rule;
 	struct mlx5_flow_handle  *drop_rule;
@@ -153,6 +153,7 @@ struct mlx5_eswitch_fdb {
 	} legacy;
 
 	struct offloads_fdb {
+		struct mlx5_flow_namespace *ns;
 		struct mlx5_flow_table *slow_fdb;
 		struct mlx5_flow_group *send_to_vport_grp;
 		struct mlx5_flow_group *peer_miss_grp;
@@ -385,11 +386,11 @@ struct mlx5_esw_flow_attr {
 	struct {
 		u32 flags;
 		struct mlx5_eswitch_rep *rep;
+		struct mlx5_pkt_reformat *pkt_reformat;
 		struct mlx5_core_dev *mdev;
-		u32 encap_id;
 		struct mlx5_termtbl_handle *termtbl;
 	} dests[MLX5_MAX_FLOW_FWD_VPORTS];
-	u32	mod_hdr_id;
+	struct mlx5_modify_hdr *modify_hdr;
 	u8	inner_match_level;
 	u8	outer_match_level;
 	struct mlx5_fc *counter;
......
@@ -190,10 +190,10 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
 					MLX5_FLOW_DEST_VPORT_VHCA_ID;
 			if (attr->dests[j].flags & MLX5_ESW_DEST_ENCAP) {
 				flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-				flow_act.reformat_id = attr->dests[j].encap_id;
+				flow_act.pkt_reformat = attr->dests[j].pkt_reformat;
 				dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_REFORMAT_ID;
-				dest[i].vport.reformat_id =
-					attr->dests[j].encap_id;
+				dest[i].vport.pkt_reformat =
+					attr->dests[j].pkt_reformat;
 			}
 			i++;
 		}
@@ -213,7 +213,7 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
 		spec->match_criteria_enable |= MLX5_MATCH_INNER_HEADERS;
 
 	if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
-		flow_act.modify_id = attr->mod_hdr_id;
+		flow_act.modify_hdr = attr->modify_hdr;
 
 	fdb = esw_get_prio_table(esw, attr->chain, attr->prio, !!split);
 	if (IS_ERR(fdb)) {
@@ -276,7 +276,7 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
 		dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
 		if (attr->dests[i].flags & MLX5_ESW_DEST_ENCAP) {
 			dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_REFORMAT_ID;
-			dest[i].vport.reformat_id = attr->dests[i].encap_id;
+			dest[i].vport.pkt_reformat = attr->dests[i].pkt_reformat;
 		}
 	}
 	dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
@@ -1068,6 +1068,13 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
 		err = -EOPNOTSUPP;
 		goto ns_err;
 	}
+	esw->fdb_table.offloads.ns = root_ns;
+	err = mlx5_flow_namespace_set_mode(root_ns,
+					   esw->dev->priv.steering->mode);
+	if (err) {
+		esw_warn(dev, "Failed to set FDB namespace steering mode\n");
+		goto ns_err;
+	}
 
 	max_flow_counter = (MLX5_CAP_GEN(dev, max_flow_counter_31_16) << 16) |
 			    MLX5_CAP_GEN(dev, max_flow_counter_15_0);
@@ -1207,6 +1214,8 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
 	esw_destroy_offloads_fast_fdb_tables(esw);
 	mlx5_destroy_flow_table(esw->fdb_table.offloads.slow_fdb);
 slow_fdb_err:
+	/* Holds true only as long as DMFS is the default */
+	mlx5_flow_namespace_set_mode(root_ns, MLX5_FLOW_STEERING_MODE_DMFS);
 ns_err:
 	kvfree(flow_group_in);
 	return err;
@@ -1226,6 +1235,9 @@ static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw)
 	mlx5_destroy_flow_table(esw->fdb_table.offloads.slow_fdb);
 	esw_destroy_offloads_fast_fdb_tables(esw);
+	/* Holds true only as long as DMFS is the default */
+	mlx5_flow_namespace_set_mode(esw->fdb_table.offloads.ns,
+				     MLX5_FLOW_STEERING_MODE_DMFS);
 }
 
 static int esw_create_offloads_table(struct mlx5_eswitch *esw, int nvports)
@@ -1623,13 +1635,42 @@ static void mlx5_esw_offloads_unpair(struct mlx5_eswitch *esw)
 	esw_del_fdb_peer_miss_rules(esw);
 }
 
+static int mlx5_esw_offloads_set_ns_peer(struct mlx5_eswitch *esw,
+					 struct mlx5_eswitch *peer_esw,
+					 bool pair)
+{
+	struct mlx5_flow_root_namespace *peer_ns;
+	struct mlx5_flow_root_namespace *ns;
+	int err;
+
+	peer_ns = peer_esw->dev->priv.steering->fdb_root_ns;
+	ns = esw->dev->priv.steering->fdb_root_ns;
+
+	if (pair) {
+		err = mlx5_flow_namespace_set_peer(ns, peer_ns);
+		if (err)
+			return err;
+
+		err = mlx5_flow_namespace_set_peer(peer_ns, ns);
+		if (err) {
+			mlx5_flow_namespace_set_peer(ns, NULL);
+			return err;
+		}
+	} else {
+		mlx5_flow_namespace_set_peer(ns, NULL);
+		mlx5_flow_namespace_set_peer(peer_ns, NULL);
+	}
+
+	return 0;
+}
+
 static int mlx5_esw_offloads_devcom_event(int event,
 					  void *my_data,
 					  void *event_data)
 {
 	struct mlx5_eswitch *esw = my_data;
-	struct mlx5_eswitch *peer_esw = event_data;
 	struct mlx5_devcom *devcom = esw->dev->priv.devcom;
+	struct mlx5_eswitch *peer_esw = event_data;
 	int err;
 
 	switch (event) {
@@ -1638,9 +1679,12 @@ static int mlx5_esw_offloads_devcom_event(int event,
 		    mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
 			break;
 
-		err = mlx5_esw_offloads_pair(esw, peer_esw);
+		err = mlx5_esw_offloads_set_ns_peer(esw, peer_esw, true);
 		if (err)
 			goto err_out;
+		err = mlx5_esw_offloads_pair(esw, peer_esw);
+		if (err)
+			goto err_peer;
 
 		err = mlx5_esw_offloads_pair(peer_esw, esw);
 		if (err)
@@ -1656,6 +1700,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
 		mlx5_devcom_set_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS, false);
 		mlx5_esw_offloads_unpair(peer_esw);
 		mlx5_esw_offloads_unpair(esw);
+		mlx5_esw_offloads_set_ns_peer(esw, peer_esw, false);
 		break;
 	}
 
@@ -1663,7 +1708,8 @@ static int mlx5_esw_offloads_devcom_event(int event,
 err_pair:
 	mlx5_esw_offloads_unpair(esw);
+err_peer:
+	mlx5_esw_offloads_set_ns_peer(esw, peer_esw, false);
 err_out:
 	mlx5_core_err(esw->dev, "esw offloads devcom event failure, event %u err %d",
 		      event, err);
@@ -1734,7 +1780,7 @@ static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
 	if (vport->ingress.modify_metadata_rule) {
 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
-		flow_act.modify_id = vport->ingress.modify_metadata_id;
+		flow_act.modify_hdr = vport->ingress.modify_metadata;
 	}
 
 	vport->ingress.allow_rule =
@@ -1770,9 +1816,11 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	MLX5_SET(set_action_in, action, data,
 		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport->vport));
 
-	err = mlx5_modify_header_alloc(esw->dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
-				       1, action, &vport->ingress.modify_metadata_id);
-	if (err) {
+	vport->ingress.modify_metadata =
+		mlx5_modify_header_alloc(esw->dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+					 1, action);
+	if (IS_ERR(vport->ingress.modify_metadata)) {
+		err = PTR_ERR(vport->ingress.modify_metadata);
 		esw_warn(esw->dev,
 			 "failed to alloc modify header for vport %d ingress acl (%d)\n",
 			 vport->vport, err);
@@ -1780,7 +1828,7 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	}
 
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_ALLOW;
-	flow_act.modify_id = vport->ingress.modify_metadata_id;
+	flow_act.modify_hdr = vport->ingress.modify_metadata;
 	vport->ingress.modify_metadata_rule = mlx5_add_flow_rules(vport->ingress.acl,
 								  &spec, &flow_act, NULL, 0);
 	if (IS_ERR(vport->ingress.modify_metadata_rule)) {
@@ -1794,7 +1842,7 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 out:
 	if (err)
-		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata_id);
+		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
 	return err;
 }
@@ -1803,7 +1851,7 @@ void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 {
 	if (vport->ingress.modify_metadata_rule) {
 		mlx5_del_flow_rules(vport->ingress.modify_metadata_rule);
-		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata_id);
+		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
 		vport->ingress.modify_metadata_rule = NULL;
 	}
@@ -2113,9 +2161,10 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	else
 		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
 
+	mlx5_rdma_enable_roce(esw->dev);
 	err = esw_offloads_steering_init(esw);
 	if (err)
-		return err;
+		goto err_steering_init;
 
 	err = esw_set_passing_vport_metadata(esw, true);
 	if (err)
@@ -2130,8 +2179,6 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	esw_offloads_devcom_init(esw);
 	mutex_init(&esw->offloads.termtbl_mutex);
 
-	mlx5_rdma_enable_roce(esw->dev);
-
 	return 0;
 
 err_reps:
@@ -2139,6 +2186,8 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	esw_set_passing_vport_metadata(esw, false);
 err_vport_metadata:
 	esw_offloads_steering_cleanup(esw);
+err_steering_init:
+	mlx5_rdma_disable_roce(esw->dev);
 	return err;
 }
@@ -2163,12 +2212,12 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
 void esw_offloads_disable(struct mlx5_eswitch *esw)
 {
-	mlx5_rdma_disable_roce(esw->dev);
 	esw_offloads_devcom_cleanup(esw);
 	esw_offloads_unload_all_reps(esw);
 	mlx5_eswitch_disable_pf_vf_vports(esw);
 	esw_set_passing_vport_metadata(esw, false);
 	esw_offloads_steering_cleanup(esw);
+	mlx5_rdma_disable_roce(esw->dev);
 	esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
 }
......
...@@ -107,6 +107,50 @@ static int mlx5_cmd_stub_delete_fte(struct mlx5_flow_root_namespace *ns, ...@@ -107,6 +107,50 @@ static int mlx5_cmd_stub_delete_fte(struct mlx5_flow_root_namespace *ns,
return 0; return 0;
} }
static int mlx5_cmd_stub_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns,
int reformat_type,
size_t size,
void *reformat_data,
enum mlx5_flow_namespace_type namespace,
struct mlx5_pkt_reformat *pkt_reformat)
{
return 0;
}
static void mlx5_cmd_stub_packet_reformat_dealloc(struct mlx5_flow_root_namespace *ns,
struct mlx5_pkt_reformat *pkt_reformat)
{
}
static int mlx5_cmd_stub_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
u8 namespace, u8 num_actions,
void *modify_actions,
struct mlx5_modify_hdr *modify_hdr)
{
return 0;
}
static void mlx5_cmd_stub_modify_header_dealloc(struct mlx5_flow_root_namespace *ns,
struct mlx5_modify_hdr *modify_hdr)
{
}
static int mlx5_cmd_stub_set_peer(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_root_namespace *peer_ns)
{
return 0;
}
static int mlx5_cmd_stub_create_ns(struct mlx5_flow_root_namespace *ns)
{
return 0;
}
static int mlx5_cmd_stub_destroy_ns(struct mlx5_flow_root_namespace *ns)
{
return 0;
}
static int mlx5_cmd_update_root_ft(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_table *ft, u32 underlay_qpn,
bool disconnect)
@@ -412,11 +456,13 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
} else {
MLX5_SET(flow_context, in_flow_context, action,
fte->action.action);
-MLX5_SET(flow_context, in_flow_context, packet_reformat_id,
-fte->action.reformat_id);
+if (fte->action.pkt_reformat)
+MLX5_SET(flow_context, in_flow_context, packet_reformat_id,
+fte->action.pkt_reformat->id);
}
-MLX5_SET(flow_context, in_flow_context, modify_header_id,
-fte->action.modify_id);
+if (fte->action.modify_hdr)
+MLX5_SET(flow_context, in_flow_context, modify_header_id,
+fte->action.modify_hdr->id);
vlan = MLX5_ADDR_OF(flow_context, in_flow_context, push_vlan);
@@ -468,7 +514,7 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
MLX5_FLOW_DEST_VPORT_REFORMAT_ID));
MLX5_SET(extended_dest_format, in_dests,
packet_reformat_id,
-dst->dest_attr.vport.reformat_id);
+dst->dest_attr.vport.pkt_reformat->id);
}
break;
default:
@@ -643,14 +689,15 @@ int mlx5_cmd_fc_bulk_query(struct mlx5_core_dev *dev, u32 base_id, int bulk_len,
return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
}
-int mlx5_packet_reformat_alloc(struct mlx5_core_dev *dev,
-int reformat_type,
-size_t size,
-void *reformat_data,
-enum mlx5_flow_namespace_type namespace,
-u32 *packet_reformat_id)
+static int mlx5_cmd_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns,
+int reformat_type,
+size_t size,
+void *reformat_data,
+enum mlx5_flow_namespace_type namespace,
+struct mlx5_pkt_reformat *pkt_reformat)
{
u32 out[MLX5_ST_SZ_DW(alloc_packet_reformat_context_out)];
struct mlx5_core_dev *dev = ns->dev;
void *packet_reformat_context_in;
int max_encap_size;
void *reformat;
@@ -693,35 +740,36 @@ int mlx5_packet_reformat_alloc(struct mlx5_core_dev *dev,
memset(out, 0, sizeof(out));
err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
-*packet_reformat_id = MLX5_GET(alloc_packet_reformat_context_out,
-out, packet_reformat_id);
+pkt_reformat->id = MLX5_GET(alloc_packet_reformat_context_out,
+out, packet_reformat_id);
kfree(in);
return err;
}
-EXPORT_SYMBOL(mlx5_packet_reformat_alloc);
-void mlx5_packet_reformat_dealloc(struct mlx5_core_dev *dev,
-u32 packet_reformat_id)
+static void mlx5_cmd_packet_reformat_dealloc(struct mlx5_flow_root_namespace *ns,
+struct mlx5_pkt_reformat *pkt_reformat)
{
u32 in[MLX5_ST_SZ_DW(dealloc_packet_reformat_context_in)];
u32 out[MLX5_ST_SZ_DW(dealloc_packet_reformat_context_out)];
struct mlx5_core_dev *dev = ns->dev;
memset(in, 0, sizeof(in));
MLX5_SET(dealloc_packet_reformat_context_in, in, opcode,
MLX5_CMD_OP_DEALLOC_PACKET_REFORMAT_CONTEXT);
MLX5_SET(dealloc_packet_reformat_context_in, in, packet_reformat_id,
-packet_reformat_id);
+pkt_reformat->id);
mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
}
-EXPORT_SYMBOL(mlx5_packet_reformat_dealloc);
-int mlx5_modify_header_alloc(struct mlx5_core_dev *dev,
-u8 namespace, u8 num_actions,
-void *modify_actions, u32 *modify_header_id)
+static int mlx5_cmd_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
+u8 namespace, u8 num_actions,
+void *modify_actions,
+struct mlx5_modify_hdr *modify_hdr)
{
u32 out[MLX5_ST_SZ_DW(alloc_modify_header_context_out)];
int max_actions, actions_size, inlen, err;
struct mlx5_core_dev *dev = ns->dev;
void *actions_in;
u8 table_type;
u32 *in;
@@ -772,26 +820,26 @@ int mlx5_modify_header_alloc(struct mlx5_core_dev *dev,
memset(out, 0, sizeof(out));
err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
-*modify_header_id = MLX5_GET(alloc_modify_header_context_out, out, modify_header_id);
+modify_hdr->id = MLX5_GET(alloc_modify_header_context_out, out, modify_header_id);
kfree(in);
return err;
}
-EXPORT_SYMBOL(mlx5_modify_header_alloc);
-void mlx5_modify_header_dealloc(struct mlx5_core_dev *dev, u32 modify_header_id)
+static void mlx5_cmd_modify_header_dealloc(struct mlx5_flow_root_namespace *ns,
+struct mlx5_modify_hdr *modify_hdr)
{
u32 in[MLX5_ST_SZ_DW(dealloc_modify_header_context_in)];
u32 out[MLX5_ST_SZ_DW(dealloc_modify_header_context_out)];
struct mlx5_core_dev *dev = ns->dev;
memset(in, 0, sizeof(in));
MLX5_SET(dealloc_modify_header_context_in, in, opcode,
MLX5_CMD_OP_DEALLOC_MODIFY_HEADER_CONTEXT);
MLX5_SET(dealloc_modify_header_context_in, in, modify_header_id,
-modify_header_id);
+modify_hdr->id);
mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
}
-EXPORT_SYMBOL(mlx5_modify_header_dealloc);
static const struct mlx5_flow_cmds mlx5_flow_cmds = {
.create_flow_table = mlx5_cmd_create_flow_table,
@@ -803,6 +851,13 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds = {
.update_fte = mlx5_cmd_update_fte,
.delete_fte = mlx5_cmd_delete_fte,
.update_root_ft = mlx5_cmd_update_root_ft,
.packet_reformat_alloc = mlx5_cmd_packet_reformat_alloc,
.packet_reformat_dealloc = mlx5_cmd_packet_reformat_dealloc,
.modify_header_alloc = mlx5_cmd_modify_header_alloc,
.modify_header_dealloc = mlx5_cmd_modify_header_dealloc,
.set_peer = mlx5_cmd_stub_set_peer,
.create_ns = mlx5_cmd_stub_create_ns,
.destroy_ns = mlx5_cmd_stub_destroy_ns,
};
static const struct mlx5_flow_cmds mlx5_flow_cmd_stubs = {
@@ -815,9 +870,16 @@ static const struct mlx5_flow_cmds mlx5_flow_cmd_stubs = {
.update_fte = mlx5_cmd_stub_update_fte,
.delete_fte = mlx5_cmd_stub_delete_fte,
.update_root_ft = mlx5_cmd_stub_update_root_ft,
.packet_reformat_alloc = mlx5_cmd_stub_packet_reformat_alloc,
.packet_reformat_dealloc = mlx5_cmd_stub_packet_reformat_dealloc,
.modify_header_alloc = mlx5_cmd_stub_modify_header_alloc,
.modify_header_dealloc = mlx5_cmd_stub_modify_header_dealloc,
.set_peer = mlx5_cmd_stub_set_peer,
.create_ns = mlx5_cmd_stub_create_ns,
.destroy_ns = mlx5_cmd_stub_destroy_ns,
};
-static const struct mlx5_flow_cmds *mlx5_fs_cmd_get_fw_cmds(void)
+const struct mlx5_flow_cmds *mlx5_fs_cmd_get_fw_cmds(void)
{
return &mlx5_flow_cmds;
}
...
@@ -75,6 +75,30 @@ struct mlx5_flow_cmds {
struct mlx5_flow_table *ft,
u32 underlay_qpn,
bool disconnect);
int (*packet_reformat_alloc)(struct mlx5_flow_root_namespace *ns,
int reformat_type,
size_t size,
void *reformat_data,
enum mlx5_flow_namespace_type namespace,
struct mlx5_pkt_reformat *pkt_reformat);
void (*packet_reformat_dealloc)(struct mlx5_flow_root_namespace *ns,
struct mlx5_pkt_reformat *pkt_reformat);
int (*modify_header_alloc)(struct mlx5_flow_root_namespace *ns,
u8 namespace, u8 num_actions,
void *modify_actions,
struct mlx5_modify_hdr *modify_hdr);
void (*modify_header_dealloc)(struct mlx5_flow_root_namespace *ns,
struct mlx5_modify_hdr *modify_hdr);
int (*set_peer)(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_root_namespace *peer_ns);
int (*create_ns)(struct mlx5_flow_root_namespace *ns);
int (*destroy_ns)(struct mlx5_flow_root_namespace *ns);
};
int mlx5_cmd_fc_alloc(struct mlx5_core_dev *dev, u32 *id);
@@ -90,5 +114,6 @@ int mlx5_cmd_fc_bulk_query(struct mlx5_core_dev *dev, u32 base_id, int bulk_len,
u32 *out);
const struct mlx5_flow_cmds *mlx5_fs_cmd_get_default(enum fs_flow_table_type type);
const struct mlx5_flow_cmds *mlx5_fs_cmd_get_fw_cmds(void);
#endif
@@ -1415,7 +1415,8 @@ static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1,
((d1->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID) ?
(d1->vport.vhca_id == d2->vport.vhca_id) : true) &&
((d1->vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) ?
-(d1->vport.reformat_id == d2->vport.reformat_id) : true)) ||
+(d1->vport.pkt_reformat->id ==
+ d2->vport.pkt_reformat->id) : true)) ||
(d1->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
d1->ft == d2->ft) ||
(d1->type == MLX5_FLOW_DESTINATION_TYPE_TIR &&
@@ -2888,3 +2889,160 @@ int mlx5_fs_remove_rx_underlay_qpn(struct mlx5_core_dev *dev, u32 underlay_qpn)
return err;
}
EXPORT_SYMBOL(mlx5_fs_remove_rx_underlay_qpn);
static struct mlx5_flow_root_namespace
*get_root_namespace(struct mlx5_core_dev *dev, enum mlx5_flow_namespace_type ns_type)
{
struct mlx5_flow_namespace *ns;
if (ns_type == MLX5_FLOW_NAMESPACE_ESW_EGRESS ||
ns_type == MLX5_FLOW_NAMESPACE_ESW_INGRESS)
ns = mlx5_get_flow_vport_acl_namespace(dev, ns_type, 0);
else
ns = mlx5_get_flow_namespace(dev, ns_type);
if (!ns)
return NULL;
return find_root(&ns->node);
}
struct mlx5_modify_hdr *mlx5_modify_header_alloc(struct mlx5_core_dev *dev,
u8 ns_type, u8 num_actions,
void *modify_actions)
{
struct mlx5_flow_root_namespace *root;
struct mlx5_modify_hdr *modify_hdr;
int err;
root = get_root_namespace(dev, ns_type);
if (!root)
return ERR_PTR(-EOPNOTSUPP);
modify_hdr = kzalloc(sizeof(*modify_hdr), GFP_KERNEL);
if (!modify_hdr)
return ERR_PTR(-ENOMEM);
modify_hdr->ns_type = ns_type;
err = root->cmds->modify_header_alloc(root, ns_type, num_actions,
modify_actions, modify_hdr);
if (err) {
kfree(modify_hdr);
return ERR_PTR(err);
}
return modify_hdr;
}
EXPORT_SYMBOL(mlx5_modify_header_alloc);
void mlx5_modify_header_dealloc(struct mlx5_core_dev *dev,
struct mlx5_modify_hdr *modify_hdr)
{
struct mlx5_flow_root_namespace *root;
root = get_root_namespace(dev, modify_hdr->ns_type);
if (WARN_ON(!root))
return;
root->cmds->modify_header_dealloc(root, modify_hdr);
kfree(modify_hdr);
}
EXPORT_SYMBOL(mlx5_modify_header_dealloc);
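With allocation routed through the root namespace's command vtable, callers now get back an opaque struct mlx5_modify_hdr * instead of a raw firmware id. A hedged caller sketch (hypothetical function, not from the patch; the set_action_in fields are existing mlx5_ifc definitions, and error handling is trimmed):

/* Allocate a one-action modify header that sets the outer IP TTL to 64,
 * then release it. Illustrative only; a real caller would attach the
 * handle via flow_act before installing rules.
 */
static int ttl_modify_hdr_example(struct mlx5_core_dev *dev)
{
	struct mlx5_modify_hdr *mh;
	u64 action = 0;

	MLX5_SET(set_action_in, &action, action_type, MLX5_ACTION_TYPE_SET);
	MLX5_SET(set_action_in, &action, field, MLX5_ACTION_IN_FIELD_OUT_IP_TTL);
	MLX5_SET(set_action_in, &action, data, 64);

	mh = mlx5_modify_header_alloc(dev, MLX5_FLOW_NAMESPACE_FDB, 1, &action);
	if (IS_ERR(mh))
		return PTR_ERR(mh);

	mlx5_modify_header_dealloc(dev, mh);
	return 0;
}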
struct mlx5_pkt_reformat *mlx5_packet_reformat_alloc(struct mlx5_core_dev *dev,
int reformat_type,
size_t size,
void *reformat_data,
enum mlx5_flow_namespace_type ns_type)
{
struct mlx5_pkt_reformat *pkt_reformat;
struct mlx5_flow_root_namespace *root;
int err;
root = get_root_namespace(dev, ns_type);
if (!root)
return ERR_PTR(-EOPNOTSUPP);
pkt_reformat = kzalloc(sizeof(*pkt_reformat), GFP_KERNEL);
if (!pkt_reformat)
return ERR_PTR(-ENOMEM);
pkt_reformat->ns_type = ns_type;
pkt_reformat->reformat_type = reformat_type;
err = root->cmds->packet_reformat_alloc(root, reformat_type, size,
reformat_data, ns_type,
pkt_reformat);
if (err) {
kfree(pkt_reformat);
return ERR_PTR(err);
}
return pkt_reformat;
}
EXPORT_SYMBOL(mlx5_packet_reformat_alloc);
void mlx5_packet_reformat_dealloc(struct mlx5_core_dev *dev,
struct mlx5_pkt_reformat *pkt_reformat)
{
struct mlx5_flow_root_namespace *root;
root = get_root_namespace(dev, pkt_reformat->ns_type);
if (WARN_ON(!root))
return;
root->cmds->packet_reformat_dealloc(root, pkt_reformat);
kfree(pkt_reformat);
}
EXPORT_SYMBOL(mlx5_packet_reformat_dealloc);
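The packet reformat wrapper follows the same pattern: the caller holds a struct mlx5_pkt_reformat handle and never sees the firmware id, which is what lets the SW-steering backend keep a DR action in the same union instead. A hypothetical caller sketch, assuming the encap header blob was prebuilt by the caller:

/* Allocate an L2-to-VXLAN encap context for a prebuilt outer
 * MAC/IP/UDP/VXLAN header. Illustrative only.
 */
static struct mlx5_pkt_reformat *
vxlan_encap_example(struct mlx5_core_dev *dev, void *hdr, size_t hdr_sz)
{
	return mlx5_packet_reformat_alloc(dev, MLX5_REFORMAT_TYPE_L2_TO_VXLAN,
					  hdr_sz, hdr,
					  MLX5_FLOW_NAMESPACE_FDB);
}

The returned handle is released with mlx5_packet_reformat_dealloc() once no rule refers to it.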
int mlx5_flow_namespace_set_peer(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_root_namespace *peer_ns)
{
if (peer_ns && ns->mode != peer_ns->mode) {
mlx5_core_err(ns->dev,
"Can't peer namespace of different steering mode\n");
return -EINVAL;
}
return ns->cmds->set_peer(ns, peer_ns);
}
/* This function should be called only at init stage of the namespace.
* It is not safe to call this function while steering operations
* are executed in the namespace.
*/
int mlx5_flow_namespace_set_mode(struct mlx5_flow_namespace *ns,
enum mlx5_flow_steering_mode mode)
{
struct mlx5_flow_root_namespace *root;
const struct mlx5_flow_cmds *cmds;
int err;
root = find_root(&ns->node);
if (&root->ns != ns)
/* Can't set cmds to non root namespace */
return -EINVAL;
if (root->table_type != FS_FT_FDB)
return -EOPNOTSUPP;
if (root->mode == mode)
return 0;
if (mode == MLX5_FLOW_STEERING_MODE_SMFS)
cmds = mlx5_fs_cmd_get_dr_cmds();
else
cmds = mlx5_fs_cmd_get_fw_cmds();
if (!cmds)
return -EOPNOTSUPP;
err = cmds->create_ns(root);
if (err) {
mlx5_core_err(root->dev, "Failed to create flow namespace (%d)\n",
err);
return err;
}
root->cmds->destroy_ns(root);
root->cmds = cmds;
root->mode = mode;
return 0;
}
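A hedged sketch of how a control path might flip the FDB root namespace to SW-managed steering through this API (hypothetical helper; in the driver this is driven by the devlink flow_steering_mode runtime parameter):

/* Switch the FDB root namespace to SMFS. Must run at namespace init
 * time, before any steering objects exist, per the comment above.
 */
static int switch_fdb_to_smfs(struct mlx5_core_dev *dev)
{
	struct mlx5_flow_namespace *ns;

	ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
	if (!ns)
		return -EOPNOTSUPP;

	return mlx5_flow_namespace_set_mode(ns, MLX5_FLOW_STEERING_MODE_SMFS);
}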
@@ -37,6 +37,24 @@
#include <linux/mlx5/fs.h>
#include <linux/rhashtable.h>
#include <linux/llist.h>
#include <steering/fs_dr.h>
struct mlx5_modify_hdr {
enum mlx5_flow_namespace_type ns_type;
union {
struct mlx5_fs_dr_action action;
u32 id;
};
};
struct mlx5_pkt_reformat {
enum mlx5_flow_namespace_type ns_type;
int reformat_type; /* from mlx5_ifc */
union {
struct mlx5_fs_dr_action action;
u32 id;
};
};
/* FS_TYPE_PRIO_CHAINS is a PRIO that will have namespaces only,
 * and those are in parallel to one another when going over them to connect
@@ -80,8 +98,14 @@ enum fs_fte_status {
FS_FTE_STATUS_EXISTING = 1UL << 0,
};
enum mlx5_flow_steering_mode {
MLX5_FLOW_STEERING_MODE_DMFS,
MLX5_FLOW_STEERING_MODE_SMFS
};
struct mlx5_flow_steering {
struct mlx5_core_dev *dev;
enum mlx5_flow_steering_mode mode;
struct kmem_cache *fgs_cache;
struct kmem_cache *ftes_cache;
struct mlx5_flow_root_namespace *root_ns;
@@ -128,6 +152,7 @@ struct mlx5_flow_handle {
/* Type of children is mlx5_flow_group */
struct mlx5_flow_table {
struct fs_node node;
struct mlx5_fs_dr_table fs_dr_table;
u32 id;
u16 vport;
unsigned int max_fte;
@@ -168,6 +193,7 @@ struct mlx5_ft_underlay_qp {
/* Type of children is mlx5_flow_rule */
struct fs_fte {
struct fs_node node;
struct mlx5_fs_dr_rule fs_dr_rule;
u32 val[MLX5_ST_SZ_DW_MATCH_PARAM];
u32 dests_size;
u32 index;
@@ -203,6 +229,7 @@ struct mlx5_flow_group_mask {
/* Type of children is fs_fte */
struct mlx5_flow_group {
struct fs_node node;
struct mlx5_fs_dr_matcher fs_dr_matcher;
struct mlx5_flow_group_mask mask;
u32 start_index;
u32 max_ftes;
@@ -214,6 +241,8 @@ struct mlx5_flow_group {
struct mlx5_flow_root_namespace {
struct mlx5_flow_namespace ns;
enum mlx5_flow_steering_mode mode;
struct mlx5_fs_dr_domain fs_dr_domain;
enum fs_flow_table_type table_type;
struct mlx5_core_dev *dev;
struct mlx5_flow_table *root_ft;
@@ -231,6 +260,14 @@ void mlx5_fc_queue_stats_work(struct mlx5_core_dev *dev,
void mlx5_fc_update_sampling_interval(struct mlx5_core_dev *dev,
unsigned long interval);
const struct mlx5_flow_cmds *mlx5_fs_cmd_get_fw_cmds(void);
int mlx5_flow_namespace_set_peer(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_root_namespace *peer_ns);
int mlx5_flow_namespace_set_mode(struct mlx5_flow_namespace *ns,
enum mlx5_flow_steering_mode mode);
int mlx5_init_fs(struct mlx5_core_dev *dev);
void mlx5_cleanup_fs(struct mlx5_core_dev *dev);
...
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
// Copyright (c) 2019 Mellanox Technologies
#include <linux/mlx5/driver.h>
#include <linux/mlx5/device.h>
#include "mlx5_core.h"
#include "lib/mlx5.h"
struct mlx5_dm {
/* protect access to icm bitmask */
spinlock_t lock;
unsigned long *steering_sw_icm_alloc_blocks;
unsigned long *header_modify_sw_icm_alloc_blocks;
};
struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
{
u64 header_modify_icm_blocks = 0;
u64 steering_icm_blocks = 0;
struct mlx5_dm *dm;
if (!(MLX5_CAP_GEN_64(dev, general_obj_types) & MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM))
return NULL;
dm = kzalloc(sizeof(*dm), GFP_KERNEL);
if (!dm)
return ERR_PTR(-ENOMEM);
spin_lock_init(&dm->lock);
if (MLX5_CAP64_DEV_MEM(dev, steering_sw_icm_start_address)) {
steering_icm_blocks =
BIT(MLX5_CAP_DEV_MEM(dev, log_steering_sw_icm_size) -
MLX5_LOG_SW_ICM_BLOCK_SIZE(dev));
dm->steering_sw_icm_alloc_blocks =
kcalloc(BITS_TO_LONGS(steering_icm_blocks),
sizeof(unsigned long), GFP_KERNEL);
if (!dm->steering_sw_icm_alloc_blocks)
goto err_steering;
}
if (MLX5_CAP64_DEV_MEM(dev, header_modify_sw_icm_start_address)) {
header_modify_icm_blocks =
BIT(MLX5_CAP_DEV_MEM(dev, log_header_modify_sw_icm_size) -
MLX5_LOG_SW_ICM_BLOCK_SIZE(dev));
dm->header_modify_sw_icm_alloc_blocks =
kcalloc(BITS_TO_LONGS(header_modify_icm_blocks),
sizeof(unsigned long), GFP_KERNEL);
if (!dm->header_modify_sw_icm_alloc_blocks)
goto err_modify_hdr;
}
return dm;
err_modify_hdr:
kfree(dm->steering_sw_icm_alloc_blocks);
err_steering:
kfree(dm);
return ERR_PTR(-ENOMEM);
}
void mlx5_dm_cleanup(struct mlx5_core_dev *dev)
{
struct mlx5_dm *dm = dev->dm;
if (!dev->dm)
return;
if (dm->steering_sw_icm_alloc_blocks) {
WARN_ON(!bitmap_empty(dm->steering_sw_icm_alloc_blocks,
BIT(MLX5_CAP_DEV_MEM(dev, log_steering_sw_icm_size) -
MLX5_LOG_SW_ICM_BLOCK_SIZE(dev))));
kfree(dm->steering_sw_icm_alloc_blocks);
}
if (dm->header_modify_sw_icm_alloc_blocks) {
WARN_ON(!bitmap_empty(dm->header_modify_sw_icm_alloc_blocks,
BIT(MLX5_CAP_DEV_MEM(dev,
log_header_modify_sw_icm_size) -
MLX5_LOG_SW_ICM_BLOCK_SIZE(dev))));
kfree(dm->header_modify_sw_icm_alloc_blocks);
}
kfree(dm);
}
int mlx5_dm_sw_icm_alloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
u64 length, u16 uid, phys_addr_t *addr, u32 *obj_id)
{
u32 num_blocks = DIV_ROUND_UP_ULL(length, MLX5_SW_ICM_BLOCK_SIZE(dev));
u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
u32 in[MLX5_ST_SZ_DW(create_sw_icm_in)] = {};
struct mlx5_dm *dm = dev->dm;
unsigned long *block_map;
u64 icm_start_addr;
u32 log_icm_size;
u32 max_blocks;
u64 block_idx;
void *sw_icm;
int ret;
if (!dev->dm)
return -EOPNOTSUPP;
if (!length || (length & (length - 1)) ||
length & (MLX5_SW_ICM_BLOCK_SIZE(dev) - 1))
return -EINVAL;
MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_SW_ICM);
MLX5_SET(general_obj_in_cmd_hdr, in, uid, uid);
switch (type) {
case MLX5_SW_ICM_TYPE_STEERING:
icm_start_addr = MLX5_CAP64_DEV_MEM(dev, steering_sw_icm_start_address);
log_icm_size = MLX5_CAP_DEV_MEM(dev, log_steering_sw_icm_size);
block_map = dm->steering_sw_icm_alloc_blocks;
break;
case MLX5_SW_ICM_TYPE_HEADER_MODIFY:
icm_start_addr = MLX5_CAP64_DEV_MEM(dev, header_modify_sw_icm_start_address);
log_icm_size = MLX5_CAP_DEV_MEM(dev,
log_header_modify_sw_icm_size);
block_map = dm->header_modify_sw_icm_alloc_blocks;
break;
default:
return -EINVAL;
}
if (!block_map)
return -EOPNOTSUPP;
max_blocks = BIT(log_icm_size - MLX5_LOG_SW_ICM_BLOCK_SIZE(dev));
spin_lock(&dm->lock);
block_idx = bitmap_find_next_zero_area(block_map,
max_blocks,
0,
num_blocks, 0);
if (block_idx < max_blocks)
bitmap_set(block_map,
block_idx, num_blocks);
spin_unlock(&dm->lock);
if (block_idx >= max_blocks)
return -ENOMEM;
sw_icm = MLX5_ADDR_OF(create_sw_icm_in, in, sw_icm);
icm_start_addr += block_idx << MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
MLX5_SET64(sw_icm, sw_icm, sw_icm_start_addr,
icm_start_addr);
MLX5_SET(sw_icm, sw_icm, log_sw_icm_size, ilog2(length));
ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (ret) {
spin_lock(&dm->lock);
bitmap_clear(block_map,
block_idx, num_blocks);
spin_unlock(&dm->lock);
return ret;
}
*addr = icm_start_addr;
*obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
return 0;
}
EXPORT_SYMBOL_GPL(mlx5_dm_sw_icm_alloc);
int mlx5_dm_sw_icm_dealloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
u64 length, u16 uid, phys_addr_t addr, u32 obj_id)
{
u32 num_blocks = DIV_ROUND_UP_ULL(length, MLX5_SW_ICM_BLOCK_SIZE(dev));
u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
struct mlx5_dm *dm = dev->dm;
unsigned long *block_map;
u64 icm_start_addr;
u64 start_idx;
int err;
if (!dev->dm)
return -EOPNOTSUPP;
switch (type) {
case MLX5_SW_ICM_TYPE_STEERING:
icm_start_addr = MLX5_CAP64_DEV_MEM(dev, steering_sw_icm_start_address);
block_map = dm->steering_sw_icm_alloc_blocks;
break;
case MLX5_SW_ICM_TYPE_HEADER_MODIFY:
icm_start_addr = MLX5_CAP64_DEV_MEM(dev, header_modify_sw_icm_start_address);
block_map = dm->header_modify_sw_icm_alloc_blocks;
break;
default:
return -EINVAL;
}
MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_SW_ICM);
MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
MLX5_SET(general_obj_in_cmd_hdr, in, uid, uid);
err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (err)
return err;
start_idx = (addr - icm_start_addr) >> MLX5_LOG_SW_ICM_BLOCK_SIZE(dev);
spin_lock(&dm->lock);
bitmap_clear(block_map,
start_idx, num_blocks);
spin_unlock(&dm->lock);
return 0;
}
EXPORT_SYMBOL_GPL(mlx5_dm_sw_icm_dealloc);
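A hypothetical caller sketch for the two exported entry points above: reserve a single block of steering SW ICM and release it again. A uid of 0 means kernel ownership; length must be a power of two and block aligned, as enforced by the checks in mlx5_dm_sw_icm_alloc():

static int sw_icm_roundtrip(struct mlx5_core_dev *dev)
{
	u64 len = MLX5_SW_ICM_BLOCK_SIZE(dev);	/* exactly one block */
	phys_addr_t addr;
	u32 obj_id;
	int err;

	err = mlx5_dm_sw_icm_alloc(dev, MLX5_SW_ICM_TYPE_STEERING, len, 0,
				   &addr, &obj_id);
	if (err)
		return err;

	return mlx5_dm_sw_icm_dealloc(dev, MLX5_SW_ICM_TYPE_STEERING, len, 0,
				      addr, obj_id);
}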
@@ -876,6 +876,10 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
goto err_eswitch_cleanup;
}
dev->dm = mlx5_dm_create(dev);
if (IS_ERR(dev->dm))
mlx5_core_warn(dev, "Failed to init device memory, err %ld\n", PTR_ERR(dev->dm));
dev->tracer = mlx5_fw_tracer_create(dev);
dev->hv_vhca = mlx5_hv_vhca_create(dev);
@@ -910,6 +914,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
{
mlx5_hv_vhca_destroy(dev->hv_vhca);
mlx5_fw_tracer_destroy(dev->tracer);
mlx5_dm_cleanup(dev);
mlx5_fpga_cleanup(dev);
mlx5_eswitch_cleanup(dev->priv.eswitch);
mlx5_sriov_cleanup(dev);
...
@@ -198,6 +198,9 @@ int mlx5_set_mtpps(struct mlx5_core_dev *mdev, u32 *mtpps, u32 mtpps_size);
int mlx5_query_mtppse(struct mlx5_core_dev *mdev, u8 pin, u8 *arm, u8 *mode);
int mlx5_set_mtppse(struct mlx5_core_dev *mdev, u8 pin, u8 arm, u8 mode);
struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev);
void mlx5_dm_cleanup(struct mlx5_core_dev *dev);
#define MLX5_PPS_CAP(mdev) (MLX5_CAP_GEN((mdev), pps) && \
MLX5_CAP_GEN((mdev), pps_modify) && \
MLX5_CAP_MCAM_FEATURE((mdev), mtpps_fs) && \
...
@@ -14,9 +14,6 @@ static void mlx5_rdma_disable_roce_steering(struct mlx5_core_dev *dev)
{
struct mlx5_core_roce *roce = &dev->priv.roce;
-if (!roce->ft)
-return;
mlx5_del_flow_rules(roce->allow_rule);
mlx5_destroy_flow_group(roce->fg);
mlx5_destroy_flow_table(roce->ft);
@@ -145,6 +142,11 @@ static int mlx5_rdma_add_roce_addr(struct mlx5_core_dev *dev)
void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev)
{
struct mlx5_core_roce *roce = &dev->priv.roce;
if (!roce->ft)
return;
mlx5_rdma_disable_roce_steering(dev);
mlx5_rdma_del_roce_addr(dev);
mlx5_nic_vport_disable_roce(dev);
...
# SPDX-License-Identifier: GPL-2.0-only
subdir-ccflags-y += -I$(src)/..
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2019 Mellanox Technologies. */
/* Copyright (c) 2011-2015 Stephan Brumme. All rights reserved.
* Slicing-by-16 contributed by Bulat Ziganshin
*
* This software is provided 'as-is', without any express or implied warranty.
 * In no event will the author be held liable for any damages arising from the
 * use of this software.
*
* Permission is granted to anyone to use this software for any purpose,
* including commercial applications, and to alter it and redistribute it
* freely, subject to the following restrictions:
*
* 1. The origin of this software must not be misrepresented; you must not
* claim that you wrote the original software.
* 2. If you use this software in a product, an acknowledgment in the product
* documentation would be appreciated but is not required.
* 3. Altered source versions must be plainly marked as such, and must not be
* misrepresented as being the original software.
*
* Taken from http://create.stephan-brumme.com/crc32/ and adapted.
*/
#include "dr_types.h"
#define DR_STE_CRC_POLY 0xEDB88320L
static u32 dr_ste_crc_tab32[8][256];
static void dr_crc32_calc_lookup_entry(u32 (*tbl)[256], u8 i, u8 j)
{
tbl[i][j] = (tbl[i - 1][j] >> 8) ^ tbl[0][tbl[i - 1][j] & 0xff];
}
void mlx5dr_crc32_init_table(void)
{
u32 crc, i, j;
for (i = 0; i < 256; i++) {
crc = i;
for (j = 0; j < 8; j++) {
if (crc & 0x00000001L)
crc = (crc >> 1) ^ DR_STE_CRC_POLY;
else
crc = crc >> 1;
}
dr_ste_crc_tab32[0][i] = crc;
}
/* Init the remaining CRC lookup tables for the slicing-by-8 algorithm */
for (i = 0; i < 256; i++) {
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 1, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 2, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 3, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 4, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 5, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 6, i);
dr_crc32_calc_lookup_entry(dr_ste_crc_tab32, 7, i);
}
}
/* Compute CRC32 (Slicing-by-8 algorithm) */
u32 mlx5dr_crc32_slice8_calc(const void *input_data, size_t length)
{
const u32 *curr = (const u32 *)input_data;
const u8 *curr_char;
u32 crc = 0, one, two;
if (!input_data)
return 0;
/* Process eight bytes at once (Slicing-by-8) */
while (length >= 8) {
one = *curr++ ^ crc;
two = *curr++;
crc = dr_ste_crc_tab32[0][(two >> 24) & 0xff]
^ dr_ste_crc_tab32[1][(two >> 16) & 0xff]
^ dr_ste_crc_tab32[2][(two >> 8) & 0xff]
^ dr_ste_crc_tab32[3][two & 0xff]
^ dr_ste_crc_tab32[4][(one >> 24) & 0xff]
^ dr_ste_crc_tab32[5][(one >> 16) & 0xff]
^ dr_ste_crc_tab32[6][(one >> 8) & 0xff]
^ dr_ste_crc_tab32[7][one & 0xff];
length -= 8;
}
curr_char = (const u8 *)curr;
/* Remaining 1 to 7 bytes (standard algorithm) */
while (length-- != 0)
crc = (crc >> 8) ^ dr_ste_crc_tab32[0][(crc & 0xff)
^ *curr_char++];
return ((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) |
((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000);
}
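The code above implements a reflected CRC-32 with polynomial 0xEDB88320, initial value 0 and no final XOR; the result is additionally byte-swapped before use as an STE hash. A standalone userspace check (illustrative, libc only) that the table-driven byte-at-a-time recurrence matches the bitwise definition; the slicing-by-8 loop is an unrolling of this same recurrence:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POLY 0xEDB88320u

static uint32_t tab[256];

static void init_tab(void)
{
	for (uint32_t i = 0; i < 256; i++) {
		uint32_t c = i;

		for (int j = 0; j < 8; j++)
			c = (c & 1) ? (c >> 1) ^ POLY : c >> 1;
		tab[i] = c;
	}
}

/* Bit-at-a-time reference: init 0, reflected, no final xor */
static uint32_t crc_bitwise(const uint8_t *p, size_t n)
{
	uint32_t c = 0;

	while (n--) {
		c ^= *p++;
		for (int j = 0; j < 8; j++)
			c = (c & 1) ? (c >> 1) ^ POLY : c >> 1;
	}
	return c;
}

/* Byte-at-a-time table form, as in the tail loop above */
static uint32_t crc_table(const uint8_t *p, size_t n)
{
	uint32_t c = 0;

	while (n--)
		c = (c >> 8) ^ tab[(c ^ *p++) & 0xff];
	return c;
}

int main(void)
{
	const char *s = "mlx5 software steering";

	init_tab();
	printf("bitwise=%08x table=%08x\n",
	       crc_bitwise((const uint8_t *)s, strlen(s)),
	       crc_table((const uint8_t *)s, strlen(s)));
	return 0;
}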
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2019 Mellanox Technologies. */
#include <linux/mlx5/eswitch.h>
#include "dr_types.h"
static int dr_domain_init_cache(struct mlx5dr_domain *dmn)
{
/* Per vport cached FW FT for checksum recalculation, this
* recalculation is needed due to a HW bug.
*/
dmn->cache.recalc_cs_ft = kcalloc(dmn->info.caps.num_vports,
sizeof(dmn->cache.recalc_cs_ft[0]),
GFP_KERNEL);
if (!dmn->cache.recalc_cs_ft)
return -ENOMEM;
return 0;
}
static void dr_domain_uninit_cache(struct mlx5dr_domain *dmn)
{
int i;
for (i = 0; i < dmn->info.caps.num_vports; i++) {
if (!dmn->cache.recalc_cs_ft[i])
continue;
mlx5dr_fw_destroy_recalc_cs_ft(dmn, dmn->cache.recalc_cs_ft[i]);
}
kfree(dmn->cache.recalc_cs_ft);
}
int mlx5dr_domain_cache_get_recalc_cs_ft_addr(struct mlx5dr_domain *dmn,
u32 vport_num,
u64 *rx_icm_addr)
{
struct mlx5dr_fw_recalc_cs_ft *recalc_cs_ft;
recalc_cs_ft = dmn->cache.recalc_cs_ft[vport_num];
if (!recalc_cs_ft) {
/* Table not in cache, need to allocate a new one */
recalc_cs_ft = mlx5dr_fw_create_recalc_cs_ft(dmn, vport_num);
if (!recalc_cs_ft)
return -EINVAL;
dmn->cache.recalc_cs_ft[vport_num] = recalc_cs_ft;
}
*rx_icm_addr = recalc_cs_ft->rx_icm_addr;
return 0;
}
static int dr_domain_init_resources(struct mlx5dr_domain *dmn)
{
int ret;
ret = mlx5_core_alloc_pd(dmn->mdev, &dmn->pdn);
if (ret) {
mlx5dr_dbg(dmn, "Couldn't allocate PD\n");
return ret;
}
dmn->uar = mlx5_get_uars_page(dmn->mdev);
if (!dmn->uar) {
mlx5dr_err(dmn, "Couldn't allocate UAR\n");
ret = -ENOMEM;
goto clean_pd;
}
dmn->ste_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_STE);
if (!dmn->ste_icm_pool) {
mlx5dr_err(dmn, "Couldn't get icm memory for %s\n",
dev_name(dmn->mdev->device));
ret = -ENOMEM;
goto clean_uar;
}
dmn->action_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_MODIFY_ACTION);
if (!dmn->action_icm_pool) {
mlx5dr_err(dmn, "Couldn't get action icm memory for %s\n",
dev_name(dmn->mdev->device));
ret = -ENOMEM;
goto free_ste_icm_pool;
}
ret = mlx5dr_send_ring_alloc(dmn);
if (ret) {
mlx5dr_err(dmn, "Couldn't create send-ring for %s\n",
dev_name(dmn->mdev->device));
goto free_action_icm_pool;
}
return 0;
free_action_icm_pool:
mlx5dr_icm_pool_destroy(dmn->action_icm_pool);
free_ste_icm_pool:
mlx5dr_icm_pool_destroy(dmn->ste_icm_pool);
clean_uar:
mlx5_put_uars_page(dmn->mdev, dmn->uar);
clean_pd:
mlx5_core_dealloc_pd(dmn->mdev, dmn->pdn);
return ret;
}
static void dr_domain_uninit_resources(struct mlx5dr_domain *dmn)
{
mlx5dr_send_ring_free(dmn, dmn->send_ring);
mlx5dr_icm_pool_destroy(dmn->action_icm_pool);
mlx5dr_icm_pool_destroy(dmn->ste_icm_pool);
mlx5_put_uars_page(dmn->mdev, dmn->uar);
mlx5_core_dealloc_pd(dmn->mdev, dmn->pdn);
}
static int dr_domain_query_vport(struct mlx5dr_domain *dmn,
bool other_vport,
u16 vport_number)
{
struct mlx5dr_cmd_vport_cap *vport_caps;
int ret;
vport_caps = &dmn->info.caps.vports_caps[vport_number];
ret = mlx5dr_cmd_query_esw_vport_context(dmn->mdev,
other_vport,
vport_number,
&vport_caps->icm_address_rx,
&vport_caps->icm_address_tx);
if (ret)
return ret;
ret = mlx5dr_cmd_query_gvmi(dmn->mdev,
other_vport,
vport_number,
&vport_caps->vport_gvmi);
if (ret)
return ret;
vport_caps->num = vport_number;
vport_caps->vhca_gvmi = dmn->info.caps.gvmi;
return 0;
}
static int dr_domain_query_vports(struct mlx5dr_domain *dmn)
{
struct mlx5dr_esw_caps *esw_caps = &dmn->info.caps.esw_caps;
struct mlx5dr_cmd_vport_cap *wire_vport;
int vport;
int ret;
/* Query vports (except wire vport) */
for (vport = 0; vport < dmn->info.caps.num_esw_ports - 1; vport++) {
ret = dr_domain_query_vport(dmn, !!vport, vport);
if (ret)
return ret;
}
/* Last vport is the wire port */
wire_vport = &dmn->info.caps.vports_caps[vport];
wire_vport->num = WIRE_PORT;
wire_vport->icm_address_rx = esw_caps->uplink_icm_address_rx;
wire_vport->icm_address_tx = esw_caps->uplink_icm_address_tx;
wire_vport->vport_gvmi = 0;
wire_vport->vhca_gvmi = dmn->info.caps.gvmi;
return 0;
}
static int dr_domain_query_fdb_caps(struct mlx5_core_dev *mdev,
struct mlx5dr_domain *dmn)
{
int ret;
if (!dmn->info.caps.eswitch_manager)
return -EOPNOTSUPP;
ret = mlx5dr_cmd_query_esw_caps(mdev, &dmn->info.caps.esw_caps);
if (ret)
return ret;
dmn->info.caps.fdb_sw_owner = dmn->info.caps.esw_caps.sw_owner;
dmn->info.caps.esw_rx_drop_address = dmn->info.caps.esw_caps.drop_icm_address_rx;
dmn->info.caps.esw_tx_drop_address = dmn->info.caps.esw_caps.drop_icm_address_tx;
dmn->info.caps.vports_caps = kcalloc(dmn->info.caps.num_esw_ports,
sizeof(dmn->info.caps.vports_caps[0]),
GFP_KERNEL);
if (!dmn->info.caps.vports_caps)
return -ENOMEM;
ret = dr_domain_query_vports(dmn);
if (ret) {
mlx5dr_dbg(dmn, "Failed to query vports caps\n");
goto free_vports_caps;
}
dmn->info.caps.num_vports = dmn->info.caps.num_esw_ports - 1;
return 0;
free_vports_caps:
kfree(dmn->info.caps.vports_caps);
dmn->info.caps.vports_caps = NULL;
return ret;
}
static int dr_domain_caps_init(struct mlx5_core_dev *mdev,
struct mlx5dr_domain *dmn)
{
struct mlx5dr_cmd_vport_cap *vport_cap;
int ret;
if (MLX5_CAP_GEN(mdev, port_type) != MLX5_CAP_PORT_TYPE_ETH) {
mlx5dr_dbg(dmn, "Failed to allocate domain, bad link type\n");
return -EOPNOTSUPP;
}
dmn->info.caps.num_esw_ports = mlx5_eswitch_get_total_vports(mdev);
ret = mlx5dr_cmd_query_device(mdev, &dmn->info.caps);
if (ret)
return ret;
ret = dr_domain_query_fdb_caps(mdev, dmn);
if (ret)
return ret;
switch (dmn->type) {
case MLX5DR_DOMAIN_TYPE_NIC_RX:
if (!dmn->info.caps.rx_sw_owner)
return -ENOTSUPP;
dmn->info.supp_sw_steering = true;
dmn->info.rx.ste_type = MLX5DR_STE_TYPE_RX;
dmn->info.rx.default_icm_addr = dmn->info.caps.nic_rx_drop_address;
dmn->info.rx.drop_icm_addr = dmn->info.caps.nic_rx_drop_address;
break;
case MLX5DR_DOMAIN_TYPE_NIC_TX:
if (!dmn->info.caps.tx_sw_owner)
return -ENOTSUPP;
dmn->info.supp_sw_steering = true;
dmn->info.tx.ste_type = MLX5DR_STE_TYPE_TX;
dmn->info.tx.default_icm_addr = dmn->info.caps.nic_tx_allow_address;
dmn->info.tx.drop_icm_addr = dmn->info.caps.nic_tx_drop_address;
break;
case MLX5DR_DOMAIN_TYPE_FDB:
if (!dmn->info.caps.eswitch_manager)
return -ENOTSUPP;
if (!dmn->info.caps.fdb_sw_owner)
return -ENOTSUPP;
dmn->info.rx.ste_type = MLX5DR_STE_TYPE_RX;
dmn->info.tx.ste_type = MLX5DR_STE_TYPE_TX;
vport_cap = mlx5dr_get_vport_cap(&dmn->info.caps, 0);
if (!vport_cap) {
mlx5dr_dbg(dmn, "Failed to get esw manager vport\n");
return -ENOENT;
}
dmn->info.supp_sw_steering = true;
dmn->info.tx.default_icm_addr = vport_cap->icm_address_tx;
dmn->info.rx.default_icm_addr = vport_cap->icm_address_rx;
dmn->info.rx.drop_icm_addr = dmn->info.caps.esw_rx_drop_address;
dmn->info.tx.drop_icm_addr = dmn->info.caps.esw_tx_drop_address;
break;
default:
mlx5dr_dbg(dmn, "Invalid domain\n");
ret = -EINVAL;
break;
}
return ret;
}
static void dr_domain_caps_uninit(struct mlx5dr_domain *dmn)
{
kfree(dmn->info.caps.vports_caps);
}
struct mlx5dr_domain *
mlx5dr_domain_create(struct mlx5_core_dev *mdev, enum mlx5dr_domain_type type)
{
struct mlx5dr_domain *dmn;
int ret;
if (type > MLX5DR_DOMAIN_TYPE_FDB)
return NULL;
dmn = kzalloc(sizeof(*dmn), GFP_KERNEL);
if (!dmn)
return NULL;
dmn->mdev = mdev;
dmn->type = type;
refcount_set(&dmn->refcount, 1);
mutex_init(&dmn->mutex);
if (dr_domain_caps_init(mdev, dmn)) {
mlx5dr_dbg(dmn, "Failed init domain, no caps\n");
goto free_domain;
}
dmn->info.max_log_action_icm_sz = DR_CHUNK_SIZE_4K;
dmn->info.max_log_sw_icm_sz = min_t(u32, DR_CHUNK_SIZE_1024K,
dmn->info.caps.log_icm_size);
if (!dmn->info.supp_sw_steering) {
mlx5dr_err(dmn, "SW steering not supported for %s\n",
dev_name(mdev->device));
goto uninit_caps;
}
/* Allocate resources */
ret = dr_domain_init_resources(dmn);
if (ret) {
mlx5dr_err(dmn, "Failed init domain resources for %s\n",
dev_name(mdev->device));
goto uninit_caps;
}
ret = dr_domain_init_cache(dmn);
if (ret) {
mlx5dr_err(dmn, "Failed initialize domain cache\n");
goto uninit_resourses;
}
/* Init CRC table for htbl CRC calculation */
mlx5dr_crc32_init_table();
return dmn;
uninit_resources:
dr_domain_uninit_resources(dmn);
uninit_caps:
dr_domain_caps_uninit(dmn);
free_domain:
kfree(dmn);
return NULL;
}
/* Ensure the device steering tables are synchronized with updates made
 * by SW insertion.
 */
int mlx5dr_domain_sync(struct mlx5dr_domain *dmn, u32 flags)
{
int ret = 0;
if (flags & MLX5DR_DOMAIN_SYNC_FLAGS_SW) {
mutex_lock(&dmn->mutex);
ret = mlx5dr_send_ring_force_drain(dmn);
mutex_unlock(&dmn->mutex);
if (ret)
return ret;
}
if (flags & MLX5DR_DOMAIN_SYNC_FLAGS_HW)
ret = mlx5dr_cmd_sync_steering(dmn->mdev);
return ret;
}
int mlx5dr_domain_destroy(struct mlx5dr_domain *dmn)
{
if (refcount_read(&dmn->refcount) > 1)
return -EBUSY;
/* make sure resources are not used by the hardware */
mlx5dr_cmd_sync_steering(dmn->mdev);
dr_domain_uninit_cache(dmn);
dr_domain_uninit_resources(dmn);
dr_domain_caps_uninit(dmn);
mutex_destroy(&dmn->mutex);
kfree(dmn);
return 0;
}
void mlx5dr_domain_set_peer(struct mlx5dr_domain *dmn,
struct mlx5dr_domain *peer_dmn)
{
mutex_lock(&dmn->mutex);
if (dmn->peer_dmn)
refcount_dec(&dmn->peer_dmn->refcount);
dmn->peer_dmn = peer_dmn;
if (dmn->peer_dmn)
refcount_inc(&dmn->peer_dmn->refcount);
mutex_unlock(&dmn->mutex);
}
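A hedged usage sketch for the domain API above (the in-tree consumer is the fs_dr glue between fs_core and DR, not shown in full here): create an FDB domain, force both SW and HW synchronization, then destroy it. The helper name is illustrative:

static int dr_domain_roundtrip(struct mlx5_core_dev *mdev)
{
	struct mlx5dr_domain *dmn;
	int err;

	dmn = mlx5dr_domain_create(mdev, MLX5DR_DOMAIN_TYPE_FDB);
	if (!dmn)
		return -EOPNOTSUPP;	/* no SW-steering support on this device */

	err = mlx5dr_domain_sync(dmn, MLX5DR_DOMAIN_SYNC_FLAGS_SW |
				      MLX5DR_DOMAIN_SYNC_FLAGS_HW);

	mlx5dr_domain_destroy(dmn);
	return err;
}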
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2019 Mellanox Technologies. */
#include <linux/types.h>
#include "dr_types.h"
struct mlx5dr_fw_recalc_cs_ft *
mlx5dr_fw_create_recalc_cs_ft(struct mlx5dr_domain *dmn, u32 vport_num)
{
struct mlx5dr_fw_recalc_cs_ft *recalc_cs_ft;
u32 table_id, group_id, modify_hdr_id;
u64 rx_icm_addr, modify_ttl_action;
int ret;
recalc_cs_ft = kzalloc(sizeof(*recalc_cs_ft), GFP_KERNEL);
if (!recalc_cs_ft)
return NULL;
ret = mlx5dr_cmd_create_flow_table(dmn->mdev, MLX5_FLOW_TABLE_TYPE_FDB,
0, 0, dmn->info.caps.max_ft_level - 1,
false, true, &rx_icm_addr, &table_id);
if (ret) {
mlx5dr_err(dmn, "Failed creating TTL W/A FW flow table %d\n", ret);
goto free_ttl_tbl;
}
ret = mlx5dr_cmd_create_empty_flow_group(dmn->mdev,
MLX5_FLOW_TABLE_TYPE_FDB,
table_id, &group_id);
if (ret) {
mlx5dr_err(dmn, "Failed creating TTL W/A FW flow group %d\n", ret);
goto destroy_flow_table;
}
/* Modify TTL action by adding zero to trigger CS recalculation */
modify_ttl_action = 0;
MLX5_SET(set_action_in, &modify_ttl_action, action_type, MLX5_ACTION_TYPE_ADD);
MLX5_SET(set_action_in, &modify_ttl_action, field, MLX5_ACTION_IN_FIELD_OUT_IP_TTL);
ret = mlx5dr_cmd_alloc_modify_header(dmn->mdev, MLX5_FLOW_TABLE_TYPE_FDB, 1,
&modify_ttl_action,
&modify_hdr_id);
if (ret) {
mlx5dr_err(dmn, "Failed modify header TTL %d\n", ret);
goto destroy_flow_group;
}
ret = mlx5dr_cmd_set_fte_modify_and_vport(dmn->mdev,
MLX5_FLOW_TABLE_TYPE_FDB,
table_id, group_id, modify_hdr_id,
vport_num);
if (ret) {
mlx5dr_err(dmn, "Failed setting TTL W/A flow table entry %d\n", ret);
goto dealloc_modify_header;
}
recalc_cs_ft->modify_hdr_id = modify_hdr_id;
recalc_cs_ft->rx_icm_addr = rx_icm_addr;
recalc_cs_ft->table_id = table_id;
recalc_cs_ft->group_id = group_id;
return recalc_cs_ft;
dealloc_modify_header:
mlx5dr_cmd_dealloc_modify_header(dmn->mdev, modify_hdr_id);
destroy_flow_group:
mlx5dr_cmd_destroy_flow_group(dmn->mdev,
MLX5_FLOW_TABLE_TYPE_FDB,
table_id, group_id);
destroy_flow_table:
mlx5dr_cmd_destroy_flow_table(dmn->mdev, table_id, MLX5_FLOW_TABLE_TYPE_FDB);
free_ttl_tbl:
kfree(recalc_cs_ft);
return NULL;
}
void mlx5dr_fw_destroy_recalc_cs_ft(struct mlx5dr_domain *dmn,
struct mlx5dr_fw_recalc_cs_ft *recalc_cs_ft)
{
mlx5dr_cmd_del_flow_table_entry(dmn->mdev,
MLX5_FLOW_TABLE_TYPE_FDB,
recalc_cs_ft->table_id);
mlx5dr_cmd_dealloc_modify_header(dmn->mdev, recalc_cs_ft->modify_hdr_id);
mlx5dr_cmd_destroy_flow_group(dmn->mdev,
MLX5_FLOW_TABLE_TYPE_FDB,
recalc_cs_ft->table_id,
recalc_cs_ft->group_id);
mlx5dr_cmd_destroy_flow_table(dmn->mdev,
recalc_cs_ft->table_id,
MLX5_FLOW_TABLE_TYPE_FDB);
kfree(recalc_cs_ft);
}