Commit c9448e82 authored by Jakub Kicinski

Merge tag 'mlx5-updates-2020-11-03' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2020-11-03

This series includes updates to mlx5 software steering component.

1) A few improvements in the DR area, such as removing unneeded checks,
  renaming to clearer, more general names, and refactoring in some places.

2) Software steering (DR) Memory management improvements

This patch series contains SW Steering memory management improvements:
using a buddy allocator instead of the existing bucket allocator, plus
several other optimizations.

The buddy system is a memory allocation and management algorithm
that manages memory in power-of-two increments.

The algorithm is well known and well documented, for example here:
https://en.wikipedia.org/wiki/Buddy_memory_allocation

Linux uses this algorithm for managing and allocating physical pages,
as described here:
https://www.kernel.org/doc/gorman/html/understand/understand009.html

In our case, although the algorithm is similar in principle to the
Linux physical page allocator, the "building blocks" and the circumstances
are different: in SW steering, the buddy allocator doesn't actually allocate
memory, but rather manages ICM (Interconnect Context Memory) that was
previously allocated and registered.

The ICM memory used in SW steering always comes in power-of-two
sizes (orders), so the buddy system is a good fit for it.
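
As a quick illustration of the power-of-two bookkeeping involved (a
minimal, self-contained sketch; the function name here is made up and
the real implementation is in dr_buddy.c further down):

  #include <stdio.h>

  /* Order = log2 of the block count, rounded up; a segment's buddy
   * is found by flipping the lowest bit of its index (seg ^ 1).
   */
  static unsigned int order_for(unsigned int nblocks)
  {
          unsigned int order = 0;

          while ((1u << order) < nblocks)
                  order++;
          return order;
  }

  int main(void)
  {
          unsigned int seg = 5;

          printf("order for 6 blocks: %u\n", order_for(6)); /* 3, i.e. 8 blocks */
          printf("buddy of segment %u: %u\n", seg, seg ^ 1); /* 4 */
          return 0;
  }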

Patches in this series:

[PATCH 4] net/mlx5: DR, Add buddy allocator utilities
  This patch adds a modified implementation of the well-known buddy allocator,
  adjusted for SW steering needs: the algorithm is similar in principle to
  the Linux physical page allocator, but in our case the buddy allocator
  doesn't actually allocate memory; it manages ICM memory that was previously
  allocated and registered.

[PATCH 5] net/mlx5: DR, Handle ICM memory via buddy allocation instead of bucket management
  This patch changes the ICM management of SW steering to use the buddy-system
  mechanism instead of the previous bucket management.

[PATCH 6] net/mlx5: DR, Sync chunks only during free
  This patch makes syncing happen only when freeing memory chunks.

[PATCH 7] net/mlx5: DR, ICM memory pools sync optimization
  This patch adds tracking of the pool's "hot" memory and makes the
  check of whether steering sync is required much shorter and faster.

[PATCH 8] net/mlx5: DR, Free buddy ICM memory if it is unused
  This patch adds tracking of the buddy's used ICM memory,
  and frees the buddy if all of its memory becomes unused.

3) Misc code cleanups

* tag 'mlx5-updates-2020-11-03' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net: mlx5: Replace in_irq() usage
  net/mlx5: Cleanup kernel-doc warnings
  net/mlx4: Cleanup kernel-doc warnings
  net/mlx5e: Validate stop_room size upon user input
  net/mlx5: DR, Free unused buddy ICM memory
  net/mlx5: DR, ICM memory pools sync optimization
  net/mlx5: DR, Sync chunks only during free
  net/mlx5: DR, Handle ICM memory via buddy allocation instead of buckets
  net/mlx5: DR, Add buddy allocator utilities
  net/mlx5: DR, Rename matcher functions to be more HW agnostic
  net/mlx5: DR, Rename builders HW specific names
  net/mlx5: DR, Remove unused member of action struct
====================

Link: https://lore.kernel.org/r/20201105201242.21716-1-saeedm@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents c1aedf01 51443685
@@ -135,7 +135,7 @@ int mlx4_SET_VPORT_QOS_get(struct mlx4_dev *dev, u8 port, u8 vport,
  * @dev: mlx4_dev.
  * @port: Physical port number.
  * @vport: Vport id.
- * @out_param: Array of mlx4_vport_qos_param which holds the requested values.
+ * @in_param: Array of mlx4_vport_qos_param which holds the requested values.
  *
  * Returns 0 on success or a negative mlx4_core errno code.
  **/
......
@@ -81,7 +81,7 @@ mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/tls.o en_accel/tls_rxtx.o en_accel/t
 mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o \
                                         steering/dr_matcher.o steering/dr_rule.o \
-                                        steering/dr_icm_pool.o \
+                                        steering/dr_icm_pool.o steering/dr_buddy.o \
                                         steering/dr_ste.o steering/dr_send.o \
                                         steering/dr_cmd.o steering/dr_fw.o \
                                         steering/dr_action.o steering/fs_dr.o
@@ -2,6 +2,8 @@
 /* Copyright (c) 2019 Mellanox Technologies. */
 
 #include "en/params.h"
+#include "en/txrx.h"
+#include "en_accel/tls_rxtx.h"
 
 static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params,
                                    struct mlx5e_xsk_param *xsk)
@@ -152,3 +154,35 @@ u16 mlx5e_get_rq_headroom(struct mlx5_core_dev *mdev,
 
         return is_linear_skb ? mlx5e_get_linear_rq_headroom(params, xsk) : 0;
 }
+
+u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
+{
+        bool is_mpwqe = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE);
+        u16 stop_room;
+
+        stop_room = mlx5e_tls_get_stop_room(mdev, params);
+        stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS);
+        if (is_mpwqe)
+                /* A MPWQE can take up to the maximum-sized WQE + all the normal
+                 * stop room can be taken if a new packet breaks the active
+                 * MPWQE session and allocates its WQEs right away.
+                 */
+                stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS);
+
+        return stop_room;
+}
+
+int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params)
+{
+        size_t sq_size = 1 << params->log_sq_size;
+        u16 stop_room;
+
+        stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params);
+        if (stop_room >= sq_size) {
+                netdev_err(priv->netdev, "Stop room %hu is bigger than the SQ size %zu\n",
+                           stop_room, sq_size);
+                return -EINVAL;
+        }
+
+        return 0;
+}
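
As a worked example of the check above (hypothetical numbers): with log_sq_size = 6 the SQ holds 64 WQEBBs, so a computed stop_room of 64 or more would leave no usable space in the SQ, and the requested configuration is rejected with -EINVAL.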
@@ -30,6 +30,7 @@ struct mlx5e_sq_param {
         u32 sqc[MLX5_ST_SZ_DW(sqc)];
         struct mlx5_wq_param wq;
         bool is_mpw;
+        u16 stop_room;
 };
 
 struct mlx5e_channel_param {
@@ -124,4 +125,7 @@ void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
                              struct mlx5e_params *params,
                              struct mlx5e_sq_param *param);
 
+u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
+int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params);
+
 #endif /* __MLX5_EN_PARAMS_H__ */
@@ -13,20 +13,20 @@ struct mlx5e_dump_wqe {
         (DIV_ROUND_UP(sizeof(struct mlx5e_dump_wqe), MLX5_SEND_WQE_BB))
 
 static u8
-mlx5e_ktls_dumps_num_wqes(struct mlx5e_txqsq *sq, unsigned int nfrags,
+mlx5e_ktls_dumps_num_wqes(struct mlx5e_params *params, unsigned int nfrags,
                           unsigned int sync_len)
 {
         /* Given the MTU and sync_len, calculates an upper bound for the
          * number of DUMP WQEs needed for the TX resync of a record.
          */
-        return nfrags + DIV_ROUND_UP(sync_len, sq->hw_mtu);
+        return nfrags + DIV_ROUND_UP(sync_len, MLX5E_SW2HW_MTU(params, params->sw_mtu));
 }
 
-u16 mlx5e_ktls_get_stop_room(struct mlx5e_txqsq *sq)
+u16 mlx5e_ktls_get_stop_room(struct mlx5e_params *params)
 {
         u16 num_dumps, stop_room = 0;
 
-        num_dumps = mlx5e_ktls_dumps_num_wqes(sq, MAX_SKB_FRAGS, TLS_MAX_PAYLOAD_SIZE);
+        num_dumps = mlx5e_ktls_dumps_num_wqes(params, MAX_SKB_FRAGS, TLS_MAX_PAYLOAD_SIZE);
 
         stop_room += mlx5e_stop_room_for_wqe(MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS);
         stop_room += mlx5e_stop_room_for_wqe(MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS);
......
@@ -14,7 +14,7 @@ struct mlx5e_accel_tx_tls_state {
         u32 tls_tisn;
 };
 
-u16 mlx5e_ktls_get_stop_room(struct mlx5e_txqsq *sq);
+u16 mlx5e_ktls_get_stop_room(struct mlx5e_params *params);
 
 bool mlx5e_ktls_handle_tx_skb(struct tls_context *tls_ctx, struct mlx5e_txqsq *sq,
                               struct sk_buff *skb, int datalen,
......
@@ -385,15 +385,13 @@ void mlx5e_tls_handle_rx_skb_metadata(struct mlx5e_rq *rq, struct sk_buff *skb,
         *cqe_bcnt -= MLX5E_METADATA_ETHER_LEN;
 }
 
-u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq)
+u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
 {
-        struct mlx5_core_dev *mdev = sq->channel->mdev;
-
         if (!mlx5_accel_is_tls_device(mdev))
                 return 0;
 
         if (mlx5_accel_is_ktls_device(mdev))
-                return mlx5e_ktls_get_stop_room(sq);
+                return mlx5e_ktls_get_stop_room(params);
 
         /* FPGA */
         /* Resync SKB. */
......
@@ -43,7 +43,7 @@
 #include "en.h"
 #include "en/txrx.h"
 
-u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq);
+u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
 
 bool mlx5e_tls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq,
                              struct sk_buff *skb, struct mlx5e_accel_tx_tls_state *state);
@@ -71,7 +71,7 @@ mlx5e_accel_is_tls(struct mlx5_cqe64 *cqe, struct sk_buff *skb) { return false;
 static inline void
 mlx5e_tls_handle_rx_skb(struct mlx5e_rq *rq, struct sk_buff *skb,
                         struct mlx5_cqe64 *cqe, u32 *cqe_bcnt) {}
-static inline u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq)
+static inline u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
 {
         return 0;
 }
......
@@ -32,6 +32,7 @@
 #include "en.h"
 #include "en/port.h"
+#include "en/params.h"
 #include "en/xsk/pool.h"
 #include "lib/clock.h"
@@ -369,6 +370,10 @@ int mlx5e_ethtool_set_ringparam(struct mlx5e_priv *priv,
         new_channels.params.log_rq_mtu_frames = log_rq_size;
         new_channels.params.log_sq_size = log_sq_size;
 
+        err = mlx5e_validate_params(priv, &new_channels.params);
+        if (err)
+                goto unlock;
+
         if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
                 priv->channels.params = new_channels.params;
                 goto unlock;
......
@@ -1121,28 +1121,6 @@ static int mlx5e_alloc_txqsq_db(struct mlx5e_txqsq *sq, int numa)
         return 0;
 }
 
-static int mlx5e_calc_sq_stop_room(struct mlx5e_txqsq *sq, u8 log_sq_size)
-{
-        int sq_size = 1 << log_sq_size;
-
-        sq->stop_room = mlx5e_tls_get_stop_room(sq);
-        sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS);
-        if (test_bit(MLX5E_SQ_STATE_MPWQE, &sq->state))
-                /* A MPWQE can take up to the maximum-sized WQE + all the normal
-                 * stop room can be taken if a new packet breaks the active
-                 * MPWQE session and allocates its WQEs right away.
-                 */
-                sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS);
-
-        if (WARN_ON(sq->stop_room >= sq_size)) {
-                netdev_err(sq->channel->netdev, "Stop room %hu is bigger than the SQ size %d\n",
-                           sq->stop_room, sq_size);
-                return -ENOSPC;
-        }
-
-        return 0;
-}
-
 static void mlx5e_tx_err_cqe_work(struct work_struct *recover_work);
 
 static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
                              int txq_ix,
@@ -1176,9 +1154,7 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
                 set_bit(MLX5E_SQ_STATE_TLS, &sq->state);
         if (param->is_mpw)
                 set_bit(MLX5E_SQ_STATE_MPWQE, &sq->state);
-        err = mlx5e_calc_sq_stop_room(sq, params->log_sq_size);
-        if (err)
-                return err;
+        sq->stop_room = param->stop_room;
 
         param->wq.db_numa_node = cpu_to_node(c->cpu);
         err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
@@ -2225,6 +2201,7 @@ static void mlx5e_build_sq_param(struct mlx5e_priv *priv,
         MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
         MLX5_SET(sqc, sqc, allow_swp, allow_swp);
         param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE);
+        param->stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params);
         mlx5e_build_tx_cq_param(priv, params, &param->cqp);
 }
 
@@ -3999,6 +3976,9 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
         new_channels.params = *params;
         new_channels.params.sw_mtu = new_mtu;
 
+        err = mlx5e_validate_params(priv, &new_channels.params);
+        if (err)
+                goto out;
+
         if (params->xdp_prog &&
             !mlx5e_rx_is_linear_skb(&new_channels.params, NULL)) {
......
@@ -189,19 +189,21 @@ u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq)
         return count_eqe;
 }
 
-static void mlx5_eq_async_int_lock(struct mlx5_eq_async *eq, unsigned long *flags)
+static void mlx5_eq_async_int_lock(struct mlx5_eq_async *eq, bool recovery,
+                                   unsigned long *flags)
         __acquires(&eq->lock)
 {
-        if (in_irq())
+        if (!recovery)
                 spin_lock(&eq->lock);
         else
                 spin_lock_irqsave(&eq->lock, *flags);
 }
 
-static void mlx5_eq_async_int_unlock(struct mlx5_eq_async *eq, unsigned long *flags)
+static void mlx5_eq_async_int_unlock(struct mlx5_eq_async *eq, bool recovery,
+                                     unsigned long *flags)
         __releases(&eq->lock)
 {
-        if (in_irq())
+        if (!recovery)
                 spin_unlock(&eq->lock);
         else
                 spin_unlock_irqrestore(&eq->lock, *flags);
@@ -223,11 +225,13 @@ static int mlx5_eq_async_int(struct notifier_block *nb,
         struct mlx5_eqe *eqe;
         unsigned long flags;
         int num_eqes = 0;
+        bool recovery;
 
         dev = eq->dev;
         eqt = dev->priv.eq_table;
 
-        mlx5_eq_async_int_lock(eq_async, &flags);
+        recovery = action == ASYNC_EQ_RECOVER;
+        mlx5_eq_async_int_lock(eq_async, recovery, &flags);
 
         eqe = next_eqe_sw(eq);
         if (!eqe)
@@ -249,9 +253,9 @@ static int mlx5_eq_async_int(struct notifier_block *nb,
 out:
         eq_update_ci(eq, 1);
-        mlx5_eq_async_int_unlock(eq_async, &flags);
+        mlx5_eq_async_int_unlock(eq_async, recovery, &flags);
 
-        return unlikely(action == ASYNC_EQ_RECOVER) ? num_eqes : 0;
+        return unlikely(recovery) ? num_eqes : 0;
 }
 
 void mlx5_cmd_eq_recover(struct mlx5_core_dev *dev)
......
@@ -47,11 +47,12 @@
 /**
  * enum mlx5_fpga_access_type - Enumerated the different methods possible for
  * accessing the device memory address space
+ *
+ * @MLX5_FPGA_ACCESS_TYPE_I2C: Use the slow CX-FPGA I2C bus
+ * @MLX5_FPGA_ACCESS_TYPE_DONTCARE: Use the fastest available method
  */
 enum mlx5_fpga_access_type {
-        /** Use the slow CX-FPGA I2C bus */
         MLX5_FPGA_ACCESS_TYPE_I2C = 0x0,
-        /** Use the fastest available method */
         MLX5_FPGA_ACCESS_TYPE_DONTCARE = 0x0,
 };
@@ -113,6 +114,7 @@ struct mlx5_fpga_conn_attr {
          * subsequent receives.
          */
         void (*recv_cb)(void *cb_arg, struct mlx5_fpga_dma_buf *buf);
+        /** @cb_arg: A context to be passed to recv_cb callback */
         void *cb_arg;
 };
@@ -145,7 +147,7 @@ void mlx5_fpga_sbu_conn_destroy(struct mlx5_fpga_conn *conn);
 /**
  * mlx5_fpga_sbu_conn_sendmsg() - Queue the transmission of a packet
- * @fdev: An FPGA SBU connection
+ * @conn: An FPGA SBU connection
  * @buf: The packet buffer
  *
  * Queues a packet for transmission over an FPGA SBU connection.
......
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 - 2008 Mellanox Technologies. All rights reserved.
* Copyright (c) 2006 - 2007 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
*/
#include "dr_types.h"
int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
                      unsigned int max_order)
{
        int i;

        buddy->max_order = max_order;

        INIT_LIST_HEAD(&buddy->list_node);
        INIT_LIST_HEAD(&buddy->used_list);
        INIT_LIST_HEAD(&buddy->hot_list);

        buddy->bitmap = kcalloc(buddy->max_order + 1,
                                sizeof(*buddy->bitmap),
                                GFP_KERNEL);
        buddy->num_free = kcalloc(buddy->max_order + 1,
                                  sizeof(*buddy->num_free),
                                  GFP_KERNEL);
        if (!buddy->bitmap || !buddy->num_free)
                goto err_free_all;

        /* Allocating max_order bitmaps, one for each order */
        for (i = 0; i <= buddy->max_order; ++i) {
                unsigned int size = 1 << (buddy->max_order - i);

                buddy->bitmap[i] = bitmap_zalloc(size, GFP_KERNEL);
                if (!buddy->bitmap[i])
                        goto err_out_free_each_bit_per_order;
        }

        /* In the beginning, we have only one order that is available for
         * use (the biggest one), so mark the first bit in both bitmaps.
         */
        bitmap_set(buddy->bitmap[buddy->max_order], 0, 1);
        buddy->num_free[buddy->max_order] = 1;

        return 0;

err_out_free_each_bit_per_order:
        for (i = 0; i <= buddy->max_order; ++i)
                bitmap_free(buddy->bitmap[i]);

err_free_all:
        kfree(buddy->num_free);
        kfree(buddy->bitmap);
        return -ENOMEM;
}

void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy)
{
        int i;

        list_del(&buddy->list_node);

        for (i = 0; i <= buddy->max_order; ++i)
                bitmap_free(buddy->bitmap[i]);

        kfree(buddy->num_free);
        kfree(buddy->bitmap);
}
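
/* Find the first free segment at start_order or above: scan the per-order
 * bitmaps upward until an order with a free bit is found. On success,
 * *segment is the free segment's index within that order's bitmap and
 * *order is the order at which it was found.
 */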
static int dr_buddy_find_free_seg(struct mlx5dr_icm_buddy_mem *buddy,
                                  unsigned int start_order,
                                  unsigned int *segment,
                                  unsigned int *order)
{
        unsigned int seg, order_iter, m;

        for (order_iter = start_order;
             order_iter <= buddy->max_order; ++order_iter) {
                if (!buddy->num_free[order_iter])
                        continue;

                m = 1 << (buddy->max_order - order_iter);
                seg = find_first_bit(buddy->bitmap[order_iter], m);

                if (WARN(seg >= m,
                         "ICM Buddy: failed finding free mem for order %d\n",
                         order_iter))
                        return -ENOMEM;

                break;
        }

        if (order_iter > buddy->max_order)
                return -ENOMEM;

        *segment = seg;
        *order = order_iter;
        return 0;
}

/**
 * mlx5dr_buddy_alloc_mem() - Allocate a free memory segment.
 * @buddy: Buddy to allocate from.
 * @order: Order (power-of-two size, in entries) of the requested segment.
 * @segment: Returned segment number.
 *
 * This function finds the first free area of the ICM memory managed by
 * this buddy. It uses the buddy-system data structures to search for a
 * free segment, starting from the requested order up to the maximum
 * order in the system.
 *
 * Return: 0 when segment is set, non-zero error status otherwise.
 *
 * The returned location (segment) is an index into the whole buddy ICM
 * memory area, i.e. the index of the memory segment that is available
 * for use.
 */
int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy,
                           unsigned int order,
                           unsigned int *segment)
{
        unsigned int seg, order_iter;
        int err;

        err = dr_buddy_find_free_seg(buddy, order, &seg, &order_iter);
        if (err)
                return err;

        bitmap_clear(buddy->bitmap[order_iter], seg, 1);
        --buddy->num_free[order_iter];

        /* If we found free memory in some order that is bigger than the
         * required order, we need to split every order between the required
         * order and the order that we found into two parts, and mark accordingly.
         */
        while (order_iter > order) {
                --order_iter;
                seg <<= 1;
                bitmap_set(buddy->bitmap[order_iter], seg ^ 1, 1);
                ++buddy->num_free[order_iter];
        }

        seg <<= order;
        *segment = seg;

        return 0;
}
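
/* Return a segment of the given order to the buddy, merging it with its
 * buddy segment (seg ^ 1) into one free segment of the next-higher order,
 * for as long as that buddy segment is also free.
 */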
void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy,
                           unsigned int seg, unsigned int order)
{
        seg >>= order;

        /* Whenever a segment is free,
         * the mem is added to the buddy that gave it.
         */
        while (test_bit(seg ^ 1, buddy->bitmap[order])) {
                bitmap_clear(buddy->bitmap[order], seg ^ 1, 1);
                --buddy->num_free[order];
                seg >>= 1;
                ++order;
        }

        bitmap_set(buddy->bitmap[order], seg, 1);
        ++buddy->num_free[order];
}
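
For orientation, a hypothetical caller sketch (simplified and illustrative
only; the real caller in dr_icm_pool.c also creates the ICM MR and
maintains the used/hot chunk lists):

  /* Manage 2^7 = 128 ICM entries and carve out 2^order of them. */
  static int example_icm_use(struct mlx5dr_icm_buddy_mem *buddy,
                             unsigned int order)
  {
          unsigned int seg;
          int err;

          err = mlx5dr_buddy_init(buddy, 7);
          if (err)
                  return err;

          /* Reserve 2^order contiguous entries; seg is the start index */
          err = mlx5dr_buddy_alloc_mem(buddy, order, &seg);
          if (!err) {
                  /* ... use entries [seg, seg + (1 << order)) ... */
                  mlx5dr_buddy_free_mem(buddy, seg, order);
          }

          mlx5dr_buddy_cleanup(buddy);
          return err;
  }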
@@ -93,12 +93,12 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
         caps->gvmi = MLX5_CAP_GEN(mdev, vhca_id);
         caps->flex_protocols = MLX5_CAP_GEN(mdev, flex_parser_protocols);
 
-        if (mlx5dr_matcher_supp_flex_parser_icmp_v4(caps)) {
+        if (caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED) {
                 caps->flex_parser_id_icmp_dw0 = MLX5_CAP_GEN(mdev, flex_parser_id_icmp_dw0);
                 caps->flex_parser_id_icmp_dw1 = MLX5_CAP_GEN(mdev, flex_parser_id_icmp_dw1);
         }
 
-        if (mlx5dr_matcher_supp_flex_parser_icmp_v6(caps)) {
+        if (caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED) {
                 caps->flex_parser_id_icmpv6_dw0 =
                         MLX5_CAP_GEN(mdev, flex_parser_id_icmpv6_dw0);
                 caps->flex_parser_id_icmpv6_dw1 =
......
@@ -85,7 +85,7 @@ static bool dr_mask_is_ttl_set(struct mlx5dr_match_spec *spec)
         (_misc2)._inner_outer##_first_mpls_s_bos || \
         (_misc2)._inner_outer##_first_mpls_ttl)
 
-static bool dr_mask_is_gre_set(struct mlx5dr_match_misc *misc)
+static bool dr_mask_is_tnl_gre_set(struct mlx5dr_match_misc *misc)
 {
         return (misc->gre_key_h || misc->gre_key_l ||
                 misc->gre_protocol || misc->gre_c_present ||
@@ -98,12 +98,12 @@ static bool dr_mask_is_gre_set(struct mlx5dr_match_misc *misc)
         (_misc2).outer_first_mpls_over_##gre_udp##_s_bos || \
         (_misc2).outer_first_mpls_over_##gre_udp##_ttl)
 
-#define DR_MASK_IS_FLEX_PARSER_0_SET(_misc2) ( \
+#define DR_MASK_IS_TNL_MPLS_SET(_misc2) ( \
         DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), gre) || \
         DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), udp))
 
 static bool
-dr_mask_is_misc3_vxlan_gpe_set(struct mlx5dr_match_misc3 *misc3)
+dr_mask_is_vxlan_gpe_set(struct mlx5dr_match_misc3 *misc3)
 {
         return (misc3->outer_vxlan_gpe_vni ||
                 misc3->outer_vxlan_gpe_next_protocol ||
@@ -111,21 +111,20 @@ dr_mask_is_misc3_vxlan_gpe_set(struct mlx5dr_match_misc3 *misc3)
 }
 
 static bool
-dr_matcher_supp_flex_parser_vxlan_gpe(struct mlx5dr_cmd_caps *caps)
+dr_matcher_supp_vxlan_gpe(struct mlx5dr_cmd_caps *caps)
 {
-        return caps->flex_protocols &
-               MLX5_FLEX_PARSER_VXLAN_GPE_ENABLED;
+        return caps->flex_protocols & MLX5_FLEX_PARSER_VXLAN_GPE_ENABLED;
 }
 
 static bool
-dr_mask_is_flex_parser_tnl_vxlan_gpe_set(struct mlx5dr_match_param *mask,
-                                         struct mlx5dr_domain *dmn)
+dr_mask_is_tnl_vxlan_gpe(struct mlx5dr_match_param *mask,
+                         struct mlx5dr_domain *dmn)
 {
-        return dr_mask_is_misc3_vxlan_gpe_set(&mask->misc3) &&
-               dr_matcher_supp_flex_parser_vxlan_gpe(&dmn->info.caps);
+        return dr_mask_is_vxlan_gpe_set(&mask->misc3) &&
+               dr_matcher_supp_vxlan_gpe(&dmn->info.caps);
 }
 
-static bool dr_mask_is_misc_geneve_set(struct mlx5dr_match_misc *misc)
+static bool dr_mask_is_tnl_geneve_set(struct mlx5dr_match_misc *misc)
 {
         return misc->geneve_vni ||
                misc->geneve_oam ||
@@ -134,26 +133,46 @@ static bool dr_mask_is_misc_geneve_set(struct mlx5dr_match_misc *misc)
 }
 
 static bool
-dr_matcher_supp_flex_parser_geneve(struct mlx5dr_cmd_caps *caps)
+dr_matcher_supp_tnl_geneve(struct mlx5dr_cmd_caps *caps)
 {
-        return caps->flex_protocols &
-               MLX5_FLEX_PARSER_GENEVE_ENABLED;
+        return caps->flex_protocols & MLX5_FLEX_PARSER_GENEVE_ENABLED;
 }
 
 static bool
-dr_mask_is_flex_parser_tnl_geneve_set(struct mlx5dr_match_param *mask,
-                                      struct mlx5dr_domain *dmn)
+dr_mask_is_tnl_geneve(struct mlx5dr_match_param *mask,
+                      struct mlx5dr_domain *dmn)
 {
-        return dr_mask_is_misc_geneve_set(&mask->misc) &&
-               dr_matcher_supp_flex_parser_geneve(&dmn->info.caps);
+        return dr_mask_is_tnl_geneve_set(&mask->misc) &&
+               dr_matcher_supp_tnl_geneve(&dmn->info.caps);
 }
 
-static bool dr_mask_is_flex_parser_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
+static int dr_matcher_supp_icmp_v4(struct mlx5dr_cmd_caps *caps)
+{
+        return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED;
+}
+
+static int dr_matcher_supp_icmp_v6(struct mlx5dr_cmd_caps *caps)
+{
+        return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED;
+}
+
+static bool dr_mask_is_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
 {
         return (misc3->icmpv6_type || misc3->icmpv6_code ||
                 misc3->icmpv6_header_data);
 }
 
+static bool dr_mask_is_icmp(struct mlx5dr_match_param *mask,
+                            struct mlx5dr_domain *dmn)
+{
+        if (DR_MASK_IS_ICMPV4_SET(&mask->misc3))
+                return dr_matcher_supp_icmp_v4(&dmn->info.caps);
+        else if (dr_mask_is_icmpv6_set(&mask->misc3))
+                return dr_matcher_supp_icmp_v6(&dmn->info.caps);
+
+        return false;
+}
+
 static bool dr_mask_is_wqe_metadata_set(struct mlx5dr_match_misc2 *misc2)
 {
         return misc2->metadata_reg_a;
@@ -257,7 +276,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 
                 if (dr_mask_is_smac_set(&mask.outer) &&
                     dr_mask_is_dmac_set(&mask.outer)) {
-                        mlx5dr_ste_build_eth_l2_src_des(&sb[idx++], &mask,
+                        mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++], &mask,
                                                         inner, rx);
                 }
 
@@ -277,7 +296,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
                                                           inner, rx);
 
                         if (DR_MASK_IS_ETH_L4_SET(mask.outer, mask.misc, outer))
-                                mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
+                                mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
                                                             inner, rx);
                 } else {
                         if (dr_mask_is_ipv4_5_tuple_set(&mask.outer))
@@ -289,13 +308,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
                                                          inner, rx);
                 }
 
-                if (dr_mask_is_flex_parser_tnl_vxlan_gpe_set(&mask, dmn))
-                        mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(&sb[idx++],
-                                                                   &mask,
-                                                                   inner, rx);
-                else if (dr_mask_is_flex_parser_tnl_geneve_set(&mask, dmn))
-                        mlx5dr_ste_build_flex_parser_tnl_geneve(&sb[idx++],
-                                                                &mask,
-                                                                inner, rx);
+                if (dr_mask_is_tnl_vxlan_gpe(&mask, dmn))
+                        mlx5dr_ste_build_tnl_vxlan_gpe(&sb[idx++], &mask,
+                                                       inner, rx);
+                else if (dr_mask_is_tnl_geneve(&mask, dmn))
+                        mlx5dr_ste_build_tnl_geneve(&sb[idx++], &mask,
+                                                    inner, rx);
 
                 if (DR_MASK_IS_ETH_L4_MISC_SET(mask.misc3, outer))
@@ -304,22 +321,18 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
                 if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, outer))
                         mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-                if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-                        mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask,
-                                                       inner, rx);
+                if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+                        mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
 
-                if ((DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(&mask.misc3) &&
-                     mlx5dr_matcher_supp_flex_parser_icmp_v4(&dmn->info.caps)) ||
-                    (dr_mask_is_flex_parser_icmpv6_set(&mask.misc3) &&
-                     mlx5dr_matcher_supp_flex_parser_icmp_v6(&dmn->info.caps))) {
-                        ret = mlx5dr_ste_build_flex_parser_1(&sb[idx++],
-                                                             &mask, &dmn->info.caps,
-                                                             inner, rx);
+                if (dr_mask_is_icmp(&mask, dmn)) {
+                        ret = mlx5dr_ste_build_icmp(&sb[idx++],
+                                                    &mask, &dmn->info.caps,
+                                                    inner, rx);
                         if (ret)
                                 return ret;
                 }
 
-                if (dr_mask_is_gre_set(&mask.misc))
-                        mlx5dr_ste_build_gre(&sb[idx++], &mask, inner, rx);
+                if (dr_mask_is_tnl_gre_set(&mask.misc))
+                        mlx5dr_ste_build_tnl_gre(&sb[idx++], &mask, inner, rx);
         }
 
         /* Inner */
@@ -334,7 +347,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 
                 if (dr_mask_is_smac_set(&mask.inner) &&
                     dr_mask_is_dmac_set(&mask.inner)) {
-                        mlx5dr_ste_build_eth_l2_src_des(&sb[idx++],
+                        mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++],
                                                         &mask, inner, rx);
                 }
 
@@ -354,7 +367,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
                                                           inner, rx);
 
                         if (DR_MASK_IS_ETH_L4_SET(mask.inner, mask.misc, inner))
-                                mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
+                                mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
                                                             inner, rx);
                 } else {
                         if (dr_mask_is_ipv4_5_tuple_set(&mask.inner))
@@ -372,8 +385,8 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
                 if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, inner))
                         mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-                if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-                        mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask, inner, rx);
+                if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+                        mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
         }
 
         /* Empty matcher, takes all */
         if (matcher->match_criteria == DR_MATCHER_CRITERIA_EMPTY)
......
@@ -1090,7 +1090,7 @@ static int dr_ste_build_eth_l2_src_des_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *sb,
                                      struct mlx5dr_match_param *mask,
                                      bool inner, bool rx)
 {
@@ -1594,7 +1594,7 @@ static int dr_ste_build_ipv6_l3_l4_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
                                  struct mlx5dr_match_param *mask,
                                  bool inner, bool rx)
 {
@@ -1693,7 +1693,7 @@ static int dr_ste_build_gre_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
                           struct mlx5dr_match_param *mask, bool inner, bool rx)
 {
         dr_ste_build_gre_bit_mask(mask, inner, sb->bit_mask);
@@ -1771,7 +1771,7 @@ static int dr_ste_build_flex_parser_0_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
                                struct mlx5dr_match_param *mask,
                                bool inner, bool rx)
 {
@@ -1792,8 +1792,8 @@ static int dr_ste_build_flex_parser_1_bit_mask(struct mlx5dr_match_param *mask,
                                                struct mlx5dr_cmd_caps *caps,
                                                u8 *bit_mask)
 {
+        bool is_ipv4_mask = DR_MASK_IS_ICMPV4_SET(&mask->misc3);
         struct mlx5dr_match_misc3 *misc_3_mask = &mask->misc3;
-        bool is_ipv4_mask = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3_mask);
         u32 icmp_header_data_mask;
         u32 icmp_type_mask;
         u32 icmp_code_mask;
@@ -1869,7 +1869,7 @@ static int dr_ste_build_flex_parser_1_tag(struct mlx5dr_match_param *value,
         u32 icmp_code;
         bool is_ipv4;
 
-        is_ipv4 = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3);
+        is_ipv4 = DR_MASK_IS_ICMPV4_SET(misc_3);
         if (is_ipv4) {
                 icmp_header_data = misc_3->icmpv4_header_data;
                 icmp_type        = misc_3->icmpv4_type;
@@ -1928,7 +1928,7 @@ static int dr_ste_build_flex_parser_1_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
                           struct mlx5dr_match_param *mask,
                           struct mlx5dr_cmd_caps *caps,
                           bool inner, bool rx)
@@ -2069,7 +2069,7 @@ dr_ste_build_flex_parser_tnl_vxlan_gpe_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
                                     struct mlx5dr_match_param *mask,
                                     bool inner, bool rx)
 {
@@ -2122,7 +2122,7 @@ dr_ste_build_flex_parser_tnl_geneve_tag(struct mlx5dr_match_param *value,
         return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
                                  struct mlx5dr_match_param *mask,
                                  bool inner, bool rx)
 {
......
@@ -114,7 +114,7 @@ enum mlx5dr_ipv {
 
 struct mlx5dr_icm_pool;
 struct mlx5dr_icm_chunk;
-struct mlx5dr_icm_bucket;
+struct mlx5dr_icm_buddy_mem;
 struct mlx5dr_ste_htbl;
 struct mlx5dr_match_param;
 struct mlx5dr_cmd_caps;
@@ -288,7 +288,7 @@ int mlx5dr_ste_build_ste_arr(struct mlx5dr_matcher *matcher,
                              struct mlx5dr_matcher_rx_tx *nic_matcher,
                              struct mlx5dr_match_param *value,
                              u8 *ste_arr);
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *builder,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *builder,
                                      struct mlx5dr_match_param *mask,
                                      bool inner, bool rx);
 void mlx5dr_ste_build_eth_l3_ipv4_5_tuple(struct mlx5dr_ste_build *sb,
@@ -312,29 +312,29 @@ void mlx5dr_ste_build_eth_l2_dst(struct mlx5dr_ste_build *sb,
 void mlx5dr_ste_build_eth_l2_tnl(struct mlx5dr_ste_build *sb,
                                  struct mlx5dr_match_param *mask,
                                  bool inner, bool rx);
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
-                                 struct mlx5dr_match_param *mask,
-                                 bool inner, bool rx);
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+                                     struct mlx5dr_match_param *mask,
+                                     bool inner, bool rx);
 void mlx5dr_ste_build_eth_l4_misc(struct mlx5dr_ste_build *sb,
                                   struct mlx5dr_match_param *mask,
                                   bool inner, bool rx);
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
-                          struct mlx5dr_match_param *mask,
-                          bool inner, bool rx);
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
+                              struct mlx5dr_match_param *mask,
+                              bool inner, bool rx);
 void mlx5dr_ste_build_mpls(struct mlx5dr_ste_build *sb,
                            struct mlx5dr_match_param *mask,
                            bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
-                                    struct mlx5dr_match_param *mask,
-                                    bool inner, bool rx);
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
+                               struct mlx5dr_match_param *mask,
+                               bool inner, bool rx);
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
-                                   struct mlx5dr_match_param *mask,
-                                   struct mlx5dr_cmd_caps *caps,
-                                   bool inner, bool rx);
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
+                          struct mlx5dr_match_param *mask,
+                          struct mlx5dr_cmd_caps *caps,
+                          bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
-                                                struct mlx5dr_match_param *mask,
-                                                bool inner, bool rx);
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
+                                    struct mlx5dr_match_param *mask,
+                                    bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
-                                             struct mlx5dr_match_param *mask,
-                                             bool inner, bool rx);
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
+                                 struct mlx5dr_match_param *mask,
+                                 bool inner, bool rx);
 void mlx5dr_ste_build_general_purpose(struct mlx5dr_ste_build *sb,
@@ -588,7 +588,7 @@ struct mlx5dr_match_param {
         struct mlx5dr_match_misc3 misc3;
 };
 
-#define DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
+#define DR_MASK_IS_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
                                         (_misc3)->icmpv4_code || \
                                         (_misc3)->icmpv4_header_data)
@@ -731,7 +731,6 @@ struct mlx5dr_action {
                         struct mlx5dr_domain *dmn;
                         struct mlx5dr_icm_chunk *chunk;
                         u8 *data;
-                        u32 data_size;
                         u16 num_of_actions;
                         u32 index;
                         u8 allow_rx:1;
@@ -804,7 +803,7 @@ void mlx5dr_rule_update_rule_member(struct mlx5dr_ste *new_ste,
                                     struct mlx5dr_ste *ste);
 
 struct mlx5dr_icm_chunk {
-        struct mlx5dr_icm_bucket *bucket;
+        struct mlx5dr_icm_buddy_mem *buddy_mem;
         struct list_head chunk_list;
         u32 rkey;
         u32 num_of_entries;
@@ -812,6 +811,11 @@ struct mlx5dr_icm_chunk {
         u64 icm_addr;
         u64 mr_addr;
 
+        /* indicates the index of this chunk in the whole memory,
+         * used for deleting the chunk from the buddy
+         */
+        unsigned int seg;
+
         /* Memory optimisation */
         struct mlx5dr_ste *ste_arr;
         u8 *hw_ste_arr;
@@ -840,23 +844,20 @@ static inline void mlx5dr_domain_unlock(struct mlx5dr_domain *dmn)
         mlx5dr_domain_nic_unlock(&dmn->info.rx);
 }
 
-static inline int
-mlx5dr_matcher_supp_flex_parser_icmp_v4(struct mlx5dr_cmd_caps *caps)
-{
-        return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED;
-}
-
-static inline int
-mlx5dr_matcher_supp_flex_parser_icmp_v6(struct mlx5dr_cmd_caps *caps)
-{
-        return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED;
-}
-
 int mlx5dr_matcher_select_builders(struct mlx5dr_matcher *matcher,
                                    struct mlx5dr_matcher_rx_tx *nic_matcher,
                                    enum mlx5dr_ipv outer_ipv,
                                    enum mlx5dr_ipv inner_ipv);
 
+static inline int
+mlx5dr_icm_pool_dm_type_to_entry_size(enum mlx5dr_icm_type icm_type)
+{
+        if (icm_type == DR_ICM_TYPE_STE)
+                return DR_STE_SIZE;
+
+        return DR_MODIFY_ACTION_SIZE;
+}
+
 static inline u32
 mlx5dr_icm_pool_chunk_size_to_entries(enum mlx5dr_icm_chunk_size chunk_size)
 {
@@ -870,11 +871,7 @@ mlx5dr_icm_pool_chunk_size_to_byte(enum mlx5dr_icm_chunk_size chunk_size,
         int num_of_entries;
         int entry_size;
 
-        if (icm_type == DR_ICM_TYPE_STE)
-                entry_size = DR_STE_SIZE;
-        else
-                entry_size = DR_MODIFY_ACTION_SIZE;
-
+        entry_size = mlx5dr_icm_pool_dm_type_to_entry_size(icm_type);
         num_of_entries = mlx5dr_icm_pool_chunk_size_to_entries(chunk_size);
 
         return entry_size * num_of_entries;
......
@@ -127,4 +127,36 @@ mlx5dr_is_supported(struct mlx5_core_dev *dev)
         return MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner);
 }
 
+/* buddy functions & structure */
+
+struct mlx5dr_icm_mr;
+
+struct mlx5dr_icm_buddy_mem {
+        unsigned long           **bitmap;
+        unsigned int            *num_free;
+        u32                     max_order;
+        struct list_head        list_node;
+        struct mlx5dr_icm_mr    *icm_mr;
+        struct mlx5dr_icm_pool  *pool;
+
+        /* This is the list of used chunks. HW may be accessing this memory */
+        struct list_head        used_list;
+        u64                     used_memory;
+
+        /* Hardware may be accessing this memory but at some future,
+         * undetermined time, it might cease to do so.
+         * sync_ste command sets them free.
+         */
+        struct list_head        hot_list;
+};
+
+int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
+                      unsigned int max_order);
+void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy);
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy,
+                           unsigned int order,
+                           unsigned int *segment);
+void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy,
+                           unsigned int seg, unsigned int order);
+
 #endif /* _MLX5DR_H_ */