Commit 71e084e2 authored by Shay Drory, committed by Saeed Mahameed

net/mlx5: Allocating a pool of MSI-X vectors for SFs

SFs (Sub Functions) currently use IRQs from the global IRQ table of their
parent Physical Function. In order to scale better, we need to allocate
more IRQs and share them between different SFs.

The driver will maintain three separate IRQ pools:
1. A pool that serves the PF consumers (the PF's netdev and rdma stacks),
similar to what the driver had before this patch. That is, this pool
shares IRQs between the rdma and netdev stacks and preserves the IRQ
indexes and allocation order. The latter is important for the PF netdev
rmap (aRFS).

2. A pool of control IRQs for SFs. The size of this pool is the number
of SFs that can be created, divided by SFS_PER_IRQ. This pool serves the
control path EQs of the SFs.

3. A pool of completion (data path) IRQs for SF transport queues. The
size of this pool is:
num_irqs_allocated - pf_pool_size - sf_ctrl_pool_size.
This pool serves the netdev and rdma stacks. Note that rmap is not
supported on SFs (see the note below). A sketch of the sizing arithmetic
follows this list.
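For illustration, a minimal userspace sketch of the sizing arithmetic
described above; the function split_irq_pools and the values chosen for
total_irqs, pf_pool_size, num_sfs, and SFS_PER_IRQ are hypothetical, not
the driver's actual identifiers:

#include <stdio.h>

/* Hypothetical value: how many SFs share one control IRQ. */
#define SFS_PER_IRQ 2

/*
 * Split the MSI-X vectors allocated to the device into the three pools
 * described above: PF pool, SF control pool, SF completion pool.
 */
static void split_irq_pools(int total_irqs, int pf_pool_size, int num_sfs)
{
	int sf_ctrl_pool_size = num_sfs / SFS_PER_IRQ;
	int sf_comp_pool_size = total_irqs - pf_pool_size - sf_ctrl_pool_size;

	printf("PF pool:      %d IRQs\n", pf_pool_size);
	printf("SF ctrl pool: %d IRQs\n", sf_ctrl_pool_size);
	printf("SF comp pool: %d IRQs\n", sf_comp_pool_size);
}

int main(void)
{
	/* e.g. 64 MSI-X vectors total, 16 kept for the PF, 32 SFs */
	split_irq_pools(64, 16, 32);
	return 0;
}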

The sharing methodology of the SF pools is explained in the next patch.

Important note: rmap is not supported on SFs because an rmap mapping
cannot function correctly for IRQs that are shared across different
cores/netdev RX rings.
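To make that limitation concrete, here is a hypothetical userspace model
(not driver code) of a reverse map: aRFS assumes each IRQ resolves to
exactly one RX ring, and sharing an IRQ across rings makes the reverse
lookup ambiguous:

#include <stdio.h>
#include <string.h>

#define MAX_IRQS 8
static int irq_to_ring[MAX_IRQS];	/* reverse map: IRQ -> RX ring */

static void rmap_add(int irq, int ring)
{
	if (irq_to_ring[irq] >= 0 && irq_to_ring[irq] != ring)
		printf("irq %d already maps to ring %d; adding ring %d makes the lookup ambiguous\n",
		       irq, irq_to_ring[irq], ring);
	irq_to_ring[irq] = ring;
}

int main(void)
{
	memset(irq_to_ring, -1, sizeof(irq_to_ring));	/* -1 = unmapped */

	rmap_add(1, 3);	/* PF case: dedicated IRQ per ring -- unambiguous */
	rmap_add(2, 5);	/* SF A's RX ring on IRQ 2 */
	rmap_add(2, 7);	/* SF B shares IRQ 2: the 1:1 assumption breaks */
	return 0;
}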
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
parent fc63dd2a
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -471,14 +471,7 @@ static int create_async_eq(struct mlx5_core_dev *dev,
 	int err;
 
 	mutex_lock(&eq_table->lock);
-	/* Async EQs must share irq index 0 */
-	if (param->irq_index != 0) {
-		err = -EINVAL;
-		goto unlock;
-	}
-
 	err = create_map_eq(dev, eq, param);
-unlock:
 	mutex_unlock(&eq_table->lock);
 	return err;
 }
@@ -996,8 +989,11 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
 
 	eq_table->num_comp_eqs =
 		min_t(int,
-		      mlx5_irq_get_num_comp(eq_table->irq_table),
+		      mlx5_irq_table_get_num_comp(eq_table->irq_table),
 		      num_eqs - MLX5_MAX_ASYNC_EQS);
+	if (mlx5_core_is_sf(dev))
+		eq_table->num_comp_eqs = min_t(int, eq_table->num_comp_eqs,
+					       MLX5_COMP_EQS_PER_SF);
 
 	err = create_async_eqs(dev);
 	if (err) {
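As an illustrative aside (the values are made up, not from the patch):
with the hunk above, the completion EQ count is first bounded by the
available completion IRQs and the EQ budget, and on an SF it is
additionally capped at MLX5_COMP_EQS_PER_SF. A minimal sketch, where
min_int stands in for the kernel's min_t:

#include <stdio.h>

#define MLX5_COMP_EQS_PER_SF 8

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int num_comp_irqs = 20;	/* hypothetical completion IRQs available */
	int eq_budget = 60;	/* stands in for num_eqs - MLX5_MAX_ASYNC_EQS */
	int num_comp_eqs = min_int(num_comp_irqs, eq_budget);

	/* On an SF, the count is further capped: */
	num_comp_eqs = min_int(num_comp_eqs, MLX5_COMP_EQS_PER_SF);
	printf("SF completion EQs: %d\n", num_comp_eqs);	/* -> 8 */
	return 0;
}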
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
@@ -6,13 +6,17 @@
 
 #include <linux/mlx5/driver.h>
 
+#define MLX5_COMP_EQS_PER_SF 8
+
+#define MLX5_IRQ_EQ_CTRL (0)
+
 struct mlx5_irq;
 
 int mlx5_irq_table_init(struct mlx5_core_dev *dev);
 void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
 int mlx5_irq_table_create(struct mlx5_core_dev *dev);
 void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
-int mlx5_irq_get_num_comp(struct mlx5_irq_table *table);
+int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table);
 struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev);
 int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int devfn,