Commit 57e70716 authored by Shay Drory, committed by Leon Romanovsky

RDMA/mlx5: Implement mkeys management via LIFO queue

Currently, mkeys are managed via an xarray. Due to the xarray's internal
implementation, this leads to degradation when many MRs are deregistered
in parallel; for example, deregistering 1M MRs via 64 threads takes ~15%
more time [1].

Hence, implement mkeys management via a LIFO queue, which resolves the
degradation.

[1]
2.8us in kernel v5.19 compared to 3.2us in kernel v6.4
Signed-off-by: Shay Drory <shayd@nvidia.com>
Link: https://lore.kernel.org/r/fde3d4cfab0f32f0ccb231cd113298256e1502c5.1695283384.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
parent cb7ab785
@@ -753,10 +753,25 @@ struct umr_common {
 	unsigned int state;
 };
 
+#define NUM_MKEYS_PER_PAGE \
+	((PAGE_SIZE - sizeof(struct list_head)) / sizeof(u32))
+
+struct mlx5_mkeys_page {
+	u32 mkeys[NUM_MKEYS_PER_PAGE];
+	struct list_head list;
+};
+static_assert(sizeof(struct mlx5_mkeys_page) == PAGE_SIZE);
+
+struct mlx5_mkeys_queue {
+	struct list_head pages_list;
+	u32 num_pages;
+	unsigned long ci;
+	spinlock_t lock; /* sync list ops */
+};
+
 struct mlx5_cache_ent {
-	struct xarray mkeys;
-	unsigned long stored;
-	unsigned long reserved;
+	struct mlx5_mkeys_queue mkeys_queue;
+	u32 pending;
 
 	char name[4];
...
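The hunk above sizes each struct mlx5_mkeys_page so that the mkeys array plus the list head fill a kernel page exactly, which is what the static_assert enforces. The standalone snippet below only reworks that arithmetic under an assumption of a 64-bit build with 4 KiB pages, where struct list_head is two pointers (16 bytes), giving (4096 - 16) / 4 = 1020 mkeys per page; the model_* names are illustrative stand-ins, not the kernel definitions.

/*
 * Compile-time check of the page sizing, assuming 64-bit pointers and
 * 4 KiB pages: each page then carries (4096 - 16) / 4 = 1020 mkeys.
 */
#include <assert.h>
#include <stdint.h>

#define MODEL_PAGE_SIZE 4096UL

struct model_list_head { void *next, *prev; };	/* mirrors struct list_head */

#define MODEL_NUM_MKEYS_PER_PAGE \
	((MODEL_PAGE_SIZE - sizeof(struct model_list_head)) / sizeof(uint32_t))

struct model_mkeys_page {
	uint32_t mkeys[MODEL_NUM_MKEYS_PER_PAGE];
	struct model_list_head list;
};

static_assert(MODEL_NUM_MKEYS_PER_PAGE == 1020, "1020 mkeys per 4 KiB page");
static_assert(sizeof(struct model_mkeys_page) == MODEL_PAGE_SIZE,
	      "mkeys array plus list head fill the page exactly");

This is a compile-only check (e.g. cc -std=c11 -c); with larger power-of-two pages the per-page count grows, but the "fills the page exactly" property still holds.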
This diff is collapsed.
@@ -332,8 +332,8 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
 		WARN_ON_ONCE(1);
 		mlx5_ib_warn(dev,
-			     "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs\n",
-			     umr_context.status);
+			     "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n",
+			     umr_context.status, mkey);
 		mutex_lock(&umrc->lock);
 		err = mlx5r_umr_recover(dev);
 		mutex_unlock(&umrc->lock);
...
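For context on how a page-based LIFO of mkeys behaves, here is a minimal userspace sketch of push/pop in that style. It is not the driver's code: the lifo_push/lifo_pop names, the single next pointer in place of list_head, and the malloc/free allocation are simplifying assumptions, and there is no locking here, unlike the spinlock-protected queue added in the header above.

/*
 * Minimal userspace model of a page-based LIFO queue of mkeys.
 * Pages are stacked; the head page is the one currently being filled.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define LIFO_PAGE_SIZE 4096
#define LIFO_MKEYS_PER_PAGE \
	((LIFO_PAGE_SIZE - sizeof(void *)) / sizeof(uint32_t))

struct lifo_mkeys_page {
	uint32_t mkeys[LIFO_MKEYS_PER_PAGE];
	struct lifo_mkeys_page *next;	/* previously filled pages */
};

struct lifo_mkeys_queue {
	struct lifo_mkeys_page *head;	/* page currently being filled */
	unsigned long ci;		/* mkeys stored in the head page */
};

/* Push a free mkey; allocate a fresh head page when the current one is full. */
static int lifo_push(struct lifo_mkeys_queue *q, uint32_t mkey)
{
	if (!q->head || q->ci == LIFO_MKEYS_PER_PAGE) {
		struct lifo_mkeys_page *page = calloc(1, sizeof(*page));

		if (!page)
			return -1;
		page->next = q->head;
		q->head = page;
		q->ci = 0;
	}
	q->head->mkeys[q->ci++] = mkey;
	return 0;
}

/* Pop the most recently pushed mkey (LIFO order); drop drained head pages. */
static int lifo_pop(struct lifo_mkeys_queue *q, uint32_t *mkey)
{
	if (q->head && !q->ci) {
		struct lifo_mkeys_page *old = q->head;

		q->head = old->next;
		free(old);
		q->ci = q->head ? LIFO_MKEYS_PER_PAGE : 0;
	}
	if (!q->head)
		return -1;
	*mkey = q->head->mkeys[--q->ci];
	return 0;
}

int main(void)
{
	struct lifo_mkeys_queue q = { 0 };
	uint32_t mkey;

	for (uint32_t i = 1; i <= 3; i++)
		lifo_push(&q, i);
	while (!lifo_pop(&q, &mkey))
		printf("popped mkey %" PRIu32 "\n", mkey);	/* 3, 2, 1 */
	return 0;
}

Every push and pop touches only the head page and a counter, which is consistent with the commit message attributing the parallel-deregistration slowdown to the xarray's internal implementation rather than to the amount of data stored.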