Commit 76e463f6 authored by Leon Romanovsky

net/mlx5e: Overcome slow response for first IPsec ASO WQE

The first ASO WQE causes a cache miss in hardware, so the result cannot
be returned immediately. This leads to a situation where such a WQE is
polled earlier than needed. Add logic to retry the ASO CQ polling
operation.

Link: https://lore.kernel.org/r/eb92a758c533ff3f058e0dcb4f8d2324355304ad.1680162300.git.leonro@nvidia.com
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
parent d05971a4
@@ -568,6 +568,7 @@ int mlx5e_ipsec_aso_query(struct mlx5e_ipsec_sa_entry *sa_entry,
 	struct mlx5_wqe_aso_ctrl_seg *ctrl;
 	struct mlx5e_hw_objs *res;
 	struct mlx5_aso_wqe *wqe;
+	unsigned long expires;
 	u8 ds_cnt;
 	int ret;
@@ -589,7 +590,12 @@ int mlx5e_ipsec_aso_query(struct mlx5e_ipsec_sa_entry *sa_entry,
 	mlx5e_ipsec_aso_copy(ctrl, data);
 
 	mlx5_aso_post_wqe(aso->aso, false, &wqe->ctrl);
-	ret = mlx5_aso_poll_cq(aso->aso, false);
+	expires = jiffies + msecs_to_jiffies(10);
+	do {
+		ret = mlx5_aso_poll_cq(aso->aso, false);
+		if (ret)
+			usleep_range(2, 10);
+	} while (ret && time_is_after_jiffies(expires));
 	spin_unlock_bh(&aso->lock);
 	return ret;
 }