Commit 36154be4 authored by Tariq Toukan, committed by David S. Miller

net/mlx5e: Fix wrong CQE decompression

In CQE compression with striding RQ, the decompression of the CQE field
wqe_counter was done with a wrong wraparound value.
This caused CQEs to be handled with a wrong pointer to the WQE (RX descriptor),
creating SKBs with wrong data that pointed to wrong (and already consumed)
strides/pages.

In striding RQ, the CQE field wqe_counter holds the stride index
rather than the WQE index. Hence, when decompressing a CQE,
wqe_counter should have been wrapped around the number of strides
in a single multi-packet WQE, not around the number of WQEs in the RQ.

We dropped this wrap-around mask altogether in CQE decompression for striding
RQ. It is not needed there, since the CQE compression session would
break on a change of the wqe_id field, starting a new
compression session.
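
For illustration, here is a minimal standalone C sketch (not driver code) of
the two wraparound behaviors, using made-up sizes: a 16-WQE ring whose
WQE-count mask is WQ_SZ_M1 (a name invented for this sketch), where one
multi-packet WQE carries many strides. Masking the stride-based counter with
the WQE-count mask (the old behavior) corrupts it; simply accumulating the
consumed strides (the fixed behavior) keeps it valid, since a compression
session never crosses a WQE boundary.

  #include <stdio.h>

  #define WQ_SZ_M1 0xf  /* WQE-count mask: 16 WQEs (made-up example size) */

  int main(void)
  {
  	unsigned short wqe_counter = 0;  /* stride index within the MPWQE */
  	unsigned short consumed    = 20; /* strides consumed by one CQE   */

  	/* Old behavior: wrap the stride index with the WQE-count mask.
  	 * 0 + 20 masked by 0xf yields 4, so later CQEs resolve to wrong
  	 * (already consumed) strides/pages.                             */
  	unsigned short buggy = (wqe_counter + consumed) & WQ_SZ_M1;

  	/* Fixed behavior: just accumulate strides; no mask is needed
  	 * because a compression session breaks on a wqe_id change.      */
  	unsigned short fixed = wqe_counter + consumed;

  	printf("buggy stride index: %u, fixed stride index: %u\n",
  	       buggy, fixed);
  	return 0;
  }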

Tested:
 ethtool -K ethxx lro off/on
 ethtool --set-priv-flags ethxx rx_cqe_compress on
 super_netperf 16 {ipv4,ipv6} -t TCP_STREAM -m 50 -D
 verified no csum errors and no page refcount issues.

Fixes: 7219ab34 ("net/mlx5e: CQE compression")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reported-by: Tom Herbert <tom@herbertland.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 6dc4b54e
@@ -94,19 +94,18 @@ static inline void mlx5e_cqes_update_owner(struct mlx5e_cq *cq, u32 cqcc, int n)
 static inline void mlx5e_decompress_cqe(struct mlx5e_rq *rq,
 					struct mlx5e_cq *cq, u32 cqcc)
 {
-	u16 wqe_cnt_step;
-
 	cq->title.byte_cnt     = cq->mini_arr[cq->mini_arr_idx].byte_cnt;
 	cq->title.check_sum    = cq->mini_arr[cq->mini_arr_idx].checksum;
 	cq->title.op_own      &= 0xf0;
 	cq->title.op_own      |= 0x01 & (cqcc >> cq->wq.log_sz);
 	cq->title.wqe_counter  = cpu_to_be16(cq->decmprs_wqe_counter);
 
-	wqe_cnt_step =
-		rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ?
-		mpwrq_get_cqe_consumed_strides(&cq->title) : 1;
-	cq->decmprs_wqe_counter =
-		(cq->decmprs_wqe_counter + wqe_cnt_step) & rq->wq.sz_m1;
+	if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
+		cq->decmprs_wqe_counter +=
+			mpwrq_get_cqe_consumed_strides(&cq->title);
+	else
+		cq->decmprs_wqe_counter =
+			(cq->decmprs_wqe_counter + 1) & rq->wq.sz_m1;
 }
 
 static inline void mlx5e_decompress_cqe_no_hash(struct mlx5e_rq *rq,