Commit 0fe9c551 authored by Marta Rybczynska, committed by Greg Kroah-Hartman

nvme-rdma: support devices with queue size < 32

commit 0544f549 upstream.

In the case of a small NVMe-oF queue size (< 32) we may enter a deadlock:
IB send completions are only signaled once every 32 sends, so on a smaller
queue the send queue fills up before any completion is ever signaled. For
example, with a queue size of 16 the send queue is exhausted after 16
unsignaled sends, well before the fixed threshold of 32 is reached.

The error is seen as (using mlx5):
[ 2048.693355] mlx5_0:mlx5_ib_post_send:3765:(pid 7273):
[ 2048.693360] nvme nvme1: nvme_rdma_post_send failed with error code -12

This patch changes the signaling so that it depends on the queue depth:
a completion is signaled every queue_size/2 sends, with a minimum of 1.
The magic define has been removed completely.
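For illustration, here is a small standalone sketch of the new signaling
cadence (a hypothetical user-space example, not the driver code itself; the
helper name and the queue size of 16 are made up for the example): a
completion is requested every queue_size/2 sends, with a floor of 1 so that a
queue of depth 1 signals every send.

/*
 * Standalone sketch (hypothetical example, not the driver code): show how
 * often a send is signaled under the new queue-depth-based scheme.
 */
#include <stdio.h>

static int sig_count;	/* mirrors queue->sig_count in the driver */

static int queue_sig_limit(int queue_size)
{
	/* Signal every queue_size/2 sends, but at least every send. */
	int sig_limit = queue_size / 2 > 1 ? queue_size / 2 : 1;

	return (++sig_count % sig_limit) == 0;
}

int main(void)
{
	int queue_size = 16;	/* assumed small NVMe-oF queue depth */

	for (int i = 1; i <= queue_size; i++)
		if (queue_sig_limit(queue_size))
			printf("send %d: IB_SEND_SIGNALED\n", i);

	/*
	 * Prints sends 8 and 16; with the old fixed '% 32' none of these
	 * 16 sends would have been signaled and the send queue could fill.
	 */
	return 0;
}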
Signed-off-by: Marta Rybczynska <marta.rybczynska@kalray.eu>
Signed-off-by: Samuel Jones <sjones@kalray.eu>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent f88d3d6e
@@ -1029,6 +1029,19 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_wr_error(cq, wc, "SEND");
 }
 
+static inline int nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
+{
+	int sig_limit;
+
+	/*
+	 * We signal completion every queue depth/2 and also handle the
+	 * degenerated case of a device with queue_depth=1, where we
+	 * would need to signal every message.
+	 */
+	sig_limit = max(queue->queue_size / 2, 1);
+	return (++queue->sig_count % sig_limit) == 0;
+}
+
 static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 		struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
 		struct ib_send_wr *first, bool flush)
@@ -1056,9 +1069,6 @@ static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 	 * Would have been way to obvious to handle this in hardware or
 	 * at least the RDMA stack..
 	 *
-	 * This messy and racy code sniplet is copy and pasted from the iSER
-	 * initiator, and the magic '32' comes from there as well.
-	 *
 	 * Always signal the flushes. The magic request used for the flush
 	 * sequencer is not allocated in our driver's tagset and it's
 	 * triggered to be freed by blk_cleanup_queue(). So we need to
@@ -1066,7 +1076,7 @@ static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 	 * embedded in request's payload, is not freed when __ib_process_cq()
 	 * calls wr_cqe->done().
 	 */
-	if ((++queue->sig_count % 32) == 0 || flush)
+	if (nvme_rdma_queue_sig_limit(queue) || flush)
 		wr.send_flags |= IB_SEND_SIGNALED;
 
 	if (first)