Commit a8cf95f9 authored by Pavel Begunkov, committed by Jens Axboe

io_uring: fix overflow handling regression

Because the single task locking series got reordered ahead of the
timeout and completion lock changes, two hunks inadvertently ended up
using __io_fill_cqe_req() rather than io_fill_cqe_req(). This meant
that we dropped overflow handling in those two spots. Reinstate the
correct CQE filling helper.

Fixes: f66f7342 ("io_uring: skip spinlocking for ->task_complete")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent e5f30f6f
@@ -927,7 +927,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
 	io_cq_lock(ctx);
 	if (!(req->flags & REQ_F_CQE_SKIP))
-		__io_fill_cqe_req(ctx, req);
+		io_fill_cqe_req(ctx, req);
 	/*
 	 * If we're the last reference to this request, add to our locked
...
@@ -1062,7 +1062,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			continue;
 		req->cqe.flags = io_put_kbuf(req, 0);
-		__io_fill_cqe_req(req->ctx, req);
+		io_fill_cqe_req(req->ctx, req);
 	}
 	if (unlikely(!nr_events))
...