Commit b484a40d authored by Ming Lei, committed by Jens Axboe

io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
in io_wq aren't canceled by io_uring_cancel_generic(), which is also called
from do_exit(). Meanwhile, the io_wq IO code path may share resources with
the normal iopoll code path.

So if any HIPRI request is submitted via io_wq, that request may never get
the resources it needs to make progress, given that iopoll isn't possible
in io_wq_put_and_exit().

The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
with default null_blk parameters.
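
A minimal userspace sketch of that trigger scenario follows. It is not the
t/io_uring tool itself; it assumes liburing is installed, null_blk is loaded
so /dev/nullb0 exists and supports polled IO, and it uses IOSQE_ASYNC to
encourage punting the HIPRI reads to io_wq before exiting without reaping.

/*
 * Hedged reproducer sketch: IOPOLL ring + registered (fixed) file +
 * IOSQE_ASYNC to push HIPRI requests toward io_wq, then exit without
 * reaping completions, approximating the termination scenario above.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	void *buf;
	int fd, i;

	if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL))
		return 1;
	fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);
	if (fd < 0)
		return 1;
	/* register the fd so requests on it become FIXED_FILE requests */
	if (io_uring_register_files(&ring, &fd, 1))
		return 1;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;

	for (i = 0; i < 8; i++) {
		sqe = io_uring_get_sqe(&ring);
		if (!sqe)
			break;
		/* fd index 0 in the registered file table */
		io_uring_prep_read(sqe, 0, buf, 4096, 0);
		sqe->flags |= IOSQE_FIXED_FILE | IOSQE_ASYNC;
	}
	io_uring_submit(&ring);

	/*
	 * Exit without polling for completions; do_exit() then has to tear
	 * down the io_wq while HIPRI requests may still be pending there.
	 */
	return 0;
}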

Fix it by always cancelling all requests in io_wq via a new helper,
io_uring_cancel_wq(). This is reasonable because the io_wq is destroyed
immediately after its requests are cancelled.

Closes: https://lore.kernel.org/linux-block/3893581.1691785261@warthog.procyon.org.uk/
Reported-by: David Howells <dhowells@redhat.com>
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20230901134916.2415386-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent bd6fc5da
@@ -3290,6 +3290,37 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
	return percpu_counter_sum(&tctx->inflight);
}

static void io_uring_cancel_wq(struct io_uring_task *tctx)
{
	int ret;

	if (!tctx->io_wq)
		return;

	/*
	 * FIXED_FILE request isn't tracked in do_exit(), and these
	 * requests may be submitted to our io_wq as iopoll, so have to
	 * cancel them before destroying io_wq for avoiding IO hang
	 */
	do {
		struct io_tctx_node *node;
		unsigned long index;

		ret = 0;
		xa_for_each(&tctx->xa, index, node) {
			struct io_ring_ctx *ctx = node->ctx;
			struct io_task_cancel cancel = { .task = current, .all = true, };
			enum io_wq_cancel cret;

			io_iopoll_try_reap_events(ctx);
			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
					       &cancel, true);
			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
			cond_resched();
		}
	} while (ret);
}
/*
* Find any io_uring ctx that this task has registered or done IO on, and cancel
* requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
@@ -3361,6 +3392,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
		finish_wait(&tctx->wait, &wait);
	} while (1);

	io_uring_cancel_wq(tctx);
	io_uring_clean_tctx(tctx);
	if (cancel_all) {
		/*