Commit c5def4ab authored by Jens Axboe

io-wq: add support for bounded vs unbounded work

io_uring supports request types that basically have two different
lifetimes:

1) Bounded completion time. These are requests like disk reads or writes,
   which we know will finish in a finite amount of time.
2) Unbounded completion time. These are generally networked IO, where we
   have no idea how long they will take to complete. Another example is
   POLL commands.

This patch provides support for io-wq to handle these differently, so we
don't starve bounded requests by tying up workers for too long. By default
all work is bounded, unless otherwise specified in the work item.
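
For illustration (this helper is hypothetical, not part of the patch's diff): a request type with unbounded completion time would tag its work item with the new flag before queueing it, roughly like so:

	/* Sketch only: io_prep_async_poll() is a made-up name. POLL and
	 * networked IO have no bounded completion time, so the work item
	 * opts out of the bounded default here. */
	static void io_prep_async_poll(struct io_wq_work *work)
	{
		work->flags |= IO_WQ_WORK_UNBOUND;
	}

Work that never sets IO_WQ_WORK_UNBOUND keeps the bounded default.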
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 91d666ea
@@ -9,6 +9,7 @@ enum {
 	IO_WQ_WORK_HASHED = 4,
 	IO_WQ_WORK_NEEDS_USER = 8,
 	IO_WQ_WORK_NEEDS_FILES = 16,
+	IO_WQ_WORK_UNBOUND = 32,
 
 	IO_WQ_HASH_SHIFT = 24, /* upper 8 bits are used for hash key */
 };
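
The flag gives io-wq a cheap classification test at enqueue time; a minimal sketch of such a check (the helper name is an assumption, and the per-class worker accounting inside io-wq.c is not shown in this excerpt):

	/* Sketch: anything not explicitly marked unbounded is treated
	 * as bounded by default. */
	static inline bool io_wq_work_is_bound(struct io_wq_work *work)
	{
		return !(work->flags & IO_WQ_WORK_UNBOUND);
	}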
@@ -33,7 +34,8 @@ struct io_wq_work {
 	(work)->files = NULL; \
 } while (0) \
-struct io_wq *io_wq_create(unsigned concurrency, struct mm_struct *mm);
+struct io_wq *io_wq_create(unsigned bounded, struct mm_struct *mm,
+				struct user_struct *user);
 void io_wq_destroy(struct io_wq *wq);
 void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
...
@@ -3745,7 +3745,7 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
 	/* Do QD, or 4 * CPUS, whatever is smallest */
 	concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
-	ctx->io_wq = io_wq_create(concurrency, ctx->sqo_mm);
+	ctx->io_wq = io_wq_create(concurrency, ctx->sqo_mm, NULL);
 	if (IS_ERR(ctx->io_wq)) {
 		ret = PTR_ERR(ctx->io_wq);
 		ctx->io_wq = NULL;
...
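
For scale (illustrative numbers, not from the patch): with sq_entries = 128 on an 8-CPU machine, the cap above comes out to min(128, 4 * 8) = 32 bounded workers.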