Commit 10c87333 authored by Jens Axboe

io_uring: allow re-poll if we made progress

We currently check REQ_F_POLLED before arming async poll for a
notification to retry. If it's set, then we don't allow poll and will
punt to io-wq instead. This is done to prevent a situation where a buggy
driver will repeatedly return that there's space/data available yet we
get -EAGAIN.

However, if we already transferred data, then it should be safe to rely
on poll again. Gate the check on whether or not REQ_F_PARTIAL_IO is
also set.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 4c3c0943
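
The heart of the change is the flag gate in the first hunk: re-poll is refused only when REQ_F_POLLED is set and REQ_F_PARTIAL_IO is not. A minimal standalone sketch of that predicate follows (the REQ_F_* bit values here are illustrative placeholders, not the kernel's actual assignments):

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative placeholder bits; the real REQ_F_* values are defined in io_uring. */
	#define REQ_F_POLLED     (1U << 0)
	#define REQ_F_PARTIAL_IO (1U << 1)

	/*
	 * Mirrors the new gate: refuse re-poll only if we polled before AND
	 * transferred nothing. Any partial progress makes poll trustworthy again.
	 */
	static bool refuse_repoll(unsigned int flags)
	{
		return (flags & (REQ_F_POLLED | REQ_F_PARTIAL_IO)) == REQ_F_POLLED;
	}

	int main(void)
	{
		printf("%d\n", refuse_repoll(0));                               /* 0: first attempt, poll allowed */
		printf("%d\n", refuse_repoll(REQ_F_POLLED));                    /* 1: polled before, no progress */
		printf("%d\n", refuse_repoll(REQ_F_POLLED | REQ_F_PARTIAL_IO)); /* 0: polled before, made progress */
		return 0;
	}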
@@ -6266,7 +6266,9 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
 	if (!def->pollin && !def->pollout)
 		return IO_APOLL_ABORTED;
-	if (!file_can_poll(req->file) || (req->flags & REQ_F_POLLED))
+	if (!file_can_poll(req->file))
+		return IO_APOLL_ABORTED;
+	if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
 		return IO_APOLL_ABORTED;
 	if (def->pollin) {
@@ -6281,8 +6283,10 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
 	}
 	if (def->poll_exclusive)
 		mask |= EPOLLEXCLUSIVE;
-	if (!(issue_flags & IO_URING_F_UNLOCKED) &&
-	    !list_empty(&ctx->apoll_cache)) {
+	if (req->flags & REQ_F_POLLED) {
+		apoll = req->apoll;
+	} else if (!(issue_flags & IO_URING_F_UNLOCKED) &&
+		   !list_empty(&ctx->apoll_cache)) {
 		apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
 					 poll.wait.entry);
 		list_del_init(&apoll->poll.wait.entry);
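
The second hunk also changes where the async_poll entry comes from: a request that already has REQ_F_POLLED set owns an apoll from its first arming, so it is reused directly rather than taken from the per-ring apoll_cache or freshly allocated. A simplified sketch of the resulting selection order (the final kmalloc fallback is an assumption, since it is outside the shown hunk; its failure handling is omitted):

	/* Sketch of the apoll selection order after the change. */
	if (req->flags & REQ_F_POLLED) {
		/* Re-poll: the entry from the first arming is still attached. */
		apoll = req->apoll;
	} else if (!(issue_flags & IO_URING_F_UNLOCKED) &&
		   !list_empty(&ctx->apoll_cache)) {
		/* Ring lock held: recycle a cached entry. */
		apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
					 poll.wait.entry);
		list_del_init(&apoll->poll.wait.entry);
	} else {
		/* Assumed fallback: allocate a fresh entry. */
		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
	}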