1. 26 Jan, 2020 2 commits
  2. 21 Jan, 2020 1 commit
  3. 17 Jan, 2020 1 commit
  4. 16 Jan, 2020 2 commits
  5. 15 Jan, 2020 2 commits
  6. 14 Jan, 2020 1 commit
  7. 07 Jan, 2020 1 commit
    • io_uring: remove punt of short reads to async context · eacc6dfa
      Jens Axboe authored
      We currently punt any short read on a regular file to async context,
      but this fails if the short read is due to running into EOF. This is
      especially problematic since we now only do a single prep for commands
      and don't reset kiocb->ki_pos. This can result in a 4k read on
      a 1k file returning zero, as we detect the short read and then retry
      from async context. At the time of retry, the position is now 1k, and
      we end up reading nothing, and hence return 0.
      
      Instead of trying to patch around the fact that short reads can be
      legitimate and won't succeed in case of retry, remove the logic to punt
      a short read to async context. Simply return it.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
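      As a user-space illustration of why the short read must simply be
      returned: a hedged sketch assuming liburing with io_uring_prep_read();
      "one-kb-file" is a hypothetical 1024-byte file.

          /* Expect the cqe to carry res == 1024, a legitimate short read,
           * rather than 0 from an internal async retry at the advanced
           * ki_pos. Assumes liburing is installed; the file name is made up. */
          #include <liburing.h>
          #include <fcntl.h>
          #include <stdio.h>

          int main(void)
          {
                  struct io_uring ring;
                  struct io_uring_sqe *sqe;
                  struct io_uring_cqe *cqe;
                  static char buf[4096];
                  int fd = open("one-kb-file", O_RDONLY);

                  if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
                          return 1;

                  sqe = io_uring_get_sqe(&ring);
                  io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
                  io_uring_submit(&ring);

                  io_uring_wait_cqe(&ring, &cqe);
                  printf("res=%d (1024 expected, not 0)\n", cqe->res);
                  io_uring_cqe_seen(&ring, cqe);
                  io_uring_queue_exit(&ring);
                  return 0;
          }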
  8. 24 Dec, 2019 1 commit
  9. 23 Dec, 2019 1 commit
  10. 20 Dec, 2019 7 commits
    • io_uring: pass in 'sqe' to the prep handlers · 3529d8c2
      Jens Axboe authored
      This moves the prep handlers outside of the opcode handlers, and allows
      us to pass in the sqe directly. If the sqe is non-NULL, it means that
      the request should be prepared for the first time.
      
      With the opcode handlers not having access to the sqe at all, we are
      guaranteed that the prep handler has setup the request fully by the
      time we get there. As before, for opcodes that need to copy in more
      data than the io_kiocb allows for, the io_async_ctx holds that info. If
      a prep handler is invoked with req->io set, it must use that to retain
      information for later.
      
      Finally, we can remove io_kiocb->sqe as well.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
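      A rough sketch of the resulting split, using fsync as the example; the
      function bodies are illustrative only, not the exact fs/io_uring.c code:

          /* prep: runs with the sqe in hand; a non-NULL sqe means the
           * request is being prepared for the first time. */
          static int io_fsync_prep(struct io_kiocb *req,
                                   const struct io_uring_sqe *sqe)
          {
                  req->sync.flags = READ_ONCE(sqe->fsync_flags);
                  return 0;
          }

          /* handler: never sees the sqe; it runs purely from io_kiocb state
           * (and, for opcodes needing more room, from req->io). */
          static int io_fsync(struct io_kiocb *req, bool force_nonblock)
          {
                  return vfs_fsync(req->file,
                                   req->sync.flags & IORING_FSYNC_DATASYNC);
          }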
    • io_uring: standardize the prep methods · 06b76d44
      Jens Axboe authored
      We currently have a mix of use cases. Most of the newer ones are pretty
      uniform, but we have some older ones that use different calling
      conventions. This is confusing.
      
      For the opcodes that currently rely on the req->io->sqe copy saving
      them from reuse, add a request type struct in the io_kiocb command
      union to store the data they need.
      
      Prepare for all opcodes having a standard prep method, so we can call
      it in a uniform fashion and outside of the opcode handler. This is in
      preparation for passing in the 'sqe' pointer, rather than storing it
      in the io_kiocb. Once we have uniform prep handlers, we can leave all
      the prep work to that part, and not even pass in the sqe to the opcode
      handler. This ensures that we don't reuse sqe data inadvertently.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
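      The per-command union being referred to looks roughly like this
      (abridged and approximate, not the full io_kiocb definition):

          struct io_kiocb {
                  /*
                   * Per-command state; each opcode's prep handler fills in
                   * its own member from the sqe.
                   */
                  union {
                          struct file             *file;
                          struct io_rw            rw;
                          struct io_poll_iocb     poll;
                          struct io_timeout       timeout;
                          struct io_connect       connect;
                          struct io_sr_msg        sr_msg;
                  };

                  /* larger per-request data copied at prep time */
                  struct io_async_ctx     *io;

                  /* ... remaining fields omitted ... */
          };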
    • io_uring: read 'count' for IORING_OP_TIMEOUT in prep handler · 26a61679
      Jens Axboe authored
      Add the count field to struct io_timeout, and ensure the prep handler
      has read it. A timeout always needs an async context, so set it up
      in the prep handler if we don't already have one.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
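      Sketch of the shape this takes (field and helper names approximate):

          struct io_timeout {
                  struct file     *file;
                  u64             addr;
                  int             flags;
                  unsigned        count;  /* completions to wait for */
          };

          static int io_timeout_prep(struct io_kiocb *req,
                                     const struct io_uring_sqe *sqe)
          {
                  req->timeout.count = READ_ONCE(sqe->off);

                  /* timeouts always run with an async context */
                  if (!req->io && io_alloc_async_ctx(req))
                          return -ENOMEM;
                  return 0;
          }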
    • io_uring: move all prep state for IORING_OP_{SEND,RECV}MSG to prep handler · e47293fd
      Jens Axboe authored
      Add struct io_sr_msg in our io_kiocb per-command union, and ensure that
      the send/recvmsg prep handlers have grabbed what they need from the SQE
      by the time prep is done.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
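      Approximately (sketch only; fields and prep body are approximate):

          struct io_sr_msg {
                  struct file                     *file;
                  struct user_msghdr __user       *msg;
                  int                             msg_flags;
          };

          static int io_sendmsg_prep(struct io_kiocb *req,
                                     const struct io_uring_sqe *sqe)
          {
                  struct io_sr_msg *sr = &req->sr_msg;

                  /* grab everything the handler will need from the sqe */
                  sr->msg_flags = READ_ONCE(sqe->msg_flags);
                  sr->msg = u64_to_user_ptr(READ_ONCE(sqe->addr));
                  return 0;
          }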
    • io_uring: move all prep state for IORING_OP_CONNECT to prep handler · 3fbb51c1
      Jens Axboe authored
      Add struct io_connect in our io_kiocb per-command union, and ensure
      that io_connect_prep() has grabbed what it needs from the SQE.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
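      In the same vein (sketch; fields and prep body are approximate):

          struct io_connect {
                  struct file             *file;
                  struct sockaddr __user  *addr;
                  int                     addr_len;
          };

          static int io_connect_prep(struct io_kiocb *req,
                                     const struct io_uring_sqe *sqe)
          {
                  req->connect.addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
                  req->connect.addr_len = READ_ONCE(sqe->addr2);
                  return 0;
          }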
    • io_uring: add and use struct io_rw for read/writes · 9adbd45d
      Jens Axboe authored
      Put the kiocb in struct io_rw, and add the addr/len for the request as
      well. Use the kiocb->private field for the buffer index for fixed reads
      and writes.
      
      Any use of kiocb->ki_filp is flipped to req->file. It's the same thing,
      and less confusing.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
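      Roughly (sketch):

          struct io_rw {
                  /* NOTE: kiocb has struct file * as its first member */
                  struct kiocb    kiocb;
                  u64             addr;   /* user buffer or iovec address */
                  u64             len;
          };

          /* in the read/write prep: the buffer index for fixed reads and
           * writes rides in kiocb->private */
          req->rw.kiocb.private = (void *)(unsigned long)
                                          READ_ONCE(sqe->buf_index);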
    • io_uring: use u64_to_user_ptr() consistently · d55e5f5b
      Jens Axboe authored
      We use it in some spots, but not consistently. Convert the rest over;
      it makes the code easier to read as well.
      
      No functional changes in this patch.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
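      For reference, u64_to_user_ptr() is the stock helper from
      include/linux/kernel.h, which type-checks the u64 before casting it to
      a user pointer; it reads roughly:

          #define u64_to_user_ptr(x) (            \
          {                                       \
                  typecheck(u64, (x));            \
                  (void __user *)(uintptr_t)(x);  \
          }                                       \
          )

          /* typical use against an sqe field */
          void __user *buf = u64_to_user_ptr(READ_ONCE(sqe->addr));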
  11. 18 Dec, 2019 12 commits
  12. 16 Dec, 2019 1 commit
    • io_uring: fix sporadic -EFAULT from IORING_OP_RECVMSG · 0b416c3e
      Jens Axboe authored
      If we have to punt the recvmsg to async context, we copy all the
      context.  But since the iovec used can be either on-stack (if small) or
      dynamically allocated, if it's on-stack, then we need to ensure we reset
      the iov pointer. If we don't, then we're reusing old stack data, and
      that can lead to -EFAULTs if things get overwritten.
      
      Ensure we retain the right pointers for the iov, and free it as well if
      we end up having to go beyond UIO_FASTIOV number of vectors.
      
      Fixes: 03b1230c ("io_uring: ensure async punted sendmsg/recvmsg requests copy data")
      Reported-by: 李通洲 <carter.li@eoitek.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
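      The hazard and the fix, roughly (struct layout is approximate and
      io_copy_msg_async() is a made-up name for illustration):

          struct io_async_msghdr {
                  struct iovec    fast_iov[UIO_FASTIOV]; /* inline copy */
                  struct iovec    *iov;                  /* fast_iov or kmalloc'ed */
                  struct msghdr   msg;
          };

          static void io_copy_msg_async(struct io_async_msghdr *dst,
                                        const struct io_async_msghdr *src)
          {
                  memcpy(dst, src, sizeof(*dst));
                  /*
                   * If the iovec lived in the small inline array, repoint it
                   * at the copy's own fast_iov rather than the old stack
                   * data; otherwise it was allocated and must also be freed
                   * once the request completes.
                   */
                  if (src->iov == src->fast_iov)
                          dst->iov = dst->fast_iov;
          }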
  13. 15 Dec, 2019 1 commit
  14. 13 Dec, 2019 7 commits