  17 Oct, 2020 (17 commits)
    • mm: use limited read-ahead to satisfy read · 324bcf54
      Jens Axboe authored
      For the case where read-ahead is disabled on the file, or if the cgroup
      is congested, ensure that we can at least do 1 page of read-ahead to
      make progress on the read in an async fashion. This could potentially be
      larger, but it's not needed in terms of functionality, so let's err on
      the side of caution as larger counts of pages may run into reclaim
      issues (particularly if we're congested).
      
      This makes sure we're not hitting the potentially sync ->readpage() path
      for IO that is marked IOCB_WAITQ, which could cause us to block. It also
      means we'll use the same path for IO, regardless of whether or not
      read-ahead happens to be disabled on the lower level device.
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reported-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Reported-by: Hao_Xu <haoxu@linux.alibaba.com>
      [axboe: updated for new ractl API]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
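      A minimal sketch of the resulting logic, assuming the 5.10-era
      page_cache_sync_readahead() entry point (identifiers approximate, not
      the literal patch):

          void page_cache_sync_readahead(struct address_space *mapping,
                                         struct file_ra_state *ra,
                                         struct file *filp, pgoff_t index,
                                         unsigned long req_count)
          {
                  bool do_forced_ra = filp && (filp->f_mode & FMODE_RANDOM);

                  /*
                   * Even if read-ahead is disabled, issue this request as
                   * read-ahead, since we need it to satisfy the read. The
                   * forced read-ahead limits the read to the requested
                   * range, which we cap at one page to stay safe for
                   * reclaim.
                   */
                  if (!ra->ra_pages || blk_cgroup_congested()) {
                          if (!filp)
                                  return;
                          req_count = 1;
                          do_forced_ra = true;
                  }

                  if (do_forced_ra) {
                          force_page_cache_readahead(mapping, filp, index,
                                                     req_count);
                          return;
                  }

                  ondemand_readahead(mapping, ra, filp, false, index,
                                     req_count);
          }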
    • mm: mark async iocb read as NOWAIT once some data has been copied · 13bd6914
      Jens Axboe authored
      Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
      we should no longer attempt to async lock a new page. Instead make sure
      we return the copied amount, and let the caller retry, instead of
      returning -EIOCBQUEUED for a new page.
      
      This should only be possible with read-ahead disabled on the underlying
      device, and multiple threads racing on the same file. We haven't been
      able to reproduce it on anything else.
      
      Cc: stable@vger.kernel.org # v5.9
      Fixes: 1a0a7853 ("mm: support async buffered reads in generic_file_buffered_read()")
      Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
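      A sketch of the intended behaviour inside generic_file_buffered_read(),
      paraphrased rather than the literal diff (label names approximate):

          if (!PageUptodate(page)) {
                  /*
                   * IOCB_WAITQ read and we already copied some data: don't
                   * async-lock another page. Drop it and return the partial
                   * count, letting the caller retry the remainder, instead
                   * of returning -EIOCBQUEUED for a new page.
                   */
                  if (iocb->ki_flags & IOCB_WAITQ) {
                          if (written) {
                                  put_page(page);
                                  goto out;       /* 'out' returns written */
                          }
                          error = wait_on_page_locked_async(page,
                                                            iocb->ki_waitq);
                  } else {
                          /* ... the existing synchronous lock path ... */
                  }
          }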
    • io_uring: fix double poll mask init · 58852d4d
      Pavel Begunkov authored
      __io_queue_proc() is used by both poll requests and apoll. Don't use
      req->poll.events to copy the poll mask, because for apoll it aliases
      with private data of the request.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
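      A short sketch of the fix in __io_queue_proc(), assuming the 5.9-era
      double-poll setup (surrounding code abridged):

          if (unlikely(poll->head)) {
                  struct io_poll_iocb *poll_one = poll;

                  poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
                  /* allocation-failure handling omitted in this sketch */

                  /*
                   * Copy the mask from the io_poll_iocb being chained off,
                   * not from req->poll: for apoll, req->poll aliases the
                   * request's per-opcode private data.
                   */
                  io_init_poll_iocb(poll, poll_one->events, wake_func);
          }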
    • io-wq: inherit audit loginuid and sessionid · 4ea33a97
      Jens Axboe authored
      Make sure the async io-wq workers inherit the loginuid and sessionid from
      the original task, and restore them to unset once we're done with the
      async work item.
      
      While at it, disable the ability for kernel threads to write to their own
      loginuid.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
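      A hedged sketch of the inherit/restore pattern (constants from
      include/uapi/linux/audit.h; placement within io-wq paraphrased):

          /* before running the work item: adopt the submitter's audit
           * identity */
          #ifdef CONFIG_AUDIT
                  current->loginuid = work->identity->loginuid;
                  current->sessionid = work->identity->sessionid;
          #endif

          /* ... run the async work item ... */

          /* afterwards: reset the worker thread back to "unset" */
          #ifdef CONFIG_AUDIT
                  current->loginuid = KUIDT_INIT(AUDIT_UID_UNSET);
                  current->sessionid = AUDIT_SID_UNSET;
          #endif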
    • io_uring: use percpu counters to track inflight requests · d8a6df10
      Jens Axboe authored
      Even though we place req_issued and req_complete in separate
      cachelines, there's considerable overhead in doing the atomics,
      particularly on the completion side.
      
      Get rid of the two counters and just use a percpu_counter for
      this. That's what it was made for, after all. This considerably
      reduces the overhead in __io_free_req().
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
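      The percpu_counter pattern this switches to, as a minimal sketch (the
      counter lives in the per-task io_uring context; waiter name here is
      illustrative):

          #include <linux/percpu_counter.h>

          struct percpu_counter inflight;

          if (percpu_counter_init(&inflight, 0, GFP_KERNEL))
                  return -ENOMEM;

          percpu_counter_inc(&inflight);  /* issue: cheap per-CPU add */
          percpu_counter_dec(&inflight);  /* completion: no shared atomic */

          /* only slow paths (e.g. cancel/teardown) pay for a precise sum */
          if (!percpu_counter_sum(&inflight))
                  wake_up(&tctx_wait);    /* illustrative waiter */

          percpu_counter_destroy(&inflight);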
    • io_uring: assign new io_identity for task if members have changed · 500a373d
      Jens Axboe authored
      This avoids doing a copy for each new async IO if some parts of the
      io_identity have changed. We avoid reference counting for the normal
      fast path of nothing ever changing.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
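      A sketch of the fast path this preserves, roughly the shape of the
      5.10-era io_req_init_async() (abridged):

          /*
           * Common case: the task still uses its embedded identity, so no
           * reference counting at all. Only a replaced (COW'd) identity is
           * refcounted per request.
           */
          req->work.identity = tctx->identity;
          if (tctx->identity != &tctx->__identity)
                  refcount_inc(&req->work.identity->count);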
    • io_uring: store io_identity in io_uring_task · 5c3462cf
      Jens Axboe authored
      This is, by definition, a per-task structure. So store it in the
      task context instead of carrying it in each io_kiocb. We're being a
      bit inefficient if members have changed, as that requires an alloc and
      copy of a new io_identity struct. The next patch will fix that up.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
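      Roughly the shape this gives struct io_uring_task (an abridged sketch;
      other members omitted):

          struct io_uring_task {
                  /* ... submission-side state ... */
                  struct io_identity   __identity; /* embedded, common case */
                  struct io_identity   *identity;  /* == &__identity until
                                                      members change */
          };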
    • io_uring: COW io_identity on mismatch · 1e6fa521
      Jens Axboe authored
      If the io_identity doesn't completely match the task, then create a
      copy of it and use that. The existing copy remains valid until the last
      user of it has gone away.
      
      This also changes the personality lookup to be indexed by io_identity,
      instead of creds directly.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
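      A simplified sketch of the copy-on-write step; per-member reference
      handling (creds, mm, etc.) is omitted and the helper name is
      approximate:

          static bool io_identity_cow(struct io_kiocb *req)
          {
                  struct io_uring_task *tctx = current->io_uring;
                  struct io_identity *id;

                  id = kmemdup(req->work.identity, sizeof(*id), GFP_KERNEL);
                  if (unlikely(!id))
                          return false;

                  refcount_set(&id->count, 1);    /* tctx's reference */
                  refcount_inc(&id->count);       /* plus this request's */

                  /* drop the old references; the old copy stays alive until
                   * its last user is done */
                  io_put_identity(tctx, req);     /* approximate helper */

                  req->work.identity = id;
                  tctx->identity = id;
                  return true;
          }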
    • io_uring: move io identity items into separate struct · 98447d65
      Jens Axboe authored
      io-wq contains a pointer to the identity, which we hold in io_kiocb
      for now. This is in preparation for moving it outside io_kiocb. The
      only exception is struct files_struct, which needs different rules to
      avoid a circular dependency.
      
      No functional changes in this patch.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
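      An abridged sketch of the grouping (fields as in the 5.10 sources;
      files_struct handled separately, per the note above):

          struct io_identity {
                  const struct cred       *creds;
                  struct mm_struct        *mm;
                  struct fs_struct        *fs;
                  struct nsproxy          *nsproxy;
                  unsigned long           fsize;
          #ifdef CONFIG_AUDIT
                  kuid_t                  loginuid;
                  unsigned int            sessionid;
          #endif
                  refcount_t              count;
          };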
    • io_uring: rely solely on work flags to determine personality. · dfead8a8
      Jens Axboe authored
      We rely solely on the io-wq work flags now, so use those for proper
      checking and clearing/dropping of the various identity items.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
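      A sketch of the flag-driven teardown in io_req_clean_work() (shape
      approximate; two of the identity items shown):

          if (req->work.flags & IO_WQ_WORK_MM) {
                  mmdrop(req->work.identity->mm);
                  req->work.flags &= ~IO_WQ_WORK_MM;
          }
          if (req->work.flags & IO_WQ_WORK_CREDS) {
                  put_cred(req->work.identity->creds);
                  req->work.flags &= ~IO_WQ_WORK_CREDS;
          }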
    • io_uring: pass required context in as flags · 0f203765
      Jens Axboe authored
      We have a number of bits that decide what context to inherit. Set up
      io-wq flags for these instead. This is in preparation for always having
      the various members set, but not always needing them for all requests.
      
      No intended functional changes in this patch.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
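      The work flags in question, roughly as they end up in io-wq.h (a
      sketch, not the verbatim header), plus an example of setting one from
      the per-opcode table:

          enum {
                  IO_WQ_WORK_FILES        = 32,
                  IO_WQ_WORK_FS           = 64,
                  IO_WQ_WORK_MM           = 128,
                  IO_WQ_WORK_CREDS        = 256,
                  IO_WQ_WORK_BLKCG        = 512,
                  IO_WQ_WORK_FSIZE        = 1024,
          };

          /* at prep time: record which context this request will need */
          if (io_op_defs[req->opcode].work_flags & IO_WQ_WORK_MM)
                  req->work.flags |= IO_WQ_WORK_MM;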
    • io-wq: assign NUMA node locality if appropriate · a8b595b2
      Jens Axboe authored
      There was an assumption that kthread_create_on_node() would properly set
      NUMA affinities in terms of CPUs allowed, but it doesn't. Make sure we
      do this when creating an io-wq context on NUMA.
      
      Cc: stable@vger.kernel.org
      Reported-by: Stefan Metzmacher <metze@samba.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
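      A sketch of the fix in create_io_worker() (the bind call matches the
      patch; surrounding details abridged):

          worker->task = kthread_create_on_node(io_wqe_worker, worker,
                                                wqe->node, "io_wqe_worker");
          if (IS_ERR(worker->task))
                  goto fail;      /* error handling abridged */

          /*
           * kthread_create_on_node() only places the task's memory on the
           * node; it does not restrict where the thread may run. Bind the
           * allowed CPU mask to the node explicitly:
           */
          kthread_bind_mask(worker->task, cpumask_of_node(wqe->node));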
    • io_uring: fix error path cleanup in io_sqe_files_register() · 55cbc256
      Jens Axboe authored
      syzbot reports the following crash:
      
      general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
      KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
      CPU: 1 PID: 8927 Comm: syz-executor.3 Not tainted 5.9.0-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      RIP: 0010:io_file_from_index fs/io_uring.c:5963 [inline]
      RIP: 0010:io_sqe_files_register fs/io_uring.c:7369 [inline]
      RIP: 0010:__io_uring_register fs/io_uring.c:9463 [inline]
      RIP: 0010:__do_sys_io_uring_register+0x2fd2/0x3ee0 fs/io_uring.c:9553
      Code: ec 03 49 c1 ee 03 49 01 ec 49 01 ee e8 57 61 9c ff 41 80 3c 24 00 0f 85 9b 09 00 00 4d 8b af b8 01 00 00 4c 89 e8 48 c1 e8 03 <80> 3c 28 00 0f 85 76 09 00 00 49 8b 55 00 89 d8 c1 f8 09 48 98 4c
      RSP: 0018:ffffc90009137d68 EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc9000ef2a000
      RDX: 0000000000040000 RSI: ffffffff81d81dd9 RDI: 0000000000000005
      RBP: dffffc0000000000 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: ffffed1012882a37
      R13: 0000000000000000 R14: ffffed1012882a38 R15: ffff888094415000
      FS:  00007f4266f3c700(0000) GS:ffff8880ae500000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 000000000118c000 CR3: 000000008e57d000 CR4: 00000000001506e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x45de59
      Code: 0d b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b3 fb ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007f4266f3bc78 EFLAGS: 00000246 ORIG_RAX: 00000000000001ab
      RAX: ffffffffffffffda RBX: 00000000000083c0 RCX: 000000000045de59
      RDX: 0000000020000280 RSI: 0000000000000002 RDI: 0000000000000005
      RBP: 000000000118bf68 R08: 0000000000000000 R09: 0000000000000000
      R10: 40000000000000a1 R11: 0000000000000246 R12: 000000000118bf2c
      R13: 00007fff2fa4f12f R14: 00007f4266f3c9c0 R15: 000000000118bf2c
      Modules linked in:
      ---[ end trace 2a40a195e2d5e6e6 ]---
      RIP: 0010:io_file_from_index fs/io_uring.c:5963 [inline]
      RIP: 0010:io_sqe_files_register fs/io_uring.c:7369 [inline]
      RIP: 0010:__io_uring_register fs/io_uring.c:9463 [inline]
      RIP: 0010:__do_sys_io_uring_register+0x2fd2/0x3ee0 fs/io_uring.c:9553
      Code: ec 03 49 c1 ee 03 49 01 ec 49 01 ee e8 57 61 9c ff 41 80 3c 24 00 0f 85 9b 09 00 00 4d 8b af b8 01 00 00 4c 89 e8 48 c1 e8 03 <80> 3c 28 00 0f 85 76 09 00 00 49 8b 55 00 89 d8 c1 f8 09 48 98 4c
      RSP: 0018:ffffc90009137d68 EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc9000ef2a000
      RDX: 0000000000040000 RSI: ffffffff81d81dd9 RDI: 0000000000000005
      RBP: dffffc0000000000 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: ffffed1012882a37
      R13: 0000000000000000 R14: ffffed1012882a38 R15: ffff888094415000
      FS:  00007f4266f3c700(0000) GS:ffff8880ae400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 000000000074a918 CR3: 000000008e57d000 CR4: 00000000001506f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      
      which is a copy of the fget failure condition jumping to cleanup, but
      the cleanup requires ctx->file_data to be assigned. Assign it at setup
      time, and ensure that we clear it again on the error path exit.
      
      Fixes: 5398ae69 ("io_uring: clean file_data access in files_register")
      Reported-by: syzbot+f4ebcc98223dafd8991e@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
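      A sketch of the ordering fix in io_sqe_files_register() (abridged;
      table setup and the fget() loop omitted):

          file_data = kzalloc(sizeof(*file_data), GFP_KERNEL);
          if (!file_data)
                  return -ENOMEM;
          ctx->file_data = file_data;   /* assign before anything can fail */

          /* ... allocate tables, fget() every fd; on failure jump to
           * out_free ... */

          return 0;
      out_free:
          /* cleanup may walk ctx->file_data; once done, clear it so the
           * ctx is left consistent */
          kfree(file_data);
          ctx->file_data = NULL;
          return ret;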
    • Revert "io_uring: mark io_uring_fops/io_op_defs as __read_mostly" · 0918682b
      Jens Axboe authored
      This reverts commit 738277ad.
      
      This change didn't make a lot of sense, and as Linus reports, it actually
      fails on clang:
      
         /tmp/io_uring-dd40c4.s:26476: Warning: ignoring changed section
         attributes for .data..read_mostly
      
      The arrays are already marked const, so by definition they are not
      just read-mostly, they are read-only.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
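      For reference, the conflicting combination, sketched:

          /* __read_mostly expands to __section(".data..read_mostly") per
           * include/linux/cache.h, while a const object already belongs in
           * .rodata; the two section attributes conflict, which clang's
           * integrated assembler warns about. */
          static const struct file_operations io_uring_fops __read_mostly = {
                  .release = io_uring_release,
                  /* ... */
          };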
    • io_uring: fix REQ_F_COMP_LOCKED by killing it · 216578e5
      Pavel Begunkov authored
      REQ_F_COMP_LOCKED is used and implemented in a buggy way. The problem
      is that the flag is set before io_put_req() but not cleared after it,
      and if that wasn't the final reference, the request will be freed with
      the flag set from some other context, which may not hold a spinlock.
      That means possible races with removing linked timeouts and
      unsynchronised completion (e.g. access to the CQ).
      
      Instead of fixing REQ_F_COMP_LOCKED, kill the flag and use
      task_work_add() to move such requests to a fresh context to free from
      it, as was done with __io_free_req_finish().
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
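      A hedged sketch of the task_work-based deferred put (the modern
      task_work_add() signature is shown; the 5.9-era call differed, and the
      fallback path here is simplified):

          static void io_put_req_deferred_cb(struct callback_head *cb)
          {
                  struct io_kiocb *req = container_of(cb, struct io_kiocb,
                                                      task_work);

                  /* runs in the task's own context: no spinlock held, so
                   * the free is properly synchronised */
                  io_free_req(req);
          }

          static void io_put_req_deferred(struct io_kiocb *req, int refs)
          {
                  if (!refcount_sub_and_test(refs, &req->refs))
                          return;

                  init_task_work(&req->task_work, io_put_req_deferred_cb);
                  if (task_work_add(req->task, &req->task_work, TWA_SIGNAL))
                          io_free_req(req);       /* simplified fallback */
          }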
    • io_uring: dig out COMP_LOCK from deep call chain · 4edf20f9
      Pavel Begunkov authored
      io_req_clean_work() checks REQ_F_COMP_LOCKED to pass this information
      two layers up. Move the check up into __io_free_req(), so at least it
      doesn't look so ugly and it facilitates further changes.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: don't put a poll req under spinlock · 6a0af224
      Pavel Begunkov authored
      Move the io_put_req() in io_poll_task_handler() out from under the
      spinlock. This eliminates the need to use REQ_F_COMP_LOCKED, at the
      expense of potentially having to grab the lock again. That's still a
      better trade-off than relying on the locked flag.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
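      A sketch of the reordering in io_poll_task_handler() (paraphrased):

          spin_lock_irq(&ctx->completion_lock);
          hash_del(&req->hash_node);
          io_poll_complete(req, req->result, 0);
          spin_unlock_irq(&ctx->completion_lock);

          io_cqring_ev_posted(ctx);
          io_put_req(req);        /* now outside the lock: may grab it again
                                     internally, but never frees under it */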