1. 08 Mar, 2021 1 commit
    •
      thunderbolt: Initialize HopID IDAs in tb_switch_alloc() · 781e14ea
      Mika Westerberg authored
      If there is a failure before tb_switch_add() is called, the switch
      object is released by tb_switch_release(), but at that point the HopID
      IDAs have not yet been initialized, so we see a splat like this:
      
      BUG: spinlock bad magic on CPU#2, kworker/u8:5/115
      ...
      Workqueue: thunderbolt0 tb_handle_hotplug
      Call Trace:
       dump_stack+0x97/0xdc
       ? spin_bug+0x9a/0xa7
       do_raw_spin_lock+0x68/0x98
       _raw_spin_lock_irqsave+0x3f/0x5d
       ida_destroy+0x4f/0x127
       tb_switch_release+0x6d/0xfd
       device_release+0x2c/0x7d
       kobject_put+0x9b/0xbc
       tb_handle_hotplug+0x278/0x452
       process_one_work+0x1db/0x396
       worker_thread+0x216/0x375
       kthread+0x14d/0x155
       ? pr_cont_work+0x58/0x58
       ? kthread_blkcg+0x2e/0x2e
       ret_from_fork+0x1f/0x40
      
      Fix this by always initializing HopID IDAs in tb_switch_alloc().
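
      The pattern of the fix can be illustrated with a hedged userspace C sketch (this is not the actual driver code; the struct and function names here are made up for illustration, with a pthread mutex standing in for the IDAs). The rule it demonstrates: anything the release path unconditionally destroys must be initialized in the alloc path, so that release is safe even when a failure happens before the "add" step.

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <pthread.h>

      /* Hypothetical analogue of the driver pattern: the resource (here a
       * mutex, standing in for the HopID IDAs) is initialized in the alloc
       * path, because the release path unconditionally destroys it. */
      struct sw {
          pthread_mutex_t lock; /* stands in for the HopID IDAs */
      };

      static struct sw *sw_alloc(void)
      {
          struct sw *s = calloc(1, sizeof(*s));

          if (!s)
              return NULL;
          /* Initialize here, so release is safe even if later setup fails. */
          pthread_mutex_init(&s->lock, NULL);
          return s;
      }

      static void sw_release(struct sw *s)
      {
          /* Safe: the mutex was initialized in sw_alloc(). */
          pthread_mutex_destroy(&s->lock);
          free(s);
      }

      int main(void)
      {
          struct sw *s = sw_alloc();

          if (!s)
              return 1;
          /* Simulate a failure before the "add" step: release immediately. */
          sw_release(s);
          printf("release after early failure: ok\n");
          return 0;
      }
      ```

      Had the mutex been initialized only in a later "add" step, the destroy call in the release path would operate on uninitialized memory, which is exactly the class of bug the splat above reports.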
      
      Fixes: 0b2863ac ("thunderbolt: Add functions for allocating and releasing HopIDs")
      Cc: stable@vger.kernel.org
      Reported-by: Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  2. 06 Mar, 2021 4 commits
  3. 05 Mar, 2021 33 commits
  4. 04 Mar, 2021 2 commits
    •
      kernel: provide create_io_thread() helper · cc440e87
      Jens Axboe authored
      Provide a generic helper for setting up an io_uring worker. Returns a
      task_struct so that the caller can do whatever setup is needed, then call
      wake_up_new_task() to kick it into gear.
      
      Add a kernel_clone_args member, io_thread, which tells copy_process() to
      mark the task with PF_IO_WORKER.
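
      The create-then-kick shape described above can be sketched in hedged userspace C (this is a hypothetical analogue using pthreads, not the kernel API; the names worker_create and worker_kick are made up). The worker is created parked, the creator performs its setup, and only then is the worker kicked into gear, mirroring create_io_thread() followed by wake_up_new_task().

      ```c
      #include <stdio.h>
      #include <pthread.h>

      /* Hypothetical userspace analogue of create_io_thread() +
       * wake_up_new_task(): the worker parks on a gate until the creator
       * has finished its setup and kicks it. */
      struct worker {
          pthread_t thread;
          pthread_mutex_t lock;
          pthread_cond_t cond;
          int ready;        /* set by the creator once setup is done */
          const char *name; /* example of state configured before the kick */
      };

      static void *worker_fn(void *arg)
      {
          struct worker *w = arg;

          /* Park until kicked (analogue of the new task not running
           * until wake_up_new_task() is called). */
          pthread_mutex_lock(&w->lock);
          while (!w->ready)
              pthread_cond_wait(&w->cond, &w->lock);
          pthread_mutex_unlock(&w->lock);

          printf("worker %s running\n", w->name);
          return NULL;
      }

      static int worker_create(struct worker *w)
      {
          w->ready = 0;
          pthread_mutex_init(&w->lock, NULL);
          pthread_cond_init(&w->cond, NULL);
          return pthread_create(&w->thread, NULL, worker_fn, w);
      }

      static void worker_kick(struct worker *w)
      {
          pthread_mutex_lock(&w->lock);
          w->ready = 1;
          pthread_cond_signal(&w->cond);
          pthread_mutex_unlock(&w->lock);
      }

      int main(void)
      {
          struct worker w;

          if (worker_create(&w))
              return 1;
          /* Do setup while the worker is parked... */
          w.name = "iou-wrk-0";
          /* ...then kick it into gear. */
          worker_kick(&w);
          pthread_join(w.thread, NULL);
          return 0;
      }
      ```

      The mutex/condvar gate stands in for the scheduler keeping the new task off the runqueue; the creator's writes before the kick are visible to the worker because the kick happens under the same lock the worker waits on.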
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    •
      io_uring: reliably cancel linked timeouts · dd59a3d5
      Pavel Begunkov authored
      Linked timeouts are fired asynchronously (i.e. from soft-irq context)
      and use the generic cancellation paths to do their work, including
      poking into io-wq. The problem is that accessing tctx->io_wq is racy,
      as io_uring_task_cancel() and others may be running at this exact
      moment. Mark linked timeouts with REQ_F_INFLIGHT for now, making sure
      there are no such timeouts left before io-wq destruction.
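
      The inflight-tracking idea above can be illustrated with a hedged, simplified C sketch (the flag value, struct layout, and counter here are made up for illustration and are not io_uring's actual data structures): marked requests are counted, and teardown is only safe once the count drops to zero.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical sketch of inflight tracking: requests that may poke
       * into shared state get a flag, and teardown waits until no marked
       * requests remain. Flag value and struct are invented for this demo. */
      #define REQ_F_INFLIGHT (1u << 0)

      struct req {
          unsigned int flags;
      };

      static int inflight_count;

      static void req_mark_inflight(struct req *req)
      {
          req->flags |= REQ_F_INFLIGHT;
          inflight_count++;
      }

      static void req_complete(struct req *req)
      {
          if (req->flags & REQ_F_INFLIGHT) {
              req->flags &= ~REQ_F_INFLIGHT;
              inflight_count--;
          }
      }

      int main(void)
      {
          struct req timeout = { 0 };

          req_mark_inflight(&timeout); /* linked timeout armed */
          assert(inflight_count == 1); /* teardown must wait */

          req_complete(&timeout);      /* timeout fired or was cancelled */
          assert(inflight_count == 0); /* now safe to tear down io-wq */

          printf("inflight tracking ok\n");
          return 0;
      }
      ```

      In the real kernel the wait is of course a blocking wait on the count, not an assert; the sketch only shows why marking the timeouts closes the race window against io-wq destruction.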
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>