Commit e94f6852 authored by Pavel Begunkov, committed by Jens Axboe

block: kill extra rcu lock/unlock in queue enter

blk_try_enter_queue() already takes rcu_read_lock/unlock, so we can
avoid the second pair in percpu_ref_tryget_live() by using the newly
added percpu_ref_tryget_live_rcu().

As rcu_read_lock/unlock imply barrier()s, the overhead is pretty noticeable,
especially for !CONFIG_PREEMPT_RCU (the default for some distributions),
where __rcu_read_lock/unlock() are not inlined.

3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b11c67ea495ed9d44f067622d852de4a510ce65.1634822969.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 3b13c168
...
@@ -389,7 +389,7 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 static bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
 	rcu_read_lock();
-	if (!percpu_ref_tryget_live(&q->q_usage_counter))
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
 		goto fail;
 	/*
...
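
The change is safe because percpu_ref_tryget_live() is essentially the RCU-protected variant wrapped in its own rcu_read_lock/unlock pair, and blk_try_enter_queue() already holds the RCU read lock around the call. Below is a minimal sketch of how the two helpers relate; it assumes the percpu_ref internals (__ref_is_percpu(), __PERCPU_REF_DEAD, ref->data->count) and is not the verbatim code added by the parent commit:

/*
 * Sketch only: the RCU-protected tryget relies on the caller already
 * holding rcu_read_lock(), which blk_try_enter_queue() does.
 */
static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* Fast path: bump the per-cpu counter. */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* Slow path: ref switched to atomic mode but still live. */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}

/* The plain tryget_live keeps taking its own RCU read lock. */
static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	bool ret;

	rcu_read_lock();
	ret = percpu_ref_tryget_live_rcu(ref);
	rcu_read_unlock();
	return ret;
}

With that split, the rcu_read_lock() already taken in blk_try_enter_queue() covers the tryget, so the redundant lock/unlock pair (and its barrier()s) disappears from the queue-enter hot path.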