Commit 5ed61d3f authored by Ming Lei, committed by Jens Axboe

block: add a read barrier in blk_queue_enter()

Without the barrier, reading the DEAD flag of .q_usage_counter
and reading .mq_freeze_depth may be reordered, in which case the
following wait_event_interruptible() may never return.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
parent d9d149a3
@@ -669,6 +669,15 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		if (nowait)
 			return -EBUSY;
 
+		/*
+		 * read pair of barrier in blk_mq_freeze_queue_start(),
+		 * we need to order reading __PERCPU_REF_DEAD flag of
+		 * .q_usage_counter and reading .mq_freeze_depth,
+		 * otherwise the following wait may never return if the
+		 * two reads are reordered.
+		 */
+		smp_rmb();
+
 		ret = wait_event_interruptible(q->mq_freeze_wq,
 				!atomic_read(&q->mq_freeze_depth) ||
 				blk_queue_dying(q));
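For context, the ordering the new comment relies on follows the classic
message-passing pattern: the freeze side updates .mq_freeze_depth before
marking .q_usage_counter dead, so a reader that observes the DEAD flag
must also observe the non-zero freeze depth, provided its own two reads
are ordered; that is what the added smp_rmb() enforces, pairing with the
ordering in blk_mq_freeze_queue_start(). The sketch below is a minimal
userspace analogue using C11 atomics, not kernel code; the names
freeze_depth, ref_dead, writer() and reader_sees_freeze() are
illustrative stand-ins, and the acquire/release pair is simply the
portable C11 way to express the same guarantee.

/*
 * Simplified userspace analogue of the ordering above (illustrative
 * only, not kernel code).  The writer models blk_mq_freeze_queue_start(),
 * which increments the freeze depth before marking the refcount dead;
 * the reader models a failing percpu_ref_tryget_live() followed by the
 * wait condition in blk_queue_enter().
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  freeze_depth;   /* stand-in for q->mq_freeze_depth */
static atomic_bool ref_dead;       /* stand-in for __PERCPU_REF_DEAD  */

static void writer(void)
{
	/* In the kernel, atomic_inc_return() already implies full ordering. */
	atomic_fetch_add_explicit(&freeze_depth, 1, memory_order_seq_cst);
	atomic_store_explicit(&ref_dead, true, memory_order_release);
}

static bool reader_sees_freeze(void)
{
	/* Models percpu_ref_tryget_live() observing the DEAD flag. */
	if (!atomic_load_explicit(&ref_dead, memory_order_acquire))
		return false;

	/*
	 * The acquire load above plays the role of smp_rmb(): once the
	 * reader has seen ref_dead == true, it is guaranteed to also see
	 * the freeze_depth increment that preceded it, so the wait
	 * condition cannot be evaluated against a stale depth of zero.
	 */
	return atomic_load_explicit(&freeze_depth, memory_order_relaxed) > 0;
}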