Commit a7af0af3 authored by Peter Zijlstra, committed by Jens Axboe

blk-mq: attempt to fix atomic flag memory ordering

Attempt to untangle the ordering in blk-mq. The patch introducing the
single smp_mb__before_atomic() is obviously broken in that it doesn't
clearly specify a pairing barrier and an obtained guarantee.

The comment is further misleading in that it hints that the
deadline store and the COMPLETE store also need to be ordered, but
AFAICT there is no such dependency. However, what does appear to be
important is the COMPLETE clear happening _after_ the STARTED store,
and that worked by pure accident.

This clarifies blk_mq_start_request() -- we should not get there with
STARTED set -- this simplifies the code and makes the barrier usage
sane (the old code could be read to allow not having _any_ atomic after
the barrier, in which case the barrier hasn't got anything to order). We
then also introduce the missing pairing barrier for it.

Also downgrade the barrier to smp_wmb(); this is cheaper on
PowerPC/ARM and doesn't cost anything extra on x86.

And it documents the STARTED vs COMPLETE ordering. I've not been
entirely successful in reverse engineering the blk-mq state machine,
though, so there might still be more funnies around timeout vs
requeue.

If I got anything wrong, feel free to educate me by adding comments to
clarify things ;-)

Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Fixes: 538b7534 ("blk-mq: request deadline must be visible before marking rq as started")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 9c988374
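The pairing the patch establishes can be modelled outside the kernel. Below is a minimal, hypothetical userspace sketch (all names are made up; C11 fences stand in for smp_wmb()/smp_rmb(), relaxed atomics for the bitops and READ_ONCE()/WRITE_ONCE(); release/acquire is at least as strong as the wmb/rmb pairing requires). If the reader observes started == 1, it is guaranteed to observe deadline == 1234:

/* pairing.c -- illustrative model only, not kernel code */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long deadline;
static atomic_int started;

/* Models blk_mq_start_request(): publish ->deadline, then set STARTED. */
static void *start_request(void *arg)
{
	(void)arg;
	atomic_store_explicit(&deadline, 1234, memory_order_relaxed);
	/* Plays the role of smp_wmb(): orders the store above before the
	 * store below, for any reader that orders its loads. */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&started, 1, memory_order_relaxed);
	return NULL;
}

/* Models blk_mq_check_expired(): only trust ->deadline once STARTED is seen. */
static void *check_expired(void *arg)
{
	(void)arg;
	if (atomic_load_explicit(&started, memory_order_relaxed)) {
		/* Plays the role of smp_rmb(): pairs with the fence above. */
		atomic_thread_fence(memory_order_acquire);
		printf("deadline = %lu\n",
		       atomic_load_explicit(&deadline, memory_order_relaxed));
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;
	pthread_create(&w, NULL, start_request, NULL);
	pthread_create(&r, NULL, check_expired, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}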
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -596,22 +596,32 @@ void blk_mq_start_request(struct request *rq)
 	blk_add_timer(rq);
 
-	/*
-	 * Ensure that ->deadline is visible before set the started
-	 * flag and clear the completed flag.
-	 */
-	smp_mb__before_atomic();
+	WARN_ON_ONCE(test_bit(REQ_ATOM_STARTED, &rq->atomic_flags));
 
 	/*
 	 * Mark us as started and clear complete. Complete might have been
 	 * set if requeue raced with timeout, which then marked it as
 	 * complete. So be sure to clear complete again when we start
 	 * the request, otherwise we'll ignore the completion event.
+	 *
+	 * Ensure that ->deadline is visible before we set STARTED, such that
+	 * blk_mq_check_expired() is guaranteed to observe our ->deadline when
+	 * it observes STARTED.
 	 */
-	if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
-		set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
-	if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
+	smp_wmb();
+	set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
+	if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags)) {
+		/*
+		 * Coherence order guarantees these consecutive stores to a
+		 * single variable propagate in the specified order. Thus the
+		 * clear_bit() is ordered _after_ the set bit. See
+		 * blk_mq_check_expired().
+		 *
+		 * (the bits must be part of the same byte for this to be
+		 * true).
+		 */
 		clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
+	}
 
 	if (q->dma_drain_size && blk_rq_bytes(rq)) {
 		/*
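The write-write claim in the new comment rests on a generic memory-model rule: all stores to one variable form a single modification (coherence) order that every CPU agrees on, so a set_bit() followed by a clear_bit() on the same word needs no barrier between them. A hypothetical standalone sketch (made-up flag names; C11 RMWs standing in for the bitops):

/* coherence_write.c -- illustrative model only, not kernel code */
#include <stdatomic.h>
#include <stdio.h>

#define STARTED		(1u << 0)
#define COMPLETE	(1u << 1)

/* One word holds both flags, as rq->atomic_flags does. Start with a
 * stale COMPLETE, as after a requeue/timeout race. */
static _Atomic unsigned int flags = COMPLETE;

int main(void)
{
	/* Two relaxed RMWs on the *same* variable: coherence alone orders
	 * them for every observer; no barrier is needed in between. */
	atomic_fetch_or_explicit(&flags, STARTED, memory_order_relaxed);    /* set_bit()   */
	atomic_fetch_and_explicit(&flags, ~COMPLETE, memory_order_relaxed); /* clear_bit() */

	/* 0x1: STARTED set, COMPLETE clear -- no observer can see the
	 * clear without also seeing the set. */
	printf("flags = %#x\n", (unsigned int)atomic_load(&flags));
	return 0;
}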
@@ -781,10 +791,19 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 		struct request *rq, void *priv, bool reserved)
 {
 	struct blk_mq_timeout_data *data = priv;
+	unsigned long deadline;
 
 	if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
 		return;
 
+	/*
+	 * Ensures that if we see STARTED we must also see our
+	 * up-to-date deadline, see blk_mq_start_request().
+	 */
+	smp_rmb();
+
+	deadline = READ_ONCE(rq->deadline);
+
 	/*
 	 * The rq being checked may have been freed and reallocated
 	 * out already here, we avoid this race by checking rq->deadline
@@ -798,11 +817,20 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 	 * and clearing the flag in blk_mq_start_request(), so
 	 * this rq won't be timed out too.
 	 */
-	if (time_after_eq(jiffies, rq->deadline)) {
-		if (!blk_mark_rq_complete(rq))
+	if (time_after_eq(jiffies, deadline)) {
+		if (!blk_mark_rq_complete(rq)) {
+			/*
+			 * Again coherence order ensures that consecutive reads
+			 * from the same variable must be in that order. This
+			 * ensures that if we see COMPLETE clear, we must then
+			 * see STARTED set and we'll ignore this timeout.
+			 *
+			 * (There's also the MB implied by the test_and_clear())
+			 */
 			blk_mq_rq_timed_out(rq, reserved);
-	} else if (!data->next_set || time_after(data->next, rq->deadline)) {
-		data->next = rq->deadline;
+		}
+	} else if (!data->next_set || time_after(data->next, deadline)) {
+		data->next = deadline;
 		data->next_set = 1;
 	}
 }
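The read side uses the dual rule: two program-ordered reads of the same variable may not observe its modification order backwards. A minimal sketch of just that rule (hypothetical names; deliberately not a model of the full timeout vs requeue race):

/* coherence_read.c -- illustrative model only, not kernel code */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int x;	/* modification order will be 0 -> 1 -> 2 */

static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&x, 1, memory_order_relaxed);
	atomic_store_explicit(&x, 2, memory_order_relaxed);
	return NULL;
}

static void *reader(void *arg)
{
	(void)arg;
	int a = atomic_load_explicit(&x, memory_order_relaxed); /* read 1 */
	int b = atomic_load_explicit(&x, memory_order_relaxed); /* read 2 */
	/* Read-read coherence: the later read may not see an earlier
	 * value in x's modification order than the first read did. */
	assert(b >= a);
	return NULL;
}

int main(void)
{
	pthread_t w, r;
	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}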
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -211,7 +211,7 @@ void blk_add_timer(struct request *req)
 	if (!req->timeout)
 		req->timeout = q->rq_timeout;
 
-	req->deadline = jiffies + req->timeout;
+	WRITE_ONCE(req->deadline, jiffies + req->timeout);
 
 	/*
 	 * Only the non-mq case needs to add the request to a protected list.
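This WRITE_ONCE() pairs with the READ_ONCE() added to blk_mq_check_expired() above: ->deadline is read locklessly, so the marked accesses stop the compiler from tearing the store or re-loading the field between uses, and the timeout path snapshots it once into a local. A hypothetical userspace approximation (the macros mimic the kernel's behaviour for word-sized scalars; all other names are made up):

/* once.c -- illustrative model only, not kernel code */
#include <stdio.h>

/* Volatile accesses may not be torn, fused, or repeated by the compiler. */
#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

static unsigned long deadline_field;	/* imagine a timer path writing this */

/* Writer side, modelled on blk_add_timer(). */
static void arm_timer(unsigned long jiffies_now, unsigned long timeout)
{
	WRITE_ONCE(deadline_field, jiffies_now + timeout);
}

/* Reader side, modelled on blk_mq_check_expired(): snapshot once, then
 * use the local, so every comparison tests the same value even if the
 * field is concurrently rewritten. */
static void check(unsigned long jiffies_now)
{
	unsigned long deadline = READ_ONCE(deadline_field);

	if (jiffies_now >= deadline)	/* stands in for time_after_eq() */
		printf("expired\n");
	else
		printf("%lu ticks left\n", deadline - jiffies_now);
}

int main(void)
{
	arm_timer(1000, 30);	/* deadline = 1030 */
	check(1031);		/* expired */
	check(1010);		/* 20 ticks left */
	return 0;
}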