Commit c81ffa32 authored by Tang Junhui, committed by Jens Axboe

bcache: fix sequential large write IO bypass

Sequential write IOs were tested with bs=1M by fio in writeback cache
mode; these IOs were expected to be bypassed, but in fact they were not.
Debugging the code, we found the following in check_should_bypass():
    if (!congested &&
        mode == CACHE_MODE_WRITEBACK &&
        op_is_write(bio_op(bio)) &&
        (bio->bi_opf & REQ_SYNC))
        goto rescale;
This means that in writeback mode, a write IO with the REQ_SYNC flag is
never bypassed, even when it is a large sequential IO. That is not the
correct behavior, so this patch removes the check.
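To make the effect concrete, below is a minimal, self-contained
userspace model of the bypass decision before and after this patch.
It is a sketch only: should_bypass(), io_ctx, and the flag values are
simplified stand-ins, not the kernel's check_should_bypass() or its
real definitions; only the names CACHE_MODE_WRITEBACK, REQ_SYNC, and
sequential_cutoff mirror the code quoted above.

    #include <stdbool.h>
    #include <stdio.h>

    #define CACHE_MODE_WRITEBACK 1
    #define REQ_SYNC (1u << 0)            /* simplified stand-in flag bit */

    struct io_ctx {
        unsigned int mode;                /* cache mode */
        unsigned int opf;                 /* request flags */
        bool is_write;
        bool congested;
        unsigned long sequential;         /* contiguous bytes seen so far */
        unsigned long sequential_cutoff;
    };

    static bool should_bypass(const struct io_ctx *c, bool patched)
    {
        if (!c->congested && !c->sequential_cutoff)
            return false;                 /* rescale: cache it */

        /* The branch this patch removes: sync writeback writes never bypass. */
        if (!patched && !c->congested &&
            c->mode == CACHE_MODE_WRITEBACK &&
            c->is_write && (c->opf & REQ_SYNC))
            return false;                 /* rescale: cache it */

        /* The sequential cutoff applies to every IO that falls through. */
        return c->sequential >= c->sequential_cutoff;
    }

    int main(void)
    {
        /* A large sequential sync write that has exceeded a 4 MiB cutoff. */
        struct io_ctx c = {
            .mode = CACHE_MODE_WRITEBACK,
            .opf = REQ_SYNC,
            .is_write = true,
            .congested = false,
            .sequential = 8ul << 20,
            .sequential_cutoff = 4ul << 20,
        };

        printf("before patch: bypass=%d\n", should_bypass(&c, false)); /* 0 */
        printf("after  patch: bypass=%d\n", should_bypass(&c, true));  /* 1 */
        return 0;
    }

With the branch gone, large sequential sync writes such as the 1M fio
writes in the test accumulate past sequential_cutoff and are bypassed
as expected.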
Signed-off-by: tang.junhui <tang.junhui@zte.com.cn>
Reviewed-by: Kent Overstreet <kent.overstreet@gmail.com>
Reviewed-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 4b758df2
@@ -400,12 +400,6 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
 	if (!congested && !dc->sequential_cutoff)
 		goto rescale;
 
-	if (!congested &&
-	    mode == CACHE_MODE_WRITEBACK &&
-	    op_is_write(bio->bi_opf) &&
-	    op_is_sync(bio->bi_opf))
-		goto rescale;
-
 	spin_lock(&dc->io_lock);
 
 	hlist_for_each_entry(i, iohash(dc, bio->bi_iter.bi_sector), hash)
...