Commit a8394090 authored by Tang Junhui, committed by Jens Axboe

bcache: correct cache_dirty_target in __update_writeback_rate()

__update_writeback_rate() uses a Proportion-Differentiation (PD) controller
algorithm to control the writeback rate. A dirty target number is used in
this PD controller to control the writeback rate: a larger target number
makes the writeback rate smaller, and conversely, a smaller target number
makes the writeback rate larger.

bcache uses the following steps to calculate the target number:
1) cache_sectors = all-buckets-of-cache-set * buckets-size
2) cache_dirty_target = cache_sectors * cached-device-writeback_percent
3) target = cache_dirty_target *
(sectors-of-cached-device/sectors-of-all-cached-devices-of-this-cache-set)
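As a concrete illustration (hypothetical numbers, not taken from the patch):
with 1,000,000 buckets of 1024 sectors each, writeback_percent = 10, and a
cached device holding half of all cached-device sectors,
  1) cache_sectors      = 1,000,000 * 1024       = 1,024,000,000
  2) cache_dirty_target = 1,024,000,000 * 10/100 =   102,400,000
  3) target             = 102,400,000 * (1/2)    =    51,200,000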

The calculation of cache_sectors at step 1) is incorrect: it does not
account for the dirty blocks occupied by flash only volumes.

A flash only volume can be regarded as a bcache device without a backing
cached device. All data sectors allocated for it stay permanently on the
cache device and are marked dirty; they are never touched by the bcache
writeback and garbage collection code. So data blocks of flash only volumes
should be ignored when calculating cache_sectors of a cache set.

The current code does not subtract the dirty sectors of flash only volumes,
which results in a larger target number from the above 3 steps. Consequently
the cached device's writeback rate is smaller than the correct value, and
writeback is slower on all cached devices.

This patch fixes the incorrectly slow writeback rate by subtracting the
dirty sectors of flash only volumes in __update_writeback_rate().

(Commit log composed by Coly Li to pass checkpatch.pl checking)
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 0b43f49d
@@ -21,7 +21,8 @@
 static void __update_writeback_rate(struct cached_dev *dc)
 {
 	struct cache_set *c = dc->disk.c;
-	uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size;
+	uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size -
+				bcache_flash_devs_sectors_dirty(c);
 	uint64_t cache_dirty_target =
 		div_u64(cache_sectors * dc->writeback_percent, 100);
...
@@ -14,6 +14,25 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d)
 	return ret;
 }
 
+static inline uint64_t bcache_flash_devs_sectors_dirty(struct cache_set *c)
+{
+	uint64_t i, ret = 0;
+
+	mutex_lock(&bch_register_lock);
+
+	for (i = 0; i < c->nr_uuids; i++) {
+		struct bcache_device *d = c->devices[i];
+
+		if (!d || !UUID_FLASH_ONLY(&c->uuids[i]))
+			continue;
+		ret += bcache_dev_sectors_dirty(d);
+	}
+
+	mutex_unlock(&bch_register_lock);
+
+	return ret;
+}
+
 static inline unsigned offset_to_stripe(struct bcache_device *d,
 					uint64_t offset)
 {
...