Commit f9f28e5b authored by Naohiro Aota, committed by David Sterba

btrfs: zoned: fix negative space_info->bytes_readonly

Consider we have a block group that is in use on zoned btrfs.

|<- ZU ->|<- used ->|<---free--->|
                     `- Alloc offset
ZU: Zone unusable

Marking the block group read-only will migrate the zone unusable bytes
to the read-only bytes. So, we will have this.

|<- RO ->|<- used ->|<--- RO --->|

RO: Read only
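
To make the accounting concrete, here is a minimal userspace sketch of
this migration (not the kernel's btrfs_inc_block_group_ro(); all sizes
are hypothetical): a 512 MiB block group with 160 MiB used and
alloc_offset at 200 MiB, i.e. 40 MiB of zone unusable bytes.

  /* Simplified model of marking a zoned block group read-only. */
  #include <stdint.h>
  #include <stdio.h>

  #define MiB (1024ULL * 1024ULL)

  int main(void)
  {
          uint64_t length = 512 * MiB, used = 160 * MiB, alloc_offset = 200 * MiB;
          uint64_t zone_unusable = alloc_offset - used;    /* 40 MiB of "ZU" */
          uint64_t bytes_readonly = 0;

          /* The free tail becomes read-only ... */
          bytes_readonly += length - used - zone_unusable; /* +312 MiB */
          /* ... and the zone unusable bytes are migrated to read-only too. */
          bytes_readonly += zone_unusable;                 /* +40 MiB */
          zone_unusable = 0;                               /* now tracked as RO */

          printf("bytes_readonly = %llu MiB\n",
                 (unsigned long long)(bytes_readonly / MiB)); /* 352 */
          return 0;
  }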

When marking it back to read-write, btrfs_dec_block_group_ro()
subtracts the above "RO" bytes from space_info->bytes_readonly. It
then moves the zone unusable bytes back and subtracts those bytes from
space_info->bytes_readonly again, leading to a negative
bytes_readonly.

This can be observed in the output, for example:

  Data, single: total=512.00MiB, used=165.21MiB, zone_unusable=16.00EiB
  Data, single: total=536870912, used=173256704, zone_unusable=18446744073603186688

This commit fixes the issue by reordering the operations.
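
As a rough illustration of why the order matters (same hypothetical
numbers as above, not the kernel code): the old order computes
num_bytes while cache->zone_unusable is still zero, so the zone
unusable bytes are subtracted twice, once inside num_bytes and once
explicitly, and the u64 counter wraps around.

  /* Old vs. new subtraction order when marking the block group RW again. */
  #include <stdint.h>
  #include <stdio.h>

  #define MiB (1024ULL * 1024ULL)

  int main(void)
  {
          uint64_t length = 512 * MiB, used = 160 * MiB, alloc_offset = 200 * MiB;
          uint64_t bytes_readonly = 352 * MiB; /* added when the group went RO */
          uint64_t zone_unusable = 0;          /* cleared while the group was RO */
          uint64_t num_bytes;

          /* Old order: num_bytes still covers the soon-to-be zone unusable bytes. */
          num_bytes = length - zone_unusable - used; /* 352 MiB */
          bytes_readonly -= num_bytes;               /* 0 */
          zone_unusable = alloc_offset - used;       /* 40 MiB migrated back */
          bytes_readonly -= zone_unusable;           /* wraps to ~16 EiB */
          printf("old order: %llu\n", (unsigned long long)bytes_readonly);

          /* New order: migrate zone unusable back first, then the remainder. */
          bytes_readonly = 352 * MiB;
          zone_unusable = alloc_offset - used;       /* 40 MiB */
          bytes_readonly -= zone_unusable;           /* 312 MiB */
          num_bytes = length - zone_unusable - used; /* 312 MiB */
          bytes_readonly -= num_bytes;               /* 0, balanced */
          printf("new order: %llu\n", (unsigned long long)bytes_readonly);
          return 0;
  }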

Link: https://github.com/naota/linux/issues/37
Reported-by: David Sterba <dsterba@suse.com>
Fixes: 169e0da9 ("btrfs: zoned: track unusable bytes for zones")
CC: stable@vger.kernel.org # 5.12+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent aefd7f70
@@ -2442,16 +2442,16 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group *cache)
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
 	if (!--cache->ro) {
-		num_bytes = cache->length - cache->reserved -
-			    cache->pinned - cache->bytes_super -
-			    cache->zone_unusable - cache->used;
-		sinfo->bytes_readonly -= num_bytes;
 		if (btrfs_is_zoned(cache->fs_info)) {
 			/* Migrate zone_unusable bytes back */
 			cache->zone_unusable = cache->alloc_offset - cache->used;
 			sinfo->bytes_zone_unusable += cache->zone_unusable;
 			sinfo->bytes_readonly -= cache->zone_unusable;
 		}
+		num_bytes = cache->length - cache->reserved -
+			    cache->pinned - cache->bytes_super -
+			    cache->zone_unusable - cache->used;
+		sinfo->bytes_readonly -= num_bytes;
 		list_del_init(&cache->ro_list);
 	}
 	spin_unlock(&cache->lock);