1. 09 Dec, 2020 19 commits
    • btrfs: fix race that makes inode logging fallback to transaction commit · 47d3db41
      Filipe Manana authored
      When logging an inode and the previous transaction is still committing, we
      have a time window where we can end up incorrectly thinking an inode has
      its last_unlink_trans field set to a value greater than the last committed
      transaction, which makes the logging fall back to a full transaction
      commit. A transaction commit is usually much more expensive than a log
      commit.
      
      The race is described by the following steps:
      
      1) We are at transaction 1000;
      
      2) We modify an inode X (a directory) using transaction 1000 and set its
         last_unlink_trans field to 1000, because for example we removed one
         of its subdirectories;
      
      3) We create a new inode Y with a dentry in inode X using transaction 1000,
         so its generation field is set to 1000;
      
      4) The commit for transaction 1000 is started by task A;
      
      5) The task committing transaction 1000 sets the transaction state to
         unblocked, writes the dirty extent buffers and the super blocks, then
         unlocks tree_log_mutex;
      
      6) Some task starts a new transaction with a generation of 1001;
      
      7) We do some modification to inode Y (using transaction 1001);
      
      8) The transaction 1000 commit starts unpinning extents. At this point
         fs_info->last_trans_committed still has a value of 999;
      
      9) Task B starts an fsync on inode Y, and gets a handle for transaction
         1001. When it gets to check_parent_dirs_for_sync() it does the checking
         of the ancestor dentries because the following check does not evaluate
         to true:
      
             if (S_ISREG(inode->vfs_inode.i_mode) &&
                 inode->generation <= last_committed &&
                 inode->last_unlink_trans <= last_committed)
                     goto out;
      
         The generation value for inode Y is 1000 and last_committed, which has
         the value read from fs_info->last_trans_committed, has a value of 999,
         so that check evaluates to false and we proceed to check the ancestor
         inodes.
      
         Once we get to the first ancestor, inode X, we call
         btrfs_must_commit_transaction() on it, which evaluates to true:
      
         static bool btrfs_must_commit_transaction(...)
         {
             struct btrfs_fs_info *fs_info = inode->root->fs_info;
             bool ret = false;
      
             mutex_lock(&inode->log_mutex);
             if (inode->last_unlink_trans > fs_info->last_trans_committed) {
                 /*
                  * Make sure any commits to the log are forced to be full
                  * commits.
                  */
                  btrfs_set_log_full_commit(trans);
                  ret = true;
             }
          (...)
      
          because inode X's last_unlink_trans has a value of 1000 and
          fs_info->last_trans_committed still has a value of 999, it returns
          true to check_parent_dirs_for_sync(), making it return 1, which is
          propagated up to btrfs_sync_file() and causes it to fall back to a
          full transaction commit of transaction 1001.
      
          We should not have fallen back to committing transaction 1001, since
          inode X had last_unlink_trans set to 1000 and the super blocks for
          transaction 1000 were already written. So while this does not result
          in a functional problem, it leads to a lot more work and higher
          latencies for an fsync, since committing a transaction is usually
          more expensive than committing a log (if other filesystem changes
          happened under that transaction).
      
      A similar problem happens when logging directories, for the same reason:
      btrfs_must_commit_transaction() returns true for an inode whose
      last_unlink_trans has the generation of the previous transaction while
      that transaction is still committing, unpinning its freed extents.
      
      So fix this by comparing last_unlink_trans with the id of the current
      transaction instead of fs_info->last_trans_committed.
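
      A minimal sketch of the idea, reusing the names from the snippet above
      (the exact signature and shape of the patch may differ):

          static bool btrfs_must_commit_transaction(struct btrfs_trans_handle *trans,
                                                    struct btrfs_inode *inode)
          {
              bool ret = false;

              mutex_lock(&inode->log_mutex);
              /*
               * Compare against the id of the running transaction instead of
               * fs_info->last_trans_committed, which lags behind while the
               * previous commit is still unpinning extents.
               */
              if (inode->last_unlink_trans >= trans->transid) {
                  btrfs_set_log_full_commit(trans);
                  ret = true;
              }
              mutex_unlock(&inode->log_mutex);

              return ret;
          }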
      
      This case is often hit when running dbench for a long enough duration, as
      it does lots of rename and rmdir operations (both of which update an
      inode's last_unlink_trans field) and fsyncs of files and directories.
      
      This patch belongs to a patch set that is comprised of the following
      patches:
      
        btrfs: fix race causing unnecessary inode logging during link and rename
        btrfs: fix race that results in logging old extents during a fast fsync
        btrfs: fix race that causes unnecessary logging of ancestor inodes
        btrfs: fix race that makes inode logging fallback to transaction commit
        btrfs: fix race leading to unnecessary transaction commit when logging inode
        btrfs: do not block inode logging for so long during transaction commit
      
      Performance results are mentioned in the change log of the last patch.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      47d3db41
    • btrfs: fix race that causes unnecessary logging of ancestor inodes · 4d6221d7
      Filipe Manana authored
      When logging an inode and we are checking if we need to log ancestors that
      are new, if the previous transaction is still committing, we have a time
      window where we can unnecessarily log ancestor inodes that were created in
      the previous transaction.
      
      The race is described by the following steps:
      
      1) We are at transaction 1000;
      
      2) Directory inode X is created, its generation is set to 1000;
      
      3) The commit for transaction 1000 is started by task A;
      
      4) The task committing transaction 1000 sets the transaction state to
         unblocked, writes the dirty extent buffers and the super blocks, then
         unlocks tree_log_mutex;
      
      5) Inode Y, a regular file, is created under directory inode X; this
         results in starting a new transaction with a generation of 1001;
      
      6) The transaction 1000 commit is unpinning extents. At this point
         fs_info->last_trans_committed still has a value of 999;
      
      7) Task B calls fsync on inode Y and gets a handle for transaction 1001;
      
      8) Task B ends up at log_all_new_ancestors() and then because inode Y has
         only one hard link, ends up at log_new_ancestors_fast(). There it reads
         a value of 999 from fs_info->last_trans_committed, and sees that the
         parent inode X has a generation of 1000, so we end up logging inode X:
      
           if (inode->generation > fs_info->last_trans_committed) {
               ret = btrfs_log_inode(trans, root, inode,
                                     LOG_INODE_EXISTS, ctx);
               (...)
      
         which is not necessary since it was created in the past transaction,
         with a generation of 1000, and that transaction has already committed
         its super blocks - it's still unpinning extents so it has not yet
         updated fs_info->last_trans_committed from 999 to 1000.
      
         So this just causes us to spend more time logging and allocating and
         writing more tree blocks for the log tree.
      
      So fix this by comparing an inode's generation with the generation of the
      transaction our transaction handle refers to - if the inode's generation
      matches the generation of the current transaction then we know it is a
      new inode we need to log, otherwise don't log it.
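
      A sketch of the adjusted check in log_new_ancestors_fast(), assuming the
      handle's transaction id is available as trans->transid:

            /* Only log ancestors created in the current transaction. */
            if (inode->generation >= trans->transid) {
                ret = btrfs_log_inode(trans, root, inode,
                                      LOG_INODE_EXISTS, ctx);
                (...)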
      
      This case is often hit when running dbench for a long enough duration.
      
      This patch belongs to a patch set that is comprised of the following
      patches:
      
        btrfs: fix race causing unnecessary inode logging during link and rename
        btrfs: fix race that results in logging old extents during a fast fsync
        btrfs: fix race that causes unnecessary logging of ancestor inodes
        btrfs: fix race that makes inode logging fallback to transaction commit
        btrfs: fix race leading to unnecessary transaction commit when logging inode
        btrfs: do not block inode logging for so long during transaction commit
      
      Performance results are mentioned in the change log of the last patch.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      4d6221d7
    • btrfs: fix race that results in logging old extents during a fast fsync · 5f96bfb7
      Filipe Manana authored
      When logging the extents of an inode during a fast fsync, we have a time
      window where we can log extents that are from the previous transaction and
      already persisted. This makes us waste time unnecessarily.
      
      The following sequence of steps shows how this can happen:
      
      1) We are at transaction 1000;
      
      2) An ordered extent E from inode I completes, that is, it has gone
         through btrfs_finish_ordered_io(), and unpinning the extent set the
         extent map's generation to 1000 - the generation of the current
         transaction;
      
      3) The commit for transaction 1000 is started by task A;
      
      4) The task committing transaction 1000 sets the transaction state to
         unblocked, writes the dirty extent buffers and the super blocks, then
         unlocks tree_log_mutex;
      
      5) Some change is made to inode I, resulting in creation of a new
         transaction with a generation of 1001;
      
      6) The transaction 1000 commit starts unpinning extents. At this point
         fs_info->last_trans_committed still has a value of 999;
      
      7) Task B starts an fsync on inode I, and when it gets to
         btrfs_log_changed_extents() it sees the extent map for extent E in the
         list of modified extents. The extent map has a generation of 1000
         while fs_info->last_trans_committed has a value of 999, so it proceeds
         to log the respective file extent item and all the checksums covering
         its range.
      
         So we end up wasting time since the extent was already persisted and
         is reachable through the trees pointed to by the super block committed
         by transaction 1000.
      
      So just fix this by comparing the extent map's generation against the
      generation of the transaction handle - if it is smaller than the id in the
      handle, we know the extent was already persisted and we do not need to log
      it.
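
      Sketched against the loop in btrfs_log_changed_extents() (list and
      member names assumed from the description):

            list_for_each_entry_safe(em, n, &tree->modified_extents, list) {
                /*
                 * Skip extents persisted by a past transaction: their
                 * generation is smaller than our handle's transaction id.
                 */
                if (em->generation < trans->transid)
                    continue;
                (...)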
      
      This patch belongs to a patch set that is comprised of the following
      patches:
      
        btrfs: fix race causing unnecessary inode logging during link and rename
        btrfs: fix race that results in logging old extents during a fast fsync
        btrfs: fix race that causes unnecessary logging of ancestor inodes
        btrfs: fix race that makes inode logging fallback to transaction commit
        btrfs: fix race leading to unnecessary transaction commit when logging inode
        btrfs: do not block inode logging for so long during transaction commit
      
      Performance results are mentioned in the change log of the last patch.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5f96bfb7
    • btrfs: fix race causing unnecessary inode logging during link and rename · de53d892
      Filipe Manana authored
      When we are doing a rename or a link operation for an inode that was logged
      in the previous transaction, and that transaction is still committing, we
      have a time window where we incorrectly consider that the inode was logged
      previously in the current transaction and therefore decide to log it to
      update it in the log. The following steps give an example of how this
      happens during a link operation:
      
      1) Inode X is logged in transaction 1000, so its logged_trans field is set
         to 1000;
      
      2) Task A starts to commit transaction 1000;
      
      3) The state of transaction 1000 is changed to TRANS_STATE_UNBLOCKED;
      
      4) Task B starts a link operation for inode X, and as a consequence it
         starts transaction 1001;
      
      5) Task A is still committing transaction 1000, therefore the value stored
         at fs_info->last_trans_committed is still 999;
      
      6) Task B calls btrfs_log_new_name(), which reads a value of 999 from
         fs_info->last_trans_committed, and because the logged_trans field of
         inode X has a value of 1000, the function does not return immediately;
         instead it proceeds to log the inode, which should not happen because
         the inode was logged in the previous transaction (1000) and not in the
         current one (1001).
      
      This is not a functional problem, just wasted time and space logging an
      inode that does not need to be logged, contributing to higher latency
      for link and rename operations.
      
      So fix this by comparing the inode's logged_trans field with the
      generation of the current transaction, instead of comparing with the value
      stored in fs_info->last_trans_committed.
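
      A hedged reconstruction of the early return in btrfs_log_new_name()
      (the full condition in the kernel also has to cover the old parent
      directory for renames):

            /*
             * If the inode was last logged before the current transaction
             * started, there is nothing in the log tree to update.
             */
            if (inode->logged_trans < trans->transid)
                return;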
      
      This case is often hit when running dbench for a long enough duration, as
      it does lots of rename operations.
      
      This patch belongs to a patch set that is comprised of the following
      patches:
      
        btrfs: fix race causing unnecessary inode logging during link and rename
        btrfs: fix race that results in logging old extents during a fast fsync
        btrfs: fix race that causes unnecessary logging of ancestor inodes
        btrfs: fix race that makes inode logging fallback to transaction commit
        btrfs: fix race leading to unnecessary transaction commit when logging inode
        btrfs: do not block inode logging for so long during transaction commit
      
      Performance results are mentioned in the change log of the last patch.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      de53d892
    • btrfs: remove recalc_thresholds from free space ops · fa598b06
      David Sterba authored
      After removing the inode number cache that was using the free space
      cache code, we can remove at least the recalc_thresholds callback from
      the ops. Both code and tests use the same callback function. It's moved
      before its first use.
      
      The use_bitmaps callback is still needed by tests to create some
      extents/bitmap setup.
      Signed-off-by: David Sterba <dsterba@suse.com>
      fa598b06
    • btrfs: always set NODATASUM/NODATACOW in __create_free_space_inode · f0d1219d
      Nikolay Borisov authored
      Since it's used solely for the free space cache, unconditionally set the
      flags required for it.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f0d1219d
    • btrfs: remove crc_check logic from free space · 7dbdb443
      Nikolay Borisov authored
      Following removal of the ino cache, io_ctl_init will be called only on
      behalf of the free space inode. In this case we always want to check
      CRCs, so conditional code that depended on io_ctl::check_crc can be
      removed.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7dbdb443
    • btrfs: remove inode number cache feature · 5297199a
      Nikolay Borisov authored
      It's been deprecated since commit b547a88e ("btrfs: start
      deprecation of mount option inode_cache") which enumerates the reasons.
      
      A filesystem that uses the feature (mount -o inode_cache) tracks the
      inode numbers in bitmaps; that data stays on the filesystem after this
      patch. The size is roughly 5MiB for 1M inodes [1], which is considered
      small enough to be left there. Removal of the leftover data can be
      implemented in btrfs-progs if needed.
      
      [1] https://lore.kernel.org/linux-btrfs/20201127145836.GZ6430@twin.jikos.cz/
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ update changelog ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      5297199a
    • btrfs: replace calls to btrfs_find_free_ino with btrfs_find_free_objectid · abadc1fc
      Nikolay Borisov authored
      The former is going away as part of the inode map removal, so switch
      callers to btrfs_find_free_objectid. No functional changes, since with
      INODE_MAP disabled (the default) find_free_objectid was called anyway.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      abadc1fc
    • btrfs: move btrfs_find_highest_objectid/btrfs_find_free_objectid to disk-io.c · ec7d6dfd
      Nikolay Borisov authored
      Those functions are going to be used even after the inode cache is
      removed, so move them to a more appropriate place.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ec7d6dfd
    • btrfs: drop casts of bio bi_sector · 1201b58b
      David Sterba authored
      Since commit 72deb455 ("block: remove CONFIG_LBDAF") (5.2) the
      sector_t type is u64 on all arches and configs, so we don't need to
      typecast it. It used to be unsigned long, and the result of sector size
      shifts was not guaranteed to fit in the type.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1201b58b
    • btrfs: implement log-structured superblock for ZONED mode · 12659251
      Naohiro Aota authored
      The superblock (and its copies) is the only data structure in btrfs which
      has a fixed location on a device. Since we cannot overwrite it in a
      sequential write required zone, we cannot place the superblock in such a
      zone. One easy solution is limiting the superblock and its copies to be
      placed only in conventional zones. However, this method has two
      downsides. One is a reduced number of superblock copies: the location of
      the second copy of the superblock is 256GB, which is in a sequential
      write required zone on typical devices in the market today, so the
      number of superblock copies would be limited to two. The second downside
      is that we could not support devices which have no conventional zones at
      all.
      
      To solve these two problems, we employ superblock log writing. It uses
      two adjacent zones as a circular buffer to write updated superblocks.
      Once the first zone is filled up, we start writing into the second one.
      Then, when both zones are filled up and before starting to write to the
      first zone again, we reset the first zone.
      
      We can determine the position of the latest superblock by reading write
      pointer information from a device. One corner case is when both zones
      are full. For this situation, we read out the last superblock of each
      zone, and compare them to determine which zone is older.
      
      The following zones are reserved as the circular buffer on ZONED btrfs.
      
      - The primary superblock: zones 0 and 1
      - The first copy: zones 16 and 17
      - The second copy: zone 1024, or the zone at 256GB, whichever is smaller,
        and the zone next to it
      
      If these reserved zones are conventional, the superblock is written at a
      fixed location at the start of the zone, without logging.
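
      A simplified sketch of picking the next superblock write position in the
      two-zone circular buffer (an illustrative helper, not the patch's exact
      code):

          /*
           * Given the two reserved zones, append to the first zone that
           * still has room; if both are full the caller must compare the
           * generations of the last superblock in each zone, reset the
           * older zone and continue there.
           */
          static u64 sb_log_write_offset(const struct blk_zone *zones)
          {
              if (zones[0].cond != BLK_ZONE_COND_FULL)
                  return zones[0].wp << SECTOR_SHIFT;
              if (zones[1].cond != BLK_ZONE_COND_FULL)
                  return zones[1].wp << SECTOR_SHIFT;
              return U64_MAX; /* both full: needs the compare-and-reset path */
          }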
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      12659251
    • btrfs: disallow mixed-bg in ZONED mode · a589dde0
      Naohiro Aota authored
      Placing both data and metadata in a block group is impossible in ZONED
      mode. For data, we can allocate space and write it immediately after the
      allocation. For metadata, however, we cannot do that, because the logical
      addresses are recorded in other metadata buffers to build up the trees.
      As a result, a data buffer can be placed after a metadata buffer that is
      not yet written, and writing out the data buffer would break the
      sequential write rule.
      
      Check and disallow MIXED_BG with ZONED mode.
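
      The check itself is a short guard in the zoned mode validation (a
      sketch; the exact error message may differ):

          if (btrfs_fs_incompat(fs_info, MIXED_GROUPS)) {
              btrfs_err(fs_info,
                        "zoned: mixed block groups not supported");
              return -EINVAL;
          }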
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a589dde0
    • btrfs: disable fallocate in ZONED mode · f1569c4c
      Naohiro Aota authored
      fallocate() is implemented by reserving an actual extent instead of a
      reservation. This can result in exposing the sequential write
      constraint of host-managed zoned block devices to the application, which
      would break the POSIX semantics for the fallocated file.  To avoid this,
      report fallocate() as not supported when in ZONED mode for now.
      
      In the future, we may be able to implement an "in-memory" fallocate() in
      ZONED mode by utilizing space_info->bytes_may_use or similar, so this
      returns EOPNOTSUPP.
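
      A sketch of the guard at the top of btrfs_fallocate(), assuming the
      btrfs_is_zoned() helper from this series:

          /* Do not expose the sequential write constraint to user space. */
          if (btrfs_is_zoned(btrfs_sb(inode->i_sb)))
              return -EOPNOTSUPP;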
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f1569c4c
    • btrfs: disallow NODATACOW in ZONED mode · d206e9c9
      Naohiro Aota authored
      NODATACOW implies overwriting the file data on a device, which is
      impossible in sequential write required zones. Disable NODATACOW
      globally via the mount option, and disable the per-file NODATACOW
      attribute by masking FS_NOCOW_FL.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d206e9c9
    • btrfs: disallow space_cache in ZONED mode · 5d1ab66c
      Naohiro Aota authored
      As updates to the space cache v1 are done in place, the space cache
      cannot be located over sequential zones, and there are no guarantees that
      the device will have enough conventional zones to store this cache.
      Resolve this problem by completely disabling the space cache v1.  This
      does not introduce any problems with sequential block groups: all the
      free space is located after the allocation pointer and there is no free
      space before the pointer, so there is no need for such a cache.
      
      Note: we could technically use the free space tree (space cache v2) in
      ZONED mode. But since ZONED mode now always allocates extents in a block
      group sequentially, regardless of the underlying device zone type, there
      is no point in enabling and maintaining the tree.
      
      For the same reason, NODATACOW is also disabled.
      
      In summary, ZONED will disable:
      
      | Disabled features | Reason                                              |
      |-------------------+-----------------------------------------------------|
      | RAID/DUP          | Cannot handle two zone append writes to different   |
      |                   | zones                                               |
      |-------------------+-----------------------------------------------------|
      | space_cache (v1)  | In-place updating                                   |
      | NODATACOW         | In-place updating                                   |
      |-------------------+-----------------------------------------------------|
      | fallocate         | Reserved extent will be a write hole                |
      |-------------------+-----------------------------------------------------|
      | MIXED_BG          | Allocated metadata region will be write holes for   |
      |                   | data writes                                         |
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5d1ab66c
    • btrfs: introduce max_zone_append_size · 862931c7
      Naohiro Aota authored
      The zone append write command has a restriction on the maximum IO size
      it accepts. This is because a zone append write command cannot be split,
      as we ask the device to place the data into a specific target zone and
      the device responds with the actual written location of the data.
      
      Introduce max_zone_append_size to zone_info and fs_info to track the
      value, so we can limit all I/O to a zoned block device that we want to
      write using the zone append command to the device's limits.
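
      A sketch of how the limit can be derived from the block layer's queue
      limits (the fields are as described above, the rest is assumed):

          struct request_queue *queue = bdev_get_queue(device->bdev);

          /* Per-device zone append limit, in bytes... */
          zone_info->max_zone_append_size =
              (u64)queue_max_zone_append_sectors(queue) << SECTOR_SHIFT;
          /* ...and the filesystem-wide limit is the minimum of them all. */
          fs_info->max_zone_append_size =
              min_not_zero(fs_info->max_zone_append_size,
                           zone_info->max_zone_append_size);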
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      862931c7
    • btrfs: check and enable ZONED mode · b70f5097
      Naohiro Aota authored
      Introduce the function btrfs_check_zoned_mode() to check if the ZONED
      flag is enabled on the file system and if the file system consists of
      zoned devices with equal zone size.
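
      A sketch of the equal-zone-size validation (device iteration simplified,
      message text assumed):

          u64 zone_size = 0;

          list_for_each_entry(device, &fs_devices->devices, dev_list) {
              if (!device->zone_info)
                  continue;
              if (!zone_size) {
                  zone_size = device->zone_info->zone_size;
              } else if (device->zone_info->zone_size != zone_size) {
                  btrfs_err(fs_info,
                            "zoned: unequal zone sizes not supported");
                  return -EINVAL;
              }
          }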
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b70f5097
    • btrfs: get zone information of zoned block devices · 5b316468
      Naohiro Aota authored
      If a zoned block device is found, get its zone information (number of
      zones and zone size).  To avoid costly run-time zone report commands
      when testing a zone's type during block allocation, attach the seq_zones
      bitmap to the device structure to indicate whether a zone is sequential
      or accepts random writes. Also attach the empty_zones bitmap to indicate
      whether a zone is empty.
      
      This patch also introduces the helper function btrfs_dev_is_sequential()
      to test if the zone storing a block is a sequential write required zone,
      and btrfs_dev_is_empty_zone() to test if the zone is an empty zone.
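
      The first helper reduces to a bitmap test; a sketch based on the
      description (member names assumed):

          static inline bool btrfs_dev_is_sequential(struct btrfs_device *device,
                                                     u64 pos)
          {
              struct btrfs_zoned_device_info *zone_info = device->zone_info;

              if (!zone_info)
                  return false; /* not a zoned device */
              /* One bit per zone, indexed by the zone number of @pos. */
              return test_bit(pos >> zone_info->zone_size_shift,
                              zone_info->seq_zones);
          }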
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5b316468
  2. 08 Dec, 2020 21 commits
    • btrfs: introduce ZONED feature flag · 7b3d5a90
      Naohiro Aota authored
      This patch introduces the ZONED incompat flag. The flag indicates that
      the volume management will satisfy the constraints imposed by
      host-managed zoned block devices (aligned chunk allocation, append-only
      updates, reset zone after filled).
      
      As zoned support will land incrementally, due to the needed enhancements
      to core infrastructure like super block writes, tree-log and RAID
      support, the feature will appear in sysfs only on debug builds. It will
      be enabled once the support is feature complete and applications can
      reliably check whether zoned support is present or not.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7b3d5a90
    • btrfs: return bool from btrfs_should_end_transaction · a2633b6a
      Nikolay Borisov authored
      Results in slightly smaller code.
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-11 (-11)
      Function                                     old     new   delta
      btrfs_should_end_transaction                  96      85     -11
      Total: Before=20070, After=20059, chg -0.05%
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a2633b6a
    • Nikolay Borisov · 8a8f4dea
    • btrfs: remove err variable from do_relocation · 8df01fdd
      Nikolay Borisov authored
      It simply gets assigned to 'ret' in case of errors. The flow of the
      while loop is not changed by this commit, since the few call sites that
      do 'goto next' will simply break from the loop.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8df01fdd
    • btrfs: eliminate err variable from merge_reloc_root · c6a592f2
      Nikolay Borisov authored
      In most cases when an error is returned from a function, 'ret' is simply
      assigned to 'err'. There is only one case where walk_up_reloc_tree can
      return a positive value - in this case the code breaks from the loop and
      'ret' is going to get its return value from btrfs_cow_block - either 0
      or negative. This retains the old logic of how 'err' used to be set at
      this call site.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c6a592f2
    • btrfs: remove err variable from btrfs_delete_subvolume · ee0d904f
      Nikolay Borisov authored
      Use only a single 'ret' to control whether we should abort the
      transaction or not. That's fine, because if we abort a transaction then
      btrfs_end_transaction will return the same value as passed to
      btrfs_abort_transaction. No semantic changes.
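
      The resulting pattern, sketched on one of the calls in
      btrfs_delete_subvolume():

          ret = btrfs_unlink_subvol(trans, dir, dentry);
          if (ret) {
              btrfs_abort_transaction(trans, ret);
              goto out_end_trans;
          }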
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ee0d904f
    • btrfs: unlock path before checking if extent is shared during nocow writeback · c65ca98f
      Filipe Manana authored
      When we are attempting to start writeback for an existing extent in NOCOW
      mode, at run_delalloc_nocow(), we must check if the extent is shared, and
      if it is, fall back to a COW write. However we do such a check while still
      holding a read lock on the leaf that contains the file extent item, and
      that check, the call to btrfs_cross_ref_exist(), can take some time
      because:
      
      1) It needs to do a search on the extent tree, which obviously takes some
         time, especially if delayed references are being run at the moment, as
         we can block when trying to lock currently write locked btree nodes;
      
      2) It needs to check the delayed references for any existing reference
         for our data extent, this requires acquiring the delayed references'
         spinlock and maybe block on the mutex of a delayed reference head in the
         case where there is a delayed reference for our data extent, in the
         worst case it makes us release the path on the extent tree and retry
         the whole process again (going back to step 1).
      
      There are other operations we do while holding the leaf locked that can
      take some significant time as well (especially all together):
      
      * btrfs_extent_readonly() - to check if the block group containing the
        extent is currently in RO mode. This requires taking a spinlock and
        searching for the block group in an rbtree that can be big on large
        filesystems;
      
      * csum_exist_in_range() - to search if there are any checksums in the
        csum tree for the extent. Like before, this can take some time if we are
        in a filesystem that has both COW and NOCOW files, in which case the
        csum tree is not empty;
      
      * btrfs_inc_nocow_writers() - increment the number of nocow writers in the
        block group that contains the data extent. Needs to acquire a spinlock
        and search for the block group in an rbtree that can be big on large
        filesystems.
      
      So just unlock the leaf (release the path) before doing all those checks,
      since we do not need it anymore. In case we cannot do a NOCOW write for
      the extent, due to any of those checks failing, and the writeback range
      goes beyond that extent's length, we will do another btree search for the
      next file extent item.
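
      In outline, the change moves the path release ahead of the slow checks
      (a sketch; the surrounding logic of run_delalloc_nocow() and the
      illustrative variable names are omitted/assumed):

          /*
           * We have everything we need from the file extent item, so drop
           * the leaf lock before the potentially slow checks below.
           */
          btrfs_release_path(path);

          ret = btrfs_cross_ref_exist(root, ino, file_pos, disk_bytenr);
          (...)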
      
      The following script that calls dbench was used to measure the impact of
      this change on a VM with 8 CPUs, 16GiB of RAM, using a raw NVMe device
      directly (no intermediary filesystem on the host) and using a non-debug
      kernel (default configuration on Debian):
      
        $ cat test-dbench.sh
        #!/bin/bash
      
        DEV=/dev/sdk
        MNT=/mnt/sdk
        MOUNT_OPTIONS="-o ssd -o nodatacow"
        MKFS_OPTIONS="-m single -d single"
      
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        dbench -D $MNT -t 300 64
      
        umount $MNT
      
      Before this change:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    9326331     0.317   399.957
       Close        6851198     0.002     6.402
       Rename        394894     2.621   402.819
       Unlink       1883131     0.931   398.082
       Deltree          256    19.160   303.580
       Mkdir            128     0.003     0.016
       Qpathinfo    8452314     0.068   116.133
       Qfileinfo    1481921     0.001     5.081
       Qfsinfo      1549963     0.002     4.444
       Sfileinfo     759679     0.084    17.079
       Find         3268168     0.396   118.196
       WriteX       4653310     0.056   110.993
       ReadX        14618818     0.005    23.314
       LockX          30364     0.003     0.497
       UnlockX        30364     0.002     1.720
       Flush         653619    16.954   569.299
      
      Throughput 966.651 MB/sec  64 clients  64 procs  max_latency=569.377 ms
      
      After this change:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    9710433     0.302   232.449
       Close        7132948     0.002    11.496
       Rename        411144     2.452   131.805
       Unlink       1960961     0.893   230.383
       Deltree          256    14.858   198.646
       Mkdir            128     0.002     0.005
       Qpathinfo    8800890     0.066   111.588
       Qfileinfo    1542556     0.001     3.852
       Qfsinfo      1613835     0.002     5.483
       Sfileinfo     790871     0.081    19.492
       Find         3402743     0.386   120.185
       WriteX       4842918     0.054   179.312
       ReadX        15220407     0.005    32.435
       LockX          31612     0.003     1.533
       UnlockX        31612     0.002     1.047
       Flush         680567    16.320   463.323
      
      Throughput 1016.59 MB/sec  64 clients  64 procs  max_latency=463.327 ms
      
      +5.0% throughput, -20.5% max latency
      
      Also, the following test using fio was run:
      
        $ cat test-fio.sh
        #!/bin/bash
      
        DEV=/dev/sdk
        MNT=/mnt/sdk
        MOUNT_OPTIONS="-o ssd -o nodatacow"
        MKFS_OPTIONS="-d single -m single"
      
        if [ $# -ne 4 ]; then
            echo "Use $0 NUM_JOBS FILE_SIZE FSYNC_FREQ BLOCK_SIZE"
            exit 1
        fi
      
        NUM_JOBS=$1
        FILE_SIZE=$2
        FSYNC_FREQ=$3
        BLOCK_SIZE=$4
      
        cat <<EOF > /tmp/fio-job.ini
        [writers]
        rw=randwrite
        fsync=$FSYNC_FREQ
        fallocate=none
        group_reporting=1
        direct=0
        bs=$BLOCK_SIZE
        ioengine=sync
        size=$FILE_SIZE
        directory=$MNT
        numjobs=$NUM_JOBS
        EOF
      
        echo
        echo "Using fio config:"
        echo
        cat /tmp/fio-job.ini
        echo
        echo "mount options: $MOUNT_OPTIONS"
        echo
      
        mkfs.btrfs -f $MKFS_OPTIONS $DEV > /dev/null
        mount $MOUNT_OPTIONS $DEV $MNT
      
        echo "Creating nodatacow files before fio runs..."
        for ((i = 0; i < $NUM_JOBS; i++)); do
            xfs_io -f -c "pwrite -b 128M 0 $FILE_SIZE" "$MNT/writers.$i.0"
        done
        sync
      
        fio /tmp/fio-job.ini
        umount $MNT
      
      Before this change:
      
      $ ./test-fio.sh 16 512M 2 4K
      (...)
      WRITE: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=8192MiB (8590MB), run=289800-289800msec
      
      After this change:
      
      $ ./test-fio.sh 16 512M 2 4K
      (...)
      WRITE: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=8192MiB (8590MB), run=262845-262845msec
      
      +9.7% throughput, -9.8% runtime
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c65ca98f
    • btrfs: tree-checker: annotate all error branches as unlikely · c7c01a4a
      David Sterba authored
      The tree checker is called many times as it verifies metadata at
      read/write time. The checks follow a simple pattern:
      
        if (error_condition) {
                report_error();
                return -EUCLEAN;
        }
      
      All the error reporting functions are annotated as __cold, which is
      supposed to hint the compiler to move the statement block out of the hot
      path. This does not seem to happen that often.
      
      As the error condition is expected to be false almost always, we can
      annotate it with 'unlikely', as this satisfies one of the few use cases
      for the annotation. The expected outcome is a stronger hint to the
      compiler to reorder the checks:
      
        test
        jump to exit
        test
        jump to exit
        ...
      
      which can be observed in the asm of e.g. check_dir_item,
      btrfs_check_chunk_valid, check_root_item or check_leaf.
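
      For illustration, the annotated form of the pattern shown above:

        if (unlikely(error_condition)) {
                report_error();
                return -EUCLEAN;
        }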
      
      There's a measurable run time improvement reported by Josef: the testing
      workload went from 655 MiB/s to 677 MiB/s, which is about +3%.
      
      There should be no functional changes, but some of the conditions have
      been rewritten to produce a more readable result; some lines are longer
      than 80 characters for the sake of readability.
      Signed-off-by: David Sterba <dsterba@suse.com>
      c7c01a4a
    • btrfs: remove stub device info from messages when we have no fs_info · a0f6d924
      David Sterba authored
      With a NULL fs_info the helpers will print something like
      
      	BTRFS error (device <unknown>): ...
      
      This can happen in contexts where fs_info is not available at all or
      it's potentially unsafe due to object lifetime. The <unknown> stub does
      not bring much information and with the prefix makes the message
      unnecessarily longer.
      
      Remove it for the NULL fs_info case.
      
      	BTRFS error: ...
      
      Callers can add the device information to the message itself if needed.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a0f6d924
    • btrfs: use detach_page_private() in alloc_extent_buffer() · fb22e9c4
      Qu Wenruo authored
      In alloc_extent_buffer(), after we get a page from the btree inode, we
      check if that page has a private pointer attached.
      
      If attached, we check if the existing extent buffer has proper refs.
      If not (the eb is being freed), we will detach that private eb pointer.
      
      The point here is, we are detaching that eb pointer by calling:
      - ClearPagePrivate()
      - put_page()
      
      The put_page() here is especially confusing, as it's decreasing the ref
      from attach_page_private().  Without knowing that, it looks like the
      put_page() is for the find_or_create_page() call, confusing the reader.
      
      Since we're always modifying page private with attach_page_private() and
      detach_page_private(), the only open-coded detach_page_private() here is
      really confusing.
      
      Fix it by calling detach_page_private().
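
      In outline (a sketch of the replacement):

          /* Before: open-coded detach with a confusing bare put_page(). */
          ClearPagePrivate(page);
          set_page_private(page, 0);
          put_page(page);

          /* After: pairs clearly with attach_page_private(). */
          detach_page_private(page);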
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      fb22e9c4
    • btrfs: use nodesize to determine if we need readahead in btrfs_lookup_bio_sums · 35478d05
      Qu Wenruo authored
      In btrfs_lookup_bio_sums(), if the bio is pretty large, we want to
      start readahead in the csum tree.
      
      However the threshold is an immediate number (PAGE_SIZE * 8) from the
      initial btrfs merge.
      
      The meaning of the value is pretty hard to guess, especially when the
      immediate number is from the times when 4K sectorsize was the default
      and only CRC32C was supported.
      
      For the most common btrfs setup, CRC32C csums and 4K sectorsize, it
      means just a 32K read would kick off readahead, while the csums
      themselves are only 32 bytes in size.
      
      Now let's be more reasonable by taking both csum size and node size
      into consideration.
      
      If the csum size for the bio is larger than one leaf, then we kick off
      the readahead.  This means for the current default btrfs the threshold
      will be 16M.
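
      Expressed as a sketch (variable names assumed; the computation is what
      matters):

          const u32 csum_size = btrfs_super_csum_size(fs_info->super_copy);
          /* Bytes of checksums this bio needs from the csum tree. */
          const u32 csum_bytes =
              (bio_len / fs_info->sectorsize) * csum_size;

          /* Kick readahead only if the csums span more than one leaf. */
          if (csum_bytes > fs_info->nodesize)
              path->reada = READA_FORWARD;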
      
      This change should not change performance observably, thus this is
      mostly a readability enhancement.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      35478d05
    • btrfs: only clear EXTENT_LOCK bit in extent_invalidatepage · 829ddec9
      Qu Wenruo authored
      extent_invalidatepage() will try to clear all possible bits since it's
      calling clear_extent_bit() with delete == 1.
      
      This is currently fine, since the btree io tree only utilizes the
      EXTENT_LOCKED bit.  But this could be a problem for later subpage
      support, which will utilize extra io tree bits to represent additional
      info.
      
      This patch will just convert that clear_extent_bit() to
      unlock_extent_cached().
      
      For the current code, since only the EXTENT_LOCKED bit is utilized, this
      doesn't change the behavior, but it provides a much cleaner basis for
      incoming subpage support.
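
      The conversion, sketched (the exact bit mask passed before the change
      is elided):

          /* Before: delete == 1 clears every bit set in the range. */
          clear_extent_bit(tree, start, end, bits, 1, 1, &cached_state);

          /* After: only EXTENT_LOCKED is dropped, other bits survive. */
          unlock_extent_cached(tree, start, end, &cached_state);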
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      829ddec9
    • btrfs: remove unused parameter phy_offset from btrfs_validate_metadata_buffer · 8e1dc982
      Qu Wenruo authored
      Parameter @phy_offset is the offset against the bio->bi_iter.bi_sector.
      @phy_offset is mostly for data io, to look up the csum in btrfs_io_bio.
      
      But for metadata it's completely useless, as metadata stores its own
      csum in its header, so we can remove it.
      
      Note: parameters @start and @end are not utilized at all for the
      current sectorsize == PAGE_SIZE case, as we can grab the eb directly
      from the page.
      
      But those two parameters are very important for later subpage support,
      thus @start/@end are not touched here.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8e1dc982
    • btrfs: scrub: remove the anonymous structure from scrub_page · 2c363954
      Qu Wenruo authored
      That anonymous structure serves no special purpose, so just replace it
      with regular members.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2c363954
    • btrfs: use fixed width int type for extent_state::state · f97e27e9
      Qu Wenruo authored
      Currently the type is unsigned int, which could change its width
      depending on the architecture. We need up to 32 bits, so make it
      explicit.
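
      That is, in struct extent_state (a sketch, other members elided):

          struct extent_state {
              u64 start;
              u64 end;
              (...)
              u32 state;  /* was: unsigned int state; */
          };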
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f97e27e9
    • btrfs: introduce helper to handle page status update in end_bio_extent_readpage() · e09caaf9
      Qu Wenruo authored
      Introduce a new helper to handle page status updates in
      end_bio_extent_readpage(). This will later be used for subpage support,
      where the page status update can be more complex than now.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e09caaf9
    • btrfs: add structure to keep track of extent range in end_bio_extent_readpage · 94e8c95c
      Qu Wenruo authored
      In end_bio_extent_readpage() we had a strange dance around
      extent_start/extent_len.
      
      Hidden behind the strange dance is the fact that it's just calling
      endio_readpage_release_extent() on each contiguous bvec range.
      
      Here is an example to explain the original work flow:
      
        Bio is for inode 257, containing 2 pages, for range [1M, 1M+8K)
      
        end_bio_extent_readpage() entered
        |- extent_start = 0;
        |- extent_end = 0;
        |- bio_for_each_segment_all() {
        |  |- /* Got the 1st bvec */
        |  |- start = SZ_1M;
        |  |- end = SZ_1M + SZ_4K - 1;
        |  |- update = 1;
        |  |- if (extent_len == 0) {
        |  |  |- extent_start = start; /* SZ_1M */
        |  |  |- extent_len = end + 1 - start; /* SZ_4K */
        |  |  }
        |  |
        |  |- /* Got the 2nd bvec */
        |  |- start = SZ_1M + 4K;
        |  |- end = SZ_1M + 8K - 1;
        |  |- update = 1;
        |  |- if (extent_start + extent_len == start) {
        |  |  |- extent_len += end + 1 - start; /* SZ_8K */
        |  |  }
        |  } /* All bio vec iterated */
        |
        |- if (extent_len) {
           |- endio_readpage_release_extent(tree, extent_start, extent_len,
      				      update);
      	/* extent_start == SZ_1M, extent_len == SZ_8K, uptodate = 1 */
      
      As the above flow shows, the existing code in end_bio_extent_readpage()
      accumulates extent_start/extent_len, and when the contiguous range
      stops, calls endio_readpage_release_extent() for the range.
      
      However the current behavior has a few things not really considered:
      
      - The inode can change
        For a bio, its pages don't need to have contiguous page_offset.
        This means even pages from different inodes can be packed into one
        bio.
      
      - A bvec can cross a page boundary
        There is a feature called multi-page bvec, where bvec->bv_len can go
        beyond the bvec->bv_page boundary.
      
      - Poor readability
      
      This patch will address the problem:
      
      - Introduce a proper structure, processed_extent, to record the
        processed extent range (see the sketch after this list)
      
      - Integrate inode/start/end/uptodate check into
        endio_readpage_release_extent()
      
      - Add more comments on each step.
        This should greatly improve readability; now in
        end_bio_extent_readpage() there are only two
        endio_readpage_release_extent() calls.
      
      - Add inode check for contiguity
        Now we also ensure the inode is the same one before checking if the
        range is contiguous.
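
      The structure itself is small; a sketch matching the description
      (member names assumed):

          struct processed_extent {
              struct btrfs_inode *inode;
              /* Start and end of the contiguous range gathered so far. */
              u64 start;
              u64 end;
              bool uptodate;
          };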
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      94e8c95c
    • btrfs: tests: remove invalid extent-io test · b1d51f67
      Qu Wenruo authored
      In extent-io-test, there are two invalid tests:
      
      - Invalid nodesize for test_eb_bitmaps()
        Instead of the sectorsize and nodesize combination passed in, we're
        always using a hand-crafted nodesize, e.g.:
      
      	len = (sectorsize < BTRFS_MAX_METADATA_BLOCKSIZE)
      		? sectorsize * 4 : sectorsize;
      
        In the above case, if we have a 32K page size, then we will get a
        length of 128K, which is beyond the max node size, and obviously
        invalid.
      
        The common page size goes up to 64K, so we haven't hit that in practice.
      
      - Invalid extent buffer bytenr
        For 64K page size, the only combination we're going to test is
        sectorsize = nodesize = 64K.
        However, in that case we will try to test an eb whose bytenr is not
        sectorsize aligned:
      
      	/* Do it over again with an extent buffer which isn't page-aligned. */
      	eb = __alloc_dummy_extent_buffer(fs_info, nodesize / 2, len);
      
        Sector alignment is a hard requirement for any sector size.
        The only exception is the superblock. But anything else should follow
        sector size alignment.
      
        This is definitely an invalid test case.
      
      This patch will fix both problems by:
      
      - Honor the sectorsize/nodesize combination
        Now we won't bother to hand-craft the length and use it as nodesize.
      
      - Use sectorsize as the 2nd run extent buffer start
        This would test the case where the extent buffer is aligned to
        sectorsize but not always aligned to nodesize.
      
      Please note that later subpage related cleanups will reduce
      extent_buffer::pages[] to exactly what we need, making sector-unaligned
      extent buffer operations cause problems.
      
      Since only extent_io self tests utilize this, this patch is required for
      all later cleanup/refactoring.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b1d51f67
    • btrfs: sysfs: remove unneeded semicolon · 445d8ab5
      Tom Rix authored
      A semicolon is not needed after a switch statement.
      Signed-off-by: Tom Rix <trix@redhat.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      445d8ab5
    • btrfs: simplify return values in setup_nodes_for_search · 95b982de
      Nikolay Borisov authored
      The function is needlessly convoluted. Fix that by:
      
      * removing the redundant sret variable definition in both if arms
      
      * replacing the again/done labels with direct return statements; the
        function is short enough and doesn't do anything special upon exit
      
      * removing the BUG_ON on split_node returning a positive number - it
        can't happen, as split_node returns either 0 or a negative error code.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      95b982de
    • btrfs: remove useless return value statement in split_node · d5286a92
      Nikolay Borisov authored
      At the point where we set 'ret = 0' it's guaranteed that the function
      is going to return 0, so directly return 0. No functional changes.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d5286a92