1. 03 Oct, 2023 4 commits
    • btrfs: error out when COWing block using a stale transaction · 48774f3b
      Filipe Manana authored
      At btrfs_cow_block() we have checks to verify we are not using a stale
      transaction (a past transaction with an unblocked state or higher), and
      the only thing we do is trigger a WARN with a message and a stack trace.
      However this is a critical problem, highly unexpected and, if it happens,
      most likely due to a bug, so we should error out and turn the fs into an
      error state so that such an issue is much more easily noticed if it's
      triggered.
      
      The problem is critical because using such a stale transaction will lead
      to not persisting the extent buffer used for the COW operation, as
      allocating a tree block adds the range of the respective extent buffer
      to the ->dirty_pages iotree of the transaction, and a stale transaction,
      in the unblocked state or higher, will not flush dirty extent buffers
      anymore, therefore resulting in not persisting the tree block and in
      resource leaks (not cleaning the dirty_pages iotree, for example).
      
      So do the following changes:
      
      1) Return -EUCLEAN if we find a stale transaction;
      
      2) Turn the fs into error state, with error -EUCLEAN, so that no
         transaction can be committed, and generate a stack trace;
      
      3) Combine both conditions into a single if statement, as both are related
         and have the same error message;
      
      4) Mark the check as unlikely, since this is not expected to ever happen.
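
      In code form the combined check looks roughly like the following sketch
      (names follow the btrfs sources, but the message text and exact layout
      are illustrative, not the verbatim patch):

        if (unlikely(trans->transaction != fs_info->running_transaction ||
                     trans->transid != fs_info->generation)) {
                /* Turn the fs into error state and print a stack trace. */
                btrfs_abort_transaction(trans, -EUCLEAN);
                btrfs_crit(fs_info,
                           "unexpected transaction when COWing block %llu",
                           buf->start);
                return -EUCLEAN;
        }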
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: always print transaction aborted messages with an error level · f8d1b011
      Filipe Manana authored
      Commit b7af0635 ("btrfs: print transaction aborted messages with an
      error level") changed the log level of transaction aborted messages from
      a debug level to an error level, so that such messages are always visible
      even on production systems where the log level is normally above the debug
      level (and also on some syzbot reports).
      
      Later, commit fccf0c84 ("btrfs: move btrfs_abort_transaction to
      transaction.c") changed the log level back to debug level when the error
      number for a transaction abort should not have a stack trace printed.
      This happened for absolutely no reason. It's always useful to print
      transaction abort messages with an error level, regardless of whether
      the error number should cause a stack trace or not.
      
      So change back the log level to error level.
      
      Fixes: fccf0c84 ("btrfs: move btrfs_abort_transaction to transaction.c")
      CC: stable@vger.kernel.org # 6.5+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: reject unknown mount options early · 5f521494
      Qu Wenruo authored
      [BUG]
      The following script would allow invalid mount options to be specified
      (although such invalid options would just be ignored):
      
        # mkfs.btrfs -f $dev
        # mount $dev $mnt1		<<< Successful mount expected
        # mount $dev $mnt2 -o junk	<<< Failed mount expected
        # echo $?
        0
      
      [CAUSE]
      For the 2nd mount, since the fs is already mounted, we won't go through
      open_ctree() thus no btrfs_parse_options(), but only through
      btrfs_parse_subvol_options().
      
      However we do not distinguish unrecognized options from valid but irrelevant
      options, thus those invalid options would just be ignored by
      btrfs_parse_subvol_options().
      
      [FIX]
      Add handling for Opt_err to detect invalid options and error out, while
      still ignoring other valid options inside btrfs_parse_subvol_options().
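
      A sketch of the added case in the option parsing switch (the variable
      names and message are illustrative and may differ from the final patch):

        switch (token) {
        case Opt_subvol:
        case Opt_subvolid:
                /* handled as before */
                break;
        case Opt_err:
                btrfs_err(NULL, "unrecognized mount option '%s'", p);
                error = -EINVAL;
                goto out;
        default:
                /* Other valid but irrelevant mount options are skipped. */
                break;
        }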
      Reported-by: Anand Jain <anand.jain@oracle.com>
      CC: stable@vger.kernel.org # 4.14+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix some -Wmaybe-uninitialized warnings in ioctl.c · 9147b9de
      Josef Bacik authored
      Jens reported the following warnings from -Wmaybe-uninitialized on a
      recent Linus' branch.
      
        In file included from ./include/asm-generic/rwonce.h:26,
      		   from ./arch/arm64/include/asm/rwonce.h:71,
      		   from ./include/linux/compiler.h:246,
      		   from ./include/linux/export.h:5,
      		   from ./include/linux/linkage.h:7,
      		   from ./include/linux/kernel.h:17,
      		   from fs/btrfs/ioctl.c:6:
        In function ‘instrument_copy_from_user_before’,
            inlined from ‘_copy_from_user’ at ./include/linux/uaccess.h:148:3,
            inlined from ‘copy_from_user’ at ./include/linux/uaccess.h:183:7,
            inlined from ‘btrfs_ioctl_space_info’ at fs/btrfs/ioctl.c:2999:6,
            inlined from ‘btrfs_ioctl’ at fs/btrfs/ioctl.c:4616:10:
        ./include/linux/kasan-checks.h:38:27: warning: ‘space_args’ may be used
        uninitialized [-Wmaybe-uninitialized]
           38 | #define kasan_check_write __kasan_check_write
        ./include/linux/instrumented.h:129:9: note: in expansion of macro
        ‘kasan_check_write’
          129 |         kasan_check_write(to, n);
      	|         ^~~~~~~~~~~~~~~~~
        ./include/linux/kasan-checks.h: In function ‘btrfs_ioctl’:
        ./include/linux/kasan-checks.h:20:6: note: by argument 1 of type ‘const
        volatile void *’ to ‘__kasan_check_write’ declared here
           20 | bool __kasan_check_write(const volatile void *p, unsigned int
      	size);
      	|      ^~~~~~~~~~~~~~~~~~~
        fs/btrfs/ioctl.c:2981:39: note: ‘space_args’ declared here
         2981 |         struct btrfs_ioctl_space_args space_args;
      	|                                       ^~~~~~~~~~
        In function ‘instrument_copy_from_user_before’,
            inlined from ‘_copy_from_user’ at ./include/linux/uaccess.h:148:3,
            inlined from ‘copy_from_user’ at ./include/linux/uaccess.h:183:7,
            inlined from ‘_btrfs_ioctl_send’ at fs/btrfs/ioctl.c:4343:9,
            inlined from ‘btrfs_ioctl’ at fs/btrfs/ioctl.c:4658:10:
        ./include/linux/kasan-checks.h:38:27: warning: ‘args32’ may be used
        uninitialized [-Wmaybe-uninitialized]
           38 | #define kasan_check_write __kasan_check_write
        ./include/linux/instrumented.h:129:9: note: in expansion of macro
        ‘kasan_check_write’
          129 |         kasan_check_write(to, n);
      	|         ^~~~~~~~~~~~~~~~~
        ./include/linux/kasan-checks.h: In function ‘btrfs_ioctl’:
        ./include/linux/kasan-checks.h:20:6: note: by argument 1 of type ‘const
        volatile void *’ to ‘__kasan_check_write’ declared here
           20 | bool __kasan_check_write(const volatile void *p, unsigned int
      	size);
      	|      ^~~~~~~~~~~~~~~~~~~
        fs/btrfs/ioctl.c:4341:49: note: ‘args32’ declared here
         4341 |                 struct btrfs_ioctl_send_args_32 args32;
      	|                                                 ^~~~~~
      
      This was due to his config options and having KASAN turned on,
      which adds some extra checks around copy_from_user(), which then
      triggered the -Wmaybe-uninitialized checker for these cases.
      
      Fix the warnings by initializing the different structs we're copying
      into.
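
      The fix boils down to zero-initializing the on-stack structures before
      copy_from_user() fills them (struct names taken from the warnings above;
      a sketch of the relevant declarations):

        /* in btrfs_ioctl_space_info() */
        struct btrfs_ioctl_space_args space_args = { 0 };

        /* in _btrfs_ioctl_send(), compat path */
        struct btrfs_ioctl_send_args_32 args32 = { 0 };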
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  2. 21 Sep, 2023 2 commits
    • btrfs: initialize start_slot in btrfs_log_prealloc_extents · b4c639f6
      Josef Bacik authored
      Jens reported a compiler warning when using
      CONFIG_CC_OPTIMIZE_FOR_SIZE=y that looks like this
      
        fs/btrfs/tree-log.c: In function ‘btrfs_log_prealloc_extents’:
        fs/btrfs/tree-log.c:4828:23: warning: ‘start_slot’ may be used
        uninitialized [-Wmaybe-uninitialized]
         4828 |                 ret = copy_items(trans, inode, dst_path, path,
      	|                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         4829 |                                  start_slot, ins_nr, 1, 0);
      	|                                  ~~~~~~~~~~~~~~~~~~~~~~~~~
        fs/btrfs/tree-log.c:4725:13: note: ‘start_slot’ was declared here
         4725 |         int start_slot;
      	|             ^~~~~~~~~~
      
      The compiler is incorrect, as we only use this code when ins_nr > 0,
      and when ins_nr > 0 we have start_slot properly initialized.  However
      we generally find the -Wmaybe-uninitialized warnings valuable, so
      initialize start_slot to get rid of the warning.
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Tested-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make sure to initialize start and len in find_free_dev_extent · 20218dfb
      Josef Bacik authored
      Jens reported a compiler warning when using CONFIG_CC_OPTIMIZE_FOR_SIZE=y
      that looks like this
      
        In function ‘gather_device_info’,
            inlined from ‘btrfs_create_chunk’ at fs/btrfs/volumes.c:5507:8:
        fs/btrfs/volumes.c:5245:48: warning: ‘dev_offset’ may be used uninitialized [-Wmaybe-uninitialized]
         5245 |                 devices_info[ndevs].dev_offset = dev_offset;
      	|                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
        fs/btrfs/volumes.c: In function ‘btrfs_create_chunk’:
        fs/btrfs/volumes.c:5196:13: note: ‘dev_offset’ was declared here
         5196 |         u64 dev_offset;
      
      This occurs because find_free_dev_extent is responsible for setting
      dev_offset, however if we get an -ENOMEM at the top of the function
      we'll return without setting the value.
      
      This isn't actually a problem because we will see the -ENOMEM in
      gather_device_info() and return without using the uninitialized value.
      However we also just don't want the compiler warning, so rework the code
      slightly in find_free_dev_extent() to make sure it always sets *start
      and *len.
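
      One way to guarantee that (a sketch, with the signature simplified and
      the search logic elided) is to assign the output parameters before any
      early return:

        static int find_free_dev_extent(struct btrfs_device *device, u64 num_bytes,
                                        u64 *start, u64 *len)
        {
                struct btrfs_path *path;

                /* Define the outputs up front so early returns leave them set. */
                *start = 0;
                *len = 0;

                path = btrfs_alloc_path();
                if (!path)
                        return -ENOMEM;

                /* ... search for a suitable hole and update *start / *len ... */
        }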
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Tested-by: Jens Axboe <axboe@kernel.dk>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  3. 20 Sep, 2023 7 commits
    • btrfs: reset destination buffer when read_extent_buffer() gets invalid range · 74ee7914
      Qu Wenruo authored
      Commit f98b6215 ("btrfs: extent_io: do extra check for extent buffer
      read write functions") changed how we handle invalid extent buffer range
      for read_extent_buffer().
      
      Previously if the range is invalid we just set the destination to zero,
      but after the patch we do nothing and error out.
      
      This can lead to smatch static checker errors like:
      
        fs/btrfs/print-tree.c:186 print_uuid_item() error: uninitialized symbol 'subvol_id'.
        fs/btrfs/tests/extent-io-tests.c:338 check_eb_bitmap() error: uninitialized symbol 'has'.
        fs/btrfs/tests/extent-io-tests.c:353 check_eb_bitmap() error: uninitialized symbol 'has'.
        fs/btrfs/uuid-tree.c:203 btrfs_uuid_tree_remove() error: uninitialized symbol 'read_subid'.
        fs/btrfs/uuid-tree.c:353 btrfs_uuid_tree_iterate() error: uninitialized symbol 'subid_le'.
        fs/btrfs/uuid-tree.c:72 btrfs_uuid_tree_lookup() error: uninitialized symbol 'data'.
        fs/btrfs/volumes.c:7415 btrfs_dev_stats_value() error: uninitialized symbol 'val'.
      
      Fix those warnings by reverting to the old memset() behavior.  This
      keeps the static checker happy while still making a lot of noise when
      such invalid ranges are passed in.
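
      A sketch of the restored behaviour in read_extent_buffer() (names follow
      fs/btrfs/extent_io.c, exact placement may differ):

        void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
                                unsigned long start, unsigned long len)
        {
                if (unlikely(check_eb_range(eb, start, len))) {
                        /*
                         * Invalid range: zero the destination so callers never
                         * see uninitialized data; check_eb_range() has already
                         * warned loudly.
                         */
                        memset(dstv, 0, len);
                        return;
                }

                /* ... copy the requested range out of the extent buffer ... */
        }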
      Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
      Fixes: f98b6215 ("btrfs: extent_io: do extra check for extent buffer read write functions")
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: properly report 0 avail for very full file systems · 58bfe2cc
      Josef Bacik authored
      A user reported some issues with smaller file systems that get very
      full.  While investigating this issue I noticed that df wasn't showing
      100% full, despite having 0 chunk space and having < 1MiB of available
      metadata space.
      
      This turns out to be an overflow issue, we're doing:
      
        total_available_metadata_space - SZ_4M < global_block_rsv_size
      
      to determine if there's not enough space to make metadata allocations.
      The subtraction wraps around (unsigned underflow) if
      total_available_metadata_space is < 4M.  Fix this by checking that our
      available space is greater than the 4M threshold before doing the
      subtraction.
      This makes df properly report 100% usage on the file system.
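
      In code form the idea looks like this (a sketch using the names from the
      description above; the real check lives in btrfs_statfs()):

        /*
         * The values are u64, so the old "a - 4M < b" test wraps around when
         * a < 4M.  Check the threshold first to keep the comparison sane.
         */
        if (total_available_metadata_space < SZ_4M ||
            total_available_metadata_space - SZ_4M < global_block_rsv_size)
                buf->f_bavail = 0;      /* report the file system as full */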
      
      CC: stable@vger.kernel.org # 4.14+
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: log message if extent item not found when running delayed extent op · 8ec0a4a5
      Filipe Manana authored
      When running a delayed extent operation, if we don't find the extent item
      in the extent tree we just return -EIO without any logged message. This
      indicates some bug or possibly a memory or fs corruption, so the return
      value should not be -EIO but -EUCLEAN instead, and since it's not
      expected to ever happen, print an informative error message so that if
      it happens we have some idea of what went wrong and where to look.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove redundant BUG_ON() from __btrfs_inc_extent_ref() · d2f79e63
      Filipe Manana authored
      At __btrfs_inc_extent_ref() we are doing a BUG_ON() if we are dealing with
      a tree block reference that has a reference count that is different from 1,
      but we have already dealt with this case at run_delayed_tree_ref(), making
      it useless. So remove the BUG_ON().
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: return -EUCLEAN for delayed tree ref with a ref count not equals to 1 · 1bf76df3
      Filipe Manana authored
      When running a delayed tree reference, if we find a ref count different
      from 1, we return -EIO. This isn't an IO error, as it indicates either a
      bug in the delayed refs code or a memory corruption, so change the error
      code from -EIO to -EUCLEAN. Also tag the branch as 'unlikely', as this
      is not expected to ever happen, and change the error message to print
      the tree block's bytenr without parentheses (there was also a missing
      space between the word 'block' and the opening parenthesis), for
      consistency with the style used everywhere else.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: prevent transaction block reserve underflow when starting transaction · a7ddeeb0
      Filipe Manana authored
      When starting a transaction, with a non-zero number of items, we reserve
      metadata space for that number of items and for delayed refs by doing a
      call to btrfs_block_rsv_add(), with the transaction block reserve passed
      as the block reserve argument. This reserves metadata space and adds it
      to the transaction block reserve. Later we migrate the space we reserved
      for delayed references from the transaction block reserve into the delayed
      refs block reserve, by calling btrfs_migrate_to_delayed_refs_rsv().
      
      btrfs_migrate_to_delayed_refs_rsv() decrements the number of bytes to
      migrate from the source block reserve. However, this may result in an
      underflow in case the space added to the transaction block reserve ended
      up being used by another task that has not reserved enough space for its
      own use. Examples are tasks doing reflinks or hole punching, because
      they end up calling btrfs_replace_file_extents() -> btrfs_drop_extents()
      and may need to modify/COW a variable number of leaves/paths, so they
      keep trying to use space from the transaction block reserve when they
      need to COW an extent buffer, and may end up trying to use more space
      than they have reserved (only 1 unit/path for removing file extent
      items).
      
      This can be avoided by simply reserving space first without adding it to
      the transaction block reserve, then add the space for delayed refs to the
      delayed refs block reserve and finally add the remaining reserved space
      to the transaction block reserve. This also makes the code a bit shorter
      and simpler. So just do that.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix race when refilling delayed refs block reserve · 2ed45c0f
      Filipe Manana authored
      If we have two (or more) tasks attempting to refill the delayed refs block
      reserve we can end up with the delayed block reserve being over reserved,
      that is, with a reserved space greater than its size. If this happens, we
      are holding on to more reserved space than necessary for a while.
      
      The race happens like this:
      
      1) The delayed refs block reserve has a size of 8M and a reserved space of
         6M for example;
      
      2) Task A calls btrfs_delayed_refs_rsv_refill();
      
      3) Task B also calls btrfs_delayed_refs_rsv_refill();
      
      4) Task A sees there's a 2M difference between the size and the reserved
         space of the delayed refs rsv, so it will reserve 2M of space by
         calling btrfs_reserve_metadata_bytes();
      
      5) Task B also sees that 2M difference, and like task A, it reserves
         another 2M of metadata space;
      
      6) Both task A and task B increase the reserved space of block reserve
         by 2M, by calling btrfs_block_rsv_add_bytes(), so the block reserve
         ends up with a size of 8M and a reserved space of 10M;
      
      7) The extra, over reserved space will eventually be freed by some task
         calling btrfs_delayed_refs_rsv_release() -> btrfs_block_rsv_release()
         -> block_rsv_release_bytes(), as there we will detect the over reserve
         and release that space.
      
      So fix this by checking if we still need to add space to the delayed refs
      block reserve after reserving the metadata space, and if we don't, just
      release that space immediately.
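
      A sketch of the approach in btrfs_delayed_refs_rsv_refill() (helper names
      follow the btrfs sources of this era, but treat the details as
      illustrative):

        /* num_bytes was computed earlier as size - reserved; to_free starts at 0. */
        ret = btrfs_reserve_metadata_bytes(fs_info, space_info, num_bytes, flush);
        if (ret)
                return ret;

        /* Re-check under the lock: another task may have refilled meanwhile. */
        spin_lock(&block_rsv->lock);
        if (block_rsv->reserved < block_rsv->size) {
                u64 needed = block_rsv->size - block_rsv->reserved;

                if (num_bytes > needed) {
                        to_free = num_bytes - needed;
                        num_bytes = needed;
                }
                block_rsv->reserved += num_bytes;
        } else {
                /* The reserve is already full, give back everything we got. */
                to_free = num_bytes;
                num_bytes = 0;
        }
        spin_unlock(&block_rsv->lock);

        if (to_free)
                btrfs_space_info_free_bytes_may_use(fs_info, space_info, to_free);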
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  4. 14 Sep, 2023 3 commits
    • btrfs: fix race between reading a directory and adding entries to it · 8e7f82de
      Filipe Manana authored
      When opening a directory (opendir(3)) or rewinding it (rewinddir(3)), we
      are not holding the directory's inode locked, and this can result in later
      attempting to add two entries to the directory with the same index number,
      resulting in a transaction abort, with -EEXIST (-17), when inserting the
      second delayed dir index. This results in a trace like the following:
      
        Sep 11 22:34:59 myhostname kernel: BTRFS error (device dm-3): err add delayed dir index item(name: cockroach-stderr.log) into the insertion tree of the delayed node(root id: 5, inode id: 4539217, errno: -17)
        Sep 11 22:34:59 myhostname kernel: ------------[ cut here ]------------
        Sep 11 22:34:59 myhostname kernel: kernel BUG at fs/btrfs/delayed-inode.c:1504!
        Sep 11 22:34:59 myhostname kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
        Sep 11 22:34:59 myhostname kernel: CPU: 0 PID: 7159 Comm: cockroach Not tainted 6.4.15-200.fc38.x86_64 #1
        Sep 11 22:34:59 myhostname kernel: Hardware name: ASUS ESC500 G3/P9D WS, BIOS 2402 06/27/2018
        Sep 11 22:34:59 myhostname kernel: RIP: 0010:btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel: Code: eb dd 48 (...)
        Sep 11 22:34:59 myhostname kernel: RSP: 0000:ffffa9980e0fbb28 EFLAGS: 00010282
        Sep 11 22:34:59 myhostname kernel: RAX: 0000000000000000 RBX: ffff8b10b8f4a3c0 RCX: 0000000000000000
        Sep 11 22:34:59 myhostname kernel: RDX: 0000000000000000 RSI: ffff8b177ec21540 RDI: ffff8b177ec21540
        Sep 11 22:34:59 myhostname kernel: RBP: ffff8b110cf80888 R08: 0000000000000000 R09: ffffa9980e0fb938
        Sep 11 22:34:59 myhostname kernel: R10: 0000000000000003 R11: ffffffff86146508 R12: 0000000000000014
        Sep 11 22:34:59 myhostname kernel: R13: ffff8b1131ae5b40 R14: ffff8b10b8f4a418 R15: 00000000ffffffef
        Sep 11 22:34:59 myhostname kernel: FS:  00007fb14a7fe6c0(0000) GS:ffff8b177ec00000(0000) knlGS:0000000000000000
        Sep 11 22:34:59 myhostname kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        Sep 11 22:34:59 myhostname kernel: CR2: 000000c00143d000 CR3: 00000001b3b4e002 CR4: 00000000001706f0
        Sep 11 22:34:59 myhostname kernel: Call Trace:
        Sep 11 22:34:59 myhostname kernel:  <TASK>
        Sep 11 22:34:59 myhostname kernel:  ? die+0x36/0x90
        Sep 11 22:34:59 myhostname kernel:  ? do_trap+0xda/0x100
        Sep 11 22:34:59 myhostname kernel:  ? btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel:  ? do_error_trap+0x6a/0x90
        Sep 11 22:34:59 myhostname kernel:  ? btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel:  ? exc_invalid_op+0x50/0x70
        Sep 11 22:34:59 myhostname kernel:  ? btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel:  ? asm_exc_invalid_op+0x1a/0x20
        Sep 11 22:34:59 myhostname kernel:  ? btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel:  ? btrfs_insert_delayed_dir_index+0x1da/0x260
        Sep 11 22:34:59 myhostname kernel:  btrfs_insert_dir_item+0x200/0x280
        Sep 11 22:34:59 myhostname kernel:  btrfs_add_link+0xab/0x4f0
        Sep 11 22:34:59 myhostname kernel:  ? ktime_get_real_ts64+0x47/0xe0
        Sep 11 22:34:59 myhostname kernel:  btrfs_create_new_inode+0x7cd/0xa80
        Sep 11 22:34:59 myhostname kernel:  btrfs_symlink+0x190/0x4d0
        Sep 11 22:34:59 myhostname kernel:  ? schedule+0x5e/0xd0
        Sep 11 22:34:59 myhostname kernel:  ? __d_lookup+0x7e/0xc0
        Sep 11 22:34:59 myhostname kernel:  vfs_symlink+0x148/0x1e0
        Sep 11 22:34:59 myhostname kernel:  do_symlinkat+0x130/0x140
        Sep 11 22:34:59 myhostname kernel:  __x64_sys_symlinkat+0x3d/0x50
        Sep 11 22:34:59 myhostname kernel:  do_syscall_64+0x5d/0x90
        Sep 11 22:34:59 myhostname kernel:  ? syscall_exit_to_user_mode+0x2b/0x40
        Sep 11 22:34:59 myhostname kernel:  ? do_syscall_64+0x6c/0x90
        Sep 11 22:34:59 myhostname kernel:  entry_SYSCALL_64_after_hwframe+0x72/0xdc
      
      The race leading to the problem happens like this:
      
      1) Directory inode X is loaded into memory, its ->index_cnt field is
         initialized to (u64)-1 (at btrfs_alloc_inode());
      
      2) Task A is adding a new file to directory X, holding its vfs inode lock,
         and calls btrfs_set_inode_index() to get an index number for the entry.
      
         Because the inode's index_cnt field is set to (u64)-1 it calls
         btrfs_inode_delayed_dir_index_count() which fails because no dir index
         entries were added yet to the delayed inode and then it calls
      btrfs_set_inode_index_count(). This function finds the last dir index
         key and then sets index_cnt to that index value + 1. It found that the
         last index key has an offset of 100. However before it assigns a value
         of 101 to index_cnt...
      
      3) Task B calls opendir(3), ending up at btrfs_opendir(), where the VFS
         lock for inode X is not taken, so it calls btrfs_get_dir_last_index()
         and sees index_cnt still with a value of (u64)-1. Because of that it
         calls btrfs_inode_delayed_dir_index_count() which fails since no dir
         index entries were added to the delayed inode yet, and then it also
         calls btrfs_set_inode_index_count(). This also finds that the last
         index key has an offset of 100, and before it assigns the value 101
         to the index_cnt field of inode X...
      
      4) Task A assigns a value of 101 to index_cnt. And then the code flow
         goes to btrfs_set_inode_index() where it increments index_cnt from
         101 to 102. Task A then creates a delayed dir index entry with a
         sequence number of 101 and adds it to the delayed inode;
      
      5) Task B assigns 101 to the index_cnt field of inode X;
      
      6) At some later point when someone tries to add a new entry to the
         directory, btrfs_set_inode_index() will return 101 again and shortly
         after an attempt to add another delayed dir index key with index
         number 101 will fail with -EEXIST resulting in a transaction abort.
      
      Fix this by locking the inode at btrfs_get_dir_last_index(), which is
      only used when opening a directory or attempting to lseek on it.
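
      A sketch of the serialized lookup (helper and field names follow the
      btrfs sources and the description above; the final function may differ
      in structure):

        int btrfs_get_dir_last_index(struct btrfs_inode *dir, u64 *index)
        {
                int ret = 0;

                btrfs_inode_lock(dir, 0);
                if (dir->index_cnt == (u64)-1) {
                        ret = btrfs_inode_delayed_dir_index_count(dir);
                        if (ret) {
                                ret = btrfs_set_inode_index_count(dir);
                                if (ret)
                                        goto out;
                        }
                }
                /* index_cnt is the index of the next new entry, so step back one. */
                *index = dir->index_cnt - 1;
        out:
                btrfs_inode_unlock(dir, 0);
                return ret;
        }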
      Reported-by: ken <ken@bllue.org>
      Link: https://lore.kernel.org/linux-btrfs/CAE6xmH+Lp=Q=E61bU+v9eWX8gYfLvu6jLYxjxjFpo3zHVPR0EQ@mail.gmail.com/
      Reported-by: syzbot+d13490c82ad5353c779d@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-btrfs/00000000000036e1290603e097e0@google.com/
      Fixes: 9b378f6a ("btrfs: fix infinite directory reads")
      CC: stable@vger.kernel.org # 6.5+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: refresh dir last index during a rewinddir(3) call · e60aa5da
      Filipe Manana authored
      When opening a directory we find what's the index of its last entry and
      then store it in the directory's file handle private data (struct
      btrfs_file_private::last_index), so that in the case new directory entries
      are added to a directory after an opendir(3) call we don't end up in an
      infinite loop (see commit 9b378f6a ("btrfs: fix infinite directory
      reads")) when calling readdir(3).
      
      However once rewinddir(3) is called, POSIX states [1] that any new
      directory entries added after the previous opendir(3) call, must be
      returned by subsequent calls to readdir(3):
      
        "The rewinddir() function shall reset the position of the directory
         stream to which dirp refers to the beginning of the directory.
         It shall also cause the directory stream to refer to the current
         state of the corresponding directory, as a call to opendir() would
         have done."
      
      We currently don't refresh the last_index field of the struct
      btrfs_file_private associated to the directory, so after a rewinddir(3)
      we are not returning any new entries added after the opendir(3) call.
      
      Fix this by finding the current last index of the directory when llseek
      is called against the directory.
      
      This can be reproduced by the following C program provided by Ian Johnson:
      
         #include <dirent.h>
         #include <stdio.h>
      
         int main(void) {
           DIR *dir = opendir("test");
      
           FILE *file;
           file = fopen("test/1", "w");
           fwrite("1", 1, 1, file);
           fclose(file);
      
           file = fopen("test/2", "w");
           fwrite("2", 1, 1, file);
           fclose(file);
      
           rewinddir(dir);
      
           struct dirent *entry;
           while ((entry = readdir(dir))) {
              printf("%s\n", entry->d_name);
           }
           closedir(dir);
           return 0;
         }
      Reported-by: Ian Johnson <ian@ianjohnson.dev>
      Link: https://lore.kernel.org/linux-btrfs/YR1P0S.NGASEG570GJ8@ianjohnson.dev/
      Fixes: 9b378f6a ("btrfs: fix infinite directory reads")
      CC: stable@vger.kernel.org # 6.5+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: set last dir index to the current last index when opening dir · 35795036
      Filipe Manana authored
      When opening a directory for reading it, we set the last index where we
      stop iteration to the value in struct btrfs_inode::index_cnt. That value
      does not match the index of the most recently added directory entry but
      it's instead the index number that will be assigned to the next
      directory entry.
      
      This means that if after the call to opendir(3) new directory entries are
      added, a readdir(3) call will return the first new directory entry. This
      is fine because POSIX says the following [1]:
      
        "If a file is removed from or added to the directory after the most
         recent call to opendir() or rewinddir(), whether a subsequent call to
         readdir() returns an entry for that file is unspecified."
      
      For example for the test script from commit 9b378f6a ("btrfs: fix
      infinite directory reads"), where we have 2000 files in a directory, ext4
      doesn't return any new directory entry after opendir(3), while xfs returns
      the first 13 new directory entries added after the opendir(3) call.
      
      If we move to a shorter example with an empty directory when opendir(3) is
      called, and 2 files added to the directory after the opendir(3) call, then
      readdir(3) on btrfs will return the first file, ext4 and xfs return the 2
      files (but in a different order). A test program for this, reported by
      Ian Johnson, is the following:
      
         #include <dirent.h>
         #include <stdio.h>
      
         int main(void) {
           DIR *dir = opendir("test");
      
           FILE *file;
           file = fopen("test/1", "w");
           fwrite("1", 1, 1, file);
           fclose(file);
      
           file = fopen("test/2", "w");
           fwrite("2", 1, 1, file);
           fclose(file);
      
           struct dirent *entry;
           while ((entry = readdir(dir))) {
              printf("%s\n", entry->d_name);
           }
           closedir(dir);
           return 0;
         }
      
      To make this less odd, change the behaviour to never return new entries
      that were added after the opendir(3) call. This is done by setting the
      last_index field of the struct btrfs_file_private attached to the
      directory's file handle with a value matching btrfs_inode::index_cnt
      minus 1, since that value always matches the index of the next new
      directory entry and not the index of the most recently added entry.
      
      [1] https://pubs.opengroup.org/onlinepubs/007904875/functions/readdir_r.html
      
      Link: https://lore.kernel.org/linux-btrfs/YR1P0S.NGASEG570GJ8@ianjohnson.dev/
      CC: stable@vger.kernel.org # 6.5+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  5. 13 Sep, 2023 3 commits
    • btrfs: don't clear uptodate on write errors · b595d259
      Josef Bacik authored
      We have been consistently seeing hangs with generic/648 in our subpage
      GitHub CI setup.  This is a classic deadlock: we call btrfs_read_folio()
      on a folio, which requires holding the folio lock, then find an ordered
      extent that overlaps that range and call btrfs_start_ordered_extent(),
      which then tries to write out
      the dirty page, which requires taking the folio lock and then we
      deadlock.
      
      The hang happens because we're writing to range [1271750656, 1271767040),
      page index [77621, 77622], and page 77621 is !Uptodate.  It is also Dirty,
      so we call btrfs_read_folio() for 77621, which does
      btrfs_lock_and_flush_ordered_range() for that range, and we find an ordered
      extent which is [1271644160, 1271746560), page index [77615, 77621].
      The page indexes overlap, but the actual bytes don't overlap.  We're
      holding the page lock for 77621, then call
      btrfs_lock_and_flush_ordered_range() which tries to flush the dirty
      page, and tries to lock 77621 again and then we deadlock.
      
      The byte ranges do not overlap, but with subpage support if we clear
      uptodate on any portion of the page we mark the entire thing as not
      uptodate.
      
      We have been clearing page uptodate on write errors, but no other file
      system does this, and it is in fact incorrect.  This doesn't hurt us in the
      !subpage case because we can't end up with overlapped ranges that don't
      also overlap on the page.
      
      Fix this by not clearing uptodate when we have a write error.  The only
      thing we should be doing in this case is setting the mapping error and
      carrying on.  This makes it so we would no longer call
      btrfs_read_folio() on the page as it's uptodate and eliminates the
      deadlock.
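
      A sketch of the changed end-write error handling (helper names are from
      the btrfs/subpage code, placement is illustrative):

        if (!uptodate) {
                /* Record the error for future fsync()/msync() callers... */
                mapping_set_error(page->mapping, -EIO);
                /*
                 * ...but do not clear the uptodate bit anymore (no
                 * btrfs_page_clear_uptodate()/ClearPageUptodate() call here),
                 * so a later btrfs_read_folio() is unnecessary and the
                 * deadlock goes away.
                 */
        }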
      
      With this patch we're now able to make it through a full fstests run on
      our subpage blocksize VMs.
      
      Note for stable backports: this probably goes beyond 6.1 but the code
      has been cleaned up and clearing the uptodate bit must be verified on
      each version independently.
      
      CC: stable@vger.kernel.org # 6.1+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: file_remove_privs needs an exclusive lock in direct io write · 9af86694
      Bernd Schubert authored
      Miklos noticed that file_remove_privs() might call into notify_change(),
      which requires holding an exclusive lock. The problem exists in FUSE and
      btrfs. We can fix it without any additional helpers from the VFS: in
      case the privileges need to be dropped, change the lock type to
      exclusive and redo the loop.
      
      Fixes: e9adabb9 ("btrfs: use shared lock for direct writes within EOF")
      CC: Miklos Szeredi <miklos@szeredi.hu>
      CC: stable@vger.kernel.org # 5.15+
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Bernd Schubert <bschubert@ddn.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: convert btrfs_read_merkle_tree_page() to use a folio · 06ed0935
      Matthew Wilcox (Oracle) authored
      Remove a number of hidden calls to compound_head() by using a folio
      throughout.  Also follow core kernel coding style by adding the folio to
      the page cache immediately after allocation instead of doing the read
      first, then adding it to the page cache.  This ordering makes subsequent
      readers block waiting for the first reader instead of duplicating the
      work only to throw it away when they find out they lost the race.
      Reviewed-by: Boris Burkov <boris@bur.io>
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: David Sterba <dsterba@suse.com>
  6. 08 Sep, 2023 10 commits
    • MAINTAINERS: remove links to obsolete btrfs.wiki.kernel.org · 5facccc9
      Bhaskar Chowdhury authored
      The wiki has been archived and is not updated anymore. Remove or replace
      the links in files that contain it (MAINTAINERS, Kconfig, docs).
      Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: assert delayed node locked when removing delayed item · a57c2d4e
      Filipe Manana authored
      When removing a delayed item, or releasing it (which will remove it as
      well), we will modify one of the delayed node's rbtrees and its item
      counter if the delayed item is in one of the rbtrees. This requires
      having the delayed
      node's mutex locked, otherwise we will race with other tasks modifying
      the rbtrees and the counter.
      
      This is motivated by a previous version of another patch actually calling
      btrfs_release_delayed_item() after unlocking the delayed node's mutex and
      against a delayed item that is in a rbtree.
      
      So assert at __btrfs_remove_delayed_item() that the delayed node's mutex
      is locked.
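
      A minimal sketch of the assertion (the delayed node pointer comes from
      the item being removed; exact placement in __btrfs_remove_delayed_item()
      is illustrative):

        /* Callers must hold the delayed node's mutex while touching its rbtrees. */
        lockdep_assert_held(&delayed_node->mutex);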
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove BUG() after failure to insert delayed dir index item · 2c58c393
      Filipe Manana authored
      Instead of calling BUG() when we fail to insert a delayed dir index item
      into the delayed node's tree, we can just release all the resources we
      have allocated/acquired before and return the error to the caller. This is
      fine because all existing call chains undo anything they have done before
      calling btrfs_insert_delayed_dir_index() or BUG_ON (when creating pending
      snapshots in the transaction commit path).
      
      So remove the BUG() call and do proper error handling.
      
      This relates to a syzbot report linked below, but does not fix it because
      it only prevents hitting a BUG(), it does not fix the issue where somehow
      we attempt to use the same index number twice for different index items.
      
      Link: https://lore.kernel.org/linux-btrfs/00000000000036e1290603e097e0@google.com/
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: improve error message after failure to add delayed dir index item · 91bfe310
      Filipe Manana authored
      If we fail to add a delayed dir index item because there's already another
      item with the same index number, we print an error message (and then BUG).
      However that message isn't very helpful for debugging because we don't
      know what the index number is, nor the values of the index counters in
      the inode and its delayed inode (index_cnt fields of struct btrfs_inode
      and struct btrfs_delayed_node).
      
      So update the error message to include the index number and counters.
      
      We actually had a recent case where this issue was hit by a syzbot report
      (see the link below).
      
      Link: https://lore.kernel.org/linux-btrfs/00000000000036e1290603e097e0@google.com/
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix a compilation error if DEBUG is defined in btree_dirty_folio · 5e0e8799
      Qu Wenruo authored
      [BUG]
      After commit 72a69cd0 ("btrfs: subpage: pack all subpage bitmaps
      into a larger bitmap"), the DEBUG section of btree_dirty_folio() would
      no longer compile.
      
      [CAUSE]
      If DEBUG is defined, we would do extra checks for btree_dirty_folio(),
      mostly to make sure the range we marked dirty has an extent buffer and
      that extent buffer is dirty.
      
      For subpage, we need to iterate through all the extent buffers covered
      by that page range, and make sure they all match the criteria.
      
      However commit 72a69cd0 ("btrfs: subpage: pack all subpage bitmaps
      into a larger bitmap") changed how we store the bitmaps: we pack all the
      16-bit bitmaps into a larger bitmap, which saves some space.
      
      This means we no longer have btrfs_subpage::dirty_bitmap; instead the
      dirty bitmap starts at btrfs_subpage_info::dirty_offset and has a length
      of btrfs_subpage_info::bitmap_nr_bits.
      
      [FIX]
      Although I'm not sure if it still makes sense to maintain such code, at
      least let it compile.
      
      This patch would let us test the bits one by one through the bitmaps.
      
      CC: stable@vger.kernel.org # 6.1+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: check for BTRFS_FS_ERROR in pending ordered assert · 4ca8e03c
      Josef Bacik authored
      If we do fast tree logging we increment a counter on the current
      transaction for every ordered extent we need to wait for.  This means we
      expect the transaction to still be there when we clear pending on the
      ordered extent.  However if we happen to abort the transaction and clean
      it up, there could be no running transaction, and thus we'll trip the
      "ASSERT(trans)" check.  This is obviously incorrect, and the code
      properly deals with the case that the transaction doesn't exist.  Fix
      this ASSERT() to only fire if there's no trans and we don't have
      BTRFS_FS_ERROR() set on the file system.
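
      In code form the relaxed assertion is simply (a sketch, surrounding
      context omitted):

        /* A missing transaction is only acceptable if the fs is in error state. */
        ASSERT(trans || BTRFS_FS_ERROR(fs_info));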
      
      CC: stable@vger.kernel.org # 4.14+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix lockdep splat and potential deadlock after failure running delayed items · e110f891
      Filipe Manana authored
      When running delayed items we are holding a delayed node's mutex and then
      we will attempt to modify a subvolume btree to insert/update/delete the
      delayed items. However if we have an error during the insertions, for example,
      btrfs_insert_delayed_items() may return with a path that has locked extent
      buffers (a leaf at the very least), and then we attempt to release the
      delayed node at __btrfs_run_delayed_items(), which requires taking the
      delayed node's mutex, causing an ABBA type of deadlock. This was reported
      by syzbot and the lockdep splat is the following:
      
        WARNING: possible circular locking dependency detected
        6.5.0-rc7-syzkaller-00024-g93f5de5f #0 Not tainted
        ------------------------------------------------------
        syz-executor.2/13257 is trying to acquire lock:
        ffff88801835c0c0 (&delayed_node->mutex){+.+.}-{3:3}, at: __btrfs_release_delayed_node+0x9a/0xaa0 fs/btrfs/delayed-inode.c:256
      
        but task is already holding lock:
        ffff88802a5ab8e8 (btrfs-tree-00){++++}-{3:3}, at: __btrfs_tree_lock+0x3c/0x2a0 fs/btrfs/locking.c:198
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #1 (btrfs-tree-00){++++}-{3:3}:
               __lock_release kernel/locking/lockdep.c:5475 [inline]
               lock_release+0x36f/0x9d0 kernel/locking/lockdep.c:5781
               up_write+0x79/0x580 kernel/locking/rwsem.c:1625
               btrfs_tree_unlock_rw fs/btrfs/locking.h:189 [inline]
               btrfs_unlock_up_safe+0x179/0x3b0 fs/btrfs/locking.c:239
               search_leaf fs/btrfs/ctree.c:1986 [inline]
               btrfs_search_slot+0x2511/0x2f80 fs/btrfs/ctree.c:2230
               btrfs_insert_empty_items+0x9c/0x180 fs/btrfs/ctree.c:4376
               btrfs_insert_delayed_item fs/btrfs/delayed-inode.c:746 [inline]
               btrfs_insert_delayed_items fs/btrfs/delayed-inode.c:824 [inline]
               __btrfs_commit_inode_delayed_items+0xd24/0x2410 fs/btrfs/delayed-inode.c:1111
               __btrfs_run_delayed_items+0x1db/0x430 fs/btrfs/delayed-inode.c:1153
               flush_space+0x269/0xe70 fs/btrfs/space-info.c:723
               btrfs_async_reclaim_metadata_space+0x106/0x350 fs/btrfs/space-info.c:1078
               process_one_work+0x92c/0x12c0 kernel/workqueue.c:2600
               worker_thread+0xa63/0x1210 kernel/workqueue.c:2751
               kthread+0x2b8/0x350 kernel/kthread.c:389
               ret_from_fork+0x2e/0x60 arch/x86/kernel/process.c:145
               ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
      
        -> #0 (&delayed_node->mutex){+.+.}-{3:3}:
               check_prev_add kernel/locking/lockdep.c:3142 [inline]
               check_prevs_add kernel/locking/lockdep.c:3261 [inline]
               validate_chain kernel/locking/lockdep.c:3876 [inline]
               __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5144
               lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5761
               __mutex_lock_common+0x1d8/0x2530 kernel/locking/mutex.c:603
               __mutex_lock kernel/locking/mutex.c:747 [inline]
               mutex_lock_nested+0x1b/0x20 kernel/locking/mutex.c:799
               __btrfs_release_delayed_node+0x9a/0xaa0 fs/btrfs/delayed-inode.c:256
               btrfs_release_delayed_node fs/btrfs/delayed-inode.c:281 [inline]
               __btrfs_run_delayed_items+0x2b5/0x430 fs/btrfs/delayed-inode.c:1156
               btrfs_commit_transaction+0x859/0x2ff0 fs/btrfs/transaction.c:2276
               btrfs_sync_file+0xf56/0x1330 fs/btrfs/file.c:1988
               vfs_fsync_range fs/sync.c:188 [inline]
               vfs_fsync fs/sync.c:202 [inline]
               do_fsync fs/sync.c:212 [inline]
               __do_sys_fsync fs/sync.c:220 [inline]
               __se_sys_fsync fs/sync.c:218 [inline]
               __x64_sys_fsync+0x196/0x1e0 fs/sync.c:218
               do_syscall_x64 arch/x86/entry/common.c:50 [inline]
               do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
               entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
        other info that might help us debug this:
      
         Possible unsafe locking scenario:
      
               CPU0                    CPU1
               ----                    ----
          lock(btrfs-tree-00);
                                       lock(&delayed_node->mutex);
                                       lock(btrfs-tree-00);
          lock(&delayed_node->mutex);
      
         *** DEADLOCK ***
      
        3 locks held by syz-executor.2/13257:
         #0: ffff88802c1ee370 (btrfs_trans_num_writers){++++}-{0:0}, at: spin_unlock include/linux/spinlock.h:391 [inline]
         #0: ffff88802c1ee370 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0xb87/0xe00 fs/btrfs/transaction.c:287
         #1: ffff88802c1ee398 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0xbb2/0xe00 fs/btrfs/transaction.c:288
         #2: ffff88802a5ab8e8 (btrfs-tree-00){++++}-{3:3}, at: __btrfs_tree_lock+0x3c/0x2a0 fs/btrfs/locking.c:198
      
        stack backtrace:
        CPU: 0 PID: 13257 Comm: syz-executor.2 Not tainted 6.5.0-rc7-syzkaller-00024-g93f5de5f #0
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
        Call Trace:
         <TASK>
         __dump_stack lib/dump_stack.c:88 [inline]
         dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
         check_noncircular+0x375/0x4a0 kernel/locking/lockdep.c:2195
         check_prev_add kernel/locking/lockdep.c:3142 [inline]
         check_prevs_add kernel/locking/lockdep.c:3261 [inline]
         validate_chain kernel/locking/lockdep.c:3876 [inline]
         __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5144
         lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5761
         __mutex_lock_common+0x1d8/0x2530 kernel/locking/mutex.c:603
         __mutex_lock kernel/locking/mutex.c:747 [inline]
         mutex_lock_nested+0x1b/0x20 kernel/locking/mutex.c:799
         __btrfs_release_delayed_node+0x9a/0xaa0 fs/btrfs/delayed-inode.c:256
         btrfs_release_delayed_node fs/btrfs/delayed-inode.c:281 [inline]
         __btrfs_run_delayed_items+0x2b5/0x430 fs/btrfs/delayed-inode.c:1156
         btrfs_commit_transaction+0x859/0x2ff0 fs/btrfs/transaction.c:2276
         btrfs_sync_file+0xf56/0x1330 fs/btrfs/file.c:1988
         vfs_fsync_range fs/sync.c:188 [inline]
         vfs_fsync fs/sync.c:202 [inline]
         do_fsync fs/sync.c:212 [inline]
         __do_sys_fsync fs/sync.c:220 [inline]
         __se_sys_fsync fs/sync.c:218 [inline]
         __x64_sys_fsync+0x196/0x1e0 fs/sync.c:218
         do_syscall_x64 arch/x86/entry/common.c:50 [inline]
         do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
         entry_SYSCALL_64_after_hwframe+0x63/0xcd
        RIP: 0033:0x7f3ad047cae9
        Code: 28 00 00 00 75 (...)
        RSP: 002b:00007f3ad12510c8 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
        RAX: ffffffffffffffda RBX: 00007f3ad059bf80 RCX: 00007f3ad047cae9
        RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000005
        RBP: 00007f3ad04c847a R08: 0000000000000000 R09: 0000000000000000
        R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
        R13: 000000000000000b R14: 00007f3ad059bf80 R15: 00007ffe56af92f8
         </TASK>
        ------------[ cut here ]------------
      
      Fix this by releasing the path before releasing the delayed node in the
      error path at __btrfs_run_delayed_items().
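
      A sketch of the reordering in the error path of
      __btrfs_run_delayed_items() (function names follow the splat above, the
      surrounding loop is elided):

        ret = __btrfs_commit_inode_delayed_items(trans, path, curr_node);
        if (ret) {
                btrfs_abort_transaction(trans, ret);
                /* Drop the tree locks held by the path first... */
                btrfs_release_path(path);
                /* ...then it is safe to take delayed_node->mutex again. */
                btrfs_release_delayed_node(curr_node);
                break;
        }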
      
      Reported-by: syzbot+a379155f07c134ea9879@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-btrfs/000000000000abba27060403b5bd@google.com/
      CC: stable@vger.kernel.org # 4.14+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not block starts waiting on previous transaction commit · 77d20c68
      Josef Bacik authored
      Internally I got a report of very long stalls on normal operations like
      creating a new file when auto relocation was running.  The reporter used
      the 'bpf offcputime' tracer to show that we would get stuck in
      start_transaction for 5 to 30 seconds, and were always being woken up by
      the transaction commit.
      
      Using my timing-everything script, which times how long a function takes
      and what percentage of that total time is taken up by its children, I
      saw several traces like this
      
      1083 took 32812902424 ns
              29929002926 ns 91.2110% wait_for_commit_duration
              25568 ns 7.7920e-05% commit_fs_roots_duration
              1007751 ns 0.00307% commit_cowonly_roots_duration
              446855602 ns 1.36182% btrfs_run_delayed_refs_duration
              271980 ns 0.00082% btrfs_run_delayed_items_duration
              2008 ns 6.1195e-06% btrfs_apply_pending_changes_duration
              9656 ns 2.9427e-05% switch_commit_roots_duration
              1598 ns 4.8700e-06% btrfs_commit_device_sizes_duration
              4314 ns 1.3147e-05% btrfs_free_log_root_tree_duration
      
      Here I was only tracing functions that happen where we are between
      START_COMMIT and UNBLOCKED in order to see what would be keeping us
      blocked for so long.  The wait_for_commit() we do is where we wait for a
      previous transaction that hasn't completed its commit.  This can
      include all of the unpin work and other cleanups, which tends to be the
      longest part of our transaction commit.
      
      There is no reason we should be blocking new things from entering the
      transaction at this point, it just adds to random latency spikes for no
      reason.
      
      Fix this by adding a PREP stage.  This allows us to properly deal with
      multiple committers coming in at the same time, we retain the behavior
      that the winner waits on the previous transaction and the losers all
      wait for this transaction commit to occur.  Nothing else is blocked
      during the PREP stage, and then once the wait is complete we switch to
      COMMIT_START and all of the same behavior as before is maintained.
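
      A sketch of the resulting transaction states (names per this series, the
      surrounding definitions are abbreviated):

        enum btrfs_trans_state {
                TRANS_STATE_RUNNING,
                TRANS_STATE_COMMIT_PREP,        /* wait on the previous commit, don't block new starts */
                TRANS_STATE_COMMIT_START,       /* from here on new transaction starts are blocked */
                TRANS_STATE_COMMIT_DOING,
                TRANS_STATE_UNBLOCKED,
                TRANS_STATE_SUPER_COMMITTED,
                TRANS_STATE_COMPLETED,
                TRANS_STATE_MAX,
        };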
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: release path before inode lookup during the ino lookup ioctl · ee34a82e
      Filipe Manana authored
      During the ino lookup ioctl we can end up calling btrfs_iget() to get an
      inode reference while we are holding a lock on a root's btree. If
      btrfs_iget() needs to look up the inode from the root's btree, because
      it's not
      currently loaded in memory, then it will need to lock another or the
      same path in the same root btree. This may result in a deadlock and
      trigger the following lockdep splat:
      
        WARNING: possible circular locking dependency detected
        6.5.0-rc7-syzkaller-00004-gf7757129 #0 Not tainted
        ------------------------------------------------------
        syz-executor277/5012 is trying to acquire lock:
        ffff88802df41710 (btrfs-tree-01){++++}-{3:3}, at: __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
      
        but task is already holding lock:
        ffff88802df418e8 (btrfs-tree-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #1 (btrfs-tree-00){++++}-{3:3}:
               down_read_nested+0x49/0x2f0 kernel/locking/rwsem.c:1645
               __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
               btrfs_search_slot+0x13a4/0x2f80 fs/btrfs/ctree.c:2302
               btrfs_init_root_free_objectid+0x148/0x320 fs/btrfs/disk-io.c:4955
               btrfs_init_fs_root fs/btrfs/disk-io.c:1128 [inline]
               btrfs_get_root_ref+0x5ae/0xae0 fs/btrfs/disk-io.c:1338
               btrfs_get_fs_root fs/btrfs/disk-io.c:1390 [inline]
               open_ctree+0x29c8/0x3030 fs/btrfs/disk-io.c:3494
               btrfs_fill_super+0x1c7/0x2f0 fs/btrfs/super.c:1154
               btrfs_mount_root+0x7e0/0x910 fs/btrfs/super.c:1519
               legacy_get_tree+0xef/0x190 fs/fs_context.c:611
               vfs_get_tree+0x8c/0x270 fs/super.c:1519
               fc_mount fs/namespace.c:1112 [inline]
               vfs_kern_mount+0xbc/0x150 fs/namespace.c:1142
               btrfs_mount+0x39f/0xb50 fs/btrfs/super.c:1579
               legacy_get_tree+0xef/0x190 fs/fs_context.c:611
               vfs_get_tree+0x8c/0x270 fs/super.c:1519
               do_new_mount+0x28f/0xae0 fs/namespace.c:3335
               do_mount fs/namespace.c:3675 [inline]
               __do_sys_mount fs/namespace.c:3884 [inline]
               __se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3861
               do_syscall_x64 arch/x86/entry/common.c:50 [inline]
               do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
               entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
        -> #0 (btrfs-tree-01){++++}-{3:3}:
               check_prev_add kernel/locking/lockdep.c:3142 [inline]
               check_prevs_add kernel/locking/lockdep.c:3261 [inline]
               validate_chain kernel/locking/lockdep.c:3876 [inline]
               __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5144
               lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5761
               down_read_nested+0x49/0x2f0 kernel/locking/rwsem.c:1645
               __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
               btrfs_tree_read_lock fs/btrfs/locking.c:142 [inline]
               btrfs_read_lock_root_node+0x292/0x3c0 fs/btrfs/locking.c:281
               btrfs_search_slot_get_root fs/btrfs/ctree.c:1832 [inline]
               btrfs_search_slot+0x4ff/0x2f80 fs/btrfs/ctree.c:2154
               btrfs_lookup_inode+0xdc/0x480 fs/btrfs/inode-item.c:412
               btrfs_read_locked_inode fs/btrfs/inode.c:3892 [inline]
               btrfs_iget_path+0x2d9/0x1520 fs/btrfs/inode.c:5716
               btrfs_search_path_in_tree_user fs/btrfs/ioctl.c:1961 [inline]
               btrfs_ioctl_ino_lookup_user+0x77a/0xf50 fs/btrfs/ioctl.c:2105
               btrfs_ioctl+0xb0b/0xd40 fs/btrfs/ioctl.c:4683
               vfs_ioctl fs/ioctl.c:51 [inline]
               __do_sys_ioctl fs/ioctl.c:870 [inline]
               __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:856
               do_syscall_x64 arch/x86/entry/common.c:50 [inline]
               do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
               entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
        other info that might help us debug this:
      
         Possible unsafe locking scenario:
      
               CPU0                    CPU1
               ----                    ----
          rlock(btrfs-tree-00);
                                       lock(btrfs-tree-01);
                                       lock(btrfs-tree-00);
          rlock(btrfs-tree-01);
      
         *** DEADLOCK ***
      
        1 lock held by syz-executor277/5012:
         #0: ffff88802df418e8 (btrfs-tree-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
      
        stack backtrace:
        CPU: 1 PID: 5012 Comm: syz-executor277 Not tainted 6.5.0-rc7-syzkaller-00004-gf7757129 #0
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
        Call Trace:
         <TASK>
         __dump_stack lib/dump_stack.c:88 [inline]
         dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
         check_noncircular+0x375/0x4a0 kernel/locking/lockdep.c:2195
         check_prev_add kernel/locking/lockdep.c:3142 [inline]
         check_prevs_add kernel/locking/lockdep.c:3261 [inline]
         validate_chain kernel/locking/lockdep.c:3876 [inline]
         __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5144
         lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5761
         down_read_nested+0x49/0x2f0 kernel/locking/rwsem.c:1645
         __btrfs_tree_read_lock+0x2f/0x220 fs/btrfs/locking.c:136
         btrfs_tree_read_lock fs/btrfs/locking.c:142 [inline]
         btrfs_read_lock_root_node+0x292/0x3c0 fs/btrfs/locking.c:281
         btrfs_search_slot_get_root fs/btrfs/ctree.c:1832 [inline]
         btrfs_search_slot+0x4ff/0x2f80 fs/btrfs/ctree.c:2154
         btrfs_lookup_inode+0xdc/0x480 fs/btrfs/inode-item.c:412
         btrfs_read_locked_inode fs/btrfs/inode.c:3892 [inline]
         btrfs_iget_path+0x2d9/0x1520 fs/btrfs/inode.c:5716
         btrfs_search_path_in_tree_user fs/btrfs/ioctl.c:1961 [inline]
         btrfs_ioctl_ino_lookup_user+0x77a/0xf50 fs/btrfs/ioctl.c:2105
         btrfs_ioctl+0xb0b/0xd40 fs/btrfs/ioctl.c:4683
         vfs_ioctl fs/ioctl.c:51 [inline]
         __do_sys_ioctl fs/ioctl.c:870 [inline]
         __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:856
         do_syscall_x64 arch/x86/entry/common.c:50 [inline]
         do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
         entry_SYSCALL_64_after_hwframe+0x63/0xcd
        RIP: 0033:0x7f0bec94ea39
      
      Fix this simply by releasing the path before calling btrfs_iget(), as at
      that point we don't need the path anymore.
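
      The shape of the change, as a simplified sketch (variable and label
      names are assumed, this is not the exact diff): in
      btrfs_search_path_in_tree_user() the path's tree locks are dropped
      before the inode lookup, which may need to lock the same btree again.

        /* sketch: drop the locks held by 'path' before the inode lookup */
        btrfs_release_path(path);

        temp_inode = btrfs_iget(sb, ino, root);
        if (IS_ERR(temp_inode)) {
                ret = PTR_ERR(temp_inode);
                goto out;
        }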
      
      Reported-by: syzbot+bf66ad948981797d2f1d@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-btrfs/00000000000045fa140603c4a969@google.com/
      Fixes: 23d0b79d ("btrfs: Add unprivileged version of ino_lookup ioctl")
      CC: stable@vger.kernel.org # 4.19+
      Reviewed-by: default avatarJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: default avatarFilipe Manana <fdmanana@suse.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      ee34a82e
    • Filipe Manana's avatar
      btrfs: fix race between finishing block group creation and its item update · 2d6cd791
      Filipe Manana authored
      Commit 675dfe12 ("btrfs: fix block group item corruption after
      inserting new block group") fixed one race that resulted in not persisting
      a block group's item when its "used" bytes field decreases to zero.
      However there's another race that can happen in a much shorter time window
      that results in the same problem. The following sequence of steps explains
      how it can happen:
      
      1) Task A creates a metadata block group X, its "used" and "commit_used"
         fields are initialized to 0;
      
      2) Two extents are allocated from block group X, so its "used" field is
         updated to 32K, and its "commit_used" field remains as 0;
      
      3) Transaction commit starts, by some task B, and it enters
         btrfs_start_dirty_block_groups(). There it tries to update the block
         group item for block group X, which currently has its "used" field with
         a value of 32K and its "commit_used" field with a value of 0. However
         that fails since the block group item was not yet inserted, so at
         update_block_group_item(), the btrfs_search_slot() call returns 1, and
         then we set 'ret' to -ENOENT. Before jumping to the label 'fail'...
      
      4) The block group item is inserted by task A, when for example
         btrfs_create_pending_block_groups() is called when releasing its
         transaction handle. This results in insert_block_group_item() inserting
         the block group item in the extent tree (or block group tree), with a
         "used" field having a value of 32K and setting "commit_used", in struct
         btrfs_block_group, to the same value (32K);
      
      5) Task B jumps to the 'fail' label and then resets the "commit_used"
         field to 0. At btrfs_start_dirty_block_groups(), because -ENOENT was
         returned from update_block_group_item(), we add the block group again
         to the list of dirty block groups, so that we will try again in the
         critical section of the transaction commit when calling
         btrfs_write_dirty_block_groups();
      
      6) Later the two extents from block group X are freed, so its "used" field
         becomes 0;
      
      7) If no more extents are allocated from block group X before we get into
         btrfs_write_dirty_block_groups(), then when we call
         update_block_group_item() again for block group X, we will not update
         the block group item to reflect that it has 0 bytes used, because the
         "used" and "commit_used" fields in struct btrfs_block_group have the
         same value, a value of 0.
      
         As a result after committing the transaction we have an empty block
         group with its block group item having a 32K value for its "used" field.
         This will trigger errors from fsck ("btrfs check" command) and after
         mounting again the fs, the cleaner kthread will not automatically delete
         the empty block group, since its "used" field is not 0. Possibly there
         are other issues due to this inconsistency.
      
         When this issue happens, the error reported by fsck is like this:
      
           [1/7] checking root items
           [2/7] checking extents
           block group [1104150528 1073741824] used 39796736 but extent items used 0
           ERROR: errors found in extent allocation tree or chunk allocation
           (...)
      
      So fix this by not resetting the "commit_used" field of a block group when
      we don't find the block group item at update_block_group_item().
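
      A sketch of what the fix amounts to in the error path of
      update_block_group_item() (variable names assumed): only restore the old
      "commit_used" value for real errors, not for the -ENOENT case described
      above, so a value set by a concurrent insert_block_group_item() is kept.

        fail:
                btrfs_release_path(path);
                /*
                 * -ENOENT means the item was not inserted yet; a concurrent
                 * insert_block_group_item() may have just set commit_used,
                 * so do not reset it in that case.
                 */
                if (ret < 0 && ret != -ENOENT) {
                        spin_lock(&cache->lock);
                        cache->commit_used = old_commit_used;
                        spin_unlock(&cache->lock);
                }
                return ret;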
      
      Fixes: 7248e0ce ("btrfs: skip update of block group item if used bytes are the same")
      CC: stable@vger.kernel.org # 6.2+
      Reviewed-by: default avatarQu Wenruo <wqu@suse.com>
      Signed-off-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      2d6cd791
  7. 22 Aug, 2023 1 commit
    • Naohiro Aota's avatar
      btrfs: zoned: skip splitting and logical rewriting on pre-alloc write · c02d35d8
      Naohiro Aota authored
      When doing a relocation, there is a chance that at the time of
      btrfs_reloc_clone_csums(), there is no checksum for the corresponding
      region.
      
      In this case, btrfs_finish_ordered_zoned()'s sum points to an invalid item
      and so the ordered_extent's logical is set to some invalid value. Then,
      btrfs_lookup_block_group() in btrfs_zone_finish_endio() fails to find a
      block group and hits an assert or a NULL pointer dereference as shown
      below.
      
      This can be reproduced by running btrfs/028 several times (e.g. 4 to 16
      times) with a null_blk setup. The device's zone size and capacity are set
      to 32 MB and the storage size is set to 5 GB on my setup.
      
          KASAN: null-ptr-deref in range [0x0000000000000088-0x000000000000008f]
          CPU: 6 PID: 3105720 Comm: kworker/u16:13 Tainted: G        W          6.5.0-rc6-kts+ #1
          Hardware name: Supermicro Super Server/X10SRL-F, BIOS 2.0 12/17/2015
          Workqueue: btrfs-endio-write btrfs_work_helper [btrfs]
          RIP: 0010:btrfs_zone_finish_endio.part.0+0x34/0x160 [btrfs]
          Code: 41 54 49 89 fc 55 48 89 f5 53 e8 57 7d fc ff 48 8d b8 88 00 00 00 48 89 c3 48 b8 00 00 00 00 00
          > 3c 02 00 0f 85 02 01 00 00 f6 83 88 00 00 00 01 0f 84 a8 00 00
          RSP: 0018:ffff88833cf87b08 EFLAGS: 00010206
          RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000000
          RDX: 0000000000000011 RSI: 0000000000000004 RDI: 0000000000000088
          RBP: 0000000000000002 R08: 0000000000000001 R09: ffffed102877b827
          R10: ffff888143bdc13b R11: ffff888125b1cbc0 R12: ffff888143bdc000
          R13: 0000000000007000 R14: ffff888125b1cba8 R15: 0000000000000000
          FS:  0000000000000000(0000) GS:ffff88881e500000(0000) knlGS:0000000000000000
          CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
          CR2: 00007f3ed85223d5 CR3: 00000001519b4005 CR4: 00000000001706e0
          Call Trace:
           <TASK>
           ? die_addr+0x3c/0xa0
           ? exc_general_protection+0x148/0x220
           ? asm_exc_general_protection+0x22/0x30
           ? btrfs_zone_finish_endio.part.0+0x34/0x160 [btrfs]
           ? btrfs_zone_finish_endio.part.0+0x19/0x160 [btrfs]
           btrfs_finish_one_ordered+0x7b8/0x1de0 [btrfs]
           ? rcu_is_watching+0x11/0xb0
           ? lock_release+0x47a/0x620
           ? btrfs_finish_ordered_zoned+0x59b/0x800 [btrfs]
           ? __pfx_btrfs_finish_one_ordered+0x10/0x10 [btrfs]
           ? btrfs_finish_ordered_zoned+0x358/0x800 [btrfs]
           ? __smp_call_single_queue+0x124/0x350
           ? rcu_is_watching+0x11/0xb0
           btrfs_work_helper+0x19f/0xc60 [btrfs]
           ? __pfx_try_to_wake_up+0x10/0x10
           ? _raw_spin_unlock_irq+0x24/0x50
           ? rcu_is_watching+0x11/0xb0
           process_one_work+0x8c1/0x1430
           ? __pfx_lock_acquire+0x10/0x10
           ? __pfx_process_one_work+0x10/0x10
           ? __pfx_do_raw_spin_lock+0x10/0x10
           ? _raw_spin_lock_irq+0x52/0x60
           worker_thread+0x100/0x12c0
           ? __kthread_parkme+0xc1/0x1f0
           ? __pfx_worker_thread+0x10/0x10
           kthread+0x2ea/0x3c0
           ? __pfx_kthread+0x10/0x10
           ret_from_fork+0x30/0x70
           ? __pfx_kthread+0x10/0x10
           ret_from_fork_asm+0x1b/0x30
           </TASK>
      
      On zoned mode, writing to a pre-allocated region means a data relocation
      write. Such a write always uses the regular WRITE command, so there is no
      need for splitting and rewriting the logical address. Thus, we can just
      skip the function in that case.
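
      A rough sketch of the early exit (its exact placement inside
      btrfs_finish_ordered_zoned() and the flag check used are assumptions
      here, not the verbatim patch):

        /*
         * Sketch: pre-allocated regions are relocation data writes issued
         * with a regular WRITE command at their final location, so skip the
         * split/rewrite step entirely.
         */
        if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags))
                return;

        /* ... otherwise look up the csums and rewrite the logical address ... */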
      
      Fixes: cbfce4c7 ("btrfs: optimize the logical to physical mapping for zoned writes")
      Signed-off-by: default avatarNaohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      c02d35d8
  8. 21 Aug, 2023 10 commits
    • Josef Bacik's avatar
      btrfs: tests: test invalid splitting when skipping pinned drop extent_map · 92e1229b
      Josef Bacik authored
      This reproduces the bug fixed by "btrfs: fix incorrect splitting in
      btrfs_drop_extent_map_range", we were improperly calculating the range
      for the split extent.  Add a test that exercises this scenario and
      validates that we get the correct resulting extent_maps in our tree.
      Reviewed-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      92e1229b
    • Josef Bacik's avatar
      btrfs: tests: add a test for btrfs_add_extent_mapping · f345dbdf
      Josef Bacik authored
      This helper is different from the normal add_extent_mapping in that it
      will stuff an em into a gap that exists between overlapping ems in the
      tree.  It appeared there was a bug, so I wrote a self test to validate
      that it does the correct thing when working with two side-by-side ems.
      Thankfully it is correct, but more testing is better.
      Reviewed-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      f345dbdf
    • Josef Bacik's avatar
      btrfs: tests: add extent_map tests for dropping with odd layouts · 89c37604
      Josef Bacik authored
      While investigating weird problems with the extent_map I wrote a self
      test testing the various edge cases of btrfs_drop_extent_map_range.
      This can split in different ways and behaves differently in each case,
      so test the various edge cases to make sure everything is functioning
      properly.
      Reviewed-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      89c37604
    • Qu Wenruo's avatar
      btrfs: scrub: move write back of repaired sectors to scrub_stripe_read_repair_worker() · 4fe44f9d
      Qu Wenruo authored
      Currently the scrub_stripe_read_repair_worker() only does reads to
      rebuild the corrupted sectors; it doesn't do any writeback.
      
      The design mostly keeps the writeback in a more ordered manner, to
      cooperate with dev-replace on zoned mode, which requires every write
      to be submitted in bytenr order.
      
      However the writeback of repaired sectors into the original mirror
      doesn't need such a strict ordering requirement, as it can only happen
      for non-zoned devices.
      
      This patch moves the writeback of repaired sectors into
      scrub_stripe_read_repair_worker(), which removes two call sites for
      repaired sector writeback (one in flush_scrub_stripes(), one in
      scrub_raid56_parity_stripe()).
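
      Roughly, the resulting flow looks like the sketch below (the writeback
      helper name is a stand-in, and the surrounding details are assumptions):

        static void scrub_stripe_read_repair_worker(struct work_struct *work)
        {
                struct scrub_stripe *stripe =
                        container_of(work, struct scrub_stripe, work);

                /* ... read from other mirrors and rebuild the bad sectors ... */

                /*
                 * Repaired sectors only go back to the original mirror on
                 * non-zoned devices, so the strict bytenr ordering needed by
                 * zoned dev-replace does not apply and the writeback can be
                 * issued right here instead of at flush time.
                 */
                scrub_write_repaired_sectors(stripe);   /* assumed helper name */
        }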
      Signed-off-by: default avatarQu Wenruo <wqu@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      4fe44f9d
    • Qu Wenruo's avatar
      btrfs: scrub: don't go ordered workqueue for dev-replace · 39dc7bd9
      Qu Wenruo authored
      The workqueue fs_info->scrub_worker would be created as an ordered
      workqueue if it's a device replace operation.
      
      However the scrub relies on multiple workers to do data csum
      verification, and we always submit several read requests in a row.
      
      Thus there is no need to use an ordered workqueue just for dev-replace.
      We already have extra synchronization (the main thread always
      submits and waits for dev-replace writes) to handle it for zoned devices.
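
      A simplified sketch of what the change boils down to when the scrub
      workqueue is allocated (the flags value is left abstract and the exact
      surrounding code is an assumption):

        /* before: dev-replace forced a single ordered worker */
        if (is_dev_replace)
                scrub_workers = alloc_ordered_workqueue("btrfs-scrub", flags);
        else
                scrub_workers = alloc_workqueue("btrfs-scrub", flags, max_active);

        /* after: always allow concurrent csum verification workers */
        scrub_workers = alloc_workqueue("btrfs-scrub", flags, max_active);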
      Signed-off-by: default avatarQu Wenruo <wqu@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      39dc7bd9
    • Qu Wenruo's avatar
      btrfs: scrub: fix grouping of read IO · ae76d8e3
      Qu Wenruo authored
      [REGRESSION]
      There are several regression reports about the scrub performance with
      v6.4 kernel.
      
      On a PCIe 3.0 device, the old v6.3 kernel can reach a 3GB/s scrub speed,
      but v6.4 can only reach 1GB/s, an obvious 66% performance drop.
      
      [CAUSE]
      Iostat shows a very different behavior between the v6.3 and v6.4 kernels:
      
        Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
        nvme0n1p3  9731.00 3425544.00 17237.00  63.92    2.18   352.02  21.18 100.00
        nvme0n1p3 15578.00  993616.00     5.00   0.03    0.09    63.78   1.32 100.00
      
      The upper one is v6.3 while the lower one is v6.4.
      
      There are several obvious differences:
      
      - Very few read merges
        This turns out to be a behavior change: we no longer do bio
        plug/unplug.
      
      - Very low aqu-sz
        This is due to the submit-and-wait behavior of flush_scrub_stripes(),
        and the extra extent/csum tree searches.
      
      Both behaviors are not that obvious on SATA SSDs, as SATA SSDs have NCQ
      to merge the reads and can not handle a high queue depth well anyway.
      
      [FIX]
      For now this patch focuses on the read speed fix; the speed of
      dev-replace needs more work.
      
      For the read part, we go two directions to fix the problems:
      
      - Re-introduce blk plug/unplug to merge read requests
        This is pretty simple, and the behavior is pretty easy to observe.
      
        This would enlarge the average read request size to 512K.
      
      - Introduce multi-group reads and no longer wait for each group
        Instead of the old behavior, which submits 8 stripes and waits for
        them, here we enlarge the total number of stripes to 16 * 8, which
        is 8M per device, the same limit as the old scrub in-flight bio
        size limit.
      
        Now every time we fill a group (8 stripes), we submit them and
        continue to the next stripes.
      
        Only when all 16 * 8 stripes are filled do we submit the remaining
        ones (the last group) and wait for all groups to finish. Then we
        submit the repair writes and dev-replace writes.
      
        This should enlarge the queue depth.
      
      This would greatly improve the merge rate (thus read block size) and
      queue depth:
      
      Before (with regression, and cached extent/csum path):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 20666.00 1318240.00    10.00   0.05    0.08    63.79   1.63 100.00
      
      After (with all patches applied):
      
       nvme0n1p3  5165.00 2278304.00 30557.00  85.54    0.55   441.10   2.81 100.00
      
      i.e. 1287 to 2224 MB/s.
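
      The plug part of the fix above could look roughly like this sketch (the
      group size constant and the submit helper are assumed names, not the
      actual ones): wrapping one group's stripe reads in a block plug lets the
      block layer merge the 64K reads into larger requests before submission.

        struct blk_plug plug;
        int i;

        /* let the block layer merge the 64K stripe reads of one group */
        blk_start_plug(&plug);
        for (i = 0; i < SCRUB_STRIPES_PER_GROUP; i++)       /* assumed constant */
                scrub_submit_initial_read(sctx, &stripes[i]); /* assumed helper */
        blk_finish_plug(&plug);

        /*
         * No waiting here; only once the last group is submitted do we wait
         * for all of them and then issue the repair/dev-replace writes.
         */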
      
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: default avatarQu Wenruo <wqu@suse.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      ae76d8e3
    • Qu Wenruo's avatar
      btrfs: scrub: avoid unnecessary csum tree search preparing stripes · 3c771c19
      Qu Wenruo authored
      One of the bottlenecks of the new scrub code is the extra csum tree
      search.
      
      The old code would only do the csum tree search for each scrub bio,
      which can be as large as 512KiB, thus it can afford to allocate a new
      path each time.
      
      But the new scrub code does a csum tree search for each stripe, which
      is only 64KiB; this means we'd better re-use the same csum path during
      each search.
      
      This patch introduces a per-sctx path for csum tree searches, so we
      don't need to re-allocate a path every time we do a csum tree
      search.
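
      The pattern, as a minimal sketch (field and helper names below are
      assumptions, not the upstream code): keep one btrfs_path in the scrub
      context and pass it to every per-stripe csum lookup instead of
      allocating a fresh path for each 64KiB stripe.

        struct scrub_ctx {
                /* ... */
                struct btrfs_path csum_path;    /* reused for every stripe */
        };

        /* per stripe: pass the cached path instead of allocating a new one */
        ret = lookup_csums_for_stripe(csum_root, &sctx->csum_path,
                                      stripe->logical, stripe_end, &csum_list);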
      
      With this change we can further improve the queue depth and improve the
      scrub read performance:
      
      Before (with regression and cached extent tree path):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 15875.00 1013328.00    12.00   0.08    0.08    63.83   1.35 100.00
      
      After (with both cached extent/csum tree path):
      
       nvme0n1p3 17759.00 1133280.00    10.00   0.06    0.08    63.81   1.50 100.00
      
      Fixes: e02ee89b ("btrfs: scrub: switch scrub_simple_mirror() to scrub_stripe infrastructure")
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: default avatarQu Wenruo <wqu@suse.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      3c771c19
    • Qu Wenruo's avatar
      btrfs: scrub: avoid unnecessary extent tree search preparing stripes · 1dc4888e
      Qu Wenruo authored
      Since commit e02ee89b ("btrfs: scrub: switch scrub_simple_mirror()
      to scrub_stripe infrastructure"), scrub no longer re-uses the same path
      for extent tree searches.
      
      This can lead to unnecessary extent tree searches, especially for the new
      stripe-based scrub, as we have way more stripes to prepare.
      
      This patch re-introduces a shared path for extent tree searches, and
      properly releases it when the block group is fully scrubbed.
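
      The release part is the important detail and could look like this short
      sketch (assuming the shared path is embedded in the scrub context):

        /*
         * Done with this block group: drop the locks and extent buffers,
         * but keep the path itself around for the next block group.
         */
        btrfs_release_path(&sctx->extent_path);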
      
      This change alone can improve scrub performance slightly, by reducing the
      time spent preparing the stripes and thus improving the queue depth.
      
      Before (with regression):
      
       Device         r/s      rkB/s   rrqm/s  %rrqm r_await rareq-sz aqu-sz  %util
       nvme0n1p3 15578.00  993616.00     5.00   0.03    0.09    63.78   1.32 100.00
      
      After (with this patch):
      
       nvme0n1p3 15875.00 1013328.00    12.00   0.08    0.08    63.83   1.35 100.00
      
      Fixes: e02ee89b ("btrfs: scrub: switch scrub_simple_mirror() to scrub_stripe infrastructure")
      CC: stable@vger.kernel.org # 6.4+
      Signed-off-by: default avatarQu Wenruo <wqu@suse.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      1dc4888e
    • Lee Trager's avatar
      btrfs: copy dir permission and time when creating a stub subvolume · 94628ad9
      Lee Trager authored
      btrfs supports creating nested subvolumes; however, snapshots are not
      recursive.  When a snapshot is taken of a volume which contains a
      subvolume, the subvolume is replaced with a stub subvolume which has the
      same name and uses inode number 2 [1]. The stub subvolume kept the
      directory name but did not set its time or permissions. This resulted
      in all time information being the current time and ownership defaulting
      to root. When subvolumes and snapshots are created using unshare, this
      results in a snapshot directory the user created but has no permissions
      for.
      
      Test case:
      
        [vmuser@archvm ~]# sudo -i
        [root@archvm ~]# mkdir -p /mnt/btrfs/test
        [root@archvm ~]# chown vmuser:users /mnt/btrfs/test/
        [root@archvm ~]# exit
        logout
        [vmuser@archvm ~]$ cd /mnt/btrfs/test
        [vmuser@archvm test]$ unshare --user --keep-caps --map-auto --map-root-user
        [root@archvm test]# btrfs subvolume create subvolume
        Create subvolume './subvolume'
        [root@archvm test]# btrfs subvolume create subvolume/subsubvolume
        Create subvolume 'subvolume/subsubvolume'
        [root@archvm test]# btrfs subvolume snapshot subvolume snapshot
        Create a snapshot of 'subvolume' in './snapshot'
        [root@archvm test]# exit
        logout
        [vmuser@archvm test]$ tree -ug
        [vmuser   users   ]  .
        ├── [vmuser   users   ]  snapshot
        │   └── [vmuser   users   ]  subsubvolume  <-- Without patch perm is root:root
        └── [vmuser   users   ]  subvolume
            └── [vmuser   users   ]  subsubvolume
      
        5 directories, 0 files
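
      The fix's intent, as a rough sketch (where exactly the stub inode is set
      up is not shown here, and the field list is illustrative): copy
      ownership, mode and timestamps from the parent directory instead of
      leaving root-owned defaults and the current time.

        /* sketch: initialize the stub subvolume inode from its parent dir */
        inode->i_mode = dir->i_mode;
        inode->i_uid = dir->i_uid;
        inode->i_gid = dir->i_gid;
        inode->i_atime = dir->i_atime;
        inode->i_mtime = dir->i_mtime;
        inode->i_ctime = dir->i_ctime;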
      
      [1] https://btrfs.readthedocs.io/en/latest/btrfs-subvolume.html#nested-subvolumes
      Signed-off-by: default avatarLee Trager <lee@trager.us>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      94628ad9
    • Filipe Manana's avatar
      btrfs: remove pointless empty list check when reading delayed dir indexes · 6b604c9a
      Filipe Manana authored
      At btrfs_readdir_delayed_dir_index(), called when reading a directory, we
      have a check for an empty list to return immediately, but it's not
      needed since list_for_each_entry_safe(), called immediately after, is
      prepared to deal with an empty list: it simply does nothing. So remove
      the empty list check.
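
      In other words, a guard like the one below adds nothing, because the
      loop's own condition already terminates immediately on an empty list
      (names in the sketch are simplified):

        /* redundant: list_for_each_entry_safe() handles the empty case */
        if (list_empty(ins_list))
                return;

        list_for_each_entry_safe(curr, next, ins_list, readdir_list) {
                /* ... emit one delayed dir index entry ... */
        }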
      
      Besides shorter source code, it also slightly reduces the binary text
      size:
      
        Before this change:
      
          $ size fs/btrfs/btrfs.ko
             text	   data	    bss	    dec	    hex	filename
          1609408	 167269	  16864	1793541	 1b5e05	fs/btrfs/btrfs.ko
      
        After this change:
      
          $ size fs/btrfs/btrfs.ko
             text	   data	    bss	    dec	    hex	filename
          1609392	 167269	  16864	1793525	 1b5df5	fs/btrfs/btrfs.ko
      Signed-off-by: default avatarFilipe Manana <fdmanana@suse.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      6b604c9a