1. 23 Aug, 2021 25 commits
    • btrfs: introduce btrfs_lookup_match_dir · a7d1c5dc
      Marcos Paulo de Souza authored
      btrfs_search_slot is called in multiple places in dir-item.c to search
      for a dir entry, followed by a call to btrfs_match_dir_item_name to
      return a btrfs_dir_item.
      
      In order to reduce the number of callers of btrfs_search_slot, create a
      common function that looks for the dir key, and if found call
      btrfs_match_dir_item_name.
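      
      A minimal sketch of such a helper, following the commit subject (the
      exact signature in the tree may differ):
      
        static struct btrfs_dir_item *btrfs_lookup_match_dir(
                        struct btrfs_trans_handle *trans,
                        struct btrfs_root *root, struct btrfs_path *path,
                        struct btrfs_key *key, const char *name,
                        int name_len, int mod)
        {
                const int ins_len = (mod < 0 ? -1 : 0);
                const int cow = (mod != 0);
                int ret;
        
                ret = btrfs_search_slot(trans, root, key, path, ins_len, cow);
                if (ret < 0)
                        return ERR_PTR(ret);
                if (ret > 0)
                        return ERR_PTR(-ENOENT);
        
                /* The dir key exists, match the entry by name. */
                return btrfs_match_dir_item_name(root->fs_info, path,
                                                 name, name_len);
        }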
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unneeded return variable in btrfs_lookup_file_extent · f8ee80de
      Marcos Paulo de Souza authored
      We can return the value of btrfs_search_slot directly, which also shows
      that btrfs_lookup_file_extent follows the same return value convention.
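      
      Illustratively, the cleanup boils down to this (a sketch, argument
      names abridged):
      
        /* Before: */
        ret = btrfs_search_slot(trans, root, &file_key, path, ins_len, cow);
        return ret;
        
        /* After: */
        return btrfs_search_slot(trans, root, &file_key, path, ins_len, cow);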
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use btrfs_next_leaf instead of btrfs_next_item when slots > nritems · ad9a9378
      Marcos Paulo de Souza authored
      After calling btrfs_search_slot it is a common practice to check if the
      slot found isn't bigger than the number of slots in the current leaf,
      and if so, search for the same key in the next leaf by calling
      btrfs_next_leaf, which calls btrfs_next_old_leaf to do the job.
      
      Calling btrfs_next_item in the same situation would end up in the same
      code flow, since
      
      * btrfs_next_item
        * btrfs_next_old_item
          * if slot >= nritems(curr_leaf)
            btrfs_next_old_leaf
      
      Change btrfs_verify_dev_extents and calculate_emulated_zone_size
      functions to use btrfs_next_leaf in the same situation.
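      
      The pattern in question looks roughly like this (a sketch, error
      handling abridged):
      
        ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
        if (ret < 0)
                goto out;
        if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
                /* Was btrfs_next_item(); both end up in
                 * btrfs_next_old_leaf() in this situation. */
                ret = btrfs_next_leaf(root, path);
                if (ret != 0)
                        goto out;       /* error, or no more leaves */
        }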
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove ignore_offset argument from btrfs_find_all_roots() · c7bcbb21
      Filipe Manana authored
      Currently all the callers of btrfs_find_all_roots() pass a value of false
      for its ignore_offset argument. This makes the argument pointless and we
      can remove it and make btrfs_find_all_roots() always pass false as the
      ignore_offset argument for btrfs_find_all_roots_safe(). So just do that.
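      
      A sketch of the resulting wrapper (any remaining arguments are
      abridged here):
      
        int btrfs_find_all_roots(struct btrfs_trans_handle *trans,
                                 struct btrfs_fs_info *fs_info, u64 bytenr,
                                 u64 time_seq, struct ulist **roots)
        {
                /* Every caller passed false, so hardcode ignore_offset. */
                return btrfs_find_all_roots_safe(trans, fs_info, bytenr,
                                                 time_seq, roots, false);
        }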
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid unnecessary lock and leaf splits when updating inode in the log · 2ac691d8
      Filipe Manana authored
      During a fast fsync, if we have already fsynced the file before in the
      current transaction, we can make the inode item update more efficient
      and avoid acquiring a write lock on the leaf's parent.
      
      To update the inode item we are always using btrfs_insert_empty_item() to
      get a path pointing to the inode item, which calls btrfs_search_slot()
      with an "ins_len" argument of 'sizeof(struct btrfs_inode_item) +
      sizeof(struct btrfs_item)', and that always results in the search taking
      a write lock on the level 1 node that is the parent of the leaf that
      contains the inode item. This adds unnecessary lock contention on log
      trees when we have multiple fsyncs in parallel against inodes in the same
      subvolume, which has a very significant impact due to the fact that log
      trees are short lived and their height very rarely goes beyond level 2.
      
      Also, by using btrfs_insert_empty_item() when we need to update the inode
      item, we end up splitting the leaf of the existing inode item when
      the leaf has an amount of free space smaller than the size of an inode
      item.
      
      Improve this by using btrfs_search_slot(), with a 0 "ins_len" argument,
      when we know the inode item already exists in the log. This avoids these
      two inefficiencies.
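      
      A sketch of the idea (the condition used to detect that the inode item
      was already logged is illustrative):
      
        if (inode_item_already_logged) {
                /* Item exists: ins_len == 0, no leaf split and no write
                 * lock on the leaf's parent node. */
                ret = btrfs_search_slot(trans, log, &key, path, 0, 1);
        } else {
                /* May need to create the item: can split the leaf and
                 * write-locks the parent node. */
                ret = btrfs_insert_empty_item(trans, log, path, &key,
                                sizeof(struct btrfs_inode_item));
        }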
      
      The following script, using fio, was used to perform the tests:
      
        $ cat fio-test.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/nvme0n1
        MOUNT_OPTIONS="-o ssd"
        MKFS_OPTIONS="-d single -m single"
      
        if [ $# -ne 4 ]; then
          echo "Use $0 NUM_JOBS FILE_SIZE FSYNC_FREQ BLOCK_SIZE"
          exit 1
        fi
      
        NUM_JOBS=$1
        FILE_SIZE=$2
        FSYNC_FREQ=$3
        BLOCK_SIZE=$4
      
        cat <<EOF > /tmp/fio-job.ini
        [writers]
        rw=randwrite
        fsync=$FSYNC_FREQ
        fallocate=none
        group_reporting=1
        direct=0
        bs=$BLOCK_SIZE
        ioengine=sync
        size=$FILE_SIZE
        directory=$MNT
        numjobs=$NUM_JOBS
        EOF
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        echo
        echo "Using config:"
        echo
        cat /tmp/fio-job.ini
        echo
        echo "mount options: $MOUNT_OPTIONS"
        echo
      
        umount $MNT &> /dev/null
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        fio /tmp/fio-job.ini
        umount $MNT
      
      The tests were done on a physical machine with 12 cores, 64G of RAM,
      using an NVMe device and a non-debug kernel config (the default one
      from Debian). The summary line from fio is provided below for each test
      run.
      
      With 8 jobs, file size 256M, fsync frequency of 4 and a block size of 4K:
      
      Before: WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=2048MiB (2147MB), run=72297-72297msec
      After:  WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=2048MiB (2147MB), run=71411-71411msec
      
      +1.4% throughput, -1.2% runtime
      
      With 16 jobs, file size 256M, fsync frequency of 4 and a block size of 4K:
      
      Before: WRITE: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=4096MiB (4295MB), run=99980-99980msec
      After:  WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=4096MiB (4295MB), run=97933-97933msec
      
      +2.2% throughput, -2.1% runtime
      
      The gains are small, but they could be larger on faster hardware, since
      on the test machine disk utilization was pretty much 100% during the
      whole time the tests were running (observed with 'iostat -xz 1').
      
      The tests also included the previous patch with the subject of:
      "btrfs: avoid unnecessary log mutex contention when syncing log".
      So they compared a branch without that patch and without this patch versus
      a branch with these two patches applied.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unnecessary list head initialization when syncing log · e68107e5
      Filipe Manana authored
      One of the last steps of syncing the log is to remove all log contexts
      from the root's list of contexts, done at btrfs_remove_all_log_ctxs().
      There we iterate over all the contexts in the list and delete each one
      from the list, and after that we call INIT_LIST_HEAD() on the list. That
      is unnecessary since at that point the list is empty.
      
      So just remove the INIT_LIST_HEAD() call. It's not needed, it increases
      code size (bloat-o-meter reported a delta of -122 for btrfs_sync_log()
      after this change) and it lengthens two critical sections delimited by
      log mutexes.
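      
      A sketch of btrfs_remove_all_log_ctxs() after the change (loop body
      abridged):
      
        list_for_each_entry_safe(ctx, safe, &root->log_ctxs[index], list) {
                list_del_init(&ctx->list);
                ctx->log_ret = error;
        }
        /* INIT_LIST_HEAD(&root->log_ctxs[index]) was here; the list is
         * already empty after the loop above. */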
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid unnecessary log mutex contention when syncing log · e1a6d264
      Filipe Manana authored
      When syncing the log we acquire the root's log mutex just to update the
      root's last_log_commit. This is unnecessary because:
      
      1) At this point there can only be one task updating this value, which is
         the task committing the current log transaction. Any task that enters
         btrfs_sync_log() has to wait for the previous log transaction to commit
         and wait for the current log transaction to commit if someone else
         already started it (in this case it never reaches the point of
         updating last_log_commit, as that is done by the committing task);
      
      2) Readers of the root's last_log_commit do not acquire the root's
         log mutex. This is to avoid blocking the readers, potentially for too
         long and because getting a stale value of last_log_commit does not
         cause any functional problem, in the worst case getting a stale value
         results in logging an inode unnecessarily. Plus it's actually very
         rare to get a stale value that results in unnecessarily logging the
         inode.
      
      So in order to avoid unnecessary contention on the root's log mutex,
      which is used for several different purposes, like starting/joining a
      log transaction and starting writeback of a log transaction, stop
      acquiring the log mutex for updating the root's last_log_commit.
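      
      A sketch of the change (the assigned value is illustrative):
      
        /* Before: */
        mutex_lock(&root->log_mutex);
        root->last_log_commit = log_transid;
        mutex_unlock(&root->log_mutex);
        
        /* After: only the committing task ever writes this value and
         * readers tolerate a stale one, so no mutex is needed. */
        root->last_log_commit = log_transid;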
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove racy and unnecessary inode transaction update when using no-holes · cceaa89f
      Filipe Manana authored
      When using the NO_HOLES feature and expanding the size of an inode, we
      update the inode's last_trans, last_sub_trans and last_log_commit fields
      at maybe_insert_hole() so that a fsync does know that the inode needs to
      be logged (by making sure that btrfs_inode_in_log() returns false). This
      happens for expanding truncate operations, buffered writes, direct IO
      writes and when cloning extents to an offset greater than the inode's
      i_size.
      
      However the way we do it is racy, because in between setting the inode's
      last_sub_trans and last_log_commit fields, the log transaction ID that was
      assigned to last_sub_trans might be committed before we read the root's
      last_log_commit and assign that value to last_log_commit. If that happens
      it would make a future call to btrfs_inode_in_log() return true. This is
      a race that should be extremely unlikely to be hit in practice, and it is
      the same one that was described by commit bc0939fc ("btrfs: fix race
      between marking inode needs to be logged and log syncing").
      
      The fix would simply be to set last_log_commit to the value we assigned
      to last_sub_trans minus 1, like it was done in that commit. However
      updating these two fields plus the last_trans field is pointless here
      because all the callers of btrfs_cont_expand() (which is the only
      caller of maybe_insert_hole()) always call btrfs_set_inode_last_trans()
      or btrfs_update_inode() after calling btrfs_cont_expand(). Calling either
      btrfs_set_inode_last_trans() or btrfs_update_inode() guarantees that the
      next fsync will log the inode, as it makes btrfs_inode_in_log() return
      false.
      
      So just remove the code that explicitly sets the inode's last_trans,
      last_sub_trans and last_log_commit fields.
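      
      A sketch of the removed assignments in maybe_insert_hole() (the exact
      right-hand sides are assumptions):
      
        /* Removed: callers of btrfs_cont_expand() already guarantee that
         * the next fsync logs the inode. */
        inode->last_trans = fs_info->generation;
        inode->last_sub_trans = root->log_transid;
        inode->last_log_commit = root->last_log_commit;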
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: stop doing GFP_KERNEL memory allocations in the ref verify tool · 5a656c36
      Filipe Manana authored
      In commit 351cbf6e ("btrfs: use nofs allocations for running delayed
      items") we wrapped all btree updates when running delayed items with
      memalloc_nofs_save() and memalloc_nofs_restore(), due to a lock inversion
      detected by lockdep involving reclaim and the mutex of delayed nodes.
      
      The problem is that the ref verify tool does some memory allocations
      with GFP_KERNEL, which can trigger reclaim, and reclaim can trigger
      inode eviction, which requires locking the mutex of an inode's delayed
      node.
      On the other hand the ref verify tool is called when allocating metadata
      extents as part of operations that modify a btree, which is a problem when
      running delayed nodes, where we do btree updates while holding the mutex
      of a delayed node. This is what caused the lockdep warning.
      
      Instead of wrapping every btree update when running delayed nodes, change
      the ref verify tool to never do GFP_KERNEL allocations, because:
      
      1) We get less repeated code, which at the moment does not even have a
         comment mentioning why we need to set up the NOFS context (a
         recommended good practice, as mentioned in
         Documentation/core-api/gfp_mask-from-fs-io.rst);
      
      2) The ref verify tool is something meant only for debugging and not
         something that should be enabled on non-debug / non-development
         kernels;
      
      3) We may have yet more places outside delayed-inode.c with a similar
         problem: doing btree updates while holding some lock and then having
         the GFP_KERNEL memory allocations from the ref verify tool trigger
         reclaim, which tries to acquire the same lock again through the
         reclaim path.
         Or we could get more such cases in the future, therefore this change
         prevents getting into similar cases when using the ref verify tool.
      
      Curiously, most of the memory allocations done by the ref verify tool
      were already using GFP_NOFS, except a few for no apparent reason.
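      
      The shape of the change (variable names illustrative):
      
        /* A NOFS allocation cannot recurse into filesystem reclaim, so
         * it cannot deadlock on a delayed node's mutex. */
        ref = kzalloc(sizeof(*ref), GFP_NOFS);  /* was GFP_KERNEL */
        if (!ref)
                return -ENOMEM;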
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: improve the batch insertion of delayed items · 506650dc
      Filipe Manana authored
      When we insert the delayed items of an inode, which corresponds to the
      directory index keys for a directory (key type BTRFS_DIR_INDEX_KEY), we
      do the following:
      
      1) Pick the first delayed item from the rbtree and insert it into the
         fs/subvolume btree, using btrfs_insert_empty_item() for that;
      
      2) Without releasing the path returned by btrfs_insert_empty_item(),
         keep collecting as many consecutive delayed items from the rbtree
         as possible, as long as each one's BTRFS_DIR_INDEX_KEY key is the
         immediate successor of the previously picked item and as long as
         they fit in the available space of the leaf the path points to;
      
      3) Then insert all the collected items into the leaf;
      
      4) Release the reserved metadata space for each collected item and
         release each item (implies deleting from the rbtree);
      
      5) Unlock the path.
      
      While this is much better than inserting items one by one, it can be
      improved in a few aspects:
      
      1) Instead of adding items based on the remaining free space of the
         leaf, collect as many items as can fit in a leaf and bulk insert
         them. This results in fewer and larger batches, reducing the total
         amount of time to insert the delayed items. For example when adding
         100K files to a directory, we ended up creating 1658 batches with
         very variable sizes ranging from 1 item to 118 items, on a filesystem
         with a node/leaf size of 16K. After this change, we end up with 839
         batches, with the vast majority of them having exactly 120 items;
      
      2) We search for more items to batch by iterating the rbtree while
         holding a write lock on the leaf;
      
      3) While still holding the leaf locked, we are releasing the reserved
         metadata for each item and then deleting each item, keeping a write
         lock on the leaf for longer than necessary. Releasing the delayed items
         one by one can take a significant amount of time, because deleting
         them from the rbtree can often be a bit slow when the deletion results
         in rebalancing the rbtree.
      
      So change this so that we try to create larger batches, with a total
      item size up to the maximum a leaf can support, and by unlocking the leaf
      immediately after inserting the items, releasing the reserved metadata
      space of each item and releasing each item without holding the write lock
      on the leaf.
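      
      A simplified sketch of the new batching flow (helper names other than
      btrfs_insert_empty_items() are assumptions):
      
        int nitems = 0;
        u32 total_size = 0;
        struct btrfs_delayed_item *curr = first_item;
        
        /* Collect consecutive index keys up to what a leaf can hold,
         * independently of the free space of the current leaf. */
        while (curr && total_size + curr->data_len +
               sizeof(struct btrfs_item) <= BTRFS_LEAF_DATA_SIZE(fs_info)) {
                keys[nitems] = curr->key;
                data_sizes[nitems] = curr->data_len;
                total_size += curr->data_len + sizeof(struct btrfs_item);
                nitems++;
                curr = next_consecutive_delayed_item(curr);
        }
        
        ret = btrfs_insert_empty_items(trans, root, path, keys,
                                       data_sizes, nitems);
        /* ... copy in the item data, then unlock the leaf BEFORE
         * releasing reservations and deleting items from the rbtree. */
        btrfs_release_path(path);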
      
      The following script that runs fs_mark was used to test this change:
      
        $ cat test.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/nvme0n1
        MOUNT_OPTIONS="-o ssd"
        MKFS_OPTIONS="-m single -d single"
        FILES=1000000
        THREADS=16
        FILE_SIZE=0
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        umount $DEV &> /dev/null
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        OPTS="-S 0 -L 5 -n $FILES -s $FILE_SIZE -t 16"
        for ((i = 1; i <= $THREADS; i++)); do
            OPTS="$OPTS -d $MNT/d$i"
        done
      
        fs_mark $OPTS
      
        umount $MNT
      
      It was run on a machine with 12 cores, 64G of RAM, using an NVMe device
      and a non-debug kernel config (Debian's default config).
      
      Results before this change:
      
      FSUse%        Count         Size    Files/sec         App Overhead
           1     16000000            0      76182.1             72223046
           3     32000000            0      62746.9             80776528
           5     48000000            0      77029.0             93022381
           6     64000000            0      73691.6             95251075
           8     80000000            0      66288.0             85089634
      
      Results after this change:
      
      FSUse%        Count         Size    Files/sec         App Overhead
           1     16000000            0      79049.5 (+3.7%)     69700824
           3     32000000            0      65248.9 (+3.9%)     80583693
           5     48000000            0      77991.4 (+1.2%)     90040908
           6     64000000            0      75096.8 (+1.9%)     89862241
           8     80000000            0      66926.8 (+1.0%)     84429169
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rescue: allow ibadroots to skip bad extent tree when reading block group items · 2b29726c
      Qu Wenruo authored
      When the extent tree gets corrupted, normally it's not the extent tree
      root but one corrupted tree leaf or node.
      
      In that case, the rescue=ibadroots mount option won't help, as it can
      only handle corruption of the extent tree root.
      
      This patch enhances the behavior by:
      
      - Allowing fill_dummy_bgs() to ignore -EEXIST errors
      
        This means we may have some block group items read from disk, but
        then hit an error halfway through.
      
      - Falling back to fill_dummy_bgs() if any error is hit in
        btrfs_read_block_groups()
      
        Of course, this still needs the rescue=ibadroots mount option.
      
      With that, rescue=ibadroots can handle extent tree corruption more
      gracefully, allowing a better chance of recovery.
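      
      Roughly where the fallback hooks in, as a sketch (the exact placement
      inside the block group reading code may differ):
      
        ret = btrfs_read_block_groups(fs_info);
        if (ret < 0 && btrfs_test_opt(fs_info, IGNOREBADROOTS)) {
                /* fill_dummy_bgs() now tolerates -EEXIST for block
                 * groups that were read successfully before the error. */
                ret = fill_dummy_bgs(fs_info);
        }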
      Reported-by: Zhenyu Wu <wuzy001@gmail.com>
      Link: https://www.spinics.net/lists/linux-btrfs/msg114424.html
      Reviewed-by: Su Yue <l@damenly.su>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: pass NULL as trans to btrfs_search_slot if we only want to search · 6534c0c9
      Marcos Paulo de Souza authored
      Using a transaction in btrfs_search_slot is only useful when we are
      searching to add or modify the tree. When the function is used only for
      searching, the insert length and mod arguments are 0 and there is no
      need to use a transaction.
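      
      The read-only call pattern then looks like this (sketch):
      
        /* No insertion (ins_len == 0) and no CoW (cow == 0), so no
         * transaction handle is needed. */
        ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);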
      
      No functional changes, changing for consistency.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: continue readahead of siblings even if target node is in memory · 069a2e37
      Filipe Manana authored
      At reada_for_search(), when attempting to readahead a node or leaf's
      siblings, we skip the readahead of the siblings if the node/leaf is
      already in memory. That is probably fine for the READA_FORWARD and
      READA_BACK readahead types, as they are used in contexts where we
      end up reading some consecutive leaves, but usually not the whole btree.
      
      However for a READA_FORWARD_ALWAYS mode, currently only used for full
      send operations, it does not make sense to skip the readahead if the
      target node or leaf is already loaded in memory, since we know the caller
      is visiting every node and leaf of the btree in ascending order.
      
      So change the behaviour to not skip the readahead when the target node is
      already in memory and the readahead mode is READA_FORWARD_ALWAYS.
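      
      A sketch of the new behaviour in reada_for_search() (control flow
      simplified, variable names illustrative):
      
        eb = find_extent_buffer(fs_info, block_start);
        if (eb) {
                /* Node already cached: only READA_FORWARD_ALWAYS (full
                 * send) keeps going, as it visits every node anyway. */
                bool skip = (path->reada != READA_FORWARD_ALWAYS);
        
                free_extent_buffer(eb);
                if (skip)
                        return;
        }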
      
      The following test script was used to measure the improvement on a box
      using an average, consumer grade, spinning disk, with 32GiB of RAM and
      using a non-debug kernel config (Debian's default config).
      
        $ cat test.sh
        #!/bin/bash
      
        DEV=/dev/sdj
        MNT=/mnt/sdj
        MKFS_OPTIONS="--nodesize 16384"     # default, just to be explicit
        MOUNT_OPTIONS="-o max_inline=2048"  # default, just to be explicit
      
        mkfs.btrfs -f $MKFS_OPTIONS $DEV > /dev/null
        mount $MOUNT_OPTIONS $DEV $MNT
      
        # Create files with inline data to make it easier and faster to create
        # large btrees.
        add_files()
        {
            local total=$1
            local start_offset=$2
            local number_jobs=$3
            local total_per_job=$(($total / $number_jobs))
      
            echo "Creating $total new files using $number_jobs jobs"
            for ((n = 0; n < $number_jobs; n++)); do
                (
                    local start_num=$(($start_offset + $n * $total_per_job))
                    for ((i = 1; i <= $total_per_job; i++)); do
                        local file_num=$((start_num + $i))
                        local file_path="$MNT/file_${file_num}"
                        xfs_io -f -c "pwrite -S 0xab 0 2000" $file_path > /dev/null
                        if [ $? -ne 0 ]; then
                            echo "Failed creating file $file_path"
                            break
                        fi
                    done
                ) &
                worker_pids[$n]=$!
            done
      
            wait ${worker_pids[@]}
      
            sync
            echo
            echo "btree node/leaf count: $(btrfs inspect-internal dump-tree -t 5 $DEV | egrep '^(node|leaf) ' | wc -l)"
        }
      
        file_count=2000000
        add_files $file_count 0 4
      
        echo
        echo "Creating snapshot..."
        btrfs subvolume snapshot -r $MNT $MNT/snap1
      
        umount $MNT
      
        echo 3 > /proc/sys/vm/drop_caches
        blockdev --flushbufs $DEV &> /dev/null
        hdparm -F $DEV &> /dev/null
      
        mount $MOUNT_OPTIONS $DEV $MNT
      
        echo
        echo "Testing full send..."
        start=$(date +%s)
        btrfs send $MNT/snap1 > /dev/null
        end=$(date +%s)
        echo
        echo "Full send took $((end - start)) seconds"
      
        umount $MNT
      
      The duration of the full send operations, in seconds, were the following:
      
      Before this change:  85 seconds
      After this change:   76 seconds (-11.2%)
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: check-integrity: drop kmap/kunmap for block pages · 5da38479
      David Sterba authored
      The pages in block_ctx have never been allocated from highmem (in
      btrfsic_read_block) so the mapping is pointless and can be removed.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: drop kmap/kunmap from generic helpers · 4c2bf276
      David Sterba authored
      The pages in compressed_pages are not from highmem anymore so we can
      drop the mapping for checksum calculation and inline extent.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: drop kmap/kunmap from zstd · bbaf9715
      David Sterba authored
      As we don't use highmem pages anymore, drop the kmap/kunmap. The kmap is
      simply page_address and kunmap is a no-op.
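      
      The pattern, shown here for the zstd input page (a sketch):
      
        /* Before: */
        workspace->in_buf.src = kmap(in_page);
        /* ... compress ... */
        kunmap(in_page);
        
        /* After: the page is lowmem, so the mapping is direct. */
        workspace->in_buf.src = page_address(in_page);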
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: drop kmap/kunmap from zlib · 696ab562
      David Sterba authored
      As we don't use highmem pages anymore, drop the kmap/kunmap. The kmap is
      simply page_address and kunmap is a no-op.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: drop kmap/kunmap from lzo · 8c945d32
      David Sterba authored
      As we don't use highmem pages anymore, drop the kmap/kunmap. The kmap is
      simply page_address and kunmap is a no-op.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: drop from __GFP_HIGHMEM all allocations · b0ee5e1e
      David Sterba authored
      The highmem flag is used for allocating pages for compression and for
      raid56 pages. The high memory makes sense on 32bit systems but is not
      without problems. On 64bit systems it's just another layer of wrappers.
      
      The time the pages are allocated for compression or raid56 is relatively
      short (about a transaction commit), so the pages are not blocked
      indefinitely. As the number of pages depends on the amount of data being
      written/read, there's a theoretical problem. A fast device on a 32bit
      system could use most of the low memory pool, while with the highmem
      allocation that would not happen. This was possibly the original idea a
      long time ago, but nowadays we optimize for 64bit systems.
      
      This patch removes all usage of the __GFP_HIGHMEM flag for page
      allocation; the kmap/kunmap calls are still in place and will be removed
      in followup patches. What remains is masking out the bit in
      alloc_extent_state and __lookup_free_space_inode, which can safely stay.
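      
      The typical change, as a sketch:
      
        /* Before: */
        page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
        
        /* After: lowmem only, so page_address() is always valid, which
         * is what lets the followup patches drop kmap/kunmap. */
        page = alloc_page(GFP_NOFS);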
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: cleanup fs_devices pointer usage in btrfs_trim_fs · 23608d51
      Anand Jain authored
      Drop the variable 'devices' (used only once) and add a new variable for
      fs_devices, so it is used at two locations within the btrfs_trim_fs()
      function and also helps to access fs_devices->devices.
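      
      A sketch of the cleanup:
      
        struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
        struct btrfs_device *device;
        
        /* The single-use 'devices' list pointer is gone; fs_devices
         * now serves both this loop and the other access. */
        list_for_each_entry(device, &fs_devices->devices, dev_list) {
                /* trim each device */
        }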
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove max argument from generic_bin_search · 67d5e289
      Marcos Paulo de Souza authored
      Both callers use btrfs_header_nritems to feed the max argument.  Remove
      the argument and let generic_bin_search call it itself.
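      
      Illustratively (a sketch; the surrounding signature is simplified):
      
        /* Before: callers passed the bound themselves. */
        ret = generic_bin_search(eb, key, btrfs_header_nritems(eb), &slot);
        
        /* After: the helper computes it internally. */
        ret = generic_bin_search(eb, key, &slot);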
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make btrfs_finish_chunk_alloc private to block-group.c · 2eadb9e7
      Nikolay Borisov authored
      One of the final things that must be done to add a new chunk is
      inserting its device extent items in the device tree. They describe
      the portion of allocated device physical space during phase 1 of
      chunk allocation. This is currently done in btrfs_finish_chunk_alloc
      whose name isn't very informative. What's more, this function is only
      used in block-group.c but is defined as public. There isn't anything
      special about it that would warrant it being defined in volumes.c.
      
      Just move btrfs_finish_chunk_alloc and alloc_chunk_dev_extent to
      block-group.c, make the former static and rename both functions to
      insert_dev_extents and insert_dev_extent respectively.
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: check-integrity: drop unnecessary function prototypes · 4a9531cf
      Anand Jain authored
      The function prototypes below aren't necessary, as the functions are
      defined before they are called. Remove them.
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add special case to setget helpers for 64k pages · b3b7e1d0
      David Sterba authored
      On 64K pages the size of the extent_buffer::pages array is 1 and
      compilation with -Warray-bounds warns due to
      
        kaddr = page_address(eb->pages[idx + 1]);
      
      when reading a byte range crossing a page boundary.
      
      This never actually overflows the array, because on 64K pages all the
      data fits in one page and the bounds are checked by check_setget_bounds.
      
      To fix the reported overflows and warnings, add a compile-time condition
      that allows the compiler to eliminate the dead code that reads from the
      idx + 1 page.
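      
      The shape of the fix, as a sketch:
      
        if (INLINE_EXTENT_BUFFER_PAGES > 1) {
                /* Compile-time false with 64K pages, so the compiler
                 * drops this read and the -Warray-bounds warning. */
                kaddr = page_address(eb->pages[idx + 1]);
        }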
      
      Link: https://lore.kernel.org/lkml/20210623083901.1d49d19d@canb.auug.org.au/
      CC: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: remove max_zone_append_size logic · 5a80d1c6
      Johannes Thumshirn authored
      There used to be a patch in the original series for zoned support which
      limited the extent size to max_zone_append_size, but this patch has been
      dropped somewhere around v9.
      
      We've decided to go in the opposite direction: instead of limiting
      extents in the first place, we split them before submission to comply
      with the device's limits.
      
      Remove the related code, btrfs_fs_info::max_zone_append_size and
      btrfs_zoned_device_info::max_zone_append_size.
      
      This also removes the workaround for dm-crypt introduced in
      1d68128c ("btrfs: zoned: fail mount if the device does not support
      zone append") because the fix has been merged as f34ee1dc ("dm
      crypt: Fix zoned block device support").
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  2. 22 Aug, 2021 2 commits
  3. 21 Aug, 2021 9 commits
  4. 20 Aug, 2021 4 commits
    • io_uring: fix xa_alloc_cycle() error return value check · a30f895a
      Jens Axboe authored
      We currently check for ret != 0 to indicate error, but '1' is a valid
      return and just indicates that the allocation succeeded with a wrap.
      Correct the check to be for < 0, like it was before the xarray
      conversion.
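      
      The corrected check, as a sketch (xarray and field names per the XArray
      conversion commit referenced below):
      
        ret = xa_alloc_cyclic(&ctx->personalities, &id, creds,
                              XA_LIMIT(0, USHRT_MAX), &ctx->pers_next,
                              GFP_KERNEL);
        if (ret < 0)    /* was 'ret != 0'; 1 means success with a wrap */
                goto err;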
      
      Cc: stable@vger.kernel.org
      Fixes: 61cf9370 ("io_uring: Convert personality_idr to XArray")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • Merge tag 'acpi-5.14-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · fa54d366
      Linus Torvalds authored
      Pull ACPI fixes from Rafael Wysocki:
       "These fix two mistakes in new code.
      
        Specifics:
      
         - Prevent confusing messages from being printed if the PRMT table is
           not present or there are no PRM modules (Aubrey Li).
      
         - Fix the handling of suspend-to-idle entry and exit in the case when
           the Microsoft UUID is used with the Low-Power S0 Idle _DSM
           interface (Mario Limonciello)"
      
      * tag 'acpi-5.14-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        ACPI: PM: s2idle: Invert Microsoft UUID entry and exit
        ACPI: PRM: Deal with table not present or no module found
    • Merge tag 'pm-5.14-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · cae68764
      Linus Torvalds authored
      Pull power management fixes from Rafael Wysocki:
       "These fix some issues in the ARM cpufreq drivers and in the operating
        performance points (OPP) framework.
      
        Specifics:
      
         - Fix useless WARN() in the OPP core and prevent a noisy warning
           from being printed by OPP _put functions (Dmitry Osipenko).
      
         - Fix error path when allocation failed in the arm_scmi cpufreq
           driver (Lukasz Luba).
      
         - Blacklist Qualcomm sc8180x and Qualcomm sm8150 in
           cpufreq-dt-platdev (Bjorn Andersson, Thara Gopinath).
      
         - Forbid cpufreq for 1.2 GHz variant in the armada-37xx cpufreq
           driver (Marek Behún)"
      
      * tag 'pm-5.14-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        opp: Drop empty-table checks from _put functions
        cpufreq: armada-37xx: forbid cpufreq for 1.2 GHz variant
        cpufreq: blocklist Qualcomm sm8150 in cpufreq-dt-platdev
        cpufreq: arm_scmi: Fix error path when allocation failed
        opp: remove WARN when no valid OPPs remain
        cpufreq: blacklist Qualcomm sc8180x in cpufreq-dt-platdev
    • Merge branch 'akpm' (patches from Andrew) · ed3bad2e
      Linus Torvalds authored
      Merge misc fixes from Andrew Morton:
       "10 patches.
      
        Subsystems affected by this patch series: MAINTAINERS and mm (shmem,
        pagealloc, tracing, memcg, memory-failure, vmscan, kfence, and
        hugetlb)"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        hugetlb: don't pass page cache pages to restore_reserve_on_error
        kfence: fix is_kfence_address() for addresses below KFENCE_POOL_SIZE
        mm: vmscan: fix missing psi annotation for node_reclaim()
        mm/hwpoison: retry with shake_page() for unhandlable pages
        mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim
        MAINTAINERS: update ClangBuiltLinux IRC chat
        mmflags.h: add missing __GFP_ZEROTAGS and __GFP_SKIP_KASAN_POISON names
        mm/page_alloc: don't corrupt pcppage_migratetype
        Revert "mm: swap: check if swap backing device is congested or not"
        Revert "mm/shmem: fix shmem_swapin() race with swapoff"