1. 19 Nov, 2008 1 commit
    • Btrfs: Avoid writeback stalls · d2c3f4f6
      Chris Mason authored
      While building large bios in writepages, btrfs may end up waiting
      for other page writeback to finish if WB_SYNC_ALL is used.
      
      While it is waiting, the bio it is building has a number of pages with the
      writeback bit set, and those pages aren't getting to the disk any time
      soon.  This patch lowers writeback latencies in general by sending down
      the bio being built before waiting for other pages.
      
      The bio submission code tries to limit the total number of async bios in
      flight by waiting when we're over a certain number of async bios.  But,
      the waits are happening while writepages is building bios, and this can easily
      lead to stalls and other problems for people calling wait_on_page_writeback.
      
      The current fix is to let the congestion tests take care of waiting.
      
      sync() and others make sure to drain the current async requests so that
      everything that was pending when the sync was started really gets to
      disk.  The code would drain pending requests both before and after
      submitting a new request.
      
      But, if one of the requests is waiting for page writeback to finish,
      the draining waits might block that page writeback.  This changes the
      draining code to only wait after submitting the bio being processed.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
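      The ordering matters more than the mechanism here, so below is a minimal
      runnable C model of the change; the names are illustrative, not the
      actual btrfs symbols.  The point is simply that the bio being built gets
      flushed before blocking on someone else's page writeback.

      #include <stdio.h>

      /* Pages accumulated in the bio currently being built. */
      static int bio_pages;

      static void submit_built_bio(void)
      {
          if (bio_pages)
              printf("submit bio with %d pages\n", bio_pages);
          bio_pages = 0;
      }

      static void add_page_to_bio(void)
      {
          bio_pages++;
      }

      static void wait_on_other_writeback(void)
      {
          submit_built_bio();   /* the fix: send down what we built ... */
          printf("waiting for other page writeback\n");  /* ... then block */
      }

      int main(void)
      {
          add_page_to_bio();
          add_page_to_bio();
          wait_on_other_writeback();   /* no pages left stuck in the bio */
          add_page_to_bio();
          submit_built_bio();
          return 0;
      }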
  2. 18 Nov, 2008 10 commits
    • Btrfs: switch back to wait_on_page_writeback to wait on metadata writes · 105d931d
      Chris Mason authored
      The extent based waiting was using more CPU, and other fixes have helped
      with the unplug storm problems.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: unplug all devices in the unplug call back · 9f0ba5bd
      Chris Mason authored
      For larger multi-device filesystems, there was logic to limit the unplug
      to just the device holding the page that was sent to our sync_page
      function.
      
      But, the code wasn't always unplugging the right device.  Since this was
      just an optimization, disable it for now.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Some fixes for batching extent insert. · b4eec2ca
      Liu Hui authored
      In insert_extents(), when ret==1 and last is not zero, it should
      check whether the current inserted item is the last item in this batch
      of inserts.  If so, it should just break out of the loop.  If not,
      'cur = insert_list->next' makes no sense because the list is empty at
      that point, and 'op' will point to an unexpected place.
      
      There are also some trivial fixes in this patch, including a comment
      typo and the removal of two redundant lines.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: prevent loops in the directory tree when creating snapshots · ea9e8b11
      Chris Mason authored
      For a directory tree:
      
      /mnt/subvolA/subvolB
      
      btrfsctl -s /mnt/subvolA/subvolB /mnt
      
      Will create a directory loop with subvolA under subvolB.  This
      commit uses the forward refs for each subvol and snapshot to error out
      before creating the loop.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Add backrefs and forward refs for subvols and snapshots · 0660b5af
      Chris Mason authored
      Subvols and snapshots can now be referenced from any point in the directory
      tree.  We need to maintain back refs for them so we can find lost
      subvols.
      
      Forward refs are added so that we know all of the subvols and
      snapshots referenced anywhere in the directory tree of a single subvol.
      This can be used to do recursive snapshotting (though that isn't
      implemented yet), and it is also used to detect and prevent directory
      loops when creating new snapshots.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Give each subvol and snapshot their own anonymous devid · 3394e160
      Chris Mason authored
      Each subvolume has its own private inode number space, and so we need
      to fill in different device numbers for each subvolume to avoid confusing
      applications.
      
      This commit puts a struct super_block into struct btrfs_root so it can
      call set_anon_super() and get a different device number generated for
      each root.
      
      btrfs_rename is changed to prevent renames across subvols.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
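      The effect is visible from userspace with nothing more than stat():
      two paths in one btrfs filesystem report different st_dev values when a
      subvolume boundary sits between them.  A small check, assuming example
      paths:

      #include <stdio.h>
      #include <sys/stat.h>

      int main(void)
      {
          struct stat parent, child;

          if (stat("/mnt", &parent) || stat("/mnt/subvolA", &child)) {
              perror("stat");
              return 1;
          }
          /* Different st_dev means the paths live in different subvols,
           * even though they share a single btrfs filesystem. */
          printf("crossed a subvol boundary: %s\n",
                 parent.st_dev != child.st_dev ? "yes" : "no");
          return 0;
      }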
    • Btrfs: Allow subvolumes and snapshots anywhere in the directory tree · 3de4586c
      Chris Mason authored
      Before, all snapshots and subvolumes lived in a single flat directory.  This
      was awkward and confusing because the single flat directory was only writable
      with the ioctls.
      
      This commit changes the ioctls to create subvols and snapshots at any
      point in the directory tree.  This requires separate ioctls for
      snapshot and subvol creation instead of combining them into one.
      
      The subvol ioctl does:
      
      btrfsctl -S subvol_name parent_dir
      
      After the ioctl is done subvol_name lives inside parent_dir.
      
      The snapshot ioctl does:
      
      btrfsctl -s path_for_snapshot root_to_snapshot
      
      path_for_snapshot can be an absolute or relative path.  btrfsctl breaks it up
      into directory and basename components.
      
      root_to_snapshot can be any file or directory in the FS.  The snapshot
      is taken of the entire root where that file lives.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
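      As a rough illustration of what btrfsctl -s does underneath: the
      snapshot ioctl is issued on an fd for the destination directory, and the
      args carry an open fd for the source root plus the new name.  The struct
      layout and ioctl number below are an assumption based on the btrfs
      headers of this era; verify against your ioctl.h before relying on them.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/ioctl.h>

      #define BTRFS_IOCTL_MAGIC 0x94
      #define BTRFS_PATH_NAME_MAX 4087

      /* Assumed layout; check against the btrfs headers. */
      struct btrfs_ioctl_vol_args {
          long long fd;                        /* source root (snapshots) */
          char name[BTRFS_PATH_NAME_MAX + 1];  /* name created in dest dir */
      };

      #define BTRFS_IOC_SNAP_CREATE \
          _IOW(BTRFS_IOCTL_MAGIC, 1, struct btrfs_ioctl_vol_args)

      int main(void)
      {
          /* Example: snapshot /mnt/subvolA as /mnt/backups/snapA. */
          int src = open("/mnt/subvolA", O_RDONLY);
          int dst = open("/mnt/backups", O_RDONLY);
          struct btrfs_ioctl_vol_args args = { .fd = src };

          strncpy(args.name, "snapA", BTRFS_PATH_NAME_MAX);
          if (src < 0 || dst < 0 ||
              ioctl(dst, BTRFS_IOC_SNAP_CREATE, &args) < 0)
              perror("snapshot create");
          return 0;
      }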
    • Btrfs: Add some debugging around the ENOSPC bugs · 4ce4cb52
      Josef Bacik authored
      Some people are still reporting problems with early enospc.  This
      will help narrow down the cause.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix free space leak · e3e469f8
      Josef Bacik authored
      In my batch delete/update/insert patch I introduced a free space leak.  The
      extent that we do the original search on in free_extents is never pinned, so we
      always update the block saying that it has free space, but the free space never
      actually gets added to the free space tree, since op->del will always be 0 and
      it's never actually added to the pinned extents tree.
      
      This patch fixes this problem by making sure we call pin_down_bytes on the
      pending extent op and set op->del to the return value of pin_down_bytes so
      update_block_group is called with the right value.  This seems to fix the case
      where we were getting ENOSPC when there was plenty of space available.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
  3. 18 Nov, 2008 1 commit
    • Btrfs: Seed device support · 2b82032c
      Yan Zheng authored
      A seed device is a special btrfs with the SEEDING super flag set, and
      it can only be mounted in read-only mode.  Seed devices allow people
      to create new btrfs filesystems on top of them.
      
      The new FS contains the same contents as the seed device,
      but it can be mounted in read-write mode.
      
      This patch does the following:
      
      1) split the code in btrfs_alloc_chunk into two parts. The first part
      makes the newly allocated chunk usable, but does not do any operation
      that modifies the chunk tree. The second part does the chunk tree
      modifications. This division is for the bootstrap step of adding storage
      to the seed device.
      
      2) Update device management code to handle seed device.
      The basic idea is: For an FS grown from seed devices, its
      seed devices are put into a list. Seed devices are
      opened on demand at mounting time. If any seed device is
      missing or has been changed, btrfs kernel module will
      refuse to mount the FS.
      
      3) make btrfs_find_block_group not return NULL when all
      block groups are read-only.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
  4. 12 Nov, 2008 3 commits
    • Btrfs: mount ro and remount support · c146afad
      Yan Zheng authored
      This patch adds mount ro and remount support. The main
      changes in this patch are: adding btrfs_remount and related
      helper functions; splitting the transaction related code
      out of close_ctree into btrfs_commit_super; and updating the
      allocator to properly handle read-only block groups.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    • Btrfs: batch extent inserts/updates/deletions on the extent root · f3465ca4
      Josef Bacik authored
      While profiling the allocator I noticed a good amount of time was being
      spent in finish_current_insert and del_pending_extents, and as the
      filesystem filled up more and more time was being spent in those
      functions.  This patch aims to reduce that problem.  It does so in two
      ways:
      
      1) track if we tried to delete an extent that we are going to update or insert.
      Once we get into finish_current_insert we discard any of the extents that were
      marked for deletion.  This saves us from doing unnecessary work almost every
      time finish_current_insert runs.
      
      2) Batch insertion/updates/deletions.  Instead of doing a btrfs_search_slot for
      each individual extent and doing the needed operation, we instead keep the leaf
      around and see if there is anything else we can do on that leaf.  On the insert
      case I introduced a btrfs_insert_some_items, which will take an array of keys
      with an array of data_sizes and try and squeeze in as many of those keys as
      possible, and then return how many keys it was able to insert.  In the update
      case we search for an extent ref, update the ref and then loop through the leaf
      to see if any of the other refs we are looking to update are on that leaf, and
      then once we are done we release the path and search for the next ref we need to
      update.  And finally for the deletion we try and delete the extent+ref in pairs,
      so we will try to find extent+ref pairs next to the extent we are trying to free
      and free them in bulk if possible.
      
      This, along with the other cluster fix that Chris pushed out a bit ago,
      helps make the allocator perform more uniformly as it fills up the disk.
      There is still a slight drop as we fill up the disk, since we start
      having to stick new blocks in odd places, which results in more COWs
      than on an empty fs, but the drop is not nearly as severe as it was
      before.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
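      A toy model of the batching in point 2, assuming made-up item sizes and
      a made-up leaf-space budget: given keys already sorted in tree order,
      count how many consecutive items fit in the current leaf so a single
      search can insert them all, which is the job btrfs_insert_some_items
      performs in the real patch.

      #include <stdio.h>

      /* Return how many of the next n items fit within leaf_free bytes. */
      static int items_that_fit(const int *sizes, int n, int leaf_free)
      {
          int used = 0, i;

          for (i = 0; i < n; i++) {
              if (used + sizes[i] > leaf_free)
                  break;
              used += sizes[i];
          }
          return i;
      }

      int main(void)
      {
          int sizes[] = { 53, 53, 53, 53, 53 };    /* illustrative sizes */
          printf("insert %d items with one search\n",
                 items_that_fit(sizes, 5, 160));   /* prints 3 */
          return 0;
      }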
    • Btrfs: allow clone of an arbitrary file range · c5c9cd4d
      Sage Weil authored
      This patch adds an additional CLONE_RANGE ioctl to clone an arbitrary 
      (block-aligned) file range to another file.  The original CLONE ioctl 
      becomes a special case of cloning the entire file range.  The logic is a 
      bit more complex now since ranges may be cloned to different offsets, and 
      because we may only be cloning the beginning or end of a particular extent 
      or checksum item.
      
      An additional sanity check ensures the source and destination files aren't 
      the same (which would previously deadlock), although eventually this could 
      be extended to allow the duplication of file data at a different offset 
      within the same file.
      
      Any extents within the destination range in the target file are dropped.
      
      We currently do not cope with the case where a compressed inline extent 
      needs to be split.  This will probably require decompressing the extent 
      into a temporary address_space, and inserting just the cloned portion as a 
      new compressed inline extent.  For now, just return -EINVAL in this case.  
      Note that this never comes up in the more common case of cloning an entire 
      file.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
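      A hedged usage sketch of the new ioctl: the struct layout and ioctl
      number mirror the btrfs headers that introduced this feature (an
      assumption worth verifying), and the paths, offset, and length are
      examples.  Offsets and lengths must be block aligned.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/ioctl.h>

      #define BTRFS_IOCTL_MAGIC 0x94

      /* Assumed layout; check against the btrfs headers. */
      struct btrfs_ioctl_clone_range_args {
          long long src_fd;
          unsigned long long src_offset;
          unsigned long long src_length;
          unsigned long long dest_offset;
      };

      #define BTRFS_IOC_CLONE_RANGE \
          _IOW(BTRFS_IOCTL_MAGIC, 13, struct btrfs_ioctl_clone_range_args)

      int main(void)
      {
          int src = open("/mnt/a.img", O_RDONLY);
          int dst = open("/mnt/b.img", O_RDWR | O_CREAT, 0644);
          struct btrfs_ioctl_clone_range_args args = {
              .src_fd = src,
              .src_offset = 0,
              .src_length = 1 << 20,   /* clone the first 1MiB */
              .dest_offset = 0,
          };

          if (src < 0 || dst < 0 ||
              ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args) < 0)
              perror("clone range");
          return 0;
      }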
  5. 13 Nov, 2008 2 commits
    • Btrfs: Fix handling of space info full during allocations · 2ed6d664
      Chris Mason authored
      When we fail to allocate a new block group, we should still do the
      checks to make sure allocations try again with the minimum requested
      allocation size.
      
      This also fixes a deadlock that came from a missed down_read in
      the chunk allocation failure handling.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Improve metadata read latencies · 6f3577bd
      Chris Mason authored
      This fixes latency problems on metadata reads by making sure they
      don't go through the async submit queue, and by tuning down the amount
      of readahead done during btree searches.
      
      Also, the btrfs bdi congestion function is tuned to ignore the
      number of async bios and checksums still pending.  There is additional
      code that throttles new async bios now, so the congestion function
      doesn't need to worry about it anymore.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  6. 11 Nov, 2008 2 commits
  7. 10 Nov, 2008 10 commits
  8. 07 Nov, 2008 7 commits
    • Btrfs: Avoid unplug storms during commit · 5f2cc086
      Chris Mason authored
      While doing a commit, btrfs makes sure all the metadata blocks
      were properly written to disk, calling wait_on_page_writeback for
      each page.  This writeback happens after allowing another transaction
      to start, so it competes for the disk with other processes in the FS.
      
      If the page writeback bit is still set, each wait_on_page_writeback might
      trigger an unplug, even though the page might be waiting for checksumming
      to finish or might be waiting for the async work queue to submit the
      bio.
      
      This trades wait_on_page_writeback for waiting on the extent writeback
      bits.  It won't trigger any unplugs and substantially improves performance
      in a number of workloads.
      
      This also changes the async bio submission to avoid requeueing if there
      is only one device.  The requeue just wastes CPU time because there are
      no other devices to service.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Fix more false enospc errors and an oops from empty clustering · 42e70e7a
      Chris Mason authored
      In some cases the empty cluster was added twice to the total number of
      bytes the allocator was trying to find.
      
      With empty clustering on, the hint byte was sometimes outside of the
      block group.  Add an extra goto to find the correct block group.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: make sure compressed bios don't complete too soon · af09abfe
      Chris Mason authored
      When writing a compressed extent, a number of bios are created that
      point to a single struct compressed_bio.  At end_io time an atomic counter in
      the compressed_bio struct makes sure that all of the bios have finished
      before final end_io processing is done.
      
      But when multiple bios are needed to write a compressed extent, the
      counter was being incremented after the first bio was sent to submit_bio.
      It is possible the bio will complete before the counter is incremented,
      making the end_io handler free the compressed_bio struct before
      processing is finished.
      
      The fix is to increment the atomic counter before bio submission,
      both for compressed reads and writes.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
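      The race and the fix can be modeled in a few lines of userspace C.
      This is a simplified model, not the btrfs code: the counter starts with
      a reference held by the submitter, and each per-bio reference is taken
      before submission because a bio may complete at any moment afterwards.

      #include <stdatomic.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct compressed_bio {
          atomic_int pending_bios;   /* submitter's ref + bios in flight */
      };

      static void put_cb(struct compressed_bio *cb)
      {
          /* Whoever drops the last reference frees the struct. */
          if (atomic_fetch_sub(&cb->pending_bios, 1) == 1) {
              printf("all bios finished, freeing compressed_bio\n");
              free(cb);
          }
      }

      /* A bio's end_io handler drops exactly one reference. */
      static void end_io(struct compressed_bio *cb)
      {
          put_cb(cb);
      }

      int main(void)
      {
          struct compressed_bio *cb = malloc(sizeof(*cb));

          atomic_init(&cb->pending_bios, 1);   /* submitter's reference */
          for (int i = 0; i < 3; i++) {
              /* Increment BEFORE submit_bio: completion may be instant. */
              atomic_fetch_add(&cb->pending_bios, 1);
              end_io(cb);                      /* simulate instant end_io */
          }
          put_cb(cb);                          /* drop submitter's ref */
          return 0;
      }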
    • Btrfs: More metadata allocator optimizations · 4366211c
      Chris Mason authored
      This lowers the empty cluster target for metadata allocations.  The lower
      target makes it easier to do allocations and still seems to perform well.
      
      It also fixes the allocator loop to drop the empty cluster when things
      start getting difficult, avoiding false enospc warnings.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: enforce metadata allocation clustering · 3b7885bf
      Chris Mason authored
      The allocator uses the last allocation as a starting point for metadata
      allocations, and tries to allocate in clusters of at least 256k.
      
      If the search for a free block fails to find the expected block, this patch
      forces a new cluster to be found in the free list.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Optimize compressed writeback and reads · 771ed689
      Chris Mason authored
      When reading compressed extents, try to put pages into the page cache
      for any pages covered by the compressed extent that readpages didn't already
      preload.
      
      Add an async work queue to handle transformations at delayed allocation processing
      time.  Right now this is just compression.  The workflow is:
      
      1) Find offsets in the file marked for delayed allocation
      2) Lock the pages
      3) Lock the state bits
      4) Call the async delalloc code
      
      The async delalloc code clears the state lock bits and delalloc bits.  It is
      important this happens before the range goes into the work queue because
      otherwise it might deadlock with other work queue items that try to lock
      those extent bits.
      
      The file pages are compressed, and if the compression doesn't work the
      pages are written back directly.
      
      An ordered work queue is used to make sure the inodes are written in the same
      order that pdflush or writepages sent them down.
      
      This changes extent_write_cache_pages to let the writepage function
      update the wbc nr_written count.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Add ordered async work queues · 4a69a410
      Chris Mason authored
      Btrfs uses kernel threads to create async work queues for cpu intensive
      operations such as checksumming and decompression.  These work well,
      but they make it difficult to keep IO order intact.
      
      A single writepages call from pdflush or fsync will turn into a number
      of bios, and each bio is checksummed in parallel.  Once the checksum is
      computed, the bio is sent down to the disk, and since we don't control
      the order in which the parallel operations happen, they might go down to
      the disk in almost any order.
      
      The code deals with this somewhat by having deep work queues for a single
      kernel thread, making it very likely that a single thread will process all
      the bios for a single inode.
      
      This patch introduces an explicitly ordered work queue.  As work structs
      are placed into the queue they are put onto the tail of a list.  They have
      three callbacks:
      
      ->func (cpu intensive processing here)
      ->ordered_func (order sensitive processing here)
      ->ordered_free (free the work struct, all processing is done)
      
      The func callback does the cpu intensive work, and when it completes
      the work struct is marked as done.
      
      Every time a work struct completes, the list is checked to see if the head
      is marked as done.  If so the ordered_func callback is used to do the
      order sensitive processing and the ordered_free callback is used to do
      any cleanup.  Then we loop back and check the head of the list again.
      
      This patch also changes the checksumming code to use the ordered workqueues.
      On a 4 drive array, it increases streaming writes from 280MB/s to 350MB/s.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
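      A single-threaded model of the ordered list described above, with the
      cpu-intensive func step reduced to setting a done flag.  This sketches
      the data structure, not the kernel thread pool that drives it:

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct work {
          int id;
          bool done;                  /* set when ->func has finished */
          struct work *next;
      };

      static struct work *head;
      static struct work **tail = &head;

      static void queue_work(struct work *w)
      {
          *tail = w;                  /* append at the tail of the list */
          tail = &w->next;
      }

      /* Run after any ->func completes: pop finished works off the head
       * so the order-sensitive step always runs in insertion order. */
      static void run_ordered_completions(void)
      {
          while (head && head->done) {
              struct work *w = head;

              head = w->next;
              if (!head)
                  tail = &head;
              printf("ordered step for work %d\n", w->id); /* ->ordered_func */
              free(w);                                     /* ->ordered_free */
          }
      }

      int main(void)
      {
          struct work *w[3];

          for (int i = 0; i < 3; i++) {
              w[i] = calloc(1, sizeof(*w[i]));
              w[i]->id = i;
              queue_work(w[i]);
          }
          /* Works 1 and 2 finish first: the head isn't done, nothing runs. */
          w[1]->done = w[2]->done = true;
          run_ordered_completions();
          /* Once the head finishes, all three run, strictly in order. */
          w[0]->done = true;
          run_ordered_completions();
          return 0;
      }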
  9. 31 Oct, 2008 1 commit