1. 13 May, 2011 3 commits
    • btrfs: quasi-round-robin for chunk allocation · 73c5de00
      Arne Jansen authored
      In a multi device setup, the chunk allocator currently always allocates
      chunks on the devices in the same order. This leads to a very uneven
      distribution, especially with RAID1 or RAID10 and an uneven number of
      devices.
      This patch always sorts the devices before allocating, and allocates the
      stripes on the devices with the most available space, as long as there
      is enough space available. In a low space situation, it first tries to
      maximize striping.
      The patch also simplifies the allocator and reduces the checks for
      corner cases.
      The simplification is done by several means. First, it defines the
      properties of each RAID type upfront. These properties are used afterwards
      instead of differentiating cases in several places.
      Second, the old allocator defined a minimum stripe size for each block
      group type, tried to find a large enough chunk, and if this failed just
      allocated a smaller one. This is now done in one step. The largest possible
      chunk (up to max_chunk_size) is searched for and allocated.
      Because we now have only one pass, the allocation of the map (struct
      map_lookup) is moved down to the point where the number of stripes is
      already known. This way we avoid reallocation of the map.
      We still avoid allocating stripes that are not a multiple of STRIPE_SIZE.
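      The sort-then-fill policy described above can be modeled in a small
      userspace sketch (a hedged illustration only; names, types, and the
      placement rule are simplified stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model: sort devices by available space (descending) and
 * place one stripe on each of the devices with the most room, as long
 * as each has enough space for a stripe. */
struct dev { long long avail; int id; };

static int cmp_avail_desc(const void *a, const void *b)
{
    const struct dev *da = a, *db = b;
    if (db->avail > da->avail) return 1;
    if (db->avail < da->avail) return -1;
    return 0;
}

/* Returns the number of stripes placed; out[] receives device ids. */
int place_stripes(struct dev *devs, int ndevs, int nstripes,
                  long long stripe_size, int *out)
{
    int i, placed = 0;

    qsort(devs, ndevs, sizeof(*devs), cmp_avail_desc);
    for (i = 0; i < ndevs && placed < nstripes; i++) {
        if (devs[i].avail >= stripe_size)
            out[placed++] = devs[i].id;
    }
    return placed;
}
```

      With three devices of unequal size, the two stripes of a RAID1 chunk
      land on the two largest devices, which is what evens out the
      distribution over time.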
    • btrfs: heed alloc_start · a9c9bf68
      Arne Jansen authored
      Currently alloc_start is disregarded if the requested
      chunk size is bigger than (device size - alloc_start),
      but smaller than the device size.
      The only situation where I can see this making sense
      is when a chunk equal to the size of the device was
      requested. That was possible because the allocator failed to
      take alloc_start into account when calculating the requested
      chunk size. As this gets fixed by this patch, the workaround
      is no longer necessary.
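      The corrected accounting is simple arithmetic; a minimal sketch
      (the helper name is illustrative, not a btrfs function):

```c
#include <assert.h>

/* Illustrative model: space usable for chunk allocation is what lies
 * above alloc_start, never the raw device size. */
long long usable_bytes(long long dev_size, long long alloc_start)
{
    if (alloc_start >= dev_size)
        return 0;               /* nothing above the reserved start */
    return dev_size - alloc_start;
}
```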
    • btrfs: move btrfs_cmp_device_free_bytes to super.c · bcd53741
      Arne Jansen authored
      This function won't be used here anymore, so move it to super.c, where
      it is used for the df calculation.
  2. 25 Apr, 2011 8 commits
  3. 18 Apr, 2011 1 commit
    • Btrfs: fix free space cache leak · f65647c2
      Chris Mason authored
      The free space caching code was recently reworked to
      cache all the pages it needed instead of using find_get_page everywhere.
      
      One loop was missed though, so it ended up leaking pages.  This fixes
      it to use our page array instead of find_get_page.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  4. 16 Apr, 2011 2 commits
  5. 15 Apr, 2011 1 commit
    • Btrfs: don't force chunk allocation in find_free_extent · 0e4f8f88
      Chris Mason authored
      find_free_extent likes to allocate in contiguous clusters,
      which makes writeback faster, especially on SSD storage.  As
      the FS fragments, these clusters become harder to find and we have
      to decide between allocating a new chunk to make more clusters
      or giving up on the cluster to allocate from the free space
      we have.
      
      Right now it creates too many chunks, and you can end up with
      a whole FS that is mostly empty metadata chunks.  This commit
      changes the allocation code to be more strict and only
      allocate new chunks when we've made good use of the chunks we
      already have.
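      The stricter policy boils down to a utilization gate before chunk
      allocation. A hedged sketch (the 70% threshold and names are
      illustrative, not the values the commit uses):

```c
#include <assert.h>

/* Sketch: rather than allocating a new chunk whenever clustering
 * fails, only do so once the chunks we already have are well used. */
int should_alloc_chunk(long long bytes_used, long long bytes_total)
{
    if (bytes_total == 0)
        return 1;               /* no chunks yet: allocate one */
    return bytes_used * 100 >= bytes_total * 70;
}
```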
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  6. 13 Apr, 2011 5 commits
  7. 12 Apr, 2011 8 commits
  8. 08 Apr, 2011 8 commits
    • Btrfs: check for duplicate iov_base's when doing dio reads · 93a54bc4
      Josef Bacik authored
      Apparently it is ok to submit a read to an IDE device with the same target page
      for different offsets.  This is what Windows does under qemu.  The problem is
      under DIO we expect them to be different buffers for checksumming reasons, and
      so this sort of thing will result in checksum errors, when in reality the file
      is fine.  So when reading, check to make sure that all iov bases are different,
      and if they aren't fall back to buffered mode, since that will work out right.
      Thanks,
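      The check amounts to a pairwise comparison of the iovec base
      pointers. A minimal userspace model (struct and function names are
      illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct iovec_m { void *iov_base; size_t iov_len; };

/* If any two iovecs share a base pointer, refuse direct I/O; the
 * caller then falls back to buffered reads. */
int all_bases_distinct(const struct iovec_m *iov, int n)
{
    int i, j;

    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (iov[i].iov_base == iov[j].iov_base)
                return 0;
    return 1;
}
```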
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: reuse the extent_map we found when calling btrfs_get_extent · 16d299ac
      Josef Bacik authored
      In btrfs_get_block_direct we call btrfs_get_extent to lookup the extent for the
      range that we are looking for.  If we don't find an extent, btrfs_get_extent
      will insert an extent_map for that area and mark it as a hole.  So it does the
      job of allocating a new extent map and inserting it into the io tree.  But if
      we're creating a new extent we free it up and redo all of that work.  So instead
      pass the em to btrfs_new_extent_direct(), and if it works just allocate the
      disk space and set it up properly and bypass the freeing/allocating of a new
      extent map and the expensive operation of inserting the thing into the io_tree.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: do not use async submit for small DIO io's · 1ae39938
      Josef Bacik authored
      When looking at our DIO performance Chris said that for small IO's doing the
      async submit stuff tends to be more overhead than it's worth.  With this on top
      of my other fixes I get about a 17-20% speedup doing a sequential dd with 4k
      IO's.  Basically if we don't have to split the bio for the map length it's small
      enough to be directly submitted, otherwise go back to the async submit.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: don't split dio bios if we don't have to · 02f57c7a
      Josef Bacik authored
      We have been unconditionally allocating a new bio and re-adding all pages from
      our original bio to the new bio.  This is needed if our original bio is larger
      than our stripe size, but if it is smaller than the stripe size then there is no
      need to do this.  So check the map length and if we are under that then go ahead
      and submit the original bio.  Thanks,
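      The test used in this and the previous DIO patch reduces to: does
      the bio fit within the map length returned for the stripe? A hedged
      sketch of that decision (names are illustrative):

```c
#include <assert.h>

/* A bio that fits within the map length needn't be split, and (per the
 * previous commit) is small enough to be submitted synchronously. */
int needs_split(long long bio_bytes, long long map_length)
{
    return bio_bytes > map_length;
}

/* Number of submissions when each may cover at most map_length bytes. */
int nr_submits(long long bio_bytes, long long map_length)
{
    return (int)((bio_bytes + map_length - 1) / map_length);
}
```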
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: do not call btrfs_update_inode in endio if nothing changed · 1ef30be1
      Josef Bacik authored
      In the DIO code we often don't update the i_disk_size because the i_size isn't
      updated until after the DIO is completed, so basically we are allocating a path,
      doing a search, and updating the inode item for no reason since nothing changed.
      btrfs_ordered_update_i_size will return 1 if it didn't update i_disk_size, so
      only run btrfs_update_inode if btrfs_ordered_update_i_size returns 0.  Thanks,
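      The calling convention is easy to model: the size-update helper
      returns 1 when nothing changed, 0 when it updated, and the caller
      only pays for the expensive inode update in the latter case (a
      simplified model; struct and names are illustrative):

```c
#include <assert.h>

struct inode_m { long long disk_size; int updates; };

/* Returns 1 if nothing changed, 0 if disk_size was advanced. */
static int update_i_size(struct inode_m *ino, long long new_size)
{
    if (new_size <= ino->disk_size)
        return 1;
    ino->disk_size = new_size;
    return 0;
}

void endio_update(struct inode_m *ino, long long new_size)
{
    if (update_i_size(ino, new_size) == 0)
        ino->updates++;         /* stands in for btrfs_update_inode() */
}
```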
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: map the inode item when doing fill_inode_item · 12ddb96c
      Josef Bacik authored
      Instead of calling kmap_atomic for every thing we set in the inode item, map the
      entire inode item at the start and unmap it at the end.  This makes a sequential
      dd of 400mb O_DIRECT something like 1% faster.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: only retry transaction reservation once · 06d5a589
      Josef Bacik authored
      I saw a lockup where we kept getting into this start transaction->commit
      transaction loop because of ENOSPC.  The fact is if we fail to make our
      reservation, we've tried _everything_ several times, so we only need to try and
      commit the transaction once, and if that doesn't work then we really are out of
      space and need to just exit.  Thanks,
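      The retry-once shape can be sketched with a toy reservation model
      (all names and the accounting are illustrative; a commit is modeled
      as releasing pinned space back to the free pool):

```c
#include <assert.h>

struct space { int free; int pinned; };

static int try_reserve(struct space *s, int need)
{
    if (s->free >= need) { s->free -= need; return 0; }
    return -1;
}

static void commit_transaction(struct space *s)
{
    s->free += s->pinned;       /* commit releases pinned space */
    s->pinned = 0;
}

/* Commit and retry exactly once; after that we really are out of
 * space, so give up instead of looping. */
int reserve_with_one_retry(struct space *s, int need)
{
    if (try_reserve(s, need) == 0)
        return 0;
    commit_transaction(s);
    if (try_reserve(s, need) == 0)
        return 0;
    return -1;                  /* stands in for -ENOSPC */
}
```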
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: deal with the case that we run out of space in the cache · be1a12a0
      Josef Bacik authored
      Currently we don't handle running out of space in the cache, so to fix this we
      keep track of how far in the cache we are.  Then we only dirty the pages if we
      successfully modify all of them, otherwise if we have an error or run out of
      space we can just drop them and not worry about the vm writing them out.
      Thanks,
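      The "dirty only on full success" pattern looks roughly like this
      (a userspace sketch with a fixed-size buffer standing in for the
      cache pages; names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct cache_io { char buf[16]; size_t pos; int dirty; };

/* Returns 0 on success, -1 when the cache ran out of room.  The dirty
 * flag is only set after everything fit, so on failure the pages stay
 * clean and the VM never writes out a half-built cache. */
int cache_write_all(struct cache_io *io, const char *data, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++) {
        if (io->pos >= sizeof(io->buf))
            return -1;          /* out of space: leave pages clean */
        io->buf[io->pos++] = data[i];
    }
    io->dirty = 1;
    return 0;
}
```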
      
      Tested-by: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
      Signed-off-by: Josef Bacik <josef@redhat.com>
  9. 05 Apr, 2011 4 commits
    • Btrfs: don't warn in btrfs_add_orphan · c9ddec74
      Josef Bacik authored
      When I moved the orphan adding to btrfs_truncate I missed the fact that during
      orphan cleanup we just add the orphan items to the orphan list without going
      through btrfs_orphan_add, which results in lots of warnings on mount if you have
      any orphan items that need to be truncated.  Just remove this warning since it's
      ok, this will allow all of the normal space accounting take place.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix free space cache when there are pinned extents and clusters V2 · 43be2146
      Josef Bacik authored
      I noticed a huge problem with the free space cache that was presenting
      as an early ENOSPC.  Turns out when writing the free space cache out I
      forgot to take into account pinned extents and more importantly
      clusters.  This would result in us leaking free space every time we
      unmounted the filesystem and remounted it.
      
      I fix this by making sure to check and see if the current block group
      has a cluster and writing out any entries that are in the cluster to the
      cache, as well as writing any pinned extents we currently have to the
      cache since those will be available for us to use the next time the fs
      mounts.
      
      This patch also adds a check to the end of load_free_space_cache to make
      sure we got the right amount of free space cache, and if not make sure
      to clear the cache and re-cache the old fashioned way.
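      The write-out and load-time check described above can be modeled
      roughly as follows (names and the accounting are illustrative, not
      the actual cache format):

```c
#include <assert.h>

/* Free space that must reach the on-disk cache: plain free-space
 * entries, plus entries currently held by a cluster, plus pinned
 * extents that become free once the transaction commits. */
long long cache_total(long long free_entries, long long cluster_bytes,
                      long long pinned_bytes)
{
    return free_entries + cluster_bytes + pinned_bytes;
}

/* Load-time sanity check: on mismatch, discard the cache and rebuild
 * it the old-fashioned way. */
int cache_usable(long long loaded, long long expected)
{
    return loaded == expected;
}
```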
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Fix uninitialized root flags for subvolumes · 08fe4db1
      Li Zefan authored
      root_item->flags and root_item->byte_limit are not initialized when
      a subvolume is created. This bug is not revealed until we added
      readonly snapshot support - now you mount a btrfs filesystem and you
      may find the subvolumes in it are readonly.
      
      To work around this problem, we steal a bit from root_item->inode_item->flags,
      and use it to indicate if those fields have been properly initialized.
      When we read a tree root from disk, we check if the bit is set, and if
      not we'll set the flag and initialize the two fields of the root item.
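      The compatibility trick is a spare bit in an already-persisted field
      marking whether the newer fields were ever initialized. A hedged
      sketch (the bit value and struct layout are illustrative, not the
      on-disk constants):

```c
#include <assert.h>
#include <stdint.h>

#define FLAGS_INITIALIZED (1ULL << 31)   /* illustrative bit */

struct root_item_m {
    uint64_t inode_flags;  /* persisted since the beginning */
    uint64_t flags;        /* added later; may hold garbage on old fs */
    uint64_t byte_limit;
};

/* On read: if the marker bit is clear, the newer fields were never
 * written, so zero them and set the bit. */
void sanitize_root(struct root_item_m *r)
{
    if (!(r->inode_flags & FLAGS_INITIALIZED)) {
        r->inode_flags |= FLAGS_INITIALIZED;
        r->flags = 0;
        r->byte_limit = 0;
    }
}
```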
      Reported-by: Andreas Philipp <philipp.andreas@gmail.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Tested-by: Andreas Philipp <philipp.andreas@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • btrfs: clear __GFP_FS flag in the space cache inode · adae52b9
      Miao Xie authored
      The object id of the space cache inode's key is allocated from the relative
      root, just like a regular file's. So we can't identify the space cache inode
      by checking the object id of the inode's key, and we have to clear the
      __GFP_FS flag at the time we look up the space cache inode.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>