1. 13 Jun, 2014 8 commits
  2. 10 Jun, 2014 32 commits
    • Btrfs: convert smp_mb__{before,after}_clear_bit · c7548af6
      Chris Mason authored
      The new call is smp_mb__{before,after}_atomic.  The __ gives us extra
      protection from the atomic rays.
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix scrub_print_warning to handle skinny metadata extents · 6eda71d0
      Liu Bo authored
      Skinny metadata extents are interpreted incorrectly in
      scrub_print_warning(), and we end up hitting the BUG() in
      btrfs_extent_inline_ref_size.
      Reported-by: Konstantinos Skarlatos <k.skarlatos@gmail.com>
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: make fsync work after cloning into a file · 7ffbb598
      Filipe Manana authored
      When cloning into a file, we were correctly replacing the extent
      items in the target range and removing the extent maps. However
      we weren't replacing the extent maps with new ones that point to
      the new extents - as a consequence, an incremental fsync (when the
      inode doesn't have the full sync flag) was a NOOP, since it relies
      on the existence of extent maps in the modified list of the inode's
      extent map tree, which was empty. Therefore add new extent maps to
      reflect the target clone range.
      
      A test case for xfstests follows.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: use right type to get real comparison · cd857dd6
      Liu Bo authored
      We want to make sure the pointer is still within the extent item,
      not to verify the memory it's pointing to.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: don't check nodes for extent items · 8a56457f
      Josef Bacik authored
      The backref code was looking at nodes as well as leaves when we tried to
      populate extent item entries.  This is not good, and although we got away
      with it for the most part because we'd skip entries where
      disk_bytenr != random_memory, sometimes the random memory would happen to
      match and suddenly boom.  This fixes that problem.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: don't release invalid page in btrfs_page_exists_in_range() · 6fdef6d4
      Filipe Manana authored
      In inode.c:btrfs_page_exists_in_range(), if the page we got from
      the radix tree is an exception entry, which can't be retried, we
      exit the loop with a non-NULL page and then call page_cache_release
      against it, which is not ok since it's not a valid page. This could
      also make us return true when we shouldn't.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: make sure we retry if page is a retriable exception · 809f9016
      Filipe Manana authored
      In inode.c:btrfs_page_exists_in_range(), if the page we get from the
      radix tree is an exception which should make us retry, set page to
      NULL in order to really retry, because otherwise we don't get another
      loop iteration executed (page != NULL makes the while loop exit).
      This also was making us call page_cache_release after exiting the loop,
      which isn't correct because page doesn't point to a valid page, and
      possibly return true from the function when we shouldn't.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: make sure we retry if we couldn't get the page · 91405151
      Filipe Manana authored
      In inode.c:btrfs_page_exists_in_range(), if we can't get the page
      we need to retry. However, we weren't retrying because we weren't
      setting page to NULL, which makes the while loop exit immediately
      and makes us call page_cache_release after exiting the loop, which
      is incorrect because our page lookup didn't succeed. This could
      also make us return true when we shouldn't.
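      The three btrfs_page_exists_in_range() fixes above all address the same
      retry-loop shape. The following is a minimal userspace sketch of that
      pattern, not the kernel code; lookup_page() and RETRY_SENTINEL are
      made-up names used only for illustration.

      #include <stdio.h>
      #include <stdlib.h>

      /* Stand-in for a radix tree lookup that can return an exception entry. */
      #define RETRY_SENTINEL ((void *)0x1)

      static int attempts;

      static void *lookup_page(void)
      {
              /* Pretend the lookup hits a "retry me" entry twice, then succeeds. */
              if (attempts++ < 2)
                      return RETRY_SENTINEL;
              return malloc(64);
      }

      int main(void)
      {
              void *page = NULL;

              while (!page) {
                      page = lookup_page();
                      if (page == RETRY_SENTINEL) {
                              /*
                               * The fixes boil down to this reset: without it,
                               * the while condition sees a non-NULL value, the
                               * loop exits holding an invalid pointer, and the
                               * caller then releases and misreads it.
                               */
                              page = NULL;
                      }
              }
              printf("got a valid page reference after %d attempts\n", attempts);
              free(page);
              return 0;
      }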
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: replace EINVAL with EOPNOTSUPP for dev_replace raid56 · c81d5767
      Gui Hecheng authored
      Returning EOPNOTSUPP is more user friendly than returning EINVAL:
      the user-space tool will then report that the dev_replace operation
      for raid56 is not currently supported, rather than reporting an
      invalid argument.
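      For reference, this is roughly what a user sees when a tool just prints
      strerror() of the returned errno (a purely illustrative snippet; the
      actual btrfs-progs output wording may differ).

      #include <stdio.h>
      #include <string.h>
      #include <errno.h>

      int main(void)
      {
              printf("EINVAL:     %s\n", strerror(EINVAL));     /* "Invalid argument" */
              printf("EOPNOTSUPP: %s\n", strerror(EOPNOTSUPP)); /* "Operation not supported" */
              return 0;
      }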
      Signed-off-by: Gui Hecheng <guihc.fnst@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • trivial: fs/btrfs/ioctl.c: fix typo s/substract/subtract/ · 93915584
      Antonio Ospite authored
      Signed-off-by: Antonio Ospite <ao2@ao2.it>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: linux-btrfs@vger.kernel.org
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix leaf corruption after __btrfs_drop_extents · 0b43e04f
      Liu Bo authored
      Several reports about leaf corruption have been floating on the list; one
      of them points to __btrfs_drop_extents(), and we found that the leaf
      becomes corrupted after __btrfs_drop_extents(). It's a rare case, but it
      does exist.
      
      The problem turns out to be btrfs_next_leaf() called in __btrfs_drop_extents().
      
      In btrfs_next_leaf(), we release the current path and re-search the last
      key of the leaf in order to locate the next leaf. We already take into
      account that balance operations may run between leaves during this
      'unlock and re-lock' dance, so we check the path again and advance it if
      there are now more items available.
      But things are a bit different if that last key happens to be removed and
      balance moves a bigger key in as the last one: btrfs_search_slot will
      return it with ret > 0. In other words, nothing changed in this leaf
      except the new last key, so we think we're okay because no more items
      were balanced in, and we think we can go to the next leaf.
      
      However, we should return that bigger key, otherwise we end up with leaf
      corruption. For example, in endio, skipping that key means that
      __btrfs_drop_extents() thinks it has dropped all extents matching the
      required range and finish_ordered_io can safely insert a new extent, but
      it actually hasn't, and this ends up corrupting the leaf.
      
      One may ask why our locking on the extent io tree doesn't work as
      expected, i.e. why it doesn't avoid this kind of race. The reason is that
      in __btrfs_drop_extents() we don't always find extents that are included
      within our locking range; in other words, extents can start before our
      search start, and in that case locking on the extent io tree doesn't
      protect us from the race.
      
      This takes the special case into account.
      Reviewed-by: Filipe Manana <fdmanana@gmail.com>
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: ensure btrfs_prev_leaf doesn't miss 1 item · 337c6f68
      Filipe Manana authored
      We might have had an item with the previous key in the tree right
      before we released our path. And after we released our path, that
      item might have been pushed to the first slot (0) of the leaf we
      were holding due to a tree balance. Alternatively, an item with the
      previous key can exist as the only element of a leaf (big fat item).
      Therefore account for these 2 cases, so that our callers (like
      btrfs_previous_item) don't miss an existing item with a key matching
      the previous key we computed above.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix clone to deal with holes when NO_HOLES feature is enabled · f82a9901
      Filipe Manana authored
      If the NO_HOLES feature is enabled, holes no longer have file extent
      items in the btree representing them. This made the clone operation
      ignore the gaps that exist between consecutive file extent items and
      therefore not create the holes at the destination. When not using the
      NO_HOLES feature, the holes were created at the destination.
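      To make the missing-hole symptom concrete, here is a minimal userspace
      sketch. Assumptions: a btrfs mount at /mnt, arbitrary file names, and
      the same BTRFS_IOC_CLONE_RANGE definition used by the reproducer further
      down in this log (src_length == 0 clones to the end of the source file).
      It clones a sparse source and prints where the destination's first hole
      is.

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <asm/types.h>
      #include <linux/ioctl.h>

      struct btrfs_ioctl_clone_range_args {
              __s64 src_fd;
              __u64 src_offset, src_length;
              __u64 dest_offset;
      };

      #define BTRFS_IOCTL_MAGIC 0x94
      #define BTRFS_IOC_CLONE_RANGE _IOW(BTRFS_IOCTL_MAGIC, 13, \
                                         struct btrfs_ioctl_clone_range_args)

      int main(void)
      {
              char buf[4096];
              struct btrfs_ioctl_clone_range_args args = { 0 };
              int src, dst;

              memset(buf, 'a', sizeof(buf));

              /* Source: 4K of data, a 1M hole, then 4K of data. */
              src = open("/mnt/src", O_CREAT | O_RDWR | O_TRUNC, 0644);
              pwrite(src, buf, sizeof(buf), 0);
              pwrite(src, buf, sizeof(buf), 1024 * 1024 + 4096);
              fsync(src);

              dst = open("/mnt/dst", O_CREAT | O_RDWR | O_TRUNC, 0644);

              args.src_fd = src;
              args.src_offset = 0;
              args.src_length = 0;            /* 0 = clone to the end of the source */
              args.dest_offset = 0;
              if (ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args) == -1)
                      perror("clone");
              fsync(dst);

              /* With the fix, the destination should report a hole right after 4K. */
              printf("first hole in dst at offset %lld\n",
                     (long long)lseek(dst, 0, SEEK_HOLE));
              close(src);
              close(dst);
              return 0;
      }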
      
      A test case for xfstests follows.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: free delayed node outside of root->inode_lock · 96493031
      Jeff Mahoney authored
      On heavy workloads, we're seeing soft lockup warnings on
      root->inode_lock in __btrfs_release_delayed_node. The low hanging fruit
      is to reduce the size of the critical section.
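      The shape of the change is generic enough to sketch in userspace; here a
      pthread mutex stands in for root->inode_lock and a plain struct for the
      delayed node (all names are illustrative, not the btrfs ones).

      #include <pthread.h>
      #include <stdlib.h>

      struct delayed_node {           /* illustrative stand-in, not the btrfs struct */
              int inode_id;
      };

      static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

      /* Before: the free happens while holding the lock. */
      static void release_node_slow(struct delayed_node *node)
      {
              pthread_mutex_lock(&inode_lock);
              /* ... unlink node from the per-root structures ... */
              free(node);             /* still inside the critical section */
              pthread_mutex_unlock(&inode_lock);
      }

      /* After: only the unlink stays under the lock; the free moves outside. */
      static void release_node_fast(struct delayed_node *node)
      {
              pthread_mutex_lock(&inode_lock);
              /* ... unlink node from the per-root structures ... */
              pthread_mutex_unlock(&inode_lock);
              free(node);             /* no longer contributes to lock hold time */
      }

      int main(void)
      {
              release_node_slow(calloc(1, sizeof(struct delayed_node)));
              release_node_fast(calloc(1, sizeof(struct delayed_node)));
              return 0;
      }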
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: replace EINVAL with ERANGE for resize when ULLONG_MAX · 902c68a4
      Gui Hecheng authored
      To be accurate about the error case,
      if the new size is beyond ULLONG_MAX, return ERANGE instead of EINVAL.
      Signed-off-by: Gui Hecheng <guihc.fnst@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix transaction leak during fsync call · b05fd874
      Filipe Manana authored
      If btrfs_log_dentry_safe() returns an error, we set ret to 1 and
      fall through with the goal of committing the transaction. However,
      in the case where the inode doesn't need a full sync, we would call
      btrfs_wait_ordered_range() against the target range for our inode,
      and if it returned an error, we would return without committing or
      ending the transaction.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: Avoid trucating page or punching hole in a already existed hole. · d7781546
      Qu Wenruo authored
      btrfs_punch_hole() will truncate unaligned pages or punch a hole in an
      already existing hole.
      This causes unneeded zeroed pages, or holes that split the original huge
      hole.

      This patch skips already existing holes before doing any page truncation
      or hole punching.
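      A minimal userspace sketch of the operation in question (it assumes a
      btrfs mount at /mnt and an arbitrary file name; the fallocate() call
      punches inside a range that is already a hole, which is exactly the case
      the patch short-circuits).

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <linux/falloc.h>

      int main(void)
      {
              char buf[4096];
              int fd = open("/mnt/punch-demo", O_CREAT | O_RDWR | O_TRUNC, 0644);

              if (fd == -1) {
                      perror("open");
                      return 1;
              }
              memset(buf, 'x', sizeof(buf));
              pwrite(fd, buf, sizeof(buf), 0);
              ftruncate(fd, 1024 * 1024);     /* bytes 4096..1M are already a hole */
              fsync(fd);

              /* Punch a hole inside the existing hole: with the patch this should
               * not zero pages or split the original hole into smaller pieces. */
              if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                            8192, 8192) == -1)
                      perror("fallocate");

              printf("first hole at %lld\n", (long long)lseek(fd, 0, SEEK_HOLE));
              close(fd);
              return 0;
      }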
      Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: update commit root on snapshot creation after orphan cleanup · 3821f348
      Filipe Manana authored
      On snapshot creation (either writable or read-only), we do orphan cleanup
      against the root of the snapshot. If the cleanup did remove any orphans,
      then the current root node will be different from the commit root node
      until the next transaction commit happens.
      
      A send operation always uses the commit root of a snapshot - this means
      it will see the orphans if it starts computing the send stream before the
      next transaction commit happens (triggered by a timer or by a sync()
      call, for example), which is when the commit root gets assigned a
      reference to the current root, where the orphans are not visible anymore.
      The consequence of send seeing the orphans is explained below.
      
      For example:
      
          mkfs.btrfs -f /dev/sdd
          mount -o commit=999 /dev/sdd /mnt
      
          # open a file with O_TMPFILE and leave it open
          # write some data to the file
          btrfs subvolume snapshot -r /mnt /mnt/snap1
      
          btrfs send /mnt/snap1 -f /tmp/send.data
      
      The send operation will fail with the following error:
      
          ERROR: send ioctl failed with -116: Stale file handle
      
      What happens here is that our snapshot has an orphan inode still visible
      through the commit root, that corresponds to the tmpfile. However send
      will attempt to call inode.c:btrfs_iget(), with the goal of reading the
      file's data, which will return -ESTALE because it will use the current
      root (and not the commit root) of the snapshot.
      
      Of course, there are other cases where we can get orphans, but this
      example using a tmpfile makes it much easier to reproduce the issue.
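      The two commented steps in the reproducer above can be performed with a
      small program along these lines (a sketch: it assumes Linux 3.11+ for
      O_TMPFILE and the btrfs mount at /mnt, and simply sleeps so the unlinked
      inode stays an orphan while the snapshot and send commands are run from
      another shell).

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <fcntl.h>
      #include <unistd.h>

      int main(void)
      {
              char buf[4096];
              /* An O_TMPFILE inode has no directory entry, so it stays an orphan
               * for as long as this descriptor is open. */
              int fd = open("/mnt", O_TMPFILE | O_RDWR, 0600);

              if (fd == -1) {
                      perror("open O_TMPFILE");
                      return 1;
              }
              memset(buf, 'x', sizeof(buf));
              write(fd, buf, sizeof(buf));
              fsync(fd);

              printf("tmpfile open, run the snapshot + send now\n");
              sleep(300);             /* keep the orphan alive while testing */
              close(fd);
              return 0;
      }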
      
      Therefore on snapshot creation, after calling btrfs_orphan_cleanup, if
      the commit root is different from the current root, just commit the
      transaction associated with the snapshot's root (if it exists), so that
      a send will not see any orphans that don't exist anymore. This also
      guarantees a send will always see the same content regardless of whether
      a transaction commit happened already before the send was requested and
      after the orphan cleanup (meaning the commit root and current roots are
      the same) or it hasn't happened yet (commit and current roots are
      different).
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: ioctl, don't re-lock extent range when not necessary · ff5df9b8
      Filipe Manana authored
      In ioctl.c:lock_extent_range(), after locking our target range, the
      ordered extent that btrfs_lookup_first_ordered_extent() returns us
      may not overlap our target range at all. In this case we would just
      unlock our target range, wait for any new ordered extents that overlap
      the range to complete, lock the range again and repeat all these steps
      until we don't get any ordered extent and the delalloc flag isn't set
      in the io tree for our target range.

      Therefore just stop if we get an ordered extent that doesn't overlap
      our target range and the delalloc flag isn't set for the range in
      the inode's io tree.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: avoid visiting all extent items when cloning a range · 2c463823
      Filipe Manana authored
      When cloning a range of a file, we were visiting all the extent items in
      the btree that belong to our source inode. We don't need to visit those
      extent items that don't overlap the range we are cloning, as doing so only
      makes us waste time and do unnecessary btree navigations (btrfs_next_leaf)
      for inodes that have a large number of file extent items in the btree.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: set dead flag on the right root when destroying snapshot · c55bfa67
      Filipe Manana authored
      We were setting the BTRFS_ROOT_SUBVOL_DEAD flag on the root of the
      parent of our target snapshot, instead of setting it in the target
      snapshot's root.
      
      This is easy to observe by running the following scenario:
      
          mkfs.btrfs -f /dev/sdd
          mount /dev/sdd /mnt
      
          btrfs subvolume create /mnt/first_subvol
          btrfs subvolume snapshot -r /mnt /mnt/mysnap1
      
          btrfs subvolume delete /mnt/first_subvol
          btrfs subvolume snapshot -r /mnt /mnt/mysnap2
      
          btrfs send -p /mnt/mysnap1 /mnt/mysnap2 -f /tmp/send.data
      
      The send command failed because the send ioctl returned -EPERM.
      A test case for xfstests follows.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: ensure readers see new data after a clone operation · c125b8bf
      Filipe Manana authored
      We were cleaning the clone target file range from the page cache before
      we replaced the file extent items in the fs tree. This was racy, as
      right after cleaning the relevant range from the page cache and before
      replacing the file extent items, a read against that range could be
      performed by another task and populate the page cache again with stale
      data (stale after the cloning finishes). This would result in reads
      performed after the clone operation successfully finishes getting old
      data (and potentially for a very long time). Therefore evict the pages
      after replacing the file extent items, so that subsequent reads will
      always get the new data.
      
      Similarly, we were prone to races while cloning the file extent items
      because we weren't locking the target range and waiting for any existing
      ordered extents against that range to complete. It was possible that,
      after cloning the extent items, a write operation that was performed
      before the clone operation and overlaps the same range would end up
      undoing all or part of the work the clone operation did (a worker task
      running inode.c:btrfs_finish_ordered_io). Therefore lock the target
      range in the io tree, wait for all pending ordered extents against that
      range to finish and then safely perform the cloning.
      
      The issue of reading stale data after the clone operation is easy to
      reproduce by running the following C program in a loop until it exits
      with return value 1.
      
       #include <unistd.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <string.h>
       #include <errno.h>
       #include <pthread.h>
       #include <fcntl.h>
       #include <assert.h>
       #include <asm/types.h>
       #include <linux/ioctl.h>
       #include <sys/stat.h>
       #include <sys/types.h>
       #include <sys/ioctl.h>
      
       #define SRC_FILE "/mnt/sdd/foo"
       #define DST_FILE "/mnt/sdd/bar"
       #define FILE_SIZE (16 * 1024)
       #define PATTERN_SRC 'X'
       #define PATTERN_DST 'Y'
      
      struct btrfs_ioctl_clone_range_args {
      	__s64 src_fd;
      	__u64 src_offset, src_length;
      	__u64 dest_offset;
      };
      
       #define BTRFS_IOCTL_MAGIC 0x94
       #define BTRFS_IOC_CLONE_RANGE _IOW(BTRFS_IOCTL_MAGIC, 13, \
      				   struct btrfs_ioctl_clone_range_args)
      
      static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
      static int clone_done = 0;
      static int reader_ready = 0;
      static int stale_data = 0;
      
      static void *reader_loop(void *arg)
      {
      	char buf[4096], want_buf[4096];
      
      	memset(want_buf, PATTERN_SRC, 4096);
      	pthread_mutex_lock(&mutex);
      	reader_ready = 1;
      	pthread_mutex_unlock(&mutex);
      
      	while (1) {
      		int done, fd, ret;
      
      		fd = open(DST_FILE, O_RDONLY);
      		assert(fd != -1);
      
      		pthread_mutex_lock(&mutex);
      		done = clone_done;
      		pthread_mutex_unlock(&mutex);
      
      		ret = read(fd, buf, 4096);
      		assert(ret == 4096);
      		close(fd);
      
      		if (done) {
      			ret = memcmp(buf, want_buf, 4096);
      			if (ret == 0) {
      				printf("Found new content\n");
      			} else {
      				printf("Found old content\n");
      				pthread_mutex_lock(&mutex);
      				stale_data = 1;
      				pthread_mutex_unlock(&mutex);
      			}
      			break;
      		}
      	}
      	return NULL;
      }
      
      int main(int argc, char *argv[])
      {
      	pthread_t reader;
      	int ret, i, fd;
      	struct btrfs_ioctl_clone_range_args clone_args;
      	int fd1, fd2;
      
      	ret = remove(SRC_FILE);
      	if (ret == -1 && errno != ENOENT) {
      		fprintf(stderr, "Error deleting src file: %s\n", strerror(errno));
      		return 1;
      	}
      	ret = remove(DST_FILE);
      	if (ret == -1 && errno != ENOENT) {
      		fprintf(stderr, "Error deleting dst file: %s\n", strerror(errno));
      		return 1;
      	}
      
      	fd = open(SRC_FILE, O_CREAT | O_WRONLY | O_TRUNC, S_IRWXU);
      	assert(fd != -1);
      	for (i = 0; i < FILE_SIZE; i++) {
      		char c = PATTERN_SRC;
      		ret = write(fd, &c, 1);
      		assert(ret == 1);
      	}
      	close(fd);
      	fd = open(DST_FILE, O_CREAT | O_WRONLY | O_TRUNC, S_IRWXU);
      	assert(fd != -1);
      	for (i = 0; i < FILE_SIZE; i++) {
      		char c = PATTERN_DST;
      		ret = write(fd, &c, 1);
      		assert(ret == 1);
      	}
      	close(fd);
              sync();
      
      	ret = pthread_create(&reader, NULL, reader_loop, NULL);
      	assert(ret == 0);
      	while (1) {
      		int r;
      		pthread_mutex_lock(&mutex);
      		r = reader_ready;
      		pthread_mutex_unlock(&mutex);
      		if (r) break;
      	}
      
      	fd1 = open(SRC_FILE, O_RDONLY);
      	if (fd1 < 0) {
      		fprintf(stderr, "Error open src file: %s\n", strerror(errno));
      		return 1;
      	}
      	fd2 = open(DST_FILE, O_RDWR);
      	if (fd2 < 0) {
      		fprintf(stderr, "Error open dst file: %s\n", strerror(errno));
      		return 1;
      	}
      	clone_args.src_fd = fd1;
      	clone_args.src_offset = 0;
      	clone_args.src_length = 4096;
      	clone_args.dest_offset = 0;
      	ret = ioctl(fd2, BTRFS_IOC_CLONE_RANGE, &clone_args);
      	assert(ret == 0);
      	close(fd1);
      	close(fd2);
      
      	pthread_mutex_lock(&mutex);
      	clone_done = 1;
      	pthread_mutex_unlock(&mutex);
      	ret = pthread_join(reader, NULL);
      	assert(ret == 0);
      
      	pthread_mutex_lock(&mutex);
      	ret = stale_data ? 1 : 0;
      	pthread_mutex_unlock(&mutex);
      	return ret;
      }
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • fs: btrfs: volumes.c: Fix for possible null pointer dereference · 8321cf25
      Rickard Strandqvist authored
      Otherwise there is a risk of a null pointer dereference.

      This was largely found by using the static code analysis tool cppcheck.
      Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: allocate raid type kobjects dynamically · c1895442
      Jeff Mahoney authored
      We are currently allocating the raid-type kobjects in an array embedded
      in the space_info object when we allocate the space_info. When a user
      does something like:
      
      # btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt
      # btrfs balance start -mconvert=single -dconvert=single /mnt -f
      # btrfs balance start -mconvert=raid1 -dconvert=raid1 /
      
      We can end up with memory corruption since the kobject hasn't
      been reinitialized properly and the name pointer was left set.
      
      The rationale behind allocating them statically was to avoid
      creating a separate kobject container that just contained the
      raid type. It used the index in the array to determine the raid type.
      
      Ultimately, though, this wastes more memory than it saves in all
      but the most complex scenarios and introduces kobject lifetime
      questions.
      
      This patch allocates the kobjects dynamically instead. Note that
      we also remove the kobject_get/put of the parent kobject since
      kobject_add and kobject_del do that internally.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reported-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: send, use the right limits for xattr names and values · 7e3ae33e
      Filipe Manana authored
      We were limiting the sum of the xattr name and value lengths to PATH_MAX,
      which is not correct, especially on filesystems created with btrfs-progs
      v3.12 or higher, where the default leaf size is max(16384, PAGE_SIZE), or
      on systems with page sizes larger than 4096 bytes.

      Xattrs have their own specific maximum name and value lengths, which
      depend on the leaf size; therefore use these limits to be able to send
      xattrs with sizes larger than PATH_MAX.
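      A quick way to see why PATH_MAX is the wrong bound (a sketch; it assumes
      a btrfs filesystem with a 16KB leaf mounted at /mnt and an existing file
      /mnt/somefile - both names are placeholders):

      #include <stdio.h>
      #include <string.h>
      #include <limits.h>
      #include <sys/xattr.h>

      int main(void)
      {
              char value[8192];       /* deliberately larger than PATH_MAX (4096) */

              memset(value, 'v', sizeof(value));
              printf("PATH_MAX = %d, value size = %zu\n", PATH_MAX, sizeof(value));

              /* On a 16KB-leaf btrfs this succeeds, so send must not clamp the
               * xattr name+value size to PATH_MAX when serializing it. */
              if (setxattr("/mnt/somefile", "user.big", value, sizeof(value), 0) == -1)
                      perror("setxattr");
              return 0;
      }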
      
      A test case for xfstests follows.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: send, don't error in the presence of subvols/snapshots · 1af56070
      Filipe Manana authored
      If we are doing an incremental send and the base snapshot has a
      directory with name X that doesn't exist anymore in the second
      snapshot and a new subvolume/snapshot exists in the second snapshot
      that has the same name as the directory (name X), the incremental
      send would fail with an -ENOENT error. This is because it attempts
      to look up an inode with a number matching the objectid of a
      root, which doesn't exist.
      
      Steps to reproduce:
      
          mkfs.btrfs -f /dev/sdd
          mount /dev/sdd /mnt
      
          mkdir /mnt/testdir
          btrfs subvolume snapshot -r /mnt /mnt/mysnap1
      
          rmdir /mnt/testdir
          btrfs subvolume create /mnt/testdir
          btrfs subvolume snapshot -r /mnt /mnt/mysnap2
      
          btrfs send -p /mnt/mysnap1 /mnt/mysnap2 -f /tmp/send.data
      
      A test case for xfstests follows.
      Reported-by: Robert White <rwhite@pobox.com>
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: async delayed refs · a79b7d4b
      Chris Mason authored
      Delayed extent operations are triggered during transaction commits.
      The goal is to queue up a healthy batch of changes to the extent
      allocation tree and run through them in bulk.

      This farms them off to async helper threads.  The goal is to have the
      bulk of the delayed operations done in the background, but this is
      also important to limit our stack footprint.
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: split up __extent_writepage to lower stack usage · 40f76580
      Chris Mason authored
      __extent_writepage has two unrelated parts.  First it does the delayed
      allocation dance and second it does the mapping and IO for the page
      we're actually writing.
      
      This splits it up into those two parts so the stack from one doesn't
      impact the stack from the other.
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: Drop EXTENT_UPTODATE check in hole punching and direct locking · fc4adbff
      Alex Gartrell authored
      In these instances, we are trying to determine if a page has been accessed
      since we began the operation for the sake of retry.  This is easily
      accomplished by doing a gang lookup in the page mapping radix tree, and it
      saves us the dependency on the flag (so that we might eventually delete
      it).
      
      btrfs_page_exists_in_range borrows heavily from find_get_page, replacing
      the radix tree look up with a gang lookup of 1, so that we can find the
      next highest page >= index and see if it falls into our lock range.
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Alex Gartrell <agartrell@fb.com>
    • Btrfs: cut down stack usage in btree_write_cache_pages · 0e378df1
      Chris Mason authored
      This adds noinline_for_stack to two helpers used by
      btree_write_cache_pages.  It shaves us down from 424 bytes on the
      stack to 280.
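      In the kernel, noinline_for_stack is simply noinline with a
      self-documenting name. A userspace sketch of the idea, with made-up
      function names: the helper's large scratch buffer only occupies stack
      while the helper itself runs, instead of being folded into the caller's
      frame for its whole lifetime.

      #include <stdio.h>
      #include <string.h>

      /* Userspace stand-in for the kernel's noinline_for_stack annotation. */
      #define noinline_for_stack __attribute__((noinline))

      static noinline_for_stack void process_chunk(char *out, size_t len)
      {
              char scratch[256];      /* lives only while this helper runs */

              memset(scratch, 'x', sizeof(scratch));
              memcpy(out, scratch, len < sizeof(scratch) ? len : sizeof(scratch));
      }

      static void write_cache_pages_like(void)
      {
              char out[64];

              /*
               * Because process_chunk() cannot be inlined, its 256-byte scratch
               * buffer is not added to this function's frame; each helper pays
               * for its own stack only while it is actually executing.
               */
              process_chunk(out, sizeof(out));
              printf("%.8s\n", out);
      }

      int main(void)
      {
              write_cache_pages_like();
              return 0;
      }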
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: break up __btrfs_write_out_cache to cut down stack usage · d4452bc5
      Chris Mason authored
      __btrfs_write_out_cache was one of our stack pigs.  This breaks it
      up into helper functions and slims it down to 194 bytes.
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: free tmp ulist for qgroup rescan · 2a108409
      Josef Bacik authored
      Memory leaks are bad mmkay?
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>