1. 30 May, 2018 3 commits
    • Btrfs: fix memory and mount leak in btrfs_ioctl_rm_dev_v2() · fd4e994b
      Omar Sandoval authored
      If we have invalid flags set, when we error out we must drop our writer
      counter and free the buffer we allocated for the arguments. This bug is
      trivially reproduced with the following program on 4.7+:
      
      	#include <fcntl.h>
      	#include <stdint.h>
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <unistd.h>
      	#include <sys/ioctl.h>
      	#include <sys/stat.h>
      	#include <sys/types.h>
      	#include <linux/btrfs.h>
      	#include <linux/btrfs_tree.h>
      
      	int main(int argc, char **argv)
      	{
      		struct btrfs_ioctl_vol_args_v2 vol_args = {
      			.flags = UINT64_MAX,
      		};
      		int ret;
      		int fd;
      
      		if (argc != 2) {
      			fprintf(stderr, "usage: %s PATH\n", argv[0]);
      			return EXIT_FAILURE;
      		}
      
      		fd = open(argv[1], O_WRONLY);
      		if (fd == -1) {
      			perror("open");
      			return EXIT_FAILURE;
      		}
      
      		ret = ioctl(fd, BTRFS_IOC_RM_DEV_V2, &vol_args);
      		if (ret == -1)
      			perror("ioctl");
      
      		close(fd);
      		return EXIT_SUCCESS;
      	}
      
      When unmounting the filesystem, we'll hit the
      WARN_ON(mnt_get_writers(mnt)) in cleanup_mnt(), and the leaked writer
      count may also prevent the filesystem from being remounted read-only.
      
      Fixes: 6b526ed7 ("btrfs: introduce device delete by devid")
      CC: stable@vger.kernel.org # 4.9+
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: Su Yue <suy.fnst@cn.fujitsu.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      fd4e994b
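      The fix restores the usual goto-unwind pattern: every exit taken after
      the write reference is acquired and the argument buffer is copied must
      release both. A minimal userspace sketch of that pattern follows; the
      function name, the flag test, and the counter are stand-ins that only
      mimic the kernel's mnt_want_write_file()/mnt_drop_write_file() and
      memdup_user() calls, not the actual patch:

      ```c
      #include <stdlib.h>

      static int writers; /* stand-in for the per-mount writer counter */

      static int rm_dev_v2_like(unsigned long long flags)
      {
              int ret = 0;
              char *args;

              writers++;                      /* mnt_want_write_file() */
              args = malloc(64);              /* memdup_user() of the ioctl args */
              if (!args) {
                      ret = -1;
                      goto drop_write;
              }

              if (flags & ~1ULL) {            /* unsupported flag bits set */
                      ret = -95;              /* -EOPNOTSUPP */
                      goto free_args;         /* the error path the bug skipped */
              }

              /* ... the actual device removal would happen here ... */

      free_args:
              free(args);                     /* kfree() */
      drop_write:
              writers--;                      /* mnt_drop_write_file() */
              return ret;
      }
      ```

      The bug was precisely that the invalid-flags exit returned without
      passing through the two cleanup labels, leaving the writer count lifted.
      
      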
    • btrfs: lzo: Harden inline lzo compressed extent decompression · de885e3e
      Qu Wenruo authored
      For an inlined extent, we only have one segment, thus fewer things to
      check. Furthermore, an inlined extent always has its csum in the leaf
      header, so corrupted data is less probable.
      
      Anyway, still check the header and segment header.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      de885e3e
    • btrfs: lzo: Add header length check to avoid potential out-of-bounds access · 314bfa47
      Qu Wenruo authored
      James Harvey reported that some corrupted compressed extent data can
      lead to various kernel memory corruption.
      
      Such corrupted extent data belongs to an inode with the NODATASUM flag,
      so the data csum won't help us detect such a bug.
      
      If we're lucky enough, KASAN can catch it like this:
      
      BUG: KASAN: slab-out-of-bounds in lzo_decompress_bio+0x384/0x7a0 [btrfs]
      Write of size 4096 at addr ffff8800606cb0f8 by task kworker/u16:0/2338
      
      CPU: 3 PID: 2338 Comm: kworker/u16:0 Tainted: G           O      4.17.0-rc5-custom+ #50
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
      Workqueue: btrfs-endio btrfs_endio_helper [btrfs]
      Call Trace:
       dump_stack+0xc2/0x16b
       print_address_description+0x6a/0x270
       kasan_report+0x260/0x380
       memcpy+0x34/0x50
       lzo_decompress_bio+0x384/0x7a0 [btrfs]
       end_compressed_bio_read+0x99f/0x10b0 [btrfs]
       bio_endio+0x32e/0x640
       normal_work_helper+0x15a/0xea0 [btrfs]
       process_one_work+0x7e3/0x1470
       worker_thread+0x1b0/0x1170
       kthread+0x2db/0x390
       ret_from_fork+0x22/0x40
      ...
      
      The offending compressed data has the following info:
      
      Header:			length 32768		(looks completely valid)
      Segment 0 Header:	length 3472882419	(obviously out of bounds)
      
      Then when handling segment 0, since it spans past the current page, we
      need to copy the compressed data to a temporary buffer in the
      workspace, and such a large size triggers an out-of-bounds memory
      access, corrupting the whole kernel.
      
      Fix it by adding extra checks on the header and segment headers to
      ensure we won't access out of bounds, and also check that the
      decompressed data won't go out of bounds.
      Reported-by: James Harvey <jamespharvey20@gmail.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ updated comments ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      314bfa47
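      The kind of validation the commit describes can be sketched in
      userspace C. This is a minimal sketch of the bounds checks, not the
      kernel patch itself; the on-disk layout assumed here (a 4-byte
      little-endian total length followed by 4-byte little-endian segment
      lengths), the PAGE_SIZE value, and the lzo1x worst-case-expansion
      formula are taken from the commit message and general lzo behavior:

      ```c
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define PAGE_SIZE 4096u
      /* lzo1x worst-case compressed size for x input bytes */
      #define LZO1X_WORST(x) ((x) + (x) / 16 + 64 + 3)

      static uint32_t read_le32(const unsigned char *p)
      {
              return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                     ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
      }

      /* Return true if the header and all segment headers stay in bounds. */
      static bool lzo_extent_sane(const unsigned char *buf, size_t in_len)
      {
              if (in_len < 4)
                      return false;
              uint32_t total = read_le32(buf);
              if (total < 4 || total > in_len)
                      return false;          /* header length out of bounds */

              size_t off = 4;
              while (off < total) {
                      if (off + 4 > total)
                              return false;  /* truncated segment header */
                      uint32_t seg = read_le32(buf + off);
                      off += 4;
                      if (seg == 0 || seg > LZO1X_WORST(PAGE_SIZE))
                              return false;  /* longer than worst case */
                      if (seg > total - off)
                              return false;  /* segment runs past the extent */
                      off += seg;
              }
              return true;
      }

      int main(void)
      {
              unsigned char good[16] = {0};
              good[0] = sizeof(good);        /* total length = 16 */
              good[4] = 8;                   /* one 8-byte segment */
              printf("good: %s\n",
                     lzo_extent_sane(good, sizeof(good)) ? "ok" : "rejected");

              /* The reported corruption: sane total, absurd segment length
                 3472882419 (0xceff faf3) as in the commit message. */
              unsigned char bad[16] = {0};
              bad[0] = sizeof(bad);
              memcpy(bad + 4, "\xf3\xfa\xff\xce", 4);
              printf("bad:  %s\n",
                     lzo_extent_sane(bad, sizeof(bad)) ? "ok" : "rejected");
              return 0;
      }
      ```

      The "bad" buffer models the offending extent above: a header length
      that looks valid and a segment 0 length that is obviously out of
      bounds, which this check rejects before any copy happens.
      
      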
  2. 29 May, 2018 10 commits
  3. 28 May, 2018 27 commits