1. 30 May, 2018 2 commits
    • Qu Wenruo
      btrfs: lzo: Harden inline lzo compressed extent decompression · de885e3e
      Qu Wenruo authored
      For an inlined extent we only have one segment, thus fewer things to check.
      Furthermore, an inlined extent always has its csum in the leaf header, so
      corrupted data is less likely.

      Still, check the header and the segment header anyway.
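      A minimal sketch of the kind of check this implies, assuming btrfs' LZO
      on-disk layout (a 4-byte little-endian total length, then a single 4-byte
      segment length followed by the segment data); the helpers read_le32() and
      check_inline_lzo_headers() are hypothetical names, not the kernel's:

      #include <stddef.h>
      #include <stdint.h>

      #define LZO_LEN 4                      /* each length field is 4 bytes */

      static uint32_t read_le32(const uint8_t *p)
      {
              return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                     ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
      }

      /* Return 0 if the inlined extent's headers look sane, -1 otherwise. */
      static int check_inline_lzo_headers(const uint8_t *data, size_t comp_len)
      {
              uint32_t tot_len, seg_len;

              if (comp_len < 2 * LZO_LEN)
                      return -1;             /* no room for both length fields */

              tot_len = read_le32(data);
              seg_len = read_le32(data + LZO_LEN);

              /* The header must describe exactly the data we were handed. */
              if (tot_len != comp_len)
                      return -1;

              /* The single segment must fit inside the claimed total length. */
              if (seg_len > tot_len - 2 * LZO_LEN)
                      return -1;

              return 0;
      }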
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Qu Wenruo
      btrfs: lzo: Add header length check to avoid potential out-of-bounds access · 314bfa47
      Qu Wenruo authored
      James Harvey reported that some corrupted compressed extent data can
      lead to various kinds of kernel memory corruption.

      Such corrupted extent data belongs to an inode with the NODATASUM flag,
      so the data csum won't help us detect such a bug.
      
      If we are lucky enough, KASAN can catch it like this:
      
      BUG: KASAN: slab-out-of-bounds in lzo_decompress_bio+0x384/0x7a0 [btrfs]
      Write of size 4096 at addr ffff8800606cb0f8 by task kworker/u16:0/2338
      
      CPU: 3 PID: 2338 Comm: kworker/u16:0 Tainted: G           O      4.17.0-rc5-custom+ #50
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
      Workqueue: btrfs-endio btrfs_endio_helper [btrfs]
      Call Trace:
       dump_stack+0xc2/0x16b
       print_address_description+0x6a/0x270
       kasan_report+0x260/0x380
       memcpy+0x34/0x50
       lzo_decompress_bio+0x384/0x7a0 [btrfs]
       end_compressed_bio_read+0x99f/0x10b0 [btrfs]
       bio_endio+0x32e/0x640
       normal_work_helper+0x15a/0xea0 [btrfs]
       process_one_work+0x7e3/0x1470
       worker_thread+0x1b0/0x1170
       kthread+0x2db/0x390
       ret_from_fork+0x22/0x40
      ...
      
      The offending compressed data has the following info:
      
      Header:			length 32768		(looks completely valid)
      Segment 0 Header:	length 3472882419	(obviously out of bounds)
      
      Then, when handling segment 0, since it goes beyond the current page, we
      need to copy the compressed data into a temporary buffer in the
      workspace; such a huge length triggers an out-of-bounds memory access,
      corrupting kernel memory.
      
      Fix it by adding extra checks on the header and the segment headers to
      ensure we never access out of bounds, and also check that the
      decompressed data won't run out of bounds.
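      A minimal sketch of those checks (not the actual lzo_decompress_bio()
      change; decompress_segments(), read_le32() and the workspace/output
      parameters are hypothetical stand-ins): walk the segment headers and
      reject any length that would overrun the compressed data or the
      temporary buffer before anything is copied:

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      #define LZO_LEN 4                      /* each length field is 4 bytes */

      static uint32_t read_le32(const uint8_t *p)
      {
              return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                     ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
      }

      static int decompress_segments(const uint8_t *cdata, size_t comp_len,
                                     uint8_t *workspace, size_t workspace_len,
                                     size_t out_len)
      {
              uint32_t tot_len;
              size_t cur = LZO_LEN;

              if (comp_len < LZO_LEN)
                      return -1;             /* no room for the main header */
              tot_len = read_le32(cdata);

              /* The header must not claim more data than we actually have. */
              if (tot_len > comp_len || tot_len < LZO_LEN)
                      return -1;

              while (cur < tot_len) {
                      uint32_t seg_len;

                      if (tot_len - cur < LZO_LEN)
                              return -1;     /* truncated segment header */
                      seg_len = read_le32(cdata + cur);
                      cur += LZO_LEN;

                      /* Segment must fit in the remaining compressed data... */
                      if (seg_len > tot_len - cur)
                              return -1;
                      /* ...and in the temporary copy buffer (the workspace). */
                      if (seg_len > workspace_len)
                              return -1;

                      /* Only now is it safe to stage the segment. */
                      memcpy(workspace, cdata + cur, seg_len);
                      /*
                       * Decompression itself is omitted in this sketch; a real
                       * implementation would also verify that the bytes produced
                       * (e.g. by lzo1x_decompress_safe()) never exceed the
                       * destination size, out_len, before copying them out.
                       */
                      (void)out_len;

                      cur += seg_len;
              }
              return 0;
      }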
      Reported-by: James Harvey <jamespharvey20@gmail.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ updated comments ]
      Signed-off-by: David Sterba <dsterba@suse.com>
  2. 29 May, 2018 10 commits
  3. 28 May, 2018 28 commits