Commit 181aaebd authored by Jan Kara, committed by Sasha Levin

ext4: fix races between buffered IO and collapse / insert range

Current code implementing FALLOC_FL_COLLAPSE_RANGE and
FALLOC_FL_INSERT_RANGE is prone to races with buffered writes and page
faults. If a buffered write or a write via mmap manages to squeeze in between
filemap_write_and_wait_range() and truncate_pagecache() in the fallocate
implementations, the written data is simply discarded by
truncate_pagecache() although it should have been shifted.

Fix the problem by moving the filemap_write_and_wait_range() call inside
i_mutex and i_mmap_sem. That way we are protected against races with
both buffered writes and page faults.
Signed-off-by: Jan Kara <jack@suse.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Mingming Cao <mingming.cao@oracle.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
parent 9621787d
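
To make the race described in the commit message concrete, here is a hypothetical user-space reproducer sketch. It is not part of this commit; the file name, 4 KiB block size, and loop counts are illustrative assumptions. One thread keeps dirtying a page behind the range being collapsed while another thread loops on FALLOC_FL_COLLAPSE_RANGE; on kernels without this fix, a write that slips in between filemap_write_and_wait_range() and truncate_pagecache() can be discarded instead of shifted.

/* Hypothetical reproducer sketch -- not part of the commit. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef FALLOC_FL_COLLAPSE_RANGE
#define FALLOC_FL_COLLAPSE_RANGE 0x08
#endif

#define BLKSZ 4096
#define NBLKS 16

static int fd;
static volatile int stop;

/* Keep rewriting a block that lies after the collapsed range, so the
 * fallocate loop below races against freshly dirtied page cache. */
static void *writer(void *arg)
{
        char buf[BLKSZ];

        (void)arg;
        memset(buf, 'D', sizeof(buf));
        while (!stop)
                pwrite(fd, buf, sizeof(buf), 10 * BLKSZ);
        return NULL;
}

int main(void)
{
        pthread_t t;

        fd = open("collapse_race", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (pthread_create(&t, NULL, writer, NULL)) {
                perror("pthread_create");
                return 1;
        }

        for (int i = 0; i < 10000; i++) {
                /* Re-extend the file, then collapse four blocks out of the
                 * middle; data above the range should end up shifted down. */
                if (ftruncate(fd, (off_t)NBLKS * BLKSZ))
                        break;
                fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 4 * BLKSZ, 4 * BLKSZ);
        }

        stop = 1;
        pthread_join(t, NULL);
        close(fd);
        return 0;
}

Build with "cc -pthread" and run it on an ext4 filesystem whose block size matches BLKSZ; whether the lost-write window is actually hit is timing dependent.
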
@@ -5423,21 +5423,7 @@ int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 			return ret;
 	}
 
-	/*
-	 * Need to round down offset to be aligned with page size boundary
-	 * for page size > block size.
-	 */
-	ioffset = round_down(offset, PAGE_SIZE);
-
-	/* Write out all dirty pages */
-	ret = filemap_write_and_wait_range(inode->i_mapping, ioffset,
-					   LLONG_MAX);
-	if (ret)
-		return ret;
-
-	/* Take mutex lock */
 	mutex_lock(&inode->i_mutex);
-
 	/*
 	 * There is no need to overlap collapse range with EOF, in which case
 	 * it is effectively a truncate operation
@@ -5462,6 +5448,27 @@ int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * page cache.
 	 */
 	down_write(&EXT4_I(inode)->i_mmap_sem);
+	/*
+	 * Need to round down offset to be aligned with page size boundary
+	 * for page size > block size.
+	 */
+	ioffset = round_down(offset, PAGE_SIZE);
+	/*
+	 * Write tail of the last page before removed range since it will get
+	 * removed from the page cache below.
+	 */
+	ret = filemap_write_and_wait_range(inode->i_mapping, ioffset, offset);
+	if (ret)
+		goto out_mmap;
+	/*
+	 * Write data that will be shifted to preserve them when discarding
+	 * page cache below. We are also protected from pages becoming dirty
+	 * by i_mmap_sem.
+	 */
+	ret = filemap_write_and_wait_range(inode->i_mapping, offset + len,
+					   LLONG_MAX);
+	if (ret)
+		goto out_mmap;
 	truncate_pagecache(inode, ioffset);
 
 	credits = ext4_writepage_trans_blocks(inode);
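
For reference, below is a minimal, hypothetical user-space sketch (not part of this commit) of the semantics the write-back ordering above protects: data that followed the collapsed range must reappear shifted down, not vanish with the discarded page cache. The file name and the 4 KiB block-size assumption are illustrative only.

/* Hypothetical semantics check -- not part of the commit. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef FALLOC_FL_COLLAPSE_RANGE
#define FALLOC_FL_COLLAPSE_RANGE 0x08
#endif

#define BLKSZ 4096

int main(void)
{
        char buf[BLKSZ], chk[BLKSZ];
        int fd = open("collapse_check", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Blocks 0-3 hold 'A', blocks 4-7 hold 'B', blocks 8-11 hold 'C'. */
        for (int blk = 0; blk < 12; blk++) {
                memset(buf, "ABC"[blk / 4], sizeof(buf));
                if (pwrite(fd, buf, sizeof(buf), (off_t)blk * BLKSZ) != BLKSZ) {
                        perror("pwrite");
                        return 1;
                }
        }
        /* Collapse the 'B' blocks: the 'C' data must shift down to 4 * BLKSZ. */
        if (fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 4 * BLKSZ, 4 * BLKSZ)) {
                perror("fallocate");
                return 1;
        }
        if (pread(fd, chk, sizeof(chk), 4 * BLKSZ) != BLKSZ) {
                perror("pread");
                return 1;
        }
        printf("first byte after the collapse point: %c (expected 'C')\n", chk[0]);
        close(fd);
        return 0;
}

Note how the patch splits the write-back into two ranges, [ioffset, offset] and [offset + len, LLONG_MAX]: the tail of the last page in front of the removed range and all data that will be shifted are flushed, while the removed range itself is skipped since its contents are discarded anyway.
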