Commit 14c14414 authored by Maxim Patlasov, committed by Miklos Szeredi

fuse: hold i_mutex in fuse_file_fallocate()

Changing the size of a file on the server and updating the local size
(fuse_write_update_size) should always be protected by inode->i_mutex.
Otherwise a race like this is possible:

1. Process 'A' calls fallocate(2) to extend the file (mode without
FALLOC_FL_KEEP_SIZE). fuse_file_fallocate() sends a FUSE_FALLOCATE
request to the server.
2. Process 'B' calls ftruncate(2), shrinking the file. fuse_do_setattr()
sends a shrinking FUSE_SETATTR request to the server and updates the
local i_size by i_size_write(inode, outarg.attr.size).
3. Process 'A' resumes execution of fuse_file_fallocate() and calls
fuse_write_update_size(inode, offset + length). But 'offset + length'
was made stale by the ftruncate(2) of the previous step, so the local
i_size ends up wrong. (A userspace sketch of this interleaving follows
below.)
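
As an illustration only (this reproducer is not part of the commit; the
file path, sizes, and the fact that it reliably hits the window are all
assumptions), a minimal userspace sketch that races the two calls against
a file on a FUSE mount:

	/* race_fallocate.c -- hypothetical reproducer, not from this commit.
	 * Process 'A' extends the file with fallocate(2) while process 'B'
	 * shrinks it with ftruncate(2); without i_mutex held across
	 * fuse_file_fallocate(), the size reported by fstat(2) afterwards
	 * can be the stale 'offset + length' instead of the truncated size. */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		struct stat st;
		int fd;

		if (argc < 2)
			return 1;
		fd = open(argv[1], O_RDWR | O_CREAT, 0644); /* file on a FUSE mount */
		if (fd < 0)
			return 1;

		if (fork() == 0) {
			/* Process 'A': extend to 1 MiB, no FALLOC_FL_KEEP_SIZE. */
			fallocate(fd, 0, 0, 1 << 20);
			_exit(0);
		}

		/* Process 'B': concurrently shrink the file. */
		ftruncate(fd, 4096);
		wait(NULL);

		fstat(fd, &st);
		printf("final i_size: %lld\n", (long long)st.st_size);
		return 0;
	}

The race is timing-dependent, so a single run may not expose it; the point
is only to make the interleaving of steps 1-3 concrete.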

Changed in v2 (thanks to Brian and Anand for the suggestions):
 - made the relationship between mutex_lock() and fuse_set_nowrite(inode)
   more explicit and clear.
 - updated the patch description to use ftruncate(2) in the example.
Signed-off-by: Maxim V. Patlasov <MPatlasov@parallels.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
parent 8177a9d7
@@ -2470,13 +2470,16 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
 		.mode = mode
 	};
 	int err;
+	bool lock_inode = !(mode & FALLOC_FL_KEEP_SIZE) ||
+			  (mode & FALLOC_FL_PUNCH_HOLE);
 
 	if (fc->no_fallocate)
 		return -EOPNOTSUPP;
 
-	if (mode & FALLOC_FL_PUNCH_HOLE) {
+	if (lock_inode) {
 		mutex_lock(&inode->i_mutex);
-		fuse_set_nowrite(inode);
+		if (mode & FALLOC_FL_PUNCH_HOLE)
+			fuse_set_nowrite(inode);
 	}
 
 	req = fuse_get_req_nopages(fc);
@@ -2511,8 +2514,9 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
 
 	fuse_invalidate_attr(inode);
 
 out:
-	if (mode & FALLOC_FL_PUNCH_HOLE) {
-		fuse_release_nowrite(inode);
+	if (lock_inode) {
+		if (mode & FALLOC_FL_PUNCH_HOLE)
+			fuse_release_nowrite(inode);
 		mutex_unlock(&inode->i_mutex);
 	}
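
For readability, here is the locking structure of fuse_file_fallocate()
after the patch, abridged to the statements touched by the two hunks
above (the elided middle sends the FUSE_FALLOCATE request and calls
fuse_write_update_size() when FALLOC_FL_KEEP_SIZE is not set):

	bool lock_inode = !(mode & FALLOC_FL_KEEP_SIZE) ||
			  (mode & FALLOC_FL_PUNCH_HOLE);

	/* Take i_mutex whenever i_size may change, not only for punch-hole,
	 * so fuse_write_update_size() cannot race with ftruncate(2). */
	if (lock_inode) {
		mutex_lock(&inode->i_mutex);
		if (mode & FALLOC_FL_PUNCH_HOLE)
			fuse_set_nowrite(inode);	/* punch-hole also stops writers */
	}

	/* ... send FUSE_FALLOCATE, fuse_write_update_size(), ... */

out:
	if (lock_inode) {
		if (mode & FALLOC_FL_PUNCH_HOLE)
			fuse_release_nowrite(inode);
		mutex_unlock(&inode->i_mutex);
	}

The nowrite protection stays punch-hole-only; what changes is that
i_mutex now also covers the size-extending case that the race above
exploits.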