Commit 0d90d638 authored by Linus Torvalds

Merge tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, a couple of sysfs entries were introduced to tune the
  f2fs at runtime.

  In addition, f2fs starts to support inline_data and improves the
  read/write performance in some workloads by refactoring bio-related
  flows.

  This patch-set includes the following major enhancement patches.
   - support inline_data
   - refactor bio operations such as merge operations and rw type
     assignment
   - enhance the direct IO path
   - enhance bio operations
   - truncate a node page when it becomes obsolete
   - add sysfs entries: small_discards, max_victim_search, and
     in-place-update
   - add a sysfs entry to control max_victim_search

  The other bug fixes are as follows.
   - fix a bug in truncate_partial_nodes
   - avoid warnings during sparse and build process
   - fix error handling flows
   - fix potential bit overflows

  And, there are a bunch of cleanups"

* tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (95 commits)
  f2fs: drop obsolete node page when it is truncated
  f2fs: introduce NODE_MAPPING for code consistency
  f2fs: remove the orphan block page array
  f2fs: add help function META_MAPPING
  f2fs: move a branch for code redability
  f2fs: call mark_inode_dirty to flush dirty pages
  f2fs: clean checkpatch warnings
  f2fs: missing REQ_META and REQ_PRIO when sync_meta_pages(META_FLUSH)
  f2fs: avoid f2fs_balance_fs call during pageout
  f2fs: add delimiter to seperate name and value in debug phrase
  f2fs: use spinlock rather than mutex for better speed
  f2fs: move alloc new orphan node out of lock protection region
  f2fs: move grabing orphan pages out of protection region
  f2fs: remove the needless parameter of f2fs_wait_on_page_writeback
  f2fs: update documents and a MAINTAINERS entry
  f2fs: add a sysfs entry to control max_victim_search
  f2fs: improve write performance under frequent fsync calls
  f2fs: avoid to read inline data except first page
  f2fs: avoid to left uninitialized data in page when read inline data
  f2fs: fix truncate_partial_nodes bug
  ...
parents 1d32bdaf bf39c00a
@@ -24,3 +24,34 @@ Date: July 2013
Contact: "Namjae Jeon" <namjae.jeon@samsung.com>
Description:
Controls the victim selection policy for garbage collection.
What: /sys/fs/f2fs/<disk>/reclaim_segments
Date: October 2013
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
Controls the issue rate of segment discard commands.
What: /sys/fs/f2fs/<disk>/ipu_policy
Date: November 2013
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
Controls the in-place-update policy.
What: /sys/fs/f2fs/<disk>/min_ipu_util
Date: November 2013
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
Controls the FS utilization condition for the in-place-update
policies.
What: /sys/fs/f2fs/<disk>/max_small_discards
Date: November 2013
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
Controls the issue rate of small discard commands.
What: /sys/fs/f2fs/<disk>/max_victim_search
Date: January 2014
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
Controls the number of trials to find a victim segment.
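These entries are ordinary sysfs attributes, so they can be read and written at runtime with plain file I/O (typically as root). The minimal user-space sketch below is only an illustration and is not part of the patch set; the "sda1" directory name is a placeholder for the real <disk> entry, and 64 is just an example value.

/*
 * Illustration only: tune one of the f2fs sysfs knobs documented above.
 * Replace "sda1" with the actual <disk> directory under /sys/fs/f2fs/.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/fs/f2fs/sda1/max_small_discards";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%d\n", 64);	/* allow up to 64 cached small discards */
	fclose(f);
	return 0;
}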
@@ -120,6 +120,8 @@ active_logs=%u Support configuring the number of active logs. In the
disable_ext_identify Disable the extension list configured by mkfs, so f2fs
does not aware of cold files such as media files.
inline_xattr Enable the inline xattrs feature.
inline_data Enable the inline data feature: New created small(<~3.4k)
files can be written into inode block.
================================================================================
DEBUGFS ENTRIES
@@ -171,6 +173,28 @@ Files in /sys/fs/f2fs/<devname>
conduct checkpoint to reclaim the prefree segments
to free segments. By default, 100 segments, 200MB.
max_small_discards This parameter controls the number of discard
commands that consist small blocks less than 2MB.
The candidates to be discarded are cached until
checkpoint is triggered, and issued during the
checkpoint. By default, it is disabled with 0.
ipu_policy This parameter controls the policy of in-place
updates in f2fs. There are five policies:
0: F2FS_IPU_FORCE, 1: F2FS_IPU_SSR,
2: F2FS_IPU_UTIL, 3: F2FS_IPU_SSR_UTIL,
4: F2FS_IPU_DISABLE.
min_ipu_util This parameter controls the threshold to trigger
in-place-updates. The number indicates percentage
of the filesystem utilization, and used by
F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.
max_victim_search This parameter controls the number of trials to
find a victim segment when conducting SSR and
cleaning operations. The default value is 4096
which covers 8GB block address range.
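Since fs/f2fs/segment.c is not expanded in this view, the stand-alone C sketch below only illustrates the max_small_discards behaviour described above: candidates are cached up to the configured limit and drained when a checkpoint runs. All names in it (discard_entry, discard_cache, cache_small_discard, issue_small_discards) are hypothetical, not the kernel's API.

/*
 * Illustration only -- NOT the in-kernel implementation.  Small-discard
 * candidates are queued up to the max_small_discards limit and issued at
 * checkpoint time, as documented above.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct discard_entry {
	unsigned long long blkaddr;	/* start block of the candidate range */
	unsigned int len;		/* number of blocks (less than 2MB worth) */
	struct discard_entry *next;
};

struct discard_cache {
	struct discard_entry *head;
	unsigned int nr_discards;	/* candidates currently cached */
	unsigned int max_discards;	/* sysfs max_small_discards; 0 = disabled */
};

/* Cache one candidate; refuse once the configured limit is reached. */
static bool cache_small_discard(struct discard_cache *dc,
				unsigned long long blkaddr, unsigned int len)
{
	struct discard_entry *e;

	if (dc->nr_discards >= dc->max_discards)
		return false;			/* limit reached, drop it */
	e = malloc(sizeof(*e));
	if (!e)
		return false;
	e->blkaddr = blkaddr;
	e->len = len;
	e->next = dc->head;
	dc->head = e;
	dc->nr_discards++;
	return true;
}

/* Hypothetical checkpoint path: issue everything that was cached, then reset. */
static void issue_small_discards(struct discard_cache *dc)
{
	while (dc->head) {
		struct discard_entry *e = dc->head;

		dc->head = e->next;
		printf("discard: blkaddr=%llu len=%u\n", e->blkaddr, e->len);
		free(e);
	}
	dc->nr_discards = 0;
}

int main(void)
{
	struct discard_cache dc = { .head = NULL, .nr_discards = 0,
				    .max_discards = 8 };

	cache_small_discard(&dc, 4096, 16);
	cache_small_discard(&dc, 8192, 4);
	issue_small_discards(&dc);	/* as if a checkpoint just ran */
	return 0;
}

With max_discards left at 0 (the documented default), cache_small_discard() never queues anything, which matches the "disabled with 0" wording above.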
================================================================================
USAGE
================================================================================
......
@@ -3634,6 +3634,7 @@ W: http://en.wikipedia.org/wiki/F2FS
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git
S: Maintained
F: Documentation/filesystems/f2fs.txt
F: Documentation/ABI/testing/sysfs-fs-f2fs
F: fs/f2fs/
F: include/linux/f2fs_fs.h
......
obj-$(CONFIG_F2FS_FS) += f2fs.o
-f2fs-y := dir.o file.o inode.o namei.o hash.o super.o
+f2fs-y := dir.o file.o inode.o namei.o hash.o super.o inline.o
f2fs-y += checkpoint.o gc.o data.o node.o segment.o recovery.o
f2fs-$(CONFIG_F2FS_STAT_FS) += debug.o
f2fs-$(CONFIG_F2FS_FS_XATTR) += xattr.o
......
@@ -24,7 +24,7 @@
#include "gc.h"
static LIST_HEAD(f2fs_stat_list);
-static struct dentry *debugfs_root;
+static struct dentry *f2fs_debugfs_root;
static DEFINE_MUTEX(f2fs_stat_mutex);
static void update_general_status(struct f2fs_sb_info *sbi)
@@ -45,14 +45,15 @@ static void update_general_status(struct f2fs_sb_info *sbi)
si->valid_count = valid_user_blocks(sbi);
si->valid_node_count = valid_node_count(sbi);
si->valid_inode_count = valid_inode_count(sbi);
+si->inline_inode = sbi->inline_inode;
si->utilization = utilization(sbi);
si->free_segs = free_segments(sbi);
si->free_secs = free_sections(sbi);
si->prefree_count = prefree_segments(sbi);
si->dirty_count = dirty_segments(sbi);
-si->node_pages = sbi->node_inode->i_mapping->nrpages;
-si->meta_pages = sbi->meta_inode->i_mapping->nrpages;
+si->node_pages = NODE_MAPPING(sbi)->nrpages;
+si->meta_pages = META_MAPPING(sbi)->nrpages;
si->nats = NM_I(sbi)->nat_cnt;
si->sits = SIT_I(sbi)->dirty_sentries;
si->fnids = NM_I(sbi)->fcnt;
@@ -165,9 +166,9 @@ static void update_mem_info(struct f2fs_sb_info *sbi)
/* free nids */
si->cache_mem = NM_I(sbi)->fcnt;
si->cache_mem += NM_I(sbi)->nat_cnt;
-npages = sbi->node_inode->i_mapping->nrpages;
+npages = NODE_MAPPING(sbi)->nrpages;
si->cache_mem += npages << PAGE_CACHE_SHIFT;
-npages = sbi->meta_inode->i_mapping->nrpages;
+npages = META_MAPPING(sbi)->nrpages;
si->cache_mem += npages << PAGE_CACHE_SHIFT;
si->cache_mem += sbi->n_orphans * sizeof(struct orphan_inode_entry);
si->cache_mem += sbi->n_dirty_dirs * sizeof(struct dir_inode_entry);
@@ -200,6 +201,8 @@ static int stat_show(struct seq_file *s, void *v)
seq_printf(s, "Other: %u)\n - Data: %u\n",
si->valid_node_count - si->valid_inode_count,
si->valid_count - si->valid_node_count);
+seq_printf(s, " - Inline_data Inode: %u\n",
+si->inline_inode);
seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n",
si->main_area_segs, si->main_area_sections,
si->main_area_zones);
@@ -242,14 +245,14 @@ static int stat_show(struct seq_file *s, void *v)
seq_printf(s, " - node blocks : %d\n", si->node_blks);
seq_printf(s, "\nExtent Hit Ratio: %d / %d\n",
si->hit_ext, si->total_ext);
-seq_printf(s, "\nBalancing F2FS Async:\n");
+seq_puts(s, "\nBalancing F2FS Async:\n");
-seq_printf(s, " - nodes %4d in %4d\n",
+seq_printf(s, " - nodes: %4d in %4d\n",
si->ndirty_node, si->node_pages);
-seq_printf(s, " - dents %4d in dirs:%4d\n",
+seq_printf(s, " - dents: %4d in dirs:%4d\n",
si->ndirty_dent, si->ndirty_dirs);
-seq_printf(s, " - meta %4d in %4d\n",
+seq_printf(s, " - meta: %4d in %4d\n",
si->ndirty_meta, si->meta_pages);
-seq_printf(s, " - NATs %5d > %lu\n",
+seq_printf(s, " - NATs: %5d > %lu\n",
si->nats, NM_WOUT_THRESHOLD);
seq_printf(s, " - SITs: %5d\n - free_nids: %5d\n",
si->sits, si->fnids);
@@ -340,14 +343,32 @@ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
void __init f2fs_create_root_stats(void)
{
-debugfs_root = debugfs_create_dir("f2fs", NULL);
-if (debugfs_root)
-debugfs_create_file("status", S_IRUGO, debugfs_root,
-NULL, &stat_fops);
+struct dentry *file;
+f2fs_debugfs_root = debugfs_create_dir("f2fs", NULL);
+if (!f2fs_debugfs_root)
+goto bail;
+file = debugfs_create_file("status", S_IRUGO, f2fs_debugfs_root,
+NULL, &stat_fops);
+if (!file)
+goto free_debugfs_dir;
+return;
+free_debugfs_dir:
+debugfs_remove(f2fs_debugfs_root);
+bail:
+f2fs_debugfs_root = NULL;
+return;
}
void f2fs_destroy_root_stats(void)
{
-debugfs_remove_recursive(debugfs_root);
-debugfs_root = NULL;
+if (!f2fs_debugfs_root)
+return;
+debugfs_remove_recursive(f2fs_debugfs_root);
+f2fs_debugfs_root = NULL;
}
...@@ -190,9 +190,6 @@ struct f2fs_dir_entry *f2fs_find_entry(struct inode *dir, ...@@ -190,9 +190,6 @@ struct f2fs_dir_entry *f2fs_find_entry(struct inode *dir,
unsigned int max_depth; unsigned int max_depth;
unsigned int level; unsigned int level;
if (namelen > F2FS_NAME_LEN)
return NULL;
if (npages == 0) if (npages == 0)
return NULL; return NULL;
...@@ -259,20 +256,17 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de, ...@@ -259,20 +256,17 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
dir->i_mtime = dir->i_ctime = CURRENT_TIME; dir->i_mtime = dir->i_ctime = CURRENT_TIME;
mark_inode_dirty(dir); mark_inode_dirty(dir);
/* update parent inode number before releasing dentry page */
F2FS_I(inode)->i_pino = dir->i_ino;
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
} }
static void init_dent_inode(const struct qstr *name, struct page *ipage) static void init_dent_inode(const struct qstr *name, struct page *ipage)
{ {
struct f2fs_node *rn; struct f2fs_inode *ri;
/* copy name info. to this inode page */ /* copy name info. to this inode page */
rn = F2FS_NODE(ipage); ri = F2FS_INODE(ipage);
rn->i.i_namelen = cpu_to_le32(name->len); ri->i_namelen = cpu_to_le32(name->len);
memcpy(rn->i.i_name, name->name, name->len); memcpy(ri->i_name, name->name, name->len);
set_page_dirty(ipage); set_page_dirty(ipage);
} }
...@@ -348,11 +342,11 @@ static struct page *init_inode_metadata(struct inode *inode, ...@@ -348,11 +342,11 @@ static struct page *init_inode_metadata(struct inode *inode,
err = f2fs_init_acl(inode, dir, page); err = f2fs_init_acl(inode, dir, page);
if (err) if (err)
goto error; goto put_error;
err = f2fs_init_security(inode, dir, name, page); err = f2fs_init_security(inode, dir, name, page);
if (err) if (err)
goto error; goto put_error;
wait_on_page_writeback(page); wait_on_page_writeback(page);
} else { } else {
...@@ -376,8 +370,9 @@ static struct page *init_inode_metadata(struct inode *inode, ...@@ -376,8 +370,9 @@ static struct page *init_inode_metadata(struct inode *inode,
} }
return page; return page;
error: put_error:
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
error:
remove_inode_page(inode); remove_inode_page(inode);
return ERR_PTR(err); return ERR_PTR(err);
} }
...@@ -393,6 +388,8 @@ static void update_parent_metadata(struct inode *dir, struct inode *inode, ...@@ -393,6 +388,8 @@ static void update_parent_metadata(struct inode *dir, struct inode *inode,
clear_inode_flag(F2FS_I(inode), FI_NEW_INODE); clear_inode_flag(F2FS_I(inode), FI_NEW_INODE);
} }
dir->i_mtime = dir->i_ctime = CURRENT_TIME; dir->i_mtime = dir->i_ctime = CURRENT_TIME;
mark_inode_dirty(dir);
if (F2FS_I(dir)->i_current_depth != current_depth) { if (F2FS_I(dir)->i_current_depth != current_depth) {
F2FS_I(dir)->i_current_depth = current_depth; F2FS_I(dir)->i_current_depth = current_depth;
set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR); set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
...@@ -400,8 +397,6 @@ static void update_parent_metadata(struct inode *dir, struct inode *inode, ...@@ -400,8 +397,6 @@ static void update_parent_metadata(struct inode *dir, struct inode *inode,
if (is_inode_flag_set(F2FS_I(dir), FI_UPDATE_DIR)) if (is_inode_flag_set(F2FS_I(dir), FI_UPDATE_DIR))
update_inode_page(dir); update_inode_page(dir);
else
mark_inode_dirty(dir);
if (is_inode_flag_set(F2FS_I(inode), FI_INC_LINK)) if (is_inode_flag_set(F2FS_I(inode), FI_INC_LINK))
clear_inode_flag(F2FS_I(inode), FI_INC_LINK); clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
...@@ -432,10 +427,11 @@ static int room_for_filename(struct f2fs_dentry_block *dentry_blk, int slots) ...@@ -432,10 +427,11 @@ static int room_for_filename(struct f2fs_dentry_block *dentry_blk, int slots)
} }
/* /*
* Caller should grab and release a mutex by calling mutex_lock_op() and * Caller should grab and release a rwsem by calling f2fs_lock_op() and
* mutex_unlock_op(). * f2fs_unlock_op().
*/ */
int __f2fs_add_link(struct inode *dir, const struct qstr *name, struct inode *inode) int __f2fs_add_link(struct inode *dir, const struct qstr *name,
struct inode *inode)
{ {
unsigned int bit_pos; unsigned int bit_pos;
unsigned int level; unsigned int level;
...@@ -461,7 +457,7 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name, struct inode *in ...@@ -461,7 +457,7 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name, struct inode *in
} }
start: start:
if (current_depth == MAX_DIR_HASH_DEPTH) if (unlikely(current_depth == MAX_DIR_HASH_DEPTH))
return -ENOSPC; return -ENOSPC;
/* Increase the depth, if required */ /* Increase the depth, if required */
...@@ -554,14 +550,11 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, ...@@ -554,14 +550,11 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
dir->i_ctime = dir->i_mtime = CURRENT_TIME; dir->i_ctime = dir->i_mtime = CURRENT_TIME;
if (inode && S_ISDIR(inode->i_mode)) {
drop_nlink(dir);
update_inode_page(dir);
} else {
mark_inode_dirty(dir);
}
if (inode) { if (inode) {
if (S_ISDIR(inode->i_mode)) {
drop_nlink(dir);
update_inode_page(dir);
}
inode->i_ctime = CURRENT_TIME; inode->i_ctime = CURRENT_TIME;
drop_nlink(inode); drop_nlink(inode);
if (S_ISDIR(inode->i_mode)) { if (S_ISDIR(inode->i_mode)) {
...@@ -636,7 +629,7 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) ...@@ -636,7 +629,7 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
bit_pos = ((unsigned long)ctx->pos % NR_DENTRY_IN_BLOCK); bit_pos = ((unsigned long)ctx->pos % NR_DENTRY_IN_BLOCK);
for ( ; n < npages; n++) { for (; n < npages; n++) {
dentry_page = get_lock_data_page(inode, n); dentry_page = get_lock_data_page(inode, n);
if (IS_ERR(dentry_page)) if (IS_ERR(dentry_page))
continue; continue;
......
...@@ -33,7 +33,6 @@ static int f2fs_vm_page_mkwrite(struct vm_area_struct *vma, ...@@ -33,7 +33,6 @@ static int f2fs_vm_page_mkwrite(struct vm_area_struct *vma,
struct page *page = vmf->page; struct page *page = vmf->page;
struct inode *inode = file_inode(vma->vm_file); struct inode *inode = file_inode(vma->vm_file);
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
block_t old_blk_addr;
struct dnode_of_data dn; struct dnode_of_data dn;
int err; int err;
...@@ -44,30 +43,16 @@ static int f2fs_vm_page_mkwrite(struct vm_area_struct *vma, ...@@ -44,30 +43,16 @@ static int f2fs_vm_page_mkwrite(struct vm_area_struct *vma,
/* block allocation */ /* block allocation */
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, page->index, ALLOC_NODE); err = f2fs_reserve_block(&dn, page->index);
if (err) {
f2fs_unlock_op(sbi);
goto out;
}
old_blk_addr = dn.data_blkaddr;
if (old_blk_addr == NULL_ADDR) {
err = reserve_new_block(&dn);
if (err) {
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi);
goto out;
}
}
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
if (err)
goto out;
file_update_time(vma->vm_file); file_update_time(vma->vm_file);
lock_page(page); lock_page(page);
if (page->mapping != inode->i_mapping || if (unlikely(page->mapping != inode->i_mapping ||
page_offset(page) > i_size_read(inode) || page_offset(page) > i_size_read(inode) ||
!PageUptodate(page)) { !PageUptodate(page))) {
unlock_page(page); unlock_page(page);
err = -EFAULT; err = -EFAULT;
goto out; goto out;
...@@ -130,12 +115,12 @@ int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) ...@@ -130,12 +115,12 @@ int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
int ret = 0; int ret = 0;
bool need_cp = false; bool need_cp = false;
struct writeback_control wbc = { struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL, .sync_mode = WB_SYNC_NONE,
.nr_to_write = LONG_MAX, .nr_to_write = LONG_MAX,
.for_reclaim = 0, .for_reclaim = 0,
}; };
if (f2fs_readonly(inode->i_sb)) if (unlikely(f2fs_readonly(inode->i_sb)))
return 0; return 0;
trace_f2fs_sync_file_enter(inode); trace_f2fs_sync_file_enter(inode);
...@@ -217,7 +202,7 @@ int truncate_data_blocks_range(struct dnode_of_data *dn, int count) ...@@ -217,7 +202,7 @@ int truncate_data_blocks_range(struct dnode_of_data *dn, int count)
raw_node = F2FS_NODE(dn->node_page); raw_node = F2FS_NODE(dn->node_page);
addr = blkaddr_in_node(raw_node) + ofs; addr = blkaddr_in_node(raw_node) + ofs;
for ( ; count > 0; count--, addr++, dn->ofs_in_node++) { for (; count > 0; count--, addr++, dn->ofs_in_node++) {
block_t blkaddr = le32_to_cpu(*addr); block_t blkaddr = le32_to_cpu(*addr);
if (blkaddr == NULL_ADDR) if (blkaddr == NULL_ADDR)
continue; continue;
...@@ -256,7 +241,7 @@ static void truncate_partial_data_page(struct inode *inode, u64 from) ...@@ -256,7 +241,7 @@ static void truncate_partial_data_page(struct inode *inode, u64 from)
return; return;
lock_page(page); lock_page(page);
if (page->mapping != inode->i_mapping) { if (unlikely(page->mapping != inode->i_mapping)) {
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
return; return;
} }
...@@ -266,21 +251,24 @@ static void truncate_partial_data_page(struct inode *inode, u64 from) ...@@ -266,21 +251,24 @@ static void truncate_partial_data_page(struct inode *inode, u64 from)
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
} }
static int truncate_blocks(struct inode *inode, u64 from) int truncate_blocks(struct inode *inode, u64 from)
{ {
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
unsigned int blocksize = inode->i_sb->s_blocksize; unsigned int blocksize = inode->i_sb->s_blocksize;
struct dnode_of_data dn; struct dnode_of_data dn;
pgoff_t free_from; pgoff_t free_from;
int count = 0; int count = 0, err = 0;
int err;
trace_f2fs_truncate_blocks_enter(inode, from); trace_f2fs_truncate_blocks_enter(inode, from);
if (f2fs_has_inline_data(inode))
goto done;
free_from = (pgoff_t) free_from = (pgoff_t)
((from + blocksize - 1) >> (sbi->log_blocksize)); ((from + blocksize - 1) >> (sbi->log_blocksize));
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE); err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE);
if (err) { if (err) {
...@@ -308,7 +296,7 @@ static int truncate_blocks(struct inode *inode, u64 from) ...@@ -308,7 +296,7 @@ static int truncate_blocks(struct inode *inode, u64 from)
free_next: free_next:
err = truncate_inode_blocks(inode, free_from); err = truncate_inode_blocks(inode, free_from);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
done:
/* lastly zero out the first data page */ /* lastly zero out the first data page */
truncate_partial_data_page(inode, from); truncate_partial_data_page(inode, from);
...@@ -382,6 +370,10 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr) ...@@ -382,6 +370,10 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
if ((attr->ia_valid & ATTR_SIZE) && if ((attr->ia_valid & ATTR_SIZE) &&
attr->ia_size != i_size_read(inode)) { attr->ia_size != i_size_read(inode)) {
err = f2fs_convert_inline_data(inode, attr->ia_size);
if (err)
return err;
truncate_setsize(inode, attr->ia_size); truncate_setsize(inode, attr->ia_size);
f2fs_truncate(inode); f2fs_truncate(inode);
f2fs_balance_fs(F2FS_SB(inode->i_sb)); f2fs_balance_fs(F2FS_SB(inode->i_sb));
...@@ -459,12 +451,16 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end) ...@@ -459,12 +451,16 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
return 0; return 0;
} }
static int punch_hole(struct inode *inode, loff_t offset, loff_t len, int mode) static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
{ {
pgoff_t pg_start, pg_end; pgoff_t pg_start, pg_end;
loff_t off_start, off_end; loff_t off_start, off_end;
int ret = 0; int ret = 0;
ret = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1);
if (ret)
return ret;
pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT; pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT;
pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT; pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT;
...@@ -499,12 +495,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len, int mode) ...@@ -499,12 +495,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len, int mode)
} }
} }
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
i_size_read(inode) <= (offset + len)) {
i_size_write(inode, offset);
mark_inode_dirty(inode);
}
return ret; return ret;
} }
...@@ -521,6 +511,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset, ...@@ -521,6 +511,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
if (ret) if (ret)
return ret; return ret;
ret = f2fs_convert_inline_data(inode, offset + len);
if (ret)
return ret;
pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT; pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT;
pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT; pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT;
...@@ -532,22 +526,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset, ...@@ -532,22 +526,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
ret = get_dnode_of_data(&dn, index, ALLOC_NODE); ret = f2fs_reserve_block(&dn, index);
if (ret) {
f2fs_unlock_op(sbi);
break;
}
if (dn.data_blkaddr == NULL_ADDR) {
ret = reserve_new_block(&dn);
if (ret) {
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi);
break;
}
}
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
if (ret)
break;
if (pg_start == pg_end) if (pg_start == pg_end)
new_size = offset + len; new_size = offset + len;
...@@ -578,7 +560,7 @@ static long f2fs_fallocate(struct file *file, int mode, ...@@ -578,7 +560,7 @@ static long f2fs_fallocate(struct file *file, int mode,
return -EOPNOTSUPP; return -EOPNOTSUPP;
if (mode & FALLOC_FL_PUNCH_HOLE) if (mode & FALLOC_FL_PUNCH_HOLE)
ret = punch_hole(inode, offset, len, mode); ret = punch_hole(inode, offset, len);
else else
ret = expand_inode_data(inode, offset, len, mode); ret = expand_inode_data(inode, offset, len, mode);
......
...@@ -119,7 +119,6 @@ int start_gc_thread(struct f2fs_sb_info *sbi) ...@@ -119,7 +119,6 @@ int start_gc_thread(struct f2fs_sb_info *sbi)
kfree(gc_th); kfree(gc_th);
sbi->gc_thread = NULL; sbi->gc_thread = NULL;
} }
out: out:
return err; return err;
} }
...@@ -164,8 +163,8 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type, ...@@ -164,8 +163,8 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
p->ofs_unit = sbi->segs_per_sec; p->ofs_unit = sbi->segs_per_sec;
} }
if (p->max_search > MAX_VICTIM_SEARCH) if (p->max_search > sbi->max_victim_search)
p->max_search = MAX_VICTIM_SEARCH; p->max_search = sbi->max_victim_search;
p->offset = sbi->last_victim[p->gc_mode]; p->offset = sbi->last_victim[p->gc_mode];
} }
...@@ -429,7 +428,7 @@ static void gc_node_segment(struct f2fs_sb_info *sbi, ...@@ -429,7 +428,7 @@ static void gc_node_segment(struct f2fs_sb_info *sbi,
/* set page dirty and write it */ /* set page dirty and write it */
if (gc_type == FG_GC) { if (gc_type == FG_GC) {
f2fs_wait_on_page_writeback(node_page, NODE, true); f2fs_wait_on_page_writeback(node_page, NODE);
set_page_dirty(node_page); set_page_dirty(node_page);
} else { } else {
if (!PageWriteback(node_page)) if (!PageWriteback(node_page))
...@@ -521,6 +520,11 @@ static int check_dnode(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -521,6 +520,11 @@ static int check_dnode(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
static void move_data_page(struct inode *inode, struct page *page, int gc_type) static void move_data_page(struct inode *inode, struct page *page, int gc_type)
{ {
struct f2fs_io_info fio = {
.type = DATA,
.rw = WRITE_SYNC,
};
if (gc_type == BG_GC) { if (gc_type == BG_GC) {
if (PageWriteback(page)) if (PageWriteback(page))
goto out; goto out;
...@@ -529,7 +533,7 @@ static void move_data_page(struct inode *inode, struct page *page, int gc_type) ...@@ -529,7 +533,7 @@ static void move_data_page(struct inode *inode, struct page *page, int gc_type)
} else { } else {
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
f2fs_wait_on_page_writeback(page, DATA, true); f2fs_wait_on_page_writeback(page, DATA);
if (clear_page_dirty_for_io(page) && if (clear_page_dirty_for_io(page) &&
S_ISDIR(inode->i_mode)) { S_ISDIR(inode->i_mode)) {
...@@ -537,7 +541,7 @@ static void move_data_page(struct inode *inode, struct page *page, int gc_type) ...@@ -537,7 +541,7 @@ static void move_data_page(struct inode *inode, struct page *page, int gc_type)
inode_dec_dirty_dents(inode); inode_dec_dirty_dents(inode);
} }
set_cold_data(page); set_cold_data(page);
do_write_data_page(page); do_write_data_page(page, &fio);
clear_cold_data(page); clear_cold_data(page);
} }
out: out:
...@@ -631,7 +635,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -631,7 +635,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
goto next_step; goto next_step;
if (gc_type == FG_GC) { if (gc_type == FG_GC) {
f2fs_submit_bio(sbi, DATA, true); f2fs_submit_merged_bio(sbi, DATA, WRITE);
/* /*
* In the case of FG_GC, it'd be better to reclaim this victim * In the case of FG_GC, it'd be better to reclaim this victim
...@@ -664,8 +668,6 @@ static void do_garbage_collect(struct f2fs_sb_info *sbi, unsigned int segno, ...@@ -664,8 +668,6 @@ static void do_garbage_collect(struct f2fs_sb_info *sbi, unsigned int segno,
/* read segment summary of victim */ /* read segment summary of victim */
sum_page = get_sum_page(sbi, segno); sum_page = get_sum_page(sbi, segno);
if (IS_ERR(sum_page))
return;
blk_start_plug(&plug); blk_start_plug(&plug);
...@@ -697,7 +699,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi) ...@@ -697,7 +699,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi)
INIT_LIST_HEAD(&ilist); INIT_LIST_HEAD(&ilist);
gc_more: gc_more:
if (!(sbi->sb->s_flags & MS_ACTIVE)) if (unlikely(!(sbi->sb->s_flags & MS_ACTIVE)))
goto stop; goto stop;
if (gc_type == BG_GC && has_not_enough_free_secs(sbi, nfree)) { if (gc_type == BG_GC && has_not_enough_free_secs(sbi, nfree)) {
......
@@ -20,7 +20,7 @@
#define LIMIT_FREE_BLOCK 40 /* percentage over invalid + free space */
/* Search max. number of dirty segments to select a victim segment */
-#define MAX_VICTIM_SEARCH 4096 /* covers 8GB */
+#define DEF_MAX_VICTIM_SEARCH 4096 /* covers 8GB */
struct f2fs_gc_kthread {
struct task_struct *f2fs_gc_task;
......
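For reference, the 4096 default above works out to 4096 candidate segments x 2 MB per f2fs segment = 8192 MB = 8 GB of block address range, which is what the "covers 8GB" comment next to DEF_MAX_VICTIM_SEARCH (and the matching sentence in the sysfs documentation earlier in this diff) refers to.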
/*
* fs/f2fs/inline.c
* Copyright (c) 2013, Intel Corporation
* Authors: Huajun Li <huajun.li@intel.com>
* Haicheng Li <haicheng.li@intel.com>
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/fs.h>
#include <linux/f2fs_fs.h>
#include "f2fs.h"
bool f2fs_may_inline(struct inode *inode)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
block_t nr_blocks;
loff_t i_size;
if (!test_opt(sbi, INLINE_DATA))
return false;
nr_blocks = F2FS_I(inode)->i_xattr_nid ? 3 : 2;
if (inode->i_blocks > nr_blocks)
return false;
i_size = i_size_read(inode);
if (i_size > MAX_INLINE_DATA)
return false;
return true;
}
int f2fs_read_inline_data(struct inode *inode, struct page *page)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct page *ipage;
void *src_addr, *dst_addr;
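/* Inline data lives only in the inode block, i.e. page index 0 of the file;
 * any later page has no on-disk data behind it, so it is simply zero-filled. */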
if (page->index) {
zero_user_segment(page, 0, PAGE_CACHE_SIZE);
goto out;
}
ipage = get_node_page(sbi, inode->i_ino);
if (IS_ERR(ipage))
return PTR_ERR(ipage);
zero_user_segment(page, MAX_INLINE_DATA, PAGE_CACHE_SIZE);
/* Copy the whole inline data block */
src_addr = inline_data_addr(ipage);
dst_addr = kmap(page);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
kunmap(page);
f2fs_put_page(ipage, 1);
out:
SetPageUptodate(page);
unlock_page(page);
return 0;
}
static int __f2fs_convert_inline_data(struct inode *inode, struct page *page)
{
int err;
struct page *ipage;
struct dnode_of_data dn;
void *src_addr, *dst_addr;
block_t new_blk_addr;
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_io_info fio = {
.type = DATA,
.rw = WRITE_SYNC | REQ_PRIO,
};
f2fs_lock_op(sbi);
ipage = get_node_page(sbi, inode->i_ino);
if (IS_ERR(ipage))
return PTR_ERR(ipage);
/*
* i_addr[0] is not used for inline data,
* so reserving new block will not destroy inline data
*/
set_new_dnode(&dn, inode, ipage, NULL, 0);
err = f2fs_reserve_block(&dn, 0);
if (err) {
f2fs_unlock_op(sbi);
return err;
}
zero_user_segment(page, MAX_INLINE_DATA, PAGE_CACHE_SIZE);
/* Copy the whole inline data block */
src_addr = inline_data_addr(ipage);
dst_addr = kmap(page);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
kunmap(page);
SetPageUptodate(page);
/* write data page to try to make data consistent */
set_page_writeback(page);
write_data_page(page, &dn, &new_blk_addr, &fio);
update_extent_cache(new_blk_addr, &dn);
f2fs_wait_on_page_writeback(page, DATA);
/* clear inline data and flag after data writeback */
zero_user_segment(ipage, INLINE_DATA_OFFSET,
INLINE_DATA_OFFSET + MAX_INLINE_DATA);
clear_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
stat_dec_inline_inode(inode);
sync_inode_page(&dn);
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi);
return err;
}
int f2fs_convert_inline_data(struct inode *inode, pgoff_t to_size)
{
struct page *page;
int err;
if (!f2fs_has_inline_data(inode))
return 0;
else if (to_size <= MAX_INLINE_DATA)
return 0;
page = grab_cache_page_write_begin(inode->i_mapping, 0, AOP_FLAG_NOFS);
if (!page)
return -ENOMEM;
err = __f2fs_convert_inline_data(inode, page);
f2fs_put_page(page, 1);
return err;
}
int f2fs_write_inline_data(struct inode *inode,
struct page *page, unsigned size)
{
void *src_addr, *dst_addr;
struct page *ipage;
struct dnode_of_data dn;
int err;
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
if (err)
return err;
ipage = dn.inode_page;
zero_user_segment(ipage, INLINE_DATA_OFFSET,
INLINE_DATA_OFFSET + MAX_INLINE_DATA);
src_addr = kmap(page);
dst_addr = inline_data_addr(ipage);
memcpy(dst_addr, src_addr, size);
kunmap(page);
/* Release the first data block if it is allocated */
if (!f2fs_has_inline_data(inode)) {
truncate_data_blocks_range(&dn, 1);
set_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
stat_inc_inline_inode(inode);
}
sync_inode_page(&dn);
f2fs_put_dnode(&dn);
return 0;
}
int recover_inline_data(struct inode *inode, struct page *npage)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_inode *ri = NULL;
void *src_addr, *dst_addr;
struct page *ipage;
/*
* The inline_data recovery policy is as follows.
* [prev.] [next] of inline_data flag
* o o -> recover inline_data
* o x -> remove inline_data, and then recover data blocks
* x o -> remove inline_data, and then recover inline_data
* x x -> recover data blocks
*/
if (IS_INODE(npage))
ri = F2FS_INODE(npage);
if (f2fs_has_inline_data(inode) &&
ri && ri->i_inline & F2FS_INLINE_DATA) {
process_inline:
ipage = get_node_page(sbi, inode->i_ino);
f2fs_bug_on(IS_ERR(ipage));
src_addr = inline_data_addr(npage);
dst_addr = inline_data_addr(ipage);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
return -1;
}
if (f2fs_has_inline_data(inode)) {
ipage = get_node_page(sbi, inode->i_ino);
f2fs_bug_on(IS_ERR(ipage));
zero_user_segment(ipage, INLINE_DATA_OFFSET,
INLINE_DATA_OFFSET + MAX_INLINE_DATA);
clear_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
} else if (ri && ri->i_inline & F2FS_INLINE_DATA) {
truncate_blocks(inode, 0);
set_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
goto process_inline;
}
return 0;
}
@@ -42,9 +42,11 @@ static void __get_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
if (ri->i_addr[0])
-inode->i_rdev = old_decode_dev(le32_to_cpu(ri->i_addr[0]));
+inode->i_rdev =
+old_decode_dev(le32_to_cpu(ri->i_addr[0]));
else
-inode->i_rdev = new_decode_dev(le32_to_cpu(ri->i_addr[1]));
+inode->i_rdev =
+new_decode_dev(le32_to_cpu(ri->i_addr[1]));
}
}
@@ -52,11 +54,13 @@ static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
{
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
if (old_valid_dev(inode->i_rdev)) {
-ri->i_addr[0] = cpu_to_le32(old_encode_dev(inode->i_rdev));
+ri->i_addr[0] =
+cpu_to_le32(old_encode_dev(inode->i_rdev));
ri->i_addr[1] = 0;
} else {
ri->i_addr[0] = 0;
-ri->i_addr[1] = cpu_to_le32(new_encode_dev(inode->i_rdev));
+ri->i_addr[1] =
+cpu_to_le32(new_encode_dev(inode->i_rdev));
ri->i_addr[2] = 0;
}
}
@@ -67,7 +71,6 @@ static int do_read_inode(struct inode *inode)
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_inode_info *fi = F2FS_I(inode);
struct page *node_page;
-struct f2fs_node *rn;
struct f2fs_inode *ri;
/* Check if ino is within scope */
@@ -81,8 +84,7 @@ static int do_read_inode(struct inode *inode)
if (IS_ERR(node_page))
return PTR_ERR(node_page);
-rn = F2FS_NODE(node_page);
-ri = &(rn->i);
+ri = F2FS_INODE(node_page);
inode->i_mode = le16_to_cpu(ri->i_mode);
i_uid_write(inode, le32_to_cpu(ri->i_uid));
@@ -175,13 +177,11 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
void update_inode(struct inode *inode, struct page *node_page)
{
-struct f2fs_node *rn;
struct f2fs_inode *ri;
-f2fs_wait_on_page_writeback(node_page, NODE, false);
+f2fs_wait_on_page_writeback(node_page, NODE);
-rn = F2FS_NODE(node_page);
-ri = &(rn->i);
+ri = F2FS_INODE(node_page);
ri->i_mode = cpu_to_le16(inode->i_mode);
ri->i_advise = F2FS_I(inode)->i_advise;
@@ -281,6 +281,7 @@ void f2fs_evict_inode(struct inode *inode)
f2fs_lock_op(sbi);
remove_inode_page(inode);
+stat_dec_inline_inode(inode);
f2fs_unlock_op(sbi);
sb_end_intwrite(inode->i_sb);
......
@@ -424,11 +424,13 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
}
f2fs_set_link(new_dir, new_entry, new_page, old_inode);
+F2FS_I(old_inode)->i_pino = new_dir->i_ino;
new_inode->i_ctime = CURRENT_TIME;
if (old_dir_entry)
drop_nlink(new_inode);
drop_nlink(new_inode);
+mark_inode_dirty(new_inode);
if (!new_inode->i_nlink)
add_orphan_inode(sbi, new_inode->i_ino);
@@ -457,11 +459,14 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
if (old_dir != new_dir) {
f2fs_set_link(old_inode, old_dir_entry,
old_dir_page, new_dir);
+F2FS_I(old_inode)->i_pino = new_dir->i_ino;
+update_inode_page(old_inode);
} else {
kunmap(old_dir_page);
f2fs_put_page(old_dir_page, 0);
}
drop_nlink(old_dir);
+mark_inode_dirty(old_dir);
update_inode_page(old_dir);
}
......
@@ -224,7 +224,13 @@ static inline block_t next_blkaddr_of_node(struct page *node_page)
* | `- direct node (5 + N => 5 + 2N - 1)
* `- double indirect node (5 + 2N)
* `- indirect node (6 + 2N)
-* `- direct node (x(N + 1))
+* `- direct node
+* ......
+* `- indirect node ((6 + 2N) + x(N + 1))
+* `- direct node
+* ......
+* `- indirect node ((6 + 2N) + (N - 1)(N + 1))
+* `- direct node
*/
static inline bool IS_DNODE(struct page *node_page)
{
......
...@@ -40,8 +40,7 @@ static struct fsync_inode_entry *get_fsync_inode(struct list_head *head, ...@@ -40,8 +40,7 @@ static struct fsync_inode_entry *get_fsync_inode(struct list_head *head,
static int recover_dentry(struct page *ipage, struct inode *inode) static int recover_dentry(struct page *ipage, struct inode *inode)
{ {
struct f2fs_node *raw_node = F2FS_NODE(ipage); struct f2fs_inode *raw_inode = F2FS_INODE(ipage);
struct f2fs_inode *raw_inode = &(raw_node->i);
nid_t pino = le32_to_cpu(raw_inode->i_pino); nid_t pino = le32_to_cpu(raw_inode->i_pino);
struct f2fs_dir_entry *de; struct f2fs_dir_entry *de;
struct qstr name; struct qstr name;
...@@ -62,6 +61,12 @@ static int recover_dentry(struct page *ipage, struct inode *inode) ...@@ -62,6 +61,12 @@ static int recover_dentry(struct page *ipage, struct inode *inode)
name.len = le32_to_cpu(raw_inode->i_namelen); name.len = le32_to_cpu(raw_inode->i_namelen);
name.name = raw_inode->i_name; name.name = raw_inode->i_name;
if (unlikely(name.len > F2FS_NAME_LEN)) {
WARN_ON(1);
err = -ENAMETOOLONG;
goto out;
}
retry: retry:
de = f2fs_find_entry(dir, &name, &page); de = f2fs_find_entry(dir, &name, &page);
if (de && inode->i_ino == le32_to_cpu(de->ino)) if (de && inode->i_ino == le32_to_cpu(de->ino))
...@@ -90,17 +95,16 @@ static int recover_dentry(struct page *ipage, struct inode *inode) ...@@ -90,17 +95,16 @@ static int recover_dentry(struct page *ipage, struct inode *inode)
kunmap(page); kunmap(page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
out: out:
f2fs_msg(inode->i_sb, KERN_NOTICE, "recover_inode and its dentry: " f2fs_msg(inode->i_sb, KERN_NOTICE,
"ino = %x, name = %s, dir = %lx, err = %d", "%s: ino = %x, name = %s, dir = %lx, err = %d",
ino_of_node(ipage), raw_inode->i_name, __func__, ino_of_node(ipage), raw_inode->i_name,
IS_ERR(dir) ? 0 : dir->i_ino, err); IS_ERR(dir) ? 0 : dir->i_ino, err);
return err; return err;
} }
static int recover_inode(struct inode *inode, struct page *node_page) static int recover_inode(struct inode *inode, struct page *node_page)
{ {
struct f2fs_node *raw_node = F2FS_NODE(node_page); struct f2fs_inode *raw_inode = F2FS_INODE(node_page);
struct f2fs_inode *raw_inode = &(raw_node->i);
if (!IS_INODE(node_page)) if (!IS_INODE(node_page))
return 0; return 0;
...@@ -143,9 +147,9 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head) ...@@ -143,9 +147,9 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head)
while (1) { while (1) {
struct fsync_inode_entry *entry; struct fsync_inode_entry *entry;
err = f2fs_readpage(sbi, page, blkaddr, READ_SYNC); err = f2fs_submit_page_bio(sbi, page, blkaddr, READ_SYNC);
if (err) if (err)
goto out; return err;
lock_page(page); lock_page(page);
...@@ -191,9 +195,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head) ...@@ -191,9 +195,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head)
/* check next segment */ /* check next segment */
blkaddr = next_blkaddr_of_node(page); blkaddr = next_blkaddr_of_node(page);
} }
unlock_page(page); unlock_page(page);
out:
__free_pages(page, 0); __free_pages(page, 0);
return err; return err;
} }
...@@ -293,6 +298,9 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -293,6 +298,9 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
struct node_info ni; struct node_info ni;
int err = 0, recovered = 0; int err = 0, recovered = 0;
if (recover_inline_data(inode, page))
goto out;
start = start_bidx_of_node(ofs_of_node(page), fi); start = start_bidx_of_node(ofs_of_node(page), fi);
if (IS_INODE(page)) if (IS_INODE(page))
end = start + ADDRS_PER_INODE(fi); end = start + ADDRS_PER_INODE(fi);
...@@ -300,12 +308,13 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -300,12 +308,13 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
end = start + ADDRS_PER_BLOCK; end = start + ADDRS_PER_BLOCK;
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, start, ALLOC_NODE); err = get_dnode_of_data(&dn, start, ALLOC_NODE);
if (err) { if (err) {
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
return err; goto out;
} }
wait_on_page_writeback(dn.node_page); wait_on_page_writeback(dn.node_page);
...@@ -356,10 +365,10 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -356,10 +365,10 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
err: err:
f2fs_put_dnode(&dn); f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
out:
f2fs_msg(sbi->sb, KERN_NOTICE, "recover_data: ino = %lx, " f2fs_msg(sbi->sb, KERN_NOTICE,
"recovered_data = %d blocks, err = %d", "recover_data: ino = %lx, recovered = %d blocks, err = %d",
inode->i_ino, recovered, err); inode->i_ino, recovered, err);
return err; return err;
} }
...@@ -377,7 +386,7 @@ static int recover_data(struct f2fs_sb_info *sbi, ...@@ -377,7 +386,7 @@ static int recover_data(struct f2fs_sb_info *sbi,
blkaddr = NEXT_FREE_BLKADDR(sbi, curseg); blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
/* read node page */ /* read node page */
page = alloc_page(GFP_NOFS | __GFP_ZERO); page = alloc_page(GFP_F2FS_ZERO);
if (!page) if (!page)
return -ENOMEM; return -ENOMEM;
...@@ -386,9 +395,9 @@ static int recover_data(struct f2fs_sb_info *sbi, ...@@ -386,9 +395,9 @@ static int recover_data(struct f2fs_sb_info *sbi,
while (1) { while (1) {
struct fsync_inode_entry *entry; struct fsync_inode_entry *entry;
err = f2fs_readpage(sbi, page, blkaddr, READ_SYNC); err = f2fs_submit_page_bio(sbi, page, blkaddr, READ_SYNC);
if (err) if (err)
goto out; return err;
lock_page(page); lock_page(page);
...@@ -412,8 +421,8 @@ static int recover_data(struct f2fs_sb_info *sbi, ...@@ -412,8 +421,8 @@ static int recover_data(struct f2fs_sb_info *sbi,
/* check next segment */ /* check next segment */
blkaddr = next_blkaddr_of_node(page); blkaddr = next_blkaddr_of_node(page);
} }
unlock_page(page); unlock_page(page);
out:
__free_pages(page, 0); __free_pages(page, 0);
if (!err) if (!err)
...@@ -429,7 +438,7 @@ int recover_fsync_data(struct f2fs_sb_info *sbi) ...@@ -429,7 +438,7 @@ int recover_fsync_data(struct f2fs_sb_info *sbi)
fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry", fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
sizeof(struct fsync_inode_entry), NULL); sizeof(struct fsync_inode_entry), NULL);
if (unlikely(!fsync_entry_slab)) if (!fsync_entry_slab)
return -ENOMEM; return -ENOMEM;
INIT_LIST_HEAD(&inode_list); INIT_LIST_HEAD(&inode_list);
......
@@ -20,13 +20,8 @@
#define GET_L2R_SEGNO(free_i, segno) (segno - free_i->start_segno)
#define GET_R2L_SEGNO(free_i, segno) (segno + free_i->start_segno)
-#define IS_DATASEG(t) \
-((t == CURSEG_HOT_DATA) || (t == CURSEG_COLD_DATA) || \
-(t == CURSEG_WARM_DATA))
-#define IS_NODESEG(t) \
-((t == CURSEG_HOT_NODE) || (t == CURSEG_COLD_NODE) || \
-(t == CURSEG_WARM_NODE))
+#define IS_DATASEG(t) (t <= CURSEG_COLD_DATA)
+#define IS_NODESEG(t) (t >= CURSEG_HOT_NODE)
#define IS_CURSEG(sbi, seg) \
((seg == CURSEG_I(sbi, CURSEG_HOT_DATA)->segno) || \
@@ -83,25 +78,20 @@
(segno / SIT_ENTRY_PER_BLOCK)
#define START_SEGNO(sit_i, segno) \
(SIT_BLOCK_OFFSET(sit_i, segno) * SIT_ENTRY_PER_BLOCK)
+#define SIT_BLK_CNT(sbi) \
+((TOTAL_SEGS(sbi) + SIT_ENTRY_PER_BLOCK - 1) / SIT_ENTRY_PER_BLOCK)
#define f2fs_bitmap_size(nr) \
(BITS_TO_LONGS(nr) * sizeof(unsigned long))
#define TOTAL_SEGS(sbi) (SM_I(sbi)->main_segments)
#define TOTAL_SECS(sbi) (sbi->total_sections)
#define SECTOR_FROM_BLOCK(sbi, blk_addr) \
-(blk_addr << ((sbi)->log_blocksize - F2FS_LOG_SECTOR_SIZE))
+(((sector_t)blk_addr) << (sbi)->log_sectors_per_block)
#define SECTOR_TO_BLOCK(sbi, sectors) \
-(sectors >> ((sbi)->log_blocksize - F2FS_LOG_SECTOR_SIZE))
+(sectors >> (sbi)->log_sectors_per_block)
#define MAX_BIO_BLOCKS(max_hw_blocks) \
(min((int)max_hw_blocks, BIO_MAX_PAGES))
-/* during checkpoint, bio_private is used to synchronize the last bio */
-struct bio_private {
-struct f2fs_sb_info *sbi;
-bool is_sync;
-void *wait;
-};
/*
* indicate a block allocation direction: RIGHT and LEFT.
* RIGHT means allocating new sections towards the end of volume.
@@ -458,8 +448,8 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
static inline bool need_SSR(struct f2fs_sb_info *sbi)
{
-return ((prefree_segments(sbi) / sbi->segs_per_sec)
-+ free_sections(sbi) < overprovision_sections(sbi));
+return (prefree_segments(sbi) / sbi->segs_per_sec)
++ free_sections(sbi) < overprovision_sections(sbi);
}
static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, int freed)
@@ -467,38 +457,71 @@ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, int freed)
int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
-if (sbi->por_doing)
+if (unlikely(sbi->por_doing))
return false;
-return ((free_sections(sbi) + freed) <= (node_secs + 2 * dent_secs +
-reserved_sections(sbi)));
+return (free_sections(sbi) + freed) <= (node_secs + 2 * dent_secs +
+reserved_sections(sbi));
}
static inline bool excess_prefree_segs(struct f2fs_sb_info *sbi)
{
-return (prefree_segments(sbi) > SM_I(sbi)->rec_prefree_segments);
+return prefree_segments(sbi) > SM_I(sbi)->rec_prefree_segments;
}
static inline int utilization(struct f2fs_sb_info *sbi)
{
-return div_u64((u64)valid_user_blocks(sbi) * 100, sbi->user_block_count);
+return div_u64((u64)valid_user_blocks(sbi) * 100,
+sbi->user_block_count);
}
/*
* Sometimes f2fs may be better to drop out-of-place update policy.
-* So, if fs utilization is over MIN_IPU_UTIL, then f2fs tries to write
-* data in the original place likewise other traditional file systems.
-* But, currently set 100 in percentage, which means it is disabled.
-* See below need_inplace_update().
+* And, users can control the policy through sysfs entries.
+* There are five policies with triggering conditions as follows.
+* F2FS_IPU_FORCE - all the time,
+* F2FS_IPU_SSR - if SSR mode is activated,
+* F2FS_IPU_UTIL - if FS utilization is over threashold,
+* F2FS_IPU_SSR_UTIL - if SSR mode is activated and FS utilization is over
+* threashold,
+* F2FS_IPUT_DISABLE - disable IPU. (=default option)
*/
-#define MIN_IPU_UTIL 100
+#define DEF_MIN_IPU_UTIL 70
+enum {
+F2FS_IPU_FORCE,
+F2FS_IPU_SSR,
+F2FS_IPU_UTIL,
+F2FS_IPU_SSR_UTIL,
+F2FS_IPU_DISABLE,
+};
static inline bool need_inplace_update(struct inode *inode)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+/* IPU can be done only for the user data */
if (S_ISDIR(inode->i_mode))
return false;
-if (need_SSR(sbi) && utilization(sbi) > MIN_IPU_UTIL)
-return true;
+switch (SM_I(sbi)->ipu_policy) {
+case F2FS_IPU_FORCE:
+return true;
+case F2FS_IPU_SSR:
+if (need_SSR(sbi))
+return true;
+break;
+case F2FS_IPU_UTIL:
+if (utilization(sbi) > SM_I(sbi)->min_ipu_util)
+return true;
+break;
+case F2FS_IPU_SSR_UTIL:
+if (need_SSR(sbi) && utilization(sbi) > SM_I(sbi)->min_ipu_util)
+return true;
+break;
+case F2FS_IPU_DISABLE:
+break;
+}
return false;
}
......
...@@ -50,6 +50,7 @@ enum { ...@@ -50,6 +50,7 @@ enum {
Opt_active_logs, Opt_active_logs,
Opt_disable_ext_identify, Opt_disable_ext_identify,
Opt_inline_xattr, Opt_inline_xattr,
Opt_inline_data,
Opt_err, Opt_err,
}; };
...@@ -65,6 +66,7 @@ static match_table_t f2fs_tokens = { ...@@ -65,6 +66,7 @@ static match_table_t f2fs_tokens = {
{Opt_active_logs, "active_logs=%u"}, {Opt_active_logs, "active_logs=%u"},
{Opt_disable_ext_identify, "disable_ext_identify"}, {Opt_disable_ext_identify, "disable_ext_identify"},
{Opt_inline_xattr, "inline_xattr"}, {Opt_inline_xattr, "inline_xattr"},
{Opt_inline_data, "inline_data"},
{Opt_err, NULL}, {Opt_err, NULL},
}; };
...@@ -72,6 +74,7 @@ static match_table_t f2fs_tokens = { ...@@ -72,6 +74,7 @@ static match_table_t f2fs_tokens = {
enum { enum {
GC_THREAD, /* struct f2fs_gc_thread */ GC_THREAD, /* struct f2fs_gc_thread */
SM_INFO, /* struct f2fs_sm_info */ SM_INFO, /* struct f2fs_sm_info */
F2FS_SBI, /* struct f2fs_sb_info */
}; };
struct f2fs_attr { struct f2fs_attr {
...@@ -89,6 +92,8 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type) ...@@ -89,6 +92,8 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
return (unsigned char *)sbi->gc_thread; return (unsigned char *)sbi->gc_thread;
else if (struct_type == SM_INFO) else if (struct_type == SM_INFO)
return (unsigned char *)SM_I(sbi); return (unsigned char *)SM_I(sbi);
else if (struct_type == F2FS_SBI)
return (unsigned char *)sbi;
return NULL; return NULL;
} }
@@ -175,6 +180,10 @@ F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time);
 F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
 F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle);
 F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, max_small_discards, max_discards);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, ipu_policy, ipu_policy);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ipu_util, min_ipu_util);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);

 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
@@ -183,6 +192,10 @@ static struct attribute *f2fs_attrs[] = {
     ATTR_LIST(gc_no_gc_sleep_time),
     ATTR_LIST(gc_idle),
     ATTR_LIST(reclaim_segments),
+    ATTR_LIST(max_small_discards),
+    ATTR_LIST(ipu_policy),
+    ATTR_LIST(min_ipu_util),
+    ATTR_LIST(max_victim_search),
     NULL,
 };
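These F2FS_RW_ATTR entries back the sysfs files documented earlier under /sys/fs/f2fs/<disk>/. A hedged userspace sketch of tuning them is below; the device name "sda1" and the example values are placeholders for illustration, not recommendations.

/*
 * Userspace sketch (not part of the kernel patch): write the new f2fs
 * sysfs knobs.  Device name and values are assumptions.
 */
#include <stdio.h>

static int write_knob(const char *knob, const char *val)
{
    char path[256];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/fs/f2fs/sda1/%s", knob);
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", val);
    fclose(f);
    return 0;
}

int main(void)
{
    write_knob("ipu_policy", "3");          /* F2FS_IPU_SSR_UTIL */
    write_knob("min_ipu_util", "70");       /* matches DEF_MIN_IPU_UTIL */
    write_knob("max_small_discards", "64"); /* example value only */
    write_knob("max_victim_search", "4096");
    return 0;
}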
@@ -311,6 +324,9 @@ static int parse_options(struct super_block *sb, char *options)
         case Opt_disable_ext_identify:
             set_opt(sbi, DISABLE_EXT_IDENTIFY);
             break;
+        case Opt_inline_data:
+            set_opt(sbi, INLINE_DATA);
+            break;
         default:
             f2fs_msg(sb, KERN_ERR,
                 "Unrecognized mount option \"%s\" or missing value",
@@ -325,7 +341,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 {
     struct f2fs_inode_info *fi;

-    fi = kmem_cache_alloc(f2fs_inode_cachep, GFP_NOFS | __GFP_ZERO);
+    fi = kmem_cache_alloc(f2fs_inode_cachep, GFP_F2FS_ZERO);
     if (!fi)
         return NULL;
@@ -508,7 +524,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 #endif
     if (test_opt(sbi, DISABLE_EXT_IDENTIFY))
         seq_puts(seq, ",disable_ext_identify");
+    if (test_opt(sbi, INLINE_DATA))
+        seq_puts(seq, ",inline_data");
     seq_printf(seq, ",active_logs=%u", sbi->active_logs);

     return 0;
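With the new Opt_inline_data token, the option can be passed at mount time. A minimal sketch using mount(2) follows; the device and mount point are placeholders, and the option string is only an example.

/* Hypothetical example of mounting f2fs with the new inline_data option. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* parse_options() above recognizes "inline_data" in the data string */
    if (mount("/dev/sdb1", "/mnt/f2fs", "f2fs", 0,
              "inline_data,active_logs=6") != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}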
@@ -518,7 +535,8 @@ static int segment_info_seq_show(struct seq_file *seq, void *offset)
 {
     struct super_block *sb = seq->private;
     struct f2fs_sb_info *sbi = F2FS_SB(sb);
-    unsigned int total_segs = le32_to_cpu(sbi->raw_super->segment_count_main);
+    unsigned int total_segs =
+            le32_to_cpu(sbi->raw_super->segment_count_main);
     int i;

     for (i = 0; i < total_segs; i++) {
@@ -618,7 +636,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb,
     struct f2fs_sb_info *sbi = F2FS_SB(sb);
     struct inode *inode;

-    if (ino < F2FS_ROOT_INO(sbi))
+    if (unlikely(ino < F2FS_ROOT_INO(sbi)))
         return ERR_PTR(-ESTALE);

     /*
@@ -629,7 +647,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb,
     inode = f2fs_iget(sb, ino);
     if (IS_ERR(inode))
         return ERR_CAST(inode);
-    if (generation && inode->i_generation != generation) {
+    if (unlikely(generation && inode->i_generation != generation)) {
         /* we didn't find the right inode.. */
         iput(inode);
         return ERR_PTR(-ESTALE);
@@ -732,10 +750,10 @@ static int sanity_check_ckpt(struct f2fs_sb_info *sbi)
     fsmeta += le32_to_cpu(ckpt->rsvd_segment_count);
     fsmeta += le32_to_cpu(raw_super->segment_count_ssa);

-    if (fsmeta >= total)
+    if (unlikely(fsmeta >= total))
         return 1;

-    if (is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
+    if (unlikely(is_set_ckpt_flags(ckpt, CP_ERROR_FLAG))) {
         f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
         return 1;
     }
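Several hunks in this series wrap cold error checks in unlikely(). In the kernel these hints come from <linux/compiler.h> and boil down to __builtin_expect; the standalone snippet below illustrates the idea with the usual definitions.

/* Standalone illustration of the likely()/unlikely() branch hints. */
#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int check_value(long v)
{
    if (unlikely(v < 0))    /* error path: hinted as cold */
        return -1;
    return 0;               /* common path: laid out as fall-through */
}

int main(void)
{
    printf("%d %d\n", check_value(42), check_value(-1));
    return 0;
}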
@@ -763,6 +781,7 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
     sbi->node_ino_num = le32_to_cpu(raw_super->node_ino);
     sbi->meta_ino_num = le32_to_cpu(raw_super->meta_ino);
     sbi->cur_victim_sec = NULL_SECNO;
+    sbi->max_victim_search = DEF_MAX_VICTIM_SEARCH;

     for (i = 0; i < NR_COUNT_TYPE; i++)
         atomic_set(&sbi->nr_pages[i], 0);
@@ -798,9 +817,10 @@ static int read_raw_super_block(struct super_block *sb,
     /* sanity checking of raw super */
     if (sanity_check_raw_super(sb, *raw_super)) {
         brelse(*raw_super_buf);
-        f2fs_msg(sb, KERN_ERR, "Can't find a valid F2FS filesystem "
-                "in %dth superblock", block + 1);
-        if(block == 0) {
+        f2fs_msg(sb, KERN_ERR,
+            "Can't find valid F2FS filesystem in %dth superblock",
+                block + 1);
+        if (block == 0) {
             block++;
             goto retry;
         } else {
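The retry above falls back to the backup superblock when the first copy fails validation. A minimal sketch of that pattern with hypothetical helpers:

/*
 * Sketch of the primary/backup superblock retry used above.
 * read_super() is a hypothetical stand-in that reports whether the copy
 * in the given block validates; here block 0 is pretended to be corrupt.
 */
#include <stdbool.h>
#include <stdio.h>

static bool read_super(int block) { return block == 1; }

static int find_valid_super(void)
{
    int block = 0;
retry:
    if (!read_super(block)) {
        fprintf(stderr, "no valid superblock in block %d\n", block + 1);
        if (block == 0) {
            block++;        /* fall back to the backup copy */
            goto retry;
        }
        return -1;
    }
    return block;
}

int main(void)
{
    printf("valid superblock found at block %d\n", find_valid_super());
    return 0;
}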
@@ -818,6 +838,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
     struct buffer_head *raw_super_buf;
     struct inode *root;
     long err = -EINVAL;
+    int i;

     /* allocate memory for f2fs-specific super block info */
     sbi = kzalloc(sizeof(struct f2fs_sb_info), GFP_KERNEL);
@@ -825,7 +846,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
         return -ENOMEM;

     /* set a block size */
-    if (!sb_set_blocksize(sb, F2FS_BLKSIZE)) {
+    if (unlikely(!sb_set_blocksize(sb, F2FS_BLKSIZE))) {
         f2fs_msg(sb, KERN_ERR, "unable to set blocksize");
         goto free_sbi;
     }
@@ -874,7 +895,16 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
     mutex_init(&sbi->node_write);
     sbi->por_doing = false;
     spin_lock_init(&sbi->stat_lock);
-    init_rwsem(&sbi->bio_sem);
+
+    mutex_init(&sbi->read_io.io_mutex);
+    sbi->read_io.sbi = sbi;
+    sbi->read_io.bio = NULL;
+    for (i = 0; i < NR_PAGE_TYPE; i++) {
+        mutex_init(&sbi->write_io[i].io_mutex);
+        sbi->write_io[i].sbi = sbi;
+        sbi->write_io[i].bio = NULL;
+    }
+
     init_rwsem(&sbi->cp_rwsem);
     init_waitqueue_head(&sbi->cp_wait);
     init_sb_info(sbi);
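The hunk above drops the single bio_sem and instead keeps one pending-I/O slot for reads plus one per write page type, each serialized by its own mutex. A toy model of that bookkeeping, with simplified stand-in types (this is not the kernel structure):

/* Toy model of the per-page-type pending-bio bookkeeping initialized above. */
#include <pthread.h>
#include <stddef.h>

enum page_type { DATA, NODE, META, NR_PAGE_TYPE };

struct toy_bio_info {
    pthread_mutex_t io_mutex;   /* serializes merge/submit on this slot */
    void *bio;                  /* currently pending bio, if any */
};

struct toy_sb_info {
    struct toy_bio_info read_io;                /* one slot for reads */
    struct toy_bio_info write_io[NR_PAGE_TYPE]; /* one slot per write type */
};

static void toy_init_io(struct toy_sb_info *sbi)
{
    int i;

    pthread_mutex_init(&sbi->read_io.io_mutex, NULL);
    sbi->read_io.bio = NULL;
    for (i = 0; i < NR_PAGE_TYPE; i++) {
        pthread_mutex_init(&sbi->write_io[i].io_mutex, NULL);
        sbi->write_io[i].bio = NULL;
    }
}

int main(void)
{
    struct toy_sb_info sbi;

    toy_init_io(&sbi);
    return 0;
}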
@@ -939,9 +969,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
     }

     /* if there are any orphan nodes, free them */
-    err = -EINVAL;
-    if (recover_orphan_inodes(sbi))
-        goto free_node_inode;
+    recover_orphan_inodes(sbi);

     /* read root inode and dentry */
     root = f2fs_iget(sb, F2FS_ROOT_INO(sbi));
@@ -950,8 +978,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
         err = PTR_ERR(root);
         goto free_node_inode;
     }
-    if (!S_ISDIR(root->i_mode) || !root->i_blocks || !root->i_size)
+    if (!S_ISDIR(root->i_mode) || !root->i_blocks || !root->i_size) {
+        err = -EINVAL;
         goto free_root_inode;
+    }

     sb->s_root = d_make_root(root); /* allocate root dentry */
     if (!sb->s_root) {
@@ -1053,7 +1083,7 @@ static int __init init_inodecache(void)
 {
     f2fs_inode_cachep = f2fs_kmem_cache_create("f2fs_inode_cache",
             sizeof(struct f2fs_inode_info), NULL);
-    if (f2fs_inode_cachep == NULL)
+    if (!f2fs_inode_cachep)
         return -ENOMEM;
     return 0;
 }
@@ -1078,9 +1108,12 @@ static int __init init_f2fs_fs(void)
     err = create_node_manager_caches();
     if (err)
         goto free_inodecache;
-    err = create_gc_caches();
+    err = create_segment_manager_caches();
     if (err)
         goto free_node_manager_caches;
+    err = create_gc_caches();
+    if (err)
+        goto free_segment_manager_caches;
     err = create_checkpoint_caches();
     if (err)
         goto free_gc_caches;
@@ -1102,6 +1135,8 @@ static int __init init_f2fs_fs(void)
     destroy_checkpoint_caches();
 free_gc_caches:
     destroy_gc_caches();
+free_segment_manager_caches:
+    destroy_segment_manager_caches();
 free_node_manager_caches:
     destroy_node_manager_caches();
 free_inodecache:
@@ -1117,6 +1152,7 @@ static void __exit exit_f2fs_fs(void)
     unregister_filesystem(&f2fs_fs_type);
     destroy_checkpoint_caches();
     destroy_gc_caches();
+    destroy_segment_manager_caches();
     destroy_node_manager_caches();
     destroy_inodecache();
     kset_unregister(f2fs_kset);
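The init path keeps the usual goto-based unwind, releasing caches in reverse order of creation; a small self-contained illustration with stand-in helpers:

/* Illustration of the goto-based unwind ordering used in init_f2fs_fs()
 * above; the create_x()/destroy_x() helpers are hypothetical stand-ins. */
#include <stdbool.h>
#include <stdio.h>

static bool create_a(void) { return true; }
static bool create_b(void) { return true; }
static bool create_c(void) { return false; }    /* pretend this step fails */
static void destroy_a(void) { puts("destroy a"); }
static void destroy_b(void) { puts("destroy b"); }

static int init_all(void)
{
    if (!create_a())
        return -1;
    if (!create_b())
        goto free_a;
    if (!create_c())
        goto free_b;
    return 0;
free_b:
    destroy_b();    /* undo in reverse order of creation */
free_a:
    destroy_a();
    return -1;
}

int main(void)
{
    printf("init_all() = %d\n", init_all());
    return 0;
}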
...
@@ -522,7 +522,7 @@ static int __f2fs_setxattr(struct inode *inode, int name_index,
     if (found)
         free = free + ENTRY_SIZE(here);

-    if (free < newsize) {
+    if (unlikely(free < newsize)) {
         error = -ENOSPC;
         goto exit;
     }
...
@@ -153,6 +153,13 @@ struct f2fs_extent {
 #define NODE_DIND_BLOCK     (DEF_ADDRS_PER_INODE + 5)

 #define F2FS_INLINE_XATTR   0x01    /* file inline xattr flag */
+#define F2FS_INLINE_DATA    0x02    /* file inline data flag */
+
+#define MAX_INLINE_DATA     (sizeof(__le32) * (DEF_ADDRS_PER_INODE - \
+                        F2FS_INLINE_XATTR_ADDRS - 1))
+
+#define INLINE_DATA_OFFSET  (PAGE_CACHE_SIZE - sizeof(struct node_footer) \
+                - sizeof(__le32) * (DEF_ADDRS_PER_INODE + 5 - 1))

 struct f2fs_inode {
     __le16 i_mode;          /* file mode */
...
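As a quick sanity check of MAX_INLINE_DATA: assuming the values f2fs used around this release (DEF_ADDRS_PER_INODE = 923 address slots in the inode and F2FS_INLINE_XATTR_ADDRS = 50 slots reserved for inline xattrs, both stated here as assumptions), the inline area comes to 3488 bytes, so files up to roughly 3.4 KB fit entirely in the inode.

/* Back-of-the-envelope computation of MAX_INLINE_DATA; the two constants
 * are assumptions matching the f2fs headers of this era. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DEF_ADDRS_PER_INODE     923 /* address slots in an f2fs inode */
#define F2FS_INLINE_XATTR_ADDRS 50  /* slots reserved for inline xattrs */

int main(void)
{
    size_t max_inline = sizeof(uint32_t) *
            (DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS - 1);

    /* 4 * (923 - 50 - 1) = 3488 bytes of inline file data */
    printf("MAX_INLINE_DATA = %zu bytes\n", max_inline);
    return 0;
}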
@@ -16,15 +16,28 @@
         { META,         "META" },   \
         { META_FLUSH,   "META_FLUSH" })

-#define show_bio_type(type) \
-    __print_symbolic(type, \
-        { READ,         "READ" },   \
-        { READA,        "READAHEAD" },  \
-        { READ_SYNC,    "READ_SYNC" },  \
-        { WRITE,        "WRITE" },  \
-        { WRITE_SYNC,   "WRITE_SYNC" }, \
-        { WRITE_FLUSH,  "WRITE_FLUSH" },    \
-        { WRITE_FUA,    "WRITE_FUA" })
+#define F2FS_BIO_MASK(t)    (t & (READA | WRITE_FLUSH_FUA))
+#define F2FS_BIO_EXTRA_MASK(t)  (t & (REQ_META | REQ_PRIO))
+
+#define show_bio_type(type) show_bio_base(type), show_bio_extra(type)
+
+#define show_bio_base(type) \
+    __print_symbolic(F2FS_BIO_MASK(type), \
+        { READ,             "READ" },   \
+        { READA,            "READAHEAD" },  \
+        { READ_SYNC,        "READ_SYNC" },  \
+        { WRITE,            "WRITE" },  \
+        { WRITE_SYNC,       "WRITE_SYNC" }, \
+        { WRITE_FLUSH,      "WRITE_FLUSH" },    \
+        { WRITE_FUA,        "WRITE_FUA" },  \
+        { WRITE_FLUSH_FUA,  "WRITE_FLUSH_FUA" })
+
+#define show_bio_extra(type) \
+    __print_symbolic(F2FS_BIO_EXTRA_MASK(type), \
+        { REQ_META,             "(M)" },    \
+        { REQ_PRIO,             "(P)" },    \
+        { REQ_META | REQ_PRIO,  "(MP)" },   \
+        { 0, " \b" })
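The two masks split a request's rw flags into a base operation string plus an (M)/(P) suffix, which the %s%s format specifiers in the updated TP_printk strings below consume. A rough userspace approximation of that decoding; the flag bit values here are invented and do not match the kernel's REQ_* encoding.

/* Userspace approximation of show_bio_base()/show_bio_extra(); flag
 * values are invented for illustration only. */
#include <stdio.h>

#define MY_WRITE    0x01
#define MY_SYNC     0x02
#define MY_META     0x10
#define MY_PRIO     0x20

static const char *bio_base(unsigned int rw)
{
    switch (rw & (MY_WRITE | MY_SYNC)) {
    case 0:         return "READ";
    case MY_SYNC:   return "READ_SYNC";
    case MY_WRITE:  return "WRITE";
    default:        return "WRITE_SYNC";
    }
}

static const char *bio_extra(unsigned int rw)
{
    switch (rw & (MY_META | MY_PRIO)) {
    case MY_META:           return "(M)";
    case MY_PRIO:           return "(P)";
    case MY_META | MY_PRIO: return "(MP)";
    default:                return "";
    }
}

int main(void)
{
    unsigned int rw = MY_WRITE | MY_SYNC | MY_META | MY_PRIO;

    /* mirrors the "%s%s" usage in the trace format strings */
    printf("%s%s\n", bio_base(rw), bio_extra(rw));
    return 0;
}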
 #define show_data_type(type) \
     __print_symbolic(type, \
@@ -421,7 +434,7 @@ TRACE_EVENT(f2fs_truncate_partial_nodes,
         __entry->err)
 );

-TRACE_EVENT_CONDITION(f2fs_readpage,
+TRACE_EVENT_CONDITION(f2fs_submit_page_bio,

     TP_PROTO(struct page *page, sector_t blkaddr, int type),
@@ -446,7 +459,7 @@ TRACE_EVENT_CONDITION(f2fs_readpage,
     ),

     TP_printk("dev = (%d,%d), ino = %lu, page_index = 0x%lx, "
-        "blkaddr = 0x%llx, bio_type = %s",
+        "blkaddr = 0x%llx, bio_type = %s%s",
         show_dev_ino(__entry),
         (unsigned long)__entry->index,
         (unsigned long long)__entry->blkaddr,
@@ -598,36 +611,54 @@ TRACE_EVENT(f2fs_reserve_new_block,
         __entry->ofs_in_node)
 );

-TRACE_EVENT(f2fs_do_submit_bio,
+DECLARE_EVENT_CLASS(f2fs__submit_bio,

-    TP_PROTO(struct super_block *sb, int btype, bool sync, struct bio *bio),
+    TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),

-    TP_ARGS(sb, btype, sync, bio),
+    TP_ARGS(sb, rw, type, bio),

     TP_STRUCT__entry(
         __field(dev_t, dev)
-        __field(int, btype)
-        __field(bool, sync)
+        __field(int, rw)
+        __field(int, type)
         __field(sector_t, sector)
         __field(unsigned int, size)
     ),

     TP_fast_assign(
         __entry->dev    = sb->s_dev;
-        __entry->btype  = btype;
-        __entry->sync   = sync;
+        __entry->rw     = rw;
+        __entry->type   = type;
         __entry->sector = bio->bi_sector;
         __entry->size   = bio->bi_size;
     ),

-    TP_printk("dev = (%d,%d), type = %s, io = %s, sector = %lld, size = %u",
+    TP_printk("dev = (%d,%d), %s%s, %s, sector = %lld, size = %u",
         show_dev(__entry),
-        show_block_type(__entry->btype),
-        __entry->sync ? "sync" : "no sync",
+        show_bio_type(__entry->rw),
+        show_block_type(__entry->type),
         (unsigned long long)__entry->sector,
         __entry->size)
 );

+DEFINE_EVENT_CONDITION(f2fs__submit_bio, f2fs_submit_write_bio,
+
+    TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),
+
+    TP_ARGS(sb, rw, type, bio),
+
+    TP_CONDITION(bio)
+);
+
+DEFINE_EVENT_CONDITION(f2fs__submit_bio, f2fs_submit_read_bio,
+
+    TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),
+
+    TP_ARGS(sb, rw, type, bio),
+
+    TP_CONDITION(bio)
+);
 DECLARE_EVENT_CLASS(f2fs__page,

     TP_PROTO(struct page *page, int type),
@@ -674,15 +705,16 @@ DEFINE_EVENT(f2fs__page, f2fs_vm_page_mkwrite,
     TP_ARGS(page, type)
 );

-TRACE_EVENT(f2fs_submit_write_page,
+TRACE_EVENT(f2fs_submit_page_mbio,

-    TP_PROTO(struct page *page, block_t blk_addr, int type),
+    TP_PROTO(struct page *page, int rw, int type, block_t blk_addr),

-    TP_ARGS(page, blk_addr, type),
+    TP_ARGS(page, rw, type, blk_addr),

     TP_STRUCT__entry(
         __field(dev_t, dev)
         __field(ino_t, ino)
+        __field(int, rw)
         __field(int, type)
         __field(pgoff_t, index)
         __field(block_t, block)
@@ -691,13 +723,15 @@ TRACE_EVENT(f2fs_submit_write_page,
     TP_fast_assign(
         __entry->dev    = page->mapping->host->i_sb->s_dev;
         __entry->ino    = page->mapping->host->i_ino;
+        __entry->rw     = rw;
         __entry->type   = type;
         __entry->index  = page->index;
         __entry->block  = blk_addr;
     ),

-    TP_printk("dev = (%d,%d), ino = %lu, %s, index = %lu, blkaddr = 0x%llx",
+    TP_printk("dev = (%d,%d), ino = %lu, %s%s, %s, index = %lu, blkaddr = 0x%llx",
         show_dev_ino(__entry),
+        show_bio_type(__entry->rw),
         show_block_type(__entry->type),
         (unsigned long)__entry->index,
         (unsigned long long)__entry->block)
@@ -727,6 +761,29 @@ TRACE_EVENT(f2fs_write_checkpoint,
         __entry->msg)
 );

+TRACE_EVENT(f2fs_issue_discard,
+
+    TP_PROTO(struct super_block *sb, block_t blkstart, block_t blklen),
+
+    TP_ARGS(sb, blkstart, blklen),
+
+    TP_STRUCT__entry(
+        __field(dev_t, dev)
+        __field(block_t, blkstart)
+        __field(block_t, blklen)
+    ),
+
+    TP_fast_assign(
+        __entry->dev        = sb->s_dev;
+        __entry->blkstart   = blkstart;
+        __entry->blklen     = blklen;
+    ),
+
+    TP_printk("dev = (%d,%d), blkstart = 0x%llx, blklen = 0x%llx",
+        show_dev(__entry),
+        (unsigned long long)__entry->blkstart,
+        (unsigned long long)__entry->blklen)
+);
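The new tracepoint records discards issued by the filesystem itself. For comparison only, userspace can request a discard on a block-device range through the BLKDISCARD ioctl; a sketch with a placeholder device path and range follows (it is only loosely related to the in-kernel event above).

/* Userspace sketch: discard a block-device range via BLKDISCARD. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    uint64_t range[2] = { 0, 1 << 20 }; /* offset, length in bytes */
    int fd = open("/dev/sdb", O_WRONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKDISCARD, &range) != 0)
        perror("BLKDISCARD");
    close(fd);
    return 0;
}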
 #endif /* _TRACE_F2FS_H */

 /* This part must be outside protection */
...