Commit 168a9fd6 authored by Miklos Szeredi, committed by Linus Torvalds

[PATCH] __wait_on_freeing_inode fix

This patch fixes queer behavior in __wait_on_freeing_inode().

If I_LOCK was not set, it called yield(), effectively busy-waiting for the
removal of the inode from the hash.  This behavior was introduced in
"[PATCH] eliminate inode waitqueue hashtable" (Changeset 1.1938.166.16) last
October by wli.

The solution is to restore the old behavior of unconditionally waiting on
the waitqueue.  It doesn't matter if I_LOCK is not set initially: the task
will go to sleep and wake up when wake_up_inode() is called from
generic_delete_inode() after removing the inode from the hash chain.
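
For illustration only, below is a minimal userspace analogy of that wait/wake
pairing (plain pthreads, not kernel code; the names fake_inode,
wait_on_freeing and deleter are invented for this sketch): the lookup side
sleeps unconditionally, and the delete side removes the object from the
"hash" before waking waiters, so no yield()-style polling is needed.

/*
 * Userspace analogy of the fixed behavior, not part of the patch.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct fake_inode {
        bool hashed;    /* still reachable via the hash chain */
        bool freeing;   /* analogue of I_FREEING */
};

static struct fake_inode ino = { .hashed = true, .freeing = false };
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* ~ inode_lock */
static pthread_cond_t  wq   = PTHREAD_COND_INITIALIZER;   /* ~ bit waitqueue */

/* ~ __wait_on_freeing_inode(): sleep unconditionally until unhashed. */
static void wait_on_freeing(void)
{
        pthread_mutex_lock(&lock);
        while (ino.hashed)
                pthread_cond_wait(&wq, &lock);
        pthread_mutex_unlock(&lock);
}

/* ~ generic_delete_inode(): unhash first, then wake any waiters. */
static void *deleter(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        ino.freeing = true;           /* deletion in progress */
        pthread_mutex_unlock(&lock);

        sleep(1);                     /* filesystem finishing its deletion */

        pthread_mutex_lock(&lock);
        ino.hashed = false;           /* remove from the hash chain */
        pthread_cond_broadcast(&wq);  /* ~ wake_up_inode() */
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, deleter, NULL);
        wait_on_freeing();            /* lookup hit an inode being freed */
        printf("inode is gone; lookup can now report it as not found\n");
        pthread_join(t, NULL);
        return 0;
}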

Comment is also updated to better reflect current behavior.

This condition is very hard to trigger normally (a simultaneous clear_inode()
and iget()), so probably only heavy stress testing can reveal any change in
behavior.
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 3b6bfcdb
@@ -1244,29 +1244,21 @@ int inode_wait(void *word)
 }
 
 /*
- * If we try to find an inode in the inode hash while it is being deleted, we
- * have to wait until the filesystem completes its deletion before reporting
- * that it isn't found.  This is because iget will immediately call
- * ->read_inode, and we want to be sure that evidence of the deletion is found
- * by ->read_inode.
+ * If we try to find an inode in the inode hash while it is being
+ * deleted, we have to wait until the filesystem completes its
+ * deletion before reporting that it isn't found.  This function waits
+ * until the deletion _might_ have completed.  Callers are responsible
+ * to recheck inode state.
+ *
+ * It doesn't matter if I_LOCK is not set initially, a call to
+ * wake_up_inode() after removing from the hash list will DTRT.
+ *
  * This is called with inode_lock held.
  */
 static void __wait_on_freeing_inode(struct inode *inode)
 {
 	wait_queue_head_t *wq;
 	DEFINE_WAIT_BIT(wait, &inode->i_state, __I_LOCK);
-
-	/*
-	 * I_FREEING and I_CLEAR are cleared in process context under
-	 * inode_lock, so we have to give the tasks who would clear them
-	 * a chance to run and acquire inode_lock.
-	 */
-	if (!(inode->i_state & I_LOCK)) {
-		spin_unlock(&inode_lock);
-		yield();
-		spin_lock(&inode_lock);
-		return;
-	}
 	wq = bit_waitqueue(&inode->i_state, __I_LOCK);
 	prepare_to_wait(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
 	spin_unlock(&inode_lock);