Commit b533184f authored by J. Bruce Fields

locks: clarify posix_locks_deadlock

For such a short function (with such a long comment),
posix_locks_deadlock() seems to cause a lot of confusion.  Attempt to
make it a bit clearer:

	- Remove the initial posix_same_owner() check, which can never
	  pass (since this is only called in the case that block_fl and
	  caller_fl conflict)
	- Use an explicit loop (and a helper function) instead of a goto.
	- Rewrite the comment, attempting a clearer explanation, and
	  removing some uninteresting historical detail.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
parent 9135f190
@@ -683,45 +683,55 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 EXPORT_SYMBOL(posix_test_lock);
 
-/* This function tests for deadlock condition before putting a process to
- * sleep. The detection scheme is no longer recursive. Recursive was neat,
- * but dangerous - we risked stack corruption if the lock data was bad, or
- * if the recursion was too deep for any other reason.
+/*
+ * Deadlock detection:
+ *
+ * We attempt to detect deadlocks that are due purely to posix file
+ * locks.
+ *
+ * We assume that a task can be waiting for at most one lock at a time.
+ * So for any acquired lock, the process holding that lock may be
+ * waiting on at most one other lock.  That lock in turns may be held by
+ * someone waiting for at most one other lock.  Given a requested lock
+ * caller_fl which is about to wait for a conflicting lock block_fl, we
+ * follow this chain of waiters to ensure we are not about to create a
+ * cycle.
  *
- * We rely on the fact that a task can only be on one lock's wait queue
- * at a time. When we find blocked_task on a wait queue we can re-search
- * with blocked_task equal to that queue's owner, until either blocked_task
- * isn't found, or blocked_task is found on a queue owned by my_task.
+ * Since we do this before we ever put a process to sleep on a lock, we
+ * are ensured that there is never a cycle; that is what guarantees that
+ * the while() loop in posix_locks_deadlock() eventually completes.
  *
- * Note: the above assumption may not be true when handling lock requests
- * from a broken NFS client. But broken NFS clients have a lot more to
- * worry about than proper deadlock detection anyway... --okir
+ * Note: the above assumption may not be true when handling lock
+ * requests from a broken NFS client.  It may also fail in the presence
+ * of tasks (such as posix threads) sharing the same open file table.
  *
- * However, the failure of this assumption (also possible in the case of
- * multiple tasks sharing the same open file table) also means there's no
- * guarantee that the loop below will terminate. As a hack, we give up
- * after a few iterations.
+ * To handle those cases, we just bail out after a few iterations.
  */
 
 #define MAX_DEADLK_ITERATIONS 10
 
+/* Find a lock that the owner of the given block_fl is blocking on. */
+static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
+{
+	struct file_lock *fl;
+
+	list_for_each_entry(fl, &blocked_list, fl_link) {
+		if (posix_same_owner(fl, block_fl))
+			return fl->fl_next;
+	}
+	return NULL;
+}
+
 static int posix_locks_deadlock(struct file_lock *caller_fl,
 				struct file_lock *block_fl)
 {
-	struct file_lock *fl;
 	int i = 0;
 
-next_task:
-	if (posix_same_owner(caller_fl, block_fl))
-		return 1;
-	list_for_each_entry(fl, &blocked_list, fl_link) {
-		if (posix_same_owner(fl, block_fl)) {
-			if (i++ > MAX_DEADLK_ITERATIONS)
-				return 0;
-			fl = fl->fl_next;
-			block_fl = fl;
-			goto next_task;
-		}
-	}
+	while ((block_fl = what_owner_is_waiting_for(block_fl))) {
+		if (i++ > MAX_DEADLK_ITERATIONS)
+			return 0;
+		if (posix_same_owner(caller_fl, block_fl))
+			return 1;
+	}
 	return 0;
 }