Commit 0729e196 authored by Darren Hart, committed by Ingo Molnar

futex: Fix wakeup race by setting TASK_INTERRUPTIBLE before queue_me()

PI futexes do not use the same plist_node_empty() test for wakeup.
It was possible for the waiter (in futex_wait_requeue_pi()) to set
TASK_INTERRUPTIBLE after the waker assigned the rtmutex to the
waiter. The waiter would then note the plist was not empty and call
schedule(). The task would not be found by any subsequent futex
wakeups, resulting in a userspace hang.
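
The lost wakeup can be illustrated with a condensed sketch of the old
ordering (taken from the lines removed in the hunk below, with the
waker's steps shown as comments at the point where they can
interleave; this is an illustration, not the literal kernel source):

	queue_me(q, hb);			/* waiter becomes visible on the hash list */

	/*
	 * <-- the requeue PI waker can run here: it assigns the rtmutex
	 *     to the waiter, and its wakeup has no effect because the
	 *     waiter is still TASK_RUNNING.
	 */

	set_current_state(TASK_INTERRUPTIBLE);	/* too late: the wakeup already happened */

	if (likely(!plist_node_empty(&q->list)))
		schedule();			/* the PI wake path did not empty the
						 * plist, so we sleep with nobody left
						 * to wake us */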

By moving the setting of TASK_INTERRUPTIBLE to before the call to
queue_me(), the race with the waker is eliminated. Since we no
longer call get_user() from within queue_me(), there is no need to
delay the setting of TASK_INTERRUPTIBLE until after the call to
queue_me().
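
With set_current_state(TASK_INTERRUPTIBLE) moved ahead of queue_me(),
the same sketch becomes (again condensed from the hunk below, not the
literal kernel source):

	set_current_state(TASK_INTERRUPTIBLE);	/* state is set before we are visible
						 * to any waker */
	queue_me(q, hb);			/* from here on, a waker either puts us
						 * back to TASK_RUNNING ... */

	if (likely(!plist_node_empty(&q->list)))
		schedule();			/* ... so schedule() does not block, or
						 * it emptied the plist and we skip
						 * schedule() entirely */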

The FUTEX_LOCK_PI operation is not affected as futex_lock_pi()
relies entirely on the rtmutex code to handle schedule() and
wakeup.  The requeue PI code is affected because the waiter starts
as a non-PI waiter and is woken on a PI futex.

Remove the crusty old comment about holding spinlocks across
get_user(), as we no longer do that, and replace the locking comment
above the plist_node_empty() test with a description of why the test
is performed.
Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Dinakar Guniguntala <dino@in.ibm.com>
Cc: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <20090922053038.8717.97838.stgit@Aeon>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent d8d88fbb
kernel/futex.c
@@ -1656,17 +1656,8 @@ static int fixup_owner(u32 __user *uaddr, int fshared, struct futex_q *q,
 static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
 				struct hrtimer_sleeper *timeout)
 {
-	queue_me(q, hb);
-
-	/*
-	 * There might have been scheduling since the queue_me(), as we
-	 * cannot hold a spinlock across the get_user() in case it
-	 * faults, and we cannot just set TASK_INTERRUPTIBLE state when
-	 * queueing ourselves into the futex hash. This code thus has to
-	 * rely on the futex_wake() code removing us from hash when it
-	 * wakes us up.
-	 */
 	set_current_state(TASK_INTERRUPTIBLE);
+	queue_me(q, hb);
 
 	/* Arm the timer */
 	if (timeout) {
@@ -1676,8 +1667,8 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
 	}
 
 	/*
-	 * !plist_node_empty() is safe here without any lock.
-	 * q.lock_ptr != 0 is not safe, because of ordering against wakeup.
+	 * If we have been removed from the hash list, then another task
+	 * has tried to wake us, and we can skip the call to schedule().
 	 */
 	if (likely(!plist_node_empty(&q->list))) {
 		/*