Commit 58877d34 authored by Peter Zijlstra

sched: Better document ttwu()

Dave hit the problem fixed by commit:

  b6e13e85 ("sched/core: Fix ttwu() race")

and failed to understand much of the code involved. Per his request, add a
few comments to (hopefully) clarify things.
Requested-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200702125211.GQ4800@hirez.programming.kicks-ass.net
parent 015dc089
include/linux/sched.h:

@@ -154,7 +154,7 @@ struct task_group;
  *
  *   for (;;) {
  *      set_current_state(TASK_UNINTERRUPTIBLE);
- *      if (!need_sleep)
+ *      if (CONDITION)
  *         break;
  *
  *      schedule();
@@ -162,16 +162,16 @@ struct task_group;
  *   __set_current_state(TASK_RUNNING);
  *
  * If the caller does not need such serialisation (because, for instance, the
- * condition test and condition change and wakeup are under the same lock) then
+ * CONDITION test and condition change and wakeup are under the same lock) then
  * use __set_current_state().
  *
  * The above is typically ordered against the wakeup, which does:
  *
- *   need_sleep = false;
+ *   CONDITION = 1;
  *   wake_up_state(p, TASK_UNINTERRUPTIBLE);
  *
- * where wake_up_state() executes a full memory barrier before accessing the
- * task state.
+ * where wake_up_state()/try_to_wake_up() executes a full memory barrier before
+ * accessing p->state.
  *
  * Wakeup will do: if (@state & p->state) p->state = TASK_RUNNING, that is,
  * once it observes the TASK_UNINTERRUPTIBLE store the waking CPU can issue a
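For reference, a minimal sketch of the waiter/waker pairing this comment documents; it is not part of the commit. The names cond, waiter, wait_for_cond() and wake_waiter() are made up for illustration, with cond playing the role of CONDITION:

#include <linux/sched.h>

static int cond;				/* stands in for CONDITION */
static struct task_struct *waiter;		/* task sleeping until cond is set */

static void wait_for_cond(void)
{
	for (;;) {
		/*
		 * set_current_state() implies a full barrier between the
		 * p->state store and the load of cond below.
		 */
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (cond)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}

static void wake_waiter(void)
{
	cond = 1;				/* condition change */
	/*
	 * wake_up_state()/try_to_wake_up() issues a full barrier before
	 * reading p->state, so it either observes TASK_UNINTERRUPTIBLE
	 * or the waiter observes cond and skips the sleep.
	 */
	wake_up_state(waiter, TASK_UNINTERRUPTIBLE);
}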
This diff is collapsed.
kernel/sched/sched.h:

@@ -1203,6 +1203,16 @@ struct rq_flags {
 #endif
 };

+/*
+ * Lockdep annotation that avoids accidental unlocks; it's like a
+ * sticky/continuous lockdep_assert_held().
+ *
+ * This avoids code that has access to 'struct rq *rq' (basically everything in
+ * the scheduler) from accidentally unlocking the rq if they do not also have a
+ * copy of the (on-stack) 'struct rq_flags rf'.
+ *
+ * Also see Documentation/locking/lockdep-design.rst.
+ */
 static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 {
 	rf->cookie = lockdep_pin_lock(&rq->lock);
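For context, a small sketch of how the pinned rq lock is meant to be used; it follows the existing rq_lock()/rq_unlock() helpers in kernel/sched/sched.h, and the function name example_rq_work() is made up for illustration:

/* Sketch only; assumes it lives in the scheduler where kernel/sched/sched.h is in scope. */
static void example_rq_work(struct rq *rq)
{
	struct rq_flags rf;

	rq_lock(rq, &rf);	/* raw_spin_lock(&rq->lock) + rq_pin_lock(rq, &rf) */

	/*
	 * Releasing rq->lock here without first calling rq_unpin_lock(rq, &rf)
	 * would trip the lockdep pin check, because the cookie stored in rf
	 * still pins the lock.
	 */

	rq_unlock(rq, &rf);	/* rq_unpin_lock(rq, &rf) + raw_spin_unlock(&rq->lock) */
}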