Commit 269d5992 authored by Peter Zijlstra, committed by Ingo Molnar

sched/core: Fix DEBUG_SPINLOCK annotation for rq->lock

Mark noticed that he had sporadic "spinlock recursion" warnings from
the DEBUG_SPINLOCK code. Now rq->lock is special in that the owner
changes in the middle of a context switch.

It so happens that we fix up the lock.owner too late: @prev can run
(remotely) the moment prev->on_cpu is cleared, which then allows @prev
to try and acquire this rq->lock again and trigger the warning.

So we have to switch lock.owner before clearing prev->on_cpu.

Do this by moving the DEBUG_SPINLOCK annotation from after switch_to()
to before it, and collecting all the lockdep annotations there into
prepare_lock_switch() to mirror the existing finish_lock_switch().
Debugged-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent a7711602
@@ -2601,19 +2601,31 @@ static inline void finish_task(struct task_struct *prev)
 #endif
 }
 
-static inline void finish_lock_switch(struct rq *rq)
+static inline void
+prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 {
+        /*
+         * Since the runqueue lock will be released by the next
+         * task (which is an invalid locking op but in the case
+         * of the scheduler it's an obvious special-case), so we
+         * do an early lockdep release here:
+         */
+        rq_unpin_lock(rq, rf);
+        spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
 #ifdef CONFIG_DEBUG_SPINLOCK
         /* this is a valid case when another task releases the spinlock */
-        rq->lock.owner = current;
+        rq->lock.owner = next;
 #endif
+}
+
+static inline void finish_lock_switch(struct rq *rq)
+{
         /*
          * If we are tracking spinlock dependencies then we have to
          * fix up the runqueue lock - which gets 'carried over' from
          * prev into current:
          */
         spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
 
         raw_spin_unlock_irq(&rq->lock);
 }
@@ -2844,14 +2856,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 
         rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
 
-        /*
-         * Since the runqueue lock will be released by the next
-         * task (which is an invalid locking op but in the case
-         * of the scheduler it's an obvious special-case), so we
-         * do an early lockdep release here:
-         */
-        rq_unpin_lock(rq, rf);
-        spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+        prepare_lock_switch(rq, next, rf);
 
         /* Here we just switch the register state and the stack. */
         switch_to(prev, next, prev);
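For illustration only, here is a minimal user-space sketch of the ordering requirement described in the commit message. It is not kernel code: every name in it (TASK_PREV, dbg_rq_lock(), the cpu0()/cpu1() threads, the FIXED_ORDER macro, the file name sketch.c) is invented for the sketch. Two pthreads stand in for two CPUs, plain integers stand in for tasks, and a mutex plus a recorded "owner task" stands in for a DEBUG_SPINLOCK rq->lock. Built without -DFIXED_ORDER it clears prev_on_cpu while the recorded owner still names prev, so the debug check on the other "CPU" reports a bogus recursion, much like the warning Mark saw; building with -DFIXED_ORDER applies the commit's ordering (hand the owner to next first) and the warning goes away.

/*
 * sketch.c -- hypothetical stand-alone illustration, NOT kernel code.
 * Two pthreads play two CPUs, integers play tasks, and a mutex plus a
 * recorded "owner task" plays a DEBUG_SPINLOCK rq->lock.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

enum { TASK_PREV = 1, TASK_NEXT = 2 };

static pthread_mutex_t rq_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int rq_owner_task;        /* plays rq->lock.owner */
static atomic_int prev_on_cpu = 1;      /* plays prev->on_cpu   */

/* DEBUG_SPINLOCK-style check: warn if this CPU's current task already
 * appears to own the lock it is trying to take. */
static void dbg_rq_lock(int curr_task)
{
        if (atomic_load(&rq_owner_task) == curr_task)
                fprintf(stderr, "BUG: spinlock recursion (stale owner %d)\n",
                        curr_task);
        pthread_mutex_lock(&rq_mutex);
        atomic_store(&rq_owner_task, curr_task);
}

static void dbg_rq_unlock(void)
{
        atomic_store(&rq_owner_task, 0);
        pthread_mutex_unlock(&rq_mutex);
}

/* "CPU 0": TASK_PREV holds the rq lock and context-switches to TASK_NEXT.
 * (In the sketch one thread does both the prev and next halves.) */
static void *cpu0(void *arg)
{
        dbg_rq_lock(TASK_PREV);                 /* prev owns the rq lock */
#ifdef FIXED_ORDER
        /* prepare_lock_switch()-like step: hand the owner to next ...  */
        atomic_store(&rq_owner_task, TASK_NEXT);
        /* ... before the finish_task()-like step lets prev run again   */
        atomic_store(&prev_on_cpu, 0);
#else
        /* broken ordering: prev may run remotely while owner still == prev */
        atomic_store(&prev_on_cpu, 0);
        usleep(1000);                           /* widen the race window */
        atomic_store(&rq_owner_task, TASK_NEXT);
#endif
        dbg_rq_unlock();                        /* finish_lock_switch()-like */
        return NULL;
}

/* "CPU 1": the moment prev_on_cpu is clear, TASK_PREV runs here and
 * tries to take the rq lock again. */
static void *cpu1(void *arg)
{
        while (atomic_load(&prev_on_cpu))
                ;                               /* wait until prev is free  */
        dbg_rq_lock(TASK_PREV);                 /* warns with broken order  */
        dbg_rq_unlock();
        return NULL;
}

int main(void)
{
        pthread_t t0, t1;

        pthread_create(&t1, NULL, cpu1, NULL);
        pthread_create(&t0, NULL, cpu0, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        return 0;
}

A possible way to try it: gcc -pthread sketch.c && ./a.out prints the spurious warning, while gcc -pthread -DFIXED_ORDER sketch.c && ./a.out stays silent (file and binary names are arbitrary).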