Commit ab2515c4 authored by Peter Zijlstra, committed by Ingo Molnar

sched: Drop rq->lock from first part of wake_up_new_task()

Since p->pi_lock now protects all things needed to call
select_task_rq(), avoid the double remote rq->lock acquisition and
rely on p->pi_lock.
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110405152729.273362517@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 0122ec5b
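
For orientation, this is how wake_up_new_task() reads once the patch below
is applied. It is a reconstruction stitched together from the two hunks;
the check_preempt_curr()/task_woken lines in the middle are unchanged
context from the kernel source of this era, not part of the change itself:

void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
{
	unsigned long flags;
	struct rq *rq;

	/*
	 * p->pi_lock is enough to keep ->cpus_allowed and friends stable
	 * across select_task_rq(); no rq->lock needed for fork balancing.
	 */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
#ifdef CONFIG_SMP
	/*
	 * Fork balancing, do it here and not earlier because:
	 *  - cpus_allowed can change in the fork path
	 *  - any previously selected cpu might disappear through hotplug
	 */
	set_task_cpu(p, select_task_rq(p, SD_BALANCE_FORK, 0));
#endif

	rq = __task_rq_lock(p);			/* rq->lock only; pi_lock already held */
	activate_task(rq, p, 0);
	p->on_rq = 1;
	trace_sched_wakeup_new(p, true);
	check_preempt_curr(rq, p, WAKE_FORK);	/* context, unchanged by this patch */
#ifdef CONFIG_SMP
	if (p->sched_class->task_woken)
		p->sched_class->task_woken(rq, p);
#endif
	task_rq_unlock(rq, p, &flags);		/* drops rq->lock, then p->pi_lock */
}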
@@ -2736,28 +2736,18 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
 {
 	unsigned long flags;
 	struct rq *rq;
-	int cpu __maybe_unused = get_cpu();
 
+	raw_spin_lock_irqsave(&p->pi_lock, flags);
 #ifdef CONFIG_SMP
-	rq = task_rq_lock(p, &flags);
-	p->state = TASK_WAKING;
-
 	/*
 	 * Fork balancing, do it here and not earlier because:
 	 *  - cpus_allowed can change in the fork path
 	 *  - any previously selected cpu might disappear through hotplug
-	 *
-	 * We set TASK_WAKING so that select_task_rq() can drop rq->lock
-	 * without people poking at ->cpus_allowed.
 	 */
-	cpu = select_task_rq(p, SD_BALANCE_FORK, 0);
-	set_task_cpu(p, cpu);
-
-	p->state = TASK_RUNNING;
-	task_rq_unlock(rq, p, &flags);
+	set_task_cpu(p, select_task_rq(p, SD_BALANCE_FORK, 0));
 #endif
 
-	rq = task_rq_lock(p, &flags);
+	rq = __task_rq_lock(p);
 	activate_task(rq, p, 0);
 	p->on_rq = 1;
 	trace_sched_wakeup_new(p, true);

@@ -2767,7 +2757,6 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
 		p->sched_class->task_woken(rq, p);
 #endif
 	task_rq_unlock(rq, p, &flags);
-	put_cpu();
 }
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
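
The rewrite leans on the lock helpers reworked by the parent commit
(0122ec5b): task_rq_lock() acquires p->pi_lock and then rq->lock, while
__task_rq_lock() acquires only rq->lock and expects the caller to already
hold p->pi_lock; task_rq_unlock() releases both. A minimal sketch of that
pairing, reconstructed from that series rather than quoted verbatim:

static struct rq *__task_rq_lock(struct task_struct *p)
	__acquires(rq->lock)
{
	struct rq *rq;

	lockdep_assert_held(&p->pi_lock);	/* caller holds p->pi_lock */

	for (;;) {
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		/* re-check under the lock: the task may have been migrated */
		if (likely(rq == task_rq(p)))
			return rq;
		raw_spin_unlock(&rq->lock);
	}
}

static void
task_rq_unlock(struct rq *rq, struct task_struct *p, unsigned long *flags)
	__releases(rq->lock)
	__releases(p->pi_lock)
{
	raw_spin_unlock(&rq->lock);
	raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
}

This is why the tail of wake_up_new_task() can switch from task_rq_lock()
to __task_rq_lock(): the wake-up path already holds p->pi_lock from the
raw_spin_lock_irqsave() at the top, so acquiring it again inside
task_rq_lock() would self-deadlock.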