Commit 34bce199 authored by Mike Galbraith, committed by Ben Hutchings

sched: fix __sched_setscheduler() vs load balancing race

__sched_setscheduler() may release rq->lock in pull_rt_task() as a task is
being changed rt -> fair class.  Load balancing may sneak in and move the
task behind __sched_setscheduler()'s back, which then explodes in
switched_to_fair() when the passed but no longer valid rq is used.  Tell
can_migrate_task() to say no if ->pi_lock is held.
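
For context on where the lock is dropped: when the balancer needs two
runqueue locks, lock ordering can force it to release the rq->lock it
already holds before retaking both. A sketch along the lines of the
3.2-era _double_lock_balance() (paraphrased, not verbatim; the exact
code varies by kernel version and config):

static int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
{
	int ret = 0;

	if (unlikely(!raw_spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			/*
			 * Honor the address-order locking rule: drop our
			 * lock first. This is the window in which load
			 * balancing can move the task behind our back.
			 */
			raw_spin_unlock(&this_rq->lock);
			raw_spin_lock(&busiest->lock);
			raw_spin_lock_nested(&this_rq->lock,
					     SINGLE_DEPTH_NESTING);
			ret = 1;	/* rq->lock was dropped; revalidate */
		} else
			raw_spin_lock_nested(&busiest->lock,
					     SINGLE_DEPTH_NESTING);
	}
	return ret;
}

pull_rt_task() reaches this path while __sched_setscheduler() still holds
p->pi_lock, which is why teaching can_migrate_task() to check pi_lock
closes the window.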

@stable: Kernels that predate SCHED_DEADLINE can use this simple (and tested)
check in lieu of a backport of the full 18-patch mainline treatment.
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
[bwh: Backported to 3.2:
 - Adjust numbering in the comment
 - Adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Willy Tarreau <w@1wt.eu>
parent feae3ca2
@@ -2791,6 +2791,7 @@ int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu,
 	 * 1) running (obviously), or
 	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
 	 * 3) are cache-hot on their current CPU.
+	 * 4) p->pi_lock is held.
 	 */
 	if (!cpumask_test_cpu(this_cpu, tsk_cpus_allowed(p))) {
 		schedstat_inc(p, se.statistics.nr_failed_migrations_affine);
@@ -2803,6 +2804,14 @@ int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu,
 		return 0;
 	}
 
+	/*
+	 * rt -> fair class change may be in progress. If we sneak in should
+	 * double_lock_balance() release rq->lock, and move the task, we will
+	 * cause switched_to_fair() to meet a passed but no longer valid rq.
+	 */
+	if (raw_spin_is_locked(&p->pi_lock))
+		return 0;
+
 	/*
 	 * Aggressive migration if:
 	 * 1) task is cache cold, or
...
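
The same bail-out pattern can be shown as a small self-contained userspace
analogy (hypothetical code: toy_task and toy_can_migrate are invented for
illustration, and since pthreads offers no raw_spin_is_locked()-style peek,
trylock-and-release stands in for it):

#include <pthread.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel structures (illustration only). */
struct toy_task {
	pthread_spinlock_t pi_lock;	/* stands in for p->pi_lock */
	int cpu;			/* stands in for the task's runqueue */
};

/*
 * Mirror of the patch's check: refuse to migrate a task whose pi_lock
 * is held, because the holder may be mid class-change and still using
 * a pointer to the task's old runqueue.
 */
static bool toy_can_migrate(struct toy_task *p)
{
	if (pthread_spin_trylock(&p->pi_lock) != 0)
		return false;	/* class change in flight: say no */
	pthread_spin_unlock(&p->pi_lock);
	return true;
}

int main(void)
{
	struct toy_task p;

	pthread_spin_init(&p.pi_lock, PTHREAD_PROCESS_PRIVATE);

	/* Nobody holds pi_lock: migration is allowed. */
	bool ok = toy_can_migrate(&p);

	/* A "class change" holds pi_lock: migration must be refused. */
	pthread_spin_lock(&p.pi_lock);
	bool blocked = !toy_can_migrate(&p);
	pthread_spin_unlock(&p.pi_lock);

	return (ok && blocked) ? 0 : 1;
}

In the kernel the real check is cheaper still: raw_spin_is_locked() only
reads the lock word, and a false positive merely skips one migration
attempt, which a later balance pass will retry.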