Commit 65a4433a authored by Jeffrey Hugo, committed by Ingo Molnar

sched/fair: Fix load_balance() affinity redo path

If load_balance() fails to migrate any tasks because all tasks were
affined, load_balance() removes the source CPU from consideration and
attempts to redo and balance among the new subset of CPUs.
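
For reference, a condensed sketch of the pre-fix redo path (heavily abridged
from load_balance() in kernel/sched/fair.c; locking, statistics, and the
actual migration work are omitted):

	/* pre-fix: every active CPU starts out as a candidate */
	cpumask_copy(cpus, cpu_active_mask);
redo:
	/* ... find the busiest CPU and try to detach tasks from it ... */

	/* All tasks on this runqueue were pinned by CPU affinity */
	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		cpumask_clear_cpu(cpu_of(busiest), cpus);
		/* pre-fix check: retry as long as *any* CPU remains */
		if (!cpumask_empty(cpus)) {
			env.loop = 0;
			env.loop_break = sched_nr_migrate_break;
			goto redo;
		}
	}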

There is a bug in this code path where the algorithm considers all active
CPUs in the system (minus the source that was just masked out).  This is
not valid for two reasons: some active CPUs may not be in the current
scheduling domain and one of the active CPUs is dst_cpu. These CPUs should
not be considered, as we cannot pull load from them.
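
As an illustration (a hypothetical topology, not taken from the commit):
consider a sched domain spanning CPUs {0,1} on a four-CPU system, with
dst_cpu == 0 and all tasks on source CPU 1 pinned. The old emptiness check
passes even though no remaining CPU is a valid pull source:

	cpumask_copy(cpus, cpu_active_mask);	/* cpus = {0,1,2,3} */
	/* all tasks on the busiest CPU (1) are affined, so remove it */
	cpumask_clear_cpu(1, cpus);		/* cpus = {0,2,3} */
	if (!cpumask_empty(cpus))		/* true, yet CPU 0 is dst_cpu and */
		goto redo;			/* CPUs 2-3 are outside the domain */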

Instead of failing out of load_balance(), we may end up redoing the search
with no valid CPUs and incorrectly concluding the domain is balanced.
Additionally, if the group_imbalance flag was just set, it may also be
incorrectly unset, thus the flag will not be seen by other CPUs in future
load_balance() runs as that algorithm intends.

Fix the check by removing CPUs not in the current domain and the dst_cpu
from consideration, thus limiting the evaluation to valid remaining CPUs
from which load might be migrated (distilled just before the full diff below).

Co-authored-by: Austin Christ <austinwc@codeaurora.org>
Co-authored-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Austin Christ <austinwc@codeaurora.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Timur Tabi <timur@codeaurora.org>
Link: http://lkml.kernel.org/r/1496863138-11322-2-git-send-email-jhugo@codeaurora.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 2a42eb95
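
Distilled to its essentials (the full hunks follow), the fix narrows the
initial candidate mask to the CPUs of the current sched domain and redoes
the search only while a candidate outside the destination group remains:

-	cpumask_copy(cpus, cpu_active_mask);
+	cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);

-	if (!cpumask_empty(cpus)) {
+	if (!cpumask_subset(cpus, env.dst_grpmask)) {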
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6646,10 +6646,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		 * our sched_group. We may want to revisit it if we couldn't
 		 * meet load balance goals by pulling other tasks on src_cpu.
 		 *
-		 * Also avoid computing new_dst_cpu if we have already computed
-		 * one in current iteration.
+		 * Avoid computing new_dst_cpu for NEWLY_IDLE or if we have
+		 * already computed one in current iteration.
 		 */
-		if (!env->dst_grpmask || (env->flags & LBF_DST_PINNED))
+		if (env->idle == CPU_NEWLY_IDLE || (env->flags & LBF_DST_PINNED))
 			return 0;
 
 		/* Prevent to re-select dst_cpu via env's cpus */
@@ -8022,14 +8022,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		.tasks		= LIST_HEAD_INIT(env.tasks),
 	};
 
-	/*
-	 * For NEWLY_IDLE load_balancing, we don't need to consider
-	 * other cpus in our group
-	 */
-	if (idle == CPU_NEWLY_IDLE)
-		env.dst_grpmask = NULL;
-
-	cpumask_copy(cpus, cpu_active_mask);
+	cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);
 
 	schedstat_inc(sd->lb_count[idle]);
 
@@ -8151,7 +8144,15 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(env.flags & LBF_ALL_PINNED)) {
 			cpumask_clear_cpu(cpu_of(busiest), cpus);
-			if (!cpumask_empty(cpus)) {
+			/*
+			 * Attempting to continue load balancing at the current
+			 * sched_domain level only makes sense if there are
+			 * active CPUs remaining as possible busiest CPUs to
+			 * pull load from which are not contained within the
+			 * destination group that is receiving any migrated
+			 * load.
+			 */
+			if (!cpumask_subset(cpus, env.dst_grpmask)) {
 				env.loop = 0;
 				env.loop_break = sched_nr_migrate_break;
 				goto redo;
@@ -8447,6 +8448,13 @@ static int active_load_balance_cpu_stop(void *data)
 			.src_cpu	= busiest_rq->cpu,
 			.src_rq		= busiest_rq,
 			.idle		= CPU_IDLE,
+			/*
+			 * can_migrate_task() doesn't need to compute new_dst_cpu
+			 * for active balancing. Since we have CPU_IDLE, but no
+			 * @dst_grpmask we need to make that test go away with lying
+			 * about DST_PINNED.
+			 */
+			.flags		= LBF_DST_PINNED,
 		};
 
 	schedstat_inc(sd->alb_count);