Commit f93e65c1 authored by Peter Zijlstra, committed by Ingo Molnar

sched: Restore __cpu_power to a straight sum of power

cpu_power is supposed to be a representation of the processing
capacity of the cpu, not a value to randomly tweak in order to
affect placement.

Remove the placement hacks.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Acked-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Acked-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
LKML-Reference: <20090901083825.810860576@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 9aa55fbd
@@ -8464,15 +8464,13 @@ static void free_sched_groups(const struct cpumask *cpu_map,
  * there are asymmetries in the topology. If there are asymmetries, group
  * having more cpu_power will pickup more load compared to the group having
  * less cpu_power.
- *
- * cpu_power will be a multiple of SCHED_LOAD_SCALE. This multiple represents
- * the maximum number of tasks a group can handle in the presence of other idle
- * or lightly loaded groups in the same sched domain.
  */
 static void init_sched_groups_power(int cpu, struct sched_domain *sd)
 {
 	struct sched_domain *child;
 	struct sched_group *group;
+	long power;
+	int weight;
 
 	WARN_ON(!sd || !sd->groups);
 
@@ -8483,22 +8481,20 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
 
 	sd->groups->__cpu_power = 0;
 
-	/*
-	 * For perf policy, if the groups in child domain share resources
-	 * (for example cores sharing some portions of the cache hierarchy
-	 * or SMT), then set this domain groups cpu_power such that each group
-	 * can handle only one task, when there are other idle groups in the
-	 * same sched domain.
-	 */
-	if (!child || (!(sd->flags & SD_POWERSAVINGS_BALANCE) &&
-		       (child->flags &
-			(SD_SHARE_CPUPOWER | SD_SHARE_PKG_RESOURCES)))) {
-		sg_inc_cpu_power(sd->groups, SCHED_LOAD_SCALE);
+	if (!child) {
+		power = SCHED_LOAD_SCALE;
+		weight = cpumask_weight(sched_domain_span(sd));
+		/*
+		 * SMT siblings share the power of a single core.
+		 */
+		if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1)
+			power /= weight;
+		sg_inc_cpu_power(sd->groups, power);
 		return;
 	}
 
 	/*
-	 * add cpu_power of each child group to this groups cpu_power
+	 * Add cpu_power of each child group to this groups cpu_power.
 	 */
 	group = child->groups;
 	do {
...
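For illustration, here is a minimal userspace sketch of the arithmetic the
new code performs; it is plain C, not kernel code, and the helper name
smt_group_power() plus the 2-thread/4-core example topology are invented
for this sketch. A leaf SMT domain splits SCHED_LOAD_SCALE evenly across
its siblings, and every higher domain's group power is simply the sum of
its children's power.

/*
 * Illustration only -- a standalone model of the new
 * init_sched_groups_power() arithmetic, not kernel code.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL		/* 1024 == one full CPU of capacity */

/* Leaf (SMT) domain: siblings share the power of a single core. */
static unsigned long smt_group_power(unsigned int weight)
{
	unsigned long power = SCHED_LOAD_SCALE;

	if (weight > 1)			/* mirrors the kernel's power /= weight */
		power /= weight;
	return power;
}

int main(void)
{
	unsigned long thread = smt_group_power(2);	/* 512 each */

	/*
	 * Higher domains are now a straight sum of their children's
	 * power -- no SD_POWERSAVINGS_BALANCE placement special case.
	 */
	unsigned long core = 2 * thread;	/* 1024: one full core */
	unsigned long package = 4 * core;	/* 4096: four cores */

	printf("thread=%lu core=%lu package=%lu\n", thread, core, package);
	return 0;
}

With the straight sum, __cpu_power is again comparable across groups: a
group's power directly reflects how many "full CPUs" of capacity it spans,
instead of encoding task-placement policy.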