Commit ca28aa53 authored by Rik van Riel, committed by Ingo Molnar

sched/numa: Fix task or group comparison

This patch separately considers task and group affinities when
searching for swap candidates during NUMA placement. If tasks
are part of the same group, or of no group at all, their task
weights are compared.

Some hysteresis is added to prevent tasks within one group from
getting bounced between NUMA nodes due to tiny differences.

If tasks are part of different groups, the code compares group
weights, in order to favor placing tasks from the same group together.

The patch also changes the group weight multiplier to be the
same as the task weight multiplier, since the two are no longer
added up like before.
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-55-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 887c290e
@@ -962,7 +962,7 @@ static inline unsigned long group_weight(struct task_struct *p, int nid)
 	if (!total_faults)
 		return 0;
 
-	return 1200 * group_faults(p, nid) / total_faults;
+	return 1000 * group_faults(p, nid) / total_faults;
 }
 
 static unsigned long weighted_cpuload(const int cpu);
@@ -1068,16 +1068,34 @@ static void task_numa_compare(struct task_numa_env *env,
 
 	/*
 	 * If dst and source tasks are in the same NUMA group, or not
-	 * in any group then look only at task weights otherwise give
-	 * priority to the group weights.
+	 * in any group then look only at task weights.
 	 */
-	if (!cur->numa_group || !env->p->numa_group ||
-	    cur->numa_group == env->p->numa_group) {
+	if (cur->numa_group == env->p->numa_group) {
 		imp = taskimp + task_weight(cur, env->src_nid) -
 		      task_weight(cur, env->dst_nid);
+		/*
+		 * Add some hysteresis to prevent swapping the
+		 * tasks within a group over tiny differences.
+		 */
+		if (cur->numa_group)
+			imp -= imp/16;
 	} else {
-		imp = groupimp + group_weight(cur, env->src_nid) -
-		      group_weight(cur, env->dst_nid);
+		/*
+		 * Compare the group weights. If a task is all by
+		 * itself (not part of a group), use the task weight
+		 * instead.
+		 */
+		if (env->p->numa_group)
+			imp = groupimp;
+		else
+			imp = taskimp;
+
+		if (cur->numa_group)
+			imp += group_weight(cur, env->src_nid) -
+			       group_weight(cur, env->dst_nid);
+		else
+			imp += task_weight(cur, env->src_nid) -
+			       task_weight(cur, env->dst_nid);
 	}
 }
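As a standalone sketch (outside the kernel), the new comparison logic can be illustrated with hypothetical stand-in weights. The function name `compare_improvement` and its flattened parameter list are inventions for illustration only; in the kernel the weights come from `task_weight()`/`group_weight()` on `env->src_nid`/`env->dst_nid`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's struct numa_group. */
struct numa_group { int id; };

/*
 * Sketch of the new candidate comparison: tasks in the same group
 * (or in no group) are compared by task weight, with ~6% hysteresis
 * (imp -= imp/16) when a group exists; tasks in different groups are
 * compared by group weight, falling back to task weight for a task
 * that has no group.
 */
static long compare_improvement(const struct numa_group *cur_grp,
				const struct numa_group *p_grp,
				long taskimp, long groupimp,
				long task_w_src, long task_w_dst,
				long group_w_src, long group_w_dst)
{
	long imp;

	if (cur_grp == p_grp) {
		imp = taskimp + task_w_src - task_w_dst;
		/* hysteresis: don't bounce group members over tiny deltas */
		if (cur_grp)
			imp -= imp / 16;
	} else {
		imp = p_grp ? groupimp : taskimp;
		if (cur_grp)
			imp += group_w_src - group_w_dst;
		else
			imp += task_w_src - task_w_dst;
	}
	return imp;
}
```

For example, two tasks in the same group with a raw improvement of 120 end up at 113 after the `imp/16` hysteresis, so a swap only happens when the gain clearly exceeds the noise.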