Commit 2dd73a4f authored by Peter Williams, committed by Linus Torvalds

[PATCH] sched: implement smpnice

Problem:

The introduction of separate run queues per CPU has brought with it "nice"
enforcement problems that are best described by a simple example.

For the sake of argument, suppose that on a single CPU machine with a
nice==19 hard spinner and a nice==0 hard spinner running, the nice==0
task gets 95% of the CPU and the nice==19 task gets 5% of the CPU.  Now
suppose that there is a system with 2 CPUs and 2 nice==19 hard spinners and
2 nice==0 hard spinners running.  The user of this system would be entitled
to expect that the nice==0 tasks each get 95% of a CPU and the nice==19
tasks only get 5% each.  However, whether this expectation is met is pretty
much down to luck as there are four equally likely distributions of the
tasks to the CPUs that the load balancing code will consider to be balanced
with loads of 2.0 for each CPU.  Two of these distributions involve one
nice==0 and one nice==19 task per CPU and in these circumstances the user's
expectations will be met.  The other two distributions involve both
nice==0 tasks being on one CPU and both nice==19 tasks being on the other
CPU; in those cases each task will get 50% of a CPU and the user's
expectations will not be met.

Solution:

The solution to this problem that is implemented in the attached patch is
to use weighted loads when determining if the system is balanced and, when
an imbalance is detected, to move an amount of weighted load between run
queues (as opposed to a number of tasks) to restore the balance.  Once
again, the easiest way to explain why both of these measures are necessary
is to use a simple example.  Suppose (in a slight variation of the
above example) that we have a two CPU system with 4 nice==0 and 4 nice==19
hard spinning tasks running and that the 4 nice==0 tasks are on one CPU and
the 4 nice==19 tasks are on the other CPU.  The weighted loads for the two
CPUs would be 4.0 and 0.2 respectively and the load balancing code would
move 2 tasks, resulting in one CPU with a load of 2.0 and the other with a
load of 2.2.  If this were considered to be a big enough imbalance to
justify moving a task and that task was moved using the current
move_tasks() then it would move the highest priority task that it found and
this would result in one CPU with a load of 3.0 and the other with a load
of 1.2 which would result in the movement of a task in the opposite
direction and so on -- infinite loop.  If, on the other hand, an amount of
load to be moved is calculated from the imbalance (in this case 0.1) and
move_tasks() skips tasks until it finds ones whose contributions to the
weighted load are less than this amount, it would move two of the nice==19
tasks, resulting in a system with 2 nice==0 and 2 nice==19 tasks on each CPU with
loads of 2.1 for each CPU.
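
For readers who want to check the arithmetic, the example above can be
reproduced with the following stand-alone sketch (ordinary user-space C
written for this description, not code from the patch), using the normalised
weights 1.0 per nice==0 spinner and 0.05 per nice==19 spinner:

#include <stdio.h>

/* normalised per-task weights used in the example text */
#define W_NICE0   1.00
#define W_NICE19  0.05

/* weighted load of a CPU running n0 nice==0 and n19 nice==19 spinners */
static double load(int n0, int n19)
{
        return n0 * W_NICE0 + n19 * W_NICE19;
}

int main(void)
{
        /* all nice==0 tasks on CPU0, all nice==19 tasks on CPU1 */
        printf("start:            cpu0=%.2f cpu1=%.2f\n", load(4, 0), load(0, 4));

        /* moving 2 tasks by count alone */
        printf("move 2 tasks:     cpu0=%.2f cpu1=%.2f\n", load(2, 0), load(2, 4));

        /* the old move_tasks() then bounces a nice==0 task back: 3.0 vs 1.2 */
        printf("bounce a nice==0: cpu0=%.2f cpu1=%.2f\n", load(3, 0), load(1, 4));

        /* moving 0.1 of weighted load instead moves two nice==19 tasks */
        printf("move 0.1 of load: cpu0=%.2f cpu1=%.2f\n", load(2, 2), load(2, 2));
        return 0;
}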

One of the advantages of this mechanism is that on a system where all tasks
have nice==0 the load balancing calculations would be mathematically
identical to the current load balancing code.

Notes:

struct task_struct:

has a new field, load_weight, which (in a trade-off of space for speed)
stores the contribution that this task makes to a CPU's weighted load when
it is runnable.
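
As a rough illustration of the mapping from "nice" to load_weight, the sketch
below reproduces the patch's PRIO_TO_LOAD_WEIGHT() arithmetic in user space.
The constants assume HZ=1000 (DEF_TIMESLICE=100, MIN_TIMESLICE=5), which is an
assumption of this illustration rather than something stated in the patch;
with them a nice==0 task weighs SCHED_LOAD_SCALE (128), a nice==19 task
weighs 6 and a nice==-20 task weighs 1024:

#include <stdio.h>

/* scheduler constants of that era, assuming HZ=1000 (timeslices in ms) */
#define MAX_USER_PRIO           40
#define MAX_RT_PRIO             100
#define MAX_PRIO                (MAX_RT_PRIO + MAX_USER_PRIO)
#define NICE_TO_PRIO(nice)      (MAX_RT_PRIO + (nice) + 20)
#define MIN_TIMESLICE           5
#define DEF_TIMESLICE           100
#define SCHED_LOAD_SCALE        128UL

#define MAX(a, b)               ((a) > (b) ? (a) : (b))
#define SCALE_PRIO(x, prio) \
        MAX((x) * (MAX_PRIO - (prio)) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)

/* same shape as the patch's static_prio_timeslice() */
static unsigned int static_prio_timeslice(int static_prio)
{
        if (static_prio < NICE_TO_PRIO(0))
                return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
        else
                return SCALE_PRIO(DEF_TIMESLICE, static_prio);
}

/* load_weight: time slice scaled so that nice==0 maps to SCHED_LOAD_SCALE */
#define LOAD_WEIGHT(lp)         (((lp) * SCHED_LOAD_SCALE) / DEF_TIMESLICE)

int main(void)
{
        static const int nices[] = { -20, -10, 0, 10, 19 };
        unsigned int i;

        for (i = 0; i < sizeof(nices) / sizeof(nices[0]); i++) {
                unsigned int slice = static_prio_timeslice(NICE_TO_PRIO(nices[i]));

                printf("nice %3d -> timeslice %3ums -> load_weight %4lu\n",
                       nices[i], slice, LOAD_WEIGHT(slice));
        }
        return 0;
}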

struct runqueue:

has a new field raw_weighted_load which is the sum of the load_weight
values for the currently runnable tasks on this run queue.  This field
always needs to be updated when nr_running is updated so two new inline
functions inc_nr_running() and dec_nr_running() have been created to make
sure that this happens.  This also offers a convenient way to optimize away
this part of the smpnice mechanism when CONFIG_SMP is not defined.

int try_to_wake_up():

in this function the value SCHED_LOAD_SCALE is used to represent the load
contribution of a single task in various calculations in the code that
decides which CPU to put the waking task on.  While this would be valid
on a system where the nice values for the runnable tasks were distributed
evenly around zero it will lead to anomalous load balancing if the
distribution is skewed in either direction.  To overcome this problem
SCHED_LOAD_SCALE has been replaced by the load_weight for the relevant task
or by the average load_weight per task for the queue in question (as
appropriate).
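
The "average load_weight per task for the queue in question" is simply
raw_weighted_load divided by nr_running, with SCHED_LOAD_SCALE as the fallback
for an empty queue (the patch adds a cpu_avg_load_per_task() helper for this,
visible in the diff below).  A minimal user-space sketch of that calculation,
written for this description:

#include <stdio.h>

#define SCHED_LOAD_SCALE 128UL

/* minimal stand-in for the fields the patch adds to struct runqueue */
struct rq_sketch {
        unsigned long nr_running;
        unsigned long raw_weighted_load;
};

/* same idea as the new cpu_avg_load_per_task(): average weight of a runnable
 * task on the queue, falling back to SCHED_LOAD_SCALE for an empty queue */
static unsigned long avg_load_per_task(const struct rq_sketch *rq)
{
        return rq->nr_running ? rq->raw_weighted_load / rq->nr_running
                              : SCHED_LOAD_SCALE;
}

int main(void)
{
        /* two nice==0 tasks (128 each) plus one nice==19 task (6) */
        struct rq_sketch busy  = { .nr_running = 3, .raw_weighted_load = 262 };
        struct rq_sketch empty = { .nr_running = 0, .raw_weighted_load = 0 };

        printf("busy queue: %lu per task, empty queue: %lu per task\n",
               avg_load_per_task(&busy), avg_load_per_task(&empty));
        return 0;
}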

int move_tasks():

The modifications to this function were complicated by the fact that
active_load_balance() uses it to move exactly one task without checking
whether an imbalance actually exists.  This precluded the simple
overloading of max_nr_move with max_load_move and necessitated the addition
of the latter as an extra argument to the function.  The internal
implementation is then modified to move up to max_nr_move tasks and
max_load_move of weighted load.  This slightly complicates the code where
move_tasks() is called; if active_load_balance() is ever changed to not
use move_tasks(), the implementation of move_tasks() should be simplified
accordingly.
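
The selection rule itself is easy to state: walk the candidate tasks, skip any
task whose load_weight exceeds the weighted load still to be moved, and stop
once either the task-count limit or the weighted-load limit is exhausted.  The
following simplified stand-alone sketch (user-space C written for this
description; it ignores the priority-array walk, pinning and cache-hotness
checks of the real move_tasks()) shows that rule on a plain array of weights:

#include <stdio.h>

/*
 * Pick tasks to migrate from an array of candidate load_weights, honouring
 * both max_nr_move and max_load_move; tasks heavier than the remaining
 * weighted load to move are skipped rather than moved.
 */
static int pick_tasks(const unsigned long *weight, int nr_candidates,
                      unsigned long max_nr_move, unsigned long max_load_move)
{
        long rem_load_move = (long)max_load_move;
        unsigned long pulled = 0;
        int i;

        for (i = 0; i < nr_candidates; i++) {
                if (pulled >= max_nr_move || rem_load_move <= 0)
                        break;
                if ((long)weight[i] > rem_load_move)
                        continue;               /* too heavy: skip this task */
                printf("move task %d (weight %lu)\n", i, weight[i]);
                rem_load_move -= weight[i];
                pulled++;
        }
        return (int)pulled;
}

int main(void)
{
        /* two nice==0 tasks (128) and two nice==19 tasks (6) on the busy CPU */
        const unsigned long w[] = { 128, 128, 6, 6 };

        /* ask for at most 3 tasks but only ~13 units of weighted load:
         * both nice==0 tasks are skipped and the two nice==19 tasks move */
        int moved = pick_tasks(w, 4, 3, 13);

        printf("moved %d task(s)\n", moved);
        return 0;
}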

struct sched_group *find_busiest_group():

Similar to try_to_wake_up(), there are places in this function where
SCHED_LOAD_SCALE is used to represent the load contribution of a single
task and the same issues are created.  A similar solution is adopted except
that it is now the average per task contribution to a group's load (as
opposed to a run queue) that is required.  As this value is not directly
available from the group it is calculated on the fly as the queues in the
groups are visited when determining the busiest group.

A key change to this function is that it no longer needs to scale down
*imbalance on exit, as move_tasks() now uses the load in its scaled form.

void set_user_nice():

has been modified to update the task's load_weight field when its nice
value changes and also to ensure that its run queue's raw_weighted_load
field is updated if the task was runnable.

From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>

With smpnice, sched groups with the highest priority tasks can mask the
imbalance between the other sched groups within the same domain.  This patch
fixes some of the scenarios listed below by not considering the sched groups
which are lightly loaded.

a) on a simple 4-way MP system, if we have one high priority and 4 normal
   priority tasks, with smpnice we would like to see the high priority task
   scheduled on one cpu, two other cpus getting one normal task each and the
   fourth cpu getting the remaining two normal tasks.  but with the current
   smpnice the extra normal priority task keeps jumping from one cpu to
   another cpu having a normal priority task.  This is because of the
   busiest_has_loaded_cpus, nr_loaded_cpus logic..  We are not including the
   cpu with the high priority task in max_load calculations but including
   that in total and avg_load calculations..  leading to max_load < avg_load,
   so load balance between the cpus running normal priority tasks (2 vs 1)
   will always show an imbalance and one normal priority task (the extra
   normal priority task) will keep moving from one cpu to another cpu
   having a normal priority task..

b) 4-way system with HT (8 logical processors).  Package-P0's T0 has a
   high priority task and T1 is idle.  On Package-P1 both T0 and T1 have one
   normal priority task each..  P2 and P3 are idle.  With this patch, one of
   the normal priority tasks on P1 will be moved to P2 or P3..

c) With the current weighted smp nice calculations, it doesn't always make
   sense to look at the highest weighted runqueue in the busy group..
   Consider a load balance scenario on a DP with HT system, with Package-0
   containing one high priority and one low priority task, Package-1
   containing one low priority task (with the other thread being idle)..
   Package-1 thinks that it needs to take the low priority thread from
   Package-0.  And find_busiest_queue() returns the cpu thread with the
   highest priority task..  And ultimately (with the help of active load
   balance) we move the high priority task to Package-1.  And the same
   continues with Package-0 now, moving the high priority task from
   Package-1 to Package-0..  Even without the presence of active load
   balance, load balance will fail to balance the above scenario..  Fix
   find_busiest_queue to use "imbalance" when it is lightly loaded (a
   minimal sketch of this check appears after this list).
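
Below is a minimal sketch of that find_busiest_queue() check (user-space C
written for this description; the real function walks the group's cpumask and
operates on runqueues).  A CPU whose single runnable task already outweighs
the requested imbalance is skipped, so the balancer no longer keeps picking
the CPU that holds only the lone high priority task:

#include <stdio.h>

struct rq_sketch {
        unsigned long nr_running;
        unsigned long raw_weighted_load;
};

/* return the index of the busiest candidate, or -1 if none qualifies;
 * a runqueue whose single task is already heavier than the requested
 * imbalance is skipped so its lone task is not bounced around */
static int busiest_queue(const struct rq_sketch *rq, int nr_cpus,
                         unsigned long imbalance)
{
        unsigned long max_load = 0;
        int i, busiest = -1;

        for (i = 0; i < nr_cpus; i++) {
                if (rq[i].nr_running == 1 && rq[i].raw_weighted_load > imbalance)
                        continue;
                if (rq[i].raw_weighted_load > max_load) {
                        max_load = rq[i].raw_weighted_load;
                        busiest = i;
                }
        }
        return busiest;
}

int main(void)
{
        struct rq_sketch rq[3] = {
                { .nr_running = 1, .raw_weighted_load = 1024 }, /* one nice==-20 task */
                { .nr_running = 2, .raw_weighted_load = 134 },  /* nice==0 + nice==19 */
                { .nr_running = 0, .raw_weighted_load = 0 },    /* idle */
        };

        /* cpu0 is skipped despite its higher load: its only task outweighs
         * the imbalance, so cpu1 is chosen instead */
        printf("busiest: cpu%d\n", busiest_queue(rq, 3, 64));
        return 0;
}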

[kernel@kolivas.org: sched: store weighted load on up]
[kernel@kolivas.org: sched: add discrete weighted cpu load function]
[suresh.b.siddha@intel.com: sched: remove dead code]
Signed-off-by: Peter Williams <pwil3058@bigpond.com.au>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Con Kolivas <kernel@kolivas.org>
Cc: John Hawkes <hawkes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent efc30814
@@ -123,6 +123,7 @@ extern unsigned long nr_running(void);
 extern unsigned long nr_uninterruptible(void);
 extern unsigned long nr_active(void);
 extern unsigned long nr_iowait(void);
+extern unsigned long weighted_cpuload(const int cpu);

 /*
@@ -558,9 +559,9 @@ enum idle_type
 /*
  * sched-domains (multiprocessor balancing) declarations:
  */
-#ifdef CONFIG_SMP
 #define SCHED_LOAD_SCALE        128UL   /* increase resolution of load */

+#ifdef CONFIG_SMP
 #define SD_LOAD_BALANCE         1       /* Do load balancing on this domain. */
 #define SD_BALANCE_NEWIDLE      2       /* Balance when about to become idle */
 #define SD_BALANCE_EXEC         4       /* Balance on exec */
@@ -713,9 +714,12 @@ struct task_struct {
         int lock_depth;         /* BKL lock depth */

-#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+#ifdef CONFIG_SMP
+#ifdef __ARCH_WANT_UNLOCKED_CTXSW
         int oncpu;
 #endif
+#endif
+        int load_weight;        /* for niceness load balancing purposes */
         int prio, static_prio;
         struct list_head run_list;
         prio_array_t *array;
...
@@ -168,15 +168,21 @@
  */

 #define SCALE_PRIO(x, prio) \
-        max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)
+        max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)

-static unsigned int task_timeslice(task_t *p)
+static unsigned int static_prio_timeslice(int static_prio)
 {
-        if (p->static_prio < NICE_TO_PRIO(0))
-                return SCALE_PRIO(DEF_TIMESLICE*4, p->static_prio);
+        if (static_prio < NICE_TO_PRIO(0))
+                return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
         else
-                return SCALE_PRIO(DEF_TIMESLICE, p->static_prio);
+                return SCALE_PRIO(DEF_TIMESLICE, static_prio);
 }

+static inline unsigned int task_timeslice(task_t *p)
+{
+        return static_prio_timeslice(p->static_prio);
+}
+
 #define task_hot(p, now, sd) ((long long) ((now) - (p)->last_ran) \
                         < (long long) (sd)->cache_hot_time)
@@ -207,6 +213,7 @@ struct runqueue {
          * remote CPUs use both these fields when doing load calculation.
          */
         unsigned long nr_running;
+        unsigned long raw_weighted_load;
 #ifdef CONFIG_SMP
         unsigned long cpu_load[3];
 #endif
@@ -661,6 +668,68 @@ static int effective_prio(task_t *p)
         return prio;
 }

+/*
+ * To aid in avoiding the subversion of "niceness" due to uneven distribution
+ * of tasks with abnormal "nice" values across CPUs the contribution that
+ * each task makes to its run queue's load is weighted according to its
+ * scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
+ * scaled version of the new time slice allocation that they receive on time
+ * slice expiry etc.
+ */
+
+/*
+ * Assume: static_prio_timeslice(NICE_TO_PRIO(0)) == DEF_TIMESLICE
+ * If static_prio_timeslice() is ever changed to break this assumption then
+ * this code will need modification
+ */
+#define TIME_SLICE_NICE_ZERO DEF_TIMESLICE
+#define LOAD_WEIGHT(lp) \
+        (((lp) * SCHED_LOAD_SCALE) / TIME_SLICE_NICE_ZERO)
+#define PRIO_TO_LOAD_WEIGHT(prio) \
+        LOAD_WEIGHT(static_prio_timeslice(prio))
+#define RTPRIO_TO_LOAD_WEIGHT(rp) \
+        (PRIO_TO_LOAD_WEIGHT(MAX_RT_PRIO) + LOAD_WEIGHT(rp))
+
+static void set_load_weight(task_t *p)
+{
+        if (rt_task(p)) {
+#ifdef CONFIG_SMP
+                if (p == task_rq(p)->migration_thread)
+                        /*
+                         * The migration thread does the actual balancing.
+                         * Giving its load any weight will skew balancing
+                         * adversely.
+                         */
+                        p->load_weight = 0;
+                else
+#endif
+                        p->load_weight = RTPRIO_TO_LOAD_WEIGHT(p->rt_priority);
+        } else
+                p->load_weight = PRIO_TO_LOAD_WEIGHT(p->static_prio);
+}
+
+static inline void inc_raw_weighted_load(runqueue_t *rq, const task_t *p)
+{
+        rq->raw_weighted_load += p->load_weight;
+}
+
+static inline void dec_raw_weighted_load(runqueue_t *rq, const task_t *p)
+{
+        rq->raw_weighted_load -= p->load_weight;
+}
+
+static inline void inc_nr_running(task_t *p, runqueue_t *rq)
+{
+        rq->nr_running++;
+        inc_raw_weighted_load(rq, p);
+}
+
+static inline void dec_nr_running(task_t *p, runqueue_t *rq)
+{
+        rq->nr_running--;
+        dec_raw_weighted_load(rq, p);
+}
+
 /*
  * __activate_task - move a task to the runqueue.
  */
@@ -671,7 +740,7 @@ static void __activate_task(task_t *p, runqueue_t *rq)
         if (batch_task(p))
                 target = rq->expired;
         enqueue_task(p, target);
-        rq->nr_running++;
+        inc_nr_running(p, rq);
 }

 /*
@@ -680,7 +749,7 @@ static void __activate_task(task_t *p, runqueue_t *rq)
 static inline void __activate_idle_task(task_t *p, runqueue_t *rq)
 {
         enqueue_task_head(p, rq->active);
-        rq->nr_running++;
+        inc_nr_running(p, rq);
 }

 static int recalc_task_prio(task_t *p, unsigned long long now)
@@ -804,7 +873,7 @@ static void activate_task(task_t *p, runqueue_t *rq, int local)
  */
 static void deactivate_task(struct task_struct *p, runqueue_t *rq)
 {
-        rq->nr_running--;
+        dec_nr_running(p, rq);
         dequeue_task(p, p->array);
         p->array = NULL;
 }
@@ -859,6 +928,12 @@ inline int task_curr(const task_t *p)
         return cpu_curr(task_cpu(p)) == p;
 }

+/* Used instead of source_load when we know the type == 0 */
+unsigned long weighted_cpuload(const int cpu)
+{
+        return cpu_rq(cpu)->raw_weighted_load;
+}
+
 #ifdef CONFIG_SMP
 typedef struct {
         struct list_head list;
@@ -948,7 +1023,8 @@ void kick_process(task_t *p)
 }

 /*
- * Return a low guess at the load of a migration-source cpu.
+ * Return a low guess at the load of a migration-source cpu weighted
+ * according to the scheduling class and "nice" value.
  *
  * We want to under-estimate the load of migration sources, to
  * balance conservatively.
@@ -956,24 +1032,36 @@ void kick_process(task_t *p)
 static inline unsigned long source_load(int cpu, int type)
 {
         runqueue_t *rq = cpu_rq(cpu);
-        unsigned long load_now = rq->nr_running * SCHED_LOAD_SCALE;
+
         if (type == 0)
-                return load_now;
+                return rq->raw_weighted_load;

-        return min(rq->cpu_load[type-1], load_now);
+        return min(rq->cpu_load[type-1], rq->raw_weighted_load);
 }

 /*
- * Return a high guess at the load of a migration-target cpu
+ * Return a high guess at the load of a migration-target cpu weighted
+ * according to the scheduling class and "nice" value.
  */
 static inline unsigned long target_load(int cpu, int type)
 {
         runqueue_t *rq = cpu_rq(cpu);
-        unsigned long load_now = rq->nr_running * SCHED_LOAD_SCALE;
+
         if (type == 0)
-                return load_now;
+                return rq->raw_weighted_load;

-        return max(rq->cpu_load[type-1], load_now);
+        return max(rq->cpu_load[type-1], rq->raw_weighted_load);
+}
+
+/*
+ * Return the average load per task on the cpu's run queue
+ */
+static inline unsigned long cpu_avg_load_per_task(int cpu)
+{
+        runqueue_t *rq = cpu_rq(cpu);
+        unsigned long n = rq->nr_running;
+
+        return n ? rq->raw_weighted_load / n : SCHED_LOAD_SCALE;
 }

 /*
@@ -1046,7 +1134,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
         cpus_and(tmp, group->cpumask, p->cpus_allowed);

         for_each_cpu_mask(i, tmp) {
-                load = source_load(i, 0);
+                load = weighted_cpuload(i);

                 if (load < min_load || (load == min_load && i == this_cpu)) {
                         min_load = load;
@@ -1226,17 +1314,19 @@ static int try_to_wake_up(task_t *p, unsigned int state, int sync)
                 if (this_sd->flags & SD_WAKE_AFFINE) {
                         unsigned long tl = this_load;
+                        unsigned long tl_per_task = cpu_avg_load_per_task(this_cpu);
+
                         /*
                          * If sync wakeup then subtract the (maximum possible)
                          * effect of the currently running task from the load
                          * of the current CPU:
                          */
                         if (sync)
-                                tl -= SCHED_LOAD_SCALE;
+                                tl -= current->load_weight;

                         if ((tl <= load &&
-                                tl + target_load(cpu, idx) <= SCHED_LOAD_SCALE) ||
-                                100*(tl + SCHED_LOAD_SCALE) <= imbalance*load) {
+                                tl + target_load(cpu, idx) <= tl_per_task) ||
+                                100*(tl + p->load_weight) <= imbalance*load) {
                                 /*
                                  * This domain has SD_WAKE_AFFINE and
                                  * p is cache cold in this domain, and
@@ -1435,7 +1525,7 @@ void fastcall wake_up_new_task(task_t *p, unsigned long clone_flags)
                         list_add_tail(&p->run_list, &current->run_list);
                         p->array = current->array;
                         p->array->nr_active++;
-                        rq->nr_running++;
+                        inc_nr_running(p, rq);
                 }
                 set_need_resched();
         } else
@@ -1802,9 +1892,9 @@ void pull_task(runqueue_t *src_rq, prio_array_t *src_array, task_t *p,
                runqueue_t *this_rq, prio_array_t *this_array, int this_cpu)
 {
         dequeue_task(p, src_array);
-        src_rq->nr_running--;
+        dec_nr_running(p, src_rq);
         set_task_cpu(p, this_cpu);
-        this_rq->nr_running++;
+        inc_nr_running(p, this_rq);
         enqueue_task(p, this_array);
         p->timestamp = (p->timestamp - src_rq->timestamp_last_tick)
                                 + this_rq->timestamp_last_tick;
@@ -1852,24 +1942,27 @@ int can_migrate_task(task_t *p, runqueue_t *rq, int this_cpu,
 }

 /*
- * move_tasks tries to move up to max_nr_move tasks from busiest to this_rq,
- * as part of a balancing operation within "domain". Returns the number of
- * tasks moved.
+ * move_tasks tries to move up to max_nr_move tasks and max_load_move weighted
+ * load from busiest to this_rq, as part of a balancing operation within
+ * "domain". Returns the number of tasks moved.
  *
  * Called with both runqueues locked.
  */
 static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
-                      unsigned long max_nr_move, struct sched_domain *sd,
-                      enum idle_type idle, int *all_pinned)
+                      unsigned long max_nr_move, unsigned long max_load_move,
+                      struct sched_domain *sd, enum idle_type idle,
+                      int *all_pinned)
 {
         prio_array_t *array, *dst_array;
         struct list_head *head, *curr;
         int idx, pulled = 0, pinned = 0;
+        long rem_load_move;
         task_t *tmp;

-        if (max_nr_move == 0)
+        if (max_nr_move == 0 || max_load_move == 0)
                 goto out;

+        rem_load_move = max_load_move;
         pinned = 1;

         /*
@@ -1910,7 +2003,8 @@ static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
                 curr = curr->prev;

-        if (!can_migrate_task(tmp, busiest, this_cpu, sd, idle, &pinned)) {
+        if (tmp->load_weight > rem_load_move ||
+            !can_migrate_task(tmp, busiest, this_cpu, sd, idle, &pinned)) {
                 if (curr != head)
                         goto skip_queue;
                 idx++;
@@ -1924,9 +2018,13 @@ static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
         pull_task(busiest, array, tmp, this_rq, dst_array, this_cpu);
         pulled++;
+        rem_load_move -= tmp->load_weight;

-        /* We only want to steal up to the prescribed number of tasks. */
-        if (pulled < max_nr_move) {
+        /*
+         * We only want to steal up to the prescribed number of tasks
+         * and the prescribed amount of weighted load.
+         */
+        if (pulled < max_nr_move && rem_load_move > 0) {
                 if (curr != head)
                         goto skip_queue;
                 idx++;
@@ -1947,7 +2045,7 @@ static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,

 /*
  * find_busiest_group finds and returns the busiest CPU group within the
- * domain. It calculates and returns the number of tasks which should be
+ * domain. It calculates and returns the amount of weighted load which should be
  * moved to restore balance via the imbalance parameter.
  */
 static struct sched_group *
@@ -1957,9 +2055,13 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
         struct sched_group *busiest = NULL, *this = NULL, *group = sd->groups;
         unsigned long max_load, avg_load, total_load, this_load, total_pwr;
         unsigned long max_pull;
+        unsigned long busiest_load_per_task, busiest_nr_running;
+        unsigned long this_load_per_task, this_nr_running;
         int load_idx;

         max_load = this_load = total_load = total_pwr = 0;
+        busiest_load_per_task = busiest_nr_running = 0;
+        this_load_per_task = this_nr_running = 0;

         if (idle == NOT_IDLE)
                 load_idx = sd->busy_idx;
         else if (idle == NEWLY_IDLE)
@@ -1971,13 +2073,16 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                 unsigned long load;
                 int local_group;
                 int i;
+                unsigned long sum_nr_running, sum_weighted_load;

                 local_group = cpu_isset(this_cpu, group->cpumask);

                 /* Tally up the load of all CPUs in the group */
-                avg_load = 0;
+                sum_weighted_load = sum_nr_running = avg_load = 0;

                 for_each_cpu_mask(i, group->cpumask) {
+                        runqueue_t *rq = cpu_rq(i);
+
                         if (*sd_idle && !idle_cpu(i))
                                 *sd_idle = 0;
@@ -1988,6 +2093,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                                 load = source_load(i, load_idx);

                         avg_load += load;
+                        sum_nr_running += rq->nr_running;
+                        sum_weighted_load += rq->raw_weighted_load;
                 }

                 total_load += avg_load;
@@ -1999,14 +2106,19 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                 if (local_group) {
                         this_load = avg_load;
                         this = group;
-                } else if (avg_load > max_load) {
+                        this_nr_running = sum_nr_running;
+                        this_load_per_task = sum_weighted_load;
+                } else if (avg_load > max_load &&
+                           sum_nr_running > group->cpu_power / SCHED_LOAD_SCALE) {
                         max_load = avg_load;
                         busiest = group;
+                        busiest_nr_running = sum_nr_running;
+                        busiest_load_per_task = sum_weighted_load;
                 }
                 group = group->next;
         } while (group != sd->groups);

-        if (!busiest || this_load >= max_load || max_load <= SCHED_LOAD_SCALE)
+        if (!busiest || this_load >= max_load || busiest_nr_running == 0)
                 goto out_balanced;

         avg_load = (SCHED_LOAD_SCALE * total_load) / total_pwr;
@@ -2015,6 +2127,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                         100*max_load <= sd->imbalance_pct*this_load)
                 goto out_balanced;

+        busiest_load_per_task /= busiest_nr_running;
         /*
          * We're trying to get all the cpus to the average_load, so we don't
          * want to push ourselves above the average load, nor do we wish to
@@ -2026,21 +2139,50 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
          * by pulling tasks to us. Be careful of negative numbers as they'll
          * appear as very large values with unsigned longs.
          */
+        if (max_load <= busiest_load_per_task)
+                goto out_balanced;
+
+        /*
+         * In the presence of smp nice balancing, certain scenarios can have
+         * max load less than avg load(as we skip the groups at or below
+         * its cpu_power, while calculating max_load..)
+         */
+        if (max_load < avg_load) {
+                *imbalance = 0;
+                goto small_imbalance;
+        }

         /* Don't want to pull so many tasks that a group would go idle */
-        max_pull = min(max_load - avg_load, max_load - SCHED_LOAD_SCALE);
+        max_pull = min(max_load - avg_load, max_load - busiest_load_per_task);

         /* How much load to actually move to equalise the imbalance */
         *imbalance = min(max_pull * busiest->cpu_power,
                                 (avg_load - this_load) * this->cpu_power)
                         / SCHED_LOAD_SCALE;

-        if (*imbalance < SCHED_LOAD_SCALE) {
-                unsigned long pwr_now = 0, pwr_move = 0;
+        /*
+         * if *imbalance is less than the average load per runnable task
+         * there is no gaurantee that any tasks will be moved so we'll have
+         * a think about bumping its value to force at least one task to be
+         * moved
+         */
+        if (*imbalance < busiest_load_per_task) {
+                unsigned long pwr_now, pwr_move;
                 unsigned long tmp;
+                unsigned int imbn;
+
+small_imbalance:
+                pwr_move = pwr_now = 0;
+                imbn = 2;
+                if (this_nr_running) {
+                        this_load_per_task /= this_nr_running;
+                        if (busiest_load_per_task > this_load_per_task)
+                                imbn = 1;
+                } else
+                        this_load_per_task = SCHED_LOAD_SCALE;

-                if (max_load - this_load >= SCHED_LOAD_SCALE*2) {
-                        *imbalance = 1;
+                if (max_load - this_load >= busiest_load_per_task * imbn) {
+                        *imbalance = busiest_load_per_task;
                         return busiest;
                 }
@@ -2050,35 +2192,34 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
                  * moving them.
                  */

-                pwr_now += busiest->cpu_power*min(SCHED_LOAD_SCALE, max_load);
-                pwr_now += this->cpu_power*min(SCHED_LOAD_SCALE, this_load);
+                pwr_now += busiest->cpu_power *
+                        min(busiest_load_per_task, max_load);
+                pwr_now += this->cpu_power *
+                        min(this_load_per_task, this_load);
                 pwr_now /= SCHED_LOAD_SCALE;

                 /* Amount of load we'd subtract */
-                tmp = SCHED_LOAD_SCALE*SCHED_LOAD_SCALE/busiest->cpu_power;
+                tmp = busiest_load_per_task*SCHED_LOAD_SCALE/busiest->cpu_power;
                 if (max_load > tmp)
-                        pwr_move += busiest->cpu_power*min(SCHED_LOAD_SCALE,
-                                                        max_load - tmp);
+                        pwr_move += busiest->cpu_power *
+                                min(busiest_load_per_task, max_load - tmp);

                 /* Amount of load we'd add */
                 if (max_load*busiest->cpu_power <
-                                SCHED_LOAD_SCALE*SCHED_LOAD_SCALE)
+                                busiest_load_per_task*SCHED_LOAD_SCALE)
                         tmp = max_load*busiest->cpu_power/this->cpu_power;
                 else
-                        tmp = SCHED_LOAD_SCALE*SCHED_LOAD_SCALE/this->cpu_power;
-                pwr_move += this->cpu_power*min(SCHED_LOAD_SCALE, this_load + tmp);
+                        tmp = busiest_load_per_task*SCHED_LOAD_SCALE/this->cpu_power;
+                pwr_move += this->cpu_power*min(this_load_per_task, this_load + tmp);
                 pwr_move /= SCHED_LOAD_SCALE;

                 /* Move if we gain throughput */
                 if (pwr_move <= pwr_now)
                         goto out_balanced;

-                *imbalance = 1;
-                return busiest;
+                *imbalance = busiest_load_per_task;
         }

-        /* Get rid of the scaling factor, rounding down as we divide */
-        *imbalance = *imbalance / SCHED_LOAD_SCALE;
         return busiest;

 out_balanced:
@@ -2091,18 +2232,21 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
  * find_busiest_queue - find the busiest runqueue among the cpus in group.
  */
 static runqueue_t *find_busiest_queue(struct sched_group *group,
-        enum idle_type idle)
+        enum idle_type idle, unsigned long imbalance)
 {
-        unsigned long load, max_load = 0;
-        runqueue_t *busiest = NULL;
+        unsigned long max_load = 0;
+        runqueue_t *busiest = NULL, *rqi;
         int i;

         for_each_cpu_mask(i, group->cpumask) {
-                load = source_load(i, 0);
+                rqi = cpu_rq(i);
+
+                if (rqi->nr_running == 1 && rqi->raw_weighted_load > imbalance)
+                        continue;

-                if (load > max_load) {
-                        max_load = load;
-                        busiest = cpu_rq(i);
+                if (rqi->raw_weighted_load > max_load) {
+                        max_load = rqi->raw_weighted_load;
+                        busiest = rqi;
                 }
         }
@@ -2115,6 +2259,7 @@ static runqueue_t *find_busiest_queue(struct sched_group *group,
  */
 #define MAX_PINNED_INTERVAL     512

+#define minus_1_or_zero(n) ((n) > 0 ? (n) - 1 : 0)
 /*
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
  * tasks if there is an imbalance.
@@ -2142,7 +2287,7 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
                 goto out_balanced;
         }

-        busiest = find_busiest_queue(group, idle);
+        busiest = find_busiest_queue(group, idle, imbalance);
         if (!busiest) {
                 schedstat_inc(sd, lb_nobusyq[idle]);
                 goto out_balanced;
@@ -2162,6 +2307,7 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
                  */
                 double_rq_lock(this_rq, busiest);
                 nr_moved = move_tasks(this_rq, this_cpu, busiest,
+                                        minus_1_or_zero(busiest->nr_running),
                                         imbalance, sd, idle, &all_pinned);
                 double_rq_unlock(this_rq, busiest);
@@ -2265,7 +2411,7 @@ static int load_balance_newidle(int this_cpu, runqueue_t *this_rq,
                 goto out_balanced;
         }

-        busiest = find_busiest_queue(group, NEWLY_IDLE);
+        busiest = find_busiest_queue(group, NEWLY_IDLE, imbalance);
         if (!busiest) {
                 schedstat_inc(sd, lb_nobusyq[NEWLY_IDLE]);
                 goto out_balanced;
@@ -2280,6 +2426,7 @@ static int load_balance_newidle(int this_cpu, runqueue_t *this_rq,
                 /* Attempt to move tasks */
                 double_lock_balance(this_rq, busiest);
                 nr_moved = move_tasks(this_rq, this_cpu, busiest,
+                                        minus_1_or_zero(busiest->nr_running),
                                         imbalance, sd, NEWLY_IDLE, NULL);
                 spin_unlock(&busiest->lock);
         }
@@ -2361,7 +2508,8 @@ static void active_load_balance(runqueue_t *busiest_rq, int busiest_cpu)
         schedstat_inc(sd, alb_cnt);

-        if (move_tasks(target_rq, target_cpu, busiest_rq, 1, sd, SCHED_IDLE, NULL))
+        if (move_tasks(target_rq, target_cpu, busiest_rq, 1,
+                        RTPRIO_TO_LOAD_WEIGHT(100), sd, SCHED_IDLE, NULL))
                 schedstat_inc(sd, alb_pushed);
         else
                 schedstat_inc(sd, alb_failed);
@@ -2389,7 +2537,7 @@ static void rebalance_tick(int this_cpu, runqueue_t *this_rq,
         struct sched_domain *sd;
         int i;

-        this_load = this_rq->nr_running * SCHED_LOAD_SCALE;
+        this_load = this_rq->raw_weighted_load;
         /* Update our load */
         for (i = 0; i < 3; i++) {
                 unsigned long new_load = this_load;
@@ -3441,17 +3589,21 @@ void set_user_nice(task_t *p, long nice)
                 goto out_unlock;
         }
         array = p->array;
-        if (array)
+        if (array) {
                 dequeue_task(p, array);
+                dec_raw_weighted_load(rq, p);
+        }

         old_prio = p->prio;
         new_prio = NICE_TO_PRIO(nice);
         delta = new_prio - old_prio;
         p->static_prio = NICE_TO_PRIO(nice);
+        set_load_weight(p);
         p->prio += delta;

         if (array) {
                 enqueue_task(p, array);
+                inc_raw_weighted_load(rq, p);
                 /*
                  * If the task increased its priority or is running and
                  * lowered its priority, then reschedule its CPU:
@@ -3587,6 +3739,7 @@ static void __setscheduler(struct task_struct *p, int policy, int prio)
                 if (policy == SCHED_BATCH)
                         p->sleep_avg = 0;
         }
+        set_load_weight(p);
 }

 /**
@@ -6106,6 +6259,7 @@ void __init sched_init(void)
                 }
         }

+        set_load_weight(&init_task);
         /*
         * The boot idle thread does lazy MMU switching as well:
         */
...