Commit e4dbd222 authored by Nick Piggin, committed by Linus Torvalds

[PATCH] sched: trivial sched changes

The following patches properly integrate sched domains and cpu hotplug (using
Nathan's code), by having sched domains *always* represent only online CPUs,
and having a hotplug notifier keep them up to date.

Then tackle Jesse's domain setup problem: the disjoint top-level domains were
completely broken.  The group-list builder thingy simply can't handle distinct
sets of groups containing the same CPUs.  The code is ugly and specific enough
that I'm re-introducing the arch overridable domains.

I doubt we'll get a proliferation of implementations, because the current
generic code can do the job for everyone but SGI.  I'd rather revisit it down
the track if we need to than try to shoehorn this into the generic code.

Nathan and I have tested the hotplug work. He's happy with it.

I've tested the disjoint domain stuff (copied it to i386 for the test), and it
does the right thing on the NUMAQ.  I've asked Jesse to test it as well, but
it should be fine - maybe just help me out and run a test compile on ia64 ;)

This really gets sched domains into much better shape.  Without further ado,
the patches.



This patch:

Make a definition static and slightly sanitize ifdefs.
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent f871dbd2
@@ -310,9 +310,7 @@ struct sched_group {
 	/*
 	 * CPU power of this group, SCHED_LOAD_SCALE being max power for a
-	 * single CPU. This should be read only (except for setup). Although
-	 * it will need to be written to at cpu hot(un)plug time, perhaps the
-	 * cpucontrol semaphore will provide enough exclusion?
+	 * single CPU. This is read only (except for setup, hotplug CPU).
 	 */
 	unsigned long cpu_power;
 };
@@ -4248,7 +4246,8 @@ static void cpu_attach_domain(struct sched_domain *sd, int cpu)
  * in arch code. That defines the number of nearby nodes in a node's top
  * level scheduling domain.
  */
-#if defined(CONFIG_NUMA) && defined(SD_NODES_PER_DOMAIN)
+#ifdef CONFIG_NUMA
+#ifdef SD_NODES_PER_DOMAIN
 /**
  * find_next_best_node - find the next node to include in a sched_domain
  * @node: node whose sched_domain we're building
@@ -4295,7 +4294,7 @@ static int __init find_next_best_node(int node, unsigned long *used_nodes)
  * should be one that prevents unnecessary balancing, but also spreads tasks
  * out optimally.
  */
-cpumask_t __init sched_domain_node_span(int node)
+static cpumask_t __init sched_domain_node_span(int node)
 {
 	int i;
 	cpumask_t span;
@@ -4314,12 +4313,13 @@ cpumask_t __init sched_domain_node_span(int node)
 	return span;
 }
-#else /* CONFIG_NUMA && SD_NODES_PER_DOMAIN */
-cpumask_t __init sched_domain_node_span(int node)
+#else /* SD_NODES_PER_DOMAIN */
+static cpumask_t __init sched_domain_node_span(int node)
 {
 	return cpu_possible_map;
 }
-#endif /* CONFIG_NUMA && SD_NODES_PER_DOMAIN */
+#endif /* SD_NODES_PER_DOMAIN */
+#endif /* CONFIG_NUMA */

 #ifdef CONFIG_SCHED_SMT
 static DEFINE_PER_CPU(struct sched_domain, cpu_domains);