- 28 Jan, 2009 1 commit
Ingo Molnar authored
- spread out the namespace on a per driver basis
- get rid of macro wrappers
- small cleanups

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 Jan, 2009 1 commit
Brian Gerst authored
Impact: cleanup

Signed-off-by: Brian Gerst <brgerst@gmail.com>
-
- 18 Jan, 2009 1 commit
Brian Gerst authored
tj: moved cpu_number definition out of CONFIG_HAVE_SETUP_PER_CPU_AREA for voyager.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 16 Jan, 2009 2 commits
Ingo Molnar authored
It is an optimization and a cleanup, and adds the following new generic percpu methods:

  percpu_read()
  percpu_write()
  percpu_add()
  percpu_sub()
  percpu_and()
  percpu_or()
  percpu_xor()

and implements support for them on x86. (Other architectures will fall back to a default implementation.)

The advantage is that, for example, to read a local percpu variable, instead of this sequence:

  return __get_cpu_var(var);

  ffffffff8102ca2b: 48 8b 14 fd 80 09 74    mov    -0x7e8bf680(,%rdi,8),%rdx
  ffffffff8102ca32: 81
  ffffffff8102ca33: 48 c7 c0 d8 59 00 00    mov    $0x59d8,%rax
  ffffffff8102ca3a: 48 8b 04 10             mov    (%rax,%rdx,1),%rax

we can get a single instruction by using the optimized variants:

  return percpu_read(var);

  ffffffff8102ca3f: 65 48 8b 05 91 8f fd    mov    %gs:0x7efd8f91(%rip),%rax

I also cleaned up the x86-specific APIs and made the x86 code use these new generic percpu primitives.

tj:
  * fixed generic percpu_sub() definition as Roel Kluin pointed out
  * added percpu_and() for completeness's sake
  * made generic percpu ops atomic against preemption

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Tejun Heo <tj@kernel.org>
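
As a rough illustration, a preemption-safe generic fallback for two of these ops could look like the sketch below. The macro names follow the commit; the bodies are illustrative, built only on the long-standing get_cpu_var()/put_cpu_var() helpers, and are not copied from the upstream implementation.

  #include <linux/percpu.h>

  /* read the local cpu's copy without being migrated mid-access */
  #define generic_percpu_read(var)                                \
  ({                                                              \
          typeof(get_cpu_var(var)) __val = get_cpu_var(var);      \
          put_cpu_var(var);       /* re-enables preemption */     \
          __val;                                                  \
  })

  /* read-modify-write the local copy with preemption disabled */
  #define generic_percpu_add(var, val)                            \
  do {                                                            \
          get_cpu_var(var) += (val);                              \
          put_cpu_var(var);                                       \
  } while (0)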
-
Tejun Heo authored
[ Based on original patch from Christoph Lameter and Mike Travis. ]

Currently pdas and percpu areas are allocated separately. %gs points to the local pda, and the percpu area can be reached using pda->data_offset. This patch folds the pda into the percpu area.

Due to a strange gcc requirement, the pda needs to be at the beginning of the percpu area so that pda->stack_canary is at %gs:40. To achieve this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is added and used to reserve a pda-sized chunk at the start of the percpu area.

After this change, for the boot cpu, %gs first points to the pda in the data.init area and later, during setup_per_cpu_areas(), gets updated to point to the actual pda. This means that setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing the pda area for other cpus, as cpu0 has already modified it by the time control reaches setup_per_cpu_areas().

This patch also removes the now unnecessary get_local_pda() and its call sites.

A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into per cpu area" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
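
To make the %gs:40 constraint concrete, here is an illustrative layout sketch (not the kernel's actual definition): gcc's stack protector reads the canary at a fixed %gs:40, so whatever %gs points at must keep its stack_canary field at byte offset 40, which is why the pda has to sit at the very start of the percpu area.

  #include <stddef.h>

  /* illustrative field set; offsets assume an LP64 target */
  struct pda_sketch {
          void          *pcurrent;      /* offset  0 */
          unsigned long  data_offset;   /* offset  8 */
          unsigned long  kernelstack;   /* offset 16 */
          unsigned long  oldrsp;        /* offset 24 */
          int            irqcount;      /* offset 32 */
          unsigned int   cpunumber;     /* offset 36 */
          unsigned long  stack_canary;  /* offset 40: must line up with %gs:40 */
  };

  _Static_assert(offsetof(struct pda_sketch, stack_canary) == 40,
                 "stack canary must sit at %gs:40");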
-
- 10 Jan, 2009 4 commits
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 07 Jan, 2009 7 commits
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup, moving NON-SMP stuff from smp.h

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup, moving NON-SMP stuff from smp.h

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jaswinder Singh Rajput authored
Impact: cleanup

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 Jan, 2009 1 commit
Mike Travis authored
Impact: use new cpumask API to reduce memory and stack usage

Allocate the following local cpumasks based on the number of cpus that are present. References will use the new cpumask API. (Currently only modified for x86_64; x86_32 continues to use the *_map variants.)

  cpu_callin_mask
  cpu_callout_mask
  cpu_initialized_mask
  cpu_sibling_setup_mask

Provide the following accessor functions:

  struct cpumask *cpu_sibling_mask(int cpu)
  struct cpumask *cpu_core_mask(int cpu)

In addition, when setting or clearing the cpu online, possible or present maps, use the accessor functions.

Signed-off-by: Mike Travis <travis@sgi.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
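
A hedged sketch of the accessor shape described here: per-cpu cpumask storage reached through a small helper instead of a global array. The per-cpu variable name below is an assumption for illustration only.

  #include <linux/percpu.h>
  #include <linux/cpumask.h>

  /* backing storage, one cpumask per cpu (name is illustrative) */
  static DEFINE_PER_CPU(cpumask_t, sibling_map_example);

  /* callers get a struct cpumask pointer instead of copying a cpumask_t */
  static inline struct cpumask *cpu_sibling_mask(int cpu)
  {
          return &per_cpu(sibling_map_example, cpu);
  }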
-
- 17 Dec, 2008 2 commits
Mike Travis authored
This patch simply changes cpumask_t to struct cpumask and similar trivial modernizations.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
-
Mike Travis authored
Impact: cleanup, change parameter passing

* Change genapic interfaces to accept cpumask_t pointers where possible.

* Modify external callers to use cpumask_t pointers in function calls.

* Create a new send_IPI_mask_allbutself, which is the same as the send_IPI_mask functions but removes smp_processor_id() from the list. This removes another common need for a temporary cpumask_t variable.

* Functions that used a temp cpumask_t variable for:

	cpumask_t allbutme = cpu_online_map;
	cpu_clear(smp_processor_id(), allbutme);
	if (!cpus_empty(allbutme))
		...

  become:

	if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu)))
		...

* Other minor code optimizations (like using cpus_clear instead of CPU_MASK_NONE, etc.)

Applies to linux-2.6.tip/master.

Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Ingo Molnar <mingo@elte.hu>
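
The allbutself idea can be sketched as follows - iterate the mask and skip the caller rather than building a temporary cpumask on the stack. __send_one_ipi() is a stub for this sketch, not a real kernel function.

  #include <linux/cpumask.h>
  #include <linux/smp.h>

  /* stub for the sketch; the real thing would program the local APIC */
  static void __send_one_ipi(int cpu, int vector) { }

  static void send_IPI_mask_allbutself(const struct cpumask *mask, int vector)
  {
          unsigned int cpu, this_cpu = smp_processor_id();

          /* no temporary cpumask_t, unlike the old pattern shown above */
          for_each_cpu(cpu, mask)
                  if (cpu != this_cpu)
                          __send_one_ipi(cpu, vector);
  }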
-
- 30 Oct, 2008 1 commit
James Bottomley authored
Impact: build fix on x86/Voyager

Given commits like this:

  | Author: Suresh Siddha <suresh.b.siddha@intel.com>
  | Date:   Tue Jul 29 10:29:19 2008 -0700
  |
  | x86, xsave: enable xsave/xrstor on cpus with xsave support

which deliberately expose boot cpu dependence to pieces of the system, I think it's time to explicitly have a variable for it to prevent this continual misassumption that the boot CPU is zero.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Oct, 2008 2 commits
H. Peter Anvin authored
Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

 a. the double underscore is ugly and pointless.
 b. no leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
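
For reference, the resulting guard style looks like this (the file name is illustrative):

  /* include/asm-x86/example.h */
  #ifndef _ASM_X86_EXAMPLE_H
  #define _ASM_X86_EXAMPLE_H

  /* declarations ... */

  #endif /* _ASM_X86_EXAMPLE_H */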
-
Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 13 Oct, 2008 1 commit
Chuck Ebbert authored
Commit 4a701737 ("x86: move prefill_possible_map calling early, fix") is the wrong fix: prefill_possible_map() needs to be available even when CONFIG_HOTPLUG_CPU is not set. A follow-on patch will do that.

Fix this correctly by making prefill_possible_map() available even when CONFIG_HOTPLUG_CPU is not set. The function is needed so that the number of possible CPUs can be determined.

Tested on a uniprocessor machine with CPU hotplug disabled. From the boot log:

  Before: NR_CPUS: 512, nr_cpu_ids: 512, nr_node_ids 1
  After:  NR_CPUS: 512, nr_cpu_ids: 1, nr_node_ids 1

Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
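
A minimal sketch of the arrangement argued for here, assuming the usual Kconfig guards: the declaration stays visible whether or not CPU hotplug is enabled, with an empty stub only when SMP itself is off.

  #ifdef CONFIG_SMP
  extern void prefill_possible_map(void);
  #else
  static inline void prefill_possible_map(void) { }
  #endif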
-
- 05 Sep, 2008 1 commit
Alex Nixon authored
Move reset_lazy_tlbstate into tlb_32.c, and define noop versions of play_dead() in process_{32,64}.c when !CONFIG_SMP.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
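
A hedged sketch of what such a noop version can look like: a !CONFIG_SMP build should never reach the CPU-offline path, so the stub can simply trap.

  #include <linux/bug.h>

  #ifndef CONFIG_SMP
  static inline void play_dead(void)
  {
          BUG();  /* a UP kernel has no cpu to offline */
  }
  #endif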
-
- 25 Aug, 2008 4 commits
Alex Nixon authored
It allows paravirt implementations of cpu_disable to share the cpu_disable_common code, without having to take on board APIC writes, which may not be appropriate.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Alex Nixon authored
Add the new play_dead into smpboot.c, as it fits more cleanly in there alongside other CONFIG_HOTPLUG functions. Separate out the common code into its own function.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Alex Nixon authored
The removal of the CPU from the various maps was redundant, as it already happened in cpu_disable. After cleaning this up, cpu_uninit only resets the tlb state, so rename it and create a noop version for the X86_64 case (so the two play_deads can be unified later).

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Alex Nixon authored
Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 22 Jul, 2008 2 commits
Vegard Nossum authored
This patch is the result of an automatic script that consolidates the format of all the headers in include/asm-x86/.

The format:

1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores. So we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
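
A worked example of those three rules, using the commit's own mm_types.h vs mm/types.h case (the guard names are derived here for illustration):

  /* include/asm-x86/mm_types.h */
  #ifndef ASM_X86__MM_TYPES_H
  #define ASM_X86__MM_TYPES_H
  /* declarations */
  #endif

  /* a hypothetical include/asm-x86/mm/types.h stays distinguishable,
     because the path separator becomes a double underscore: */
  #ifndef ASM_X86__MM__TYPES_H
  #define ASM_X86__MM__TYPES_H
  /* declarations */
  #endif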
-
Jaswinder Singh authored
Moved DECLARE_PER_CPU(int, cpu_number) from CONFIG_X86_32_SMP to CONFIG_X86_32, because cpu_number is required for both. Also include asm/smp.h in process_32.c.

Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
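
Roughly the shape of the declaration after the move, shown as a sketch (the surrounding header context is assumed):

  #include <linux/percpu.h>

  #ifdef CONFIG_X86_32
  /* needed by both SMP and UP 32-bit builds */
  DECLARE_PER_CPU(int, cpu_number);
  #endif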
-
- 16 Jul, 2008 1 commit
Jeremy Fitzhardinge authored
This allows Xen's xen_cpu_up() to allocate a pda for the new CPU.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 Jul, 2008 3 commits
Yinghai Lu authored
Also remove GET_APIC_ID when read_apic_id is used.

Needs to be applied after "[PATCH] x86: mach_apicdef.h need to include before smp.h".

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Suresh Siddha authored
Introduce basic apic operations which handle the apic programming. This will be used later to introduce other, x2apic-specific operations.

For performance-critical accesses like IPIs, EOI etc., we use the native operations, as they are already referenced by different indirections like genapic, irq_chip etc. 64-bit paravirt ops can also define their apic operations accordingly.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: akpm@linux-foundation.org
Cc: arjan@linux.intel.com
Cc: andi@firstfloor.org
Cc: ebiederm@xmission.com
Cc: jbarnes@virtuousgeek.org
Cc: steiner@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
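
A hedged sketch of the kind of operations table this introduces: function pointers that a native (MMIO) apic and a later x2apic (MSR) implementation can each fill in. The field set is an assumption based on the description, not the kernel's actual struct.

  #include <linux/types.h>

  struct apic_ops_sketch {
          u32  (*read)(u32 reg);
          void (*write)(u32 reg, u32 val);
          u64  (*icr_read)(void);
          void (*icr_write)(u32 low, u32 high);
          void (*wait_icr_idle)(void);
  };

  /* the native and x2apic code each provide an instance; common code
     calls through the table instead of hard-coding MMIO accesses */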
-
Suresh Siddha authored
Move read_apic_id() to the genapic routines.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: akpm@linux-foundation.org
Cc: arjan@linux.intel.com
Cc: andi@firstfloor.org
Cc: ebiederm@xmission.com
Cc: jbarnes@virtuousgeek.org
Cc: steiner@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Jul, 2008 4 commits
Ingo Molnar authored
fix:

  arch/x86/kernel/built-in.o: In function `setup_arch':
  : undefined reference to `prefill_possible_map'

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Yinghai Lu authored
Call it right after we are done with MADT/mptable handling, instead of doing that in setup_per_cpu_areas() later on... this way for_possible_cpu() can be used early.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Glauber Costa authored
Take it out of smpboot.c and move it to process_32.c, closer to its only user.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Mike Travis authored
* Introduce a new PER_CPU macro called "EARLY_PER_CPU". This is used by some per_cpu variables that are initialized and accessed before there are per_cpu areas allocated. ["Early" in respect to per_cpu variables is "earlier than the per_cpu areas have been setup".]

  This patchset adds these new macros:

	DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
	EXPORT_EARLY_PER_CPU_SYMBOL(_name)
	DECLARE_EARLY_PER_CPU(_type, _name)

	early_per_cpu_ptr(_name)
	early_per_cpu_map(_name, _idx)
	early_per_cpu(_name, _cpu)

  The DEFINE macro defines the per_cpu variable as well as the early map and pointer. It also initializes the per_cpu variable and map elements to "_initvalue". The early_* macros provide access to the initial map (usually setup during system init) and the early pointer. This pointer is initialized to point to the early map but is then NULL'ed when the actual per_cpu areas are setup. After that the per_cpu variable is the correct access to the variable.

  The early_per_cpu() macro is not very efficient but does show how to access the variable if you have a function that can be called both "early" and "late". It tests the early ptr to be NULL, and if not then it's still valid. Otherwise, the per_cpu variable is used instead:

	#define early_per_cpu(_name, _cpu)			\
		(early_per_cpu_ptr(_name) ?			\
			early_per_cpu_ptr(_name)[_cpu] :	\
			per_cpu(_name, _cpu))

  A better method is to actually check the pointer manually. In the case below, numa_set_node can be called both "early" and "late":

	void __cpuinit numa_set_node(int cpu, int node)
	{
		int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);

		if (cpu_to_node_map)
			cpu_to_node_map[cpu] = node;
		else
			per_cpu(x86_cpu_to_node_map, cpu) = node;
	}

* Add a flag "arch_provides_topology_pointers" that indicates pointers to topology cpumask_t maps are available. Otherwise, use the function returning the cpumask_t value. This is useful if cpumask_t set size is very large to avoid copying data on to/off of the stack.

* The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while the non-debug case has been optimized a bit.

* Remove an unreferenced compiler warning in drivers/base/topology.c

* Clean up #ifdef in setup.c

For inclusion into sched-devel/latest tree.

Based on:
	git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
    +   sched-devel/latest  .../mingo/linux-2.6-sched-devel.git

Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 26 Jun, 2008 1 commit
Jens Axboe authored
This converts x86, x86-64, and xen to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single().

Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
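
A hedged usage sketch of the converted interface; do_remote_work() and kick_cpu() are illustrative, and the exact smp_call_function_single() prototype is treated as an assumption here rather than quoted from this tree.

  #include <linux/smp.h>

  static void do_remote_work(void *info)
  {
          /* runs on the target CPU, in IPI (hardirq) context */
  }

  static void kick_cpu(int cpu)
  {
          /* last argument: 1 = wait for do_remote_work() to finish remotely */
          smp_call_function_single(cpu, do_remote_work, NULL, 1);
  }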
-
- 25 May, 2008 1 commit
Yinghai Lu authored
Move early_res related code from e820_64.c to e820.c, make the ebda detection be done in head32.c, and remove smp_alloc_memory, because we have a fixed trampoline address now.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>

 arch/x86/kernel/e820.c              |  214 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/e820_64.c           |  196 --------------------------------
 arch/x86/kernel/head32.c            |   76 ++++++++++++
 arch/x86/kernel/setup_32.c          |  109 +++---------------
 arch/x86/kernel/smpboot.c           |   17 --
 arch/x86/kernel/trampoline.c        |    2
 arch/x86/mach-voyager/voyager_smp.c |    9 -
 include/asm-x86/e820.h              |    6 +
 include/asm-x86/e820_64.h           |    9 -
 include/asm-x86/smp.h               |    1
 10 files changed, 320 insertions(+), 319 deletions(-)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-