- 24 Aug, 2004 1 commit
William Lee Irwin III authored
Every arch now bears the burden of sanitizing CLONE_IDLETASK out of the clone_flags passed to do_fork() by userspace. This patch hoists the masking of CLONE_IDLETASK out of the system call entrypoints into do_fork(), and thereby removes some small overheads from do_fork(), as do_fork() may now assume that CLONE_IDLETASK has been cleared.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- 17 Jun, 2004 2 commits
Paul Mackerras authored
This rewrites the PPC64 exception entry/exit routines to make them smaller and faster. In particular we no longer save all of the registers for the common exceptions - system calls, hardware interrupts and decrementer (timer) interrupts - only the volatile registers. The other registers are saved and restored (if used) by the C functions we call. This involved changing the registers we use in early exception processing from r20-r23 to r9-r12, which ended up changing quite a lot of code in head.S. Overall this gives us about a 20% reduction in null syscall time.

Some system calls need all the registers (e.g. fork/clone/vfork and [rt_]sigsuspend). For these the syscall dispatch code calls a stub that saves the nonvolatile registers before calling the real handler.

This also implements the force_successful_syscall_return() thing for ppc64.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Paul Mackerras authored
This implements CONFIG_PREEMPT for ppc64. Aside from the entry.S changes to check the _TIF_NEED_RESCHED bit when returning from an exception, there are various changes to make the ppc64-specific code preempt-safe, mostly adding preempt_enable/disable or get_cpu/put_cpu calls where needed. I have been using this on my desktop G5 for the last week without problems.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- 31 May, 2004 1 commit
Linus Torvalds authored
- 30 May, 2004 1 commit
Linus Torvalds authored
- 29 May, 2004 1 commit
Linus Torvalds authored
- 25 May, 2004 1 commit
Paul Mackerras authored
Even with a 16kB stack, we have been seeing stack overflows on PPC64 under stress. This patch implements separate per-cpu stacks for processing interrupts and softirqs, along the lines of the CONFIG_4KSTACKS stuff on x86.

At the moment the stacks are still 16kB but I hope we can reduce that to 8kB in future. (Gcc is capable of adding instructions to the function prolog to check the stack pointer whenever it moves it downwards, and I want to use that when I try using 8kB stacks so I can be confident that we aren't overflowing the stack.)

Signed-off-by: Paul Mackerras <paulus@samba.org>
- 24 May, 2004 1 commit
Paul Mackerras authored
This improves the stack traces we get on PPC64 by putting a marker in those stack frames that are created as a result of an interrupt or exception. The marker is "regshere" (0x7265677368657265). With this, stack traces show where exceptions have occurred, which can be very useful. This also improves the accuracy of the trace because the relevant return address can be in the link register at the time of the exception rather than on the stack. We now print the PC and exception type for each exception frame, and then the link register if appropriate as the next item in the trace.
- 10 May, 2004 1 commit
Andrew Morton authored
From: Rusty Russell <rusty@rustcorp.com.au>

1) Create an in_sched_functions() function in sched.c and make the archs use it. (Two archs have wchan #if 0'd out: left them alone).
2) Move __sched from linux/init.h to linux/sched.h and add comment.
3) Rename __scheduling_functions_start_here/end_here to __sched_text_start/end.

Thanks to wli and Sam Ravnborg for clue donation.
- 26 Apr, 2004 1 commit
Andrew Morton authored
From: Rusty Russell <rusty@rustcorp.com.au>

Clean up initrd handling.

1) Expose initrd_start and initrd_end to prom.c (replacing its local initrd_start and initrd_len).
2) Don't hand mem (aka klimit) through functions which don't need it.
3) Add more debugging under DEBUG_PROM in case we broke anything.
- 22 Apr, 2004 1 commit
Andrew Morton authored
It no longer has any callers.
- 12 Apr, 2004 1 commit
Andrew Morton authored
From: William Lee Irwin III <wli@holomorphy.com>

This addresses the issue with get_wchan() that the various functions acting as scheduling-related primitives are not, in fact, contiguous in the text segment. It creates an ELF section for scheduling primitives to be placed in, and places currently-detected (i.e. skipped during stack decoding) scheduling primitives and others like io_schedule() and down(), which are currently missed by get_wchan() code, into this section also.

The net effects are more reliability of get_wchan()'s results and the new ability, made use of by this code, to arbitrarily place scheduling primitives in the source code without disturbing get_wchan()'s accuracy.

Suggestions by Arnd Bergmann and Matthew Wilcox regarding reducing the invasiveness of the patch were incorporated during prior rounds of review. I've at least tried to sweep all arches in this patch.
- 18 Mar, 2004 1 commit
Andrew Morton authored
From: Paul Mackerras <paulus@samba.org>

Recently we found a particularly nasty bug in the segment handling in the ppc64 kernel. It would only happen rarely under heavy load, but when it did the machine would lock up with the whole of memory filled with exception stack frames.

The primary cause was that we were losing the translation for the kernel stack from the SLB, but we still had it in the ERAT for a while longer. Now, there is a critical region in various exception exit paths where we have loaded the SRR0 and SRR1 registers from GPRs and we are loading those GPRs and the stack pointer from the exception frame on the kernel stack. If we lose the ERAT entry for the kernel stack in that region, we take an SLB miss on the next access to the kernel stack. Taking the exception overwrites the values we have put into SRR0 and SRR1, which means we lose state. In fact we ended up repeating that last section of the exception exit path, but using the user stack pointer this time. That caused another exception (or if it didn't, we loaded a new value from the user stack and then went around and tried to use that). And it spiralled downwards from there.

The patch below fixes the primary problem by making sure that we really never cast out the SLB entry for the kernel stack. It also improves debuggability in case anything like this happens again by:

- In our exception exit paths, we now check whether the RI bit in the SRR1 value is 0. We already set the RI bit to 0 before starting the critical region, but we never checked it. Now, if we do ever get an exception in one of the critical regions, we will detect it before returning to the critical region, and instead we will print a nasty message and oops.
- In the exception entry code, we now check that the kernel stack pointer value we're about to use isn't a userspace address. If it is, we print a nasty message and oops.

This has been tested on G5 and pSeries (both with and without hypervisor) and compile-tested on iSeries.
- 15 Mar, 2004 1 commit
Andrew Morton authored
From: Anton Blanchard <anton@samba.org> Add kernel version to oops.
- 27 Feb, 2004 1 commit
Anton Blanchard authored
ppc64 tlb flush rework from Paul Mackerras.

Instead of doing a double pass of the pagetables, we batch things up in the pte flush routines and then shoot the batch down in flush_tlb_pending.

Our page aging was broken: we never flushed entries out of the ppc64 hashtable. We now flush in ptep_test_and_clear_young.

A number of other things were fixed up in the process:

- change ppc64_tlb_batch to per cpu data
- remove some LPAR debug code
- be more careful with ioremap_mm inits
- clean up arch/ppc64/mm/init.c, create tlb.c
- 23 Feb, 2004 2 commits
Andrew Morton authored
From: Anton Blanchard <anton@samba.org>

__get_SP used to be a function call, which meant we allocated a stack frame before calling it. This meant the SP it returned was one frame below the current function. Let's call that bogusSP (and the real one SP).

The new dump_stack was being tail-call optimised, so it remained one frame above bogusSP. dump_stack would then store below SP (as the ABI allows us to) and would stomp over the back link that bogusSP pointed to (__get_SP had set the back link up, so it worked sometimes, just not all the time).

Fix this by just making __get_SP an inline that returns the current SP.
Andrew Morton authored
From: Anton Blanchard <anton@samba.org>

The might_sleep infrastructure doesn't like our get_users in the backtrace code; we often end up with might_sleep warnings inside might_sleep warnings. Instead just be careful about pointers before dereferencing them.

Also remove the hack where we only printed the bottom 32 bits of the WCHAN value.
- 13 Feb, 2004 1 commit
Anton Blanchard authored
- Add the thread_info pointer; it's a useful piece of information.
- Do the kallsyms lookup on the link register.
- Remove an extra newline on one call to die().
- 31 Jan, 2004 1 commit
Andrew Morton authored
From: Anton Blanchard <anton@samba.org>

The current SLB handling code has a number of problems:

- We loop trying to find an empty SLB entry before deciding to cast one out. On large working sets this really hurts, since the SLB is always full and we end up looping through all 64 entries unnecessarily.
- During castout we currently invalidate the entry we are replacing. This is to avoid a nasty race where the entry is in the ERAT but not the SLB and another cpu does a tlbie that removes the ERAT at a critical point. If this race is fixed, the SLB invalidate can be removed.
- The SLB prefault code doesn't work properly.

The following patch addresses all the above concerns and adds some more optimisations:

- feature nop out some segment table only code
- slb invalidate the kernel segment on context switch (avoids us having to slb invalidate at each cast out)
- optimise flush on context switch: the lazy tlb stuff avoids it being called when going from userspace to kernel thread, but it gets called when going from kernel thread to userspace. In many cases we are returning to the same userspace task; we now check for this and avoid the flush
- use the optimised POWER4 mtcrf where possible
- 19 Jan, 2004 1 commit
Andrew Morton authored
From: Anton Blanchard <anton@samba.org> VMX (Altivec) support & signal32 rework, from Ben Herrenschmidt
- 07 Oct, 2003 1 commit
Arnaldo Carvalho de Melo authored
- 30 Sep, 2003 1 commit
Matthew Wilcox authored
ELF_CORE_SYNC and dump_smp_unlazy_fpu seem to have been introduced by Ingo around 2.5.43, but as far as I can tell, never used.
- 07 Sep, 2003 1 commit
Anton Blanchard authored
- 02 Sep, 2003 1 commit
Anton Blanchard authored
- 29 Jul, 2003 1 commit
Anton Blanchard authored
- 20 Jun, 2003 1 commit
Andrew Morton authored
From: David Mosberger <davidm@napali.hpl.hp.com>

This is an attempt at sanitizing the interface for stack trace dumping somewhat. It's basically the last thing which prevents 2.5.x from working out-of-the-box for ia64. ia64 apparently cannot reasonably implement the show_stack interface declared in sched.h.

Here is the rationale: modern calling conventions don't maintain a frame pointer and it's not possible to get a reliable stack trace with only a stack pointer as the starting point. You really need more machine state to start with. For a while, I thought the solution is to pass a task pointer to show_stack(), but it turns out that this would negatively impact x86 because it's sometimes useful to show only portions of a stack trace (e.g., starting from the point at which a trap occurred). Thus, this patch _adds_ the task pointer instead:

extern void show_stack(struct task_struct *tsk, unsigned long *sp);

The idea here is that show_stack(tsk, sp) will show the backtrace of task "tsk", starting from the stack frame that "sp" is pointing to. If tsk is NULL, the trace will be for the current task. If "sp" is NULL, all stack frames of the task are shown. If both are NULL, you'll get the full trace of the current task. I _think_ this should make everyone happy.

The patch also removes the declaration of show_trace() in linux/sched.h (it never was a generic function; some platforms, in particular x86, may want to update accordingly). Finally, the patch replaces the one call to show_trace_task() with the equivalent call show_stack(task, NULL).

The patch below is for Alpha and i386, since I can (compile-)test those (I'll provide the ia64 update through my regular updates). The other arches will break visibly and updating the code should be trivial:

- add a task pointer argument to show_stack() and pass NULL as the first argument where needed
- remove show_trace_task()
- declare show_trace() in a platform-specific header file if you really want to keep it around
- 07 Jun, 2003 1 commit
Anton Blanchard authored
- 01 Jun, 2003 2 commits
Anton Blanchard authored
Anton Blanchard authored
- 20 May, 2003 1 commit
Andrew Morton authored
This updates ppc64 for the do_fork() semantics change.
- 29 Mar, 2003 1 commit
Anton Blanchard authored
- 21 Mar, 2003 1 commit
Paul Mackerras authored
- 23 Feb, 2003 1 commit
Anton Blanchard authored
- 15 Feb, 2003 1 commit
Daniel Jacobowitz authored
- 07 Feb, 2003 1 commit
Todd Inglett authored
- 21 Jan, 2003 1 commit
Anton Blanchard authored
- 17 Jan, 2003 1 commit
Anton Blanchard authored
- 09 Dec, 2002 1 commit
Anton Blanchard authored
- 28 Nov, 2002 1 commit
Anton Blanchard authored
- 23 Nov, 2002 1 commit
Anton Blanchard authored