- 28 Jan, 2005 3 commits
-
-
David Mosberger authored
Remove left-over support for Merced B-step CPUs as suggested by Jim Wilson. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Ashok Raj authored
Recent MCA percpu changes broke bringing up a CPU after the initial boot, which is required for CPU hotplug. ia64_mca_cpu_init() must be __devinit so it is not discarded in a hotplug kernel. Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
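A minimal sketch of the annotation change described (the exact prototype is an assumption):

    /* before: __init text is discarded after boot, so a CPU brought up
     * later via hotplug could no longer call it */
    void __init ia64_mca_cpu_init(void *cpu_data);

    /* after: __devinit keeps it around in a hotplug kernel */
    void __devinit ia64_mca_cpu_init(void *cpu_data);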
-
Kenneth W. Chen authored
David Mosberger wrote on Wednesday, January 26, 2005 1:31 PM > Couldn't you restore r8/r10 after .work_pending is done if > pLvSys is TRUE? That way, .work_processed would simply preserve > (save _and_ restore) r8/r10. Thank you for reviewing and the suggestion. Here is the updated patch; it nets a saving of 6 cycles, compared to 4 with the earlier version. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Rohit Seth <rohit.seth@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 26 Jan, 2005 2 commits
-
-
Russ Anderson authored
There is one small problem. In mca_asm.S, r23 was used without being set, and the hardcoded value 40 is no longer valid (patch below). With linux-ia64-test-2.6.11 plus David's patch plus the patch below, 1024 uncorrectable memory errors were injected and successfully recovered on an SGI Altix test machine. 1024 is the number of entries in the page_isolate[] array in arch/ia64/kernel/mca_drv.c. When the array is full, the recovery code says the error is not recoverable and the system reboots. Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
This patch cleans up the per-CPU MCA mess with the following changes (and yields a UP kernel that actually boots again):
- In percpu.h, make per_cpu_init() a function-call even for the UP case.
- In contig.c, enable per_cpu_init() even for UP, since we need to allocate the per-CPU MCA data in that case as well.
- Move the MCA-related stuff out of the cpuinfo structure into per-CPU variables defined by mca.c.
- Rename IA64_KR_PA_CPU_INFO to IA64_KR_PER_CPU_DATA, since it really is a per-CPU pointer now.
- In mca.h, move IA64_MCA_STACK_SIZE early enough so it gets defined for assembly code, too. Tidy up struct ia64_mca_struct. Add a declaration of ia64_mca_cpu_init().
- In mca_asm.[hS], replace the various GET_*() macros with a single GET_PERCPU_ADDR(), which loads the physical address of an arbitrary per-CPU variable. Remove all dependencies on the layout of the cpuinfo structure. Replace the hardcoded stack size with the IA64_MCA_STACK_SIZE constant. Replace hardcoded references to ar.k3 with IA64_KR(PER_CPU_DATA).
- In setup.c:cpu_init(), initialize ar.k3 to the physical equivalent of the per-CPU data pointer.
- Nuke the silly ia64_mca_cpu_t typedef and just use struct ia64_mca_cpu instead.
- Move __per_cpu_mca[] from setup.c to mca.c.
- Rename set_mca_pointer() to ia64_mca_cpu_init() and sanitize it.
- Rename efi.c:pal_code_memdesc() to efi_get_pal_addr() and make it return the PAL address rather than a memory descriptor.
- Make efi_map_pal_code() use efi_get_pal_addr().
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 25 Jan, 2005 6 commits
-
-
Tony Luck authored
A few reports of illegal instruction panics while trying to boot were tracked to this. Fix by David Mosberger. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Tony Luck authored
Spotted by Andreas Schwab, fix from Matthew Wilcox and David Mosberger. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
I noticed that the PTRACE_POKEUSR code incorrectly clears bits 56-58 of _all_ debug registers. The intention was to only clear it for odd-numbered registers, to ensure that user-level can only set user-level data/instruction-breakpoints. Patch below fixes this problem. The patch also replaces explicit clearing of the single-step and taken-branch PSR bits with a call to ptrace_disable() for PTRACE_KILL. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
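A hedged sketch of the masking rule described above (regnum and value are illustrative names for the debug-register index and the value being poked):

    /* only odd-numbered ibr/dbr registers carry the control word; clear
     * bits 56-58 there so user-level can only set user-level breakpoints */
    if (regnum & 1)
            value &= ~(0x7UL << 56);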
-
David Mosberger authored
This patch replaces the idiom: func (args..., long stack) { struct pt_regs *regs = (struct pt_regs *) &stack; with the more commonly used: func (args..., struct pt_regs regs) { The latter didn't use to work with the very earliest kernels and compilers (anybody remember egcs?), but gcc-3.3 and probably even gcc-2.96 no longer have a problem with it. The change also makes sparse happier, since it doesn't like it when you access memory past the end of the declared size of a variable. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
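Spelled out with a hypothetical sys_foo(), the two idioms from the text look roughly like this:

    /* old idiom: derive the pt_regs pointer from the last argument's address */
    long sys_foo (long arg, long stack)
    {
            struct pt_regs *regs = (struct pt_regs *) &stack;
            return do_foo(arg, regs);       /* do_foo() is illustrative */
    }

    /* new idiom: declare the register frame directly */
    long sys_foo (long arg, struct pt_regs regs)
    {
            return do_foo(arg, &regs);
    }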
-
Tony Luck authored
David thinks this might make Jesse and Willy happy (or at least happier). If they can cope with line breaks before a binary operator, rather than after, then maybe it will :-) Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Kenneth W. Chen authored
This version doesn't cost us any extra cycles. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Rohit Seth <rohit.seth@intel.com> Acked-by: David Mosberger <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 23 Jan, 2005 5 commits
-
-
Linus Torvalds authored
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Keith Owens authored
Anton Blanchard wrote: >Your recent patch looks to break module kallsyms lookups.... >It looks like if CONFIG_KALLSYMS_ALL is set then we never look up module >addresses. Separate lookups for kernel and modules when CONFIG_KALLSYMS_ALL=y. Signed-off-by: Keith Owens <kaos@ocs.com.au> Acked-by: Chris Wedgwood <cw@f00f.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
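A hedged sketch of the separation described (the address predicate and kernel-side helper are hypothetical; module_address_lookup() is the existing module path):

    /* even with KALLSYMS_ALL, only addresses inside the kernel image should
     * be resolved from the built-in table; everything else must still fall
     * through to the module lookup */
    if (is_kernel_image_addr(addr))
            return kernel_symbol_lookup(addr, symbolsize, offset);
    return module_address_lookup(addr, symbolsize, offset, modname);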
-
Andi Kleen authored
Fix the following warning in UP builds:
In file included from include/asm/numa.h:5, from arch/x86_64/kernel/setup64.c:27:
include/asm/numnodes.h:6:1: warning: "NODES_SHIFT" redefined
In file included from include/linux/mmzone.h:13, from include/linux/gfp.h:4, from include/linux/slab.h:15, from include/linux/percpu.h:4, from include/linux/sched.h:33, from arch/x86_64/kernel/setup64.c:11:
include/linux/numa.h:11:1: warning: this is the location of the previous definition
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
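Not necessarily the fix applied here, but a redefinition clash like this is commonly avoided by guarding one of the definitions, for example:

    /* let the arch header win; the fallback only applies if nothing else
     * defined NODES_SHIFT (the value below is illustrative) */
    #ifndef NODES_SHIFT
    #define NODES_SHIFT 6
    #endif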
-
Andi Kleen authored
From Terence Ripperda <tripperda@nvidia.com> When doing iounmap, don't try to change_page_attr back the guard page that ioremap added. Since the last round of change_page_attr changes, this would trigger a BUG because the reference count on the changed pages wouldn't match up. The problem is only visible on machines with >3GB of memory, because only then is the PCI memory hole below end_pfn and change_page_attr used. Fixed for both i386 and x86-64. This was actually discovered and fixed by Andrea earlier, but I goofed up while doing the last ioremap fixes merge and this change got lost. Poor Terence had to debug it again. Sorry about that. cc: andrea@suse.de Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
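A hedged sketch of the idea, assuming p is the vm_struct describing the mapping and that its recorded size includes the one-page guard appended by ioremap's allocator:

    /* restore attributes for the mapped pages only; the trailing guard page
     * was never remapped, so change_page_attr() must not touch it */
    change_page_attr(virt_to_page(__va(p->phys_addr)),
                     (p->size - PAGE_SIZE) >> PAGE_SHIFT,
                     PAGE_KERNEL);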
-
Andi Kleen authored
Undo bogus change that was introduced with kprobes. It's not really needed and it breaks some user applications because it changes the signal for int 3 from SIGTRAP to SIGSEGV. Cc: <prasanna@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 22 Jan, 2005 24 commits
-
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/davem/sparc-2.6 into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/davem/net-2.6 into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
Linus Torvalds authored
Merge http://lia64.bkbits.net/linux-ia64-release-2.6.11 into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
David S. Miller authored
- If dst/src are equal, memcpy can be used. - Eliminate register writes which were unused Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Luck authored
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Tony Luck authored
Patch from Christoph Hellwig to:
- kill the irq_desc and irq_to_vector machvecs; SN2 has its own versions, but they're the same as the generic ones
- kill do_IRQ and use __do_IRQ directly everywhere
- kill dead X86 ifdefs
- move some variable declarations around in irq.c to reduce the number of ifdefs
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Jesse Barnes authored
When I converted the sn2 code over to the new DMA API, I left the old routines in place and added wrappers to call them from the generic DMA API functions. This added an unnecessary level of obfuscation since the generic ia64 code calls those functions when any of the old style PCI DMA API functions are called. This patch rectifies the problem making the code much easier to understand and hopefully a little more efficient (though I'm sure gcc was already inlining things pretty well, there were a bunch of unnecessary checks that I took this opportunity to remove). It also shrinks the size of the sn2 pci_dma.c quite a bit. pci_dma.c | 480 +++++++++++++++++++----------------------------------------- 1 files changed, 151 insertions(+), 329 deletions(-) Signed-off-by: Jesse Barnes <jbarnes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Jesse Barnes authored
sn2 does early initialization of the SAL so it can use it for early console support. Unfortunately, the loop to find the SAL entry point was buggy so when we tried out new EFI and SAL system table layouts, the loop didn't terminate. Here's the fix (doh!, use two different loop counters instead of one and just return if we find the SAL entry point). Signed-off-by: Jesse Barnes <jbarnes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
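The bug pattern described (one index reused by both loops) and the shape of the fix, sketched with illustrative names:

    /* broken: the inner loop clobbers i, so the outer loop may never advance */
    for (i = 0; i < num_tables; i++)
            for (i = 0; i < num_entries; i++)
                    check_entry(i);

    /* fixed: separate counters, and return as soon as the SAL entry point is found */
    for (i = 0; i < num_tables; i++)
            for (j = 0; j < num_entries; j++)
                    if (is_sal_entry(i, j))
                            return;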
-
Jesse Barnes authored
Ok, here you go Tony. This one fixes the loop and also fixes drm_vm.c. All of the bits aside from the efi.h bit are ia64 specific (either under arch/ia64 or __ia64__), so your tree is probably the right place for all of it. This patch adds efi_range_is_wc() to efi.h. It's used to determine whether an address range can be mapped with the write coalescing attribute. It also fixes up some ia64 specific callers to use the new routine instead of unconditionally calling pgprot_writecombined, which can be dangerous if used on ranges that don't support it. Signed-off-by: Jesse Barnes <jbarnes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
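The intended usage pattern, as described, looks roughly like this (phys_addr, size, and the mmap-time vma context are assumptions):

    /* map with write coalescing only when firmware says the physical range
     * supports it; fall back to an uncached mapping otherwise */
    if (efi_range_is_wc(phys_addr, size))
            vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
    else
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);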
-
Jes Sorensen authored
The following patch fixes the ia64_pal_prefetch_visibility function to take a transaction type argument for either virtual or physical memory, as specified in the System Architecture Manual, page 2:358. Signed-Off-By: Jes Sorensen <jes@trained-monkey.org> Signed-Off-By: Tony Luck <tony.luck@intel.com>
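A sketch of what the updated wrapper plausibly looks like, following the usual PAL_CALL pattern (treat the exact macro usage as an assumption):

    static inline s64
    ia64_pal_prefetch_visibility (u64 trans_type)
    {
            struct ia64_pal_retval iprv;
            PAL_CALL(iprv, PAL_PREFETCH_VISIBILITY, trans_type, 0, 0);
            return iprv.status;
    }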
-
Stéphane Eranian authored
Problem: There exists a case where we stop monitoring, i.e. clear psr.pp/dcr.pp, via IPI. This happens when the stop is triggered by a close(), either explicit in the application or implicit via exit_files(). The IPI is necessary because at the time the thread (controlling the context) issues the close(), it may not be running on the CPU the context is bound to. Yet the call must succeed, hence we need to propagate the call to the right CPU. But what is the problem then? Under IPI, we invoke a perfmon routine which clears the (live) kernel psr.pp bit and also dcr.pp. Then we return from the function and execute the kernel exit path, which restores the interrupted state. Unfortunately, this restores the kernel psr from ipsr, which now contains a stale value; monitoring in the kernel therefore remains active even though we stopped it. You cannot modify the "global" psr in an interrupt routine because it will be systematically restored on the way back.
Solution: We need to patch ipsr.pp in the kernel exit path to reflect the kernel value of the psr.pp bit. This must be done only when returning to kernel. The proposed patch updates ipsr.pp so that it is identical to psr.pp. The patch is subtle because the exit path does not have a lot of free registers and also because we need to schedule for a psr read; I had to shuffle things around a little bit. The patch is important because there will be another situation where this problem can occur once we incorporate support for event sets and multiplexing: in that configuration, you may be in the middle of the idle loop and stop monitoring on a timer interrupt. Slightly different condition, yet the same problem with ipsr.pp vs. psr.pp.
Changelog:
- Update the kernel exit path, when returning to kernel, to copy psr.pp to ipsr.pp. This is necessary to ensure that if psr.pp was modified during kernel entry, the change is propagated to the psr.pp of the interrupted thread. psr.pp can be modified as a consequence of an IPI under certain conditions, such as when a system-wide context is closed from a remote CPU.
Special thanks to David for reworking the patch to fit into the enhanced exit path.
signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
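Rendered conceptually in C (the real change lives in the assembly exit path; the returning_to_kernel predicate is invented for illustration), the idea is roughly:

    /* when returning to kernel, make the saved ipsr.pp mirror the live
     * psr.pp, so that a stop issued under IPI is not undone by the unwind */
    if (returning_to_kernel)
            regs->cr_ipsr = (regs->cr_ipsr & ~IA64_PSR_PP)
                            | (ia64_getreg(_IA64_REG_PSR) & IA64_PSR_PP);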
-
Tony Luck authored
Patch from yanmin.zhang@intel.com to fix up some corner cases in ptrace. Many thanks to davidm for reviewing and improving. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Tony Luck authored
Label is unused, and so the compiler generates a warning. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Use srlz.d instead of srlz.i. Safe because we don't care whether the VHPT walker sees the clearing of PSR.ic (if it does, that's fine; if it doesn't, that's OK too, since the kernel text is pinned anyhow). Good for another 11+ cycles in (normal) getpid(). Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Optimize ia64_leave_syscall() a bit better for McKinley-type cores. The patch looks big, but that's mostly due to renaming r16/r17 to r2/r3. Good for a 13 cycle improvement. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Trivial patch: align rse_clear_invalid to 32-byte boundary. Good for a 9 cycle speed up. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Kernel-threads had both pUStk and pKStk set to FALSE, which was unintentional. I don't think the bug has shown any ill effects, but it's clearly wrong and could come around to bite us later, so let's fix it now. Depends on the previous patch to clean up C usage of the global/root-function predicates. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
The patch below is purely a cleanup but it's a prerequisite for the next bug fix patch. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Nishanth Aravamudan authored
Use msleep() instead of schedule_timeout() to guarantee the task delays as expected. Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
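The substitution follows the standard pattern; the 100 ms value below is only illustrative:

    /* before: only delays if the caller remembered to change the task state
     * first; with TASK_RUNNING, schedule_timeout() returns immediately */
    set_current_state(TASK_UNINTERRUPTIBLE);
    schedule_timeout(HZ / 10);

    /* after: guaranteed to sleep for at least the requested time */
    msleep(100);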
-
Russ Anderson authored
The fixed sized array of pages that are isolated because of 2xECC memory errors can run out. Increasing the size of the array is a band-aid measure, the real fix will require changes to generic code to add some bits to page_flags so that pages with errors can be marked so as to prevent them ever being examined. Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
David Mosberger authored
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-