Commit 13d06909 authored by Linus Torvalds

Merge bk://gkernel.bkbits.net/net-drivers-2.5

into home.osdl.org:/home/torvalds/v2.5/linux
parents 09d22ea3 2cecfc0f
...@@ -126,7 +126,9 @@ ...@@ -126,7 +126,9 @@
<para> <para>
The functions are named <function>readb</function>, The functions are named <function>readb</function>,
<function>readw</function>, <function>readl</function>, <function>readw</function>, <function>readl</function>,
<function>readq</function>, <function>writeb</function>, <function>readq</function>, <function>readb_relaxed</function>,
<function>readw_relaxed</function>, <function>readl_relaxed</function>,
<function>readq_relaxed</function>, <function>writeb</function>,
<function>writew</function>, <function>writel</function> and <function>writew</function>, <function>writel</function> and
<function>writeq</function>. <function>writeq</function>.
</para> </para>
...@@ -159,6 +161,18 @@ ...@@ -159,6 +161,18 @@
author cares. This kind of property cannot be hidden from driver author cares. This kind of property cannot be hidden from driver
writers in the API. writers in the API.
</para> </para>
<para>
PCI ordering rules also guarantee that PIO read responses arrive
after any outstanding DMA writes on that bus, since for some devices
the result of a <function>readb</function> call may signal to the
driver that a DMA transaction is complete. In many cases, however,
the driver may want to indicate that the next
<function>readb</function> call has no relation to any previous DMA
writes performed by the device. The driver can use
<function>readb_relaxed</function> for these cases, although only
some platforms will honor the relaxed semantics.
</para>
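As a rough illustration of the distinction (the device, register names and
helpers below are hypothetical, not taken from this patch), a driver
interrupt handler might use the two variants like this:

    /*
     * mydev, MYDEV_REG_STATUS, MYDEV_REG_FAULT and mydev_handle_buffer()
     * are invented for this sketch.  The status read must be ordered
     * after the device's DMA writes, so it uses readb(); the fault
     * register has no relation to DMA completion, so readb_relaxed()
     * is sufficient (on platforms that honor the relaxed semantics).
     */
    static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            struct mydev *dev = dev_id;
            u8 status;

            /* ordered read: a set DONE bit implies the DMA buffer is valid */
            status = readb(dev->mmio_base + MYDEV_REG_STATUS);
            if (!(status & MYDEV_STATUS_DONE))
                    return IRQ_NONE;

            /* unrelated to any DMA write, so the relaxed variant will do */
            dev->last_fault = readb_relaxed(dev->mmio_base + MYDEV_REG_FAULT);

            mydev_handle_buffer(dev);
            return IRQ_HANDLED;
    }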
</sect1> </sect1>
<sect1> <sect1>
......
...@@ -11,6 +11,15 @@ see big performance regressions versus the deadline scheduler, please email ...@@ -11,6 +11,15 @@ see big performance regressions versus the deadline scheduler, please email
me. Database users don't bother unless you're willing to test a lot of patches me. Database users don't bother unless you're willing to test a lot of patches
from me ;) its a known issue. from me ;) its a known issue.
Also, users with hardware RAID controllers, doing striping, may find
highly variable performance results when using the as-iosched. The
as-iosched anticipatory implementation is based on the notion that a disk
device has only one physical seeking head. A striped RAID controller
actually has a head for each physical device in the logical RAID device.
However, setting the antic_expire tunable (see tunable parameters below) to zero
produces behavior very similar to that of the deadline IO scheduler.
Selecting IO schedulers Selecting IO schedulers
----------------------- -----------------------
...@@ -19,6 +28,107 @@ To choose IO schedulers at boot time, use the argument 'elevator=deadline'. ...@@ -19,6 +28,107 @@ To choose IO schedulers at boot time, use the argument 'elevator=deadline'.
globally at boot time only presently. globally at boot time only presently.
Anticipatory IO scheduler Policies
----------------------------------
The as-iosched implementation applies several layers of policies
to determine when an IO request is dispatched to the disk controller.
Here are the policies outlined, in order of application.
1. One-way elevator algorithm.
The elevator algorithm is similar to that used in the deadline scheduler, with
the addition that it allows limited backward movement of the elevator
(i.e. seeks backwards). A seek backwards can occur when choosing between
two IO requests where one is behind the elevator's current position, and
the other is in front of the elevator's position. If the seek distance to
the request in back of the elevator is less than half the seek distance to
the request in front of the elevator, then the request in back can be chosen.
Backward seeks are also limited to a maximum of MAXBACK (1024*1024) sectors.
This favors forward movement of the elevator, while allowing opportunistic
"short" backward seeks.
2. FIFO expiration times for reads and for writes.
This is again very similar to the deadline IO scheduler. The expiration
times for requests on these lists are tunable using the parameters read_expire
and write_expire discussed below. When a read or a write expires in this way,
the IO scheduler will interrupt its current elevator sweep or read anticipation
to service the expired request.
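In other words (helper and argument names invented, jiffies-based timing
assumed), the per-request check amounts to little more than:

    /*
     * queued_jiffies is when the request went onto its FIFO; expire is
     * read_expire or write_expire converted to jiffies.
     */
    static int fifo_expired(unsigned long queued_jiffies, unsigned long expire)
    {
            return time_after(jiffies, queued_jiffies + expire);
    }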
3. Read and write request batching
A batch is a collection of read requests or a collection of write
requests. The as scheduler alternates dispatching read and write batches
to the driver. In the case of a read batch, the scheduler submits read
requests to the driver as long as there are read requests to submit, and
the read batch time limit has not been exceeded (read_batch_expire).
The read batch time limit begins counting down only when there are
competing write requests pending.
In the case of a write batch, the scheduler submits write requests to
the driver as long as there are write requests available, and the
write batch time limit has not been exceeded (write_batch_expire).
However, the length of write batches will be gradually shortened
when read batches frequently exceed their time limit.
When changing between batch types, the scheduler waits for all requests
from the previous batch to complete before scheduling requests for the
next batch.
The read and write fifo expiration times described in policy 2 above
are checked only when scheduling IO of a batch for the corresponding
(read/write) type. So for example, the read FIFO timeout values are
tested only during read batches. Likewise, the write FIFO timeout
values are tested only during write batches. For this reason,
it is generally not recommended for the read batch time
to be longer than the write expiration time, nor for the write batch
time to exceed the read expiration time (see tunable parameters below).
When the IO scheduler changes from a read to a write batch,
it begins the elevator from the request that is on the head of the
write expiration FIFO. Likewise, when changing from a write batch to
a read batch, the scheduler begins the elevator from the first entry
on the read expiration FIFO.
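A condensed sketch of that alternation (the state fields and helpers are
invented, and the real scheduler tracks considerably more state):

    /* Decide whether the current batch should end. */
    static void maybe_switch_batch(struct as_sketch *ad)
    {
            if (ad->reading) {
                    /* simplification: read_batch_expire should only start
                     * counting once competing writes are actually pending */
                    if (!ad->reads_pending ||
                        (ad->writes_pending &&
                         time_after(jiffies, ad->batch_start + ad->read_batch_expire)))
                            start_write_batch(ad);	/* after in-flight requests drain */
            } else {
                    if (!ad->writes_pending ||
                        time_after(jiffies, ad->batch_start + ad->write_batch_expire))
                            start_read_batch(ad);	/* from the head of the read FIFO */
            }
    }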
4. Read anticipation.
Read anticipation occurs only when scheduling a read batch.
This implementation of read anticipation allows only one read request
to be dispatched to the disk controller at a time. In
contrast, many write requests may be dispatched to the disk controller
at a time during a write batch. It is this characteristic that can make
the anticipatory scheduler perform anomalously with controllers supporting
TCQ, or with hardware striped RAID devices. Setting the antic_expire
queue parameter (see below) to zero disables this behavior, and the anticipatory
scheduler behaves essentially like the deadline scheduler.
When read anticipation is enabled (antic_expire is not zero), reads
are dispatched to the disk controller one at a time.
At the end of each read request, the IO scheduler examines its next
candidate read request from its sorted read list. If that next request
is from the same process as the request that just completed,
or if the next request in the queue is "very close" to the
just completed request, it is dispatched immediately. Otherwise,
statistics (average think time, average seek distance) on the process
that submitted the just completed request are examined. If it seems
likely that that process will submit another request soon, and that
request is likely to be near the just completed request, then the IO
scheduler will stop dispatching more read requests for up to (antic_expire)
milliseconds, hoping that process will submit a new request near the one
that just completed. If such a request is made, then it is dispatched
immediately. If the antic_expire wait time expires, then the IO scheduler
will dispatch the next read request from the sorted read queue.
To decide whether an anticipatory wait is worthwhile, the scheduler
maintains statistics for each process that can be used to compute
mean "think time" (the time between read requests), and mean seek
distance for that process. One observation is that these statistics
are associated with each process, but those statistics are not associated
with a specific IO device. So for example, if a process is doing IO
on several file systems on separate devices, the statistics will be
a combination of IO behavior from all those devices.
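The core of that decision can be pictured as follows (the statistics
structure, the close_threshold cutoff and the helper name are assumptions
made for illustration):

    /*
     * Return nonzero if it looks worth holding the queue idle for up to
     * antic_expire in the hope of a nearby read from the same process.
     */
    static int worth_anticipating(struct proc_stats *stats,
                                  unsigned long antic_expire_jiffies,
                                  sector_t close_threshold)
    {
            if (stats->mean_think_time > antic_expire_jiffies)
                    return 0;	/* the process probably won't ask again in time */
            if (stats->mean_seek_distance > close_threshold)
                    return 0;	/* its next read probably won't land nearby */
            return 1;
    }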
Tuning the anticipatory IO scheduler Tuning the anticipatory IO scheduler
------------------------------------ ------------------------------------
When using 'as', the anticipatory IO scheduler there are 5 parameters under When using 'as', the anticipatory IO scheduler there are 5 parameters under
...@@ -26,17 +136,18 @@ When using 'as', the anticipatory IO scheduler there are 5 parameters under ...@@ -26,17 +136,18 @@ When using 'as', the anticipatory IO scheduler there are 5 parameters under
The parameters are: The parameters are:
* read_expire * read_expire
Controls how long until a request becomes "expired". It also controls the Controls how long until a read request becomes "expired". It also controls the
interval between which expired requests are served, so set to 50, a request interval between which expired requests are served, so set to 50, a request
might take anywhere < 100ms to be serviced _if_ it is the next on the might take anywhere < 100ms to be serviced _if_ it is the next on the
expired list. Obviously it won't make the disk go faster. The result expired list. Obviously request expiration strategies won't make the disk
basically equates to the timeslice a single reader gets in the presence of go faster. The result basically equates to the timeslice a single reader
other IO. 100*((seek time / read_expire) + 1) is very roughly the % gets in the presence of other IO. 100*((seek time / read_expire) + 1) is
streaming read efficiency your disk should get with multiple readers. very roughly the % streaming read efficiency your disk should get with
multiple readers.
* read_batch_expire * read_batch_expire
Controls how much time a batch of reads is given before pending writes are Controls how much time a batch of reads is given before pending writes are
served. Higher value is more efficient. This might be set below read_expire served. A higher value is more efficient. This might be set below read_expire
if writes are to be given higher priority than reads, but reads are to be if writes are to be given higher priority than reads, but reads are to be
as efficient as possible when there are no writes. Generally though, it as efficient as possible when there are no writes. Generally though, it
should be some multiple of read_expire. should be some multiple of read_expire.
...@@ -45,7 +156,8 @@ The parameters are: ...@@ -45,7 +156,8 @@ The parameters are:
* write_batch_expire are equivalent to the above, for writes. * write_batch_expire are equivalent to the above, for writes.
* antic_expire * antic_expire
Controls the maximum amount of time we can anticipate a good read before Controls the maximum amount of time we can anticipate a good read (one
with a short seek distance from the most recently completed request) before
giving up. Many other factors may cause anticipation to be stopped early, giving up. Many other factors may cause anticipation to be stopped early,
or some processes will not be "anticipated" at all. Should be a bit higher or some processes will not be "anticipated" at all. Should be a bit higher
for big seek time devices though not a linear correspondence - most for big seek time devices though not a linear correspondence - most
......
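For completeness, a small userspace sketch of changing one of these tunables
at run time.  The sysfs path is an assumption (the exact location is the one
named in the truncated line above), and "hda" is only a placeholder:

    #include <stdio.h>

    int main(void)
    {
            /* assumed layout: /sys/block/<device>/queue/iosched/<parameter> */
            FILE *f = fopen("/sys/block/hda/queue/iosched/antic_expire", "w");

            if (!f) {
                    perror("antic_expire");
                    return 1;
            }
            /* 0 disables read anticipation, giving deadline-like behaviour */
            fprintf(f, "0\n");
            return fclose(f) ? 1 : 0;
    }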
...@@ -72,8 +72,10 @@ Offset Type Description ...@@ -72,8 +72,10 @@ Offset Type Description
0x21c unsigned long INITRD_SIZE, size in bytes of ramdisk image 0x21c unsigned long INITRD_SIZE, size in bytes of ramdisk image
0x220 4 bytes (setup.S) 0x220 4 bytes (setup.S)
0x224 unsigned short setup.S heap end pointer 0x224 unsigned short setup.S heap end pointer
0x2cc 4 bytes DISK80_SIG_BUFFER (setup.S)
0x2d0 - 0x600 E820MAP 0x2d0 - 0x600 E820MAP
0x600 - 0x7D4 EDDBUF (setup.S) 0x600 - 0x7ff EDDBUF (setup.S) for disk signature read sector
0x600 - 0x7d3 EDDBUF (setup.S) for edd data
0x800 string, 2K max COMMAND_LINE, the kernel commandline as 0x800 string, 2K max COMMAND_LINE, the kernel commandline as
copied using CL_OFFSET. copied using CL_OFFSET.
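These fixed offsets are normally wrapped in accessor macros over the kernel's
saved copy of this page; a sketch for the new signature field (PARAM stands
for the base of that saved copy, and the macro bodies are assumed rather than
quoted from the headers):

    /* PARAM: base address of the kernel's saved copy of the zero page */
    #define DISK80_SIG_BUFFER	0x2cc	/* offset from the table above */
    #define DISK80_SIGNATURE	(*(unsigned int *) (PARAM + DISK80_SIG_BUFFER))

    /* used later, e.g. in copy_edd(): edd_disk80_sig = DISK80_SIGNATURE; */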
......
...@@ -254,8 +254,8 @@ the different loglevels. ...@@ -254,8 +254,8 @@ the different loglevels.
printk_ratelimit: printk_ratelimit:
Some warning messages are rate limited. printk_ratelimit specifies Some warning messages are rate limited. printk_ratelimit specifies
the minimum length of time between these messages, by default we the minimum length of time between these messages (in jiffies), by
allow one every 5 seconds. default we allow one every 5 seconds.
A value of 0 will disable rate limiting. A value of 0 will disable rate limiting.
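Kernel code typically consults this limit through a small helper before
printing; a sketch of the pattern (whether the helper in this tree is spelled
printk_ratelimit() is an assumption):

    static void warn_bad_frame(void)
    {
            /* only emit when the rate limiter currently allows a message */
            if (printk_ratelimit())
                    printk(KERN_WARNING "mydriver: dropping bad frame\n");
    }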
......
...@@ -830,7 +830,7 @@ quiet_cmd_cscope-file = FILELST cscope.files ...@@ -830,7 +830,7 @@ quiet_cmd_cscope-file = FILELST cscope.files
cmd_cscope-file = $(all-sources) > cscope.files cmd_cscope-file = $(all-sources) > cscope.files
quiet_cmd_cscope = MAKE cscope.out quiet_cmd_cscope = MAKE cscope.out
cmd_cscope = cscope -k -b cmd_cscope = cscope -k -b -q
cscope: FORCE cscope: FORCE
$(call cmd,cscope-file) $(call cmd,cscope-file)
......
...@@ -57,7 +57,7 @@ op_axp_setup(void) ...@@ -57,7 +57,7 @@ op_axp_setup(void)
/* Compute the mask of enabled counters. */ /* Compute the mask of enabled counters. */
for (i = e = 0; i < model->num_counters; ++i) for (i = e = 0; i < model->num_counters; ++i)
if (ctr[0].enabled) if (ctr[i].enabled)
e |= 1 << i; e |= 1 << i;
reg.enable = e; reg.enable = e;
......
...@@ -264,7 +264,7 @@ struct thread_info *alloc_thread_info(struct task_struct *task) ...@@ -264,7 +264,7 @@ struct thread_info *alloc_thread_info(struct task_struct *task)
if (!thread) if (!thread)
thread = ll_alloc_task_struct(); thread = ll_alloc_task_struct();
#ifdef CONFIG_SYSRQ #ifdef CONFIG_MAGIC_SYSRQ
/* /*
* The stack must be cleared if you want SYSRQ-T to * The stack must be cleared if you want SYSRQ-T to
* give sensible stack usage information * give sensible stack usage information
......
...@@ -257,7 +257,7 @@ struct thread_info *alloc_thread_info(struct task_struct *task) ...@@ -257,7 +257,7 @@ struct thread_info *alloc_thread_info(struct task_struct *task)
if (!thread) if (!thread)
thread = ll_alloc_task_struct(); thread = ll_alloc_task_struct();
#ifdef CONFIG_SYSRQ #ifdef CONFIG_MAGIC_SYSRQ
/* /*
* The stack must be cleared if you want SYSRQ-T to * The stack must be cleared if you want SYSRQ-T to
* give sensible stack usage information * give sensible stack usage information
......
...@@ -104,7 +104,7 @@ static unsigned long output_ptr = 0; ...@@ -104,7 +104,7 @@ static unsigned long output_ptr = 0;
static void *malloc(int size); static void *malloc(int size);
static void free(void *where); static void free(void *where);
static void puts(const char *); static void putstr(const char *);
extern int end; extern int end;
static long free_mem_ptr = (long)&end; static long free_mem_ptr = (long)&end;
...@@ -169,7 +169,7 @@ static void scroll(void) ...@@ -169,7 +169,7 @@ static void scroll(void)
vidmem[i] = ' '; vidmem[i] = ' ';
} }
static void puts(const char *s) static void putstr(const char *s)
{ {
int x,y,pos; int x,y,pos;
char c; char c;
...@@ -287,9 +287,9 @@ static void flush_window(void) ...@@ -287,9 +287,9 @@ static void flush_window(void)
static void error(char *x) static void error(char *x)
{ {
puts("\n\n"); putstr("\n\n");
puts(x); putstr(x);
puts("\n\n -- System halted"); putstr("\n\n -- System halted");
while(1); /* Halt */ while(1); /* Halt */
} }
...@@ -373,9 +373,9 @@ asmlinkage int decompress_kernel(struct moveparams *mv, void *rmode) ...@@ -373,9 +373,9 @@ asmlinkage int decompress_kernel(struct moveparams *mv, void *rmode)
else setup_output_buffer_if_we_run_high(mv); else setup_output_buffer_if_we_run_high(mv);
makecrc(); makecrc();
puts("Uncompressing Linux... "); putstr("Uncompressing Linux... ");
gunzip(); gunzip();
puts("Ok, booting the kernel.\n"); putstr("Ok, booting the kernel.\n");
if (high_loaded) close_output_buffer_if_we_run_high(mv); if (high_loaded) close_output_buffer_if_we_run_high(mv);
return high_loaded; return high_loaded;
} }
...@@ -49,6 +49,8 @@ ...@@ -49,6 +49,8 @@
* by Matt Domsch <Matt_Domsch@dell.com> October 2002 * by Matt Domsch <Matt_Domsch@dell.com> October 2002
* conformant to T13 Committee www.t13.org * conformant to T13 Committee www.t13.org
* projects 1572D, 1484D, 1386D, 1226DT * projects 1572D, 1484D, 1386D, 1226DT
* disk signature read by Matt Domsch <Matt_Domsch@dell.com>
* and Andrew Wilks <Andrew_Wilks@dell.com> September 2003
*/ */
#include <linux/config.h> #include <linux/config.h>
...@@ -578,6 +580,25 @@ done_apm_bios: ...@@ -578,6 +580,25 @@ done_apm_bios:
#endif #endif
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE) #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
# Read the first sector of device 80h and store the 4-byte signature
movl $0xFFFFFFFF, %eax
movl %eax, (DISK80_SIG_BUFFER) # assume failure
movb $READ_SECTORS, %ah
movb $1, %al # read 1 sector
movb $0x80, %dl # from device 80
movb $0, %dh # at head 0
movw $1, %cx # cylinder 0, sector 1
pushw %es
pushw %ds
popw %es
movw $EDDBUF, %bx
int $0x13
jc disk_sig_done
movl (EDDBUF+MBR_SIG_OFFSET), %eax
movl %eax, (DISK80_SIG_BUFFER) # store success
disk_sig_done:
popw %es
# Do the BIOS Enhanced Disk Drive calls # Do the BIOS Enhanced Disk Drive calls
# This consists of two calls: # This consists of two calls:
# int 13h ah=41h "Check Extensions Present" # int 13h ah=41h "Check Extensions Present"
......
...@@ -27,6 +27,7 @@ ...@@ -27,6 +27,7 @@
#include <linux/config.h> #include <linux/config.h>
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/efi.h> #include <linux/efi.h>
#include <linux/irq.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/io_apic.h> #include <asm/io_apic.h>
#include <asm/apic.h> #include <asm/apic.h>
...@@ -311,7 +312,14 @@ __setup("acpi_pic_sci=", acpi_pic_sci_setup); ...@@ -311,7 +312,14 @@ __setup("acpi_pic_sci=", acpi_pic_sci_setup);
#endif /* CONFIG_ACPI_BUS */ #endif /* CONFIG_ACPI_BUS */
#ifdef CONFIG_X86_IO_APIC
int acpi_irq_to_vector(u32 irq)
{
if (use_pci_vector() && !platform_legacy_irq(irq))
irq = IO_APIC_VECTOR(irq);
return irq;
}
#endif
static unsigned long __init static unsigned long __init
acpi_scan_rsdp ( acpi_scan_rsdp (
......
#ident "$Id$"
/* ----------------------------------------------------------------------- * /* ----------------------------------------------------------------------- *
* *
* Copyright 2000 H. Peter Anvin - All Rights Reserved * Copyright 2000 H. Peter Anvin - All Rights Reserved
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
* linux/arch/i386/kernel/edd.c * linux/arch/i386/kernel/edd.c
* Copyright (C) 2002, 2003 Dell Inc. * Copyright (C) 2002, 2003 Dell Inc.
* by Matt Domsch <Matt_Domsch@dell.com> * by Matt Domsch <Matt_Domsch@dell.com>
* disk80 signature by Matt Domsch, Andrew Wilks, and Sandeep K. Shandilya
* *
* BIOS Enhanced Disk Drive Services (EDD) * BIOS Enhanced Disk Drive Services (EDD)
* conformant to T13 Committee www.t13.org * conformant to T13 Committee www.t13.org
...@@ -59,9 +60,9 @@ MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>"); ...@@ -59,9 +60,9 @@ MODULE_AUTHOR("Matt Domsch <Matt_Domsch@Dell.com>");
MODULE_DESCRIPTION("sysfs interface to BIOS EDD information"); MODULE_DESCRIPTION("sysfs interface to BIOS EDD information");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
#define EDD_VERSION "0.10 2003-Oct-11" #define EDD_VERSION "0.12 2004-Jan-26"
#define EDD_DEVICE_NAME_SIZE 16 #define EDD_DEVICE_NAME_SIZE 16
#define REPORT_URL "http://domsch.com/linux/edd30/results.html" #define REPORT_URL "http://linux.dell.com/edd/results.html"
#define left (PAGE_SIZE - (p - buf) - 1) #define left (PAGE_SIZE - (p - buf) - 1)
...@@ -259,6 +260,14 @@ edd_show_version(struct edd_device *edev, char *buf) ...@@ -259,6 +260,14 @@ edd_show_version(struct edd_device *edev, char *buf)
return (p - buf); return (p - buf);
} }
static ssize_t
edd_show_disk80_sig(struct edd_device *edev, char *buf)
{
char *p = buf;
p += snprintf(p, left, "0x%08x\n", edd_disk80_sig);
return (p - buf);
}
static ssize_t static ssize_t
edd_show_extensions(struct edd_device *edev, char *buf) edd_show_extensions(struct edd_device *edev, char *buf)
{ {
...@@ -429,6 +438,15 @@ edd_has_edd30(struct edd_device *edev) ...@@ -429,6 +438,15 @@ edd_has_edd30(struct edd_device *edev)
return 1; return 1;
} }
static int
edd_has_disk80_sig(struct edd_device *edev)
{
struct edd_info *info = edd_dev_get_info(edev);
if (!edev || !info)
return 0;
return info->device == 0x80;
}
static EDD_DEVICE_ATTR(raw_data, 0444, edd_show_raw_data, NULL); static EDD_DEVICE_ATTR(raw_data, 0444, edd_show_raw_data, NULL);
static EDD_DEVICE_ATTR(version, 0444, edd_show_version, NULL); static EDD_DEVICE_ATTR(version, 0444, edd_show_version, NULL);
static EDD_DEVICE_ATTR(extensions, 0444, edd_show_extensions, NULL); static EDD_DEVICE_ATTR(extensions, 0444, edd_show_extensions, NULL);
...@@ -443,6 +461,7 @@ static EDD_DEVICE_ATTR(default_sectors_per_track, 0444, ...@@ -443,6 +461,7 @@ static EDD_DEVICE_ATTR(default_sectors_per_track, 0444,
edd_has_default_sectors_per_track); edd_has_default_sectors_per_track);
static EDD_DEVICE_ATTR(interface, 0444, edd_show_interface, edd_has_edd30); static EDD_DEVICE_ATTR(interface, 0444, edd_show_interface, edd_has_edd30);
static EDD_DEVICE_ATTR(host_bus, 0444, edd_show_host_bus, edd_has_edd30); static EDD_DEVICE_ATTR(host_bus, 0444, edd_show_host_bus, edd_has_edd30);
static EDD_DEVICE_ATTR(mbr_signature, 0444, edd_show_disk80_sig, edd_has_disk80_sig);
/* These are default attributes that are added for every edd /* These are default attributes that are added for every edd
...@@ -464,6 +483,7 @@ static struct edd_attribute * edd_attrs[] = { ...@@ -464,6 +483,7 @@ static struct edd_attribute * edd_attrs[] = {
&edd_attr_default_sectors_per_track, &edd_attr_default_sectors_per_track,
&edd_attr_interface, &edd_attr_interface,
&edd_attr_host_bus, &edd_attr_host_bus,
&edd_attr_mbr_signature,
NULL, NULL,
}; };
......
...@@ -32,7 +32,6 @@ ...@@ -32,7 +32,6 @@
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#include <asm/nmi.h> #include <asm/nmi.h>
#include <asm/edd.h>
#include <asm/ist.h> #include <asm/ist.h>
extern void dump_thread(struct pt_regs *, struct user *); extern void dump_thread(struct pt_regs *, struct user *);
...@@ -203,11 +202,6 @@ EXPORT_SYMBOL(kunmap_atomic); ...@@ -203,11 +202,6 @@ EXPORT_SYMBOL(kunmap_atomic);
EXPORT_SYMBOL(kmap_atomic_to_page); EXPORT_SYMBOL(kmap_atomic_to_page);
#endif #endif
#ifdef CONFIG_EDD_MODULE
EXPORT_SYMBOL(edd);
EXPORT_SYMBOL(eddnr);
#endif
#if defined(CONFIG_X86_SPEEDSTEP_SMI) || defined(CONFIG_X86_SPEEDSTEP_SMI_MODULE) #if defined(CONFIG_X86_SPEEDSTEP_SMI) || defined(CONFIG_X86_SPEEDSTEP_SMI_MODULE)
EXPORT_SYMBOL(ist_info); EXPORT_SYMBOL(ist_info);
#endif #endif
......
...@@ -451,15 +451,18 @@ int get_fpxregs( struct user_fxsr_struct __user *buf, struct task_struct *tsk ) ...@@ -451,15 +451,18 @@ int get_fpxregs( struct user_fxsr_struct __user *buf, struct task_struct *tsk )
int set_fpxregs( struct task_struct *tsk, struct user_fxsr_struct __user *buf ) int set_fpxregs( struct task_struct *tsk, struct user_fxsr_struct __user *buf )
{ {
int ret = 0;
if ( cpu_has_fxsr ) { if ( cpu_has_fxsr ) {
__copy_from_user( &tsk->thread.i387.fxsave, buf, if (__copy_from_user( &tsk->thread.i387.fxsave, buf,
sizeof(struct user_fxsr_struct) ); sizeof(struct user_fxsr_struct) ))
ret = -EFAULT;
/* mxcsr bit 6 and 31-16 must be zero for security reasons */ /* mxcsr bit 6 and 31-16 must be zero for security reasons */
tsk->thread.i387.fxsave.mxcsr &= 0xffbf; tsk->thread.i387.fxsave.mxcsr &= 0xffbf;
return 0;
} else { } else {
return -EIO; ret = -EIO;
} }
return ret;
} }
/* /*
......
...@@ -1126,7 +1126,8 @@ void __init mp_parse_prt (void) ...@@ -1126,7 +1126,8 @@ void __init mp_parse_prt (void)
/* Don't set up the ACPI SCI because it's already set up */ /* Don't set up the ACPI SCI because it's already set up */
if (acpi_fadt.sci_int == irq) { if (acpi_fadt.sci_int == irq) {
entry->irq = irq; /*we still need to set entry's irq*/ irq = acpi_irq_to_vector(irq);
entry->irq = irq; /* we still need to set entry's irq */
continue; continue;
} }
...@@ -1156,18 +1157,14 @@ void __init mp_parse_prt (void) ...@@ -1156,18 +1157,14 @@ void __init mp_parse_prt (void)
if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) { if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) {
printk(KERN_DEBUG "Pin %d-%d already programmed\n", printk(KERN_DEBUG "Pin %d-%d already programmed\n",
mp_ioapic_routing[ioapic].apic_id, ioapic_pin); mp_ioapic_routing[ioapic].apic_id, ioapic_pin);
if (use_pci_vector() && !platform_legacy_irq(irq)) entry->irq = acpi_irq_to_vector(irq);
irq = IO_APIC_VECTOR(irq);
entry->irq = irq;
continue; continue;
} }
mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit); mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
if (!io_apic_set_pci_routing(ioapic, ioapic_pin, irq, edge_level, active_high_low)) { if (!io_apic_set_pci_routing(ioapic, ioapic_pin, irq, edge_level, active_high_low)) {
if (use_pci_vector() && !platform_legacy_irq(irq)) entry->irq = acpi_irq_to_vector(irq);
irq = IO_APIC_VECTOR(irq);
entry->irq = irq;
} }
printk(KERN_DEBUG "%02x:%02x:%02x[%c] -> %d-%d -> IRQ %d\n", printk(KERN_DEBUG "%02x:%02x:%02x[%c] -> %d-%d -> IRQ %d\n",
entry->id.segment, entry->id.bus, entry->id.segment, entry->id.bus,
......
#ident "$Id$"
/* ----------------------------------------------------------------------- * /* ----------------------------------------------------------------------- *
* *
* Copyright 2000 H. Peter Anvin - All Rights Reserved * Copyright 2000 H. Peter Anvin - All Rights Reserved
......
...@@ -444,6 +444,12 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map) ...@@ -444,6 +444,12 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE) #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
unsigned char eddnr; unsigned char eddnr;
struct edd_info edd[EDDMAXNR]; struct edd_info edd[EDDMAXNR];
unsigned int edd_disk80_sig;
#ifdef CONFIG_EDD_MODULE
EXPORT_SYMBOL(eddnr);
EXPORT_SYMBOL(edd);
EXPORT_SYMBOL(edd_disk80_sig);
#endif
/** /**
* copy_edd() - Copy the BIOS EDD information * copy_edd() - Copy the BIOS EDD information
* from empty_zero_page into a safe place. * from empty_zero_page into a safe place.
...@@ -453,6 +459,7 @@ static inline void copy_edd(void) ...@@ -453,6 +459,7 @@ static inline void copy_edd(void)
{ {
eddnr = EDD_NR; eddnr = EDD_NR;
memcpy(edd, EDD_BUF, sizeof(edd)); memcpy(edd, EDD_BUF, sizeof(edd));
edd_disk80_sig = DISK80_SIGNATURE;
} }
#else #else
#define copy_edd() do {} while (0) #define copy_edd() do {} while (0)
......
...@@ -33,6 +33,7 @@ ...@@ -33,6 +33,7 @@
* Dave Jones : Report invalid combinations of Athlon CPUs. * Dave Jones : Report invalid combinations of Athlon CPUs.
* Rusty Russell : Hacked into shape for new "hotplug" boot process. */ * Rusty Russell : Hacked into shape for new "hotplug" boot process. */
#include <linux/module.h>
#include <linux/config.h> #include <linux/config.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/kernel.h> #include <linux/kernel.h>
...@@ -503,6 +504,7 @@ cpumask_t node_2_cpu_mask[MAX_NUMNODES] = ...@@ -503,6 +504,7 @@ cpumask_t node_2_cpu_mask[MAX_NUMNODES] =
{ [0 ... MAX_NUMNODES-1] = CPU_MASK_NONE }; { [0 ... MAX_NUMNODES-1] = CPU_MASK_NONE };
/* which node each logical CPU is on */ /* which node each logical CPU is on */
int cpu_2_node[NR_CPUS] = { [0 ... NR_CPUS-1] = 0 }; int cpu_2_node[NR_CPUS] = { [0 ... NR_CPUS-1] = 0 };
EXPORT_SYMBOL(cpu_2_node);
/* set up a mapping between cpu and node. */ /* set up a mapping between cpu and node. */
static inline void map_cpu_to_node(int cpu, int node) static inline void map_cpu_to_node(int cpu, int node)
......
...@@ -232,9 +232,13 @@ static void mark_offset_tsc(void) ...@@ -232,9 +232,13 @@ static void mark_offset_tsc(void)
/* sanity check to ensure we're not always losing ticks */ /* sanity check to ensure we're not always losing ticks */
if (lost_count++ > 100) { if (lost_count++ > 100) {
printk(KERN_WARNING "Losing too many ticks!\n"); printk(KERN_WARNING "Losing too many ticks!\n");
printk(KERN_WARNING "TSC cannot be used as a timesource." printk(KERN_WARNING "TSC cannot be used as a timesource. ");
" (Are you running with SpeedStep?)\n"); printk(KERN_WARNING "Possible reasons for this are:\n");
printk(KERN_WARNING "Falling back to a sane timesource.\n"); printk(KERN_WARNING " You're running with Speedstep,\n");
printk(KERN_WARNING " You don't have DMA enabled for your hard disk (see hdparm),\n");
printk(KERN_WARNING " Incorrect TSC synchronization on an SMP system (see dmesg).\n");
printk(KERN_WARNING "Falling back to a sane timesource now.\n");
clock_fallback(); clock_fallback();
} }
} else } else
......
...@@ -34,10 +34,8 @@ struct i386_cpu cpu_devices[NR_CPUS]; ...@@ -34,10 +34,8 @@ struct i386_cpu cpu_devices[NR_CPUS];
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
#include <linux/mmzone.h> #include <linux/mmzone.h>
#include <asm/node.h> #include <asm/node.h>
#include <asm/memblk.h>
struct i386_node node_devices[MAX_NUMNODES]; struct i386_node node_devices[MAX_NUMNODES];
struct i386_memblk memblk_devices[MAX_NR_MEMBLKS];
static int __init topology_init(void) static int __init topology_init(void)
{ {
...@@ -47,8 +45,6 @@ static int __init topology_init(void) ...@@ -47,8 +45,6 @@ static int __init topology_init(void)
arch_register_node(i); arch_register_node(i);
for (i = 0; i < NR_CPUS; i++) for (i = 0; i < NR_CPUS; i++)
if (cpu_possible(i)) arch_register_cpu(i); if (cpu_possible(i)) arch_register_cpu(i);
for (i = 0; i < num_online_memblks(); i++)
arch_register_memblk(i);
return 0; return 0;
} }
......
...@@ -34,10 +34,8 @@ struct i386_cpu cpu_devices[NR_CPUS]; ...@@ -34,10 +34,8 @@ struct i386_cpu cpu_devices[NR_CPUS];
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
#include <linux/mmzone.h> #include <linux/mmzone.h>
#include <asm/node.h> #include <asm/node.h>
#include <asm/memblk.h>
struct i386_node node_devices[MAX_NUMNODES]; struct i386_node node_devices[MAX_NUMNODES];
struct i386_memblk memblk_devices[MAX_NR_MEMBLKS];
static int __init topology_init(void) static int __init topology_init(void)
{ {
...@@ -47,8 +45,6 @@ static int __init topology_init(void) ...@@ -47,8 +45,6 @@ static int __init topology_init(void)
arch_register_node(i); arch_register_node(i);
for (i = 0; i < NR_CPUS; i++) for (i = 0; i < NR_CPUS; i++)
if (cpu_possible(i)) arch_register_cpu(i); if (cpu_possible(i)) arch_register_cpu(i);
for (i = 0; i < num_online_memblks(); i++)
arch_register_memblk(i);
return 0; return 0;
} }
......
...@@ -409,7 +409,7 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma) ...@@ -409,7 +409,7 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma)
pxm_bit_set(pxm); pxm_bit_set(pxm);
/* Insertion sort based on base address */ /* Insertion sort based on base address */
pend = &node_memblk[num_memblks]; pend = &node_memblk[num_node_memblks];
for (p = &node_memblk[0]; p < pend; p++) { for (p = &node_memblk[0]; p < pend; p++) {
if (paddr < p->start_paddr) if (paddr < p->start_paddr)
break; break;
...@@ -421,7 +421,7 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma) ...@@ -421,7 +421,7 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma)
p->start_paddr = paddr; p->start_paddr = paddr;
p->size = size; p->size = size;
p->nid = pxm; p->nid = pxm;
num_memblks++; num_node_memblks++;
} }
void __init void __init
...@@ -448,7 +448,7 @@ acpi_numa_arch_fixup (void) ...@@ -448,7 +448,7 @@ acpi_numa_arch_fixup (void)
} }
/* set logical node id in memory chunk structure */ /* set logical node id in memory chunk structure */
for (i = 0; i < num_memblks; i++) for (i = 0; i < num_node_memblks; i++)
node_memblk[i].nid = pxm_to_nid_map[node_memblk[i].nid]; node_memblk[i].nid = pxm_to_nid_map[node_memblk[i].nid];
/* assign memory bank numbers for each chunk on each node */ /* assign memory bank numbers for each chunk on each node */
...@@ -456,7 +456,7 @@ acpi_numa_arch_fixup (void) ...@@ -456,7 +456,7 @@ acpi_numa_arch_fixup (void)
int bank; int bank;
bank = 0; bank = 0;
for (j = 0; j < num_memblks; j++) for (j = 0; j < num_node_memblks; j++)
if (node_memblk[j].nid == i) if (node_memblk[j].nid == i)
node_memblk[j].bank = bank++; node_memblk[j].bank = bank++;
} }
...@@ -466,7 +466,7 @@ acpi_numa_arch_fixup (void) ...@@ -466,7 +466,7 @@ acpi_numa_arch_fixup (void)
node_cpuid[i].nid = pxm_to_nid_map[node_cpuid[i].nid]; node_cpuid[i].nid = pxm_to_nid_map[node_cpuid[i].nid];
printk(KERN_INFO "Number of logical nodes in system = %d\n", numnodes); printk(KERN_INFO "Number of logical nodes in system = %d\n", numnodes);
printk(KERN_INFO "Number of memory chunks in system = %d\n", num_memblks); printk(KERN_INFO "Number of memory chunks in system = %d\n", num_node_memblks);
if (!slit_table) return; if (!slit_table) return;
memset(numa_slit, -1, sizeof(numa_slit)); memset(numa_slit, -1, sizeof(numa_slit));
......
...@@ -650,7 +650,7 @@ free_state_stack (struct unw_reg_state *rs) ...@@ -650,7 +650,7 @@ free_state_stack (struct unw_reg_state *rs)
/* Unwind decoder routines */ /* Unwind decoder routines */
static enum unw_register_index __attribute__((const)) static enum unw_register_index __attribute_const__
decode_abreg (unsigned char abreg, int memory) decode_abreg (unsigned char abreg, int memory)
{ {
switch (abreg) { switch (abreg) {
......
...@@ -419,14 +419,14 @@ void call_pernode_memory(unsigned long start, unsigned long len, void *arg) ...@@ -419,14 +419,14 @@ void call_pernode_memory(unsigned long start, unsigned long len, void *arg)
func = arg; func = arg;
if (!num_memblks) { if (!num_node_memblks) {
/* No SRAT table, to assume one node (node 0) */ /* No SRAT table, so assume one node (node 0) */
if (start < end) if (start < end)
(*func)(start, len, 0); (*func)(start, len, 0);
return; return;
} }
for (i = 0; i < num_memblks; i++) { for (i = 0; i < num_node_memblks; i++) {
rs = max(start, node_memblk[i].start_paddr); rs = max(start, node_memblk[i].start_paddr);
re = min(end, node_memblk[i].start_paddr + re = min(end, node_memblk[i].start_paddr +
node_memblk[i].size); node_memblk[i].size);
......
...@@ -13,7 +13,6 @@ ...@@ -13,7 +13,6 @@
#include <linux/config.h> #include <linux/config.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/memblk.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/node.h> #include <linux/node.h>
#include <linux/init.h> #include <linux/init.h>
...@@ -21,7 +20,6 @@ ...@@ -21,7 +20,6 @@
#include <asm/mmzone.h> #include <asm/mmzone.h>
#include <asm/numa.h> #include <asm/numa.h>
static struct memblk *sysfs_memblks;
static struct node *sysfs_nodes; static struct node *sysfs_nodes;
static struct cpu *sysfs_cpus; static struct cpu *sysfs_cpus;
...@@ -29,8 +27,8 @@ static struct cpu *sysfs_cpus; ...@@ -29,8 +27,8 @@ static struct cpu *sysfs_cpus;
* The following structures are usually initialized by ACPI or * The following structures are usually initialized by ACPI or
* similar mechanisms and describe the NUMA characteristics of the machine. * similar mechanisms and describe the NUMA characteristics of the machine.
*/ */
int num_memblks; int num_node_memblks;
struct node_memblk_s node_memblk[NR_MEMBLKS]; struct node_memblk_s node_memblk[NR_NODE_MEMBLKS];
struct node_cpuid_s node_cpuid[NR_CPUS]; struct node_cpuid_s node_cpuid[NR_CPUS];
/* /*
* This is a matrix with "distances" between nodes, they should be * This is a matrix with "distances" between nodes, they should be
...@@ -44,12 +42,12 @@ paddr_to_nid(unsigned long paddr) ...@@ -44,12 +42,12 @@ paddr_to_nid(unsigned long paddr)
{ {
int i; int i;
for (i = 0; i < num_memblks; i++) for (i = 0; i < num_node_memblks; i++)
if (paddr >= node_memblk[i].start_paddr && if (paddr >= node_memblk[i].start_paddr &&
paddr < node_memblk[i].start_paddr + node_memblk[i].size) paddr < node_memblk[i].start_paddr + node_memblk[i].size)
break; break;
return (i < num_memblks) ? node_memblk[i].nid : (num_memblks ? -1 : 0); return (i < num_node_memblks) ? node_memblk[i].nid : (num_node_memblks ? -1 : 0);
} }
static int __init topology_init(void) static int __init topology_init(void)
...@@ -63,18 +61,8 @@ static int __init topology_init(void) ...@@ -63,18 +61,8 @@ static int __init topology_init(void)
} }
memset(sysfs_nodes, 0, sizeof(struct node) * numnodes); memset(sysfs_nodes, 0, sizeof(struct node) * numnodes);
sysfs_memblks = kmalloc(sizeof(struct memblk) * num_memblks,
GFP_KERNEL);
if (!sysfs_memblks) {
kfree(sysfs_nodes);
err = -ENOMEM;
goto out;
}
memset(sysfs_memblks, 0, sizeof(struct memblk) * num_memblks);
sysfs_cpus = kmalloc(sizeof(struct cpu) * NR_CPUS, GFP_KERNEL); sysfs_cpus = kmalloc(sizeof(struct cpu) * NR_CPUS, GFP_KERNEL);
if (!sysfs_cpus) { if (!sysfs_cpus) {
kfree(sysfs_memblks);
kfree(sysfs_nodes); kfree(sysfs_nodes);
err = -ENOMEM; err = -ENOMEM;
goto out; goto out;
...@@ -85,11 +73,6 @@ static int __init topology_init(void) ...@@ -85,11 +73,6 @@ static int __init topology_init(void)
if ((err = register_node(&sysfs_nodes[i], i, 0))) if ((err = register_node(&sysfs_nodes[i], i, 0)))
goto out; goto out;
for (i = 0; i < num_memblks; i++)
if ((err = register_memblk(&sysfs_memblks[i], i,
&sysfs_nodes[memblk_to_node(i)])))
goto out;
for (i = 0; i < NR_CPUS; i++) for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i)) if (cpu_online(i))
if((err = register_cpu(&sysfs_cpus[i], i, if((err = register_cpu(&sysfs_cpus[i], i,
......
...@@ -6,9 +6,9 @@ ...@@ -6,9 +6,9 @@
* Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved. * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
*/ */
#include <linux/config.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/sched.h>
#include <asm/sn/types.h> #include <asm/sn/types.h>
#include <asm/sn/sgi.h> #include <asm/sn/sgi.h>
#include <asm/sn/driver.h> #include <asm/sn/driver.h>
...@@ -123,7 +123,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */ ...@@ -123,7 +123,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
/* sanity check */ /* sanity check */
if (byte_count_max > byte_count) if (byte_count_max > byte_count)
return(NULL); return NULL;
hubinfo_get(hubv, &hubinfo); hubinfo_get(hubv, &hubinfo);
...@@ -152,7 +152,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */ ...@@ -152,7 +152,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
* For now, reject requests that span big windows. * For now, reject requests that span big windows.
*/ */
if ((xtalk_addr % BWIN_SIZE) + byte_count > BWIN_SIZE) if ((xtalk_addr % BWIN_SIZE) + byte_count > BWIN_SIZE)
return(NULL); return NULL;
/* Round xtalk address down for big window alignement */ /* Round xtalk address down for big window alignement */
...@@ -184,7 +184,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */ ...@@ -184,7 +184,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
widget == bw_piomap->hpio_xtalk_info.xp_target) { widget == bw_piomap->hpio_xtalk_info.xp_target) {
bw_piomap->hpio_holdcnt++; bw_piomap->hpio_holdcnt++;
spin_unlock(&hubinfo->h_bwlock); spin_unlock(&hubinfo->h_bwlock);
return(bw_piomap); return bw_piomap;
} }
} }
...@@ -264,7 +264,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */ ...@@ -264,7 +264,7 @@ hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
done: done:
spin_unlock(&hubinfo->h_bwlock); spin_unlock(&hubinfo->h_bwlock);
return(bw_piomap); return bw_piomap;
} }
/* /*
...@@ -330,18 +330,18 @@ hub_piomap_addr(hub_piomap_t hub_piomap, /* mapping resources */ ...@@ -330,18 +330,18 @@ hub_piomap_addr(hub_piomap_t hub_piomap, /* mapping resources */
{ {
/* Verify that range can be mapped using the specified piomap */ /* Verify that range can be mapped using the specified piomap */
if (xtalk_addr < hub_piomap->hpio_xtalk_info.xp_xtalk_addr) if (xtalk_addr < hub_piomap->hpio_xtalk_info.xp_xtalk_addr)
return(0); return 0;
if (xtalk_addr + byte_count > if (xtalk_addr + byte_count >
( hub_piomap->hpio_xtalk_info.xp_xtalk_addr + ( hub_piomap->hpio_xtalk_info.xp_xtalk_addr +
hub_piomap->hpio_xtalk_info.xp_mapsz)) hub_piomap->hpio_xtalk_info.xp_mapsz))
return(0); return 0;
if (hub_piomap->hpio_flags & HUB_PIOMAP_IS_VALID) if (hub_piomap->hpio_flags & HUB_PIOMAP_IS_VALID)
return(hub_piomap->hpio_xtalk_info.xp_kvaddr + return hub_piomap->hpio_xtalk_info.xp_kvaddr +
(xtalk_addr % hub_piomap->hpio_xtalk_info.xp_mapsz)); (xtalk_addr % hub_piomap->hpio_xtalk_info.xp_mapsz);
else else
return(0); return 0;
} }
...@@ -388,9 +388,9 @@ hub_piotrans_addr( vertex_hdl_t dev, /* translate to this device */ ...@@ -388,9 +388,9 @@ hub_piotrans_addr( vertex_hdl_t dev, /* translate to this device */
addr = (caddr_t)iaddr; addr = (caddr_t)iaddr;
} }
#endif #endif
return(addr); return addr;
} else } else
return(0); return 0;
} }
...@@ -425,7 +425,7 @@ hub_dmamap_alloc( vertex_hdl_t dev, /* set up mappings for this device */ ...@@ -425,7 +425,7 @@ hub_dmamap_alloc( vertex_hdl_t dev, /* set up mappings for this device */
if (flags & XTALK_FIXED) if (flags & XTALK_FIXED)
dmamap->hdma_flags |= HUB_DMAMAP_IS_FIXED; dmamap->hdma_flags |= HUB_DMAMAP_IS_FIXED;
return(dmamap); return dmamap;
} }
/* /*
...@@ -467,7 +467,7 @@ hub_dmamap_addr( hub_dmamap_t dmamap, /* use these mapping resources */ ...@@ -467,7 +467,7 @@ hub_dmamap_addr( hub_dmamap_t dmamap, /* use these mapping resources */
} }
/* There isn't actually any DMA mapping hardware on the hub. */ /* There isn't actually any DMA mapping hardware on the hub. */
return( (PHYS_TO_DMA(paddr)) ); return (PHYS_TO_DMA(paddr));
} }
/* /*
...@@ -497,7 +497,7 @@ hub_dmamap_list(hub_dmamap_t hub_dmamap, /* use these mapping resources */ ...@@ -497,7 +497,7 @@ hub_dmamap_list(hub_dmamap_t hub_dmamap, /* use these mapping resources */
} }
/* There isn't actually any DMA mapping hardware on the hub. */ /* There isn't actually any DMA mapping hardware on the hub. */
return(palenlist); return palenlist;
} }
/* /*
...@@ -532,7 +532,7 @@ hub_dmatrans_addr( vertex_hdl_t dev, /* translate for this device */ ...@@ -532,7 +532,7 @@ hub_dmatrans_addr( vertex_hdl_t dev, /* translate for this device */
size_t byte_count, /* length */ size_t byte_count, /* length */
unsigned flags) /* defined in dma.h */ unsigned flags) /* defined in dma.h */
{ {
return( (PHYS_TO_DMA(paddr)) ); return (PHYS_TO_DMA(paddr));
} }
/* /*
...@@ -549,7 +549,7 @@ hub_dmatrans_list( vertex_hdl_t dev, /* translate for this device */ ...@@ -549,7 +549,7 @@ hub_dmatrans_list( vertex_hdl_t dev, /* translate for this device */
{ {
BUG(); BUG();
/* no translation needed */ /* no translation needed */
return(palenlist); return palenlist;
} }
/*ARGSUSED*/ /*ARGSUSED*/
...@@ -609,8 +609,8 @@ hub_check_is_widget0(void *addr) ...@@ -609,8 +609,8 @@ hub_check_is_widget0(void *addr)
{ {
nasid_t nasid = NASID_GET(addr); nasid_t nasid = NASID_GET(addr);
if (((__psunsigned_t)addr >= RAW_NODE_SWIN_BASE(nasid, 0)) && if (((unsigned long)addr >= RAW_NODE_SWIN_BASE(nasid, 0)) &&
((__psunsigned_t)addr < RAW_NODE_SWIN_BASE(nasid, 1))) ((unsigned long)addr < RAW_NODE_SWIN_BASE(nasid, 1)))
return 1; return 1;
return 0; return 0;
} }
...@@ -626,8 +626,8 @@ hub_check_window_equiv(void *addra, void *addrb) ...@@ -626,8 +626,8 @@ hub_check_window_equiv(void *addra, void *addrb)
return 1; return 1;
/* XXX - Assume this is really a small window address */ /* XXX - Assume this is really a small window address */
if (WIDGETID_GET((__psunsigned_t)addra) == if (WIDGETID_GET((unsigned long)addra) ==
WIDGETID_GET((__psunsigned_t)addrb)) WIDGETID_GET((unsigned long)addrb))
return 1; return 1;
return 0; return 0;
......
...@@ -138,6 +138,8 @@ sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_hand ...@@ -138,6 +138,8 @@ sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_hand
if (!(cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size)))) if (!(cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size))))
return NULL; return NULL;
memset(cpuaddr, 0x0, size);
/* physical addr. of the memory we just got */ /* physical addr. of the memory we just got */
phys_addr = __pa(cpuaddr); phys_addr = __pa(cpuaddr);
...@@ -154,7 +156,8 @@ sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_hand ...@@ -154,7 +156,8 @@ sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_hand
*dma_handle = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size, *dma_handle = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size,
PCIIO_DMA_CMD | PCIIO_DMA_A64); PCIIO_DMA_CMD | PCIIO_DMA_A64);
else { else {
dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_CMD); dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_CMD |
MINIMAL_ATE_FLAG(phys_addr, size));
if (dma_map) { if (dma_map) {
*dma_handle = (dma_addr_t) *dma_handle = (dma_addr_t)
pcibr_dmamap_addr(dma_map, phys_addr, size); pcibr_dmamap_addr(dma_map, phys_addr, size);
...@@ -247,18 +250,6 @@ sn_pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int dire ...@@ -247,18 +250,6 @@ sn_pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int dire
for (i = 0; i < nents; i++, sg++) { for (i = 0; i < nents; i++, sg++) {
phys_addr = __pa((unsigned long)page_address(sg->page) + sg->offset); phys_addr = __pa((unsigned long)page_address(sg->page) + sg->offset);
/*
* Handle the most common case: 64 bit cards. This
* call should always succeed.
*/
if (IS_PCIA64(hwdev)) {
sg->dma_address = pcibr_dmatrans_addr(vhdl, NULL, phys_addr,
sg->length,
PCIIO_DMA_DATA | PCIIO_DMA_A64);
sg->dma_length = sg->length;
continue;
}
/* /*
* Handle 32-63 bit cards via direct mapping * Handle 32-63 bit cards via direct mapping
*/ */
...@@ -385,13 +376,6 @@ sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction) ...@@ -385,13 +376,6 @@ sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
dma_addr = 0; dma_addr = 0;
phys_addr = __pa(ptr); phys_addr = __pa(ptr);
if (IS_PCIA64(hwdev)) {
/* This device supports 64 bit DMA addresses. */
dma_addr = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size,
PCIIO_DMA_DATA | PCIIO_DMA_A64);
return dma_addr;
}
/* /*
* Devices that support 32 bit to 63 bit DMA addresses get * Devices that support 32 bit to 63 bit DMA addresses get
* 32 bit DMA addresses. * 32 bit DMA addresses.
...@@ -410,7 +394,8 @@ sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction) ...@@ -410,7 +394,8 @@ sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
* let's use the PMU instead. * let's use the PMU instead.
*/ */
dma_map = NULL; dma_map = NULL;
dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_DATA); dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_DATA |
MINIMAL_ATE_FLAG(phys_addr, size));
if (!dma_map) { if (!dma_map) {
printk(KERN_ERR "pci_map_single: Unable to allocate anymore " printk(KERN_ERR "pci_map_single: Unable to allocate anymore "
......
...@@ -131,7 +131,7 @@ sgi_master_io_infr_init(void) ...@@ -131,7 +131,7 @@ sgi_master_io_infr_init(void)
klhwg_add_all_modules(hwgraph_root); klhwg_add_all_modules(hwgraph_root);
klhwg_add_all_nodes(hwgraph_root); klhwg_add_all_nodes(hwgraph_root);
for (cnode = 0; cnode < numnodes; cnode++) { for (cnode = 0; cnode < numionodes; cnode++) {
extern void per_hub_init(cnodeid_t); extern void per_hub_init(cnodeid_t);
per_hub_init(cnode); per_hub_init(cnode);
} }
......
...@@ -31,6 +31,8 @@ ...@@ -31,6 +31,8 @@
#define DBG(x...) #define DBG(x...)
#endif /* DEBUG_KLGRAPH */ #endif /* DEBUG_KLGRAPH */
extern int numionodes;
lboard_t *root_lboard[MAX_COMPACT_NODES]; lboard_t *root_lboard[MAX_COMPACT_NODES];
static int hasmetarouter; static int hasmetarouter;
...@@ -38,13 +40,13 @@ static int hasmetarouter; ...@@ -38,13 +40,13 @@ static int hasmetarouter;
char brick_types[MAX_BRICK_TYPES + 1] = "crikxdpn%#=vo^34567890123456789..."; char brick_types[MAX_BRICK_TYPES + 1] = "crikxdpn%#=vo^34567890123456789...";
lboard_t * lboard_t *
find_lboard(lboard_t *start, unsigned char brd_type) find_lboard_any(lboard_t *start, unsigned char brd_type)
{ {
/* Search all boards stored on this node. */ /* Search all boards stored on this node. */
while (start) { while (start) {
if (start->brd_type == brd_type) if (start->brd_type == brd_type)
return start; return start;
start = KLCF_NEXT(start); start = KLCF_NEXT_ANY(start);
} }
/* Didn't find it. */ /* Didn't find it. */
...@@ -52,19 +54,59 @@ find_lboard(lboard_t *start, unsigned char brd_type) ...@@ -52,19 +54,59 @@ find_lboard(lboard_t *start, unsigned char brd_type)
} }
lboard_t * lboard_t *
find_lboard_class(lboard_t *start, unsigned char brd_type) find_lboard_nasid(lboard_t *start, nasid_t nasid, unsigned char brd_type)
{ {
/* Search all boards stored on this node. */
while (start) {
if ((start->brd_type == brd_type) &&
(start->brd_nasid == nasid))
return start;
if (numionodes == numnodes)
start = KLCF_NEXT_ANY(start);
else
start = KLCF_NEXT(start);
}
/* Didn't find it. */
return (lboard_t *)NULL;
}
lboard_t *
find_lboard_class_any(lboard_t *start, unsigned char brd_type)
{
/* Search all boards stored on this node. */
while (start) { while (start) {
if (KLCLASS(start->brd_type) == KLCLASS(brd_type)) if (KLCLASS(start->brd_type) == KLCLASS(brd_type))
return start; return start;
start = KLCF_NEXT(start); start = KLCF_NEXT_ANY(start);
}
/* Didn't find it. */
return (lboard_t *)NULL;
}
lboard_t *
find_lboard_class_nasid(lboard_t *start, nasid_t nasid, unsigned char brd_type)
{
/* Search all boards stored on this node. */
while (start) {
if (KLCLASS(start->brd_type) == KLCLASS(brd_type) &&
(start->brd_nasid == nasid))
return start;
if (numionodes == numnodes)
start = KLCF_NEXT_ANY(start);
else
start = KLCF_NEXT(start);
} }
/* Didn't find it. */ /* Didn't find it. */
return (lboard_t *)NULL; return (lboard_t *)NULL;
} }
klinfo_t * klinfo_t *
find_component(lboard_t *brd, klinfo_t *kli, unsigned char struct_type) find_component(lboard_t *brd, klinfo_t *kli, unsigned char struct_type)
{ {
...@@ -116,20 +158,6 @@ find_lboard_modslot(lboard_t *start, geoid_t geoid) ...@@ -116,20 +158,6 @@ find_lboard_modslot(lboard_t *start, geoid_t geoid)
return (lboard_t *)NULL; return (lboard_t *)NULL;
} }
lboard_t *
find_lboard_module(lboard_t *start, geoid_t geoid)
{
/* Search all boards stored on this node. */
while (start) {
if (geo_cmp(start->brd_geoid, geoid))
return start;
start = KLCF_NEXT(start);
}
/* Didn't find it. */
return (lboard_t *)NULL;
}
/* /*
* Convert a NIC name to a name for use in the hardware graph. * Convert a NIC name to a name for use in the hardware graph.
*/ */
...@@ -218,7 +246,7 @@ xbow_port_io_enabled(nasid_t nasid, int link) ...@@ -218,7 +246,7 @@ xbow_port_io_enabled(nasid_t nasid, int link)
/* /*
* look for boards that might contain an xbow or xbridge * look for boards that might contain an xbow or xbridge
*/ */
brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_IOBRICK_XBOW); brd = find_lboard_nasid((lboard_t *)KL_CONFIG_INFO(nasid), nasid, KLTYPE_IOBRICK_XBOW);
if (brd == NULL) return 0; if (brd == NULL) return 0;
if ((xbow_p = (klxbow_t *)find_component(brd, NULL, KLSTRUCT_XBOW)) if ((xbow_p = (klxbow_t *)find_component(brd, NULL, KLSTRUCT_XBOW))
...@@ -285,40 +313,6 @@ board_to_path(lboard_t *brd, char *path) ...@@ -285,40 +313,6 @@ board_to_path(lboard_t *brd, char *path)
#define MHZ 1000000 #define MHZ 1000000
/* Get the canonical hardware graph name for the given pci component
* on the given io board.
*/
void
device_component_canonical_name_get(lboard_t *brd,
klinfo_t *component,
char *name)
{
slotid_t slot;
char board_name[20];
ASSERT(brd);
/* Convert the [ CLASS | TYPE ] kind of slotid
* into a string
*/
slot = brd->brd_slot;
/* Get the io board name */
if (!brd || (brd->brd_sversion < 2)) {
strcpy(name, EDGE_LBL_XWIDGET);
} else {
nic_name_convert(brd->brd_name, board_name);
}
/* Give out the canonical name of the pci device*/
sprintf(name,
"/dev/hw/"EDGE_LBL_MODULE "/%x/"EDGE_LBL_SLAB"/%d/"
EDGE_LBL_SLOT"/%s/"EDGE_LBL_PCI"/%d",
geo_module(brd->brd_geoid), geo_slab(brd->brd_geoid),
board_name, KLCF_BRIDGE_W_ID(component));
}
/* /*
* Get the serial number of the main component of a board * Get the serial number of the main component of a board
* Returns 0 if a valid serial number is found * Returns 0 if a valid serial number is found
...@@ -506,7 +500,7 @@ void ...@@ -506,7 +500,7 @@ void
format_module_id(char *buffer, moduleid_t m, int fmt) format_module_id(char *buffer, moduleid_t m, int fmt)
{ {
int rack, position; int rack, position;
char brickchar; unsigned char brickchar;
rack = MODULE_GET_RACK(m); rack = MODULE_GET_RACK(m);
ASSERT(MODULE_GET_BTYPE(m) < MAX_BRICK_TYPES); ASSERT(MODULE_GET_BTYPE(m) < MAX_BRICK_TYPES);
...@@ -560,112 +554,21 @@ format_module_id(char *buffer, moduleid_t m, int fmt) ...@@ -560,112 +554,21 @@ format_module_id(char *buffer, moduleid_t m, int fmt)
} }
/*
* Parse a module id, in either brief or long form.
* Returns < 0 on error.
* The long form does not include a brick type, so it defaults to 0 (CBrick)
*/
int
parse_module_id(char *buffer)
{
unsigned int v, rack, bay, type, form;
moduleid_t m;
char c;
if (strstr(buffer, EDGE_LBL_RACK "/") == buffer) {
form = MODULE_FORMAT_LONG;
buffer += strlen(EDGE_LBL_RACK "/");
/* A long module ID must be exactly 5 non-template chars. */
if (strlen(buffer) != strlen("/" EDGE_LBL_RPOS "/") + 5)
return -1;
}
else {
form = MODULE_FORMAT_BRIEF;
/* A brief module id must be exactly 6 characters */
if (strlen(buffer) != 6)
return -2;
}
/* The rack number must be exactly 3 digits */
if (!(isdigit(buffer[0]) && isdigit(buffer[1]) && isdigit(buffer[2])))
return -3;
rack = 0;
v = *buffer++ - '0';
if (v > RACK_CLASS_MASK(rack) >> RACK_CLASS_SHFT(rack))
return -4;
RACK_ADD_CLASS(rack, v);
v = *buffer++ - '0';
if (v > RACK_GROUP_MASK(rack) >> RACK_GROUP_SHFT(rack))
return -5;
RACK_ADD_GROUP(rack, v);
v = *buffer++ - '0';
/* rack numbers are 1-based */
if (v-1 > RACK_NUM_MASK(rack) >> RACK_NUM_SHFT(rack))
return -6;
RACK_ADD_NUM(rack, v);
if (form == MODULE_FORMAT_BRIEF) {
/* Next should be a module type character. Accept ucase or lcase. */
c = *buffer++;
if (!isalpha(c))
return -7;
/* strchr() returns a pointer into brick_types[], or NULL */
type = (unsigned int)(strchr(brick_types, tolower(c)) - brick_types);
if (type > MODULE_BTYPE_MASK >> MODULE_BTYPE_SHFT)
return -8;
}
else {
/* Hardcode the module type, and skip over the boilerplate */
type = MODULE_CBRICK;
if (strstr(buffer, "/" EDGE_LBL_RPOS "/") != buffer)
return -9;
buffer += strlen("/" EDGE_LBL_RPOS "/");
}
/* The bay number is last. Make sure it's exactly two digits */
if (!(isdigit(buffer[0]) && isdigit(buffer[1]) && !buffer[2]))
return -10;
bay = 10 * (buffer[0] - '0') + (buffer[1] - '0');
if (bay > MODULE_BPOS_MASK >> MODULE_BPOS_SHFT)
return -11;
m = RBT_TO_MODULE(rack, bay, type);
/* avoid sign extending the moduleid_t */
return (int)(unsigned short)m;
}
int int
cbrick_type_get_nasid(nasid_t nasid) cbrick_type_get_nasid(nasid_t nasid)
{ {
lboard_t *brd;
moduleid_t module; moduleid_t module;
uint type;
int t; int t;
brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA); module = iomoduleid_get(nasid);
module = geo_module(brd->brd_geoid); if (module < 0 ) {
type = (module & MODULE_BTYPE_MASK) >> MODULE_BTYPE_SHFT; return MODULE_CBRICK;
/* convert brick_type to lower case */ }
if ((type >= 'A') && (type <= 'Z')) t = MODULE_GET_BTYPE(module);
type = type - 'A' + 'a'; if ((char)t == 'o') {
return MODULE_OPUSBRICK;
/* convert to a module.h brick type */ } else {
for( t = 0; t < MAX_BRICK_TYPES; t++ ) { return MODULE_CBRICK;
if( brick_types[t] == type ) { }
return t;
}
}
return -1; return -1;
} }
@@ -124,8 +124,9 @@ klhwg_add_xbow(cnodeid_t cnode, nasid_t nasid)
 	/*REFERENCED*/
 	graph_error_t err;
-	if ((brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_IOBRICK_XBOW)) == NULL)
-		return;
+	if (!(brd = find_lboard_nasid((lboard_t *)KL_CONFIG_INFO(nasid),
+				nasid, KLTYPE_IOBRICK_XBOW)))
+		return;
 	if (KL_CONFIG_DUPLICATE_BOARD(brd))
 		return;
@@ -200,7 +201,7 @@ klhwg_add_node(vertex_hdl_t hwgraph_root, cnodeid_t cnode)
 	vertex_hdl_t cpu_dir;
 	nasid = COMPACT_TO_NASID_NODEID(cnode);
-	brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+	brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
 	ASSERT(brd);
 	/* Generate a hardware graph path for this board. */
@@ -280,7 +281,7 @@ klhwg_add_all_routers(vertex_hdl_t hwgraph_root)
 	for (cnode = 0; cnode < numnodes; cnode++) {
 		nasid = COMPACT_TO_NASID_NODEID(cnode);
-		brd = find_lboard_class((lboard_t *)KL_CONFIG_INFO(nasid),
+		brd = find_lboard_class_any((lboard_t *)KL_CONFIG_INFO(nasid),
 				KLTYPE_ROUTER);
 		if (!brd)
@@ -307,7 +308,7 @@ klhwg_add_all_routers(vertex_hdl_t hwgraph_root)
 		HWGRAPH_DEBUG((__FILE__, __FUNCTION__, __LINE__, node_vertex, NULL, "Created router path.\n"));
 		/* Find the rest of the routers stored on this node. */
-	} while ( (brd = find_lboard_class(KLCF_NEXT(brd),
+	} while ( (brd = find_lboard_class_any(KLCF_NEXT_ANY(brd),
 			KLTYPE_ROUTER)) );
 	}
@@ -414,7 +415,7 @@ klhwg_connect_routers(vertex_hdl_t hwgraph_root)
 	for (cnode = 0; cnode < numnodes; cnode++) {
 		nasid = COMPACT_TO_NASID_NODEID(cnode);
-		brd = find_lboard_class((lboard_t *)KL_CONFIG_INFO(nasid),
+		brd = find_lboard_class_any((lboard_t *)KL_CONFIG_INFO(nasid),
 				KLTYPE_ROUTER);
 		if (!brd)
@@ -428,7 +429,7 @@ klhwg_connect_routers(vertex_hdl_t hwgraph_root)
 				cnode, nasid);
 		/* Find the rest of the routers stored on this node. */
-		} while ( (brd = find_lboard_class(KLCF_NEXT(brd), KLTYPE_ROUTER)) );
+		} while ( (brd = find_lboard_class_any(KLCF_NEXT_ANY(brd), KLTYPE_ROUTER)) );
 	}
 }
@@ -452,8 +453,7 @@ klhwg_connect_hubs(vertex_hdl_t hwgraph_root)
 	for (cnode = 0; cnode < numionodes; cnode++) {
 		nasid = COMPACT_TO_NASID_NODEID(cnode);
-		brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
-		ASSERT(brd);
+		brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
 		hub = (klhub_t *)find_first_component(brd, KLSTRUCT_HUB);
 		ASSERT(hub);
@@ -511,69 +511,6 @@ klhwg_connect_hubs(vertex_hdl_t hwgraph_root)
 	}
 }
-/* Store the pci/vme disabled board information as extended administrative
- * hints which can later be used by the drivers using the device/driver
- * admin interface.
- */
-static void __init
-klhwg_device_disable_hints_add(void)
-{
-	cnodeid_t cnode;	/* node we are looking at */
-	nasid_t nasid;		/* nasid of the node */
-	lboard_t *board;	/* board we are looking at */
-	int comp_index;		/* component index */
-	klinfo_t *component;	/* component in the board we are
-				 * looking at
-				 */
-	char device_name[MAXDEVNAME];
-	for(cnode = 0; cnode < numnodes; cnode++) {
-		nasid = COMPACT_TO_NASID_NODEID(cnode);
-		board = (lboard_t *)KL_CONFIG_INFO(nasid);
-		/* Check out all the board info stored on a node */
-		while(board) {
-			/* No need to look at duplicate boards or non-io
-			 * boards
-			 */
-			if (KL_CONFIG_DUPLICATE_BOARD(board) ||
-			    KLCLASS(board->brd_type) != KLCLASS_IO) {
-				board = KLCF_NEXT(board);
-				continue;
-			}
-			/* Check out all the components of a board */
-			for (comp_index = 0;
-			     comp_index < KLCF_NUM_COMPS(board);
-			     comp_index++) {
-				component = KLCF_COMP(board,comp_index);
-				/* If the component is enabled move on to
-				 * the next component
-				 */
-				if (KLCONFIG_INFO_ENABLED(component))
-					continue;
-				/* NOTE : Since the prom only supports
-				 * the disabling of pci devices the following
-				 * piece of code makes sense.
-				 * Make sure that this assumption is valid
-				 */
-				/* This component is disabled. Store this
-				 * hint in the extended device admin table
-				 */
-				/* Get the canonical name of the pci device */
-				device_component_canonical_name_get(board,
-						component,
-						device_name);
-#ifdef DEBUG
-				printf("%s DISABLED\n",device_name);
-#endif
-			}
-			/* go to the next board info stored on this
-			 * node
-			 */
-			board = KLCF_NEXT(board);
-		}
-	}
-}
 void __init
 klhwg_add_all_modules(vertex_hdl_t hwgraph_root)
 {
@@ -637,10 +574,4 @@ klhwg_add_all_nodes(vertex_hdl_t hwgraph_root)
 	klhwg_add_all_routers(hwgraph_root);
 	klhwg_connect_routers(hwgraph_root);
 	klhwg_connect_hubs(hwgraph_root);
-	/* Go through the entire system's klconfig
-	 * to figure out which pci components have been disabled
-	 */
-	klhwg_device_disable_hints_add();
 }
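Several hunks in this file swap find_lboard()/find_lboard_class() for the new *_any/*_nasid variants, which also constrain the lookup to boards owned by a particular nasid. The sketch below shows the general shape of such a filtered list walk; the struct layout and field names are illustrative assumptions, not the real lboard_t/klconfig definitions.

/* Illustrative sketch only: not the actual klconfig board list API. */
#include <stddef.h>

struct lboard {
	struct lboard *brd_next;	/* next board record on this node */
	int brd_type;			/* board type, e.g. router or IO brick */
	int brd_nasid;			/* nasid that owns this board */
};

/* Return the first board of 'type' owned by 'nasid', or NULL if none. */
static struct lboard *find_board_nasid(struct lboard *start, int nasid, int type)
{
	struct lboard *brd;

	for (brd = start; brd != NULL; brd = brd->brd_next)
		if (brd->brd_type == type && brd->brd_nasid == nasid)
			return brd;
	return NULL;
}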
@@ -46,8 +46,9 @@ xswitch_vertex_init(vertex_hdl_t xswitch)
 	int rc;
 	xvolinfo = kmalloc(sizeof(struct xswitch_vol_s), GFP_KERNEL);
-	if (xvolinfo <= 0 ) {
-		printk("xswitch_vertex_init: out of memory\n");
+	if (!xvolinfo) {
+		printk(KERN_WARNING "xswitch_vertex_init(): Unable to "
+			"allocate memory\n");
 		return;
 	}
 	memset(xvolinfo, 0, sizeof(struct xswitch_vol_s));
@@ -239,30 +240,29 @@ assign_widgets_to_volunteers(vertex_hdl_t xswitch, vertex_hdl_t hubv)
 static void
 early_probe_for_widget(vertex_hdl_t hubv, xwidget_hwid_t hwid)
 {
-	hubreg_t llp_csr_reg;
 	nasid_t nasid;
 	hubinfo_t hubinfo;
+	hubreg_t llp_csr_reg;
+	widgetreg_t widget_id;
+	int result = 0;
+	hwid->part_num = XWIDGET_PART_NUM_NONE;
+	hwid->rev_num = XWIDGET_REV_NUM_NONE;
+	hwid->mfg_num = XWIDGET_MFG_NUM_NONE;
 	hubinfo_get(hubv, &hubinfo);
 	nasid = hubinfo->h_nasid;
 	llp_csr_reg = REMOTE_HUB_L(nasid, IIO_LLP_CSR);
-	/*
-	 * If link is up, read the widget's part number.
-	 * A direct connect widget must respond to widgetnum=0.
-	 */
-	if (llp_csr_reg & IIO_LLP_CSR_IS_UP) {
-		/* TBD: Put hub into "indirect" mode */
-		/*
-		 * We're able to read from a widget because our hub's
-		 * WIDGET_ID was set up earlier.
-		 */
-		widgetreg_t widget_id = *(volatile widgetreg_t *)
-			(RAW_NODE_SWIN_BASE(nasid, 0x0) + WIDGET_ID);
-		DBG("early_probe_for_widget: Hub Vertex 0x%p is UP widget_id = 0x%x Register 0x%p\n", hubv, widget_id,
-			(volatile widgetreg_t *)(RAW_NODE_SWIN_BASE(nasid, 0x0) + WIDGET_ID) );
+	if (!(llp_csr_reg & IIO_LLP_CSR_IS_UP))
+		return;
+	/* Read the Cross-Talk Widget Id on the other end */
+	result = snia_badaddr_val((volatile void *)
+			(RAW_NODE_SWIN_BASE(nasid, 0x0) + WIDGET_ID),
+			4, (void *) &widget_id);
+	if (result == 0) { /* Found something connected */
 		hwid->part_num = XWIDGET_PART_NUM(widget_id);
 		hwid->rev_num = XWIDGET_REV_NUM(widget_id);
 		hwid->mfg_num = XWIDGET_MFG_NUM(widget_id);
@@ -344,13 +344,12 @@ io_xswitch_widget_init(vertex_hdl_t xswitchv,
 		return;
 	}
-	board = find_lboard_class(
-			(lboard_t *)KL_CONFIG_INFO(nasid),
-			KLCLASS_IOBRICK);
+	board = find_lboard_class_nasid( (lboard_t *)KL_CONFIG_INFO(nasid),
+			nasid, KLCLASS_IOBRICK);
 	if (!board && NODEPDA(cnode)->xbow_peer != INVALID_NASID) {
-		board = find_lboard_class(
+		board = find_lboard_class_nasid(
 				(lboard_t *)KL_CONFIG_INFO( NODEPDA(cnode)->xbow_peer),
-				KLCLASS_IOBRICK);
+				NODEPDA(cnode)->xbow_peer, KLCLASS_IOBRICK);
 	}
 	if (board) {
@@ -365,7 +364,7 @@ io_xswitch_widget_init(vertex_hdl_t xswitchv,
 	{
 		lboard_t *brd;
-		brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+		brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
 		if ( brd != (lboard_t *)0 ) {
 			board->brd_geoid = brd->brd_geoid;
 		}
@@ -584,10 +583,9 @@ io_init_node(cnodeid_t cnodeid)
 	} else {
 		void *bridge;
-		extern uint64_t pcireg_control_get(void *);
 		bridge = (void *)NODE_SWIN_BASE(COMPACT_TO_NASID_NODEID(cnodeid), 0);
-		npdap->basew_id = pcireg_control_get(bridge) & WIDGET_WIDGET_ID;
+		npdap->basew_id = pcireg_bridge_control_get(bridge) & WIDGET_WIDGET_ID;
 		printk(" ****io_init_node: Unknown Widget Part Number 0x%x Widget ID 0x%x attached to Hubv 0x%p ****\n", widget_partnum, npdap->basew_id, (void *)hubv);
 		return;
@@ -764,7 +762,7 @@ io_brick_map_widget(int brick_type, int widget_num)
 	/* Look for brick prefix in table */
 	for (i = 0; i < num_bricks; i++) {
 		if (brick_type == io_brick_tab[i].ibm_type)
-			return(io_brick_tab[i].ibm_map_wid[widget_num]);
+			return io_brick_tab[i].ibm_map_wid[widget_num];
 	}
 	return 0;
......
@@ -139,7 +139,7 @@ module_probe_snum(module_t *m, nasid_t host_nasid, nasid_t nasid)
 	/*
 	 * record brick serial number
 	 */
-	board = find_lboard((lboard_t *) KL_CONFIG_INFO(host_nasid), KLTYPE_SNIA);
+	board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(host_nasid), host_nasid, KLTYPE_SNIA);
 	if (! board || KL_CONFIG_DUPLICATE_BOARD(board))
 	{
@@ -152,8 +152,8 @@ module_probe_snum(module_t *m, nasid_t host_nasid, nasid_t nasid)
 		m->snum_valid = 1;
 	}
-	board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid),
-			KLTYPE_IOBRICK_XBOW);
+	board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid),
+			nasid, KLTYPE_IOBRICK_XBOW);
 	if (! board || KL_CONFIG_DUPLICATE_BOARD(board))
 		return 0;
@@ -185,6 +185,7 @@ io_module_init(void)
 	nasid_t nasid;
 	int nserial;
 	module_t *m;
+	extern int numionodes;
 	DPRINTF("*******module_init\n");
@@ -196,8 +197,7 @@ io_module_init(void)
 	 */
 	for (node = 0; node < numnodes; node++) {
 		nasid = COMPACT_TO_NASID_NODEID(node);
-		board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+		board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid), nasid, KLTYPE_SNIA);
 		ASSERT(board);
 		HWGRAPH_DEBUG((__FILE__, __FUNCTION__, __LINE__, NULL, NULL, "Found Shub lboard 0x%lx nasid 0x%x cnode 0x%x \n", (unsigned long)board, (int)nasid, (int)node));
@@ -206,4 +206,31 @@ io_module_init(void)
 		if (! m->snum_valid && module_probe_snum(m, nasid, nasid))
 			nserial++;
 	}
+	/*
+	 * Second scan, look for headless/memless board hosted by compute nodes.
+	 */
+	for (node = numnodes; node < numionodes; node++) {
+		nasid_t nasid;
+		char serial_number[16];
+		nasid = COMPACT_TO_NASID_NODEID(node);
+		board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid),
+				nasid, KLTYPE_SNIA);
+		ASSERT(board);
+		HWGRAPH_DEBUG((__FILE__, __FUNCTION__, __LINE__, NULL, NULL, "Found headless/memless lboard 0x%lx node %d nasid %d cnode %d\n", (unsigned long)board, node, (int)nasid, (int)node));
+		m = module_add_node(board->brd_geoid, node);
+		/*
+		 * Get and initialize the serial number.
+		 */
+		board_serial_number_get( board, serial_number );
+		if( serial_number[0] != '\0' ) {
+			encode_str_serial( serial_number, m->snum.snum_str );
+			m->snum_valid = 1;
+			nserial++;
+		}
+	}
 }
@@ -34,22 +34,21 @@ void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t);
  * the 32bit word that contains the "offset" byte.
  */
 cfg_p
-pcibr_func_config_addr(bridge_t *bridge, pciio_bus_t bus, pciio_slot_t slot,
+pcibr_func_config_addr(pcibr_soft_t soft, pciio_bus_t bus, pciio_slot_t slot,
 	pciio_function_t func, int offset)
 {
 	/*
 	 * Type 1 config space
 	 */
 	if (bus > 0) {
-		bridge->b_pci_cfg = ((bus << 16) | (slot << 11));
-		return &bridge->b_type1_cfg.f[func].l[(offset)];
+		pcireg_type1_cntr_set(soft, ((bus << 16) | (slot << 11)));
+		return (pcireg_type1_cfg_addr(soft, func, offset));
 	}
 	/*
 	 * Type 0 config space
 	 */
-	slot++;
-	return &bridge->b_type0_cfg_dev[slot].f[func].l[offset];
+	return (pcireg_type0_cfg_addr(soft, slot, func, offset));
 }
 /*
@@ -58,59 +57,21 @@ pcibr_func_config_addr(bridge_t *bridge, pciio_bus_t bus, pciio_slot_t slot,
  * 32bit word that contains the "offset" byte.
  */
 cfg_p
-pcibr_slot_config_addr(bridge_t *bridge, pciio_slot_t slot, int offset)
+pcibr_slot_config_addr(pcibr_soft_t soft, pciio_slot_t slot, int offset)
 {
-	return pcibr_func_config_addr(bridge, 0, slot, 0, offset);
+	return pcibr_func_config_addr(soft, 0, slot, 0, offset);
 }
-/*
- * Return config space data for given slot / offset
- */
-unsigned
-pcibr_slot_config_get(bridge_t *bridge, pciio_slot_t slot, int offset)
-{
-	cfg_p cfg_base;
-	cfg_base = pcibr_slot_config_addr(bridge, slot, 0);
-	return (do_pcibr_config_get(cfg_base, offset, sizeof(unsigned)));
-}
-/*
- * Return config space data for given slot / func / offset
- */
-unsigned
-pcibr_func_config_get(bridge_t *bridge, pciio_slot_t slot,
-	pciio_function_t func, int offset)
-{
-	cfg_p cfg_base;
-	cfg_base = pcibr_func_config_addr(bridge, 0, slot, func, 0);
-	return (do_pcibr_config_get(cfg_base, offset, sizeof(unsigned)));
-}
-/*
- * Set config space data for given slot / offset
- */
-void
-pcibr_slot_config_set(bridge_t *bridge, pciio_slot_t slot,
-	int offset, unsigned val)
-{
-	cfg_p cfg_base;
-	cfg_base = pcibr_slot_config_addr(bridge, slot, 0);
-	do_pcibr_config_set(cfg_base, offset, sizeof(unsigned), val);
-}
 /*
  * Set config space data for given slot / func / offset
  */
 void
-pcibr_func_config_set(bridge_t *bridge, pciio_slot_t slot,
+pcibr_func_config_set(pcibr_soft_t soft, pciio_slot_t slot,
 	pciio_function_t func, int offset, unsigned val)
 {
 	cfg_p cfg_base;
-	cfg_base = pcibr_func_config_addr(bridge, 0, slot, func, 0);
+	cfg_base = pcibr_func_config_addr(soft, 0, slot, func, 0);
 	do_pcibr_config_set(cfg_base, offset, sizeof(unsigned), val);
 }
@@ -124,8 +85,6 @@ pcibr_config_addr(vertex_hdl_t conn,
 	pciio_bus_t pciio_bus;
 	pciio_slot_t pciio_slot;
 	pciio_function_t pciio_func;
-	pcibr_soft_t pcibr_soft;
-	bridge_t *bridge;
 	cfg_p cfgbase = (cfg_p)0;
 	pciio_info_t pciio_info;
@@ -164,11 +123,7 @@ pcibr_config_addr(vertex_hdl_t conn,
 		pciio_func = PCI_TYPE1_FUNC(reg);
 	}
-	pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast;
-	bridge = pcibr_soft->bs_base;
-	cfgbase = pcibr_func_config_addr(bridge,
+	cfgbase = pcibr_func_config_addr((pcibr_soft_t) pcibr_info->f_mfast,
 			pciio_bus, pciio_slot, pciio_func, 0);
 	return cfgbase;
......
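In pcibr_func_config_addr() above, the bus > 0 path sets up a type 1 configuration cycle by packing ((bus << 16) | (slot << 11)) into the bridge's type 1 address register before touching the per-function config window. That packing follows the conventional PCI type 1 configuration-address layout, which the standalone helper below reproduces purely for illustration; it is not the pcireg_* interface introduced by this patch.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: classic PCI type 1 configuration address layout,
 * bus [23:16], device [15:11], function [10:8], register [7:2].
 */
static uint32_t pci_type1_addr(unsigned bus, unsigned dev,
			       unsigned func, unsigned reg)
{
	return ((uint32_t)(bus  & 0xff) << 16) |
	       ((uint32_t)(dev  & 0x1f) << 11) |
	       ((uint32_t)(func & 0x07) <<  8) |
	       ((uint32_t)(reg  & 0xfc));
}

int main(void)
{
	/* bus 1, device 2, function 0, register 0x10 -> 0x00011010 */
	printf("0x%08x\n", (unsigned)pci_type1_addr(1, 2, 0, 0x10));
	return 0;
}

For example, pci_type1_addr(1, 2, 0, 0x10) yields 0x00011010, i.e. bus 1, device 2, function 0, register 0x10.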
@@ -33,8 +33,11 @@ pcibr_hints_get(vertex_hdl_t xconn_vhdl, int alloc)
 	if (alloc && (rv != GRAPH_SUCCESS)) {
 		hint = kmalloc(sizeof (*(hint)), GFP_KERNEL);
-		if ( !hint )
+		if ( !hint ) {
+			printk(KERN_WARNING "pcibr_hints_get(): unable to allocate "
+				"memory\n");
 			goto abnormal_exit;
+		}
 		memset(hint, 0, sizeof (*(hint)));
 		hint->rrb_alloc_funct = NULL;
@@ -57,7 +60,7 @@ pcibr_hints_get(vertex_hdl_t xconn_vhdl, int alloc)
 abnormal_exit:
 	kfree(hint);
-	return(NULL);
+	return NULL;
 }
......
@@ -703,30 +703,6 @@ pciio_info_pops_get(pciio_info_t pciio_info)
 	return (pciio_info->c_pops);
 }
-int
-pciio_businfo_multi_master_get(pciio_businfo_t businfo)
-{
-	return businfo->bi_multi_master;
-}
-pciio_asic_type_t
-pciio_businfo_asic_type_get(pciio_businfo_t businfo)
-{
-	return businfo->bi_asic_type;
-}
-pciio_bus_type_t
-pciio_businfo_bus_type_get(pciio_businfo_t businfo)
-{
-	return businfo->bi_bus_type;
-}
-pciio_bus_speed_t
-pciio_businfo_bus_speed_get(pciio_businfo_t businfo)
-{
-	return businfo->bi_bus_speed;
-}
 /* =====================================================================
  *    GENERIC PCI INITIALIZATION FUNCTIONS
  */
@@ -792,9 +768,12 @@ pciio_device_info_new(
 	pciio_info = kmalloc(sizeof (*(pciio_info)), GFP_KERNEL);
 	if ( pciio_info )
 		memset(pciio_info, 0, sizeof (*(pciio_info)));
+	else {
+		printk(KERN_WARNING "pciio_device_info_new(): Unable to "
+			"allocate memory\n");
+		return NULL;
+	}
 	}
-	ASSERT(pciio_info != NULL);
 	pciio_info->c_slot = slot;
 	pciio_info->c_func = func;
 	pciio_info->c_vendor = vendor_id;
@@ -859,13 +838,8 @@ pciio_device_info_unregister(vertex_hdl_t connectpt,
 			pciio_info->c_slot,
 			pciio_info->c_func);
-	hwgraph_edge_remove(connectpt,name,&pconn);
 	pciio_info_set(pconn,0);
-	/* Remove the link to our pci provider */
-	hwgraph_edge_remove(pconn, EDGE_LBL_MASTER, NULL);
 	hwgraph_vertex_unref(pconn);
 	hwgraph_vertex_destroy(pconn);
@@ -1036,12 +1010,3 @@ pciio_info_type1_get(pciio_info_t pci_info)
 {
 	return (pci_info->c_type1);
 }
-pciio_businfo_t
-pciio_businfo_get(vertex_hdl_t conn)
-{
-	pciio_info_t info;
-	info = pciio_info_get(conn);
-	return DEV_FUNC(conn, businfo_get)(conn);
-}
@@ -13,6 +13,7 @@
 #include <asm/io.h>
 #include <asm/irq.h>
 #include <asm/smp.h>
+#include <asm/delay.h>
 #include <asm/sn/sgi.h>
 #include <asm/sn/io.h>
 #include <asm/sn/iograph.h>
......
@@ -9,11 +9,15 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/delay.h>
+#include <asm/sn/sgi.h>
 #include <asm/sn/sn2/sn_private.h>
 #include <asm/sn/iograph.h>
 #include <asm/sn/simulator.h>
 #include <asm/sn/hcl.h>
 #include <asm/sn/hcl_util.h>
+#include <asm/sn/pci/pcibr_private.h>
 /* #define DEBUG 1 */
 /* #define XBOW_DEBUG 1 */
......
@@ -30,9 +30,6 @@
  * completely disappear.
  */
-#define NEW(ptr)	(ptr = kmalloc(sizeof (*(ptr)), GFP_KERNEL))
-#define DEL(ptr)	(kfree(ptr))
 char widget_info_fingerprint[] = "widget_info";
 /* =====================================================================
@@ -855,7 +852,9 @@ xwidget_register(xwidget_hwid_t hwid, /* widget's hardware ID */
 	char *s,devnm[MAXDEVNAME];
 	/* Allocate widget_info and associate it with widget vertex */
-	NEW(widget_info);
+	widget_info = kmalloc(sizeof(*widget_info), GFP_KERNEL);
+	if (!widget_info)
+		return - ENOMEM;
 	/* Initialize widget_info */
 	widget_info->w_vertex = widget;
@@ -898,16 +897,13 @@ xwidget_unregister(vertex_hdl_t widget)
 	/* Make sure that we have valid widget information initialized */
 	if (!(widget_info = xwidget_info_get(widget)))
-		return(1);
+		return 1;
 	hwid = &(widget_info->w_hwid);
-	/* Clean out the xwidget information */
-	(void)kfree(widget_info->w_name);
-	memset((void *)widget_info, 0, sizeof(widget_info));
-	DEL(widget_info);
-	return(0);
+	kfree(widget_info->w_name);
+	kfree(widget_info);
+	return 0;
 }
 void
......
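The NEW()/DEL() macros removed in this file wrapped an unchecked kmalloc()/kfree(); the replacement code above allocates explicitly and returns an error when the allocation fails. A minimal kernel-style sketch of that pattern follows; the struct and function names are hypothetical and not part of this patch.

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>

/* Hypothetical state object; not a structure from this patch. */
struct widget_state {
	int w_id;
	char *w_name;
};

static int widget_state_create(struct widget_state **out, int id)
{
	struct widget_state *ws;

	ws = kmalloc(sizeof(*ws), GFP_KERNEL);
	if (!ws)
		return -ENOMEM;		/* report the failure instead of oopsing later */
	memset(ws, 0, sizeof(*ws));
	ws->w_id = id;
	*out = ws;
	return 0;
}

static void widget_state_destroy(struct widget_state *ws)
{
	kfree(ws->w_name);		/* kfree(NULL) is a no-op */
	kfree(ws);
}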
@@ -8,6 +8,7 @@
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <asm/errno.h>
 #include <asm/sn/sgi.h>
 #include <asm/sn/driver.h>
 #include <asm/sn/iograph.h>
@@ -18,8 +19,6 @@
 #include <asm/sn/xtalk/xwidget.h>
 #include <asm/sn/xtalk/xtalk_private.h>
-#define	NEW(ptr)	(ptr = kmalloc(sizeof (*(ptr)), GFP_KERNEL))
-#define	DEL(ptr)	(kfree(ptr))
 /*
  * This file provides generic support for Crosstalk
@@ -118,7 +117,12 @@ xswitch_info_new(vertex_hdl_t xwidget)
 	if (xswitch_info == NULL) {
 		int port;
-		NEW(xswitch_info);
+		xswitch_info = kmalloc(sizeof(*xswitch_info), GFP_KERNEL);
+		if (!xswitch_info) {
+			printk(KERN_WARNING "xswitch_info_new(): Unable to "
+				"allocate memory\n");
+			return NULL;
+		}
 		xswitch_info->census = 0;
 		for (port = 0; port <= XSWITCH_CENSUS_PORT_MAX; port++) {
 			xswitch_info_vhdl_set(xswitch_info, port,
......
@@ -7,6 +7,7 @@
  */
 #include <linux/config.h>
+#include <asm/sn/sgi.h>
 #include <asm/sn/nodepda.h>
 #include <asm/sn/addrs.h>
 #include <asm/sn/arch.h>
@@ -14,6 +15,7 @@
 #include <asm/sn/pda.h>
 #include <asm/sn/sn2/shubio.h>
 #include <asm/nodedata.h>
+#include <asm/delay.h>
 #include <linux/bootmem.h>
 #include <linux/string.h>
......
@@ -12,6 +12,7 @@
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/timer.h>
+#include <asm/sn/sgi.h>
 #include <asm/mca.h>
 #include <asm/sal.h>
 #include <asm/sn/sn_sal.h>
......
@@ -8,6 +8,7 @@
  * Copyright (c) 2000-2003 Silicon Graphics, Inc. All rights reserved.
  */
+#include <asm/sn/sgi.h>
 #include <asm/sn/sn_sal.h>
 /**
......
@@ -23,6 +23,10 @@
 #undef __sn_readw
 #undef __sn_readl
 #undef __sn_readq
+#undef __sn_readb_relaxed
+#undef __sn_readw_relaxed
+#undef __sn_readl_relaxed
+#undef __sn_readq_relaxed
 unsigned int
 __sn_inb (unsigned long port)
@@ -84,4 +88,28 @@ __sn_readq (void *addr)
 	return ___sn_readq (addr);
 }
+unsigned char
+__sn_readb_relaxed (void *addr)
+{
+	return ___sn_readb_relaxed (addr);
+}
+unsigned short
+__sn_readw_relaxed (void *addr)
+{
+	return ___sn_readw_relaxed (addr);
+}
+unsigned int
+__sn_readl_relaxed (void *addr)
+{
+	return ___sn_readl_relaxed (addr);
+}
+unsigned long
+__sn_readq_relaxed (void *addr)
+{
+	return ___sn_readq_relaxed (addr);
+}
 #endif
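The __sn_read*_relaxed() wrappers added above back the platform's read*_relaxed() accessors. As a rough usage sketch on a platform that provides the relaxed variants, a driver might poll a status register with readl_relaxed() where the stronger ordering of readl() is not required, and keep readl() for reads that must be fully ordered. The device, register offsets, and bit mask below are hypothetical and not taken from this patch.

#include <asm/io.h>

#define MYDEV_STATUS	0x00	/* hypothetical status register offset */
#define MYDEV_DATA	0x04	/* hypothetical data register offset */
#define MYDEV_READY	0x01	/* hypothetical "ready" bit */

/* Poll a status bit; relaxed ordering is enough for a simple busy-wait. */
static int mydev_wait_ready(void *mmio)
{
	int i;

	for (i = 0; i < 1000; i++)
		if (readl_relaxed((char *)mmio + MYDEV_STATUS) & MYDEV_READY)
			return 0;
	return -1;
}

/* Use the fully ordered accessor where the stronger semantics are wanted. */
static unsigned int mydev_read_data(void *mmio)
{
	return readl((char *)mmio + MYDEV_DATA);
}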
@@ -14,6 +14,7 @@
 #include <linux/proc_fs.h>
 #include <asm/system.h>
 #include <asm/io.h>
+#include <asm/sn/sn2/addrs.h>
 #include <asm/sn/simulator.h>
 /* to lookup nasids */
......
@@ -21,6 +21,7 @@
 #include <asm/processor.h>
 #include <asm/irq.h>
+#include <asm/sn/sgi.h>
 #include <asm/sal.h>
 #include <asm/system.h>
 #include <asm/delay.h>
......
@@ -10,6 +10,7 @@
 #ifdef CONFIG_PROC_FS
 #include <linux/proc_fs.h>
+#include <asm/sn/sgi.h>
 #include <asm/sn/sn_sal.h>
......
@@ -181,7 +181,7 @@ config CPU_FREQ_TABLE
 config PPC601_SYNC_FIX
 	bool "Workarounds for PPC601 bugs"
-	depends on 6xx
+	depends on 6xx && (PPC_PREP || PPC_PMAC)
 	help
 	  Some versions of the PPC601 (the first PowerPC chip) have bugs which
 	  mean that extra synchronization instructions are required near
@@ -583,11 +583,6 @@ config PPC_CHRP
 	depends on PPC_MULTIPLATFORM
 	default y
-config PPC_GEN550
-	bool
-	depends on SANDPOINT
-	default y
 config PPC_PMAC
 	bool
 	depends on PPC_MULTIPLATFORM
@@ -603,6 +598,11 @@ config PPC_OF
 	depends on PPC_PMAC || PPC_CHRP
 	default y
+config PPC_GEN550
+	bool
+	depends on SANDPOINT || MCPN765
+	default y
 config FORCE
 	bool
 	depends on 6xx && (PCORE || POWERPMC250)
@@ -1404,7 +1404,7 @@ config BOOTX_TEXT
 config SERIAL_TEXT_DEBUG
 	bool "Support for early boot texts over serial port"
-	depends on 4xx || GT64260 || LOPEC || MCPN765 || PPLUS || PRPMC800 || SANDPOINT
+	depends on 4xx || GT64260 || LOPEC || PPLUS || PRPMC800 || PPC_GEN550
 config OCP
 	bool
......
@@ -122,4 +122,4 @@ loop:
 	.quad 0
 loop2:
 	.quad 0
-	.previous
\ No newline at end of file
+	.previous