Commit 3814ee68 authored by Linus Torvalds

Import 2.3.43pre2

parent 27f3c8bb
...@@ -2100,6 +2100,15 @@ D: Optics Storage 8000AT cdrom driver
S: Cliffwood, New Jersey 07721
S: USA
N: Manfred Spraul
E: manfreds@colorfullife.com
W: http://colorfullife.com/~manfreds
D: major SysV IPC changes
D: various bug fixes (mostly SMP code)
S: Warburgring 67
S: 66424 Homburg
S: Germany
N: Henrik Storner
E: storner@image.dk
W: http://www.image.dk/~storner/
......
This diff is collapsed.
-		Few Notes About The PCI Subsystem
-	by Martin Mares <mj@suse.cz> on 21-Nov-1999
		How To Write Linux PCI Drivers
		or
	"What should you avoid when writing PCI drivers"
	by Martin Mares <mj@suse.cz> on 07-Feb-2000
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The world of PCI is vast and it's full of (mostly unpleasant) surprises.
Different PCI devices have different requirements and different bugs --
because of this, the PCI support layer in Linux kernel is not as trivial
as one would wish. This short pamphlet tries to help all potential driver
authors to find their way through the deep forests of PCI handling.
0. Structure of PCI drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A typical PCI device driver needs to perform the following actions:
-
-	1. Find PCI devices it's able to handle
-	2. Enable them
-	3. Access their configuration space
-	4. Discover addresses and IRQ numbers
-
-1. How to find PCI devices
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-In case your driver wants to search for all devices with given vendor/device
-ID, it should use:

There exist two kinds of PCI drivers: new-style ones (which leave most of
probing for devices to the PCI layer and support online insertion and removal
of devices [thus supporting PCI, hot-pluggable PCI and CardBus in a single
driver]) and old-style ones which just do all the probing themselves. Unless
you have a very good reason to do so, please don't use the old way of probing
in any new code. After the driver finds the devices it wishes to operate
on (either the old or the new way), it needs to perform the following steps:

	Enable the device
	Access device configuration space
	Discover resources (addresses and IRQ numbers) provided by the device
	Allocate these resources
	Communicate with the device
Most of these topics are covered by the following sections; for the rest,
look at <linux/pci.h>, which is hopefully well commented.
If the PCI subsystem is not configured (CONFIG_PCI is not set), most of
the functions described below are defined as inline functions which are
either completely empty or just return an appropriate error code, to
avoid lots of ifdefs in the drivers.
1. New-style drivers
~~~~~~~~~~~~~~~~~~~~
The new-style drivers just call pci_register_driver during their initialization
with a pointer to a structure describing the driver (struct pci_driver) which
contains:
name Name of the driver
id_table Pointer to table of device ID's the driver is
interested in
probe Pointer to a probing function which gets called (during
execution of pci_register_driver for already existing
devices or later if a new device gets inserted) for all
PCI devices which match the ID table and are not handled
by the other drivers yet. This function gets passed a pointer
to the pci_dev structure representing the device and also
which entry in the ID table the device matched. It returns
zero when the driver has accepted the device or an error
code (negative number) otherwise. This function always gets
called from process context, so it can sleep.
remove Pointer to a function which gets called whenever a device
being handled by this driver is removed (either during
deregistration of the driver or when it's manually pulled
out of a hot-pluggable slot). This function can be called
from interrupt context.
suspend, Power management hooks (currently used only for CardBus
resume cards) -- called when the device goes to sleep or is
resumed.
The ID table is an array of struct pci_device_id ending with an all-zero entry.
Each entry consists of:
vendor, device Vendor and device ID to match (or PCI_ANY_ID)
subvendor, Subsystem vendor and device ID to match (or PCI_ANY_ID)
subdevice
class, Device class to match. The class_mask tells which bits
class_mask of the class are honored during the comparison.
driver_data Data private to the driver.
When the driver exits, it just calls pci_deregister_driver() and the PCI layer
automatically calls the remove hook for all devices handled by the driver.
Please mark the initialization and cleanup functions where appropriate
(the corresponding macros are defined in <linux/init.h>):
__init Initialization code. Thrown away after the driver
initializes.
__exit Exit code. Ignored for non-modular drivers.
__devinit Device initialization code. Identical to __init if
the kernel is not compiled with CONFIG_HOTPLUG, normal
function otherwise.
__devexit The same for __exit.
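Putting the pieces of this section together, a new-style driver might be
sketched as below. The "foo" names and the vendor/device IDs are made-up
placeholders, not a real device; only pci_register_driver(),
pci_deregister_driver(), the structure fields and the section markers come
from the text above.

```c
/* Hypothetical new-style driver skeleton; "foo" and the IDs are examples. */

static struct pci_device_id foo_ids[] = {
	{ 0x1234, 0x5678, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
	{ 0, }				/* terminated by an all-zero entry */
};

static int __devinit foo_probe(struct pci_dev *dev,
			       const struct pci_device_id *id)
{
	if (pci_enable_device(dev))
		return -EIO;		/* reject the device */
	/* ... discover and allocate resources, set up the hardware ... */
	return 0;			/* accept the device */
}

static void __devexit foo_remove(struct pci_dev *dev)
{
	/* ... undo everything done in foo_probe() ... */
}

static struct pci_driver foo_driver = {
	name:		"foo",
	id_table:	foo_ids,
	probe:		foo_probe,
	remove:		foo_remove,
};

static int __init foo_init(void)
{
	pci_register_driver(&foo_driver);
	return 0;
}

static void __exit foo_exit(void)
{
	pci_deregister_driver(&foo_driver);
}
```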
2. How to find PCI devices manually (the old style)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PCI drivers not using the pci_register_driver() interface search
for PCI devices manually using the following constructs:
Searching by vendor and device ID:
	struct pci_dev *dev = NULL;
	while (dev = pci_find_device(VENDOR_ID, DEVICE_ID, dev))
		configure_device(dev);

-For class-based search, use pci_find_class(CLASS_ID, dev).
Searching by class ID (iterate in a similar way):

	pci_find_class(CLASS_ID, dev)

-If you need to match by subsystem vendor/device ID, use
-pci_find_subsys(VENDOR_ID, DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev).
Searching by both vendor/device and subsystem vendor/device ID:

	pci_find_subsys(VENDOR_ID, DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev)

You can use the constant PCI_ANY_ID as a wildcard replacement for
VENDOR_ID or DEVICE_ID. This allows searching for any device from a
specific vendor, for example.

-In case you want to do some complex matching, you can walk the list of all
-known PCI devices:
In case you need to decide according to some more complex criteria,
you can walk the list of all known PCI devices yourself:

	struct pci_dev *dev;
	pci_for_each_dev(dev) {
		... do anything you want with dev ...
	}

-The `struct pci_dev *' pointer serves as an identification of a PCI device
-and is passed to all other functions operating on PCI devices.
For compatibility with device ordering in older kernels, you can also
use pci_for_each_dev_reverse(dev) for walking the list in the opposite
direction.
-2. Enabling devices
3. Enabling devices
~~~~~~~~~~~~~~~~~~~
Before you do anything with the device you've found, you need to enable
it by calling pci_enable_device() which enables I/O and memory regions of
...@@ -57,7 +135,8 @@ if it was in suspended state. Please note that this function can fail.
which enables the bus master bit in PCI_COMMAND register and also fixes
the latency timer value if it's set to something bogus by the BIOS.
-3. How to access PCI config space
4. How to access PCI config space
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use pci_(read|write)_config_(byte|word|dword) to access the config
space of a device represented by struct pci_dev *. All these functions return 0
...@@ -72,7 +151,8 @@ use symbolic names of locations and bits declared in <linux/pci.h>.
pci_find_capability() for the particular capability and it will find the
corresponding register block for you.
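As a short sketch (assuming a struct pci_dev *dev already found by one of the
methods above; the latency-timer policy shown is an arbitrary illustration,
not something this document prescribes):

```c
	u16 vendor;
	u8 lat;

	/* A zero return value means the access succeeded, as noted above. */
	if (pci_read_config_word(dev, PCI_VENDOR_ID, &vendor) == 0)
		printk(KERN_INFO "vendor = %04x\n", vendor);

	/* Use the symbolic names from <linux/pci.h>, not raw offsets. */
	pci_read_config_byte(dev, PCI_LATENCY_TIMER, &lat);
	if (lat < 32)
		pci_write_config_byte(dev, PCI_LATENCY_TIMER, 32);
```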
-4. Addresses and interrupts
5. Addresses and interrupts
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Memory and port addresses and interrupt numbers should NOT be read from the
config space. You should use the values in the pci_dev structure as they might
...@@ -86,13 +166,33 @@ for memory regions to make sure nobody else is using the same device.
All interrupt handlers should be registered with SA_SHIRQ and use the devid
to map IRQs to devices (remember that all PCI interrupts are shared).
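For example, a shared-handler registration might be sketched like this
(foo_interrupt and struct foo are hypothetical names; dev->irq comes from the
pci_dev structure as required above):

```c
/* The dev_id argument lets a shared handler recognize its own device. */
static void foo_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct foo *card = dev_id;
	/* ... check the card's status register; return if it did not
	   raise this interrupt, since the line is shared ... */
}

	/* Later, during device setup: */
	if (request_irq(dev->irq, foo_interrupt, SA_SHIRQ, "foo", card))
		printk(KERN_ERR "foo: can't get IRQ %d\n", dev->irq);
```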
-5. Other interesting functions
6. Other interesting functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci_find_slot()			Find pci_dev corresponding to given bus and
				slot numbers.
pci_set_power_state()		Set PCI Power Management state (0=D0 ... 3=D3)
pci_find_capability() Find specified capability in device's capability
list.
7. Miscellaneous hints
~~~~~~~~~~~~~~~~~~~~~~
When displaying PCI slot names to the user (for example when a driver wants
to tell the user what card it has found), please use pci_dev->slot_name
for this purpose.
Always refer to the PCI devices by a pointer to the pci_dev structure.
All PCI layer functions use this identification and it's the only
reasonable one. Don't use bus/slot/function numbers except for very
special purposes -- on systems with multiple primary buses their semantics
can be pretty complex.
If you're going to use PCI bus mastering DMA, take a look at
Documentation/DMA-mapping.txt.
-6. Obsolete functions
8. Obsolete functions
~~~~~~~~~~~~~~~~~~~~~
There are several functions kept only for compatibility with old drivers
not updated to the new PCI interface. Please don't use them in new code.
......
...@@ -23,11 +23,30 @@ SUPPORTED CAMERAS:
IBM "C-It" camera, also known as "Xirlink PC Camera"
The device uses proprietary ASIC (and compression method);
-it is manufactured by Xirlink. See http://www.xirlink.com
it is manufactured by Xirlink. See http://www.xirlink.com/
or http://www.c-itnow.com/ for details and pictures.
The Linux driver was developed with a camera with the following
model number (or FCC ID): KSX-XVP510. This camera has three
interfaces, each with one endpoint (control, iso, iso).
It appears that Xirlink made some changes in their cameras recently.
In particular, the following models [FCC ID] are suspect; the one
with FCC ID KSX-X9903 is known to be one of them:
XVP300 [KSX-X9903]
XVP600 [KSX-X9902]
XVP610 [KSX-X9902]
(see http://www.xirlink.com/ibmpccamera/ for updates, they refer
to these new cameras by Windows driver dated 12-27-99, v3005 BETA)
These cameras have two interfaces, one endpoint in each (iso, bulk).
Attempts to remotely debug one of these cameras weren't successful.
I'd need to have a camera to figure out how to use it.
WHAT YOU NEED:

-- An IBM C-it camera
- A supported IBM PC (C-it) camera (see above)
- A Linux box with USB support (2.3/2.4 or 2.2 w/backport)
...@@ -73,6 +92,7 @@ Name            Type    Range [default]   Example
debug           Integer 0-9 [0]           debug=1
flags           Integer 0-0xFF [0]        flags=0x0d
framerate       Integer 0-6 [2]           framerate=1
hue_correction  Integer 0-255 [128]       hue_correction=115
init_brightness Integer 0-255 [128]       init_brightness=100
init_contrast   Integer 0-255 [192]       init_contrast=200
init_color      Integer 0-255 [128]       init_color=130
...@@ -108,6 +128,16 @@ framerate      This setting controls frame rate of the camera. This is
               Beware - fast settings are very demanding and may not
               work well with all video sizes. Be conservative.
hue_correction This highly optional setting allows adjusting the
               hue of the image in a way slightly different from
               what the usual "hue" control does. Both controls affect
               the YUV colorspace: the regular "hue" control adjusts only
               the U component, and this "hue_correction" option similarly
               adjusts only the V component. However it is usually enough
               to tweak only U or V to compensate for colored light or
               color temperature; this option simply allows a more
               complicated correction when and if it is necessary.
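For instance, several of the parameters from the table above could be
combined on one module-load command line (the values here are arbitrary
examples, not recommendations):

```
insmod ibmcam debug=1 framerate=1 init_brightness=100 hue_correction=115
```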
init_brightness These settings specify _initial_ values which will be
init_contrast   used to set up the camera. If your V4L application has
init_color      its own controls to adjust the picture then these

...@@ -143,8 +173,8 @@ videosize      This setting chooses one of three image sizes that are
WHAT NEEDS TO BE DONE:

-- The box freezes if working camera (with xawtv) is unplugged (OHCI).
-  Workaround: don't do that :) End the V4L application first.
- The box freezes if camera is unplugged after being used (OHCI).
  Workaround: don't do that :)
- Some USB frames are lost on high frame rates, though they shouldn't - Some USB frames are lost on high frame rates, though they shouldn't
- ViCE compression (Xirlink proprietary) may improve frame rate - ViCE compression (Xirlink proprietary) may improve frame rate
- On occasion camera does not start properly; xawtv reports errors. - On occasion camera does not start properly; xawtv reports errors.
......
...@@ -26,11 +26,11 @@ O_OBJS += core_apecs.o core_cia.o core_irongate.o core_lca.o core_mcpcia.o \
	   sys_jensen.o sys_miata.o sys_mikasa.o sys_nautilus.o \
	   sys_noritake.o sys_rawhide.o sys_ruffian.o sys_rx164.o \
	   sys_sable.o sys_sio.o sys_sx164.o sys_takara.o sys_rx164.o \
-	   es1888.o smc37c669.o smc37c93x.o ns87312.o pci.o
	   es1888.o smc37c669.o smc37c93x.o ns87312.o pci.o pci_iommu.o
else
ifdef CONFIG_PCI
-O_OBJS += pci.o
O_OBJS += pci.o pci_iommu.o
endif

# Core logic support
......
...@@ -149,6 +149,9 @@ EXPORT_SYMBOL(__strnlen_user);
EXPORT_SYMBOL_NOVERS(__down_failed);
EXPORT_SYMBOL_NOVERS(__down_failed_interruptible);
EXPORT_SYMBOL_NOVERS(__up_wakeup);
EXPORT_SYMBOL_NOVERS(__down_read_failed);
EXPORT_SYMBOL_NOVERS(__down_write_failed);
EXPORT_SYMBOL_NOVERS(__rwsem_wake);

/*
 * SMP-specific symbols.

...@@ -161,7 +164,7 @@ EXPORT_SYMBOL(flush_tlb_mm);
EXPORT_SYMBOL(flush_tlb_page);
EXPORT_SYMBOL(flush_tlb_range);
EXPORT_SYMBOL(cpu_data);
-EXPORT_SYMBOL(cpu_number_map);
EXPORT_SYMBOL(__cpu_number_map);
EXPORT_SYMBOL(global_bh_lock);
EXPORT_SYMBOL(global_bh_count);
EXPORT_SYMBOL(synchronize_bh);
......
...@@ -356,22 +356,49 @@ struct pci_ops apecs_pci_ops =
	write_dword:	apecs_write_config_dword
};

void
apecs_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
	wmb();
	*(vip)APECS_IOC_TBIA = 0;
	mb();
}

void __init
apecs_init_arch(void)
{
	struct pci_controler *hose;

-	/*
-	 * Set up the PCI->physical memory translation windows.
-	 * For now, window 2 is disabled.  In the future, we may
-	 * want to use it to do scatter/gather DMA.  Window 1
-	 * goes at 1 GB and is 1 GB large.
-	 */
-	*(vuip)APECS_IOC_PB1R = 1UL << 19 | (APECS_DMA_WIN_BASE & 0xfff00000U);
-	*(vuip)APECS_IOC_PM1R = (APECS_DMA_WIN_SIZE - 1) & 0xfff00000U;
-	*(vuip)APECS_IOC_PB2R = 0U; /* disable window 2 */

	/*
	 * Create our single hose.
	 */

	pci_isa_hose = hose = alloc_pci_controler();

	hose->io_space = &ioport_resource;
	hose->mem_space = &iomem_resource;
	hose->config_space = APECS_CONF;
	hose->index = 0;

	/*
	 * Set up the PCI to main memory translation windows.
	 *
	 * Window 1 is direct access 1GB at 1GB
	 * Window 2 is scatter-gather 8MB at 8MB (for isa)
	 */
	hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
	hose->sg_pci = NULL;
	__direct_map_base = 0x40000000;
	__direct_map_size = 0x40000000;

	*(vuip)APECS_IOC_PB1R = __direct_map_base | 0x00080000;
	*(vuip)APECS_IOC_PM1R = (__direct_map_size - 1) & 0xfff00000U;
	*(vuip)APECS_IOC_TB1R = 0;

	*(vuip)APECS_IOC_PB2R = hose->sg_isa->dma_base | 0x000c0000;
	*(vuip)APECS_IOC_PM2R = (hose->sg_isa->size - 1) & 0xfff00000;
	*(vuip)APECS_IOC_TB2R = virt_to_phys(hose->sg_isa->ptes) >> 1;

	apecs_pci_tbi(hose, 0, -1);

	/*
	 * Finally, clear the HAXR2 register, which gets used

...@@ -381,16 +408,6 @@ apecs_init_arch(void)
	 */
	*(vuip)APECS_IOC_HAXR2 = 0;
	mb();

-	/*
-	 * Create our single hose.
-	 */
-	hose = alloc_pci_controler();
-
-	hose->io_space = &ioport_resource;
-	hose->mem_space = &iomem_resource;
-	hose->config_space = APECS_CONF;
-	hose->index = 0;
}

void
......
...@@ -314,6 +314,14 @@ struct pci_ops cia_pci_ops =
	write_dword:	cia_write_config_dword
};

void
cia_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
	wmb();
	*(vip)CIA_IOC_PCI_TBIA = 3;	/* Flush all locked and unlocked.  */
	mb();
}

void __init
cia_init_arch(void)
{

...@@ -369,40 +377,67 @@ cia_init_arch(void)
#endif /* DEBUG_DUMP_REGS */
-	/*
-	 * Set up error reporting.
-	 */
-	temp = *(vuip)CIA_IOC_CIA_ERR;
-	temp |= 0x180;		/* master, target abort */
-	*(vuip)CIA_IOC_CIA_ERR = temp;
-	mb();
-
-	temp = *(vuip)CIA_IOC_CIA_CTRL;
-	temp |= 0x400;		/* turn on FILL_ERR to get mchecks */
-	*(vuip)CIA_IOC_CIA_CTRL = temp;
-	mb();

	/*
	 * Create our single hose.
	 */

	pci_isa_hose = hose = alloc_pci_controler();
	hae_mem = alloc_resource();

	hose->io_space = &ioport_resource;
	hose->mem_space = hae_mem;
	hose->config_space = CIA_CONF;
	hose->index = 0;

	hae_mem->start = 0;
	hae_mem->end = CIA_MEM_R1_MASK;
	hae_mem->name = pci_hae0_name;
	hae_mem->flags = IORESOURCE_MEM;
	if (request_resource(&iomem_resource, hae_mem) < 0)
		printk(KERN_ERR "Failed to request HAE_MEM\n");

-	/*
-	 * Set up the PCI->physical memory translation windows.
-	 * For now, windows 2 and 3 are disabled.  In the future,
-	 * we may want to use them to do scatter/gather DMA.
-	 *
-	 * Window 0 goes at 1 GB and is 1 GB large.
-	 * Window 1 goes at 2 GB and is 1 GB large.
-	 */
-	*(vuip)CIA_IOC_PCI_W0_BASE = CIA_DMA_WIN0_BASE_DEFAULT | 1U;
-	*(vuip)CIA_IOC_PCI_W0_MASK = (CIA_DMA_WIN0_SIZE_DEFAULT - 1) & 0xfff00000U;
-	*(vuip)CIA_IOC_PCI_T0_BASE = CIA_DMA_WIN0_TRAN_DEFAULT >> 2;
-
-	*(vuip)CIA_IOC_PCI_W1_BASE = CIA_DMA_WIN1_BASE_DEFAULT | 1U;
-	*(vuip)CIA_IOC_PCI_W1_MASK = (CIA_DMA_WIN1_SIZE_DEFAULT - 1) & 0xfff00000U;
-	*(vuip)CIA_IOC_PCI_T1_BASE = CIA_DMA_WIN1_TRAN_DEFAULT >> 2;
-	*(vuip)CIA_IOC_PCI_W2_BASE = 0x0;
-	*(vuip)CIA_IOC_PCI_W3_BASE = 0x0;
-	mb();

	/*
	 * Set up the PCI to main memory translation windows.
	 *
	 * Window 0 is scatter-gather 8MB at 8MB (for isa)
	 * Window 1 is scatter-gather 128MB at 1GB
	 * Window 2 is direct access 2GB at 2GB
	 * ??? We ought to scale window 1 with memory.
	 */

	/* NetBSD hints that page tables must be aligned to 32K due
	   to a hardware bug.  No description of what models affected.  */
	hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, 32768);
	hose->sg_pci = iommu_arena_new(0x40000000, 0x08000000, 32768);
	__direct_map_base = 0x80000000;
	__direct_map_size = 0x80000000;

	*(vuip)CIA_IOC_PCI_W0_BASE = hose->sg_isa->dma_base | 3;
	*(vuip)CIA_IOC_PCI_W0_MASK = (hose->sg_isa->size - 1) & 0xfff00000;
	*(vuip)CIA_IOC_PCI_T0_BASE = virt_to_phys(hose->sg_isa->ptes) >> 2;

	*(vuip)CIA_IOC_PCI_W1_BASE = hose->sg_pci->dma_base | 3;
	*(vuip)CIA_IOC_PCI_W1_MASK = (hose->sg_pci->size - 1) & 0xfff00000;
	*(vuip)CIA_IOC_PCI_T1_BASE = virt_to_phys(hose->sg_pci->ptes) >> 2;

	*(vuip)CIA_IOC_PCI_W2_BASE = __direct_map_base | 1;
	*(vuip)CIA_IOC_PCI_W2_MASK = (__direct_map_size - 1) & 0xfff00000;
	*(vuip)CIA_IOC_PCI_T2_BASE = 0;

	*(vuip)CIA_IOC_PCI_W3_BASE = 0;

	cia_pci_tbi(hose, 0, -1);

	/*
	 * Set up error reporting.
	 */
	temp = *(vuip)CIA_IOC_CIA_ERR;
	temp |= 0x180;		/* master, target abort */
	*(vuip)CIA_IOC_CIA_ERR = temp;

	temp = *(vuip)CIA_IOC_CIA_CTRL;
	temp |= 0x400;		/* turn on FILL_ERR to get mchecks */
	*(vuip)CIA_IOC_CIA_CTRL = temp;

	/*
	 * Next, clear the CIA_CFG register, which gets used

...@@ -410,35 +445,14 @@ cia_init_arch(void)
	 * we want to use it, and we do not want to depend on
	 * what ARC or SRM might have left behind...
	 */
-	*((vuip)CIA_IOC_CFG) = 0; mb();
	*(vuip)CIA_IOC_CFG = 0;

	/*
	 * Zero the HAEs.
	 */
-	*((vuip)CIA_IOC_HAE_MEM) = 0; mb();
-	*((vuip)CIA_IOC_HAE_MEM); /* read it back. */
-	*((vuip)CIA_IOC_HAE_IO) = 0; mb();
-	*((vuip)CIA_IOC_HAE_IO); /* read it back. */
	*(vuip)CIA_IOC_HAE_MEM = 0;
	*(vuip)CIA_IOC_HAE_IO = 0;
	mb();

-	/*
-	 * Create our single hose.
-	 */
-	hose = alloc_pci_controler();
-	hae_mem = alloc_resource();
-
-	hose->io_space = &ioport_resource;
-	hose->mem_space = hae_mem;
-	hose->config_space = CIA_CONF;
-	hose->index = 0;
-
-	hae_mem->start = 0;
-	hae_mem->end = CIA_MEM_R1_MASK;
-	hae_mem->name = pci_hae0_name;
-	hae_mem->flags = IORESOURCE_MEM;
-	if (request_resource(&iomem_resource, hae_mem) < 0)
-		printk(KERN_ERR "Failed to request HAE_MEM\n");
}
static inline void

...@@ -456,6 +470,8 @@ void
cia_machine_check(unsigned long vector, unsigned long la_ptr,
		  struct pt_regs * regs)
{
	int expected;

	/* Clear the error before any reporting.  */
	mb();
	mb();  /* magic */

...@@ -464,5 +480,23 @@ cia_machine_check(unsigned long vector, unsigned long la_ptr,
	wrmces(rdmces());	/* reset machine check pending flag.  */
	mb();

-	process_mcheck_info(vector, la_ptr, regs, "CIA", mcheck_expected(0));
	expected = mcheck_expected(0);
if (!expected && vector == 0x660) {
struct el_common *com;
struct el_common_EV5_uncorrectable_mcheck *ev5;
struct el_CIA_sysdata_mcheck *cia;
com = (void *)la_ptr;
ev5 = (void *)(la_ptr + com->proc_offset);
cia = (void *)(la_ptr + com->sys_offset);
if (com->code == 0x202) {
printk(KERN_CRIT "CIA PCI machine check: err0=%08x "
"err1=%08x err2=%08x\n",
(int) cia->pci_err0, (int) cia->pci_err1,
(int) cia->pci_err2);
expected = 1;
}
}
process_mcheck_info(vector, la_ptr, regs, "CIA", expected);
}
...@@ -351,4 +351,8 @@ irongate_init_arch(void)
	hose->mem_space = &iomem_resource;
	hose->config_space = IRONGATE_CONF;
	hose->index = 0;
hose->sg_isa = hose->sg_pci = NULL;
__direct_map_base = 0;
__direct_map_size = 0xffffffff;
}
...@@ -278,23 +278,51 @@ struct pci_ops lca_pci_ops =
	write_dword:	lca_write_config_dword
};
void
lca_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
wmb();
*(vip)LCA_IOC_TBIA = 0;
mb();
}
void __init
lca_init_arch(void)
{
	struct pci_controler *hose;

-	/*
-	 * Set up the PCI->physical memory translation windows.
-	 * For now, window 1 is disabled.  In the future, we may
-	 * want to use it to do scatter/gather DMA.
-	 *
-	 * Window 0 goes at 1 GB and is 1 GB large.
-	 */
-	*(vulp)LCA_IOC_W_BASE0 = 1UL << 33 | LCA_DMA_WIN_BASE;
-	*(vulp)LCA_IOC_W_MASK0 = LCA_DMA_WIN_SIZE - 1;
-	*(vulp)LCA_IOC_W_BASE1 = 0UL;

	/*
	 * Create our single hose.
	 */

	pci_isa_hose = hose = alloc_pci_controler();

	hose->io_space = &ioport_resource;
	hose->mem_space = &iomem_resource;
	hose->config_space = LCA_CONF;
	hose->index = 0;

	/*
	 * Set up the PCI to main memory translation windows.
	 *
	 * Window 0 is direct access 1GB at 1GB
	 * Window 1 is scatter-gather 8MB at 8MB (for isa)
	 */
	hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
	hose->sg_pci = NULL;
	__direct_map_base = 0x40000000;
	__direct_map_size = 0x40000000;

	*(vulp)LCA_IOC_W_BASE0 = __direct_map_base | (2UL << 32);
	*(vulp)LCA_IOC_W_MASK0 = (__direct_map_size - 1) & 0xfff00000;
	*(vulp)LCA_IOC_T_BASE0 = 0;

	*(vulp)LCA_IOC_W_BASE1 = hose->sg_isa->dma_base | (3UL << 32);
	*(vulp)LCA_IOC_W_MASK1 = (hose->sg_isa->size - 1) & 0xfff00000;
	*(vulp)LCA_IOC_T_BASE1 = virt_to_phys(hose->sg_isa->ptes);

	*(vulp)LCA_IOC_TB_ENA = 0x80;

	lca_pci_tbi(hose, 0, -1);

	/*
	 * Disable PCI parity for now.  The NCR53c810 chip has

...@@ -302,16 +330,6 @@ lca_init_arch(void)
	 * data parity errors.
	 */
	*(vulp)LCA_IOC_PAR_DIS = 1UL<<5;

-	/*
-	 * Create our single hose.
-	 */
-	hose = alloc_pci_controler();
-
-	hose->io_space = &ioport_resource;
-	hose->mem_space = &iomem_resource;
-	hose->config_space = LCA_CONF;
-	hose->index = 0;
}

/*
......
...@@ -293,6 +293,14 @@ struct pci_ops mcpcia_pci_ops =
	write_dword:	mcpcia_write_config_dword
};
void
mcpcia_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
wmb();
BUG();
mb();
}
static int __init
mcpcia_probe_hose(int h)
{

...@@ -395,31 +403,36 @@ mcpcia_startup_hose(struct pci_controler *hose)
	/*
	 * Set up the PCI->physical memory translation windows.
-	 * For now, windows 1,2 and 3 are disabled.  In the
-	 * future, we may want to use them to do scatter/
-	 * gather DMA.
	 *
-	 * Window 0 goes at 2 GB and is 2 GB large.
	 * Window 0 is scatter-gather 8MB at 8MB (for isa)
	 * Window 1 is scatter-gather 128MB at 1GB
	 * Window 2 is direct access 2GB at 2GB
	 * ??? We ought to scale window 1 with memory.
	 */
-	*(vuip)MCPCIA_W0_BASE(mid) = 1U | (MCPCIA_DMA_WIN_BASE & 0xfff00000U);
-	*(vuip)MCPCIA_W0_MASK(mid) = (MCPCIA_DMA_WIN_SIZE - 1) & 0xfff00000U;
-	*(vuip)MCPCIA_T0_BASE(mid) = 0;
-
-	*(vuip)MCPCIA_W1_BASE(mid) = 0x0;
-	*(vuip)MCPCIA_W2_BASE(mid) = 0x0;

	hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
	hose->sg_pci = iommu_arena_new(0x40000000, 0x08000000, PAGE_SIZE);
	__direct_map_base = 0x80000000;
	__direct_map_size = 0x80000000;

	*(vuip)MCPCIA_W0_BASE(mid) = hose->sg_isa->dma_base | 3;
	*(vuip)MCPCIA_W0_MASK(mid) = (hose->sg_isa->size - 1) & 0xfff00000;
	*(vuip)MCPCIA_T0_BASE(mid) = virt_to_phys(hose->sg_isa->ptes) >> 2;

	*(vuip)MCPCIA_W1_BASE(mid) = hose->sg_pci->dma_base | 3;
	*(vuip)MCPCIA_W1_MASK(mid) = (hose->sg_pci->size - 1) & 0xfff00000;
	*(vuip)MCPCIA_T1_BASE(mid) = virt_to_phys(hose->sg_pci->ptes) >> 2;

	*(vuip)MCPCIA_W2_BASE(mid) = __direct_map_base | 1;
	*(vuip)MCPCIA_W2_MASK(mid) = (__direct_map_size - 1) & 0xfff00000;
	*(vuip)MCPCIA_T2_BASE(mid) = 0;

	*(vuip)MCPCIA_W3_BASE(mid) = 0x0;

	mcpcia_pci_tbi(hose, 0, -1);

-	*(vuip)MCPCIA_HBASE(mid) = 0x0;
-	mb();
-
-#if 0
-	tmp = *(vuip)MCPCIA_INT_CTL(mid);
-	printk("mcpcia_startup_hose: INT_CTL was 0x%x\n", tmp);
-	*(vuip)MCPCIA_INT_CTL(mid) = 1U;
-	mb();
-	tmp = *(vuip)MCPCIA_INT_CTL(mid);
-#endif
	*(vuip)MCPCIA_HBASE(mid) = 0x0;
	mb();

	*(vuip)MCPCIA_HAE_MEM(mid) = 0U;
	mb();
......
...@@ -197,6 +197,12 @@ polaris_init_arch(void)
	hose->mem_space = &iomem_resource;
	hose->config_space = POLARIS_DENSE_CONFIG_BASE;
	hose->index = 0;
hose->sg_isa = hose->sg_pci = NULL;
/* The I/O window is fixed at 2G @ 2G. */
__direct_map_base = 0x80000000;
__direct_map_size = 0x80000000;
}

static inline void
......
...@@ -6,20 +6,22 @@
 * Code common to all PYXIS core logic chips.
 */

-#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/kernel.h>

-#define __EXTERN_INLINE inline
-#include <asm/io.h>
-#include <asm/core_pyxis.h>
-#undef __EXTERN_INLINE
-
#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/bootmem.h>

#include <asm/ptrace.h>
#include <asm/system.h>

#define __EXTERN_INLINE inline
#include <asm/io.h>
#include <asm/core_pyxis.h>
#undef __EXTERN_INLINE

#include "proto.h"
#include "pci_impl.h"

...@@ -284,6 +286,84 @@ struct pci_ops pyxis_pci_ops =
	write_dword:	pyxis_write_config_dword
};
void
pyxis_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
wmb();
*(vip)PYXIS_TBIA = 3; /* Flush all locked and unlocked. */
mb();
}
/*
* Pass 1 and 2 have a broken scatter-gather tlb -- it cannot be invalidated.
* To work around this problem, we allocate mappings, and put the chip into
* DMA loopback mode to read a garbage page. This works by causing TLB
* misses, causing old entries to be purged to make room for the new entries
* coming in for the garbage page.
*
* Thanks to NetBSD sources for pointing out this bug. What a pain.
*/
static unsigned long broken_tbi_addr;
#define BROKEN_TBI_READS 12
static void
pyxis_broken_pci_tbi(struct pci_controler *hose,
dma_addr_t start, dma_addr_t end)
{
unsigned long flags;
unsigned long bus_addr;
unsigned int ctrl;
long i;
__save_and_cli(flags);
/* Put the chip into PCI loopback mode. */
mb();
ctrl = *(vuip)PYXIS_CTRL;
*(vuip)PYXIS_CTRL = ctrl | 4;
mb();
/* Read from PCI dense memory space at TBI_ADDR, skipping 64k
on each read. This forces SG TLB misses. It appears that
the TLB entries are "not quite LRU", meaning that we need
to read more times than there are actual tags. */
bus_addr = broken_tbi_addr;
for (i = 0; i < BROKEN_TBI_READS; ++i, bus_addr += 64*1024)
pyxis_readl(bus_addr);
/* Restore normal PCI operation. */
mb();
*(vuip)PYXIS_CTRL = ctrl;
mb();
__restore_flags(flags);
}
static void
pyxis_enable_broken_tbi(struct pci_iommu_arena *arena)
{
void *page;
unsigned long *ppte, ofs, pte;
long i, npages;
page = alloc_bootmem_pages(PAGE_SIZE);
pte = (virt_to_phys(page) >> (PAGE_SHIFT - 1)) | 1;
npages = (BROKEN_TBI_READS + 1) * 64*1024 / PAGE_SIZE;
ofs = iommu_arena_alloc(arena, npages);
ppte = arena->ptes + ofs;
for (i = 0; i < npages; ++i)
ppte[i] = pte;
broken_tbi_addr = pyxis_ioremap(arena->dma_base + ofs*PAGE_SIZE);
alpha_mv.mv_pci_tbi = pyxis_broken_pci_tbi;
printk("PYXIS: Enabling broken tbia workaround.\n");
}
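As a sanity check on the sizing in pyxis_enable_broken_tbi above, the number of scatter-gather PTEs the workaround consumes can be computed directly. This is an illustrative sketch only, not kernel code; it assumes the Alpha's 8 KB page size (PAGE_SHIFT = 13), which is not stated in this hunk.

```python
# Sketch (assumption: Alpha uses 8 KB pages, PAGE_SHIFT = 13).
PAGE_SHIFT = 13
PAGE_SIZE = 1 << PAGE_SHIFT
BROKEN_TBI_READS = 12

# The loopback reads step 64 KB apart; one extra stride is reserved so
# the final read still lands inside the mapped garbage region.
npages = (BROKEN_TBI_READS + 1) * 64 * 1024 // PAGE_SIZE
print(npages)  # 104 SG PTEs, all pointing at the same garbage page
```

Every one of those PTEs maps the same bootmem page, so the workaround wastes only one physical page while still forcing enough TLB misses to evict stale entries.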
void __init
pyxis_init_arch(void)
{
...@@ -306,34 +386,70 @@ pyxis_init_arch(void)
*/
temp = *(vuip)PYXIS_ERR_MASK;
temp &= ~4;
*(vuip)PYXIS_ERR_MASK = temp; mb();
temp = *(vuip)PYXIS_ERR_MASK; /* re-read to force write */
*(vuip)PYXIS_ERR_MASK = temp;
mb();
*(vuip)PYXIS_ERR_MASK; /* re-read to force write */
temp = *(vuip)PYXIS_ERR;
temp |= 0x180; /* master/target abort */
*(vuip)PYXIS_ERR = temp; mb();
temp = *(vuip)PYXIS_ERR; /* re-read to force write */
*(vuip)PYXIS_ERR = temp;
mb();
*(vuip)PYXIS_ERR; /* re-read to force write */
/*
* Create our single hose.
*/
hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
hose->config_space = PYXIS_CONF;
hose->index = 0;
/*
* Set up the PCI->physical memory translation windows.
* For now, windows 2 and 3 are disabled. In the future, we may
* want to use them to do scatter/gather DMA.
*
* Window 0 goes at 2 GB and is 1 GB large.
* Window 1 goes at 3 GB and is 1 GB large.
*/
*(vuip)PYXIS_W0_BASE = PYXIS_DMA_WIN0_BASE_DEFAULT | 1U;
*(vuip)PYXIS_W0_MASK = (PYXIS_DMA_WIN0_SIZE_DEFAULT - 1) & 0xfff00000U;
*(vuip)PYXIS_T0_BASE = PYXIS_DMA_WIN0_TRAN_DEFAULT >> 2;
*(vuip)PYXIS_W1_BASE = PYXIS_DMA_WIN1_BASE_DEFAULT | 1U;
*(vuip)PYXIS_W1_MASK = (PYXIS_DMA_WIN1_SIZE_DEFAULT - 1) & 0xfff00000U;
*(vuip)PYXIS_T1_BASE = PYXIS_DMA_WIN1_TRAN_DEFAULT >> 2;
*(vuip)PYXIS_W2_BASE = 0x0;
*(vuip)PYXIS_W3_BASE = 0x0;
mb();
/*
* Set up the PCI to main memory translation windows.
*
* Window 0 is scatter-gather 8MB at 8MB (for isa)
* Window 1 is scatter-gather 128MB at 3GB
* Window 2 is direct access 1GB at 1GB
* Window 3 is direct access 1GB at 2GB
* ??? We ought to scale window 1 with memory.
*
* We must actually use 2 windows to direct-map the 2GB space,
* because of an idiot-syncrasy of the CYPRESS chip. It may
* respond to a PCI bus address in the last 1MB of the 4GB
* address range.
*/
/* NetBSD hints that page tables must be aligned to 32K due
to a hardware bug. No description of what models affected. */
hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, 32768);
hose->sg_pci = iommu_arena_new(0xc0000000, 0x08000000, 32768);
__direct_map_base = 0x40000000;
__direct_map_size = 0x80000000;
*(vuip)PYXIS_W0_BASE = hose->sg_isa->dma_base | 3;
*(vuip)PYXIS_W0_MASK = (hose->sg_isa->size - 1) & 0xfff00000;
*(vuip)PYXIS_T0_BASE = virt_to_phys(hose->sg_isa->ptes) >> 2;
*(vuip)PYXIS_W1_BASE = hose->sg_pci->dma_base | 3;
*(vuip)PYXIS_W1_MASK = (hose->sg_pci->size - 1) & 0xfff00000;
*(vuip)PYXIS_T1_BASE = virt_to_phys(hose->sg_pci->ptes) >> 2;
*(vuip)PYXIS_W2_BASE = 0x40000000 | 1;
*(vuip)PYXIS_W2_MASK = (0x40000000 - 1) & 0xfff00000;
*(vuip)PYXIS_T2_BASE = 0;
*(vuip)PYXIS_W3_BASE = 0x80000000 | 1;
*(vuip)PYXIS_W3_MASK = (0x40000000 - 1) & 0xfff00000;
*(vuip)PYXIS_T3_BASE = 0;
/* Pass 1 and 2 (ie revision <= 1) have a broken TBIA. See the
complete description next to pyxis_broken_pci_tbi for details. */
if ((*(vuip)PYXIS_REV & 0xff) <= 1)
pyxis_enable_broken_tbi(hose->sg_pci);
alpha_mv.mv_pci_tbi(hose, 0, -1);
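The window mask registers written above all follow the same encoding: for a window of size S (a power of two, at least 1 MB), the mask is (S - 1) & 0xfff00000. A small sketch, purely illustrative and not part of the kernel source, makes the resulting register values concrete:

```python
# Sketch of the DMA window mask encoding used for the Wx_MASK registers.
def window_mask(size):
    # Windows must be power-of-two sized and at least 1 MB.
    assert size >= 1 << 20 and size & (size - 1) == 0
    return (size - 1) & 0xfff00000

print(hex(window_mask(0x08000000)))  # 128 MB SG window -> 0x7f00000
print(hex(window_mask(0x00800000)))  # 8 MB ISA window  -> 0x700000
print(hex(window_mask(0x40000000)))  # 1 GB direct map  -> 0x3ff00000
```

Masking with 0xfff00000 reflects that the hardware only compares address bits above the 1 MB granularity, which is why sub-megabyte window sizes are not representable.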
/* /*
* Next, clear the PYXIS_CFG register, which gets used * Next, clear the PYXIS_CFG register, which gets used
...@@ -341,16 +457,11 @@ pyxis_init_arch(void) ...@@ -341,16 +457,11 @@ pyxis_init_arch(void)
* we want to use it, and we do not want to depend on * we want to use it, and we do not want to depend on
* what ARC or SRM might have left behind... * what ARC or SRM might have left behind...
*/ */
{
unsigned int pyxis_cfg, temp;
pyxis_cfg = *(vuip)PYXIS_CFG; mb();
if (pyxis_cfg != 0) {
#if 1
printk("PYXIS_init: CFG was 0x%x\n", pyxis_cfg);
#endif
*(vuip)PYXIS_CFG = 0; mb();
temp = *(vuip)PYXIS_CFG; /* re-read to force write */
}
}
temp = *(vuip)PYXIS_CFG;
if (temp != 0) {
*(vuip)PYXIS_CFG = 0;
mb();
*(vuip)PYXIS_CFG; /* re-read to force write */
}
/* Zero the HAE. */ /* Zero the HAE. */
...@@ -363,27 +474,12 @@ pyxis_init_arch(void) ...@@ -363,27 +474,12 @@ pyxis_init_arch(void)
* Finally, check that the PYXIS_CTRL1 has IOA_BEN set for * Finally, check that the PYXIS_CTRL1 has IOA_BEN set for
* enabling byte/word PCI bus space(s) access. * enabling byte/word PCI bus space(s) access.
*/ */
{
unsigned int ctrl1;
ctrl1 = *(vuip) PYXIS_CTRL1;
if (!(ctrl1 & 1)) {
#if 1
printk("PYXIS_init: enabling byte/word PCI space\n");
#endif
*(vuip) PYXIS_CTRL1 = ctrl1 | 1; mb();
ctrl1 = *(vuip)PYXIS_CTRL1; /* re-read */
}
}
temp = *(vuip) PYXIS_CTRL1;
if (!(temp & 1)) {
*(vuip)PYXIS_CTRL1 = temp | 1;
mb();
*(vuip)PYXIS_CTRL1; /* re-read */
}
/*
* Create our single hose.
*/
hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
hose->config_space = PYXIS_CONF;
hose->index = 0;
}
static inline void
...@@ -401,6 +497,8 @@ void
pyxis_machine_check(unsigned long vector, unsigned long la_ptr,
struct pt_regs * regs)
{
int expected;
/* Clear the error before reporting anything. */
mb();
mb(); /* magic */
...@@ -409,5 +507,23 @@ pyxis_machine_check(unsigned long vector, unsigned long la_ptr,
wrmces(0x7);
mb();
process_mcheck_info(vector, la_ptr, regs, "PYXIS", mcheck_expected(0));
expected = mcheck_expected(0);
if (!expected && vector == 0x660) {
struct el_common *com;
struct el_common_EV5_uncorrectable_mcheck *ev5;
struct el_PYXIS_sysdata_mcheck *pyxis;
com = (void *)la_ptr;
ev5 = (void *)(la_ptr + com->proc_offset);
pyxis = (void *)(la_ptr + com->sys_offset);
if (com->code == 0x202) {
printk(KERN_CRIT "PYXIS PCI machine check: err0=%08x "
"err1=%08x err2=%08x\n",
(int) pyxis->pci_err0, (int) pyxis->pci_err1,
(int) pyxis->pci_err2);
expected = 1;
}
}
process_mcheck_info(vector, la_ptr, regs, "PYXIS", expected);
}
...@@ -389,6 +389,10 @@ t2_init_arch(void)
hose->mem_space = &iomem_resource;
hose->config_space = T2_CONF;
hose->index = 0;
hose->sg_isa = hose->sg_pci = NULL;
__direct_map_base = 0x40000000;
__direct_map_size = 0x40000000;
}
#define SIC_SEIC (1UL << 33) /* System Event Clear */
...
...@@ -206,6 +206,23 @@ struct pci_ops tsunami_pci_ops =
write_dword: tsunami_write_config_dword
};
void
tsunami_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
{
tsunami_pchip *pchip = hose->index ? TSUNAMI_pchip1 : TSUNAMI_pchip0;
wmb();
/* We can invalidate up to 8 tlb entries in a go. The flush
matches against <31:16> in the pci address. */
if (((start ^ end) & 0xffff0000) == 0)
pchip->tlbiv.csr = (start & 0xffff0000) >> 12;
else
pchip->tlbia.csr = 0;
mb();
}
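The `((start ^ end) & 0xffff0000) == 0` test in tsunami_pci_tbi above decides between the targeted invalidate (tlbiv) and the full flush (tlbia): since the per-entry invalidate matches PCI address bits <31:16>, it is only usable when start and end fall inside the same naturally-aligned 64 KB block. A small illustrative sketch (not kernel code):

```python
# Sketch of the flush-selection predicate in tsunami_pci_tbi.
def same_tlb_tag(start, end):
    # True when both addresses share PCI address bits <31:16>,
    # i.e. lie in the same 64 KB block, so tlbiv suffices.
    return ((start ^ end) & 0xffff0000) == 0

assert same_tlb_tag(0xc0000000, 0xc000ffff)      # one 64 KB block: tlbiv
assert not same_tlb_tag(0xc0000000, 0xc0010000)  # crosses a block: tlbia
assert not same_tlb_tag(0x0, 0xffffffff)         # a (0, -1) range: full flush
```

This also explains why the init paths call the tbi hook with the range (0, -1): the XOR of those endpoints always has high bits set, so the call degenerates to the unconditional tlbia flush.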
#ifdef NXM_MACHINE_CHECKS_ON_TSUNAMI
static long
tsunami_probe_read(volatile unsigned long *vaddr)
...@@ -264,6 +281,8 @@ tsunami_init_one_pchip(tsunami_pchip *pchip, int index)
return;
hose = alloc_pci_controler();
if (index == 0)
pci_isa_hose = hose;
hose->io_space = alloc_resource();
hose->mem_space = alloc_resource();
...@@ -307,27 +326,41 @@ tsunami_init_one_pchip(tsunami_pchip *pchip, int index)
saved_pchip[index].tba[3] = pchip->tba[3].csr;
/*
* Set up the PCI->physical memory translation windows.
* For now, windows 1,2 and 3 are disabled. In the future,
* we may want to use them to do scatter/gather DMA.
*
* Window 0 goes at 1 GB and is 1 GB large, mapping to 0.
* Window 1 goes at 2 GB and is 1 GB large, mapping to 1GB.
*/
pchip->wsba[0].csr = TSUNAMI_DMA_WIN0_BASE_DEFAULT | 1UL;
pchip->wsm[0].csr = (TSUNAMI_DMA_WIN0_SIZE_DEFAULT - 1) & 0xfff00000UL;
pchip->tba[0].csr = TSUNAMI_DMA_WIN0_TRAN_DEFAULT;
pchip->wsba[1].csr = TSUNAMI_DMA_WIN1_BASE_DEFAULT | 1UL;
pchip->wsm[1].csr = (TSUNAMI_DMA_WIN1_SIZE_DEFAULT - 1) & 0xfff00000UL;
pchip->tba[1].csr = TSUNAMI_DMA_WIN1_TRAN_DEFAULT;
pchip->wsba[2].csr = 0;
pchip->wsba[3].csr = 0;
mb();
/*
* Set up the PCI to main memory translation windows.
*
* Window 0 is scatter-gather 8MB at 8MB (for isa)
* Window 1 is scatter-gather 128MB at 3GB
* Window 2 is direct access 1GB at 1GB
* Window 3 is direct access 1GB at 2GB
* ??? We ought to scale window 1 memory.
*
* We must actually use 2 windows to direct-map the 2GB space,
* because of an idiot-syncrasy of the CYPRESS chip. It may
* respond to a PCI bus address in the last 1MB of the 4GB
* address range.
*/
hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
hose->sg_pci = iommu_arena_new(0xc0000000, 0x08000000, PAGE_SIZE);
__direct_map_base = 0x40000000;
__direct_map_size = 0x80000000;
pchip->wsba[0].csr = hose->sg_isa->dma_base | 3;
pchip->wsm[0].csr = (hose->sg_isa->size - 1) & 0xfff00000;
pchip->tba[0].csr = virt_to_phys(hose->sg_isa->ptes);
pchip->wsba[1].csr = hose->sg_pci->dma_base | 3;
pchip->wsm[1].csr = (hose->sg_pci->size - 1) & 0xfff00000;
pchip->tba[1].csr = virt_to_phys(hose->sg_pci->ptes);
pchip->wsba[2].csr = 0x40000000 | 1;
pchip->wsm[2].csr = (0x40000000 - 1) & 0xfff00000;
pchip->tba[2].csr = 0;
pchip->wsba[3].csr = 0x80000000 | 1;
pchip->wsm[3].csr = (0x40000000 - 1) & 0xfff00000;
pchip->tba[3].csr = 0;
tsunami_pci_tbi(hose, 0, -1);
}
void __init
...
...@@ -897,6 +897,7 @@ process_mcheck_info(unsigned long vector, unsigned long la_ptr,
case 0x98: reason = "processor detected hard error"; break;
/* System specific (these are for Alcor, at least): */
case 0x202: reason = "system detected hard error"; break;
case 0x203: reason = "system detected uncorrectable ECC error"; break;
case 0x204: reason = "SIO SERR occurred on PCI bus"; break;
case 0x205: reason = "parity error detected by CIA"; break;
...
...@@ -101,8 +101,7 @@
#define DO_TSUNAMI_IO IO(TSUNAMI,tsunami)
#define BUS(which) \
mv_virt_to_bus: CAT(which,_virt_to_bus), \
mv_bus_to_virt: CAT(which,_bus_to_virt)
mv_pci_tbi: CAT(which,_pci_tbi)
#define DO_APECS_BUS BUS(apecs)
#define DO_CIA_BUS BUS(cia)
...
...@@ -40,6 +40,7 @@ const char pci_hae0_name[] = "HAE0";
*/
struct pci_controler *hose_head, **hose_tail = &hose_head;
struct pci_controler *pci_isa_hose;
/*
* Quirks.
...
...@@ -7,7 +7,7 @@
struct pci_dev;
struct pci_controler;
struct pci_iommu_arena;
/*
* We can't just blindly use 64K for machines with EISA busses; they
...@@ -125,12 +125,17 @@ static inline u8 bridge_swizzle(u8 pin, u8 slot)
/* The hose list. */
extern struct pci_controler *hose_head, **hose_tail;
extern struct pci_controler *pci_isa_hose;
extern void common_init_pci(void);
extern u8 common_swizzle(struct pci_dev *, u8 *);
extern struct pci_controler *alloc_pci_controler(void);
extern struct resource *alloc_resource(void);
extern struct pci_iommu_arena *iommu_arena_new(dma_addr_t, unsigned long,
unsigned long);
extern long iommu_arena_alloc(struct pci_iommu_arena *arena, long n);
extern const char *const pci_io_names[];
extern const char *const pci_mem_names[];
extern const char pci_hae0_name[];
...@@ -9,55 +9,65 @@
struct pt_regs;
struct task_struct;
struct pci_dev;
struct pci_controler;
/* core_apecs.c */
extern struct pci_ops apecs_pci_ops;
extern void apecs_init_arch(void);
extern void apecs_pci_clr_err(void);
extern void apecs_machine_check(u64, u64, struct pt_regs *);
extern void apecs_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* core_cia.c */
extern struct pci_ops cia_pci_ops;
extern void cia_init_arch(void);
extern void cia_machine_check(u64, u64, struct pt_regs *);
extern void cia_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* core_irongate.c */
extern struct pci_ops irongate_pci_ops;
extern int irongate_pci_clr_err(void);
extern void irongate_init_arch(void);
extern void irongate_machine_check(u64, u64, struct pt_regs *);
#define irongate_pci_tbi ((void *)0)
/* core_lca.c */
extern struct pci_ops lca_pci_ops;
extern void lca_init_arch(void);
extern void lca_machine_check(u64, u64, struct pt_regs *);
extern void lca_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* core_mcpcia.c */
extern struct pci_ops mcpcia_pci_ops;
extern void mcpcia_init_arch(void);
extern void mcpcia_init_hoses(void);
extern void mcpcia_machine_check(u64, u64, struct pt_regs *);
extern void mcpcia_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* core_polaris.c */
extern struct pci_ops polaris_pci_ops;
extern void polaris_init_arch(void);
extern void polaris_machine_check(u64, u64, struct pt_regs *);
#define polaris_pci_tbi ((void *)0)
/* core_pyxis.c */
extern struct pci_ops pyxis_pci_ops;
extern void pyxis_init_arch(void);
extern void pyxis_machine_check(u64, u64, struct pt_regs *);
extern void pyxis_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* core_t2.c */
extern struct pci_ops t2_pci_ops;
extern void t2_init_arch(void);
extern void t2_machine_check(u64, u64, struct pt_regs *);
#define t2_pci_tbi ((void *)0)
/* core_tsunami.c */
extern struct pci_ops tsunami_pci_ops;
extern void tsunami_init_arch(void);
extern void tsunami_kill_arch(int);
extern void tsunami_machine_check(u64, u64, struct pt_regs *);
extern void tsunami_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
/* setup.c */
extern unsigned long srm_hae;
...
...@@ -36,7 +36,9 @@
* critical part is the inline stuff in <asm/semaphore.h>
* where we want to avoid any extra jumps and calls.
*/
void __up(struct semaphore *sem)
void
__up(struct semaphore *sem)
{
wake_one_more(sem);
wake_up(&sem->wait);
...@@ -63,7 +65,7 @@ void __up(struct semaphore *sem)
#define DOWN_VAR \
struct task_struct *tsk = current; \
wait_queue_t wait; \
init_waitqueue_entry(&wait, tsk);
init_waitqueue_entry(&wait, tsk)
#define DOWN_HEAD(task_state) \
\
...@@ -92,23 +94,27 @@ void __up(struct semaphore *sem)
tsk->state = (task_state); \
} \
tsk->state = TASK_RUNNING; \
remove_wait_queue(&sem->wait, &wait);
remove_wait_queue(&sem->wait, &wait)
void __down(struct semaphore * sem)
void
__down(struct semaphore * sem)
{
DOWN_VAR
DOWN_VAR;
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
DOWN_HEAD(TASK_UNINTERRUPTIBLE);
if (waking_non_zero(sem))
break;
schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
DOWN_TAIL(TASK_UNINTERRUPTIBLE);
}
int __down_interruptible(struct semaphore * sem)
int
__down_interruptible(struct semaphore * sem)
{
int ret = 0;
DOWN_VAR
DOWN_VAR;
DOWN_HEAD(TASK_INTERRUPTIBLE)
DOWN_HEAD(TASK_INTERRUPTIBLE);
ret = waking_non_zero_interruptible(sem, tsk);
if (ret)
...@@ -119,11 +125,149 @@ int __down_interruptible(struct semaphore * sem)
break;
}
schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
DOWN_TAIL(TASK_INTERRUPTIBLE);
return ret;
}
int __down_trylock(struct semaphore * sem)
int
__down_trylock(struct semaphore * sem)
{
return waking_non_zero_trylock(sem);
}
/*
* RW Semaphores
*/
void
__down_read(struct rw_semaphore *sem, int count)
{
long tmp;
DOWN_VAR;
retry_down:
if (count < 0) {
/* Wait for the lock to become unbiased. Readers
are non-exclusive. */
/* This takes care of granting the lock. */
up_read(sem);
add_wait_queue(&sem->wait, &wait);
while (sem->count < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
if (sem->count >= 0)
break;
schedule();
}
remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
__asm __volatile (
" mb\n"
"1: ldl_l %0,%1\n"
" subl %0,1,%2\n"
" subl %0,1,%0\n"
" stl_c %2,%1\n"
" bne %2,2f\n"
".section .text2,\"ax\"\n"
"2: br 1b\n"
".previous"
: "=r"(count), "=m"(sem->count), "=r"(tmp)
: : "memory");
if (count <= 0)
goto retry_down;
} else {
add_wait_queue(&sem->wait, &wait);
while (1) {
if (test_and_clear_bit(0, &sem->granted))
break;
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
if ((sem->granted & 1) == 0)
schedule();
}
remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
}
}
void
__down_write(struct rw_semaphore *sem, int count)
{
long tmp;
DOWN_VAR;
retry_down:
if (count + RW_LOCK_BIAS < 0) {
up_write(sem);
add_wait_queue_exclusive(&sem->wait, &wait);
while (sem->count < 0) {
set_task_state(tsk, (TASK_UNINTERRUPTIBLE
| TASK_EXCLUSIVE));
if (sem->count >= RW_LOCK_BIAS)
break;
schedule();
}
remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
__asm __volatile (
" mb\n"
"1: ldl_l %0,%1\n"
" ldah %2,%3(%0)\n"
" ldah %0,%3(%0)\n"
" stl_c %2,%1\n"
" bne %2,2f\n"
".section .text2,\"ax\"\n"
"2: br 1b\n"
".previous"
: "=r"(count), "=m"(sem->count), "=r"(tmp)
: "i"(-(RW_LOCK_BIAS >> 16))
: "memory");
if (count != 0)
goto retry_down;
} else {
/* Put ourselves at the end of the list. */
add_wait_queue_exclusive(&sem->write_bias_wait, &wait);
while (1) {
if (test_and_clear_bit(1, &sem->granted))
break;
set_task_state(tsk, (TASK_UNINTERRUPTIBLE
| TASK_EXCLUSIVE));
if ((sem->granted & 2) == 0)
schedule();
}
remove_wait_queue(&sem->write_bias_wait, &wait);
tsk->state = TASK_RUNNING;
/* If the lock is currently unbiased, awaken the sleepers.
FIXME: This wakes up the readers early in a bit of a
stampede -> bad! */
if (sem->count >= 0)
wake_up(&sem->wait);
}
}
void
__do_rwsem_wake(struct rw_semaphore *sem, int readers)
{
if (readers) {
if (test_and_set_bit(0, &sem->granted))
BUG();
wake_up(&sem->wait);
} else {
if (test_and_set_bit(1, &sem->granted))
BUG();
wake_up(&sem->write_bias_wait);
}
}
...@@ -96,6 +96,13 @@ struct screen_info screen_info = {
orig_video_points: 16
};
/*
* The direct map I/O window, if any. This should be the same
* for all busses, since it's used by virt_to_bus.
*/
unsigned long __direct_map_base;
unsigned long __direct_map_size;
/*
* Declare all of the machine vectors.
...@@ -225,15 +232,8 @@ setup_memory(void)
max_low_pfn = end;
}
/* Enforce maximum of 2GB even if there is more. Blah. */
if (max_low_pfn > PFN_MAX)
max_low_pfn = PFN_MAX;
printk("max_low_pfn %ld\n", max_low_pfn);
/* Find the end of the kernel memory. */
start_pfn = PFN_UP(virt_to_phys(_end));
printk("_end %p, start_pfn %ld\n", _end, start_pfn);
bootmap_start = -1;
try_again:
...@@ -243,7 +243,6 @@ setup_memory(void)
/* We need to know how many physically contigous pages
we'll need for the bootmap. */
bootmap_pages = bootmem_bootmap_pages(max_low_pfn);
printk("bootmap size: %ld pages\n", bootmap_pages);
/* Now find a good region where to allocate the bootmap. */
for_each_mem_cluster(memdesc, cluster, i) {
...@@ -261,8 +260,6 @@ setup_memory(void)
if (end > max_low_pfn)
end = max_low_pfn;
if (end - start >= bootmap_pages) {
printk("allocating bootmap in area %ld:%ld\n",
start, start+bootmap_pages);
bootmap_start = start;
break;
}
...@@ -270,8 +267,6 @@ setup_memory(void)
if (bootmap_start == -1) {
max_low_pfn >>= 1;
printk("bootmap area not found now trying with %ld pages\n",
max_low_pfn);
goto try_again;
}
...@@ -304,8 +299,6 @@ setup_memory(void)
/* Reserve the bootmap memory. */
reserve_bootmem(PFN_PHYS(bootmap_start), bootmap_size);
printk("reserving bootmap %ld:%ld\n", bootmap_start,
bootmap_start + PFN_UP(bootmap_size));
#ifdef CONFIG_BLK_DEV_INITRD
initrd_start = INITRD_START;
...@@ -328,27 +321,26 @@ setup_memory(void)
#endif /* CONFIG_BLK_DEV_INITRD */
}
int __init page_is_ram(unsigned long pfn)
{
struct memclust_struct * cluster;
struct memdesc_struct * memdesc;
int i;
memdesc = (struct memdesc_struct *) (hwrpb->mddt_offset + (unsigned long) hwrpb);
for_each_mem_cluster(memdesc, cluster, i)
{
if (pfn >= cluster->start_pfn &&
pfn < cluster->start_pfn + cluster->numpages)
{
if (cluster->usage & 3)
return 0;
else
return 1;
}
}
return 0;
}
int __init
page_is_ram(unsigned long pfn)
{
struct memclust_struct * cluster;
struct memdesc_struct * memdesc;
int i;
memdesc = (struct memdesc_struct *)
(hwrpb->mddt_offset + (unsigned long) hwrpb);
for_each_mem_cluster(memdesc, cluster, i)
{
if (pfn >= cluster->start_pfn &&
pfn < cluster->start_pfn + cluster->numpages) {
return (cluster->usage & 3) ? 0 : 1;
}
}
return 0;
}
#undef PFN_UP
#undef PFN_DOWN
#undef PFN_PHYS
...@@ -369,8 +361,7 @@ setup_arch(char **cmdline_p)
/* Hack for Jensen... since we're restricted to 8 or 16 chars for
boot flags depending on the boot mode, we need some shorthand.
This should do for installation. Later we'll add other
abbreviations as well... */
This should do for installation. */
if (strcmp(COMMAND_LINE, "INSTALL") == 0) {
strcpy(command_line, "root=/dev/fd0 load_ramdisk=1");
} else {
...
...@@ -70,7 +70,7 @@ int smp_num_cpus = 1; /* Number that came online. */
int smp_threads_ready; /* True once the per process idle is forked. */
cycles_t cacheflush_time;
int cpu_number_map[NR_CPUS];
int __cpu_number_map[NR_CPUS];
int __cpu_logical_map[NR_CPUS];
extern void calibrate_delay(void);
...@@ -432,7 +432,7 @@ smp_boot_one_cpu(int cpuid, int cpunum)
idle->processor = cpuid;
__cpu_logical_map[cpunum] = cpuid;
cpu_number_map[cpuid] = cpunum;
__cpu_number_map[cpuid] = cpunum;
idle->has_cpu = 1; /* we schedule the first task manually */
del_from_runqueue(idle);
...@@ -461,7 +461,7 @@ smp_boot_one_cpu(int cpuid, int cpunum)
/* we must invalidate our stuff as we failed to boot the CPU */
__cpu_logical_map[cpunum] = -1;
cpu_number_map[cpuid] = -1;
__cpu_number_map[cpuid] = -1;
/* the idle task is local to us so free it as we don't use it */
free_task_struct(idle);
...@@ -534,11 +534,11 @@ smp_boot_cpus(void)
unsigned long bogosum;
/* Take care of some initial bookkeeping. */
memset(cpu_number_map, -1, sizeof(cpu_number_map));
memset(__cpu_number_map, -1, sizeof(__cpu_number_map));
memset(__cpu_logical_map, -1, sizeof(__cpu_logical_map));
memset(ipi_data, 0, sizeof(ipi_data));
cpu_number_map[smp_boot_cpuid] = 0;
__cpu_number_map[smp_boot_cpuid] = 0;
__cpu_logical_map[0] = smp_boot_cpuid;
current->processor = smp_boot_cpuid;
...
...@@ -110,12 +110,21 @@ jensen_init_irq(void)
enable_irq(2); /* enable cascade */
}
static void
jensen_init_arch(void)
{
__direct_map_base = 0;
__direct_map_size = 0xffffffff;
}
static void
jensen_machine_check (u64 vector, u64 la, struct pt_regs *regs)
{
printk(KERN_CRIT "Machine check\n");
}
#define jensen_pci_tbi ((void*)0)
/*
* The System Vector
...@@ -136,7 +145,7 @@ struct alpha_machine_vector jensen_mv __initmv = {
ack_irq: common_ack_irq,
device_interrupt: jensen_device_interrupt,
init_arch: NULL,
init_arch: jensen_init_arch,
init_irq: jensen_init_irq,
init_pit: common_init_pit,
init_pci: NULL,
...
...@@ -54,52 +54,6 @@ sio_init_irq(void)
enable_irq(2); /* enable cascade */
}
static inline void __init
xl_init_arch(void)
{
struct pci_controler *hose;
/*
* Set up the PCI->physical memory translation windows. For
* the XL we *must* use both windows, in order to maximize the
* amount of physical memory that can be used to DMA from the
* ISA bus, and still allow PCI bus devices access to all of
* host memory.
*
* See <asm/apecs.h> for window bases and sizes.
*
* This restriction due to the true XL motherboards' 82379AB SIO
* PCI<->ISA bridge chip which passes only 27 bits of address...
*/
*(vuip)APECS_IOC_PB1R = 1<<19 | (APECS_XL_DMA_WIN1_BASE & 0xfff00000U);
*(vuip)APECS_IOC_PM1R = (APECS_XL_DMA_WIN1_SIZE - 1) & 0xfff00000U;
*(vuip)APECS_IOC_TB1R = 0;
*(vuip)APECS_IOC_PB2R = 1<<19 | (APECS_XL_DMA_WIN2_BASE & 0xfff00000U);
*(vuip)APECS_IOC_PM2R = (APECS_XL_DMA_WIN2_SIZE - 1) & 0xfff00000U;
*(vuip)APECS_IOC_TB2R = 0;
/*
* Finally, clear the HAXR2 register, which gets used for PCI
* Config Space accesses. That is the way we want to use it,
* and we do not want to depend on what ARC or SRM might have
* left behind...
*/
*(vuip)APECS_IOC_HAXR2 = 0; mb();
/*
* Create our single hose.
*/
hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
hose->config_space = LCA_CONF;
hose->index = 0;
}
static inline void __init
alphabook1_init_arch(void)
{
@@ -448,7 +402,7 @@ struct alpha_machine_vector xl_mv __initmv = {
DO_EV4_MMU,
DO_DEFAULT_RTC,
DO_APECS_IO,
-BUS(apecs_xl),
+BUS(apecs),
machine_check: apecs_machine_check,
max_dma_address: ALPHA_XL_MAX_DMA_ADDRESS,
min_io_address: DEFAULT_IO_BASE,
@@ -460,7 +414,7 @@ struct alpha_machine_vector xl_mv __initmv = {
ack_irq: common_ack_irq,
device_interrupt: isa_device_interrupt,
-init_arch: xl_init_arch,
+init_arch: lca_init_arch,
init_irq: sio_init_irq,
init_pit: common_init_pit,
init_pci: noname_init_pci,
...
/*
* linux/arch/alpha/lib/semaphore.S
*
-* Copyright (C) 1999 Richard Henderson
+* Copyright (C) 1999, 2000 Richard Henderson
*/
/*
@@ -181,3 +181,168 @@ __up_wakeup:
lda $30, 20*8($30)
ret $31, ($28), 0
.end __up_wakeup
/* __down_read_failed takes the semaphore in $24, count in $25;
clobbers $24, $25 and $28. */
.globl __down_read_failed
.ent __down_read_failed
__down_read_failed:
ldgp $29,0($27)
lda $30, -18*8($30)
stq $28, 0*8($30)
stq $0, 1*8($30)
stq $1, 2*8($30)
stq $2, 3*8($30)
stq $3, 4*8($30)
stq $4, 5*8($30)
stq $5, 6*8($30)
stq $6, 7*8($30)
stq $7, 8*8($30)
stq $16, 9*8($30)
stq $17, 10*8($30)
stq $18, 11*8($30)
stq $19, 12*8($30)
stq $20, 13*8($30)
stq $21, 14*8($30)
stq $22, 15*8($30)
stq $23, 16*8($30)
stq $26, 17*8($30)
.frame $30, 18*8, $28
.prologue 1
mov $24, $16
mov $25, $17
jsr __down_read
ldq $28, 0*8($30)
ldq $0, 1*8($30)
ldq $1, 2*8($30)
ldq $2, 3*8($30)
ldq $3, 4*8($30)
ldq $4, 5*8($30)
ldq $5, 6*8($30)
ldq $6, 7*8($30)
ldq $7, 8*8($30)
ldq $16, 9*8($30)
ldq $17, 10*8($30)
ldq $18, 11*8($30)
ldq $19, 12*8($30)
ldq $20, 13*8($30)
ldq $21, 14*8($30)
ldq $22, 15*8($30)
ldq $23, 16*8($30)
ldq $26, 17*8($30)
lda $30, 18*8($30)
ret $31, ($28), 0
.end __down_read_failed
/* __down_write_failed takes the semaphore in $24, count in $25;
clobbers $24, $25 and $28. */
.globl __down_write_failed
.ent __down_write_failed
__down_write_failed:
ldgp $29,0($27)
lda $30, -20*8($30)
stq $28, 0*8($30)
stq $0, 1*8($30)
stq $1, 2*8($30)
stq $2, 3*8($30)
stq $3, 4*8($30)
stq $4, 5*8($30)
stq $5, 6*8($30)
stq $6, 7*8($30)
stq $7, 8*8($30)
stq $16, 9*8($30)
stq $17, 10*8($30)
stq $18, 11*8($30)
stq $19, 12*8($30)
stq $20, 13*8($30)
stq $21, 14*8($30)
stq $22, 15*8($30)
stq $23, 16*8($30)
stq $26, 17*8($30)
.frame $30, 18*8, $28
.prologue 1
mov $24, $16
mov $25, $17
jsr __down_write
ldq $28, 0*8($30)
ldq $0, 1*8($30)
ldq $1, 2*8($30)
ldq $2, 3*8($30)
ldq $3, 4*8($30)
ldq $4, 5*8($30)
ldq $5, 6*8($30)
ldq $6, 7*8($30)
ldq $7, 8*8($30)
ldq $16, 9*8($30)
ldq $17, 10*8($30)
ldq $18, 11*8($30)
ldq $19, 12*8($30)
ldq $20, 13*8($30)
ldq $21, 14*8($30)
ldq $22, 15*8($30)
ldq $23, 16*8($30)
ldq $26, 17*8($30)
lda $30, 18*8($30)
ret $31, ($28), 0
.end __down_write_failed
/* __rwsem_wake takes the semaphore in $24, readers in $25;
clobbers $24, $25, and $28. */
.globl __rwsem_wake
.ent __rwsem_wake
__rwsem_wake:
ldgp $29,0($27)
lda $30, -18*8($30)
stq $28, 0*8($30)
stq $0, 1*8($30)
stq $1, 2*8($30)
stq $2, 3*8($30)
stq $3, 4*8($30)
stq $4, 5*8($30)
stq $5, 6*8($30)
stq $6, 7*8($30)
stq $7, 8*8($30)
stq $16, 9*8($30)
stq $17, 10*8($30)
stq $18, 11*8($30)
stq $19, 12*8($30)
stq $20, 13*8($30)
stq $21, 14*8($30)
stq $22, 15*8($30)
stq $23, 16*8($30)
stq $26, 17*8($30)
.frame $30, 18*8, $28
.prologue 1
mov $24, $16
mov $25, $17
jsr __do_rwsem_wake
ldq $28, 0*8($30)
ldq $0, 1*8($30)
ldq $1, 2*8($30)
ldq $2, 3*8($30)
ldq $3, 4*8($30)
ldq $4, 5*8($30)
ldq $5, 6*8($30)
ldq $6, 7*8($30)
ldq $7, 8*8($30)
ldq $16, 9*8($30)
ldq $17, 10*8($30)
ldq $18, 11*8($30)
ldq $19, 12*8($30)
ldq $20, 13*8($30)
ldq $21, 14*8($30)
ldq $22, 15*8($30)
ldq $23, 16*8($30)
ldq $26, 17*8($30)
lda $30, 18*8($30)
ret $31, ($28), 0
.end __rwsem_wake
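The three out-of-line stubs added above share one pattern: spill every scratch register, move the semaphore ($24) and the count ($25) into the standard Alpha argument registers ($16, $17), call the C helper, then restore and return. What the helpers enforce is ordinary reader-writer exclusion. A minimal sketch of that invariant follows — a hypothetical simplification for illustration only, since the real implementation packs its state into a single atomic count and sleeps rather than failing:

```c
#include <assert.h>

/* Hypothetical model of the exclusion that __down_read, __down_write
 * and __do_rwsem_wake maintain: any number of readers may hold the
 * semaphore together, a writer excludes everyone.  Not the kernel's
 * actual encoding. */
struct rwsem_model {
	int readers;	/* readers currently inside */
	int writer;	/* 1 if a writer holds it */
};

static int try_down_read(struct rwsem_model *s)
{
	if (s->writer)
		return 0;	/* contended: would take the slow path */
	s->readers++;
	return 1;
}

static int try_down_write(struct rwsem_model *s)
{
	if (s->writer || s->readers)
		return 0;	/* contended: would take the slow path */
	s->writer = 1;
	return 1;
}

static void up_read(struct rwsem_model *s)  { s->readers--; }
static void up_write(struct rwsem_model *s) { s->writer = 0; }
```

The "failed" entry points above correspond to the contended cases, where this sketch returns 0 and the kernel instead queues the caller to sleep.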
@@ -82,4 +82,6 @@ SECTIONS
.debug_funcnames 0 : { *(.debug_funcnames) }
.debug_typenames 0 : { *(.debug_typenames) }
.debug_varnames 0 : { *(.debug_varnames) }
/DISCARD/ : { *(.text.exit) *(.data.exit) }
}
@@ -12,10 +12,6 @@
* the page directory. [According to comments etc elsewhere on a compressed
* kernel it will end up at 0x1000 + 1Mb I hope so as I assume this. - AC]
*
* In SMP mode we keep this page safe. Really we ought to shuffle things and
* put the trampoline here. - AC. An SMP trampoline enters with %cx holding
* the stack base.
*
* Page 0 is deliberately kept safe, since System Management Mode code in
* laptops may need to access the BIOS data stored there. This is also
* useful for future device drivers that either access the BIOS via VM86
@@ -41,24 +37,7 @@ startup_32:
movl %ax,%es
movl %ax,%fs
movl %ax,%gs
#ifdef __SMP__
orw %bx,%bx # What state are we in BX=1 for SMP
# 0 for boot
jz 2f # Initial boot
/*
* We are trampolining an SMP processor
*/
mov %ax,%ss
xorl %eax,%eax # Back to 0
mov %cx,%ax # SP low 16 bits
movl %eax,%esp
pushl $0 # Clear NT
popfl
ljmp $(__KERNEL_CS), $0x100000 # Into C and sanity
2:
#endif
lss SYMBOL_NAME(stack_start),%esp
xorl %eax,%eax
1: incl %eax		# check that A20 really IS enabled
...
@@ -228,6 +228,7 @@ CONFIG_SCSI_NCR53C8XX_SYNC=20
# CONFIG_SCSI_QLOGIC_FAS is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_SEAGATE is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_T128 is not set
@@ -429,7 +430,6 @@ CONFIG_AUTOFS4_FS=y
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_BFS_FS_WRITE is not set
# CONFIG_FAT_FS is not set
# CONFIG_MSDOS_FS is not set
# CONFIG_UMSDOS_FS is not set
...
@@ -212,17 +212,6 @@ is386: pushl %ecx	# restore original EFLAGS
orl $2,%eax		# set MP
2: movl %eax,%cr0
call check_x87
#ifdef __SMP__
movb ready,%al # First CPU if 0
orb %al,%al
jz 4f # First CPU skip this stuff
movl %cr4,%eax # Turn on 4Mb pages
orl $16,%eax
movl %eax,%cr4
movl %cr3,%eax # Intel specification clarification says
movl %eax,%cr3 # to do this. Maybe it makes a difference.
# Who knows ?
#endif
4:
#ifdef __SMP__
incb ready
...
@@ -75,7 +75,7 @@
#include <asm/e820.h>
#include <asm/dma.h>
#include <asm/mpspec.h>
#include <asm/mmu_context.h>
/*
* Machine setup..
*/
@@ -1543,6 +1543,10 @@ void cpu_init (void)
*/
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
if(current->mm)
BUG();
enter_lazy_tlb(&init_mm, current, nr);
t->esp0 = current->thread.esp0;
set_tss_desc(nr,t);
gdt_table[__TSS(nr)].b &= 0xfffffdff;
...
@@ -55,7 +55,7 @@ r_base = .
jmp flush_instr
flush_instr:
ljmpl $__KERNEL_CS, $0x00100000
-# jump to startup_32
+# jump to startup_32 in arch/i386/kernel/head.S
idt_48:
.word 0				# idt limit = 0
...