Commit c198a0a4 authored by Greg Kroah-Hartman's avatar Greg Kroah-Hartman

Merge gregkh@kernel.bkbits.net:/home/gregkh/linux/pci-2.5

into kroah.com:/home/greg/linux/BK/pci-2.5
parents fffd9155 5828287e
...@@ -11,7 +11,7 @@ DOCBOOKS := wanbook.sgml z8530book.sgml mcabook.sgml videobook.sgml \
	    kernel-locking.sgml via-audio.sgml mousedrivers.sgml \
	    deviceiobook.sgml procfs-guide.sgml tulip-user.sgml \
	    writing_usb_driver.sgml scsidrivers.sgml sis900.sgml \
-	    kernel-api.sgml journal-api.sgml lsm.sgml
+	    kernel-api.sgml journal-api.sgml lsm.sgml usb.sgml
###
# The build process is as follows (targets):
......
...@@ -228,102 +228,6 @@ X!Isound/sound_firmware.c
-->
</chapter>
<chapter id="usb">
<title>USB Devices</title>
<para>Drivers for USB devices talk to the "usbcore" APIs, and are
exposed through driver frameworks such as block, character,
or network devices.
There are two types of public "usbcore" APIs: those intended for
general driver use, and those which are only public to drivers that
are part of the core.
The drivers that are part of the core are involved in managing a USB bus.
They include the "hub" driver, which manages trees of USB devices, and
several different kinds of "host controller" driver (HCD), which control
individual busses.
</para>
<para>The device model seen by USB drivers is relatively complex.
</para>
<itemizedlist>
<listitem><para>USB supports four kinds of data transfer
(control, bulk, interrupt, and isochronous). Two transfer
types use bandwidth as it's available (control and bulk),
while the other two types of transfer (interrupt and isochronous)
are scheduled to provide guaranteed bandwidth.
</para></listitem>
<listitem><para>The device description model includes one or more
"configurations" per device, only one of which is active at a time.
</para></listitem>
<listitem><para>Configurations have one or more "interfaces", each
of which may have "alternate settings". Interfaces may be
standardized by USB "Class" specifications, or may be specific to
a vendor or device.</para>
<para>USB device drivers actually bind to interfaces, not devices.
Think of them as "interface drivers", though you
may not see many devices where the distinction is important.
Most USB devices are simple, with only one configuration,
one interface, and one alternate setting.
</para></listitem>
<listitem><para>Interfaces have one or more "endpoints", each of
which supports one type and direction of data transfer such as
"bulk out" or "interrupt in". The entire configuration may have
up to sixteen endpoints in each direction, allocated as needed
among all the interfaces.
</para></listitem>
<listitem><para>Data transfer on USB is packetized; each endpoint
has a maximum packet size.
Drivers must often be aware of conventions such as flagging the end
of bulk transfers using "short" (including zero length) packets.
</para></listitem>
<listitem><para>The Linux USB API supports synchronous calls for
control and bulk messaging.
It also supports asynchronous calls for all kinds of data transfer,
using request structures called "URBs" (USB Request Blocks).
</para></listitem>
</itemizedlist>
<para>Accordingly, the USB Core API exposed to device drivers
covers quite a lot of territory. You'll probably need to consult
the USB 2.0 specification, available online from www.usb.org at
no cost, as well as class or device specifications.
</para>
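<para>As an illustration only (not part of the reference sections
below), a synchronous bulk-out transfer might look like the sketch
that follows; the endpoint number, buffer handling, and timeout are
assumptions made for the example, not requirements of the API.
</para>
<programlisting>
/* A minimal sketch: send len bytes to bulk endpoint 1, waiting up
 * to one second (the timeout is given in jiffies).
 */
int send_block(struct usb_device *dev, void *buf, int len)
{
	int actual = 0;
	int status = usb_bulk_msg(dev, usb_sndbulkpipe(dev, 1),
				  buf, len, &amp;actual, HZ);
	return status ? status : actual;
}
</programlisting>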
<sect1><title>Data Types and Macros</title>
!Iinclude/linux/usb.h
</sect1>
<sect1><title>USB Core APIs</title>
!Edrivers/usb/core/urb.c
<!-- FIXME: Removed for now since no structured comments in source
X!Edrivers/usb/core/config.c
-->
!Edrivers/usb/core/message.c
!Edrivers/usb/core/file.c
!Edrivers/usb/core/usb.c
</sect1>
<sect1><title>Host Controller APIs</title>
<para>These APIs are only for use by host controller drivers,
most of which implement standard register interfaces such as
EHCI, OHCI, or UHCI.
</para>
!Edrivers/usb/core/hcd.c
!Edrivers/usb/core/hcd-pci.c
!Edrivers/usb/core/buffer.c
</sect1>
</chapter>
<chapter id="uart16x50">
<title>16x50 UART Driver</title>
!Edrivers/serial/core.c
......
This diff is collapsed.
...@@ -49,61 +49,14 @@ Preparing the Hardware
-----------------------------
 This document assumes you're using a Rev D or newer board running
-Redboot as the bootloader.
-
-The as-supplied RedBoot image appears to leave the first page of RAM
-in a corrupt state such that certain words in that page are unwritable
-and contain random data. The value of the data, and the location within
-the first page changes with each boot, but is generally in the range
-0xa0000150 to 0xa0000fff.
-
-You can grab the source from the ECOS CVS or you can get a prebuilt image
-from:
-
-ftp://source.mvista.com/pub/xscale/iop310/IQ80310/redboot.bin
-
-which is:
-
-# strings redboot.bin | grep bootstrap
-RedBoot(tm) bootstrap and debug environment, version UNKNOWN - built 14:58:21, Aug 15 2001
-
-md5sum of this version:
-bcb96edbc6f8e55b16c165930b6e4439  redboot.bin
-
-You have two options to program it:
-
-1. Using the FRU program (see the instructions in the user manual).
-
-2. Using a Linux host, with MTD support built into the host kernel:
-
-   - ensure that the RedBoot image is not locked (issue the following
-     command under the existing RedBoot image):
-     RedBoot> fis unlock -f 0 -l 0x40000
-   - switch S3-1 and S3-2 on.
-   - reboot the host
-   - login as root
-   - identify the 80310 card:
-     # lspci
-     ...
-     00:0c.1 Memory controller: Intel Corporation 80310 IOP [IO Processor] (rev 01)
-     in this example, bus 0, slot 0c, function 1.
-   - insert the MTD modules, and the PCI map module:
-     # insmod drivers/mtd/maps/pci.o
-   - locate the MTD device (using the bus, slot, function)
-     # cat /proc/mtd
-     dev:    size    erasesize  name
-     mtd0: 00800000 00020000 "00:0c.1"
-     in this example, it is mtd device 0. Yours will be different;
-     check carefully.
-   - program the flash
-     # cat redboot.bin > /dev/mtdblock0
-   - check the kernel message log for errors (some cat commands don't
-     error on failure)
-     # dmesg
-   - switch S3-1 and S3-2 off
-   - reboot host
-
-In any case, make sure you do an 'fis init' command once you boot with the new
+RedBoot as the bootloader. Note that the version of RedBoot provided
+with the boards has a major issue and you need to replace it with the
+latest RedBoot. You can grab the source from the ECOS CVS or you can
+get a prebuilt image and burn it in using FRU at:
+
+ftp://source.mvista.com/pub/xscale/iq80310/redboot.bin
+
+Make sure you do an 'fis init' command once you boot with the new
 RedBoot image.
...@@ -235,7 +188,7 @@ JFFS RedBoot partition mapped into the MTD partition scheme.
 You can grab a pre-built JFFS image to use as a root file system at:
-ftp://source.mvista.com/pub/xscale/iop310/IQ80310/jffs.img
+ftp://source.mvista.com/pub/xscale/iq80310/jffs.img
 For detailed info on using MTD and creating a JFFS image go to:
......
Board Overview
-----------------------------
The Worcester IQ80321 board is an evaluation platform for Intel's 80321 XScale
CPU (sometimes called IOP321 chipset).
The 80321 contains a single PCI hose (called the ATUs), a PCI-to-PCI bridge,
two DMA channels, I2C, I2O messaging unit, XOR unit for RAID operations,
a bus performance monitoring unit, and a memory controller with ECC features.
For more information on the board, see http://developer.intel.com/iio
Port Status
-----------------------------
Supported:
- MTD/JFFS/JFFS2 root
- NFS root
- RAMDISK root
- Serial port (ttyS0)
- Cache/TLB locking on 80321 CPU
- Performance monitoring unit on 80321 CPU
TODO:
- DMA engines
- 80321 Bus Performance Monitor
- Application Accelerator Unit (XOR engine for RAID)
- I2O Messaging Unit
- I2C unit
- SSP
Building the Kernel
-----------------------------
make iq80321_config
make oldconfig
make dep
make zImage
This will build an image set up for BOOTP/NFS root support. To change this,
just run make menuconfig and disable nfs root or add a "root=" option.
Preparing the Hardware
-----------------------------
Make sure you do an 'fis init' command once you boot with the new
RedBoot image.
Downloading Linux
-----------------------------
Assuming you have your development system setup to act as a bootp/dhcp
server and running tftp:
NOTE: The 80321 board uses a different default memory map than the 80310.
RedBoot> load -r -b 0x01008000 -m y
Once the download is completed:
RedBoot> go 0x01008000
There is a version of RedBoot floating around that has DHCP support, but
I've never been able to cleanly transfer a kernel image and have it run.
Root Devices
-----------------------------
A kernel is not useful without a root filesystem, and you have several
choices with this board: NFS root, RAMDISK, or JFFS/JFFS2. For development
purposes, it is suggested that you use NFS root for easy access to various
tools. Once you're ready to deploy, you'll probably want to utilize JFFS/JFFS2 on
the flash device.
MTD on the IQ80321
-----------------------------
Linux on the IQ80321 supports RedBoot FIS partitioning if it is enabled.
Out of the box, once you've done 'fis init' on RedBoot, you will get
the following partitioning scheme:
root@192.168.0.14:~# cat /proc/mtd
dev: size erasesize name
mtd0: 00040000 00020000 "RedBoot"
mtd1: 00040000 00020000 "RedBoot[backup]"
mtd2: 0075f000 00020000 "unallocated space"
mtd3: 00001000 00020000 "RedBoot config"
mtd4: 00020000 00020000 "FIS directory"
To create an FIS directory, you need to use the fis command in RedBoot.
As an example, you can burn the kernel into the flash once it's downloaded:
RedBoot> fis create -b 0x01008000 -l 0x8CBAC -r 0x01008000 -f 0x80000 kernel
... Erase from 0x00080000-0x00120000: .....
... Program from 0x01008000-0x01094bac at 0x00080000: .....
... Unlock from 0x007e0000-0x00800000: .
... Erase from 0x007e0000-0x00800000: .
... Program from 0x01fdf000-0x01fff000 at 0x007e0000: .
... Lock from 0x007e0000-0x00800000: .
RedBoot> fis list
Name FLASH addr Mem addr Length Entry point
RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
kernel 0x00080000 0x01008000 0x000A0000 0x00000000
This leads to the following Linux MTD setup:
root@192.168.0.14:~# cat /proc/mtd
dev: size erasesize name
mtd0: 00040000 00020000 "RedBoot"
mtd1: 00040000 00020000 "RedBoot[backup]"
mtd2: 000a0000 00020000 "kernel"
mtd3: 006bf000 00020000 "unallocated space"
mtd4: 00001000 00020000 "RedBoot config"
mtd5: 00020000 00020000 "FIS directory"
Note that there is not a 1:1 mapping of RedBoot partitions to
MTD partitions, as unused space also gets allocated into MTD partitions.
As an aside, the -r option when creating the Kernel entry allows you to
simply do an 'fis load kernel' to copy the image from flash into memory.
You can then do an 'fis go 0x01008000' to start Linux.
If you choose to use static partitioning instead of the RedBoot partitioning:
/dev/mtd0 0x00000000 - 0x0007ffff: Boot Monitor (512k)
/dev/mtd1 0x00080000 - 0x0011ffff: Kernel Image (640K)
/dev/mtd2 0x00120000 - 0x0071ffff: File System (6M)
/dev/mtd3 0x00720000 - 0x00800000: RedBoot Reserved (896K)
To use a JFFS1/2 root FS, you need to download the JFFS image using either
tftp or ymodem, and then copy it to flash:
RedBoot> load -r -b 0x01000000 /tftpboot/jffs.img
Raw file loaded 0x01000000-0x01600000
RedBoot> fis create -b 0x01000000 -l 0x600000 -f 0x120000 jffs
... Erase from 0x00120000-0x00720000: ..................................
... Program from 0x01000000-0x01600000 at 0x00120000: ..................
......................
... Unlock from 0x007e0000-0x00800000: .
... Erase from 0x007e0000-0x00800000: .
... Program from 0x01fdf000-0x01fff000 at 0x007e0000: .
... Lock from 0x007e0000-0x00800000: .
RedBoot> fis list
Name FLASH addr Mem addr Length Entry point
RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
kernel 0x00080000 0x01008000 0x000A0000 0x01008000
jffs 0x00120000 0x00120000 0x00600000 0x00000000
This looks like this in Linux:
root@192.168.0.14:~# cat /proc/mtd
dev: size erasesize name
mtd0: 00040000 00020000 "RedBoot"
mtd1: 00040000 00020000 "RedBoot[backup]"
mtd2: 000a0000 00020000 "kernel"
mtd3: 00600000 00020000 "jffs"
mtd4: 000bf000 00020000 "unallocated space"
mtd5: 00001000 00020000 "RedBoot config"
mtd6: 00020000 00020000 "FIS directory"
You need to boot the kernel once and watch the boot messages to see how the
JFFS RedBoot partition mapped into the MTD partition scheme.
You can grab a pre-built JFFS image to use as a root file system at:
ftp://source.mvista.com/pub/xscale/iq80310/jffs.img
For detailed info on using MTD and creating a JFFS image go to:
http://www.linux-mtd.infradead.org.
For details on using RedBoot's FIS commands, type 'fis help' or consult
your RedBoot manual.
BUGS and ISSUES
-----------------------------
* As shipped from Intel, pre-production boards have two issues:
- The on-board Ethernet is disabled (S8E1-2 is off). You will need to turn it on.
- The PCIXCAPs are configured for a 100MHz clock, but the clock selected is
actually only 66MHz. This causes the wrong PLL multiplier to be used and the
board only runs at 400MHz instead of 600MHz. The way to observe this is to
use an independent clock to time a "sleep 10" command from the prompt. If it
takes 15 seconds instead of 10, you are running at 400MHz.
- The experimental IOP310 drivers for the AAU, DMA, etc. are not supported yet.
Contributors
-----------------------------
The port to the IQ80321 was performed by:
Rory Bolt <rorybolt@pacbell.net> - Initial port, debugging.
This port was based on the IQ80310 port with the following contributors:
Nicolas Pitre <nico@cam.org> - Initial port, cleanup, debugging
Matt Porter <mporter@mvista.com> - PCI subsystem development, debugging
Tim Sanders <tsanders@sanders.org> - Initial PCI code
Deepak Saxena <dsaxena@mvista.com> - Cleanup, debug, cache lock, PMU
The port is currently maintained by Deepak Saxena <dsaxena@mvista.com>
-----------------------------
Enjoy.
Support functions for the Intel 80310 AAU
===========================================
Dave Jiang <dave.jiang@intel.com>
Last updated: 09/18/2001
The Intel 80312 companion chip in the 80310 chipset contains an AAU. The
AAU is capable of processing up to 8 data block sources and performing XOR
operations on them. This unit is typically used to accelerate XOR
operations used by RAID storage device drivers such as RAID 5. This
API is designed to provide a set of functions to take advantage of the
AAU. The AAU can also be used to transfer data blocks and act as a memory
copier. The AAU transfers memory faster than a copy performed by the CPU,
so it is recommended to use the AAU for memory copies.
------------------
int aau_request(u32 *aau_context, const char *device_id);
This function allows the user to acquire control of the AAU. The
function will return an AAU context to the user and allocate
an interrupt for the AAU. The user must pass the context as a parameter to
the various AAU API calls.
int aau_queue_buffer(u32 aau_context, aau_head_t *listhead);
This function starts the AAU operation. The user must create an SGL
header with an SGL attached. The format is presented below. The SGL is
built from kernel memory.
/* hardware descriptor */
typedef struct _aau_desc
{
u32 NDA; /* next descriptor address [READONLY] */
u32 SAR[AAU_SAR_GROUP]; /* src addrs */
u32 DAR; /* destination addr */
u32 BC; /* byte count */
u32 DC; /* descriptor control */
u32 SARE[AAU_SAR_GROUP]; /* extended src addrs */
} aau_desc_t;
/* user SGL format */
typedef struct _aau_sgl
{
aau_desc_t aau_desc; /* AAU HW Desc */
u32 status; /* status of SGL [READONLY] */
struct _aau_sgl *next; /* pointer to next SG [READONLY] */
void *dest; /* destination addr */
void *src[AAU_SAR_GROUP]; /* source addr[4] */
void *ext_src[AAU_SAR_GROUP]; /* ext src addr[4] */
u32 total_src; /* total number of source */
} aau_sgl_t;
/* header for user SGL */
typedef struct _aau_head
{
u32 total; /* total descriptors allocated */
u32 status; /* SGL status */
aau_sgl_t *list; /* ptr to head of list */
aau_callback_t callback; /* callback func ptr */
} aau_head_t;
The function will call aau_start() and start the AAU after it queues
the SGL to the processing queue. The function will then either:
a. Sleep on the wait queue aau->wait_q if no callback has been provided, or
b. Return, and then call the provided callback function when the DMA
interrupt has been triggered.
int aau_suspend(u32 aau_context);
Stops/Suspends the AAU operation
int aau_free(u32 aau_context);
Frees ownership of the AAU. Called when the AAU service is no longer needed.
aau_sgl_t * aau_get_buffer(u32 aau_context, int num_buf);
This function obtains an AAU SGL for the user. User must specify the number
of descriptors to be allocated in the chain that is returned.
void aau_return_buffer(u32 aau_context, aau_sgl_t *list);
This function returns the entire SGL back to the API after the user is done.
int aau_memcpy(void *dest, void *src, u32 size);
This function is a shortcut for the user to do a memory copy utilizing the
AAU, which performs better than the CPU for large block copies. It is
similar to a typical memcpy() call.
* User is responsible for the source address(es) and the destination address.
The source and destination should all be cached memory.
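For a plain copy, a minimal sketch (dest and src assumed to be kmalloc'd,
cached buffers, per the note above):

	if (aau_memcpy(dest, src, 1024) < 0)
		printk("AAU memory copy failed\n");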
void aau_test()
{
u32 aau;
char dev_id[] = "AAU";
int size = 2;
int err = 0;
aau_head_t *head;
aau_sgl_t *list;
u32 i;
u32 result = 0;
void *src, *dest;
printk("Starting AAU test\n");
if((err = aau_request(&aau, dev_id))<0)
{
printk("test - AAU request failed: %d\n", err);
return;
}
else
{
printk("test - AAU request successful\n");
}
head = kmalloc(sizeof(aau_head_t), GFP_KERNEL);
head->total = size;
head->status = 0;
head->callback = NULL;
list = aau_get_buffer(aau, size);
if(!list)
{
printk("Can't get buffers\n");
return;
}
head->list = list;
src = kmalloc(1024, GFP_KERNEL);
dest = kmalloc(1024, GFP_KERNEL);
while(list)
{
list->status = 0;
/* aau_desc is embedded in aau_sgl_t, so its fields are accessed with '.' */
list->aau_desc.SAR[0] = (u32)src;
list->aau_desc.DAR = (u32)dest;
list->aau_desc.BC = 1024;
/* see iop310-aau.h for more DCR commands */
list->aau_desc.DC = AAU_DCR_WRITE | AAU_DCR_BLKCTRL_1_DF;
if(!list->next)
{
/* last descriptor: also enable the completion interrupt */
list->aau_desc.DC |= AAU_DCR_IE;
break;
}
list = list->next;
}
printk("test- Queueing buffer for AAU operation\n");
err = aau_queue_buffer(aau, head);
if(err >= 0)
{
printk("AAU Queue Buffer is done...\n");
}
else
{
printk("AAU Queue Buffer failed...: %d\n", err);
}
#if 1
printk("freeing the AAU\n");
aau_return_buffer(aau, head->list);
aau_free(aau);
kfree(src);
kfree(dest);
kfree((void *)head);
#endif
}
All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
will be responsible if anything goes wrong. =)
TODO
____
* Testing
* Do zero-size AAU transfer/channel at init
so all we have to do is chaining
Support functions for the Intel 80310 DMA channels
==================================================
Dave Jiang <dave.jiang@intel.com>
Last updated: 09/18/2001
The Intel 80310 XScale chipset provides 3 DMA channels via the 80312 I/O
companion chip. Two of them reside on the primary PCI bus and one on the
secondary PCI bus.
Unfortunately, the DMA API provided is not compatible with the generic
interface in the ARM tree due to how the 80312 DMACs work. Hopefully a
software interface can be written in the near future to bridge the
differences. The DMA API has been modeled after Nicolas Pitre's SA11x0
DMA API, so the two will look somewhat similar.
80310 DMA API
-------------
int dma_request(dmach_t channel, const char *device_id);
This function will attempt to allocate the channel depending on what the
user requests:
IOP310_DMA_P0: PCI Primary 1
IOP310_DMA_P1: PCI Primary 2
IOP310_DMA_S0: PCI Secondary 1
Once the user allocates the DMA channel, it is owned until released. Other
users can also use the same DMA channel, but no new resources will be
allocated. The function will return the allocated channel number if successful.
int dma_queue_buffer(dmach_t channel, dma_sghead_t *listhead);
The user will construct an SGL of the form shown below:
/*
* Scattered Gather DMA List for user
*/
typedef struct _dma_desc
{
u32 NDAR; /* next descriptor address [READONLY] */
u32 PDAR; /* PCI address */
u32 PUADR; /* upper PCI address */
u32 LADR; /* local address */
u32 BC; /* byte count */
u32 DC; /* descriptor control */
} dma_desc_t;
typedef struct _dma_sgl
{
dma_desc_t dma_desc; /* DMA descriptor */
u32 status; /* descriptor status [READONLY] */
u32 data; /* user defined data */
struct _dma_sgl *next; /* next descriptor [READONLY] */
} dma_sgl_t;
/* dma sgl head */
typedef struct _dma_head
{
u32 total; /* total elements in SGL */
u32 status; /* status of sgl */
u32 mode; /* read or write mode */
dma_sgl_t *list; /* pointer to list */
dma_callback_t callback; /* callback function */
} dma_head_t;
The user shall allocate user SGL elements by calling the function
dma_get_buffer(). This function will give the user SGL elements. The user
is responsible for creating the SGL head, however, and is also
responsible for allocating the memory for the DMA data. The following code
segment shows how a DMA operation can be performed:
#include <asm/arch/iop310-dma.h>
void dma_test(void)
{
char dev_id[] = "Primary 0";
dma_head_t *sgl_head = NULL;
dma_sgl_t *sgl = NULL;
int err = 0;
int channel = -1;
u32 *test_ptr = 0;
DECLARE_WAIT_QUEUE_HEAD(wait_q);
*(IOP310_ATUCR) = (IOP310_ATUCR_PRIM_OUT_ENAB |
IOP310_ATUCR_DIR_ADDR_ENAB);
channel = dma_request(IOP310_DMA_P0, dev_id);
sgl_head = (dma_head_t *)kmalloc(sizeof(dma_head_t), GFP_KERNEL);
sgl_head->callback = NULL; /* no callback created */
sgl_head->total = 2; /* allocating 2 DMA descriptors */
sgl_head->mode = (DMA_MOD_WRITE);
sgl_head->status = 0;
/* now we get the two descriptors */
sgl = dma_get_buffer(channel, 2);
/* we set the header to point to the list we allocated */
sgl_head->list = sgl;
/* allocate 1k of DMA data */
sgl->data = (u32)kmalloc(1024, GFP_KERNEL);
/* Local address is physical */
sgl->dma_desc.LADR = (u32)virt_to_phys((void *)sgl->data);
/* write to arbitrary location over the PCI bus */
sgl->dma_desc.PDAR = 0x00600000;
sgl->dma_desc.PUADR = 0;
sgl->dma_desc.BC = 1024;
/* set write & invalidate PCI command */
sgl->dma_desc.DC = DMA_DCR_PCI_MWI;
sgl->status = 0;
/* set a pattern */
memset((void *)sgl->data, 0xFF, 1024);
/* User's responsibility to keep buffers cached coherent */
cpu_dcache_clean(sgl->data, sgl->data + 1024);
sgl = sgl->next;
sgl->data = (u32)kmalloc(1024, GFP_KERNEL);
sgl->dma_desc.LADR = (u32)virt_to_phys((void *)sgl->data);
sgl->dma_desc.PDAR = 0x00610000;
sgl->dma_desc.PUADR = 0;
sgl->dma_desc.BC = 1024;
/* second descriptor has interrupt flag enabled */
sgl->dma_desc.DC = (DMA_DCR_PCI_MWI | DMA_DCR_IE);
/* must set end of chain flag */
sgl->status = DMA_END_CHAIN; /* DO NOT FORGET THIS!!!! */
memset((void *)sgl->data, 0x0f, 1024);
/* User's responsibility to keep buffers cached coherent */
cpu_dcache_clean(sgl->data, sgl->data + 1024);
/* queueing the buffer; this function will sleep since no callback was given */
err = dma_queue_buffer(channel, sgl_head);
/* now we are woken from DMA complete */
/* do data operations here */
/* free DMA data if necessary */
/* return the descriptors */
dma_return_buffer(channel, sgl_head->list);
/* free the DMA */
dma_free(channel);
kfree((void *)sgl_head);
}
dma_sgl_t * dma_get_buffer(dmach_t channel, int buf_num);
This call allocates DMA descriptors for the user.
void dma_return_buffer(dmach_t channel, dma_sgl_t *list);
This call returns the allocated descriptors back to the API.
int dma_suspend(dmach_t channel);
This call suspends any DMA transfer on the given channel.
int dma_resume(dmach_t channel);
This call resumes a DMA transfer which would have been stopped through
dma_suspend().
int dma_flush_all(dmach_t channel);
This completely flushes all queued buffers and on-going DMA transfers on a
given channel. This is called when DMA channel errors have occurred.
void dma_free(dmach_t channel);
This clears all activities on a given DMA channel and releases it for future
requests.
Buffer Allocation
-----------------
It is the user's responsibility to allocate, free, and keep track of the
allocated DMA data memory. Upon calling dma_queue_buffer() the user must
relinquish control of the buffers to the kernel and not change the
state of the buffers that it has passed to the kernel. The user will regain
control of the buffers when it has been woken up by the bottom half of
the DMA interrupt handler. The user can allocate cached buffers, or
non-cached buffers via pci_alloc_consistent(). It is the user's
responsibility to ensure that the data is cache coherent.
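For the non-cached option, a short sketch (pdev is whatever struct pci_dev
the driver already holds; the size is illustrative):

	dma_addr_t dma_handle;
	void *buf = pci_alloc_consistent(pdev, 1024, &dma_handle);

	if (buf) {
		/* buf is coherent, so no cpu_dcache_clean() is needed;
		 * dma_handle is the bus address to hand to the hardware */
		memset(buf, 0, 1024);
		pci_free_consistent(pdev, 1024, buf, dma_handle);
	}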
*Reminder*
The user is responsible for ensuring the ATU is set up properly for DMA
transfers.

All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
will be responsible if anything goes wrong.
Support functions for the Intel 80310 MU
===========================================
Dave Jiang <dave.jiang@intel.com>
Last updated: 10/11/2001
The messaging unit of the IOP310 contains 4 components and is utilized for
passing messages between the PCI agents on the primary bus and the Intel(R)
80200 CPU. The four components are:
Messaging Component
Doorbell Component
Circular Queues Component
Index Registers Component
Messaging Component:
Contains 4 32-bit registers, 2 inbound and 2 outbound. Writing to a register
asserts an interrupt on the PCI bus or to the 80200, depending on whether
the message is incoming or outgoing.
int mu_msg_request(u32 *mu_context);
Request the usage of Messaging Component. mu_context is written back by the
API. The MU context is passed to other Messaging calls as a parameter.
int mu_msg_set_callback(u32 mu_context, u8 reg, mu_msg_cb_t func);
Sets up the callback function for incoming messages. The callback can be
set up for outbound register 0, 1, or both outbound registers.
int mu_msg_post(u32 mu_context, u32 val, u8 reg);
Posts the message given in the val parameter. The reg parameter denotes
whether to use register 0 or 1.
int mu_msg_free(u32 mu_context, u8 mode);
Frees the usage of the messaging component. The mode can be specified as
soft or hard. In hard mode all resources are deallocated.
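As a sketch only (the callback signature and the mode constant below are
illustrative assumptions, not taken from a verified header):

static void my_msg_handler(u32 val)
{
	printk("inbound message: 0x%x\n", val);
}

void mu_msg_example(void)
{
	u32 mu;

	if (mu_msg_request(&mu) < 0)
		return;

	/* callback on register 0 */
	mu_msg_set_callback(mu, 0, my_msg_handler);

	/* post a message through register 0 */
	mu_msg_post(mu, 0x12345678, 0);

	/* soft-mode free (constant name assumed) */
	mu_msg_free(mu, MU_SOFT_MODE);
}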
Doorbell Component:
The doorbell registers consist of 1 inbound and 1 outbound register.
Depending on the bits being set, different interrupts are asserted.
int mu_db_request(u32 *mu_context);
Request the usage of the doorbell register.
int mu_db_set_callback(u32 mu_context, mu_db_cb_t func);
Sets up the inbound callback.
void mu_db_ring(u32 mu_context, u32 mask);
Write to the outbound db register with mask.
int mu_db_free(u32 mu_context);
Free the usage of doorbell component.
Circular Queues Component:
The circular queues component has 4 circular queues: inbound post, inbound
free, outbound post, and outbound free. These queues are used to pass messages.
int mu_cq_request(u32 *mu_context, u32 q_size);
Requests the usage of the queues. See the code comment header for q_size;
it tells the API how big the queues should be.
int mu_cq_inbound_init(u32 mu_context, mfa_list_t *list, u32 size,
mu_cq_cb_t func);
Initializes the inbound queues. The user must provide a list of free message
frames to be put in the inbound free queue, and the callback function to
handle the inbound messages.
int mu_cq_enable(u32 mu_context);
Enables the circular queues mechanism. Called once all the setup functions
are called.
u32 mu_cq_get_frame(u32 mu_context);
Obtain the address of an outbound free frame for the user.
int mu_cq_post_frame(u32 mu_context, u32 mfa);
Once the user has gotten a frame and put information in it, the frame can
be posted.
int mu_cq_free(u32 mu_context);
Free the usage of circular queues mechanism.
Index Registers Component:
The index register provides the mechanism to receive inbound messages.
int mu_ir_request(u32 *mu_context);
Requests usage of the Index Registers component.
int mu_ir_set_callback(u32 mu_context, mu_ir_cb_t callback);
Sets up the callback for inbound messages. The callback will receive the
value of the register that the IAR offsets to.
int mu_ir_free(u32 mu_context);
Free the usage of Index Registers component.
void mu_set_irq_threshold(u32 mu_context, int thresh);
Sets the IRQ threshold before relinquishing processing in IRQ space. The
default is set at 10 loops.
*NOTE: An example of a host driver that utilizes the MU can be found in the
Linux I2O driver, specifically i2o_pci and some functions of i2o_core. The
I2O driver only utilizes the circular queues mechanism. The other 3
components are simple enough that they can be easily set up. The MU API
provides no flow control for the messaging mechanism; flow control needs
to be established by a higher layer of software on the IOP or by the host
driver.
All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
will be responsible if anything goes wrong. =)
TODO
____
Intel's XScale Microarchitecture 80312 companion processor provides a
Performance Monitoring Unit (PMON) that can be utilized to provide
information that can be useful for fine tuning of code. This text
file describes the API that's been developed for use by Linux kernel
programmers. Note that to get the most usage out of the PMON,
I highly recommend getting the XScale reference manual from Intel[1]
and looking at chapter 12.
To use the PMON, you must #include <asm-arm/arch-iop310/pmon.h> in your
source file.
Since there's only one PMON, only one user can currently use the PMON
at a given time. To claim the PMON for usage, call iop310_pmon_claim() which
returns an identifier. When you are done using the PMON, call
iop310_pmon_release() with the id you were given earlier.
The PMON consists of 14 registers that can be used for performance measurements.
By combining different statistics, you can derive complex performance metrics.
To start the PMON, just call iop310_pmon_start(mode). The mode tells the
PMON what statistics to capture and can be one of:
IOP310_PMU_MODE0
Performance Monitoring Disabled
IOP310_PMU_MODE1
Primary PCI bus and internal agents (bridge, dma Ch0, dma Ch1, patu)
IOP310_PMU_MODE2
Secondary PCI bus and internal agents (bridge, dma Ch0, dma Ch1, patu)
IOP310_PMU_MODE3
Secondary PCI bus and internal agents (external masters 0..2 and Intel
80312 I/O companion chip)
IOP310_PMU_MODE4
Secondary PCI bus and internal agents (external masters 3..5 and Intel
80312 I/O companion chip)
IOP310_PMU_MODE5
Intel 80312 I/O companion chip internal bus, DMA Channels and Application
Accelerator
IOP310_PMU_MODE6
Intel 80312 I/O companion chip internal bus, PATU, SATU and Intel 80200
processor
IOP310_PMU_MODE7
Intel 80312 I/O companion chip internal bus, Primary PCI bus, Secondary
PCI bus and Secondary PCI agents (external masters 0..5 & Intel 80312 I/O
companion chip)
To get the results back, call iop310_pmon_stop(&results) where results is
defined as follows:
typedef struct _iop310_pmon_result
{
u32 timestamp; /* Global Time Stamp Register */
u32 timestamp_overflow; /* Time Stamp overflow count */
u32 event_count[14]; /* Programmable Event Counter
Registers 1-14 */
u32 event_overflow[14]; /* Overflow counter for PECR1-14 */
} iop310_pmon_res_t;
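As a short sketch of the claim/start/stop/release sequence described above
(return-value checking is omitted for brevity):

void pmon_example(void)
{
	int id;
	iop310_pmon_res_t results;

	id = iop310_pmon_claim();

	/* capture internal bus, DMA channel and AA statistics */
	iop310_pmon_start(IOP310_PMU_MODE5);
	/* ... code being measured ... */
	iop310_pmon_stop(&results);

	printk("timestamp: %u (overflows: %u)\n",
	       results.timestamp, results.timestamp_overflow);

	iop310_pmon_release(id);
}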
--
This code is still under development, so please feel free to send patches,
questions, comments, etc to me.
Deepak Saxena <dsaxena@mvista.com>
Intel's XScale Microarchitecture provides support for locking of data
and instructions into the appropriate caches. This file provides
an overview of the API that has been developed to take advantage of this
feature from kernel space. Note that there is NO support for user space
cache locking.
For example usage of this code, grab:
ftp://source.mvista.com/pub/xscale/cache-test.c
If you have any questions, comments, patches, etc, please contact me.
Deepak Saxena <dsaxena@mvista.com>
API DESCRIPTION
I. Header File
#include <asm/xscale-lock.h>
II. Cache Capability Discovery
SYNOPSIS
int cache_query(u8 cache_type,
struct cache_capabilities *pcache);
struct cache_capabilities
{
u32 flags; /* Flags defining capabilities */
u32 cache_size; /* Cache size in K (1024 bytes) */
u32 max_lock; /* Maximum lockable region in K */
}
/*
* Flags
*/
/*
* Bit 0: Cache lockability
* Bits 1-31: Reserved for future use
*/
#define CACHE_LOCKABLE 0x00000001 /* Cache can be locked */
/*
* Cache Types
*/
#define ICACHE 0x00
#define DCACHE 0x01
DESCRIPTION
This function fills out the pcache capability identifier for the
requested cache. cache_type is either DCACHE or ICACHE. This
function is not very useful at the moment as all XScale CPUs
have the same size cache, but it is provided for future XScale-based
processors that may have larger cache sizes.
RETURN VALUE
This function returns 0 if no error occurs, otherwise it returns
a negative, errno compatible value.
-EIO Unknown hardware error
III. Cache Locking
SYNOPSIS
int cache_lock(void *addr, u32 len, u8 cache_type, const char *desc);
DESCRIPTION
This function locks a physically contiguous portion of memory starting
at the virtual address pointed to by addr into the cache referenced
by cache_type.
The address of the data/instruction that is to be locked must be
aligned on a cache line boundary (L1_CACHE_ALIGNMENT).
The desc parameter is an optional (pass NULL if not used) human readable
descriptor of the locked memory region that is used by the cache
management code to build the /proc/cache_locks table.
Note that this function does not check whether the address is valid
or not before locking it into the cache. That duty is up to the
caller. Also, it does not check for duplicate or overlapping
entries.
RETURN VALUE
If the function is successful in locking the entry into cache, a
zero is returned.
If an error occurs, an appropriate error value is returned.
-EINVAL The memory address provided was not cache line aligned
-ENOMEM Could not allocate memory to complete operation
-ENOSPC Not enough space left on cache to lock in requested region
-EIO Unknown error
III. Cache Unlocking
SYNOPSIS
int cache_unlock(void *addr)
DESCRIPTION
This function unlocks a portion of memory that was previously locked
into either the I or D cache.
RETURN VALUE
If the entry is cleanly unlocked from the cache, a 0 is returned.
In the case of an error, an appropriate error is returned.
-ENOENT No entry with given address associated with this cache
-EIO Unknown error
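Putting the three calls together, a sketch (the region, length, and
descriptor string are illustrative):

#include <asm/xscale-lock.h>

int lock_fast_path(void *addr, u32 len)
{
	struct cache_capabilities cap;
	int ret;

	ret = cache_query(ICACHE, &cap);
	if (ret < 0 || !(cap.flags & CACHE_LOCKABLE))
		return ret;

	/* addr must be aligned on a cache line boundary */
	ret = cache_lock(addr, len, ICACHE, "fast path");
	if (ret < 0)
		return ret;

	/* ... later, when the code no longer needs to be resident ... */
	return cache_unlock(addr);
}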
Intel's XScale Microarchitecture processors provide a Performance
Monitoring Unit (PMU) that can be utilized to provide information
that can be useful for fine tuning of code. This text file describes
the API that's been developed for use by Linux kernel programmers.
When I have some extra time on my hands, I will extend the code to
provide support for user mode performance monitoring (which is
probably much more useful). Note that to get the most usage out
of the PMU, I highly recommend getting the XScale reference manual
from Intel and looking at chapter 12.
To use the PMU, you must #include <asm/xscale-pmu.h> in your source file.
Since there's only one PMU, only one user can currently use the PMU
at a given time. To claim the PMU for usage, call pmu_claim() which
returns an identifier. When you are done using the PMU, call
pmu_release() with the identifier that you were given by pmu_claim.
In addition, the PMU can only be used on XScale based systems that
provide an external timer. Systems that the PMU is currently supported
on are:
- Cyclone IQ80310
Before delving into how to use the PMU code, let's do a quick overview
of the PMU itself. The PMU consists of three registers that can be
used for performance measurements. The first is the CCNT register, which
provides the number of clock cycles elapsed since the PMU was started.
The next two registers, PMN0 and PMN1, are each user programmable to
provide 1 of 20 different performance statistics. By combining different
statistics, you can derive complex performance metrics.
To start the PMU, just call pmu_start(pmn0, pmn1). pmn0 and pmn1 tell
the PMU what statistics to capture and can each be one of:
EVT_ICACHE_MISS
Instruction fetches requiring access to external memory
EVT_ICACHE_NO_DELIVER
Instruction cache could not deliver an instruction. Either an
ICACHE miss or an instruction TLB miss.
EVT_ICACHE_DATA_STALL
Stall in execution due to a data dependency. This counter is
incremented each cycle in which the condition is present.
EVT_ITLB_MISS
Instruction TLB miss
EVT_DTLB_MISS
Data TLB miss
EVT_BRANCH
A branch instruction was executed and it may or may not have
changed program flow
EVT_BRANCH_MISS
A branch (B or BL instructions only) was mispredicted
EVT_INSTRUCTION
An instruction was executed
EVT_DCACHE_FULL_STALL
Stall because data cache buffers are full. Incremented on every
cycle in which the condition is present.
EVT_DCACHE_FULL_STALL_CONTIG
Stall because data cache buffers are full. Incremented on every
cycle in which the condition is contiguous.
EVT_DCACHE_ACCESS
Data cache access (data fetch)
EVT_DCACHE_MISS
Data cache miss
EVT_DCACHE_WRITE_BACK
Data cache write back. This counter is incremented for every
1/2 line (four words) that are written back.
EVT_PC_CHANGED
Software changed the PC. This is incremented only when the
software changes the PC and there is no mode change. For example,
a MOV instruction that targets the PC would increment the counter.
An SWI would not as it triggers a mode change.
EVT_BCU_REQUEST
The Bus Control Unit(BCU) received a request from the core
EVT_BCU_FULL
The BCU request queue is full. A high value for this event means
that the BCU is often waiting for requests to complete on the external bus.
EVT_BCU_DRAIN
The BCU queues were drained due to either a Drain Write Buffer
command or an I/O transaction for a page that was marked as
uncacheable and unbufferable.
EVT_BCU_ECC_NO_ELOG
The BCU detected an ECC error on the memory bus but no ELOG
register was available to log the errors.
EVT_BCU_1_BIT_ERR
The BCU detected a 1-bit error while reading from the bus.
EVT_RMW
An RMW cycle occurred due to a narrow write on ECC-protected memory.
To get the results back, call pmu_stop(&results) where results is defined
as a struct pmu_results:
struct pmu_results
{
	u32	ccnt;		/* Clock Counter Register */
	u32	ccnt_of;	/* Clock Counter overflow count */
	u32	pmn0;		/* Performance Counter Register 0 */
	u32	pmn0_of;	/* Performance Counter 0 overflow count */
	u32	pmn1;		/* Performance Counter Register 1 */
	u32	pmn1_of;	/* Performance Counter 1 overflow count */
};
Pretty simple huh? Following are some examples of how to get some commonly
wanted numbers out of the PMU data. Note that since you will be dividing
things, this isn't super useful from the kernel and you need to printk the
data out to syslog. See [1] for more examples.
Instruction Cache Efficiency
pmu_start(EVT_INSTRUCTION, EVT_ICACHE_MISS);
...
pmu_stop(&results);
icache_miss_rate = results.pmn1 / results.pmn0;
cycles_per_instruction = results.ccnt / results.pmn0;
Data Cache Efficiency
pmu_start(EVT_DCACHE_ACCESS, EVT_DCACHE_MISS);
...
pmu_stop(&results);
dcache_miss_rate = results.pmn1 / results.pmn0;
Instruction Fetch Latency
pmu_start(EVT_ICACHE_NO_DELIVER, EVT_ICACHE_MISS);
...
pmu_stop(&results);
average_stall_waiting_for_instruction_fetch =
results.pmn0 / results.pmn1;
percent_stall_cycles_due_to_instruction_fetch =
results.pmn0 / results.ccnt;
ToDo:
- Add support for usermode PMU usage. This might require hooking into
the scheduler so that we pause the PMU when the task that requested
statistics is scheduled out.
--
This code is still under development, so please feel free to send patches,
questions, comments, etc to me.
Deepak Saxena <dsaxena@mvista.com>
Intel's XScale Microarchitecture provides support for locking of TLB
entries in both the instruction and data TLBs. This file provides
an overview of the API that has been developed to take advantage of this
feature from kernel space. Note that there is NO support for user space.
In general, this feature should be used in conjunction with locking
data or instructions into the appropriate caches. See the file
cache-lock.txt in this directory.
If you have any questions, comments, patches, etc, please contact me.
Deepak Saxena <dsaxena@mvista.com>
API DESCRIPTION
I. Header file
#include <asm/xscale-lock.h>
II. Locking an entry into the TLB
SYNOPSIS
int cache_query(u8 tlb_type, u32 addr);
/*
* TLB types
*/
#define ITLB 0x0
#define DTLB 0x1
DESCRIPTION
This function locks the virtual to physical mapping for virtual
address addr into the requested TLB.
RETURN VALUE
If the entry is properly locked into the TLB, a 0 is returned.
In case of an error, an appropriate error is returned.
-ENOSPC No more entries left in the TLB
-EIO Unknown error
III. Unlocking an entry from a TLB
SYNOPSIS
int xscale_tlb_unlock(u8 tlb_type, u32 addr);
DESCRIPTION
This function unlocks the entry for virtual address addr from the
specified TLB.
RETURN VALUE
If the TLB entry is properly unlocked, a 0 is returned.
In case of an error, an appropriate error is returned.
-ENOENT No entry for given address in specified TLB
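A short sketch tying the two calls together (the address is illustrative
and must already have a valid mapping):

#include <asm/xscale-lock.h>

int pin_buffer_mapping(u32 addr)
{
	int ret;

	ret = xscale_tlb_lock(DTLB, addr);
	if (ret < 0)
		return ret;		/* -ENOSPC or -EIO, per above */

	/* ... latency-critical accesses through this mapping ... */

	return xscale_tlb_unlock(DTLB, addr);
}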
...@@ -516,6 +516,14 @@ running once the system is up.
		[KNL,BOOT] Force usage of a specific region of memory
		Region of memory to be used, from ss to ss+nn.

+	mem=nn[KMG]#ss[KMG]
+		[KNL,BOOT,ACPI] Mark specific memory as ACPI data.
+		Region of memory to be used, from ss to ss+nn.
+
+	mem=nn[KMG]$ss[KMG]
+		[KNL,BOOT,ACPI] Mark specific memory as reserved.
+		Region of memory to be used, from ss to ss+nn.
+
	mem=nopentium	[BUGS=IA-32] Disable usage of 4MB pages for kernel
			memory.
......
...@@ -591,7 +591,6 @@ alpha_switch_to:
 */
	.globl ret_from_fork
-#if CONFIG_SMP || CONFIG_PREEMPT
	.align 4
	.ent ret_from_fork
ret_from_fork:
...@@ -599,9 +598,6 @@ ret_from_fork:
	mov $17, $16
	jmp $31, schedule_tail
	.end ret_from_fork
-#else
-ret_from_fork = ret_from_sys_call
-#endif

/*
 * kernel_thread(fn, arg, clone_flags)
......
...@@ -48,7 +48,6 @@ asmlinkage long
sys_pciconfig_iobase(long which, unsigned long bus, unsigned long dfn)
{
	struct pci_controller *hose;
-	struct pci_dev *dev;

	/* from hose or from bus.devfn */
	if (which & IOBASE_FROM_HOSE) {
...@@ -106,6 +105,7 @@ sys_pciconfig_write(unsigned long bus, unsigned long dfn,
void *
pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp)
{
+	return NULL;
}

void
pci_free_consistent(struct pci_dev *pdev, size_t size, void *cpu_addr,
...@@ -116,6 +116,7 @@ dma_addr_t
pci_map_single(struct pci_dev *pdev, void *cpu_addr, size_t size,
	       int direction)
{
+	return (dma_addr_t) 0;
}

void
pci_unmap_single(struct pci_dev *pdev, dma_addr_t dma_addr, size_t size,
......
...@@ -62,17 +62,6 @@ quirk_isa_bridge(struct pci_dev *dev)
	dev->class = PCI_CLASS_BRIDGE_ISA << 8;
}

-static void __init
-quirk_ali_ide_ports(struct pci_dev *dev)
-{
-	if (dev->resource[0].end == 0xffff)
-		dev->resource[0].end = dev->resource[0].start + 7;
-	if (dev->resource[2].end == 0xffff)
-		dev->resource[2].end = dev->resource[2].start + 7;
-	if (dev->resource[3].end == 0xffff)
-		dev->resource[3].end = dev->resource[3].start + 7;
-}
-
static void __init
quirk_cypress(struct pci_dev *dev)
{
...@@ -121,8 +110,6 @@ pcibios_fixup_final(struct pci_dev *dev)
struct pci_fixup pcibios_fixups[] __initdata = {
	{ PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82378,
	  quirk_isa_bridge },
-	{ PCI_FIXUP_HEADER, PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229,
-	  quirk_ali_ide_ports },
	{ PCI_FIXUP_HEADER, PCI_VENDOR_ID_CONTAQ, PCI_DEVICE_ID_CONTAQ_82C693,
	  quirk_cypress },
	{ PCI_FIXUP_FINAL, PCI_ANY_ID, PCI_ANY_ID,
......
...@@ -418,6 +418,7 @@ static void sa1111_wake(struct sa1111 *sachip)
	spin_lock_irqsave(&sachip->lock, flags);

+#if CONFIG_ARCH_SA1100
	/*
	 * First, set up the 3.6864MHz clock on GPIO 27 for the SA-1111:
	 * (SA-1110 Developer's Manual, section 9.1.2.1)
...@@ -425,6 +426,11 @@ static void sa1111_wake(struct sa1111 *sachip)
	GAFR |= GPIO_32_768kHz;
	GPDR |= GPIO_32_768kHz;
	TUCR = TUCR_3_6864MHz;
+#elif CONFIG_ARCH_PXA
+	pxa_gpio_mode(GPIO11_3_6MHz_MD);
+#else
+#error missing clock setup
+#endif

	/*
	 * Turn VCO on, and disable PLL Bypass.
...@@ -461,6 +467,8 @@ static void sa1111_wake(struct sa1111 *sachip)
	spin_unlock_irqrestore(&sachip->lock, flags);
}

+#ifdef CONFIG_ARCH_SA1100
+
/*
 * Configure the SA1111 shared memory controller.
 */
...@@ -476,6 +484,8 @@ sa1111_configure_smc(struct sa1111 *sachip, int sdram, unsigned int drac,
	sa1111_writel(smcr, sachip->base + SA1111_SMCR);
}

+#endif
+
static void
sa1111_init_one_child(struct sa1111 *sachip, struct sa1111_dev *sadev, unsigned int offset)
{
...@@ -569,6 +579,7 @@ __sa1111_probe(struct device *me, unsigned long phys_addr, int irq)
	 */
	sa1111_wake(sachip);

+#ifdef CONFIG_ARCH_SA1100
	/*
	 * The SDRAM configuration of the SA1110 and the SA1111 must
	 * match. This is very important to ensure that SA1111 accesses
...@@ -592,6 +603,7 @@ __sa1111_probe(struct device *me, unsigned long phys_addr, int irq)
	 * Enable the SA1110 memory bus request and grant signals.
	 */
	sa1110_mb_enable();
+#endif

	/*
	 * The interrupt controller must be initialised before any
......
...@@ -37,7 +37,7 @@
	.globl swapper_pg_dir
	.equ   swapper_pg_dir, TEXTADDR - 0x4000

-	.macro	pgtbl, reg, rambase
+	.macro	pgtbl, reg
	adr	\reg, stext
	sub	\reg, \reg, #0x4000
	.endm
...@@ -47,7 +47,7 @@
 * can convert the page table base address to the base address of the section
 * containing both.
 */
-	.macro	krnladr, rd, pgtable, rambase
+	.macro	krnladr, rd, pgtable
	bic	\rd, \pgtable, #0x000ff000
	.endm
...@@ -164,7 +164,7 @@ __mmap_switched:
 * r8 = page table flags
 */
__create_page_tables:
-	pgtbl	r4, r5			@ page table address
+	pgtbl	r4			@ page table address

	/*
	 * Clear the 16K level 1 swapper page table
...@@ -184,7 +184,7 @@ __create_page_tables:
	 * cater for the MMU enable.  This identity mapping
	 * will be removed by paging_init()
	 */
-	krnladr	r2, r4, r5		@ start of kernel
+	krnladr	r2, r4			@ start of kernel
	add	r3, r8, r2		@ flags + kernel base
	str	r3, [r4, r2, lsr #18]	@ identity mapping
......
...@@ -4,18 +4,17 @@
# Common support (must be linked before board specific support)
obj-y += generic.o irq.o dma.o
-obj-$(CONFIG_SA1111) += sa1111.o

# Specific board support
obj-$(CONFIG_ARCH_LUBBOCK) += lubbock.o
obj-$(CONFIG_ARCH_PXA_IDP) += idp.o

# Support for blinky lights
-leds-y := leds.o
-leds-$(CONFIG_ARCH_LUBBOCK) += leds-lubbock.o
-leds-$(CONFIG_ARCH_PXA_IDP) += leds-idp.o
+led-y := leds.o
+led-$(CONFIG_ARCH_LUBBOCK) += leds-lubbock.o
+led-$(CONFIG_ARCH_PXA_IDP) += leds-idp.o

-obj-$(CONFIG_LEDS) += $(leds-y)
+obj-$(CONFIG_LEDS) += $(led-y)

# Misc features
obj-$(CONFIG_PM) += pm.o sleep.o
...@@ -38,7 +38,7 @@
static unsigned char L_clk_mult[32] = { 0, 27, 32, 36, 40, 45, 0, };

/* Memory Frequency to Run Mode Frequency Multiplier (M) */
-static unsigned char M_clk_mult[4] = { 0, 1, 2, 0 };
+static unsigned char M_clk_mult[4] = { 0, 1, 2, 4 };

/* Run Mode Frequency to Turbo Mode Frequency Multiplier (N) */
/* Note: we store the value N * 2 here. */
...@@ -47,11 +47,12 @@ static unsigned char N2_clk_mult[8] = { 0, 0, 2, 3, 4, 0, 6, 0 };
/* Crystal clock */
#define BASE_CLK	3686400

/*
- * Display what we were booted with.
+ * Get the clock frequency as reflected by CCCR and the turbo flag.
+ * We assume these values have been applied via a fcs.
+ * If info is not 0 we also display the current settings.
 */
-static int __init pxa_display_clocks(void)
+unsigned int get_clk_frequency_khz(int info)
{
	unsigned long cccr, turbo;
	unsigned int l, L, m, M, n2, N;
...@@ -67,20 +68,24 @@ static int __init pxa_display_clocks(void)
	M = m * L;
	N = n2 * M / 2;

-	L += 5000;
-	printk( KERN_INFO "Memory clock: %d.%02dMHz (*%d)\n",
-		L / 1000000, (L % 1000000) / 10000, l );
-	M += 5000;
-	printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
-		M / 1000000, (M % 1000000) / 10000, m );
-	N += 5000;
-	printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
-		N / 1000000, (N % 1000000) / 10000, n2 / 2, (n2 % 2) * 5,
-		(turbo & 1) ? "" : "in" );
-
-	return 0;
+	if(info)
+	{
+		L += 5000;
+		printk( KERN_INFO "Memory clock: %d.%02dMHz (*%d)\n",
+			L / 1000000, (L % 1000000) / 10000, l );
+		M += 5000;
+		printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
+			M / 1000000, (M % 1000000) / 10000, m );
+		N += 5000;
+		printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
+			N / 1000000, (N % 1000000) / 10000, n2 / 2, (n2 % 2) * 5,
+			(turbo & 1) ? "" : "in" );
+	}
+
+	return (turbo & 1) ? (N/1000) : (M/1000);
}

+EXPORT_SYMBOL(get_clk_frequency_khz);
+
/*
 * Return the current lclk requency in units of 10kHz
...@@ -132,5 +137,5 @@ static struct map_desc standard_io_desc[] __initdata = {
void __init pxa_map_io(void)
{
	iotable_init(standard_io_desc, ARRAY_SIZE(standard_io_desc));
-	pxa_display_clocks();
+	get_clk_frequency_khz(1);
}
...@@ -241,10 +241,4 @@ void __init pxa_init_irq(void)
	/* Install handler for GPIO 2-80 edge detect interrupts */
	set_irq_chip(IRQ_GPIO_2_80, &pxa_internal_chip);
	set_irq_chained_handler(IRQ_GPIO_2_80, pxa_gpio_demux_handler);
-
-	/*
-	 * We generally don't want the LCD IRQ being
-	 * enabled as soon as we request it.
-	 */
-	set_irq_flags(IRQ_LCD, IRQF_VALID | IRQF_NOAUTOEN);
}
...@@ -27,4 +27,4 @@ pxa_leds_init(void)
	return 0;
}

-__initcall(pxa_leds_init);
+core_initcall(pxa_leds_init);
...@@ -13,6 +13,7 @@
 */
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/device.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/interrupt.h>
...@@ -31,7 +32,6 @@
#include <asm/hardware/sa1111.h>

#include "generic.h"
-#include "sa1111.h"

static void lubbock_ack_irq(unsigned int irq)
{
...@@ -106,24 +106,16 @@ static void __init lubbock_init_irq(void)

static int __init lubbock_init(void)
{
-	int ret;
-
-	ret = sa1111_probe(LUBBOCK_SA1111_BASE);
-	if (ret)
-		return ret;
-	sa1111_wake();
-	sa1111_init_irq(LUBBOCK_SA1111_IRQ);
-	return 0;
+	return sa1111_init(0x10000000, LUBBOCK_SA1111_IRQ);
}

-__initcall(lubbock_init);
+subsys_initcall(lubbock_init);

static struct map_desc lubbock_io_desc[] __initdata = {
  /* virtual     physical    length      type */
  { 0xf0000000, 0x08000000, 0x00100000, MT_DEVICE }, /* CPLD */
  { 0xf1000000, 0x0c000000, 0x00100000, MT_DEVICE }, /* LAN91C96 IO */
  { 0xf1100000, 0x0e000000, 0x00100000, MT_DEVICE }, /* LAN91C96 Attr */
-  { 0xf4000000, 0x10000000, 0x00400000, MT_DEVICE }  /* SA1111 */
};

static void __init lubbock_map_io(void)
......
...@@ -342,11 +342,6 @@ config X86_ALIGNMENT_16
	depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2
	default y

-config X86_TSC
-	bool
-	depends on MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2
-	default y
-
config X86_GOOD_APIC
	bool
	depends on MK7 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8
...@@ -500,6 +495,11 @@ config HAVE_ARCH_BOOTMEM_NODE
	depends on NUMA
	default y

+config X86_TSC
+	bool
+	depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2) && !X86_NUMAQ
+	default y
+
config X86_MCE
	bool "Machine Check Exception"
	---help---
......
...@@ -319,6 +319,31 @@ ret_point: ...@@ -319,6 +319,31 @@ ret_point:
pushl saved_context_eflags ; popfl pushl saved_context_eflags ; popfl
ret ret
ENTRY(do_suspend_lowlevel_s4bios)
cmpl $0,4(%esp)
jne ret_point
call save_processor_state
movl %esp, saved_context_esp
movl %eax, saved_context_eax
movl %ebx, saved_context_ebx
movl %ecx, saved_context_ecx
movl %edx, saved_context_edx
movl %ebp, saved_context_ebp
movl %esi, saved_context_esi
movl %edi, saved_context_edi
pushfl ; popl saved_context_eflags
movl $ret_point,saved_eip
movl %esp,saved_esp
movl %ebp,saved_ebp
movl %ebx,saved_ebx
movl %edi,saved_edi
movl %esi,saved_esi
call acpi_enter_sleep_state_s4bios
ret
ALIGN ALIGN
# saved registers # saved registers
saved_gdt: .long 0,0 saved_gdt: .long 0,0
......
...@@ -68,7 +68,6 @@ EXPORT_SYMBOL(EISA_bus); ...@@ -68,7 +68,6 @@ EXPORT_SYMBOL(EISA_bus);
EXPORT_SYMBOL(MCA_bus); EXPORT_SYMBOL(MCA_bus);
#ifdef CONFIG_DISCONTIGMEM #ifdef CONFIG_DISCONTIGMEM
EXPORT_SYMBOL(node_data); EXPORT_SYMBOL(node_data);
EXPORT_SYMBOL(pfn_to_nid);
#endif #endif
#ifdef CONFIG_X86_NUMAQ #ifdef CONFIG_X86_NUMAQ
EXPORT_SYMBOL(xquad_portio); EXPORT_SYMBOL(xquad_portio);
......
...@@ -110,7 +110,7 @@ void __init MP_processor_info (struct mpc_config_processor *m) ...@@ -110,7 +110,7 @@ void __init MP_processor_info (struct mpc_config_processor *m)
if (!(m->mpc_cpuflag & CPU_ENABLED)) if (!(m->mpc_cpuflag & CPU_ENABLED))
return; return;
apicid = mpc_apic_id(m, translation_table[mpc_record]->trans_quad); apicid = mpc_apic_id(m, translation_table[mpc_record]);
if (m->mpc_featureflag&(1<<0)) if (m->mpc_featureflag&(1<<0))
Dprintk(" Floating point unit present.\n"); Dprintk(" Floating point unit present.\n");
......
...@@ -27,6 +27,7 @@ ...@@ -27,6 +27,7 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/mmzone.h> #include <linux/mmzone.h>
#include <linux/module.h>
#include <asm/numaq.h> #include <asm/numaq.h>
/* These are needed before the pgdat's are created */ /* These are needed before the pgdat's are created */
...@@ -82,19 +83,7 @@ static void __init smp_dump_qct(void) ...@@ -82,19 +83,7 @@ static void __init smp_dump_qct(void)
* physnode_map[8- ] = -1; * physnode_map[8- ] = -1;
*/ */
int physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1}; int physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
EXPORT_SYMBOL(physnode_map);
#define PFN_TO_ELEMENT(pfn) (pfn / PAGES_PER_ELEMENT)
#define PA_TO_ELEMENT(pa) (PFN_TO_ELEMENT(pa >> PAGE_SHIFT))
int pfn_to_nid(unsigned long pfn)
{
int nid = physnode_map[PFN_TO_ELEMENT(pfn)];
if (nid == -1)
BUG(); /* address is not present */
return nid;
}
/* /*
* for each node mark the regions * for each node mark the regions
......
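The numaq.c hunk drops the out-of-line pfn_to_nid() and exports physnode_map instead. The underlying mechanism is a coarse table mapping fixed-size chunks ("elements") of the physical address space to node IDs, so the lookup is a single array index. A user-space sketch with illustrative sizes rather than the NUMA-Q constants:

#include <stdio.h>

#define MAX_ELEMENTS		16
#define PAGES_PER_ELEMENT	(256 * 1024 * 1024 / 4096)	/* 256MB / 4K */

/* -1 means "no memory here"; the range initializer is the same GCC
 * extension the kernel source uses above. */
static int physnode_map[MAX_ELEMENTS] = {
	[0 ... MAX_ELEMENTS - 1] = -1,
};

static int pfn_to_nid(unsigned long pfn)
{
	return physnode_map[pfn / PAGES_PER_ELEMENT];
}

int main(void)
{
	physnode_map[0] = 0;	/* first 256MB chunk on node 0 */
	physnode_map[1] = 1;	/* second 256MB chunk on node 1 */

	printf("pfn 0x100   -> node %d\n", pfn_to_nid(0x100));
	printf("pfn 0x10000 -> node %d\n", pfn_to_nid(0x10000));
	return 0;
}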
...@@ -552,6 +552,12 @@ static void __init parse_cmdline_early (char ** cmdline_p) ...@@ -552,6 +552,12 @@ static void __init parse_cmdline_early (char ** cmdline_p)
if (*from == '@') { if (*from == '@') {
start_at = memparse(from+1, &from); start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_RAM); add_memory_region(start_at, mem_size, E820_RAM);
} else if (*from == '#') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_ACPI);
} else if (*from == '$') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_RESERVED);
} else { } else {
limit_regions(mem_size); limit_regions(mem_size);
userdef=1; userdef=1;
......
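The setup.c hunk teaches parse_cmdline_early() two more mem= separators: alongside mem=<size>@<start> for usable RAM, mem=<size>#<start> now marks an E820_ACPI region and mem=<size>$<start> an E820_RESERVED one. A self-contained sketch of the parsing, with memparse() reimplemented here for K/M/G suffixes (the sample addresses are made up):

#include <stdio.h>
#include <stdlib.h>

/* memparse() stand-in: number with optional K/M/G suffix, base 0 so
 * 0x... hex works, mirroring the kernel helper's behavior. */
static unsigned long long memparse(const char *p, char **end)
{
	unsigned long long v = strtoull(p, end, 0);

	switch (**end) {
	case 'G': case 'g':
		v <<= 10;	/* fall through */
	case 'M': case 'm':
		v <<= 10;	/* fall through */
	case 'K': case 'k':
		v <<= 10;
		(*end)++;
		break;
	}
	return v;
}

int main(void)
{
	/* Illustrative arguments only. */
	const char *args[] = { "64M@16M", "8M#0x7ff00000", "1M$0xfff00000" };

	for (int i = 0; i < 3; i++) {
		char *from = (char *)args[i];
		unsigned long long size = memparse(from, &from);
		const char *type = *from == '@' ? "E820_RAM" :
				   *from == '#' ? "E820_ACPI" :
						  "E820_RESERVED";
		unsigned long long start = memparse(from + 1, &from);

		printf("%-14s start=%#llx size=%#llx type=%s\n",
		       args[i], start, size, type);
	}
	return 0;
}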
...@@ -48,6 +48,14 @@ extern unsigned long max_low_pfn; ...@@ -48,6 +48,14 @@ extern unsigned long max_low_pfn;
extern unsigned long totalram_pages; extern unsigned long totalram_pages;
extern unsigned long totalhigh_pages; extern unsigned long totalhigh_pages;
#define LARGE_PAGE_BYTES (PTRS_PER_PTE * PAGE_SIZE)
unsigned long node_remap_start_pfn[MAX_NUMNODES];
unsigned long node_remap_size[MAX_NUMNODES];
unsigned long node_remap_offset[MAX_NUMNODES];
void *node_remap_start_vaddr[MAX_NUMNODES];
void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
/* /*
* Find the highest page frame number we have available for the node * Find the highest page frame number we have available for the node
*/ */
...@@ -65,12 +73,13 @@ static void __init find_max_pfn_node(int nid) ...@@ -65,12 +73,13 @@ static void __init find_max_pfn_node(int nid)
*/ */
static void __init allocate_pgdat(int nid) static void __init allocate_pgdat(int nid)
{ {
unsigned long node_datasz; if (nid)
NODE_DATA(nid) = (pg_data_t *)node_remap_start_vaddr[nid];
node_datasz = PFN_UP(sizeof(struct pglist_data)); else {
NODE_DATA(nid) = (pg_data_t *)(__va(min_low_pfn << PAGE_SHIFT)); NODE_DATA(nid) = (pg_data_t *)(__va(min_low_pfn << PAGE_SHIFT));
min_low_pfn += node_datasz; min_low_pfn += PFN_UP(sizeof(pg_data_t));
memset(NODE_DATA(nid), 0, sizeof(struct pglist_data)); memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
}
} }
/* /*
...@@ -113,14 +122,6 @@ static void __init register_bootmem_low_pages(unsigned long system_max_low_pfn) ...@@ -113,14 +122,6 @@ static void __init register_bootmem_low_pages(unsigned long system_max_low_pfn)
} }
} }
#define LARGE_PAGE_BYTES (PTRS_PER_PTE * PAGE_SIZE)
unsigned long node_remap_start_pfn[MAX_NUMNODES];
unsigned long node_remap_size[MAX_NUMNODES];
unsigned long node_remap_offset[MAX_NUMNODES];
void *node_remap_start_vaddr[MAX_NUMNODES];
extern void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
void __init remap_numa_kva(void) void __init remap_numa_kva(void)
{ {
void *vaddr; void *vaddr;
...@@ -145,7 +146,7 @@ static unsigned long calculate_numa_remap_pages(void) ...@@ -145,7 +146,7 @@ static unsigned long calculate_numa_remap_pages(void)
for (nid = 1; nid < numnodes; nid++) { for (nid = 1; nid < numnodes; nid++) {
/* calculate the size of the mem_map needed in bytes */ /* calculate the size of the mem_map needed in bytes */
size = (node_end_pfn[nid] - node_start_pfn[nid] + 1) size = (node_end_pfn[nid] - node_start_pfn[nid] + 1)
* sizeof(struct page); * sizeof(struct page) + sizeof(pg_data_t);
/* convert size to large (pmd size) pages, rounding up */ /* convert size to large (pmd size) pages, rounding up */
size = (size + LARGE_PAGE_BYTES - 1) / LARGE_PAGE_BYTES; size = (size + LARGE_PAGE_BYTES - 1) / LARGE_PAGE_BYTES;
/* now the roundup is correct, convert to PAGE_SIZE pages */ /* now the roundup is correct, convert to PAGE_SIZE pages */
...@@ -195,9 +196,9 @@ unsigned long __init setup_memory(void) ...@@ -195,9 +196,9 @@ unsigned long __init setup_memory(void)
printk("Low memory ends at vaddr %08lx\n", printk("Low memory ends at vaddr %08lx\n",
(ulong) pfn_to_kaddr(max_low_pfn)); (ulong) pfn_to_kaddr(max_low_pfn));
for (nid = 0; nid < numnodes; nid++) { for (nid = 0; nid < numnodes; nid++) {
allocate_pgdat(nid);
node_remap_start_vaddr[nid] = pfn_to_kaddr( node_remap_start_vaddr[nid] = pfn_to_kaddr(
highstart_pfn - node_remap_offset[nid]); highstart_pfn - node_remap_offset[nid]);
allocate_pgdat(nid);
printk ("node %d will remap to vaddr %08lx - %08lx\n", nid, printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
(ulong) node_remap_start_vaddr[nid], (ulong) node_remap_start_vaddr[nid],
(ulong) pfn_to_kaddr(highstart_pfn (ulong) pfn_to_kaddr(highstart_pfn
...@@ -251,13 +252,6 @@ unsigned long __init setup_memory(void) ...@@ -251,13 +252,6 @@ unsigned long __init setup_memory(void)
*/ */
find_smp_config(); find_smp_config();
/*insert other nodes into pgdat_list*/
for (nid = 1; nid < numnodes; nid++){
NODE_DATA(nid)->pgdat_next = pgdat_list;
pgdat_list = NODE_DATA(nid);
}
#ifdef CONFIG_BLK_DEV_INITRD #ifdef CONFIG_BLK_DEV_INITRD
if (LOADER_TYPE && INITRD_START) { if (LOADER_TYPE && INITRD_START) {
if (INITRD_START + INITRD_SIZE <= (system_max_low_pfn << PAGE_SHIFT)) { if (INITRD_START + INITRD_SIZE <= (system_max_low_pfn << PAGE_SHIFT)) {
...@@ -282,6 +276,18 @@ void __init zone_sizes_init(void) ...@@ -282,6 +276,18 @@ void __init zone_sizes_init(void)
{ {
int nid; int nid;
/*
* Insert nodes into pgdat_list backward so they appear in order.
* Clobber node 0's links and NULL out pgdat_list before starting.
*/
pgdat_list = NULL;
for (nid = numnodes - 1; nid >= 0; nid--) {
if (nid)
memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
NODE_DATA(nid)->pgdat_next = pgdat_list;
pgdat_list = NODE_DATA(nid);
}
for (nid = 0; nid < numnodes; nid++) { for (nid = 0; nid < numnodes; nid++) {
unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0}; unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
unsigned long *zholes_size; unsigned long *zholes_size;
...@@ -314,13 +320,18 @@ void __init zone_sizes_init(void) ...@@ -314,13 +320,18 @@ void __init zone_sizes_init(void)
* normal bootmem allocator, but other nodes come from the * normal bootmem allocator, but other nodes come from the
* remapped KVA area - mbligh * remapped KVA area - mbligh
*/ */
if (nid) if (!nid)
free_area_init_node(nid, NODE_DATA(nid),
node_remap_start_vaddr[nid], zones_size,
start, zholes_size);
else
free_area_init_node(nid, NODE_DATA(nid), 0, free_area_init_node(nid, NODE_DATA(nid), 0,
zones_size, start, zholes_size); zones_size, start, zholes_size);
else {
unsigned long lmem_map;
lmem_map = (unsigned long)node_remap_start_vaddr[nid];
lmem_map += sizeof(pg_data_t) + PAGE_SIZE - 1;
lmem_map &= PAGE_MASK;
free_area_init_node(nid, NODE_DATA(nid),
(struct page *)lmem_map, zones_size,
start, zholes_size);
}
} }
return; return;
} }
......
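The discontig.c changes move each remote node's pg_data_t into that node's remapped KVA area and rebuild pgdat_list in zone_sizes_init(). Building the list by walking node IDs from high to low and pushing at the head is what leaves it in ascending order; a stand-alone sketch with simplified types:

#include <stdio.h>

#define MAX_NODES 4

/* Simplified stand-in for pg_data_t; only the link matters here. */
struct node_data {
	int nid;
	struct node_data *next;
};

static struct node_data nodes[MAX_NODES];
static struct node_data *node_list;	/* plays the role of pgdat_list */

int main(void)
{
	/* Walk nids from high to low, pushing at the head, as the
	 * zone_sizes_init() loop above does: result is ordered 0..N-1. */
	for (int nid = MAX_NODES - 1; nid >= 0; nid--) {
		nodes[nid].nid = nid;
		nodes[nid].next = node_list;
		node_list = &nodes[nid];
	}

	for (struct node_data *n = node_list; n; n = n->next)
		printf("node %d\n", n->nid);	/* 0, 1, 2, 3 */
	return 0;
}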
...@@ -17,7 +17,7 @@ static int __pci_conf1_mq_read (int seg, int bus, int dev, int fn, int reg, int ...@@ -17,7 +17,7 @@ static int __pci_conf1_mq_read (int seg, int bus, int dev, int fn, int reg, int
{ {
unsigned long flags; unsigned long flags;
if (!value || (bus > 255) || (dev > 31) || (fn > 7) || (reg > 255)) if (!value || (bus > MAX_MP_BUSSES) || (dev > 31) || (fn > 7) || (reg > 255))
return -EINVAL; return -EINVAL;
spin_lock_irqsave(&pci_config_lock, flags); spin_lock_irqsave(&pci_config_lock, flags);
...@@ -45,7 +45,7 @@ static int __pci_conf1_mq_write (int seg, int bus, int dev, int fn, int reg, int ...@@ -45,7 +45,7 @@ static int __pci_conf1_mq_write (int seg, int bus, int dev, int fn, int reg, int
{ {
unsigned long flags; unsigned long flags;
if ((bus > 255) || (dev > 31) || (fn > 7) || (reg > 255)) if ((bus > MAX_MP_BUSSES) || (dev > 31) || (fn > 7) || (reg > 255))
return -EINVAL; return -EINVAL;
spin_lock_irqsave(&pci_config_lock, flags); spin_lock_irqsave(&pci_config_lock, flags);
......
...@@ -6,6 +6,7 @@ menu "ACPI Support" ...@@ -6,6 +6,7 @@ menu "ACPI Support"
config ACPI config ACPI
bool "ACPI Support" if X86 bool "ACPI Support" if X86
depends on !X86_VISWS
default y if IA64 && (!IA64_HP_SIM || IA64_SGI_SN) default y if IA64 && (!IA64_HP_SIM || IA64_SGI_SN)
---help--- ---help---
Advanced Configuration and Power Interface (ACPI) support for Advanced Configuration and Power Interface (ACPI) support for
......
...@@ -76,6 +76,7 @@ EXPORT_SYMBOL(acpi_acquire_global_lock); ...@@ -76,6 +76,7 @@ EXPORT_SYMBOL(acpi_acquire_global_lock);
EXPORT_SYMBOL(acpi_release_global_lock); EXPORT_SYMBOL(acpi_release_global_lock);
EXPORT_SYMBOL(acpi_get_current_resources); EXPORT_SYMBOL(acpi_get_current_resources);
EXPORT_SYMBOL(acpi_get_possible_resources); EXPORT_SYMBOL(acpi_get_possible_resources);
EXPORT_SYMBOL(acpi_walk_resources);
EXPORT_SYMBOL(acpi_set_current_resources); EXPORT_SYMBOL(acpi_set_current_resources);
EXPORT_SYMBOL(acpi_enable_event); EXPORT_SYMBOL(acpi_enable_event);
EXPORT_SYMBOL(acpi_disable_event); EXPORT_SYMBOL(acpi_disable_event);
...@@ -86,6 +87,7 @@ EXPORT_SYMBOL(acpi_get_sleep_type_data); ...@@ -86,6 +87,7 @@ EXPORT_SYMBOL(acpi_get_sleep_type_data);
EXPORT_SYMBOL(acpi_get_register); EXPORT_SYMBOL(acpi_get_register);
EXPORT_SYMBOL(acpi_set_register); EXPORT_SYMBOL(acpi_set_register);
EXPORT_SYMBOL(acpi_enter_sleep_state); EXPORT_SYMBOL(acpi_enter_sleep_state);
EXPORT_SYMBOL(acpi_enter_sleep_state_s4bios);
EXPORT_SYMBOL(acpi_get_system_info); EXPORT_SYMBOL(acpi_get_system_info);
EXPORT_SYMBOL(acpi_get_devices); EXPORT_SYMBOL(acpi_get_devices);
......
...@@ -644,15 +644,46 @@ acpi_ec_remove ( ...@@ -644,15 +644,46 @@ acpi_ec_remove (
} }
static acpi_status
acpi_ec_io_ports (
struct acpi_resource *resource,
void *context)
{
struct acpi_ec *ec = (struct acpi_ec *) context;
struct acpi_generic_address *addr;
if (resource->id != ACPI_RSTYPE_IO) {
return AE_OK;
}
/*
* The first address region returned is the data port, and
* the second address region returned is the status/command
* port.
*/
if (ec->data_addr.register_bit_width == 0) {
addr = &ec->data_addr;
} else if (ec->command_addr.register_bit_width == 0) {
addr = &ec->command_addr;
} else {
return AE_CTRL_TERMINATE;
}
addr->address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
addr->register_bit_width = 8;
addr->register_bit_offset = 0;
addr->address = resource->data.io.min_base_address;
return AE_OK;
}
static int static int
acpi_ec_start ( acpi_ec_start (
struct acpi_device *device) struct acpi_device *device)
{ {
int result = 0;
acpi_status status = AE_OK; acpi_status status = AE_OK;
struct acpi_ec *ec = NULL; struct acpi_ec *ec = NULL;
struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
struct acpi_resource *resource = NULL;
ACPI_FUNCTION_TRACE("acpi_ec_start"); ACPI_FUNCTION_TRACE("acpi_ec_start");
...@@ -667,33 +698,13 @@ acpi_ec_start ( ...@@ -667,33 +698,13 @@ acpi_ec_start (
/* /*
* Get I/O port addresses. Convert to GAS format. * Get I/O port addresses. Convert to GAS format.
*/ */
status = acpi_get_current_resources(ec->handle, &buffer); status = acpi_walk_resources(ec->handle, METHOD_NAME__CRS,
if (ACPI_FAILURE(status)) { acpi_ec_io_ports, ec);
if (ACPI_FAILURE(status) || ec->command_addr.register_bit_width == 0) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error getting I/O port addresses")); ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error getting I/O port addresses"));
return_VALUE(-ENODEV); return_VALUE(-ENODEV);
} }
resource = (struct acpi_resource *) buffer.pointer;
if (!resource || (resource->id != ACPI_RSTYPE_IO)) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid or missing resource\n"));
result = -ENODEV;
goto end;
}
ec->data_addr.address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
ec->data_addr.register_bit_width = 8;
ec->data_addr.register_bit_offset = 0;
ec->data_addr.address = resource->data.io.min_base_address;
resource = ACPI_NEXT_RESOURCE(resource);
if (!resource || (resource->id != ACPI_RSTYPE_IO)) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid or missing resource\n"));
result = -ENODEV;
goto end;
}
ec->command_addr.address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
ec->command_addr.register_bit_width = 8;
ec->command_addr.register_bit_offset = 0;
ec->command_addr.address = resource->data.io.min_base_address;
ec->status_addr = ec->command_addr; ec->status_addr = ec->command_addr;
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "gpe=0x%02x, ports=0x%2x,0x%2x\n", ACPI_DEBUG_PRINT((ACPI_DB_INFO, "gpe=0x%02x, ports=0x%2x,0x%2x\n",
...@@ -706,8 +717,7 @@ acpi_ec_start ( ...@@ -706,8 +717,7 @@ acpi_ec_start (
status = acpi_install_gpe_handler(ec->gpe_bit, status = acpi_install_gpe_handler(ec->gpe_bit,
ACPI_EVENT_EDGE_TRIGGERED, &acpi_ec_gpe_handler, ec); ACPI_EVENT_EDGE_TRIGGERED, &acpi_ec_gpe_handler, ec);
if (ACPI_FAILURE(status)) { if (ACPI_FAILURE(status)) {
result = -ENODEV; return_VALUE(-ENODEV);
goto end;
} }
status = acpi_install_address_space_handler (ec->handle, status = acpi_install_address_space_handler (ec->handle,
...@@ -715,13 +725,10 @@ acpi_ec_start ( ...@@ -715,13 +725,10 @@ acpi_ec_start (
&acpi_ec_space_setup, ec); &acpi_ec_space_setup, ec);
if (ACPI_FAILURE(status)) { if (ACPI_FAILURE(status)) {
acpi_remove_gpe_handler(ec->gpe_bit, &acpi_ec_gpe_handler); acpi_remove_gpe_handler(ec->gpe_bit, &acpi_ec_gpe_handler);
result = -ENODEV; return_VALUE(-ENODEV);
goto end;
} }
end:
acpi_os_free(buffer.pointer);
return_VALUE(result); return_VALUE(AE_OK);
} }
......
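The EC driver now lets acpi_walk_resources() iterate _CRS and collects the two I/O ports in a callback, instead of allocating and parsing a resource buffer itself. A sketch of the walk-with-callback pattern using simplified stand-in types (the real ACPI CA types and status codes differ):

#include <stdio.h>

enum status { AE_OK, AE_CTRL_TERMINATE };

#define RSTYPE_IO 1

struct resource {
	int id;
	unsigned int io_base;
};

typedef enum status (*walk_cb)(struct resource *res, void *ctx);

/* The walker owns the iteration; callers never see the raw buffer. */
static void walk_resources(struct resource *table, int n,
			   walk_cb cb, void *ctx)
{
	for (int i = 0; i < n; i++)
		if (cb(&table[i], ctx) == AE_CTRL_TERMINATE)
			break;
}

struct ec_ports {
	unsigned int data, command;
	int have_data, have_command;
};

/* Mirrors acpi_ec_io_ports(): the convention is positional, first
 * I/O resource is the data port, second is the status/command port. */
static enum status ec_io_ports(struct resource *res, void *ctx)
{
	struct ec_ports *ec = ctx;

	if (res->id != RSTYPE_IO)
		return AE_OK;
	if (!ec->have_data) {
		ec->data = res->io_base;
		ec->have_data = 1;
	} else if (!ec->have_command) {
		ec->command = res->io_base;
		ec->have_command = 1;
	} else {
		return AE_CTRL_TERMINATE;
	}
	return AE_OK;
}

int main(void)
{
	/* Common EC port pair on many boards; illustrative only. */
	struct resource crs[] = { { RSTYPE_IO, 0x62 }, { RSTYPE_IO, 0x66 } };
	struct ec_ports ec = { 0 };

	walk_resources(crs, 2, ec_io_ports, &ec);
	printf("data=0x%x command/status=0x%x\n", ec.data, ec.command);
	return 0;
}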
...@@ -4,6 +4,6 @@ ...@@ -4,6 +4,6 @@
obj-y := evevent.o evregion.o evsci.o evxfevnt.o \ obj-y := evevent.o evregion.o evsci.o evxfevnt.o \
evmisc.o evrgnini.o evxface.o evxfregn.o \ evmisc.o evrgnini.o evxface.o evxfregn.o \
evgpe.o evgpe.o evgpeblk.o
EXTRA_CFLAGS += $(ACPI_CFLAGS) EXTRA_CFLAGS += $(ACPI_CFLAGS)
...@@ -110,7 +110,7 @@ acpi_ev_initialize ( ...@@ -110,7 +110,7 @@ acpi_ev_initialize (
* *
* RETURN: Status * RETURN: Status
* *
* DESCRIPTION: Install handlers for the SCI, Global Lock, and GPEs. * DESCRIPTION: Install interrupt handlers for the SCI and Global Lock
* *
******************************************************************************/ ******************************************************************************/
...@@ -134,16 +134,6 @@ acpi_ev_handler_initialize ( ...@@ -134,16 +134,6 @@ acpi_ev_handler_initialize (
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
/* Install handlers for control method GPE handlers (_Lxx, _Exx) */
status = acpi_ev_init_gpe_control_methods ();
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
"Unable to initialize GPE control methods, %s\n",
acpi_format_exception (status)));
return_ACPI_STATUS (status);
}
/* Install the handler for the Global Lock */ /* Install the handler for the Global Lock */
status = acpi_ev_init_global_lock_handler (); status = acpi_ev_init_global_lock_handler ();
......
...@@ -84,84 +84,6 @@ acpi_ev_is_notify_object ( ...@@ -84,84 +84,6 @@ acpi_ev_is_notify_object (
} }
/*******************************************************************************
*
* FUNCTION: acpi_ev_get_gpe_register_info
*
* PARAMETERS: gpe_number - Raw GPE number
*
* RETURN: Pointer to the info struct for this GPE register.
*
* DESCRIPTION: Returns the register index (index into the GPE register info
* table) associated with this GPE.
*
******************************************************************************/
struct acpi_gpe_register_info *
acpi_ev_get_gpe_register_info (
u32 gpe_number)
{
if (gpe_number > acpi_gbl_gpe_number_max) {
return (NULL);
}
return (&acpi_gbl_gpe_register_info [ACPI_DIV_8 (acpi_gbl_gpe_number_to_index[gpe_number].number_index)]);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_get_gpe_number_info
*
* PARAMETERS: gpe_number - Raw GPE number
*
* RETURN: None.
*
* DESCRIPTION: Returns the number index (index into the GPE number info table)
* associated with this GPE.
*
******************************************************************************/
struct acpi_gpe_number_info *
acpi_ev_get_gpe_number_info (
u32 gpe_number)
{
if (gpe_number > acpi_gbl_gpe_number_max) {
return (NULL);
}
return (&acpi_gbl_gpe_number_info [acpi_gbl_gpe_number_to_index[gpe_number].number_index]);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_get_gpe_number_index
*
* PARAMETERS: gpe_number - Raw GPE number
*
* RETURN: None.
*
* DESCRIPTION: Returns the number index (index into the GPE number info table)
* associated with this GPE.
*
******************************************************************************/
u32
acpi_ev_get_gpe_number_index (
u32 gpe_number)
{
if (gpe_number > acpi_gbl_gpe_number_max) {
return (ACPI_GPE_INVALID);
}
return (acpi_gbl_gpe_number_to_index[gpe_number].number_index);
}
/******************************************************************************* /*******************************************************************************
* *
* FUNCTION: acpi_ev_queue_notify_request * FUNCTION: acpi_ev_queue_notify_request
...@@ -601,6 +523,9 @@ acpi_ev_terminate (void) ...@@ -601,6 +523,9 @@ acpi_ev_terminate (void)
{ {
acpi_native_uint i; acpi_native_uint i;
acpi_status status; acpi_status status;
struct acpi_gpe_block_info *gpe_block;
struct acpi_gpe_block_info *next_gpe_block;
struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("ev_terminate"); ACPI_FUNCTION_TRACE ("ev_terminate");
...@@ -625,13 +550,19 @@ acpi_ev_terminate (void) ...@@ -625,13 +550,19 @@ acpi_ev_terminate (void)
/* /*
* Disable all GPEs * Disable all GPEs
*/ */
for (i = 0; i < acpi_gbl_gpe_number_max; i++) { gpe_block = acpi_gbl_gpe_block_list_head;
if (acpi_ev_get_gpe_number_index ((u32)i) != ACPI_GPE_INVALID) { while (gpe_block) {
status = acpi_hw_disable_gpe((u32) i); gpe_event_info = gpe_block->event_info;
for (i = 0; i < (gpe_block->register_count * 8); i++) {
status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not disable GPE %d\n", (u32) i)); ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not disable GPE %d\n", (u32) i));
} }
gpe_event_info++;
} }
gpe_block = gpe_block->next;
} }
/* /*
...@@ -654,21 +585,16 @@ acpi_ev_terminate (void) ...@@ -654,21 +585,16 @@ acpi_ev_terminate (void)
} }
/* /*
* Free global tables, etc. * Free global GPE blocks and related info structures
*/ */
if (acpi_gbl_gpe_register_info) { gpe_block = acpi_gbl_gpe_block_list_head;
ACPI_MEM_FREE (acpi_gbl_gpe_register_info); while (gpe_block) {
acpi_gbl_gpe_register_info = NULL; next_gpe_block = gpe_block->next;
} ACPI_MEM_FREE (gpe_block->event_info);
ACPI_MEM_FREE (gpe_block->register_info);
if (acpi_gbl_gpe_number_info) { ACPI_MEM_FREE (gpe_block);
ACPI_MEM_FREE (acpi_gbl_gpe_number_info);
acpi_gbl_gpe_number_info = NULL; gpe_block = next_gpe_block;
}
if (acpi_gbl_gpe_number_to_index) {
ACPI_MEM_FREE (acpi_gbl_gpe_number_to_index);
acpi_gbl_gpe_number_to_index = NULL;
} }
return_VOID; return_VOID;
......
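These event-handling hunks replace the global GPE tables (gpe_register_info, gpe_number_info, gpe_number_to_index) with a linked list of GPE blocks, each owning register_info and event_info arrays sized register_count * 8. A sketch of the traversal ev_terminate() now performs, with simplified types:

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for the new ACPI CA structures. */
struct gpe_event_info {
	int enabled;
};

struct gpe_block_info {
	struct gpe_block_info *next;
	int register_count;
	struct gpe_event_info *event_info;	/* register_count * 8 entries */
};

/* Every block on the list, every event in the block. */
static void disable_all_gpes(struct gpe_block_info *head)
{
	for (struct gpe_block_info *b = head; b; b = b->next)
		for (int i = 0; i < b->register_count * 8; i++)
			b->event_info[i].enabled = 0;
}

int main(void)
{
	struct gpe_event_info ev[16] = { [0].enabled = 1, [9].enabled = 1 };
	struct gpe_block_info blk = { NULL, 2, ev };

	disable_all_gpes(&blk);
	printf("event 0 enabled after terminate: %d\n", ev[0].enabled);
	return 0;
}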
...@@ -69,38 +69,24 @@ acpi_ev_sci_handler ( ...@@ -69,38 +69,24 @@ acpi_ev_sci_handler (
void *context) void *context)
{ {
u32 interrupt_handled = ACPI_INTERRUPT_NOT_HANDLED; u32 interrupt_handled = ACPI_INTERRUPT_NOT_HANDLED;
u32 value;
acpi_status status;
ACPI_FUNCTION_TRACE("ev_sci_handler"); ACPI_FUNCTION_TRACE("ev_sci_handler");
/* /*
* Make sure that ACPI is enabled by checking SCI_EN. Note that we are * We are guaranteed by the ACPI CA initialization/shutdown code that
* required to treat the SCI interrupt as sharable, level, active low. * if this interrupt handler is installed, ACPI is enabled.
*/ */
status = acpi_get_register (ACPI_BITREG_SCI_ENABLE, &value, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) {
return (ACPI_INTERRUPT_NOT_HANDLED);
}
if (!value) {
/* ACPI is not enabled; this interrupt cannot be for us */
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
/* /*
* Fixed acpi_events: * Fixed acpi_events:
* -------------
* Check for and dispatch any Fixed acpi_events that have occurred * Check for and dispatch any Fixed acpi_events that have occurred
*/ */
interrupt_handled |= acpi_ev_fixed_event_detect (); interrupt_handled |= acpi_ev_fixed_event_detect ();
/* /*
* GPEs: * GPEs:
* -----
* Check for and dispatch any GPEs that have occurred * Check for and dispatch any GPEs that have occurred
*/ */
interrupt_handled |= acpi_ev_gpe_detect (); interrupt_handled |= acpi_ev_gpe_detect ();
......
...@@ -492,7 +492,7 @@ acpi_install_gpe_handler ( ...@@ -492,7 +492,7 @@ acpi_install_gpe_handler (
void *context) void *context)
{ {
acpi_status status; acpi_status status;
struct acpi_gpe_number_info *gpe_number_info; struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_install_gpe_handler"); ACPI_FUNCTION_TRACE ("acpi_install_gpe_handler");
...@@ -506,8 +506,8 @@ acpi_install_gpe_handler ( ...@@ -506,8 +506,8 @@ acpi_install_gpe_handler (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number); gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
if (!gpe_number_info) { if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
...@@ -518,25 +518,25 @@ acpi_install_gpe_handler ( ...@@ -518,25 +518,25 @@ acpi_install_gpe_handler (
/* Make sure that there isn't a handler there already */ /* Make sure that there isn't a handler there already */
if (gpe_number_info->handler) { if (gpe_event_info->handler) {
status = AE_ALREADY_EXISTS; status = AE_ALREADY_EXISTS;
goto cleanup; goto cleanup;
} }
/* Install the handler */ /* Install the handler */
gpe_number_info->handler = handler; gpe_event_info->handler = handler;
gpe_number_info->context = context; gpe_event_info->context = context;
gpe_number_info->type = (u8) type; gpe_event_info->type = (u8) type;
/* Clear the GPE (of stale events), then enable it */ /* Clear the GPE (of stale events), then enable it */
status = acpi_hw_clear_gpe (gpe_number); status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
goto cleanup; goto cleanup;
} }
status = acpi_hw_enable_gpe (gpe_number); status = acpi_hw_enable_gpe (gpe_event_info);
cleanup: cleanup:
...@@ -564,7 +564,7 @@ acpi_remove_gpe_handler ( ...@@ -564,7 +564,7 @@ acpi_remove_gpe_handler (
acpi_gpe_handler handler) acpi_gpe_handler handler)
{ {
acpi_status status; acpi_status status;
struct acpi_gpe_number_info *gpe_number_info; struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_remove_gpe_handler"); ACPI_FUNCTION_TRACE ("acpi_remove_gpe_handler");
...@@ -578,14 +578,14 @@ acpi_remove_gpe_handler ( ...@@ -578,14 +578,14 @@ acpi_remove_gpe_handler (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number); gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
if (!gpe_number_info) { if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
/* Disable the GPE before removing the handler */ /* Disable the GPE before removing the handler */
status = acpi_hw_disable_gpe (gpe_number); status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -597,16 +597,16 @@ acpi_remove_gpe_handler ( ...@@ -597,16 +597,16 @@ acpi_remove_gpe_handler (
/* Make sure that the installed handler is the same */ /* Make sure that the installed handler is the same */
if (gpe_number_info->handler != handler) { if (gpe_event_info->handler != handler) {
(void) acpi_hw_enable_gpe (gpe_number); (void) acpi_hw_enable_gpe (gpe_event_info);
status = AE_BAD_PARAMETER; status = AE_BAD_PARAMETER;
goto cleanup; goto cleanup;
} }
/* Remove the handler */ /* Remove the handler */
gpe_number_info->handler = NULL; gpe_event_info->handler = NULL;
gpe_number_info->context = NULL; gpe_event_info->context = NULL;
cleanup: cleanup:
......
...@@ -163,6 +163,7 @@ acpi_enable_event ( ...@@ -163,6 +163,7 @@ acpi_enable_event (
{ {
acpi_status status = AE_OK; acpi_status status = AE_OK;
u32 value; u32 value;
struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_enable_event"); ACPI_FUNCTION_TRACE ("acpi_enable_event");
...@@ -209,19 +210,20 @@ acpi_enable_event ( ...@@ -209,19 +210,20 @@ acpi_enable_event (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) { gpe_event_info = acpi_ev_get_gpe_event_info (event);
if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
/* Enable the requested GPE number */ /* Enable the requested GPE number */
status = acpi_hw_enable_gpe (event); status = acpi_hw_enable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
if (flags & ACPI_EVENT_WAKE_ENABLE) { if (flags & ACPI_EVENT_WAKE_ENABLE) {
acpi_hw_enable_gpe_for_wakeup (event); acpi_hw_enable_gpe_for_wakeup (gpe_event_info);
} }
break; break;
...@@ -257,6 +259,7 @@ acpi_disable_event ( ...@@ -257,6 +259,7 @@ acpi_disable_event (
{ {
acpi_status status = AE_OK; acpi_status status = AE_OK;
u32 value; u32 value;
struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_disable_event"); ACPI_FUNCTION_TRACE ("acpi_disable_event");
...@@ -301,7 +304,8 @@ acpi_disable_event ( ...@@ -301,7 +304,8 @@ acpi_disable_event (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) { gpe_event_info = acpi_ev_get_gpe_event_info (event);
if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
...@@ -311,10 +315,10 @@ acpi_disable_event ( ...@@ -311,10 +315,10 @@ acpi_disable_event (
*/ */
if (flags & ACPI_EVENT_WAKE_DISABLE) { if (flags & ACPI_EVENT_WAKE_DISABLE) {
acpi_hw_disable_gpe_for_wakeup (event); acpi_hw_disable_gpe_for_wakeup (gpe_event_info);
} }
else { else {
status = acpi_hw_disable_gpe (event); status = acpi_hw_disable_gpe (gpe_event_info);
} }
break; break;
...@@ -346,6 +350,7 @@ acpi_clear_event ( ...@@ -346,6 +350,7 @@ acpi_clear_event (
u32 type) u32 type)
{ {
acpi_status status = AE_OK; acpi_status status = AE_OK;
struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_clear_event"); ACPI_FUNCTION_TRACE ("acpi_clear_event");
...@@ -375,11 +380,12 @@ acpi_clear_event ( ...@@ -375,11 +380,12 @@ acpi_clear_event (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) { gpe_event_info = acpi_ev_get_gpe_event_info (event);
if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
status = acpi_hw_clear_gpe (event); status = acpi_hw_clear_gpe (gpe_event_info);
break; break;
...@@ -415,6 +421,7 @@ acpi_get_event_status ( ...@@ -415,6 +421,7 @@ acpi_get_event_status (
acpi_event_status *event_status) acpi_event_status *event_status)
{ {
acpi_status status = AE_OK; acpi_status status = AE_OK;
struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_get_event_status"); ACPI_FUNCTION_TRACE ("acpi_get_event_status");
...@@ -447,7 +454,8 @@ acpi_get_event_status ( ...@@ -447,7 +454,8 @@ acpi_get_event_status (
/* Ensure that we have a valid GPE number */ /* Ensure that we have a valid GPE number */
if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) { gpe_event_info = acpi_ev_get_gpe_event_info (event);
if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER); return_ACPI_STATUS (AE_BAD_PARAMETER);
} }
...@@ -464,3 +472,4 @@ acpi_get_event_status ( ...@@ -464,3 +472,4 @@ acpi_get_event_status (
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -49,26 +49,6 @@ ...@@ -49,26 +49,6 @@
ACPI_MODULE_NAME ("hwgpe") ACPI_MODULE_NAME ("hwgpe")
/******************************************************************************
*
* FUNCTION: acpi_hw_get_gpe_bit_mask
*
* PARAMETERS: gpe_number - The GPE
*
* RETURN: Gpe register bitmask for this gpe level
*
* DESCRIPTION: Get the bitmask for this GPE
*
******************************************************************************/
u8
acpi_hw_get_gpe_bit_mask (
u32 gpe_number)
{
return (acpi_gbl_gpe_number_info [acpi_ev_get_gpe_number_index (gpe_number)].bit_mask);
}
/****************************************************************************** /******************************************************************************
* *
* FUNCTION: acpi_hw_enable_gpe * FUNCTION: acpi_hw_enable_gpe
...@@ -83,37 +63,29 @@ acpi_hw_get_gpe_bit_mask ( ...@@ -83,37 +63,29 @@ acpi_hw_get_gpe_bit_mask (
acpi_status acpi_status
acpi_hw_enable_gpe ( acpi_hw_enable_gpe (
u32 gpe_number) struct acpi_gpe_event_info *gpe_event_info)
{ {
u32 in_byte; u32 in_byte;
acpi_status status; acpi_status status;
struct acpi_gpe_register_info *gpe_register_info;
ACPI_FUNCTION_ENTRY (); ACPI_FUNCTION_ENTRY ();
/* Get the info block for the entire GPE register */
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
if (!gpe_register_info) {
return (AE_BAD_PARAMETER);
}
/* /*
* Read the current value of the register, set the appropriate bit * Read the current value of the register, set the appropriate bit
* to enable the GPE, and write out the new register. * to enable the GPE, and write out the new register.
*/ */
status = acpi_hw_low_level_read (8, &in_byte, status = acpi_hw_low_level_read (8, &in_byte,
&gpe_register_info->enable_address, 0); &gpe_event_info->register_info->enable_address, 0);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return (status); return (status);
} }
/* Write with the new GPE bit enabled */ /* Write with the new GPE bit enabled */
status = acpi_hw_low_level_write (8, (in_byte | acpi_hw_get_gpe_bit_mask (gpe_number)), status = acpi_hw_low_level_write (8, (in_byte | gpe_event_info->bit_mask),
&gpe_register_info->enable_address, 0); &gpe_event_info->register_info->enable_address, 0);
return (status); return (status);
} }
...@@ -134,7 +106,7 @@ acpi_hw_enable_gpe ( ...@@ -134,7 +106,7 @@ acpi_hw_enable_gpe (
void void
acpi_hw_enable_gpe_for_wakeup ( acpi_hw_enable_gpe_for_wakeup (
u32 gpe_number) struct acpi_gpe_event_info *gpe_event_info)
{ {
struct acpi_gpe_register_info *gpe_register_info; struct acpi_gpe_register_info *gpe_register_info;
...@@ -144,7 +116,7 @@ acpi_hw_enable_gpe_for_wakeup ( ...@@ -144,7 +116,7 @@ acpi_hw_enable_gpe_for_wakeup (
/* Get the info block for the entire GPE register */ /* Get the info block for the entire GPE register */
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number); gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) { if (!gpe_register_info) {
return; return;
} }
...@@ -152,7 +124,7 @@ acpi_hw_enable_gpe_for_wakeup ( ...@@ -152,7 +124,7 @@ acpi_hw_enable_gpe_for_wakeup (
/* /*
* Set the bit so we will not disable this when sleeping * Set the bit so we will not disable this when sleeping
*/ */
gpe_register_info->wake_enable |= acpi_hw_get_gpe_bit_mask (gpe_number); gpe_register_info->wake_enable |= gpe_event_info->bit_mask;
} }
...@@ -170,7 +142,7 @@ acpi_hw_enable_gpe_for_wakeup ( ...@@ -170,7 +142,7 @@ acpi_hw_enable_gpe_for_wakeup (
acpi_status acpi_status
acpi_hw_disable_gpe ( acpi_hw_disable_gpe (
u32 gpe_number) struct acpi_gpe_event_info *gpe_event_info)
{ {
u32 in_byte; u32 in_byte;
acpi_status status; acpi_status status;
...@@ -182,7 +154,7 @@ acpi_hw_disable_gpe ( ...@@ -182,7 +154,7 @@ acpi_hw_disable_gpe (
/* Get the info block for the entire GPE register */ /* Get the info block for the entire GPE register */
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number); gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) { if (!gpe_register_info) {
return (AE_BAD_PARAMETER); return (AE_BAD_PARAMETER);
} }
...@@ -199,13 +171,13 @@ acpi_hw_disable_gpe ( ...@@ -199,13 +171,13 @@ acpi_hw_disable_gpe (
/* Write the byte with this GPE bit cleared */ /* Write the byte with this GPE bit cleared */
status = acpi_hw_low_level_write (8, (in_byte & ~(acpi_hw_get_gpe_bit_mask (gpe_number))), status = acpi_hw_low_level_write (8, (in_byte & ~(gpe_event_info->bit_mask)),
&gpe_register_info->enable_address, 0); &gpe_register_info->enable_address, 0);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return (status); return (status);
} }
acpi_hw_disable_gpe_for_wakeup(gpe_number); acpi_hw_disable_gpe_for_wakeup (gpe_event_info);
return (AE_OK); return (AE_OK);
} }
...@@ -225,7 +197,7 @@ acpi_hw_disable_gpe ( ...@@ -225,7 +197,7 @@ acpi_hw_disable_gpe (
void void
acpi_hw_disable_gpe_for_wakeup ( acpi_hw_disable_gpe_for_wakeup (
u32 gpe_number) struct acpi_gpe_event_info *gpe_event_info)
{ {
struct acpi_gpe_register_info *gpe_register_info; struct acpi_gpe_register_info *gpe_register_info;
...@@ -235,7 +207,7 @@ acpi_hw_disable_gpe_for_wakeup ( ...@@ -235,7 +207,7 @@ acpi_hw_disable_gpe_for_wakeup (
/* Get the info block for the entire GPE register */ /* Get the info block for the entire GPE register */
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number); gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) { if (!gpe_register_info) {
return; return;
} }
...@@ -243,7 +215,7 @@ acpi_hw_disable_gpe_for_wakeup ( ...@@ -243,7 +215,7 @@ acpi_hw_disable_gpe_for_wakeup (
/* /*
* Clear the bit so we will disable this when sleeping * Clear the bit so we will disable this when sleeping
*/ */
gpe_register_info->wake_enable &= ~(acpi_hw_get_gpe_bit_mask (gpe_number)); gpe_register_info->wake_enable &= ~(gpe_event_info->bit_mask);
} }
...@@ -261,28 +233,20 @@ acpi_hw_disable_gpe_for_wakeup ( ...@@ -261,28 +233,20 @@ acpi_hw_disable_gpe_for_wakeup (
acpi_status acpi_status
acpi_hw_clear_gpe ( acpi_hw_clear_gpe (
u32 gpe_number) struct acpi_gpe_event_info *gpe_event_info)
{ {
acpi_status status; acpi_status status;
struct acpi_gpe_register_info *gpe_register_info;
ACPI_FUNCTION_ENTRY (); ACPI_FUNCTION_ENTRY ();
/* Get the info block for the entire GPE register */
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
if (!gpe_register_info) {
return (AE_BAD_PARAMETER);
}
/* /*
* Write a one to the appropriate bit in the status register to * Write a one to the appropriate bit in the status register to
* clear this GPE. * clear this GPE.
*/ */
status = acpi_hw_low_level_write (8, acpi_hw_get_gpe_bit_mask (gpe_number), status = acpi_hw_low_level_write (8, gpe_event_info->bit_mask,
&gpe_register_info->status_address, 0); &gpe_event_info->register_info->status_address, 0);
return (status); return (status);
} }
...@@ -308,6 +272,7 @@ acpi_hw_get_gpe_status ( ...@@ -308,6 +272,7 @@ acpi_hw_get_gpe_status (
u32 in_byte; u32 in_byte;
u8 bit_mask; u8 bit_mask;
struct acpi_gpe_register_info *gpe_register_info; struct acpi_gpe_register_info *gpe_register_info;
struct acpi_gpe_event_info *gpe_event_info;
acpi_status status; acpi_status status;
acpi_event_status local_event_status = 0; acpi_event_status local_event_status = 0;
...@@ -319,16 +284,18 @@ acpi_hw_get_gpe_status ( ...@@ -319,16 +284,18 @@ acpi_hw_get_gpe_status (
return (AE_BAD_PARAMETER); return (AE_BAD_PARAMETER);
} }
/* Get the info block for the entire GPE register */ gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
if (!gpe_event_info) {
gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
if (!gpe_register_info) {
return (AE_BAD_PARAMETER); return (AE_BAD_PARAMETER);
} }
/* Get the info block for the entire GPE register */
gpe_register_info = gpe_event_info->register_info;
/* Get the register bitmask for this GPE */ /* Get the register bitmask for this GPE */
bit_mask = acpi_hw_get_gpe_bit_mask (gpe_number); bit_mask = gpe_event_info->bit_mask;
/* GPE Enabled? */ /* GPE Enabled? */
...@@ -375,7 +342,7 @@ acpi_hw_get_gpe_status ( ...@@ -375,7 +342,7 @@ acpi_hw_get_gpe_status (
* *
* DESCRIPTION: Disable all non-wakeup GPEs * DESCRIPTION: Disable all non-wakeup GPEs
* Call with interrupts disabled. The interrupt handler also * Call with interrupts disabled. The interrupt handler also
* modifies acpi_gbl_gpe_register_info[i].Enable, so it should not be * modifies gpe_register_info->Enable, so it should not be
* given the chance to run until after non-wake GPEs are * given the chance to run until after non-wake GPEs are
* re-enabled. * re-enabled.
* *
...@@ -389,40 +356,49 @@ acpi_hw_disable_non_wakeup_gpes ( ...@@ -389,40 +356,49 @@ acpi_hw_disable_non_wakeup_gpes (
struct acpi_gpe_register_info *gpe_register_info; struct acpi_gpe_register_info *gpe_register_info;
u32 in_value; u32 in_value;
acpi_status status; acpi_status status;
struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_ENTRY (); ACPI_FUNCTION_ENTRY ();
for (i = 0; i < acpi_gbl_gpe_register_count; i++) { gpe_block = acpi_gbl_gpe_block_list_head;
/* Get the info block for the entire GPE register */ while (gpe_block) {
/* Get the register info for the entire GPE block */
gpe_register_info = &acpi_gbl_gpe_register_info[i]; gpe_register_info = gpe_block->register_info;
if (!gpe_register_info) { if (!gpe_register_info) {
return (AE_BAD_PARAMETER); return (AE_BAD_PARAMETER);
} }
/* for (i = 0; i < gpe_block->register_count; i++) {
* Read the enabled status of all GPEs. We /*
* will be using it to restore all the GPEs later. * Read the enabled status of all GPEs. We
*/ * will be using it to restore all the GPEs later.
status = acpi_hw_low_level_read (8, &in_value, */
&gpe_register_info->enable_address, 0); status = acpi_hw_low_level_read (8, &in_value,
if (ACPI_FAILURE (status)) { &gpe_register_info->enable_address, 0);
return (status); if (ACPI_FAILURE (status)) {
return (status);
}
gpe_register_info->enable = (u8) in_value;
/*
* Disable all GPEs except wakeup GPEs.
*/
status = acpi_hw_low_level_write (8, gpe_register_info->wake_enable,
&gpe_register_info->enable_address, 0);
if (ACPI_FAILURE (status)) {
return (status);
}
gpe_register_info++;
} }
gpe_register_info->enable = (u8) in_value; gpe_block = gpe_block->next;
/*
* Disable all GPEs except wakeup GPEs.
*/
status = acpi_hw_low_level_write (8, gpe_register_info->wake_enable,
&gpe_register_info->enable_address, 0);
if (ACPI_FAILURE (status)) {
return (status);
}
} }
return (AE_OK); return (AE_OK);
} }
...@@ -446,28 +422,37 @@ acpi_hw_enable_non_wakeup_gpes ( ...@@ -446,28 +422,37 @@ acpi_hw_enable_non_wakeup_gpes (
u32 i; u32 i;
struct acpi_gpe_register_info *gpe_register_info; struct acpi_gpe_register_info *gpe_register_info;
acpi_status status; acpi_status status;
struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_ENTRY (); ACPI_FUNCTION_ENTRY ();
for (i = 0; i < acpi_gbl_gpe_register_count; i++) { gpe_block = acpi_gbl_gpe_block_list_head;
/* Get the info block for the entire GPE register */ while (gpe_block) {
/* Get the register info for the entire GPE block */
gpe_register_info = &acpi_gbl_gpe_register_info[i]; gpe_register_info = gpe_block->register_info;
if (!gpe_register_info) { if (!gpe_register_info) {
return (AE_BAD_PARAMETER); return (AE_BAD_PARAMETER);
} }
/* for (i = 0; i < gpe_block->register_count; i++) {
* We previously stored the enabled status of all GPEs. /*
* Blast them back in. * We previously stored the enabled status of all GPEs.
*/ * Blast them back in.
status = acpi_hw_low_level_write (8, gpe_register_info->enable, */
&gpe_register_info->enable_address, 0); status = acpi_hw_low_level_write (8, gpe_register_info->enable,
if (ACPI_FAILURE (status)) { &gpe_register_info->enable_address, 0);
return (status); if (ACPI_FAILURE (status)) {
return (status);
}
gpe_register_info++;
} }
gpe_block = gpe_block->next;
} }
return (AE_OK); return (AE_OK);
} }
...@@ -67,8 +67,8 @@ acpi_status ...@@ -67,8 +67,8 @@ acpi_status
acpi_hw_clear_acpi_status (void) acpi_hw_clear_acpi_status (void)
{ {
acpi_native_uint i; acpi_native_uint i;
acpi_native_uint gpe_block;
acpi_status status; acpi_status status;
struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_TRACE ("hw_clear_acpi_status"); ACPI_FUNCTION_TRACE ("hw_clear_acpi_status");
...@@ -100,16 +100,19 @@ acpi_hw_clear_acpi_status (void) ...@@ -100,16 +100,19 @@ acpi_hw_clear_acpi_status (void)
} }
} }
/* Clear the GPE Bits */ /* Clear the GPE Bits in all GPE registers in all GPE blocks */
for (gpe_block = 0; gpe_block < ACPI_MAX_GPE_BLOCKS; gpe_block++) { gpe_block = acpi_gbl_gpe_block_list_head;
for (i = 0; i < acpi_gbl_gpe_block_info[gpe_block].register_count; i++) { while (gpe_block) {
for (i = 0; i < gpe_block->register_count; i++) {
status = acpi_hw_low_level_write (8, 0xFF, status = acpi_hw_low_level_write (8, 0xFF,
acpi_gbl_gpe_block_info[gpe_block].block_address, (u32) i); &gpe_block->register_info[i].status_address, (u32) i);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
goto unlock_and_exit; goto unlock_and_exit;
} }
} }
gpe_block = gpe_block->next;
} }
unlock_and_exit: unlock_and_exit:
......
...@@ -250,7 +250,7 @@ acpi_enter_sleep_state ( ...@@ -250,7 +250,7 @@ acpi_enter_sleep_state (
/* Get current value of PM1A control */ /* Get current value of PM1A control */
status = acpi_hw_register_read (ACPI_MTX_LOCK, ACPI_REGISTER_PM1_CONTROL, &PM1Acontrol); status = acpi_hw_register_read (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1_CONTROL, &PM1Acontrol);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -268,12 +268,12 @@ acpi_enter_sleep_state ( ...@@ -268,12 +268,12 @@ acpi_enter_sleep_state (
/* Write #1: fill in SLP_TYP data */ /* Write #1: fill in SLP_TYP data */
status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol); status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol); status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -287,12 +287,12 @@ acpi_enter_sleep_state ( ...@@ -287,12 +287,12 @@ acpi_enter_sleep_state (
ACPI_FLUSH_CPU_CACHE (); ACPI_FLUSH_CPU_CACHE ();
status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol); status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol); status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -308,7 +308,7 @@ acpi_enter_sleep_state ( ...@@ -308,7 +308,7 @@ acpi_enter_sleep_state (
*/ */
acpi_os_stall (10000000); acpi_os_stall (10000000);
status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1_CONTROL, status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1_CONTROL,
sleep_enable_reg_info->access_bit_mask); sleep_enable_reg_info->access_bit_mask);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
...@@ -318,7 +318,7 @@ acpi_enter_sleep_state ( ...@@ -318,7 +318,7 @@ acpi_enter_sleep_state (
/* Wait until we enter sleep state */ /* Wait until we enter sleep state */
do { do {
status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_LOCK); status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -327,7 +327,7 @@ acpi_enter_sleep_state ( ...@@ -327,7 +327,7 @@ acpi_enter_sleep_state (
} while (!in_value); } while (!in_value);
status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_LOCK); status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) { if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status); return_ACPI_STATUS (status);
} }
...@@ -335,6 +335,51 @@ acpi_enter_sleep_state ( ...@@ -335,6 +335,51 @@ acpi_enter_sleep_state (
return_ACPI_STATUS (AE_OK); return_ACPI_STATUS (AE_OK);
} }
/******************************************************************************
*
* FUNCTION: acpi_enter_sleep_state_s4bios
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Perform an S4 BIOS request.
* THIS FUNCTION MUST BE CALLED WITH INTERRUPTS DISABLED
*
******************************************************************************/
acpi_status
acpi_enter_sleep_state_s4bios (
void)
{
u32 in_value;
acpi_status status;
ACPI_FUNCTION_TRACE ("acpi_enter_sleep_state_s4bios");
acpi_set_register (ACPI_BITREG_WAKE_STATUS, 1, ACPI_MTX_DO_NOT_LOCK);
acpi_hw_clear_acpi_status();
acpi_hw_disable_non_wakeup_gpes();
ACPI_FLUSH_CPU_CACHE();
status = acpi_os_write_port (acpi_gbl_FADT->smi_cmd, (acpi_integer) acpi_gbl_FADT->S4bios_req, 8);
do {
acpi_os_stall(1000);
status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
} while (!in_value);
return_ACPI_STATUS (AE_OK);
}
/****************************************************************************** /******************************************************************************
* *
* FUNCTION: acpi_leave_sleep_state * FUNCTION: acpi_leave_sleep_state
......
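acpi_enter_sleep_state_s4bios() hands control to the BIOS by writing the FADT's S4bios_req value to the SMI command port, then polling the wake status bit. A user-space sketch with port I/O stubbed out (the port and request values are hypothetical, not taken from any real FADT):

#include <stdio.h>

static int wake_status;	/* stands in for the WAK_STS bit */

/* Port I/O stub; a real build would use the hardware access layer. */
static void outb(unsigned char val, unsigned short port)
{
	printf("outb 0x%02x -> port 0x%x\n", val, port);
	wake_status = 1;	/* firmware sets WAK_STS on resume */
}

int main(void)
{
	unsigned short smi_cmd = 0xb2;	/* hypothetical FADT->smi_cmd */
	unsigned char s4bios_req = 0xf1;/* hypothetical FADT->S4bios_req */

	outb(s4bios_req, smi_cmd);	/* hand control to the BIOS */
	while (!wake_status)
		;			/* acpi_os_stall(1000) between polls */
	printf("woke from S4bios\n");
	return 0;
}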
...@@ -514,10 +514,12 @@ acpi_os_write_pci_configuration ( ...@@ -514,10 +514,12 @@ acpi_os_write_pci_configuration (
/* TODO: Change code to take advantage of driver model more */ /* TODO: Change code to take advantage of driver model more */
void void
acpi_os_derive_pci_id ( acpi_os_derive_pci_id_2 (
acpi_handle rhandle, /* upper bound */ acpi_handle rhandle, /* upper bound */
acpi_handle chandle, /* current node */ acpi_handle chandle, /* current node */
struct acpi_pci_id **id) struct acpi_pci_id **id,
int *is_bridge,
u8 *bus_number)
{ {
acpi_handle handle; acpi_handle handle;
struct acpi_pci_id *pci_id = *id; struct acpi_pci_id *pci_id = *id;
...@@ -528,7 +530,7 @@ acpi_os_derive_pci_id ( ...@@ -528,7 +530,7 @@ acpi_os_derive_pci_id (
acpi_get_parent(chandle, &handle); acpi_get_parent(chandle, &handle);
if (handle != rhandle) { if (handle != rhandle) {
acpi_os_derive_pci_id(rhandle, handle, &pci_id); acpi_os_derive_pci_id_2(rhandle, handle, &pci_id, is_bridge, bus_number);
status = acpi_get_type(handle, &type); status = acpi_get_type(handle, &type);
if ( (ACPI_FAILURE(status)) || (type != ACPI_TYPE_DEVICE) ) if ( (ACPI_FAILURE(status)) || (type != ACPI_TYPE_DEVICE) )
...@@ -539,17 +541,42 @@ acpi_os_derive_pci_id ( ...@@ -539,17 +541,42 @@ acpi_os_derive_pci_id (
pci_id->device = ACPI_HIWORD (ACPI_LODWORD (temp)); pci_id->device = ACPI_HIWORD (ACPI_LODWORD (temp));
pci_id->function = ACPI_LOWORD (ACPI_LODWORD (temp)); pci_id->function = ACPI_LOWORD (ACPI_LODWORD (temp));
if (*is_bridge)
pci_id->bus = *bus_number;
/* any nicer way to get bus number of bridge ? */ /* any nicer way to get bus number of bridge ? */
status = acpi_os_read_pci_configuration(pci_id, 0x0e, &tu8, 8); status = acpi_os_read_pci_configuration(pci_id, 0x0e, &tu8, 8);
if (ACPI_SUCCESS(status) && (tu8 & 0x7f) == 1) { if (ACPI_SUCCESS(status) &&
((tu8 & 0x7f) == 1 || (tu8 & 0x7f) == 2)) {
status = acpi_os_read_pci_configuration(pci_id, 0x18, &tu8, 8);
if (!ACPI_SUCCESS(status)) {
/* Certainly broken... FIX ME */
return;
}
*is_bridge = 1;
pci_id->bus = tu8;
status = acpi_os_read_pci_configuration(pci_id, 0x19, &tu8, 8); status = acpi_os_read_pci_configuration(pci_id, 0x19, &tu8, 8);
if (ACPI_SUCCESS(status)) if (ACPI_SUCCESS(status)) {
pci_id->bus = tu8; *bus_number = tu8;
} }
} else
*is_bridge = 0;
} }
} }
} }
void
acpi_os_derive_pci_id (
acpi_handle rhandle, /* upper bound */
acpi_handle chandle, /* current node */
struct acpi_pci_id **id)
{
int is_bridge = 1;
u8 bus_number = (*id)->bus;
acpi_os_derive_pci_id_2(rhandle, chandle, id, &is_bridge, &bus_number);
}
#else /*!CONFIG_ACPI_PCI*/ #else /*!CONFIG_ACPI_PCI*/
acpi_status acpi_status
......
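The osl.c change threads is_bridge/bus_number state through the recursive walk so devices behind a bridge are addressed on the bridge's secondary bus. Detection reads the header type at config offset 0x0e (low seven bits: 1 is a PCI-to-PCI bridge, 2 a CardBus bridge) and the secondary bus number at 0x19; a sketch against a faked config space:

#include <stdio.h>

/* Faked 256-byte config space for one device; only the two offsets
 * the walk reads are populated. */
static unsigned char cfg[256] = {
	[0x0e] = 0x01,	/* header type 1: PCI-to-PCI bridge */
	[0x19] = 0x02,	/* secondary bus number */
};

int main(void)
{
	unsigned char hdr = cfg[0x0e] & 0x7f;	/* mask multi-function bit */

	if (hdr == 1 || hdr == 2)	/* PCI-PCI or CardBus bridge */
		printf("bridge: children are on bus %u\n", cfg[0x19]);
	else
		printf("not a bridge\n");
	return 0;
}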
...@@ -90,42 +90,25 @@ static struct { ...@@ -90,42 +90,25 @@ static struct {
PCI Link Device Management PCI Link Device Management
-------------------------------------------------------------------------- */ -------------------------------------------------------------------------- */
static int static acpi_status
acpi_pci_link_get_possible ( acpi_pci_link_check_possible (
struct acpi_pci_link *link) struct acpi_resource *resource,
void *context)
{ {
int result = 0; struct acpi_pci_link *link = (struct acpi_pci_link *) context;
acpi_status status = AE_OK;
struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
struct acpi_resource *resource = NULL;
int i = 0; int i = 0;
ACPI_FUNCTION_TRACE("acpi_pci_link_get_possible"); ACPI_FUNCTION_TRACE("acpi_pci_link_check_possible");
if (!link)
return_VALUE(-EINVAL);
status = acpi_get_possible_resources(link->handle, &buffer);
if (ACPI_FAILURE(status) || !buffer.pointer) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PRS\n"));
result = -ENODEV;
goto end;
}
resource = (struct acpi_resource *) buffer.pointer;
/* skip past dependent function resource (if present) */
if (resource->id == ACPI_RSTYPE_START_DPF)
resource = ACPI_NEXT_RESOURCE(resource);
switch (resource->id) { switch (resource->id) {
case ACPI_RSTYPE_START_DPF:
return AE_OK;
case ACPI_RSTYPE_IRQ: case ACPI_RSTYPE_IRQ:
{ {
struct acpi_resource_irq *p = &resource->data.irq; struct acpi_resource_irq *p = &resource->data.irq;
if (!p || !p->number_of_interrupts) { if (!p || !p->number_of_interrupts) {
 			ACPI_DEBUG_PRINT((ACPI_DB_WARN, "Blank IRQ resource\n"));
-			result = -ENODEV;
-			goto end;
+			return AE_OK;
 		}
 		for (i = 0; (i<p->number_of_interrupts && i<ACPI_PCI_LINK_MAX_POSSIBLE); i++) {
 			if (!p->interrupts[i]) {
@@ -143,8 +126,7 @@ acpi_pci_link_get_possible (
 		if (!p || !p->number_of_interrupts) {
 			ACPI_DEBUG_PRINT((ACPI_DB_WARN,
 				"Blank IRQ resource\n"));
-			result = -ENODEV;
-			goto end;
+			return AE_OK;
 		}
 		for (i = 0; (i<p->number_of_interrupts && i<ACPI_PCI_LINK_MAX_POSSIBLE); i++) {
 			if (!p->interrupts[i]) {
@@ -159,18 +141,76 @@ acpi_pci_link_get_possible (
 	default:
 		ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
 			"Resource is not an IRQ entry\n"));
-		result = -ENODEV;
-		goto end;
-		break;
+		return AE_OK;
 	}
+
+	return AE_CTRL_TERMINATE;
+}
+
+
+static int
+acpi_pci_link_get_possible (
+	struct acpi_pci_link	*link)
+{
+	acpi_status		status;
+
+	ACPI_FUNCTION_TRACE("acpi_pci_link_get_possible");
+
+	if (!link)
+		return_VALUE(-EINVAL);
+
+	status = acpi_walk_resources(link->handle, METHOD_NAME__PRS,
+			acpi_pci_link_check_possible, link);
+	if (ACPI_FAILURE(status)) {
+		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PRS\n"));
+		return_VALUE(-ENODEV);
+	}
+
 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 		"Found %d possible IRQs\n", link->irq.possible_count));
-end:
-	acpi_os_free(buffer.pointer);
-	return_VALUE(result);
+
+	return_VALUE(0);
+}
+
+
+static acpi_status
+acpi_pci_link_check_current (
+	struct acpi_resource	*resource,
+	void			*context)
+{
+	int			*irq = (int *) context;
+
+	ACPI_FUNCTION_TRACE("acpi_pci_link_check_current");
+
+	switch (resource->id) {
+	case ACPI_RSTYPE_IRQ:
+	{
+		struct acpi_resource_irq *p = &resource->data.irq;
+		if (!p || !p->number_of_interrupts) {
+			ACPI_DEBUG_PRINT((ACPI_DB_WARN,
+				"Blank IRQ resource\n"));
+			return AE_OK;
+		}
+		*irq = p->interrupts[0];
+		break;
+	}
+	case ACPI_RSTYPE_EXT_IRQ:
+	{
+		struct acpi_resource_ext_irq *p = &resource->data.extended_irq;
+		if (!p || !p->number_of_interrupts) {
+			ACPI_DEBUG_PRINT((ACPI_DB_WARN,
+				"Blank IRQ resource\n"));
+			return AE_OK;
+		}
+		*irq = p->interrupts[0];
+		break;
+	}
+	default:
+		ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+			"Resource isn't an IRQ\n"));
+		return AE_OK;
+	}
+
+	return AE_CTRL_TERMINATE;
 }
@@ -180,8 +220,6 @@ acpi_pci_link_get_current (
 {
 	int			result = 0;
 	acpi_status		status = AE_OK;
-	struct acpi_buffer	buffer = {ACPI_ALLOCATE_BUFFER, NULL};
-	struct acpi_resource	*resource = NULL;
 	int			irq = 0;

 	ACPI_FUNCTION_TRACE("acpi_pci_link_get_current");
@@ -206,47 +244,16 @@ acpi_pci_link_get_current (
 	 * Query and parse _CRS to get the current IRQ assignment.
 	 */
-	status = acpi_get_current_resources(link->handle, &buffer);
+	status = acpi_walk_resources(link->handle, METHOD_NAME__CRS,
+			acpi_pci_link_check_current, &irq);
 	if (ACPI_FAILURE(status)) {
 		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _CRS\n"));
 		result = -ENODEV;
 		goto end;
 	}
-	resource = (struct acpi_resource *) buffer.pointer;
-
-	switch (resource->id) {
-	case ACPI_RSTYPE_IRQ:
-	{
-		struct acpi_resource_irq *p = &resource->data.irq;
-		if (!p || !p->number_of_interrupts) {
-			ACPI_DEBUG_PRINT((ACPI_DB_WARN,
-				"Blank IRQ resource\n"));
-			result = -ENODEV;
-			goto end;
-		}
-		irq = p->interrupts[0];
-		break;
-	}
-	case ACPI_RSTYPE_EXT_IRQ:
-	{
-		struct acpi_resource_ext_irq *p = &resource->data.extended_irq;
-		if (!p || !p->number_of_interrupts) {
-			ACPI_DEBUG_PRINT((ACPI_DB_WARN,
-				"Blank IRQ resource\n"));
-			result = -ENODEV;
-			goto end;
-		}
-		irq = p->interrupts[0];
-		break;
-	}
-	default:
-		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Resource isn't an IRQ\n"));
-		result = -ENODEV;
-		goto end;
-	}
+
 	if (!irq) {
-		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid use of IRQ 0\n"));
+		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "No IRQ resource found\n"));
 		result = -ENODEV;
 		goto end;
 	}
@@ -263,8 +270,6 @@ acpi_pci_link_get_current (
 	ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Link at IRQ %d \n", link->irq.active));

 end:
-	acpi_os_free(buffer.pointer);
 	return_VALUE(result);
 }
......
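The conversion above replaces open-coded _PRS/_CRS buffer parsing with the acpi_walk_resources() callback style: the core invokes the callback once per resource, AE_OK means "keep walking", and AE_CTRL_TERMINATE stops the walk early without being treated as an error. Below is a minimal stand-alone sketch of that contract; the types and names are illustrative stand-ins, not the real ACPICA declarations.

    #include <stdio.h>

    /* Illustrative stand-ins for the ACPICA types used above. */
    typedef int acpi_status;
    #define AE_OK             0
    #define AE_CTRL_TERMINATE 1

    struct resource_stub { int id; int irq; };

    typedef acpi_status (*walk_callback)(struct resource_stub *res, void *context);

    /* Walk an array of resources, stopping when the callback says so. */
    static acpi_status walk_resources(struct resource_stub *res, int count,
                                      walk_callback cb, void *context)
    {
        int i;
        for (i = 0; i < count; i++) {
            acpi_status status = cb(&res[i], context);
            if (status == AE_CTRL_TERMINATE)
                return AE_OK;   /* early stop is not an error */
        }
        return AE_OK;
    }

    /* Mirrors acpi_pci_link_check_current(): record the first valid IRQ. */
    static acpi_status check_current(struct resource_stub *res, void *context)
    {
        int *irq = context;
        if (!res->irq)
            return AE_OK;           /* skip blank entries, keep walking */
        *irq = res->irq;
        return AE_CTRL_TERMINATE;   /* first valid IRQ wins */
    }

    int main(void)
    {
        struct resource_stub table[] = { {0, 0}, {1, 11}, {2, 5} };
        int irq = 0;

        walk_resources(table, 3, check_current, &irq);
        printf("active IRQ: %d\n", irq);    /* prints 11 */
        return 0;
    }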
@@ -1560,7 +1560,7 @@ acpi_processor_get_info (
 	acpi_status		status = 0;
 	union acpi_object	object = {0};
 	struct acpi_buffer	buffer = {sizeof(union acpi_object), &object};
-	static int		cpu_count = 0;
+	static int		cpu_index = 0;

 	ACPI_FUNCTION_TRACE("acpi_processor_get_info");
@@ -1570,6 +1570,13 @@ acpi_processor_get_info (
 	if (num_online_cpus() > 1)
 		errata.smp = TRUE;

+	/*
+	 * Extra Processor objects may be enumerated on MP systems with
+	 * less than the max # of CPUs. They should be ignored.
+	 */
+	if ((cpu_index + 1) > num_online_cpus())
+		return_VALUE(-ENODEV);
+
 	acpi_processor_errata(pr);

 	/*
@@ -1601,7 +1608,7 @@ acpi_processor_get_info (
 	 * TBD: Synch processor ID (via LAPIC/LSAPIC structures) on SMP.
 	 * >>> 'acpi_get_processor_id(acpi_id, &id)' in arch/xxx/acpi.c
 	 */
-	pr->id = cpu_count++;
+	pr->id = cpu_index++;
 	pr->acpi_id = object.processor.proc_id;

 	ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Processor [%d:%d]\n", pr->id,
@@ -1609,21 +1616,17 @@ acpi_processor_get_info (
 	if (!object.processor.pblk_address)
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No PBLK (NULL address)\n"));
-	else if (object.processor.pblk_length < 4)
+	else if (object.processor.pblk_length != 6)
 		ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid PBLK length [%d]\n",
 			object.processor.pblk_length));
 	else {
 		pr->throttling.address = object.processor.pblk_address;
 		pr->throttling.duty_offset = acpi_fadt.duty_offset;
 		pr->throttling.duty_width = acpi_fadt.duty_width;
-
-		if (object.processor.pblk_length >= 5)
-			pr->power.states[ACPI_STATE_C2].address =
-				object.processor.pblk_address + 4;
-		if (object.processor.pblk_length >= 6)
-			pr->power.states[ACPI_STATE_C3].address =
-				object.processor.pblk_address + 5;
+		pr->power.states[ACPI_STATE_C2].address =
+			object.processor.pblk_address + 4;
+		pr->power.states[ACPI_STATE_C3].address =
+			object.processor.pblk_address + 5;
 	}

 	acpi_processor_get_power_info(pr);
......
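The pblk_length test tightens from "< 4" to "!= 6" because the ACPI processor block (P_BLK) is a fixed six-byte I/O range: P_CNT, the 4-byte throttling control register, sits at offset 0, P_LVL2 at offset 4, and P_LVL3 at offset 5. That is also why the C2 and C3 entry addresses above are computed as base + 4 and base + 5. A small stand-alone sketch of that layout arithmetic, with illustrative names only:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ACPI_PBLK_LENGTH 6  /* P_CNT (4 bytes) + P_LVL2 (1) + P_LVL3 (1) */

    struct pblk_map {
        uint32_t p_cnt;     /* throttling control, offset 0 */
        uint32_t p_lvl2;    /* C2 entry register, offset 4 */
        uint32_t p_lvl3;    /* C3 entry register, offset 5 */
    };

    /* Derive the register addresses the same way the new code above does. */
    static int map_pblk(uint32_t base, uint32_t length, struct pblk_map *map)
    {
        if (length != ACPI_PBLK_LENGTH)
            return -1;  /* mirrors the "Invalid PBLK length" path */
        map->p_cnt  = base;
        map->p_lvl2 = base + 4;
        map->p_lvl3 = base + 5;
        return 0;
    }

    int main(void)
    {
        struct pblk_map map;

        assert(map_pblk(0x1010, 6, &map) == 0);
        printf("P_CNT=%#x P_LVL2=%#x P_LVL3=%#x\n",
               (unsigned) map.p_cnt, (unsigned) map.p_lvl2,
               (unsigned) map.p_lvl3);
        return 0;
    }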
@@ -212,6 +212,60 @@ acpi_rs_get_prs_method_data (
 }

+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_rs_get_method_data
+ *
+ * PARAMETERS:  Handle          - a handle to the containing object
+ *              Path            - pathname of the method, relative to Handle
+ *              ret_buffer      - a pointer to a buffer structure for the
+ *                                results
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: This function is called to get the _CRS or _PRS value of an
+ *              object contained in an object specified by the handle passed in
+ *
+ *              If the function fails an appropriate status will be returned
+ *              and the contents of the caller's buffer are undefined.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_rs_get_method_data (
+	acpi_handle             handle,
+	char                    *path,
+	struct acpi_buffer      *ret_buffer)
+{
+	union acpi_operand_object *obj_desc;
+	acpi_status             status;
+
+	ACPI_FUNCTION_TRACE ("rs_get_method_data");
+
+	/* Parameters guaranteed valid by caller */
+
+	/*
+	 * Execute the method, no parameters
+	 */
+	status = acpi_ut_evaluate_object (handle, path, ACPI_BTYPE_BUFFER, &obj_desc);
+	if (ACPI_FAILURE (status)) {
+		return_ACPI_STATUS (status);
+	}
+
+	/*
+	 * Make the call to create a resource linked list from the
+	 * byte stream buffer that comes back from the method
+	 * execution.
+	 */
+	status = acpi_rs_create_resource_list (obj_desc, ret_buffer);
+
+	/* On exit, we must delete the object returned by evaluate_object */
+
+	acpi_ut_remove_reference (obj_desc);
+
+	return_ACPI_STATUS (status);
+}
+
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_rs_set_srs_method_data
......
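acpi_rs_get_method_data() hands its result back through a struct acpi_buffer. In this tree the usual caller convention is to pass an ACPI_ALLOCATE_BUFFER request and release the result with acpi_os_free() afterwards, as the old pci_link code did. A stand-alone sketch of that fill-and-free convention follows, using stub types and an invented byte stream rather than the real ACPICA interfaces:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-ins for the ACPICA buffer convention used by the helper above. */
    typedef int acpi_status;
    #define AE_OK        0
    #define AE_NO_MEMORY 1

    struct acpi_buffer {
        size_t length;
        void *pointer;
    };

    /* A fake "method evaluation" that hands back an allocated byte stream,
     * the way acpi_rs_get_method_data() fills the caller's buffer.  The
     * bytes here are arbitrary placeholder data. */
    static acpi_status get_method_data(const char *path, struct acpi_buffer *ret)
    {
        static const unsigned char crs_bytes[] = { 0x22, 0x20, 0x00, 0x79, 0x00 };

        (void) path;
        ret->length = sizeof(crs_bytes);
        ret->pointer = malloc(ret->length);
        if (!ret->pointer)
            return AE_NO_MEMORY;
        memcpy(ret->pointer, crs_bytes, ret->length);
        return AE_OK;
    }

    int main(void)
    {
        struct acpi_buffer buffer = { 0, NULL };    /* caller-owned result */

        if (get_method_data("_CRS", &buffer) != AE_OK)
            return 1;
        printf("got %zu resource bytes\n", buffer.length);
        free(buffer.pointer);   /* caller frees, as with acpi_os_free() */
        return 0;
    }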
@@ -183,14 +183,21 @@ acpi_system_suspend(
 		status = acpi_enter_sleep_state(state);
 		break;

-	case ACPI_STATE_S2:
 #ifdef CONFIG_SOFTWARE_SUSPEND
+	case ACPI_STATE_S2:
 	case ACPI_STATE_S3:
 		do_suspend_lowlevel(0);
+		break;
 #endif
+	case ACPI_STATE_S4:
+		do_suspend_lowlevel_s4bios(0);
+		break;
+	default:
+		printk(KERN_WARNING PREFIX "don't know how to handle %d state.\n", state);
 		break;
 	}
 	local_irq_restore(flags);
+	printk(KERN_CRIT "Back to C!\n");

 	return status;
 }
@@ -211,21 +218,31 @@ acpi_suspend (
 	if (state < ACPI_STATE_S1 || state > ACPI_STATE_S5)
 		return AE_ERROR;

+	/* Since we handle S4OS via a different path (swsusp), give up if no s4bios. */
+	if (state == ACPI_STATE_S4 && !acpi_gbl_FACS->S4bios_f)
+		return AE_ERROR;
+
+	/*
+	 * TBD: S1 can be done without device_suspend. Make a CONFIG_XX
+	 * to handle the case where S1 fails without device_suspend.
+	 */
 	freeze_processes();	/* device_suspend needs processes to be stopped */

-	/* do we have a wakeup address for S2 and S3? */
-	if (state == ACPI_STATE_S2 || state == ACPI_STATE_S3) {
+	/* Here, we support only S4BIOS, thus we set the wakeup address */
+	/* S4OS is only supported for now via swsusp.. */
+	if (state == ACPI_STATE_S2 || state == ACPI_STATE_S3 ||
+	    state == ACPI_STATE_S4) {
 		if (!acpi_wakeup_address)
 			return AE_ERROR;
 		acpi_set_firmware_waking_vector((acpi_physical_address) acpi_wakeup_address);
 	}

+	acpi_enter_sleep_state_prep(state);
+
 	status = acpi_system_save_state(state);
 	if (!ACPI_SUCCESS(status))
 		return status;

-	acpi_enter_sleep_state_prep(state);
-
 	/* disable interrupts and flush caches */
 	ACPI_DISABLE_IRQS();
 	ACPI_FLUSH_CPU_CACHE();
@@ -237,8 +254,8 @@ acpi_suspend (
 	 * mode. So, we run these unconditionally to make sure we have a usable system
 	 * no matter what.
 	 */
-	acpi_system_restore_state(state);
 	acpi_leave_sleep_state(state);
+	acpi_system_restore_state(state);

 	/* make sure interrupts are enabled */
 	ACPI_ENABLE_IRQS();
@@ -268,6 +285,10 @@ static int __init acpi_sleep_init(void)
 			sleep_states[i] = 1;
 			printk(" S%d", i);
 		}
+		if (i == ACPI_STATE_S4 && acpi_gbl_FACS->S4bios_f) {
+			sleep_states[i] = 1;
+			printk(" S4bios");
+		}
 	}

 	printk(")\n");
......
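Note the two orderings the sleep patch changes: acpi_enter_sleep_state_prep() now runs before acpi_system_save_state(), and on wakeup acpi_leave_sleep_state() runs before acpi_system_restore_state(), so the resume path unwinds in the reverse order of suspend. A stub sketch of that symmetric sequence; the function names here are placeholders, not the kernel entry points:

    #include <stdio.h>

    /* Stubs standing in for the ACPI sleep entry points reordered above. */
    static void enter_sleep_state_prep(int s) { printf("prep S%d\n", s); }
    static int  save_state(int s)             { printf("save S%d\n", s); return 0; }
    static void enter_sleep_state(int s)      { printf("enter S%d\n", s); }
    static void leave_sleep_state(int s)      { printf("leave S%d\n", s); }
    static void restore_state(int s)          { printf("restore S%d\n", s); }

    /* Suspend runs prep -> save -> enter; the wake path mirrors it in
     * reverse: leave -> restore. */
    static int suspend(int state)
    {
        enter_sleep_state_prep(state);  /* now done before saving state */
        if (save_state(state))
            return -1;
        enter_sleep_state(state);
        /* ...sleep, then wake... */
        leave_sleep_state(state);       /* leave first, then restore */
        restore_state(state);
        return 0;
    }

    int main(void)
    {
        return suspend(3);  /* S3 */
    }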
@@ -27,8 +27,11 @@ static int acpi_system_sleep_seq_show(struct seq_file *seq, void *offset)
 	ACPI_FUNCTION_TRACE("acpi_system_sleep_seq_show");

 	for (i = 0; i <= ACPI_STATE_S5; i++) {
-		if (sleep_states[i])
+		if (sleep_states[i]) {
 			seq_printf(seq,"S%d ", i);
+			if (i == ACPI_STATE_S4 && acpi_gbl_FACS->S4bios_f)
+				seq_printf(seq, "S4bios ");
+		}
 	}

 	seq_puts(seq, "\n");
......
@@ -239,9 +239,8 @@ acpi_tb_convert_fadt1 (
 	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm1b_cnt_blk, local_fadt->pm1_cnt_len, local_fadt->V1_pm1b_cnt_blk);
 	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm2_cnt_blk, local_fadt->pm2_cnt_len, local_fadt->V1_pm2_cnt_blk);
 	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm_tmr_blk, local_fadt->pm_tm_len, local_fadt->V1_pm_tmr_blk);
-	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk, local_fadt->gpe0_blk_len, local_fadt->V1_gpe0_blk);
-	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk, local_fadt->gpe1_blk_len, local_fadt->V1_gpe1_blk);
+	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk, 0, local_fadt->V1_gpe0_blk);
+	ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk, 0, local_fadt->V1_gpe1_blk);
 }
@@ -314,15 +313,16 @@ acpi_tb_convert_fadt2 (
 	if (!(local_fadt->xgpe0_blk.address)) {
 		ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk,
-			local_fadt->gpe0_blk_len, local_fadt->V1_gpe0_blk);
+			0, local_fadt->V1_gpe0_blk);
 	}

 	if (!(local_fadt->xgpe1_blk.address)) {
 		ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk,
-			local_fadt->gpe1_blk_len, local_fadt->V1_gpe1_blk);
+			0, local_fadt->V1_gpe1_blk);
 	}
 }

 /*******************************************************************************
  *
  * FUNCTION:    acpi_tb_convert_table_fadt
......
@@ -645,11 +645,11 @@ acpi_ut_copy_simple_object (
 		/*
 		 * Allocate and copy the actual buffer if and only if:
-		 * 1) There is a valid buffer (length > 0)
+		 * 1) There is a valid buffer pointer
 		 * 2) The buffer is not static (not in an ACPI table) (in this case,
 		 *    the actual pointer was already copied above)
 		 */
-		if ((source_desc->buffer.length) &&
+		if ((source_desc->buffer.pointer) &&
 			(!(source_desc->common.flags & AOPOBJ_STATIC_POINTER))) {
 			dest_desc->buffer.pointer = ACPI_MEM_ALLOCATE (source_desc->buffer.length);
 			if (!dest_desc->buffer.pointer) {
@@ -665,11 +665,11 @@ acpi_ut_copy_simple_object (
 		/*
 		 * Allocate and copy the actual string if and only if:
-		 * 1) There is a valid string (length > 0)
+		 * 1) There is a valid string pointer
 		 * 2) The string is not static (not in an ACPI table) (in this case,
 		 *    the actual pointer was already copied above)
 		 */
-		if ((source_desc->string.length) &&
+		if ((source_desc->string.pointer) &&
 			(!(source_desc->common.flags & AOPOBJ_STATIC_POINTER))) {
 			dest_desc->string.pointer = ACPI_MEM_ALLOCATE ((acpi_size) source_desc->string.length + 1);
 			if (!dest_desc->string.pointer) {
......
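The utcopy hunks switch the deep-copy guard from "length > 0" to "pointer != NULL", so a valid zero-length buffer or string still gets its own allocation and the copy never dereferences a NULL source. A generic stand-alone sketch of that guard, not the real ACPICA object layout:

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    #define FLAG_STATIC_POINTER 0x1 /* payload lives in a table; don't duplicate */

    struct buf_obj {
        unsigned flags;
        size_t length;
        unsigned char *pointer;
    };

    /* Shallow-copy first, then duplicate the payload only when there is a
     * valid pointer and the data is not static, the same condition the
     * patched acpi_ut_copy_simple_object() uses. */
    static int copy_buf_obj(struct buf_obj *dest, const struct buf_obj *src)
    {
        *dest = *src;   /* static payloads keep the shared pointer */

        if (src->pointer && !(src->flags & FLAG_STATIC_POINTER)) {
            /* malloc(0) is implementation-defined, so allocate at least 1 */
            dest->pointer = malloc(src->length ? src->length : 1);
            if (!dest->pointer)
                return -1;
            memcpy(dest->pointer, src->pointer, src->length);
        }
        return 0;
    }

    int main(void)
    {
        unsigned char payload[] = "IRQ";
        struct buf_obj src = { 0, sizeof(payload), payload };
        struct buf_obj dst;

        assert(copy_buf_obj(&dst, &src) == 0);
        assert(dst.pointer != src.pointer); /* a deep copy was taken */
        free(dst.pointer);
        return 0;
    }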
@@ -30,3 +30,4 @@ obj-$(CONFIG_ALIM7101_WDT) += alim7101_wdt.o
 obj-$(CONFIG_SC1200_WDT) += sc1200wdt.o
 obj-$(CONFIG_WAFER_WDT) += wafer5823wdt.o
 obj-$(CONFIG_CPU5_WDT) += cpu5wdt.o
+obj-$(CONFIG_AMD7XX_TCO) += amd7xx_tco.o