Commit 4406c56d authored by Linus Torvalds

Merge branch 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6

* 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6: (75 commits)
  PCI hotplug: clean up acpi_run_hpp()
  PCI hotplug: acpiphp: use generic pci_configure_slot()
  PCI hotplug: shpchp: use generic pci_configure_slot()
  PCI hotplug: pciehp: use generic pci_configure_slot()
  PCI hotplug: add pci_configure_slot()
  PCI hotplug: clean up acpi_get_hp_params_from_firmware() interface
  PCI hotplug: acpiphp: don't cache hotplug_params in acpiphp_bridge
  PCI hotplug: acpiphp: remove superfluous _HPP/_HPX evaluation
  PCI: Clear saved_state after the state has been restored
  PCI PM: Return error codes from pci_pm_resume()
  PCI: use dev_printk in quirk messages
  PCI / PCIe portdrv: Fix pcie_portdrv_slot_reset()
  PCI Hotplug: convert acpi_pci_detect_ejectable() to take an acpi_handle
  PCI Hotplug: acpiphp: find bridges the easy way
  PCI: pcie portdrv: remove unused variable
  PCI / ACPI PM: Propagate wake-up enable for devices w/o ACPI support
  ACPI PM: Replace wakeup.prepared with reference counter
  PCI PM: Introduce device flag wakeup_prepared
  PCI / ACPI PM: Rework some debug messages
  PCI PM: Simplify PCI wake-up code
  ...

Fixed up conflict in arch/powerpc/kernel/pci_64.c due to OF device tree
scanning having been moved and merged for the 32- and 64-bit cases.  The
'needs_freset' initialization added in 6e19314c ("PCI/powerpc: support
PCIe fundamental reset") is now in arch/powerpc/kernel/pci_of_scan.c.
parents 6b7b352f 5e3573db
......@@ -84,6 +84,16 @@ Description:
from this part of the device tree.
Depends on CONFIG_HOTPLUG.
What: /sys/bus/pci/devices/.../reset
Date: July 2009
Contact: Michael S. Tsirkin <mst@redhat.com>
Description:
Some devices allow an individual function to be reset
without affecting other functions in the same device.
For devices that have this support, a file named reset
will be present in sysfs. Writing 1 to this file
will perform reset.
What: /sys/bus/pci/devices/.../vpd
Date: February 2008
Contact: Ben Hutchings <bhutchings@solarflare.com>
......
VGA Arbiter
===========
Graphics devices are accessed through ranges in I/O or memory space. While most
modern devices allow relocation of such ranges, some "Legacy" VGA devices
implemented on PCI will typically have the same "hard-decoded" addresses as
they did on ISA. For more details see "PCI Bus Binding to IEEE Std 1275-1994
Standard for Boot (Initialization Configuration) Firmware Revision 2.1"
Section 7, Legacy Devices.
The Resource Access Control (RAC) module inside the X server [0] existed for
the legacy VGA arbitration task (besides other bus management tasks) when more
than one legacy device co-exists on the same machine. But problems arise when
these devices are accessed by different userspace clients (e.g. two X servers
running in parallel): their address assignments conflict. Moreover, ideally,
being a userspace application, it is not the role of the X server to control
bus resources. Therefore an arbitration scheme outside of the X server is
needed to control the sharing of these resources. This document introduces
the operation of the VGA arbiter implemented for the Linux kernel.
----------------------------------------------------------------------------
I. Details and Theory of Operation
I.1 vgaarb
I.2 libpciaccess
I.3 xf86VGAArbiter (X server implementation)
II. Credits
III. References
I. Details and Theory of Operation
==================================
I.1 vgaarb
----------
The vgaarb is a module of the Linux kernel. When it is initially loaded, it
scans all PCI devices and adds the VGA devices to the arbitration. The
arbiter then enables or disables the decoding of legacy VGA accesses on the
different devices. Devices which do not want or need to use the arbiter may
explicitly tell it so by calling vga_set_legacy_decoding().
The kernel exports a char device interface (/dev/vga_arbiter) to the clients,
which has the following semantics:
open : open user instance of the arbiter. By default, it's attached to
the default VGA device of the system.
close : close user instance. Release locks made by the user
read : return a string indicating the status of the target like:
"<card_ID>,decodes=<io_state>,owns=<io_state>,locks=<io_state> (ic,mc)"
An IO state string is of the form {io,mem,io+mem,none}, mc and
ic are respectively mem and io lock counts (for debugging/
diagnostic only). "decodes" indicates what the card currently
decodes, "owns" indicates what is currently enabled on it, and
"locks" indicates what is locked by this card. If the card is
unplugged, card_ID reads "invalid" and an -ENODEV error is
returned for any command until a new card is targeted.
write : write a command to the arbiter. List of commands:
target <card_ID> : switch target to card <card_ID> (see below)
lock <io_state> : acquires locks on target ("none" is an invalid io_state)
trylock <io_state> : non-blocking acquire locks on target (returns EBUSY if
unsuccessful)
unlock <io_state> : release locks on target
unlock all : release all locks on target held by this user (not
implemented yet)
decodes <io_state> : set the legacy decoding attributes for the card
poll : event if something changes on any card (not just the
target)
card_ID is of the form "PCI:domain:bus:dev.fn". It can be set to "default"
to go back to the system default card (TODO: not implemented yet). Currently,
only PCI is supported as a prefix, but the userland API may support other bus
types in the future, even if the current kernel implementation doesn't.
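To make these semantics concrete, here is a minimal, purely illustrative
user-space sketch (it is not part of the kernel tree) that talks to
/dev/vga_arbiter using the command strings documented above; it keeps the
default target card and does almost no error handling:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char status[256];
	ssize_t len;
	int fd = open("/dev/vga_arbiter", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vga_arbiter");
		return 1;
	}

	/* Lock legacy IO and memory on the current target (default VGA card). */
	if (write(fd, "lock io+mem", strlen("lock io+mem")) < 0)
		perror("lock");

	/* Status string: "<card_ID>,decodes=...,owns=...,locks=... (ic,mc)" */
	len = read(fd, status, sizeof(status) - 1);
	if (len > 0) {
		status[len] = '\0';
		printf("arbiter status: %s\n", status);
	}

	/* ... touch the VGA legacy ranges here ... */

	if (write(fd, "unlock io+mem", strlen("unlock io+mem")) < 0)
		perror("unlock");

	close(fd);
	return 0;
}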
Note about locks:
The driver keeps track of which user has which locks on which card. It
supports stacking, like the kernel one. This complicates the implementation
a bit, but makes the arbiter more tolerant of user-space problems and able
to clean up properly in all cases when a process dies.
Currently, a maximum of 16 cards can have locks simultaneously issued from
user space for a given user (file descriptor instance) of the arbiter.
For hot-{un,}plugged devices there is a hook - pci_notify() - that notifies
the arbiter when devices are added to or removed from the system, so that
they are automatically added to or removed from the arbitration as well.
There is also an in-kernel API to the arbiter for DRM, vgacon and other
clients which may use it.
I.2 libpciaccess
----------------
To use the VGA arbiter char device, an API was implemented inside the
libpciaccess library. One field was added to struct pci_device (one per
device on the system):
/* the type of resource decoded by the device */
int vgaarb_rsrc;
Besides that, the following fields were added to struct pci_system:
int vgaarb_fd;
int vga_count;
struct pci_device *vga_target;
struct pci_device *vga_default_dev;
vga_count is mainly used to keep track of how many cards are being
arbitrated, so for instance if there is only one, it can escape the scheme
entirely.
These functions below acquire VGA resources for the given card and mark those
resources as locked. If the resources requested are "normal" (and not legacy)
resources, the arbiter will first check whether the card is doing legacy
decoding for that type of resource. If yes, the lock is "converted" into a
legacy resource lock. The arbiter will first look for all VGA cards that
might conflict and disable their IOs and/or Memory access, including VGA
forwarding on P2P bridges if necessary, so that the requested resources can
be used. Then, the card is marked as locking these resources and the IO and/or
Memory access is enabled on the card (including VGA forwarding on parent
P2P bridges if any). In the case of vga_arb_lock(), the function will block
if some conflicting card is already locking one of the required resources (or
any resource on a different bus segment, since P2P bridges don't differentiate
VGA memory and IO afaik). If the card already owns the resources, the function
succeeds. vga_arb_trylock() will return (-EBUSY) instead of blocking. Nested
calls are supported (a per-resource counter is maintained).
Set the target device of this client.
int pci_device_vgaarb_set_target (struct pci_device *dev);
For instance, on x86, if two devices on the same bus want to lock different
resources, both will succeed (lock). If the devices are on different buses
and try to lock different resources, only the first one to try succeeds.
int pci_device_vgaarb_lock (void);
int pci_device_vgaarb_trylock (void);
Unlock resources of device.
int pci_device_vgaarb_unlock (void);
Indicates to the arbiter whether the card decodes legacy VGA IO, legacy VGA
memory, both, or none. All cards default to both; the card driver (fbdev for
example) should tell the arbiter if it has disabled legacy decoding, so that
the card can be left out of the arbitration process (and it is then safe to
take interrupts at any time).
int pci_device_vgaarb_decodes (int new_vgaarb_rsrc);
Connects to the arbiter device and allocates the structure.
int pci_device_vgaarb_init (void);
Closes the connection.
void pci_device_vgaarb_fini (void);
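As a rough usage sketch only (this sequence is not taken from libpciaccess
itself), a client could combine the entry points above with the standard
libpciaccess calls pci_system_init(), pci_device_find_by_slot() and
pci_system_cleanup(); the bus/device/function below is an arbitrary example:

#include <pciaccess.h>
#include <stdio.h>

int main(void)
{
	struct pci_device *dev;

	if (pci_system_init() != 0 || pci_device_vgaarb_init() != 0) {
		fprintf(stderr, "cannot reach the VGA arbiter\n");
		return 1;
	}

	/* Pick the card to drive: domain 0, bus 1, device 0, function 0 here. */
	dev = pci_device_find_by_slot(0, 1, 0, 0);
	if (dev)
		pci_device_vgaarb_set_target(dev);

	/* Block until the target's legacy resources are ours. */
	pci_device_vgaarb_lock();

	/* ... access the legacy VGA IO/memory ranges ... */

	pci_device_vgaarb_unlock();

	pci_device_vgaarb_fini();
	pci_system_cleanup();
	return 0;
}

A driver that never touches the legacy ranges would instead call
pci_device_vgaarb_decodes() so the card drops out of the arbitration
entirely.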
I.3 xf86VGAArbiter (X server implementation)
--------------------------------------------
(TODO)
The X server basically wraps all the functions that touch VGA registers somehow.
II. Credits
===========
Benjamin Herrenschmidt (IBM?) started this work when he discussed such a
design with the Xorg community in 2005 [1, 2]. At the end of 2007, Paulo
Zanoni and Tiago Vignatti (both of C3SL/Federal University of Paraná)
continued his work, enhancing the kernel code to work as a kernel module,
and also implemented the user space side [3]. Now (2009) Tiago Vignatti and
Dave Airlie finally put this work into shape and queued it to Jesse Barnes'
PCI tree.
III. References
===============
[0] http://cgit.freedesktop.org/xorg/xserver/commit/?id=4b42448a2388d40f257774fbffdccaea87bd0347
[1] http://lists.freedesktop.org/archives/xorg/2005-March/006663.html
[2] http://lists.freedesktop.org/archives/xorg/2005-March/006745.html
[3] http://lists.freedesktop.org/archives/xorg/2007-October/029507.html
......@@ -52,7 +52,6 @@ struct pci_controller {
bus numbers. */
#define pcibios_assign_all_busses() 1
#define pcibios_scan_all_fns(a, b) 0
#define PCIBIOS_MIN_IO alpha_mv.min_io_address
#define PCIBIOS_MIN_MEM alpha_mv.min_mem_address
......
......@@ -6,8 +6,6 @@
#include <mach/hardware.h> /* for PCIBIOS_MIN_* */
#define pcibios_scan_all_fns(a, b) 0
#ifdef CONFIG_PCI_HOST_ITE8152
/* ITE bridge requires setting latency timer to avoid early bus access
termination by PIC bus master devices
......
......@@ -86,7 +86,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
struct pci_bus *bus;
struct pci_dev *dev;
int idx;
struct resource *r, *pr;
struct resource *r;
/* Depth-First Search on bus tree */
for (ln=bus_list->next; ln != bus_list; ln=ln->next) {
......@@ -96,8 +96,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
r = &dev->resource[idx];
if (!r->start)
continue;
pr = pci_find_parent_resource(dev, r);
if (!pr || request_resource(pr, r) < 0)
if (pci_claim_resource(dev, idx) < 0)
printk(KERN_ERR "PCI: Cannot allocate resource region %d of bridge %s\n", idx, pci_name(dev));
}
}
......@@ -110,7 +109,7 @@ static void __init pcibios_allocate_resources(int pass)
struct pci_dev *dev = NULL;
int idx, disabled;
u16 command;
struct resource *r, *pr;
struct resource *r;
for_each_pci_dev(dev) {
pci_read_config_word(dev, PCI_COMMAND, &command);
......@@ -127,8 +126,7 @@ static void __init pcibios_allocate_resources(int pass)
if (pass == disabled) {
DBG("PCI: Resource %08lx-%08lx (f=%lx, d=%d, p=%d)\n",
r->start, r->end, r->flags, disabled, pass);
pr = pci_find_parent_resource(dev, r);
if (!pr || request_resource(pr, r) < 0) {
if (pci_claim_resource(dev, idx) < 0) {
printk(KERN_ERR "PCI: Cannot allocate resource region %d of device %s\n", idx, pci_name(dev));
/* We'll assign a new address later */
r->end -= r->start;
......
......@@ -8,7 +8,6 @@
*/
#define pcibios_assign_all_busses() 0
#define pcibios_scan_all_fns(a, b) 0
static inline void pcibios_set_master(struct pci_dev *dev)
{
......
......@@ -17,7 +17,6 @@
* loader.
*/
#define pcibios_assign_all_busses() 0
#define pcibios_scan_all_fns(a, b) 0
#define PCIBIOS_MIN_IO 0x1000
#define PCIBIOS_MIN_MEM 0x10000000
......@@ -135,7 +134,18 @@ extern void pcibios_resource_to_bus(struct pci_dev *dev,
extern void pcibios_bus_to_resource(struct pci_dev *dev,
struct resource *res, struct pci_bus_region *region);
#define pcibios_scan_all_fns(a, b) 0
static inline struct resource *
pcibios_select_root(struct pci_dev *pdev, struct resource *res)
{
struct resource *root = NULL;
if (res->flags & IORESOURCE_IO)
root = &ioport_resource;
if (res->flags & IORESOURCE_MEM)
root = &iomem_resource;
return root;
}
#define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
......
......@@ -65,8 +65,6 @@ extern int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin);
extern unsigned int pcibios_assign_all_busses(void);
#define pcibios_scan_all_fns(a, b) 0
extern unsigned long PCIBIOS_MIN_IO;
extern unsigned long PCIBIOS_MIN_MEM;
......
......@@ -101,7 +101,18 @@ extern void pcibios_bus_to_resource(struct pci_dev *dev,
struct resource *res,
struct pci_bus_region *region);
#define pcibios_scan_all_fns(a, b) 0
static inline struct resource *
pcibios_select_root(struct pci_dev *pdev, struct resource *res)
{
struct resource *root = NULL;
if (res->flags & IORESOURCE_IO)
root = &ioport_resource;
if (res->flags & IORESOURCE_MEM)
root = &iomem_resource;
return root;
}
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
......
......@@ -233,7 +233,6 @@ static inline void pcibios_register_hba(struct pci_hba_data *x)
* rp7420/8420 boxes and then revisit this issue.
*/
#define pcibios_assign_all_busses() (1)
#define pcibios_scan_all_fns(a, b) (0)
#define PCIBIOS_MIN_IO 0x10
#define PCIBIOS_MIN_MEM 0x1000 /* NBPG - but pci/setup-res.c dies */
......
......@@ -45,7 +45,6 @@ struct pci_dev;
*/
#define pcibios_assign_all_busses() \
(ppc_pci_has_flag(PPC_PCI_REASSIGN_ALL_BUS))
#define pcibios_scan_all_fns(a, b) 0
static inline void pcibios_set_master(struct pci_dev *dev)
{
......
......@@ -139,6 +139,7 @@ struct pci_dev *of_create_pci_dev(struct device_node *node,
dev->dev.bus = &pci_bus_type;
dev->devfn = devfn;
dev->multifunction = 0; /* maybe a lie? */
dev->needs_freset = 0; /* pcie fundamental reset required */
dev->vendor = get_int_prop(node, "vendor-id", 0xffff);
dev->device = get_int_prop(node, "device-id", 0xffff);
......
......@@ -744,7 +744,15 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state stat
static void __rtas_set_slot_reset(struct pci_dn *pdn)
{
rtas_pci_slot_reset (pdn, 1);
struct pci_dev *dev = pdn->pcidev;
/* Determine type of EEH reset required by device,
* default hot reset or fundamental reset
*/
if (dev->needs_freset)
rtas_pci_slot_reset(pdn, 3);
else
rtas_pci_slot_reset(pdn, 1);
/* The PCI bus requires that the reset be held high for at least
* a 100 milliseconds. We wait a bit longer 'just in case'. */
......
......@@ -10,7 +10,6 @@
or architectures with incomplete PCI setup by the loader */
#define pcibios_assign_all_busses() 1
#define pcibios_scan_all_fns(a, b) 0
/*
* A board can define one or more PCI channels that represent built-in (or
......
......@@ -10,7 +10,6 @@
* or architectures with incomplete PCI setup by the loader.
*/
#define pcibios_assign_all_busses() 0
#define pcibios_scan_all_fns(a, b) 0
#define PCIBIOS_MIN_IO 0UL
#define PCIBIOS_MIN_MEM 0UL
......
......@@ -10,7 +10,6 @@
* or architectures with incomplete PCI setup by the loader.
*/
#define pcibios_assign_all_busses() 0
#define pcibios_scan_all_fns(a, b) 0
#define PCIBIOS_MIN_IO 0UL
#define PCIBIOS_MIN_MEM 0UL
......
......@@ -2,6 +2,5 @@
#define __UM_PCI_H
#define PCI_DMA_BUS_IS_PHYS (1)
#define pcibios_scan_all_fns(a, b) 0
#endif
......@@ -48,7 +48,6 @@ extern unsigned int pcibios_assign_all_busses(void);
#else
#define pcibios_assign_all_busses() 0
#endif
#define pcibios_scan_all_fns(a, b) 0
extern unsigned long pci_mem_start;
#define PCIBIOS_MIN_IO 0x1000
......
......@@ -225,10 +225,8 @@ static __init int iommu_setup(char *p)
if (!strncmp(p, "soft", 4))
swiotlb = 1;
#endif
if (!strncmp(p, "pt", 2)) {
if (!strncmp(p, "pt", 2))
iommu_pass_through = 1;
return 1;
}
gart_parse_options(p);
......
......@@ -508,7 +508,7 @@ static void __init quirk_amd_nb_node(struct pci_dev *dev)
pci_read_config_dword(nb_ht, 0x60, &val);
set_dev_node(&dev->dev, val & 7);
pci_dev_put(dev);
pci_dev_put(nb_ht);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB,
......
......@@ -15,63 +15,6 @@
* also get peer root bus resource for io,mmio
*/
#ifdef CONFIG_NUMA
#define BUS_NR 256
#ifdef CONFIG_X86_64
static int mp_bus_to_node[BUS_NR];
void set_mp_bus_to_node(int busnum, int node)
{
if (busnum >= 0 && busnum < BUS_NR)
mp_bus_to_node[busnum] = node;
}
int get_mp_bus_to_node(int busnum)
{
int node = -1;
if (busnum < 0 || busnum > (BUS_NR - 1))
return node;
node = mp_bus_to_node[busnum];
/*
* let numa_node_id to decide it later in dma_alloc_pages
* if there is no ram on that node
*/
if (node != -1 && !node_online(node))
node = -1;
return node;
}
#else /* CONFIG_X86_32 */
static unsigned char mp_bus_to_node[BUS_NR];
void set_mp_bus_to_node(int busnum, int node)
{
if (busnum >= 0 && busnum < BUS_NR)
mp_bus_to_node[busnum] = (unsigned char) node;
}
int get_mp_bus_to_node(int busnum)
{
int node;
if (busnum < 0 || busnum > (BUS_NR - 1))
return 0;
node = mp_bus_to_node[busnum];
return node;
}
#endif /* CONFIG_X86_32 */
#endif /* CONFIG_NUMA */
#ifdef CONFIG_X86_64
/*
......@@ -301,11 +244,6 @@ static int __init early_fill_mp_bus_info(void)
u64 val;
u32 address;
#ifdef CONFIG_NUMA
for (i = 0; i < BUS_NR; i++)
mp_bus_to_node[i] = -1;
#endif
if (!early_pci_allowed())
return -1;
......@@ -346,7 +284,7 @@ static int __init early_fill_mp_bus_info(void)
node = (reg >> 4) & 0x07;
#ifdef CONFIG_NUMA
for (j = min_bus; j <= max_bus; j++)
mp_bus_to_node[j] = (unsigned char) node;
set_mp_bus_to_node(j, node);
#endif
link = (reg >> 8) & 0x03;
......
......@@ -600,3 +600,72 @@ struct pci_bus * __devinit pci_scan_bus_with_sysdata(int busno)
{
return pci_scan_bus_on_node(busno, &pci_root_ops, -1);
}
/*
* NUMA info for PCI busses
*
* Early arch code is responsible for filling in reasonable values here.
* A node id of "-1" means "use current node". In other words, if a bus
* has a -1 node id, it's not tightly coupled to any particular chunk
* of memory (as is the case on some Nehalem systems).
*/
#ifdef CONFIG_NUMA
#define BUS_NR 256
#ifdef CONFIG_X86_64
static int mp_bus_to_node[BUS_NR] = {
[0 ... BUS_NR - 1] = -1
};
void set_mp_bus_to_node(int busnum, int node)
{
if (busnum >= 0 && busnum < BUS_NR)
mp_bus_to_node[busnum] = node;
}
int get_mp_bus_to_node(int busnum)
{
int node = -1;
if (busnum < 0 || busnum > (BUS_NR - 1))
return node;
node = mp_bus_to_node[busnum];
/*
* let numa_node_id to decide it later in dma_alloc_pages
* if there is no ram on that node
*/
if (node != -1 && !node_online(node))
node = -1;
return node;
}
#else /* CONFIG_X86_32 */
static unsigned char mp_bus_to_node[BUS_NR] = {
[0 ... BUS_NR - 1] = -1
};
void set_mp_bus_to_node(int busnum, int node)
{
if (busnum >= 0 && busnum < BUS_NR)
mp_bus_to_node[busnum] = (unsigned char) node;
}
int get_mp_bus_to_node(int busnum)
{
int node;
if (busnum < 0 || busnum > (BUS_NR - 1))
return 0;
node = mp_bus_to_node[busnum];
return node;
}
#endif /* CONFIG_X86_32 */
#endif /* CONFIG_NUMA */
......@@ -61,20 +61,6 @@ static struct acpi_driver acpi_pci_root_driver = {
},
};
struct acpi_pci_root {
struct list_head node;
struct acpi_device *device;
struct pci_bus *bus;
u16 segment;
u8 bus_nr;
u32 osc_support_set; /* _OSC state of support bits */
u32 osc_control_set; /* _OSC state of control bits */
u32 osc_control_qry; /* the latest _OSC query result */
u32 osc_queried:1; /* has _OSC control been queried? */
};
static LIST_HEAD(acpi_pci_roots);
static struct acpi_pci_driver *sub_driver;
......@@ -317,7 +303,7 @@ static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
return status;
}
static struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
{
struct acpi_pci_root *root;
......@@ -327,6 +313,7 @@ static struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
}
return NULL;
}
EXPORT_SYMBOL_GPL(acpi_pci_find_root);
struct acpi_handle_node {
struct list_head node;
......
......@@ -44,6 +44,8 @@
#include <acpi/acpi_bus.h>
#include <acpi/acpi_drivers.h>
#include "sleep.h"
#define _COMPONENT ACPI_POWER_COMPONENT
ACPI_MODULE_NAME("power");
#define ACPI_POWER_CLASS "power_resource"
......@@ -361,17 +363,15 @@ int acpi_device_sleep_wake(struct acpi_device *dev,
*/
int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
{
int i, err;
int i, err = 0;
if (!dev || !dev->wakeup.flags.valid)
return -EINVAL;
/*
* Do not execute the code below twice in a row without calling
* acpi_disable_wakeup_device_power() in between for the same device
*/
if (dev->wakeup.flags.prepared)
return 0;
mutex_lock(&acpi_device_lock);
if (dev->wakeup.prepare_count++)
goto out;
/* Open power resource */
for (i = 0; i < dev->wakeup.resources.count; i++) {
......@@ -379,7 +379,8 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
if (ret) {
printk(KERN_ERR PREFIX "Transition power state\n");
dev->wakeup.flags.valid = 0;
return -ENODEV;
err = -ENODEV;
goto err_out;
}
}
......@@ -388,9 +389,13 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
* in arbitrary power state afterwards.
*/
err = acpi_device_sleep_wake(dev, 1, sleep_state, 3);
if (!err)
dev->wakeup.flags.prepared = 1;
err_out:
if (err)
dev->wakeup.prepare_count = 0;
out:
mutex_unlock(&acpi_device_lock);
return err;
}
......@@ -402,35 +407,42 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
*/
int acpi_disable_wakeup_device_power(struct acpi_device *dev)
{
int i, ret;
int i, err = 0;
if (!dev || !dev->wakeup.flags.valid)
return -EINVAL;
mutex_lock(&acpi_device_lock);
if (--dev->wakeup.prepare_count > 0)
goto out;
/*
* Do not execute the code below twice in a row without calling
* acpi_enable_wakeup_device_power() in between for the same device
* Executing the code below even if prepare_count is already zero when
* the function is called may be useful, for example for initialisation.
*/
if (!dev->wakeup.flags.prepared)
return 0;
if (dev->wakeup.prepare_count < 0)
dev->wakeup.prepare_count = 0;
dev->wakeup.flags.prepared = 0;
ret = acpi_device_sleep_wake(dev, 0, 0, 0);
if (ret)
return ret;
err = acpi_device_sleep_wake(dev, 0, 0, 0);
if (err)
goto out;
/* Close power resource */
for (i = 0; i < dev->wakeup.resources.count; i++) {
ret = acpi_power_off_device(dev->wakeup.resources.handles[i], dev);
int ret = acpi_power_off_device(
dev->wakeup.resources.handles[i], dev);
if (ret) {
printk(KERN_ERR PREFIX "Transition power state\n");
dev->wakeup.flags.valid = 0;
return -ENODEV;
err = -ENODEV;
goto out;
}
}
return ret;
out:
mutex_unlock(&acpi_device_lock);
return err;
}
/* --------------------------------------------------------------------------
......
......@@ -781,6 +781,7 @@ static int acpi_bus_get_wakeup_device_flags(struct acpi_device *device)
kfree(buffer.pointer);
device->wakeup.flags.valid = 1;
device->wakeup.prepare_count = 0;
/* Call _PSW/_DSW object to disable its ability to wake the sleeping
* system for the ACPI device with the _PRW object.
* The _PSW object is depreciated in ACPI 3.0 and is replaced by _DSW.
......
......@@ -689,19 +689,25 @@ int acpi_pm_device_sleep_wake(struct device *dev, bool enable)
{
acpi_handle handle;
struct acpi_device *adev;
int error;
if (!device_may_wakeup(dev))
if (!device_can_wakeup(dev))
return -EINVAL;
handle = DEVICE_ACPI_HANDLE(dev);
if (!handle || ACPI_FAILURE(acpi_bus_get_device(handle, &adev))) {
printk(KERN_DEBUG "ACPI handle has no context!\n");
dev_dbg(dev, "ACPI handle has no context in %s!\n", __func__);
return -ENODEV;
}
return enable ?
error = enable ?
acpi_enable_wakeup_device_power(adev, acpi_target_sleep_state) :
acpi_disable_wakeup_device_power(adev);
if (!error)
dev_info(dev, "wake-up capability %s by ACPI\n",
enable ? "enabled" : "disabled");
return error;
}
#endif
......
......@@ -68,7 +68,7 @@ void acpi_enable_wakeup_device(u8 sleep_state)
/* If users want to disable run-wake GPE,
* we only disable it for wake and leave it for runtime
*/
if ((!dev->wakeup.state.enabled && !dev->wakeup.flags.prepared)
if ((!dev->wakeup.state.enabled && !dev->wakeup.prepare_count)
|| sleep_state > (u32) dev->wakeup.sleep_state) {
if (dev->wakeup.flags.run_wake) {
/* set_gpe_type will disable GPE, leave it like that */
......@@ -100,7 +100,7 @@ void acpi_disable_wakeup_device(u8 sleep_state)
if (!dev->wakeup.flags.valid)
continue;
if ((!dev->wakeup.state.enabled && !dev->wakeup.flags.prepared)
if ((!dev->wakeup.state.enabled && !dev->wakeup.prepare_count)
|| sleep_state > (u32) dev->wakeup.sleep_state) {
if (dev->wakeup.flags.run_wake) {
acpi_set_gpe_type(dev->wakeup.gpe_device,
......
obj-y += drm/
obj-y += drm/ vga/
config VGA_ARB
bool "VGA Arbitration" if EMBEDDED
default y
depends on PCI
help
Some "legacy" VGA devices implemented on PCI typically have the same
hard-decoded addresses as they did on ISA. When multiple PCI devices
are accessed at the same time they need some kind of coordination. Please
see Documentation/vgaarbiter.txt for more details. Select this to
enable VGA arbiter.
obj-$(CONFIG_VGA_ARB) += vgaarb.o
......@@ -8,6 +8,9 @@ obj-y += access.o bus.o probe.o remove.o pci.o quirks.o \
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_SYSFS) += slot.o
obj-$(CONFIG_PCI_LEGACY) += legacy.o
CFLAGS_legacy.o += -Wno-deprecated-declarations
# Build PCI Express stuff if needed
obj-$(CONFIG_PCIEPORTBUS) += pcie/
......
......@@ -22,7 +22,7 @@ obj-$(CONFIG_HOTPLUG_PCI_SGI) += sgi_hotplug.o
# Link this last so it doesn't claim devices that have a real hotplug driver
obj-$(CONFIG_HOTPLUG_PCI_FAKE) += fakephp.o
pci_hotplug-objs := pci_hotplug_core.o
pci_hotplug-objs := pci_hotplug_core.o pcihp_slot.o
ifdef CONFIG_HOTPLUG_PCI_CPCI
pci_hotplug-objs += cpci_hotplug_core.o \
......
......@@ -41,7 +41,6 @@
#define warn(format, arg...) printk(KERN_WARNING "%s: " format , MY_NAME , ## arg)
#define METHOD_NAME__SUN "_SUN"
#define METHOD_NAME__HPP "_HPP"
#define METHOD_NAME_OSHP "OSHP"
static int debug_acpi;
......@@ -215,80 +214,41 @@ acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx)
static acpi_status
acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp)
{
acpi_status status;
u8 nui[4];
struct acpi_buffer ret_buf = { 0, NULL};
struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *ext_obj, *package;
int i, len = 0;
acpi_get_name(handle, ACPI_FULL_PATHNAME, &string);
acpi_status status;
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *package, *fields;
int i;
/* Clear the return buffer with zeros */
memset(hpp, 0, sizeof(struct hotplug_params));
/* get _hpp */
status = acpi_evaluate_object(handle, METHOD_NAME__HPP, NULL, &ret_buf);
switch (status) {
case AE_BUFFER_OVERFLOW:
ret_buf.pointer = kmalloc (ret_buf.length, GFP_KERNEL);
if (!ret_buf.pointer) {
printk(KERN_ERR "%s:%s alloc for _HPP fail\n",
__func__, (char *)string.pointer);
kfree(string.pointer);
return AE_NO_MEMORY;
}
status = acpi_evaluate_object(handle, METHOD_NAME__HPP,
NULL, &ret_buf);
if (ACPI_SUCCESS(status))
break;
default:
if (ACPI_FAILURE(status)) {
pr_debug("%s:%s _HPP fail=0x%x\n", __func__,
(char *)string.pointer, status);
kfree(string.pointer);
return status;
}
}
status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer);
if (ACPI_FAILURE(status))
return status;
ext_obj = (union acpi_object *) ret_buf.pointer;
if (ext_obj->type != ACPI_TYPE_PACKAGE) {
printk(KERN_ERR "%s:%s _HPP obj not a package\n", __func__,
(char *)string.pointer);
package = (union acpi_object *) buffer.pointer;
if (package->type != ACPI_TYPE_PACKAGE ||
package->package.count != 4) {
status = AE_ERROR;
goto free_and_return;
goto exit;
}
len = ext_obj->package.count;
package = (union acpi_object *) ret_buf.pointer;
for ( i = 0; (i < len) || (i < 4); i++) {
ext_obj = (union acpi_object *) &package->package.elements[i];
switch (ext_obj->type) {
case ACPI_TYPE_INTEGER:
nui[i] = (u8)ext_obj->integer.value;
break;
default:
printk(KERN_ERR "%s:%s _HPP obj type incorrect\n",
__func__, (char *)string.pointer);
fields = package->package.elements;
for (i = 0; i < 4; i++) {
if (fields[i].type != ACPI_TYPE_INTEGER) {
status = AE_ERROR;
goto free_and_return;
goto exit;
}
}
hpp->t0 = &hpp->type0_data;
hpp->t0->cache_line_size = nui[0];
hpp->t0->latency_timer = nui[1];
hpp->t0->enable_serr = nui[2];
hpp->t0->enable_perr = nui[3];
pr_debug(" _HPP: cache_line_size=0x%x\n", hpp->t0->cache_line_size);
pr_debug(" _HPP: latency timer =0x%x\n", hpp->t0->latency_timer);
pr_debug(" _HPP: enable SERR =0x%x\n", hpp->t0->enable_serr);
pr_debug(" _HPP: enable PERR =0x%x\n", hpp->t0->enable_perr);
hpp->t0->revision = 1;
hpp->t0->cache_line_size = fields[0].integer.value;
hpp->t0->latency_timer = fields[1].integer.value;
hpp->t0->enable_serr = fields[2].integer.value;
hpp->t0->enable_perr = fields[3].integer.value;
free_and_return:
kfree(string.pointer);
kfree(ret_buf.pointer);
exit:
kfree(buffer.pointer);
return status;
}
......@@ -322,20 +282,19 @@ static acpi_status acpi_run_oshp(acpi_handle handle)
return status;
}
/* acpi_get_hp_params_from_firmware
/* pci_get_hp_params
*
* @bus - the pci_bus of the bus on which the device is newly added
* @dev - the pci_dev for which we want parameters
* @hpp - allocated by the caller
*/
acpi_status acpi_get_hp_params_from_firmware(struct pci_bus *bus,
struct hotplug_params *hpp)
int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
{
acpi_status status = AE_NOT_FOUND;
acpi_status status;
acpi_handle handle, phandle;
struct pci_bus *pbus;
handle = NULL;
for (pbus = bus; pbus; pbus = pbus->parent) {
for (pbus = dev->bus; pbus; pbus = pbus->parent) {
handle = acpi_pci_get_bridge_handle(pbus);
if (handle)
break;
......@@ -345,15 +304,15 @@ acpi_status acpi_get_hp_params_from_firmware(struct pci_bus *bus,
* _HPP settings apply to all child buses, until another _HPP is
* encountered. If we don't find an _HPP for the input pci dev,
* look for it in the parent device scope since that would apply to
* this pci dev. If we don't find any _HPP, use hardcoded defaults
* this pci dev.
*/
while (handle) {
status = acpi_run_hpx(handle, hpp);
if (ACPI_SUCCESS(status))
break;
return 0;
status = acpi_run_hpp(handle, hpp);
if (ACPI_SUCCESS(status))
break;
return 0;
if (acpi_is_root_bridge(handle))
break;
status = acpi_get_parent(handle, &phandle);
......@@ -361,9 +320,9 @@ acpi_status acpi_get_hp_params_from_firmware(struct pci_bus *bus,
break;
handle = phandle;
}
return status;
return -ENODEV;
}
EXPORT_SYMBOL_GPL(acpi_get_hp_params_from_firmware);
EXPORT_SYMBOL_GPL(pci_get_hp_params);
/**
* acpi_get_hp_hw_control_from_firmware
......@@ -500,18 +459,18 @@ check_hotplug(acpi_handle handle, u32 lvl, void *context, void **rv)
/**
* acpi_pci_detect_ejectable - check if the PCI bus has ejectable slots
* @pbus - PCI bus to scan
* @handle - handle of the PCI bus to scan
*
* Returns 1 if the PCI bus has ACPI based ejectable slots, 0 otherwise.
*/
int acpi_pci_detect_ejectable(struct pci_bus *pbus)
int acpi_pci_detect_ejectable(acpi_handle handle)
{
acpi_handle handle;
int found = 0;
if (!(handle = acpi_pci_get_bridge_handle(pbus)))
return 0;
acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, (u32)1,
if (!handle)
return found;
acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1,
check_hotplug, (void *)&found, NULL);
return found;
}
......
......@@ -91,9 +91,6 @@ struct acpiphp_bridge {
/* PCI-to-PCI bridge device */
struct pci_dev *pci_dev;
/* ACPI 2.0 _HPP parameters */
struct hotplug_params hpp;
spinlock_t res_lock;
};
......
......@@ -59,7 +59,7 @@ static DEFINE_SPINLOCK(ioapic_list_lock);
static void handle_hotplug_event_bridge (acpi_handle, u32, void *);
static void acpiphp_sanitize_bus(struct pci_bus *bus);
static void acpiphp_set_hpp_values(acpi_handle handle, struct pci_bus *bus);
static void acpiphp_set_hpp_values(struct pci_bus *bus);
static void handle_hotplug_event_func(acpi_handle handle, u32 type, void *context);
/* callback routine to check for the existence of a pci dock device */
......@@ -261,51 +261,21 @@ register_slot(acpi_handle handle, u32 lvl, void *context, void **rv)
/* see if it's worth looking at this bridge */
static int detect_ejectable_slots(struct pci_bus *pbus)
static int detect_ejectable_slots(acpi_handle handle)
{
int found = acpi_pci_detect_ejectable(pbus);
int found = acpi_pci_detect_ejectable(handle);
if (!found) {
acpi_handle bridge_handle = acpi_pci_get_bridge_handle(pbus);
if (!bridge_handle)
return 0;
acpi_walk_namespace(ACPI_TYPE_DEVICE, bridge_handle, (u32)1,
acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, (u32)1,
is_pci_dock_device, (void *)&found, NULL);
}
return found;
}
/* decode ACPI 2.0 _HPP hot plug parameters */
static void decode_hpp(struct acpiphp_bridge *bridge)
{
acpi_status status;
status = acpi_get_hp_params_from_firmware(bridge->pci_bus, &bridge->hpp);
if (ACPI_FAILURE(status) ||
!bridge->hpp.t0 || (bridge->hpp.t0->revision > 1)) {
/* use default numbers */
printk(KERN_WARNING
"%s: Could not get hotplug parameters. Use defaults\n",
__func__);
bridge->hpp.t0 = &bridge->hpp.type0_data;
bridge->hpp.t0->revision = 0;
bridge->hpp.t0->cache_line_size = 0x10;
bridge->hpp.t0->latency_timer = 0x40;
bridge->hpp.t0->enable_serr = 0;
bridge->hpp.t0->enable_perr = 0;
}
}
/* initialize miscellaneous stuff for both root and PCI-to-PCI bridge */
static void init_bridge_misc(struct acpiphp_bridge *bridge)
{
acpi_status status;
/* decode ACPI 2.0 _HPP (hot plug parameters) */
decode_hpp(bridge);
/* must be added to the list prior to calling register_slot */
list_add(&bridge->list, &bridge_list);
......@@ -399,9 +369,10 @@ static inline void config_p2p_bridge_flags(struct acpiphp_bridge *bridge)
/* allocate and initialize host bridge data structure */
static void add_host_bridge(acpi_handle *handle, struct pci_bus *pci_bus)
static void add_host_bridge(acpi_handle *handle)
{
struct acpiphp_bridge *bridge;
struct acpi_pci_root *root = acpi_pci_find_root(handle);
bridge = kzalloc(sizeof(struct acpiphp_bridge), GFP_KERNEL);
if (bridge == NULL)
......@@ -410,7 +381,7 @@ static void add_host_bridge(acpi_handle *handle, struct pci_bus *pci_bus)
bridge->type = BRIDGE_TYPE_HOST;
bridge->handle = handle;
bridge->pci_bus = pci_bus;
bridge->pci_bus = root->bus;
spin_lock_init(&bridge->res_lock);
......@@ -419,7 +390,7 @@ static void add_host_bridge(acpi_handle *handle, struct pci_bus *pci_bus)
/* allocate and initialize PCI-to-PCI bridge data structure */
static void add_p2p_bridge(acpi_handle *handle, struct pci_dev *pci_dev)
static void add_p2p_bridge(acpi_handle *handle)
{
struct acpiphp_bridge *bridge;
......@@ -433,8 +404,8 @@ static void add_p2p_bridge(acpi_handle *handle, struct pci_dev *pci_dev)
bridge->handle = handle;
config_p2p_bridge_flags(bridge);
bridge->pci_dev = pci_dev_get(pci_dev);
bridge->pci_bus = pci_dev->subordinate;
bridge->pci_dev = acpi_get_pci_dev(handle);
bridge->pci_bus = bridge->pci_dev->subordinate;
if (!bridge->pci_bus) {
err("This is not a PCI-to-PCI bridge!\n");
goto err;
......@@ -451,7 +422,7 @@ static void add_p2p_bridge(acpi_handle *handle, struct pci_dev *pci_dev)
init_bridge_misc(bridge);
return;
err:
pci_dev_put(pci_dev);
pci_dev_put(bridge->pci_dev);
kfree(bridge);
return;
}
......@@ -462,39 +433,21 @@ static acpi_status
find_p2p_bridge(acpi_handle handle, u32 lvl, void *context, void **rv)
{
acpi_status status;
acpi_handle dummy_handle;
unsigned long long tmp;
int device, function;
struct pci_dev *dev;
struct pci_bus *pci_bus = context;
status = acpi_get_handle(handle, "_ADR", &dummy_handle);
if (ACPI_FAILURE(status))
return AE_OK; /* continue */
status = acpi_evaluate_integer(handle, "_ADR", NULL, &tmp);
if (ACPI_FAILURE(status)) {
dbg("%s: _ADR evaluation failure\n", __func__);
return AE_OK;
}
device = (tmp >> 16) & 0xffff;
function = tmp & 0xffff;
dev = pci_get_slot(pci_bus, PCI_DEVFN(device, function));
dev = acpi_get_pci_dev(handle);
if (!dev || !dev->subordinate)
goto out;
/* check if this bridge has ejectable slots */
if ((detect_ejectable_slots(dev->subordinate) > 0)) {
if ((detect_ejectable_slots(handle) > 0)) {
dbg("found PCI-to-PCI bridge at PCI %s\n", pci_name(dev));
add_p2p_bridge(handle, dev);
add_p2p_bridge(handle);
}
/* search P2P bridges under this p2p bridge */
status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, (u32)1,
find_p2p_bridge, dev->subordinate, NULL);
find_p2p_bridge, NULL, NULL);
if (ACPI_FAILURE(status))
warn("find_p2p_bridge failed (error code = 0x%x)\n", status);
......@@ -509,9 +462,7 @@ static int add_bridge(acpi_handle handle)
{
acpi_status status;
unsigned long long tmp;
int seg, bus;
acpi_handle dummy_handle;
struct pci_bus *pci_bus;
/* if the bridge doesn't have _STA, we assume it is always there */
status = acpi_get_handle(handle, "_STA", &dummy_handle);
......@@ -526,36 +477,15 @@ static int add_bridge(acpi_handle handle)
return 0;
}
/* get PCI segment number */
status = acpi_evaluate_integer(handle, "_SEG", NULL, &tmp);
seg = ACPI_SUCCESS(status) ? tmp : 0;
/* get PCI bus number */
status = acpi_evaluate_integer(handle, "_BBN", NULL, &tmp);
if (ACPI_SUCCESS(status)) {
bus = tmp;
} else {
warn("can't get bus number, assuming 0\n");
bus = 0;
}
pci_bus = pci_find_bus(seg, bus);
if (!pci_bus) {
err("Can't find bus %04x:%02x\n", seg, bus);
return 0;
}
/* check if this bridge has ejectable slots */
if (detect_ejectable_slots(pci_bus) > 0) {
if (detect_ejectable_slots(handle) > 0) {
dbg("found PCI host-bus bridge with hot-pluggable slots\n");
add_host_bridge(handle, pci_bus);
add_host_bridge(handle);
}
/* search P2P bridges under this host bridge */
status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, (u32)1,
find_p2p_bridge, pci_bus, NULL);
find_p2p_bridge, NULL, NULL);
if (ACPI_FAILURE(status))
warn("find_p2p_bridge failed (error code = 0x%x)\n", status);
......@@ -1083,7 +1013,7 @@ static int __ref enable_device(struct acpiphp_slot *slot)
pci_bus_assign_resources(bus);
acpiphp_sanitize_bus(bus);
acpiphp_set_hpp_values(slot->bridge->handle, bus);
acpiphp_set_hpp_values(bus);
list_for_each_entry(func, &slot->funcs, sibling)
acpiphp_configure_ioapics(func->handle);
pci_enable_bridges(bus);
......@@ -1294,70 +1224,12 @@ static int acpiphp_check_bridge(struct acpiphp_bridge *bridge)
return retval;
}
static void program_hpp(struct pci_dev *dev, struct acpiphp_bridge *bridge)
static void acpiphp_set_hpp_values(struct pci_bus *bus)
{
u16 pci_cmd, pci_bctl;
struct pci_dev *cdev;
/* Program hpp values for this device */
if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
(dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
return;
if ((dev->class >> 8) == PCI_CLASS_BRIDGE_HOST)
return;
pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE,
bridge->hpp.t0->cache_line_size);
pci_write_config_byte(dev, PCI_LATENCY_TIMER,
bridge->hpp.t0->latency_timer);
pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
if (bridge->hpp.t0->enable_serr)
pci_cmd |= PCI_COMMAND_SERR;
else
pci_cmd &= ~PCI_COMMAND_SERR;
if (bridge->hpp.t0->enable_perr)
pci_cmd |= PCI_COMMAND_PARITY;
else
pci_cmd &= ~PCI_COMMAND_PARITY;
pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
/* Program bridge control value and child devices */
if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER,
bridge->hpp.t0->latency_timer);
pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl);
if (bridge->hpp.t0->enable_serr)
pci_bctl |= PCI_BRIDGE_CTL_SERR;
else
pci_bctl &= ~PCI_BRIDGE_CTL_SERR;
if (bridge->hpp.t0->enable_perr)
pci_bctl |= PCI_BRIDGE_CTL_PARITY;
else
pci_bctl &= ~PCI_BRIDGE_CTL_PARITY;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl);
if (dev->subordinate) {
list_for_each_entry(cdev, &dev->subordinate->devices,
bus_list)
program_hpp(cdev, bridge);
}
}
}
static void acpiphp_set_hpp_values(acpi_handle handle, struct pci_bus *bus)
{
struct acpiphp_bridge bridge;
struct pci_dev *dev;
memset(&bridge, 0, sizeof(bridge));
bridge.handle = handle;
bridge.pci_bus = bus;
bridge.pci_dev = bus->self;
decode_hpp(&bridge);
list_for_each_entry(dev, &bus->devices, bus_list)
program_hpp(dev, &bridge);
pci_configure_slot(dev);
}
/*
......@@ -1387,24 +1259,23 @@ static void acpiphp_sanitize_bus(struct pci_bus *bus)
/* Program resources in newly inserted bridge */
static int acpiphp_configure_bridge (acpi_handle handle)
{
struct pci_dev *dev;
struct pci_bus *bus;
dev = acpi_get_pci_dev(handle);
if (!dev) {
err("cannot get PCI domain and bus number for bridge\n");
return -EINVAL;
if (acpi_is_root_bridge(handle)) {
struct acpi_pci_root *root = acpi_pci_find_root(handle);
bus = root->bus;
} else {
struct pci_dev *pdev = acpi_get_pci_dev(handle);
bus = pdev->subordinate;
pci_dev_put(pdev);
}
bus = dev->bus;
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
acpiphp_sanitize_bus(bus);
acpiphp_set_hpp_values(handle, bus);
acpiphp_set_hpp_values(bus);
pci_enable_bridges(bus);
acpiphp_configure_ioapics(handle);
pci_dev_put(dev);
return 0;
}
......
......@@ -86,7 +86,8 @@ static char *pci_bus_speed_strings[] = {
"66 MHz PCIX 533", /* 0x11 */
"100 MHz PCIX 533", /* 0x12 */
"133 MHz PCIX 533", /* 0x13 */
"25 GBps PCI-E", /* 0x14 */
"2.5 GT/s PCI-E", /* 0x14 */
"5.0 GT/s PCI-E", /* 0x15 */
};
#ifdef CONFIG_HOTPLUG_PCI_CPCI
......
......@@ -237,17 +237,8 @@ static inline int pciehp_get_hp_hw_control_from_firmware(struct pci_dev *dev)
return retval;
return pciehp_acpi_slot_detection_check(dev);
}
static inline int pciehp_get_hp_params_from_firmware(struct pci_dev *dev,
struct hotplug_params *hpp)
{
if (ACPI_FAILURE(acpi_get_hp_params_from_firmware(dev->bus, hpp)))
return -ENODEV;
return 0;
}
#else
#define pciehp_firmware_init() do {} while (0)
#define pciehp_get_hp_hw_control_from_firmware(dev) 0
#define pciehp_get_hp_params_from_firmware(dev, hpp) (-ENODEV)
#endif /* CONFIG_ACPI */
#endif /* _PCIEHP_H */
......@@ -47,7 +47,7 @@ int pciehp_acpi_slot_detection_check(struct pci_dev *dev)
{
if (slot_detection_mode != PCIEHP_DETECT_ACPI)
return 0;
if (acpi_pci_detect_ejectable(dev->subordinate))
if (acpi_pci_detect_ejectable(DEVICE_ACPI_HANDLE(&dev->dev)))
return 0;
return -ENODEV;
}
......@@ -76,9 +76,9 @@ static int __init dummy_probe(struct pcie_device *dev)
{
int pos;
u32 slot_cap;
acpi_handle handle;
struct slot *slot, *tmp;
struct pci_dev *pdev = dev->port;
struct pci_bus *pbus = pdev->subordinate;
/* Note: pciehp_detect_mode != PCIEHP_DETECT_ACPI here */
if (pciehp_get_hp_hw_control_from_firmware(pdev))
return -ENODEV;
......@@ -94,7 +94,8 @@ static int __init dummy_probe(struct pcie_device *dev)
dup_slot_id++;
}
list_add_tail(&slot->slot_list, &dummy_slots);
if (!acpi_slot_detected && acpi_pci_detect_ejectable(pbus))
handle = DEVICE_ACPI_HANDLE(&pdev->dev);
if (!acpi_slot_detected && acpi_pci_detect_ejectable(handle))
acpi_slot_detected = 1;
return -ENODEV; /* dummy driver always returns error */
}
......
......@@ -246,11 +246,6 @@ static int board_added(struct slot *p_slot)
goto err_exit;
}
/*
* Some PCI Express root ports require fixup after hot-plug operation.
*/
if (pcie_mch_quirk)
pci_fixup_device(pci_fixup_final, ctrl->pci_dev);
if (PWR_LED(ctrl))
p_slot->hpc_ops->green_led_on(p_slot);
......
......@@ -693,7 +693,10 @@ static int hpc_get_max_lnk_speed(struct slot *slot, enum pci_bus_speed *value)
switch (lnk_cap & 0x000F) {
case 1:
lnk_speed = PCIE_2PT5GB;
lnk_speed = PCIE_2_5GB;
break;
case 2:
lnk_speed = PCIE_5_0GB;
break;
default:
lnk_speed = PCIE_LNK_SPEED_UNKNOWN;
......@@ -772,7 +775,10 @@ static int hpc_get_cur_lnk_speed(struct slot *slot, enum pci_bus_speed *value)
switch (lnk_status & PCI_EXP_LNKSTA_CLS) {
case 1:
lnk_speed = PCIE_2PT5GB;
lnk_speed = PCIE_2_5GB;
break;
case 2:
lnk_speed = PCIE_5_0GB;
break;
default:
lnk_speed = PCIE_LNK_SPEED_UNKNOWN;
......
......@@ -34,136 +34,6 @@
#include "../pci.h"
#include "pciehp.h"
static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp)
{
u16 pci_cmd, pci_bctl;
if (hpp->revision > 1) {
warn("Rev.%d type0 record not supported\n", hpp->revision);
return;
}
pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpp->cache_line_size);
pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp->latency_timer);
pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
if (hpp->enable_serr)
pci_cmd |= PCI_COMMAND_SERR;
else
pci_cmd &= ~PCI_COMMAND_SERR;
if (hpp->enable_perr)
pci_cmd |= PCI_COMMAND_PARITY;
else
pci_cmd &= ~PCI_COMMAND_PARITY;
pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
/* Program bridge control value */
if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER,
hpp->latency_timer);
pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl);
if (hpp->enable_serr)
pci_bctl |= PCI_BRIDGE_CTL_SERR;
else
pci_bctl &= ~PCI_BRIDGE_CTL_SERR;
if (hpp->enable_perr)
pci_bctl |= PCI_BRIDGE_CTL_PARITY;
else
pci_bctl &= ~PCI_BRIDGE_CTL_PARITY;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl);
}
}
static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
{
int pos;
u16 reg16;
u32 reg32;
if (hpp->revision > 1) {
warn("Rev.%d type2 record not supported\n", hpp->revision);
return;
}
/* Find PCI Express capability */
pos = pci_find_capability(dev, PCI_CAP_ID_EXP);
if (!pos)
return;
/* Initialize Device Control Register */
pci_read_config_word(dev, pos + PCI_EXP_DEVCTL, &reg16);
reg16 = (reg16 & hpp->pci_exp_devctl_and) | hpp->pci_exp_devctl_or;
pci_write_config_word(dev, pos + PCI_EXP_DEVCTL, reg16);
/* Initialize Link Control Register */
if (dev->subordinate) {
pci_read_config_word(dev, pos + PCI_EXP_LNKCTL, &reg16);
reg16 = (reg16 & hpp->pci_exp_lnkctl_and)
| hpp->pci_exp_lnkctl_or;
pci_write_config_word(dev, pos + PCI_EXP_LNKCTL, reg16);
}
/* Find Advanced Error Reporting Enhanced Capability */
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
if (!pos)
return;
/* Initialize Uncorrectable Error Mask Register */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &reg32);
reg32 = (reg32 & hpp->unc_err_mask_and) | hpp->unc_err_mask_or;
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32);
/* Initialize Uncorrectable Error Severity Register */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &reg32);
reg32 = (reg32 & hpp->unc_err_sever_and) | hpp->unc_err_sever_or;
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32);
/* Initialize Correctable Error Mask Register */
pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg32);
reg32 = (reg32 & hpp->cor_err_mask_and) | hpp->cor_err_mask_or;
pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32);
/* Initialize Advanced Error Capabilities and Control Register */
pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32);
reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or;
pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32);
/*
* FIXME: The following two registers are not supported yet.
*
* o Secondary Uncorrectable Error Severity Register
* o Secondary Uncorrectable Error Mask Register
*/
}
static void program_fw_provided_values(struct pci_dev *dev)
{
struct pci_dev *cdev;
struct hotplug_params hpp;
/* Program hpp values for this device */
if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
(dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
return;
if (pciehp_get_hp_params_from_firmware(dev, &hpp)) {
warn("Could not get hotplug parameters\n");
return;
}
if (hpp.t2)
program_hpp_type2(dev, hpp.t2);
if (hpp.t0)
program_hpp_type0(dev, hpp.t0);
/* Program child devices */
if (dev->subordinate) {
list_for_each_entry(cdev, &dev->subordinate->devices,
bus_list)
program_fw_provided_values(cdev);
}
}
static int __ref pciehp_add_bridge(struct pci_dev *dev)
{
struct pci_bus *parent = dev->bus;
......@@ -226,7 +96,7 @@ int pciehp_configure_device(struct slot *p_slot)
(dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)) {
pciehp_add_bridge(dev);
}
program_fw_provided_values(dev);
pci_configure_slot(dev);
pci_dev_put(dev);
}
......@@ -285,11 +155,6 @@ int pciehp_unconfigure_device(struct slot *p_slot)
}
pci_dev_put(temp);
}
/*
* Some PCI Express root ports require fixup after hot-plug operation.
*/
if (pcie_mch_quirk)
pci_fixup_device(pci_fixup_final, p_slot->ctrl->pci_dev);
return rc;
}
/*
* Copyright (C) 1995,2001 Compaq Computer Corporation
* Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com)
* Copyright (C) 2001 IBM Corp.
* Copyright (C) 2003-2004 Intel Corporation
* (c) Copyright 2009 Hewlett-Packard Development Company, L.P.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
static struct hpp_type0 pci_default_type0 = {
.revision = 1,
.cache_line_size = 8,
.latency_timer = 0x40,
.enable_serr = 0,
.enable_perr = 0,
};
static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp)
{
u16 pci_cmd, pci_bctl;
if (!hpp) {
/*
* Perhaps we *should* use default settings for PCIe, but
* pciehp didn't, so we won't either.
*/
if (dev->is_pcie)
return;
dev_info(&dev->dev, "using default PCI settings\n");
hpp = &pci_default_type0;
}
if (hpp->revision > 1) {
dev_warn(&dev->dev,
"PCI settings rev %d not supported; using defaults\n",
hpp->revision);
hpp = &pci_default_type0;
}
pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpp->cache_line_size);
pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp->latency_timer);
pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
if (hpp->enable_serr)
pci_cmd |= PCI_COMMAND_SERR;
else
pci_cmd &= ~PCI_COMMAND_SERR;
if (hpp->enable_perr)
pci_cmd |= PCI_COMMAND_PARITY;
else
pci_cmd &= ~PCI_COMMAND_PARITY;
pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
/* Program bridge control value */
if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER,
hpp->latency_timer);
pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl);
if (hpp->enable_serr)
pci_bctl |= PCI_BRIDGE_CTL_SERR;
else
pci_bctl &= ~PCI_BRIDGE_CTL_SERR;
if (hpp->enable_perr)
pci_bctl |= PCI_BRIDGE_CTL_PARITY;
else
pci_bctl &= ~PCI_BRIDGE_CTL_PARITY;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl);
}
}
static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp)
{
if (hpp)
dev_warn(&dev->dev, "PCI-X settings not supported\n");
}
static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
{
int pos;
u16 reg16;
u32 reg32;
if (!hpp)
return;
/* Find PCI Express capability */
pos = pci_find_capability(dev, PCI_CAP_ID_EXP);
if (!pos)
return;
if (hpp->revision > 1) {
dev_warn(&dev->dev, "PCIe settings rev %d not supported\n",
hpp->revision);
return;
}
/* Initialize Device Control Register */
pci_read_config_word(dev, pos + PCI_EXP_DEVCTL, &reg16);
reg16 = (reg16 & hpp->pci_exp_devctl_and) | hpp->pci_exp_devctl_or;
pci_write_config_word(dev, pos + PCI_EXP_DEVCTL, reg16);
/* Initialize Link Control Register */
if (dev->subordinate) {
pci_read_config_word(dev, pos + PCI_EXP_LNKCTL, &reg16);
reg16 = (reg16 & hpp->pci_exp_lnkctl_and)
| hpp->pci_exp_lnkctl_or;
pci_write_config_word(dev, pos + PCI_EXP_LNKCTL, reg16);
}
/* Find Advanced Error Reporting Enhanced Capability */
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
if (!pos)
return;
/* Initialize Uncorrectable Error Mask Register */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &reg32);
reg32 = (reg32 & hpp->unc_err_mask_and) | hpp->unc_err_mask_or;
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32);
/* Initialize Uncorrectable Error Severity Register */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &reg32);
reg32 = (reg32 & hpp->unc_err_sever_and) | hpp->unc_err_sever_or;
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32);
/* Initialize Correctable Error Mask Register */
pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg32);
reg32 = (reg32 & hpp->cor_err_mask_and) | hpp->cor_err_mask_or;
pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32);
/* Initialize Advanced Error Capabilities and Control Register */
pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32);
reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or;
pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32);
/*
* FIXME: The following two registers are not supported yet.
*
* o Secondary Uncorrectable Error Severity Register
* o Secondary Uncorrectable Error Mask Register
*/
}
void pci_configure_slot(struct pci_dev *dev)
{
struct pci_dev *cdev;
struct hotplug_params hpp;
int ret;
if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
(dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
return;
memset(&hpp, 0, sizeof(hpp));
ret = pci_get_hp_params(dev, &hpp);
if (ret)
dev_warn(&dev->dev, "no hotplug settings from platform\n");
program_hpp_type2(dev, hpp.t2);
program_hpp_type1(dev, hpp.t1);
program_hpp_type0(dev, hpp.t0);
if (dev->subordinate) {
list_for_each_entry(cdev, &dev->subordinate->devices,
bus_list)
pci_configure_slot(cdev);
}
}
EXPORT_SYMBOL_GPL(pci_configure_slot);
......@@ -188,21 +188,12 @@ static inline const char *slot_name(struct slot *slot)
#ifdef CONFIG_ACPI
#include <linux/pci-acpi.h>
static inline int get_hp_params_from_firmware(struct pci_dev *dev,
struct hotplug_params *hpp)
{
if (ACPI_FAILURE(acpi_get_hp_params_from_firmware(dev->bus, hpp)))
return -ENODEV;
return 0;
}
static inline int get_hp_hw_control_from_firmware(struct pci_dev *dev)
{
u32 flags = OSC_SHPC_NATIVE_HP_CONTROL;
return acpi_get_hp_hw_control_from_firmware(dev, flags);
}
#else
#define get_hp_params_from_firmware(dev, hpp) (-ENODEV)
#define get_hp_hw_control_from_firmware(dev) (0)
#endif
......
......@@ -34,66 +34,6 @@
#include "../pci.h"
#include "shpchp.h"
static void program_fw_provided_values(struct pci_dev *dev)
{
u16 pci_cmd, pci_bctl;
struct pci_dev *cdev;
struct hotplug_params hpp;
/* Program hpp values for this device */
if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
(dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
return;
/* use default values if we can't get them from firmware */
if (get_hp_params_from_firmware(dev, &hpp) ||
!hpp.t0 || (hpp.t0->revision > 1)) {
warn("Could not get hotplug parameters. Use defaults\n");
hpp.t0 = &hpp.type0_data;
hpp.t0->revision = 0;
hpp.t0->cache_line_size = 8;
hpp.t0->latency_timer = 0x40;
hpp.t0->enable_serr = 0;
hpp.t0->enable_perr = 0;
}
pci_write_config_byte(dev,
PCI_CACHE_LINE_SIZE, hpp.t0->cache_line_size);
pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp.t0->latency_timer);
pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
if (hpp.t0->enable_serr)
pci_cmd |= PCI_COMMAND_SERR;
else
pci_cmd &= ~PCI_COMMAND_SERR;
if (hpp.t0->enable_perr)
pci_cmd |= PCI_COMMAND_PARITY;
else
pci_cmd &= ~PCI_COMMAND_PARITY;
pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
/* Program bridge control value and child devices */
if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER,
hpp.t0->latency_timer);
pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl);
if (hpp.t0->enable_serr)
pci_bctl |= PCI_BRIDGE_CTL_SERR;
else
pci_bctl &= ~PCI_BRIDGE_CTL_SERR;
if (hpp.t0->enable_perr)
pci_bctl |= PCI_BRIDGE_CTL_PARITY;
else
pci_bctl &= ~PCI_BRIDGE_CTL_PARITY;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl);
if (dev->subordinate) {
list_for_each_entry(cdev, &dev->subordinate->devices,
bus_list)
program_fw_provided_values(cdev);
}
}
}
int __ref shpchp_configure_device(struct slot *p_slot)
{
struct pci_dev *dev;
......@@ -153,7 +93,7 @@ int __ref shpchp_configure_device(struct slot *p_slot)
child->subordinate = pci_do_scan_bus(child);
pci_bus_size_bridges(child);
}
program_fw_provided_values(dev);
pci_configure_slot(dev);
pci_dev_put(dev);
}
......
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include "pci.h"
/**
* pci_find_device - begin or continue searching for a PCI device by vendor/device id
* @vendor: PCI vendor id to match, or %PCI_ANY_ID to match all vendor ids
* @device: PCI device id to match, or %PCI_ANY_ID to match all device ids
* @from: Previous PCI device found in search, or %NULL for new search.
*
* Iterates through the list of known PCI devices. If a PCI device is found
* with a matching @vendor and @device, a pointer to its device structure is
* returned. Otherwise, %NULL is returned.
* A new search is initiated by passing %NULL as the @from argument.
 * Otherwise, if @from is not %NULL, the search continues from the next device
 * on the global list.
*
* NOTE: Do not use this function any more; use pci_get_device() instead, as
* the PCI device returned by this function can disappear at any moment in
* time.
*/
struct pci_dev *pci_find_device(unsigned int vendor, unsigned int device,
struct pci_dev *from)
{
struct pci_dev *pdev;
pci_dev_get(from);
pdev = pci_get_subsys(vendor, device, PCI_ANY_ID, PCI_ANY_ID, from);
pci_dev_put(pdev);
return pdev;
}
EXPORT_SYMBOL(pci_find_device);
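A minimal sketch of the replacement pattern the note above recommends; the function name is made up. pci_get_device() returns each matching device with a reference held and drops the reference on @from, so the device cannot disappear while the loop body runs.
#include <linux/pci.h>
/* Sketch: hotplug-safe iteration over all devices of one vendor. */
static void example_walk_vendor(unsigned int vendor)
{
	struct pci_dev *pdev = NULL;
	while ((pdev = pci_get_device(vendor, PCI_ANY_ID, pdev)) != NULL) {
		/* a reference on pdev is held here by pci_get_device() */
		dev_info(&pdev->dev, "found matching device\n");
	}
}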
......@@ -109,15 +109,32 @@ static bool acpi_pci_can_wakeup(struct pci_dev *dev)
return handle ? acpi_bus_can_wakeup(handle) : false;
}
static void acpi_pci_propagate_wakeup_enable(struct pci_bus *bus, bool enable)
{
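	/*
	 * Walk up from @bus, asking ACPI to arm (or disarm) wake-up on each
	 * bridge.  Stop as soon as a bridge accepts the request or a PCI
	 * Express bridge is reached; otherwise fall through to the root
	 * bridge below.
	 */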
while (bus->parent) {
struct pci_dev *bridge = bus->self;
int ret;
ret = acpi_pm_device_sleep_wake(&bridge->dev, enable);
if (!ret || bridge->is_pcie)
return;
bus = bus->parent;
}
/* We have reached the root bus. */
if (bus->bridge)
acpi_pm_device_sleep_wake(bus->bridge, enable);
}
static int acpi_pci_sleep_wake(struct pci_dev *dev, bool enable)
{
int error = acpi_pm_device_sleep_wake(&dev->dev, enable);
if (acpi_pci_can_wakeup(dev))
return acpi_pm_device_sleep_wake(&dev->dev, enable);
if (!error)
dev_printk(KERN_INFO, &dev->dev,
"wake-up capability %s by ACPI\n",
enable ? "enabled" : "disabled");
return error;
if (!dev->is_pcie)
acpi_pci_propagate_wakeup_enable(dev->bus, enable);
return 0;
}
static struct pci_platform_pm_ops acpi_pci_platform_pm = {
......
......@@ -19,37 +19,98 @@
#include <linux/cpu.h>
#include "pci.h"
/*
* Dynamic device IDs are disabled for !CONFIG_HOTPLUG
*/
struct pci_dynid {
struct list_head node;
struct pci_device_id id;
};
#ifdef CONFIG_HOTPLUG
/**
* pci_add_dynid - add a new PCI device ID to this driver and re-probe devices
* @drv: target pci driver
* @vendor: PCI vendor ID
* @device: PCI device ID
* @subvendor: PCI subvendor ID
* @subdevice: PCI subdevice ID
* @class: PCI class
* @class_mask: PCI class mask
* @driver_data: private driver data
*
* Adds a new dynamic pci device ID to this driver and causes the
* driver to probe for all devices again. @drv must have been
* registered prior to calling this function.
*
* CONTEXT:
* Does GFP_KERNEL allocation.
*
* RETURNS:
* 0 on success, -errno on failure.
*/
int pci_add_dynid(struct pci_driver *drv,
unsigned int vendor, unsigned int device,
unsigned int subvendor, unsigned int subdevice,
unsigned int class, unsigned int class_mask,
unsigned long driver_data)
{
struct pci_dynid *dynid;
int retval;
dynid = kzalloc(sizeof(*dynid), GFP_KERNEL);
if (!dynid)
return -ENOMEM;
dynid->id.vendor = vendor;
dynid->id.device = device;
dynid->id.subvendor = subvendor;
dynid->id.subdevice = subdevice;
dynid->id.class = class;
dynid->id.class_mask = class_mask;
dynid->id.driver_data = driver_data;
spin_lock(&drv->dynids.lock);
list_add_tail(&dynid->node, &drv->dynids.list);
spin_unlock(&drv->dynids.lock);
get_driver(&drv->driver);
retval = driver_attach(&drv->driver);
put_driver(&drv->driver);
return retval;
}
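A hedged sketch (not part of the patch) of a caller: a module that wants an already-registered driver to start claiming one more device ID at run time. The 0x1234/0x5678 IDs are placeholders, not real hardware; the function name is hypothetical.
#include <linux/pci.h>
/* Sketch: teach an existing driver about one extra vendor/device pair. */
static int example_bind_extra_id(struct pci_driver *drv)
{
	return pci_add_dynid(drv, 0x1234, 0x5678,	/* hypothetical IDs */
			     PCI_ANY_ID, PCI_ANY_ID,
			     0, 0,			/* any class */
			     0);			/* no driver_data */
}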
static void pci_free_dynids(struct pci_driver *drv)
{
struct pci_dynid *dynid, *n;
spin_lock(&drv->dynids.lock);
list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
list_del(&dynid->node);
kfree(dynid);
}
spin_unlock(&drv->dynids.lock);
}
/*
* Dynamic device ID manipulation via sysfs is disabled for !CONFIG_HOTPLUG
*/
#ifdef CONFIG_HOTPLUG
/**
* store_new_id - add a new PCI device ID to this driver and re-probe devices
* store_new_id - sysfs frontend to pci_add_dynid()
* @driver: target device driver
* @buf: buffer for scanning device ID data
* @count: input size
*
* Adds a new dynamic pci device ID to this driver,
* and causes the driver to probe for all devices again.
* Allow PCI IDs to be added to an existing driver via sysfs.
*/
static ssize_t
store_new_id(struct device_driver *driver, const char *buf, size_t count)
{
struct pci_dynid *dynid;
struct pci_driver *pdrv = to_pci_driver(driver);
const struct pci_device_id *ids = pdrv->id_table;
__u32 vendor, device, subvendor=PCI_ANY_ID,
subdevice=PCI_ANY_ID, class=0, class_mask=0;
unsigned long driver_data=0;
int fields=0;
int retval=0;
int retval;
fields = sscanf(buf, "%x %x %x %x %x %x %lx",
&vendor, &device, &subvendor, &subdevice,
......@@ -72,27 +133,8 @@ store_new_id(struct device_driver *driver, const char *buf, size_t count)
return retval;
}
dynid = kzalloc(sizeof(*dynid), GFP_KERNEL);
if (!dynid)
return -ENOMEM;
dynid->id.vendor = vendor;
dynid->id.device = device;
dynid->id.subvendor = subvendor;
dynid->id.subdevice = subdevice;
dynid->id.class = class;
dynid->id.class_mask = class_mask;
dynid->id.driver_data = driver_data;
spin_lock(&pdrv->dynids.lock);
list_add_tail(&dynid->node, &pdrv->dynids.list);
spin_unlock(&pdrv->dynids.lock);
if (get_driver(&pdrv->driver)) {
retval = driver_attach(&pdrv->driver);
put_driver(&pdrv->driver);
}
retval = pci_add_dynid(pdrv, vendor, device, subvendor, subdevice,
class, class_mask, driver_data);
if (retval)
return retval;
return count;
......@@ -145,19 +187,6 @@ store_remove_id(struct device_driver *driver, const char *buf, size_t count)
}
static DRIVER_ATTR(remove_id, S_IWUSR, NULL, store_remove_id);
static void
pci_free_dynids(struct pci_driver *drv)
{
struct pci_dynid *dynid, *n;
spin_lock(&drv->dynids.lock);
list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
list_del(&dynid->node);
kfree(dynid);
}
spin_unlock(&drv->dynids.lock);
}
static int
pci_create_newid_file(struct pci_driver *drv)
{
......@@ -186,7 +215,6 @@ static void pci_remove_removeid_file(struct pci_driver *drv)
driver_remove_file(&drv->driver, &driver_attr_remove_id);
}
#else /* !CONFIG_HOTPLUG */
static inline void pci_free_dynids(struct pci_driver *drv) {}
static inline int pci_create_newid_file(struct pci_driver *drv)
{
return 0;
......@@ -417,8 +445,6 @@ static int pci_legacy_suspend(struct device *dev, pm_message_t state)
struct pci_dev * pci_dev = to_pci_dev(dev);
struct pci_driver * drv = pci_dev->driver;
pci_dev->state_saved = false;
if (drv && drv->suspend) {
pci_power_t prev = pci_dev->current_state;
int error;
......@@ -514,7 +540,6 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev)
static void pci_pm_default_resume_noirq(struct pci_dev *pci_dev)
{
pci_restore_standard_config(pci_dev);
pci_dev->state_saved = false;
pci_fixup_device(pci_fixup_resume_early, pci_dev);
}
......@@ -580,8 +605,6 @@ static int pci_pm_suspend(struct device *dev)
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_suspend(dev, PMSG_SUSPEND);
pci_dev->state_saved = false;
if (!pm) {
pci_pm_default_suspend(pci_dev);
goto Fixup;
......@@ -694,7 +717,7 @@ static int pci_pm_resume(struct device *dev)
pci_pm_reenable_device(pci_dev);
}
return 0;
return error;
}
#else /* !CONFIG_SUSPEND */
......@@ -716,8 +739,6 @@ static int pci_pm_freeze(struct device *dev)
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_suspend(dev, PMSG_FREEZE);
pci_dev->state_saved = false;
if (!pm) {
pci_pm_default_suspend(pci_dev);
return 0;
......@@ -793,6 +814,8 @@ static int pci_pm_thaw(struct device *dev)
pci_pm_reenable_device(pci_dev);
}
pci_dev->state_saved = false;
return error;
}
......@@ -804,8 +827,6 @@ static int pci_pm_poweroff(struct device *dev)
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_suspend(dev, PMSG_HIBERNATE);
pci_dev->state_saved = false;
if (!pm) {
pci_pm_default_suspend(pci_dev);
goto Fixup;
......@@ -1106,6 +1127,7 @@ static int __init pci_driver_init(void)
postcore_initcall(pci_driver_init);
EXPORT_SYMBOL_GPL(pci_add_dynid);
EXPORT_SYMBOL(pci_match_id);
EXPORT_SYMBOL(__pci_register_driver);
EXPORT_SYMBOL(pci_unregister_driver);
......
......@@ -19,8 +19,16 @@
#include <linux/module.h>
#include <linux/pci.h>
static char ids[1024] __initdata;
module_param_string(ids, ids, sizeof(ids), 0);
MODULE_PARM_DESC(ids, "Initial PCI IDs to add to the stub driver, format is "
"\"vendor:device[:subvendor[:subdevice[:class[:class_mask]]]]\""
" and multiple comma separated entries can be specified");
static int pci_stub_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
dev_printk(KERN_INFO, &dev->dev, "claimed by stub\n");
return 0;
}
......@@ -32,7 +40,42 @@ static struct pci_driver stub_driver = {
static int __init pci_stub_init(void)
{
return pci_register_driver(&stub_driver);
char *p, *id;
int rc;
rc = pci_register_driver(&stub_driver);
if (rc)
return rc;
/* add ids specified in the module parameter */
p = ids;
while ((id = strsep(&p, ","))) {
unsigned int vendor, device, subvendor = PCI_ANY_ID,
subdevice = PCI_ANY_ID, class=0, class_mask=0;
int fields;
fields = sscanf(id, "%x:%x:%x:%x:%x:%x",
&vendor, &device, &subvendor, &subdevice,
&class, &class_mask);
if (fields < 2) {
printk(KERN_WARNING
"pci-stub: invalid id string \"%s\"\n", id);
continue;
}
printk(KERN_INFO
"pci-stub: add %04X:%04X sub=%04X:%04X cls=%08X/%08X\n",
vendor, device, subvendor, subdevice, class, class_mask);
rc = pci_add_dynid(&stub_driver, vendor, device,
subvendor, subdevice, class, class_mask, 0);
if (rc)
printk(KERN_WARNING
"pci-stub: failed to add dynamic id (%d)\n", rc);
}
return 0;
}
static void __exit pci_stub_exit(void)
......
......@@ -916,6 +916,24 @@ int __attribute__ ((weak)) pcibios_add_platform_entries(struct pci_dev *dev)
return 0;
}
static ssize_t reset_store(struct device *dev,
struct device_attribute *attr, const char *buf,
size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
unsigned long val;
ssize_t result = strict_strtoul(buf, 0, &val);
if (result < 0)
return result;
if (val != 1)
return -EINVAL;
return pci_reset_function(pdev);
}
static struct device_attribute reset_attr = __ATTR(reset, 0200, NULL, reset_store);
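For comparison, a hedged in-kernel sketch: code that wants the same per-function reset without going through sysfs (for instance before handing a function to a guest) can check the reset_fn flag set below and call pci_reset_function() directly. The function name is hypothetical.
#include <linux/pci.h>
/* Sketch: reset a single function from kernel code, if the device supports it. */
static int example_reset_before_assign(struct pci_dev *pdev)
{
	if (!pdev->reset_fn)		/* set when pci_probe_reset_function() succeeds */
		return -ENOTTY;
	return pci_reset_function(pdev);	/* quiesces and resets just this function */
}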
static int pci_create_capabilities_sysfs(struct pci_dev *dev)
{
int retval;
......@@ -943,7 +961,22 @@ static int pci_create_capabilities_sysfs(struct pci_dev *dev)
/* Active State Power Management */
pcie_aspm_create_sysfs_dev_files(dev);
if (!pci_probe_reset_function(dev)) {
retval = device_create_file(&dev->dev, &reset_attr);
if (retval)
goto error;
dev->reset_fn = 1;
}
return 0;
error:
pcie_aspm_remove_sysfs_dev_files(dev);
if (dev->vpd && dev->vpd->attr) {
sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr);
kfree(dev->vpd->attr);
}
return retval;
}
int __must_check pci_create_sysfs_dev_files (struct pci_dev *pdev)
......@@ -1037,6 +1070,10 @@ static void pci_remove_capabilities_sysfs(struct pci_dev *dev)
}
pcie_aspm_remove_sysfs_dev_files(dev);
if (dev->reset_fn) {
device_remove_file(&dev->dev, &reset_attr);
dev->reset_fn = 0;
}
}
/**
......
......@@ -41,6 +41,12 @@ int pci_domains_supported = 1;
unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE;
unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE;
#define DEFAULT_HOTPLUG_IO_SIZE (256)
#define DEFAULT_HOTPLUG_MEM_SIZE (2*1024*1024)
/* pci=hpmemsize=nnM,hpiosize=nn can override this */
unsigned long pci_hotplug_io_size = DEFAULT_HOTPLUG_IO_SIZE;
unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;
/**
* pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
* @bus: pointer to PCI bus structure to search
......@@ -848,6 +854,7 @@ pci_restore_state(struct pci_dev *dev)
if (!dev->state_saved)
return 0;
/* PCI Express register must be restored first */
pci_restore_pcie_state(dev);
......@@ -869,6 +876,8 @@ pci_restore_state(struct pci_dev *dev)
pci_restore_msi_state(dev);
pci_restore_iov_state(dev);
dev->state_saved = false;
return 0;
}
......@@ -1214,30 +1223,40 @@ void pci_pme_active(struct pci_dev *dev, bool enable)
*/
int pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable)
{
int error = 0;
bool pme_done = false;
int ret = 0;
if (enable && !device_may_wakeup(&dev->dev))
return -EINVAL;
/* Don't do the same thing twice in a row for one device. */
if (!!enable == !!dev->wakeup_prepared)
return 0;
/*
* According to "PCI System Architecture" 4th ed. by Tom Shanley & Don
* Anderson we should be doing PME# wake enable followed by ACPI wake
* enable. To disable wake-up we call the platform first, for symmetry.
*/
if (!enable && platform_pci_can_wakeup(dev))
error = platform_pci_sleep_wake(dev, false);
if (!enable || pci_pme_capable(dev, state)) {
pci_pme_active(dev, enable);
pme_done = true;
}
if (enable) {
int error;
if (enable && platform_pci_can_wakeup(dev))
if (pci_pme_capable(dev, state))
pci_pme_active(dev, true);
else
ret = 1;
error = platform_pci_sleep_wake(dev, true);
if (ret)
ret = error;
if (!ret)
dev->wakeup_prepared = true;
} else {
platform_pci_sleep_wake(dev, false);
pci_pme_active(dev, false);
dev->wakeup_prepared = false;
}
return pme_done ? 0 : error;
return ret;
}
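An illustrative sketch, assuming a conventional legacy suspend hook: a driver arms wake-up through the helper above, honouring the user-visible wakeup setting via device_may_wakeup(). The function name is hypothetical.
#include <linux/pci.h>
/* Sketch: arm PME#/ACPI wake-up on the way into D3hot. */
static int example_suspend(struct pci_dev *pdev, pm_message_t state)
{
	pci_save_state(pdev);
	pci_enable_wake(pdev, PCI_D3hot, device_may_wakeup(&pdev->dev));
	pci_set_power_state(pdev, PCI_D3hot);
	return 0;
}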
/**
......@@ -1356,6 +1375,7 @@ void pci_pm_init(struct pci_dev *dev)
int pm;
u16 pmc;
dev->wakeup_prepared = false;
dev->pm_cap = 0;
/* find PCI PM capability in list */
......@@ -2261,6 +2281,22 @@ int __pci_reset_function(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(__pci_reset_function);
/**
* pci_probe_reset_function - check whether the device can be safely reset
* @dev: PCI device to reset
*
* Some devices allow an individual function to be reset without affecting
* other functions in the same device. The PCI device must be responsive
* to PCI config space in order to use this function.
*
* Returns 0 if the device function can be reset or negative if the
* device doesn't support resetting a single function.
*/
int pci_probe_reset_function(struct pci_dev *dev)
{
return pci_dev_reset(dev, 1);
}
/**
* pci_reset_function - quiesce and reset a PCI device function
* @dev: PCI device to reset
......@@ -2504,6 +2540,50 @@ int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type)
return 0;
}
/**
* pci_set_vga_state - set VGA decode state on device and parents if requested
* @dev: the PCI device
* @decode: true = enable decoding, false = disable decoding
* @command_bits: PCI_COMMAND_IO and/or PCI_COMMAND_MEMORY
* @change_bridge: traverse ancestors and change bridges
*/
int pci_set_vga_state(struct pci_dev *dev, bool decode,
unsigned int command_bits, bool change_bridge)
{
struct pci_bus *bus;
struct pci_dev *bridge;
u16 cmd;
WARN_ON(command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY));
pci_read_config_word(dev, PCI_COMMAND, &cmd);
if (decode == true)
cmd |= command_bits;
else
cmd &= ~command_bits;
pci_write_config_word(dev, PCI_COMMAND, cmd);
if (change_bridge == false)
return 0;
bus = dev->bus;
while (bus) {
bridge = bus->self;
if (bridge) {
pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
&cmd);
if (decode == true)
cmd |= PCI_BRIDGE_CTL_VGA;
else
cmd &= ~PCI_BRIDGE_CTL_VGA;
pci_write_config_word(bridge, PCI_BRIDGE_CONTROL,
cmd);
}
bus = bus->parent;
}
return 0;
}
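A hedged sketch of a possible caller (the VGA arbiter itself is not in this hunk): switching legacy VGA ownership means turning decoding off for the old owner, including the VGA-enable bit on its upstream bridges, before enabling it for the new owner. Names are hypothetical.
#include <linux/pci.h>
/* Sketch: move legacy VGA I/O and memory decode from one card to another. */
static void example_switch_vga_owner(struct pci_dev *old_owner,
				     struct pci_dev *new_owner)
{
	if (old_owner)
		pci_set_vga_state(old_owner, false,
				  PCI_COMMAND_IO | PCI_COMMAND_MEMORY, true);
	pci_set_vga_state(new_owner, true,
			  PCI_COMMAND_IO | PCI_COMMAND_MEMORY, true);
}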
#define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
spinlock_t resource_alignment_lock = SPIN_LOCK_UNLOCKED;
......@@ -2672,6 +2752,10 @@ static int __init pci_setup(char *str)
strlen(str + 19));
} else if (!strncmp(str, "ecrc=", 5)) {
pcie_ecrc_get_policy(str + 5);
} else if (!strncmp(str, "hpiosize=", 9)) {
pci_hotplug_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "hpmemsize=", 10)) {
pci_hotplug_mem_size = memparse(str + 10, &str);
} else {
printk(KERN_ERR "PCI: Unknown option `%s'\n",
str);
......
......@@ -16,6 +16,7 @@ extern void pci_cleanup_rom(struct pci_dev *dev);
extern int pci_mmap_fits(struct pci_dev *pdev, int resno,
struct vm_area_struct *vma);
#endif
int pci_probe_reset_function(struct pci_dev *dev);
/**
* struct pci_platform_pm_ops - Firmware PM callbacks
......@@ -133,7 +134,6 @@ static inline int pci_no_d1d2(struct pci_dev *dev)
return (dev->no_d1d2 || parent_dstates);
}
extern int pcie_mch_quirk;
extern struct device_attribute pci_dev_attrs[];
extern struct device_attribute dev_attr_cpuaffinity;
extern struct device_attribute dev_attr_cpulistaffinity;
......
......@@ -22,11 +22,10 @@
#include <linux/miscdevice.h>
#include <linux/pci.h>
#include <linux/fs.h>
#include <asm/uaccess.h>
#include <linux/uaccess.h>
#include "aerdrv.h"
struct aer_error_inj
{
struct aer_error_inj {
u8 bus;
u8 dev;
u8 fn;
......@@ -38,8 +37,7 @@ struct aer_error_inj
u32 header_log3;
};
struct aer_error
{
struct aer_error {
struct list_head list;
unsigned int bus;
unsigned int devfn;
......@@ -55,8 +53,7 @@ struct aer_error
u32 source_id;
};
struct pci_bus_ops
{
struct pci_bus_ops {
struct list_head list;
struct pci_bus *bus;
struct pci_ops *ops;
......@@ -150,7 +147,7 @@ static u32 *find_pci_config_dword(struct aer_error *err, int where,
target = &err->header_log1;
break;
case PCI_ERR_HEADER_LOG+8:
target = &err->header_log2;
target = &err->header_log2;
break;
case PCI_ERR_HEADER_LOG+12:
target = &err->header_log3;
......@@ -258,8 +255,7 @@ static int pci_bus_set_aer_ops(struct pci_bus *bus)
bus_ops = NULL;
out:
spin_unlock_irqrestore(&inject_lock, flags);
if (bus_ops)
kfree(bus_ops);
kfree(bus_ops);
return 0;
}
......@@ -401,10 +397,8 @@ static int aer_inject(struct aer_error_inj *einj)
else
ret = -EINVAL;
out_put:
if (err_alloc)
kfree(err_alloc);
if (rperr_alloc)
kfree(rperr_alloc);
kfree(err_alloc);
kfree(rperr_alloc);
pci_dev_put(dev);
return ret;
}
......@@ -458,8 +452,7 @@ static void __exit aer_inject_exit(void)
}
spin_lock_irqsave(&inject_lock, flags);
list_for_each_entry_safe(err, err_next,
&pci_bus_ops_list, list) {
list_for_each_entry_safe(err, err_next, &pci_bus_ops_list, list) {
list_del(&err->list);
kfree(err);
}
......
......@@ -38,7 +38,7 @@ MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
static int __devinit aer_probe (struct pcie_device *dev);
static int __devinit aer_probe(struct pcie_device *dev);
static void aer_remove(struct pcie_device *dev);
static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
enum pci_channel_state error);
......@@ -47,7 +47,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
static struct pci_error_handlers aer_error_handlers = {
.error_detected = aer_error_detected,
.resume = aer_error_resume,
.resume = aer_error_resume,
};
static struct pcie_port_service_driver aerdriver = {
......@@ -134,12 +134,12 @@ EXPORT_SYMBOL_GPL(aer_irq);
*
* Invoked when Root Port's AER service is loaded.
**/
static struct aer_rpc* aer_alloc_rpc(struct pcie_device *dev)
static struct aer_rpc *aer_alloc_rpc(struct pcie_device *dev)
{
struct aer_rpc *rpc;
if (!(rpc = kzalloc(sizeof(struct aer_rpc),
GFP_KERNEL)))
rpc = kzalloc(sizeof(struct aer_rpc), GFP_KERNEL);
if (!rpc)
return NULL;
/*
......@@ -189,26 +189,28 @@ static void aer_remove(struct pcie_device *dev)
*
* Invoked when PCI Express bus loads AER service driver.
**/
static int __devinit aer_probe (struct pcie_device *dev)
static int __devinit aer_probe(struct pcie_device *dev)
{
int status;
struct aer_rpc *rpc;
struct device *device = &dev->device;
/* Init */
if ((status = aer_init(dev)))
status = aer_init(dev);
if (status)
return status;
/* Alloc rpc data structure */
if (!(rpc = aer_alloc_rpc(dev))) {
rpc = aer_alloc_rpc(dev);
if (!rpc) {
dev_printk(KERN_DEBUG, device, "alloc rpc failed\n");
aer_remove(dev);
return -ENOMEM;
}
/* Request IRQ ISR */
if ((status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv",
dev))) {
status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
if (status) {
dev_printk(KERN_DEBUG, device, "request IRQ failed\n");
aer_remove(dev);
return status;
......
......@@ -16,12 +16,9 @@
#define AER_NONFATAL 0
#define AER_FATAL 1
#define AER_CORRECTABLE 2
#define AER_UNCORRECTABLE 4
#define AER_ERROR_MASK 0x001fffff
#define AER_ERROR(d) (d & AER_ERROR_MASK)
/* Root Error Status Register Bits */
#define ROOT_ERR_STATUS_MASKS 0x0f
#define ROOT_ERR_STATUS_MASKS 0x0f
#define SYSTEM_ERROR_INTR_ON_MESG_MASK (PCI_EXP_RTCTL_SECEE| \
PCI_EXP_RTCTL_SENFEE| \
......@@ -32,8 +29,6 @@
#define ERR_COR_ID(d) (d & 0xffff)
#define ERR_UNCOR_ID(d) (d >> 16)
#define AER_SUCCESS 0
#define AER_UNSUCCESS 1
#define AER_ERROR_SOURCES_MAX 100
#define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \
......@@ -43,13 +38,6 @@
PCI_ERR_UNC_UNX_COMP| \
PCI_ERR_UNC_MALF_TLP)
/* AER Error Info Flags */
#define AER_TLP_HEADER_VALID_FLAG 0x00000001
#define AER_MULTI_ERROR_VALID_FLAG 0x00000002
#define ERR_CORRECTABLE_ERROR_MASK 0x000031c1
#define ERR_UNCORRECTABLE_ERROR_MASK 0x001ff010
struct header_log_regs {
unsigned int dw0;
unsigned int dw1;
......@@ -61,11 +49,20 @@ struct header_log_regs {
struct aer_err_info {
struct pci_dev *dev[AER_MAX_MULTI_ERR_DEVICES];
int error_dev_num;
u16 id;
int severity; /* 0:NONFATAL | 1:FATAL | 2:COR */
int flags;
unsigned int id:16;
unsigned int severity:2; /* 0:NONFATAL | 1:FATAL | 2:COR */
unsigned int __pad1:5;
unsigned int multi_error_valid:1;
unsigned int first_error:5;
unsigned int __pad2:2;
unsigned int tlp_header_valid:1;
unsigned int status; /* COR/UNCOR Error Status */
struct header_log_regs tlp; /* TLP Header */
unsigned int mask; /* COR/UNCOR Error Mask */
struct header_log_regs tlp; /* TLP Header */
};
struct aer_err_source {
......@@ -125,6 +122,7 @@ extern void aer_delete_rootport(struct aer_rpc *rpc);
extern int aer_init(struct pcie_device *dev);
extern void aer_isr(struct work_struct *work);
extern void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
extern void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info);
extern irqreturn_t aer_irq(int irq, void *context);
#ifdef CONFIG_ACPI
......@@ -136,4 +134,4 @@ static inline int aer_osc_setup(struct pcie_device *pciedev)
}
#endif
#endif //_AERDRV_H_
#endif /* _AERDRV_H_ */
......@@ -49,10 +49,11 @@ int pci_enable_pcie_error_reporting(struct pci_dev *dev)
PCI_EXP_DEVCTL_NFERE |
PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_URRE;
pci_write_config_word(dev, pos+PCI_EXP_DEVCTL,
reg16);
pci_write_config_word(dev, pos+PCI_EXP_DEVCTL, reg16);
return 0;
}
EXPORT_SYMBOL_GPL(pci_enable_pcie_error_reporting);
int pci_disable_pcie_error_reporting(struct pci_dev *dev)
{
......@@ -68,10 +69,11 @@ int pci_disable_pcie_error_reporting(struct pci_dev *dev)
PCI_EXP_DEVCTL_NFERE |
PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_URRE);
pci_write_config_word(dev, pos+PCI_EXP_DEVCTL,
reg16);
pci_write_config_word(dev, pos+PCI_EXP_DEVCTL, reg16);
return 0;
}
EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting);
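A minimal sketch, not taken from this series: an endpoint driver that wants AER events reported for its device would typically enable reporting from probe and disable it again on remove; the return value is commonly treated as best effort. The probe function name is hypothetical.
#include <linux/pci.h>
#include <linux/aer.h>
/* Sketch: turn on correctable/non-fatal/fatal/UR reporting for this function. */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int rc = pci_enable_device(pdev);
	if (rc)
		return rc;
	pci_enable_pcie_error_reporting(pdev);	/* best effort; ignore failure */
	return 0;
}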
int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
{
......@@ -92,6 +94,7 @@ int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
return 0;
}
EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status);
#if 0
int pci_cleanup_aer_correct_error_status(struct pci_dev *dev)
......@@ -110,7 +113,6 @@ int pci_cleanup_aer_correct_error_status(struct pci_dev *dev)
}
#endif /* 0 */
static int set_device_error_reporting(struct pci_dev *dev, void *data)
{
bool enable = *((bool *)data);
......@@ -164,8 +166,9 @@ static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev)
e_info->dev[e_info->error_dev_num] = dev;
e_info->error_dev_num++;
return 1;
} else
return 0;
}
return 0;
}
......@@ -193,7 +196,7 @@ static int find_device_iter(struct pci_dev *dev, void *data)
	 * If there is no multiple error, we stop
	 * or continue based on the id comparison.
*/
if (!(e_info->flags & AER_MULTI_ERROR_VALID_FLAG))
if (!e_info->multi_error_valid)
return result;
/*
......@@ -233,24 +236,16 @@ static int find_device_iter(struct pci_dev *dev, void *data)
status = 0;
mask = 0;
if (e_info->severity == AER_CORRECTABLE) {
pci_read_config_dword(dev,
pos + PCI_ERR_COR_STATUS,
&status);
pci_read_config_dword(dev,
pos + PCI_ERR_COR_MASK,
&mask);
if (status & ERR_CORRECTABLE_ERROR_MASK & ~mask) {
pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status);
pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &mask);
if (status & ~mask) {
add_error_device(e_info, dev);
goto added;
}
} else {
pci_read_config_dword(dev,
pos + PCI_ERR_UNCOR_STATUS,
&status);
pci_read_config_dword(dev,
pos + PCI_ERR_UNCOR_MASK,
&mask);
if (status & ERR_UNCORRECTABLE_ERROR_MASK & ~mask) {
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
if (status & ~mask) {
add_error_device(e_info, dev);
goto added;
}
......@@ -259,7 +254,7 @@ static int find_device_iter(struct pci_dev *dev, void *data)
return 0;
added:
if (e_info->flags & AER_MULTI_ERROR_VALID_FLAG)
if (e_info->multi_error_valid)
return 0;
else
return 1;
......@@ -411,8 +406,7 @@ static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
pci_cleanup_aer_uncorrect_error_status(dev);
dev->error_state = pci_channel_io_normal;
}
}
else {
} else {
/*
* If the error is reported by an end point, we think this
* error is related to the upstream link of the end point.
......@@ -473,7 +467,7 @@ static pci_ers_result_t reset_link(struct pcie_device *aerdev,
if (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE)
udev = dev;
else
udev= dev->bus->self;
udev = dev->bus->self;
data.is_downstream = 0;
data.aer_driver = NULL;
......@@ -576,7 +570,7 @@ static pci_ers_result_t do_recovery(struct pcie_device *aerdev,
*
 * Invoked when an error is detected by the Root Port.
*/
static void handle_error_source(struct pcie_device * aerdev,
static void handle_error_source(struct pcie_device *aerdev,
struct pci_dev *dev,
struct aer_err_info *info)
{
......@@ -682,7 +676,7 @@ static void disable_root_aer(struct aer_rpc *rpc)
*
* Invoked by DPC handler to consume an error.
*/
static struct aer_err_source* get_e_source(struct aer_rpc *rpc)
static struct aer_err_source *get_e_source(struct aer_rpc *rpc)
{
struct aer_err_source *e_source;
unsigned long flags;
......@@ -702,32 +696,50 @@ static struct aer_err_source* get_e_source(struct aer_rpc *rpc)
return e_source;
}
/**
* get_device_error_info - read error status from dev and store it to info
* @dev: pointer to the device expected to have a error record
* @info: pointer to structure to store the error record
*
* Return 1 on success, 0 on error.
*/
static int get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
{
int pos;
int pos, temp;
info->status = 0;
info->tlp_header_valid = 0;
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
/* The device might not support AER */
if (!pos)
return AER_SUCCESS;
return 1;
if (info->severity == AER_CORRECTABLE) {
pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS,
&info->status);
if (!(info->status & ERR_CORRECTABLE_ERROR_MASK))
return AER_UNSUCCESS;
pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK,
&info->mask);
if (!(info->status & ~info->mask))
return 0;
} else if (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE ||
info->severity == AER_NONFATAL) {
/* Link is still healthy for IO reads */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS,
&info->status);
if (!(info->status & ERR_UNCORRECTABLE_ERROR_MASK))
return AER_UNSUCCESS;
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK,
&info->mask);
if (!(info->status & ~info->mask))
return 0;
/* Get First Error Pointer */
pci_read_config_dword(dev, pos + PCI_ERR_CAP, &temp);
info->first_error = PCI_ERR_CAP_FEP(temp);
if (info->status & AER_LOG_TLP_MASKS) {
info->flags |= AER_TLP_HEADER_VALID_FLAG;
info->tlp_header_valid = 1;
pci_read_config_dword(dev,
pos + PCI_ERR_HEADER_LOG, &info->tlp.dw0);
pci_read_config_dword(dev,
......@@ -739,7 +751,7 @@ static int get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
}
}
return AER_SUCCESS;
return 1;
}
static inline void aer_process_err_devices(struct pcie_device *p_device,
......@@ -753,14 +765,14 @@ static inline void aer_process_err_devices(struct pcie_device *p_device,
e_info->id);
}
/* Report all errors before handling them, so records are not lost to a reset etc. */
for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
if (get_device_error_info(e_info->dev[i], e_info) ==
AER_SUCCESS) {
if (get_device_error_info(e_info->dev[i], e_info))
aer_print_error(e_info->dev[i], e_info);
handle_error_source(p_device,
e_info->dev[i],
e_info);
}
}
for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
if (get_device_error_info(e_info->dev[i], e_info))
handle_error_source(p_device, e_info->dev[i], e_info);
}
}
......@@ -806,7 +818,9 @@ static void aer_isr_one_error(struct pcie_device *p_device,
if (e_src->status &
(PCI_ERR_ROOT_MULTI_COR_RCV |
PCI_ERR_ROOT_MULTI_UNCOR_RCV))
e_info->flags |= AER_MULTI_ERROR_VALID_FLAG;
e_info->multi_error_valid = 1;
aer_print_port_info(p_device->port, e_info);
find_source_device(p_device->port, e_info);
aer_process_err_devices(p_device, e_info);
......@@ -863,10 +877,5 @@ int aer_init(struct pcie_device *dev)
if (aer_osc_setup(dev) && !forceload)
return -ENXIO;
return AER_SUCCESS;
return 0;
}
EXPORT_SYMBOL_GPL(pci_enable_pcie_error_reporting);
EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting);
EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status);
......@@ -27,69 +27,70 @@
#define AER_AGENT_COMPLETER 2
#define AER_AGENT_TRANSMITTER 3
#define AER_AGENT_REQUESTER_MASK (PCI_ERR_UNC_COMP_TIME| \
PCI_ERR_UNC_UNSUP)
#define AER_AGENT_COMPLETER_MASK PCI_ERR_UNC_COMP_ABORT
#define AER_AGENT_TRANSMITTER_MASK(t, e) (e & (PCI_ERR_COR_REP_ROLL| \
((t == AER_CORRECTABLE) ? PCI_ERR_COR_REP_TIMER: 0)))
#define AER_AGENT_REQUESTER_MASK(t) ((t == AER_CORRECTABLE) ? \
0 : (PCI_ERR_UNC_COMP_TIME|PCI_ERR_UNC_UNSUP))
#define AER_AGENT_COMPLETER_MASK(t) ((t == AER_CORRECTABLE) ? \
0 : PCI_ERR_UNC_COMP_ABORT)
#define AER_AGENT_TRANSMITTER_MASK(t) ((t == AER_CORRECTABLE) ? \
(PCI_ERR_COR_REP_ROLL|PCI_ERR_COR_REP_TIMER) : 0)
#define AER_GET_AGENT(t, e) \
((e & AER_AGENT_COMPLETER_MASK) ? AER_AGENT_COMPLETER : \
(e & AER_AGENT_REQUESTER_MASK) ? AER_AGENT_REQUESTER : \
(AER_AGENT_TRANSMITTER_MASK(t, e)) ? AER_AGENT_TRANSMITTER : \
((e & AER_AGENT_COMPLETER_MASK(t)) ? AER_AGENT_COMPLETER : \
(e & AER_AGENT_REQUESTER_MASK(t)) ? AER_AGENT_REQUESTER : \
(e & AER_AGENT_TRANSMITTER_MASK(t)) ? AER_AGENT_TRANSMITTER : \
AER_AGENT_RECEIVER)
#define AER_PHYSICAL_LAYER_ERROR_MASK PCI_ERR_COR_RCVR
#define AER_DATA_LINK_LAYER_ERROR_MASK(t, e) \
(PCI_ERR_UNC_DLP| \
PCI_ERR_COR_BAD_TLP| \
PCI_ERR_COR_BAD_DLLP| \
PCI_ERR_COR_REP_ROLL| \
((t == AER_CORRECTABLE) ? \
PCI_ERR_COR_REP_TIMER: 0))
#define AER_PHYSICAL_LAYER_ERROR 0
#define AER_DATA_LINK_LAYER_ERROR 1
#define AER_TRANSACTION_LAYER_ERROR 2
#define AER_GET_LAYER_ERROR(t, e) \
((e & AER_PHYSICAL_LAYER_ERROR_MASK) ? \
AER_PHYSICAL_LAYER_ERROR : \
(e & AER_DATA_LINK_LAYER_ERROR_MASK(t, e)) ? \
AER_DATA_LINK_LAYER_ERROR : \
AER_TRANSACTION_LAYER_ERROR)
#define AER_PHYSICAL_LAYER_ERROR_MASK(t) ((t == AER_CORRECTABLE) ? \
PCI_ERR_COR_RCVR : 0)
#define AER_DATA_LINK_LAYER_ERROR_MASK(t) ((t == AER_CORRECTABLE) ? \
(PCI_ERR_COR_BAD_TLP| \
PCI_ERR_COR_BAD_DLLP| \
PCI_ERR_COR_REP_ROLL| \
PCI_ERR_COR_REP_TIMER) : PCI_ERR_UNC_DLP)
#define AER_GET_LAYER_ERROR(t, e) \
((e & AER_PHYSICAL_LAYER_ERROR_MASK(t)) ? AER_PHYSICAL_LAYER_ERROR : \
(e & AER_DATA_LINK_LAYER_ERROR_MASK(t)) ? AER_DATA_LINK_LAYER_ERROR : \
AER_TRANSACTION_LAYER_ERROR)
#define AER_PR(info, pdev, fmt, args...) \
printk("%s%s %s: " fmt, (info->severity == AER_CORRECTABLE) ? \
KERN_WARNING : KERN_ERR, dev_driver_string(&pdev->dev), \
dev_name(&pdev->dev), ## args)
/*
* AER error strings
*/
static char* aer_error_severity_string[] = {
static char *aer_error_severity_string[] = {
"Uncorrected (Non-Fatal)",
"Uncorrected (Fatal)",
"Corrected"
};
static char* aer_error_layer[] = {
static char *aer_error_layer[] = {
"Physical Layer",
"Data Link Layer",
"Transaction Layer"
};
static char* aer_correctable_error_string[] = {
"Receiver Error ", /* Bit Position 0 */
static char *aer_correctable_error_string[] = {
"Receiver Error ", /* Bit Position 0 */
NULL,
NULL,
NULL,
NULL,
NULL,
"Bad TLP ", /* Bit Position 6 */
"Bad DLLP ", /* Bit Position 7 */
"RELAY_NUM Rollover ", /* Bit Position 8 */
"Bad TLP ", /* Bit Position 6 */
"Bad DLLP ", /* Bit Position 7 */
"RELAY_NUM Rollover ", /* Bit Position 8 */
NULL,
NULL,
NULL,
"Replay Timer Timeout ", /* Bit Position 12 */
"Advisory Non-Fatal ", /* Bit Position 13 */
"Replay Timer Timeout ", /* Bit Position 12 */
"Advisory Non-Fatal ", /* Bit Position 13 */
NULL,
NULL,
NULL,
......@@ -110,7 +111,7 @@ static char* aer_correctable_error_string[] = {
NULL,
};
static char* aer_uncorrectable_error_string[] = {
static char *aer_uncorrectable_error_string[] = {
NULL,
NULL,
NULL,
......@@ -123,10 +124,10 @@ static char* aer_uncorrectable_error_string[] = {
NULL,
NULL,
NULL,
"Poisoned TLP ", /* Bit Position 12 */
"Poisoned TLP ", /* Bit Position 12 */
"Flow Control Protocol ", /* Bit Position 13 */
"Completion Timeout ", /* Bit Position 14 */
"Completer Abort ", /* Bit Position 15 */
"Completion Timeout ", /* Bit Position 14 */
"Completer Abort ", /* Bit Position 15 */
"Unexpected Completion ", /* Bit Position 16 */
"Receiver Overflow ", /* Bit Position 17 */
"Malformed TLP ", /* Bit Position 18 */
......@@ -145,98 +146,69 @@ static char* aer_uncorrectable_error_string[] = {
NULL,
};
static char* aer_agent_string[] = {
static char *aer_agent_string[] = {
"Receiver ID",
"Requester ID",
"Completer ID",
"Transmitter ID"
};
static char * aer_get_error_source_name(int severity,
unsigned int status,
char errmsg_buff[])
static void __aer_print_error(struct aer_err_info *info, struct pci_dev *dev)
{
int i;
char * errmsg = NULL;
int i, status;
char *errmsg = NULL;
status = (info->status & ~info->mask);
for (i = 0; i < 32; i++) {
if (!(status & (1 << i)))
continue;
if (severity == AER_CORRECTABLE)
if (info->severity == AER_CORRECTABLE)
errmsg = aer_correctable_error_string[i];
else
errmsg = aer_uncorrectable_error_string[i];
if (!errmsg) {
sprintf(errmsg_buff, "Unknown Error Bit %2d ", i);
errmsg = errmsg_buff;
}
break;
if (errmsg)
AER_PR(info, dev, " [%2d] %s%s\n", i, errmsg,
info->first_error == i ? " (First)" : "");
else
AER_PR(info, dev, " [%2d] Unknown Error Bit%s\n", i,
info->first_error == i ? " (First)" : "");
}
return errmsg;
}
static DEFINE_SPINLOCK(logbuf_lock);
static char errmsg_buff[100];
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
{
char * errmsg;
int err_layer, agent;
char * loglevel;
if (info->severity == AER_CORRECTABLE)
loglevel = KERN_WARNING;
else
loglevel = KERN_ERR;
printk("%s+------ PCI-Express Device Error ------+\n", loglevel);
printk("%sError Severity\t\t: %s\n", loglevel,
aer_error_severity_string[info->severity]);
if ( info->status == 0) {
printk("%sPCIE Bus Error type\t: (Unaccessible)\n", loglevel);
printk("%sUnaccessible Received\t: %s\n", loglevel,
info->flags & AER_MULTI_ERROR_VALID_FLAG ?
"Multiple" : "First");
printk("%sUnregistered Agent ID\t: %04x\n", loglevel,
(dev->bus->number << 8) | dev->devfn);
int id = ((dev->bus->number << 8) | dev->devfn);
if (info->status == 0) {
AER_PR(info, dev,
"PCIE Bus Error: severity=%s, type=Unaccessible, "
"id=%04x(Unregistered Agent ID)\n",
aer_error_severity_string[info->severity], id);
} else {
err_layer = AER_GET_LAYER_ERROR(info->severity, info->status);
printk("%sPCIE Bus Error type\t: %s\n", loglevel,
aer_error_layer[err_layer]);
spin_lock(&logbuf_lock);
errmsg = aer_get_error_source_name(info->severity,
info->status,
errmsg_buff);
printk("%s%s\t: %s\n", loglevel, errmsg,
info->flags & AER_MULTI_ERROR_VALID_FLAG ?
"Multiple" : "First");
spin_unlock(&logbuf_lock);
int layer, agent;
layer = AER_GET_LAYER_ERROR(info->severity, info->status);
agent = AER_GET_AGENT(info->severity, info->status);
printk("%s%s\t\t: %04x\n", loglevel,
aer_agent_string[agent],
(dev->bus->number << 8) | dev->devfn);
printk("%sVendorID=%04xh, DeviceID=%04xh,"
" Bus=%02xh, Device=%02xh, Function=%02xh\n",
loglevel,
dev->vendor,
dev->device,
dev->bus->number,
PCI_SLOT(dev->devfn),
PCI_FUNC(dev->devfn));
if (info->flags & AER_TLP_HEADER_VALID_FLAG) {
AER_PR(info, dev,
"PCIE Bus Error: severity=%s, type=%s, id=%04x(%s)\n",
aer_error_severity_string[info->severity],
aer_error_layer[layer], id, aer_agent_string[agent]);
AER_PR(info, dev,
" device [%04x:%04x] error status/mask=%08x/%08x\n",
dev->vendor, dev->device, info->status, info->mask);
__aer_print_error(info, dev);
if (info->tlp_header_valid) {
unsigned char *tlp = (unsigned char *) &info->tlp;
printk("%sTLP Header:\n", loglevel);
printk("%s%02x%02x%02x%02x %02x%02x%02x%02x"
AER_PR(info, dev, " TLP Header:"
" %02x%02x%02x%02x %02x%02x%02x%02x"
" %02x%02x%02x%02x %02x%02x%02x%02x\n",
loglevel,
*(tlp + 3), *(tlp + 2), *(tlp + 1), *tlp,
*(tlp + 7), *(tlp + 6), *(tlp + 5), *(tlp + 4),
*(tlp + 11), *(tlp + 10), *(tlp + 9),
......@@ -244,5 +216,15 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
*(tlp + 13), *(tlp + 12));
}
}
if (info->id && info->error_dev_num > 1 && info->id == id)
AER_PR(info, dev,
" Error of this Agent(%04x) is reported first\n", id);
}
void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
{
dev_info(&dev->dev, "AER: %s%s error received: id=%04x\n",
info->multi_error_valid ? "Multiple " : "",
aer_error_severity_string[info->severity], info->id);
}
......@@ -205,6 +205,7 @@ static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
/* If fatal, restore cfg space for possible link reset at upstream */
if (dev->error_state == pci_channel_io_frozen) {
dev->state_saved = true;
pci_restore_state(dev);
pcie_portdrv_restore_config(dev);
pci_enable_pcie_error_reporting(dev);
......
......@@ -119,6 +119,7 @@ int pci_claim_resource(struct pci_dev *dev, int resource)
return err;
}
EXPORT_SYMBOL(pci_claim_resource);
#ifdef CONFIG_PCI_QUIRKS
void pci_disable_bridge_window(struct pci_dev *dev)
......