- 18 Jul, 2008 5 commits
-
-
Dan Williams authored
All callers of async_tx_sync_epilog have called async_tx_quiesce on the depend_tx, so async_tx_sync_epilog need only call the callback to complete the operation. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Replace open coded "wait and acknowledge" instances with async_tx_quiesce. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
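For reference, the open-coded idiom being replaced looks roughly like this; it is a sketch of the pattern, not a copy of any particular call site:

    /* open-coded "wait and acknowledge" on the parent transaction */
    if (depend_tx) {
        /* wait for the dependency to complete... */
        if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
            panic("%s: DMA_ERROR waiting for depend_tx\n", __func__);
        /* ...then acknowledge it so the driver can recycle the descriptor */
        async_tx_ack(depend_tx);
    }

async_tx_quiesce() centralizes this wait-then-ack sequence in one helper.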
-
Dan Williams authored
Ensure forward progress is made when a dmaengine driver is unable to allocate an xor descriptor by breaking the dependency chain with async_tx_quiesce() and issuing any pending descriptors. Tested with iop-adma by setting device->max_xor = 2 to force multiple calls to device_prep_dma_xor for each call to async_xor and limiting the descriptor slot pool to 5. Discovered that the minimum descriptor pool size for iop-adma is 2 * iop_chan_xor_slot_cnt(device->max_xor) + 1. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
When the number of source buffers for an xor operation exceeds the hardware channel maximum, async_xor creates a chain of dependent operations. The result of one operation is reused as an input to the next to continue the xor calculation. The destination buffer should remain mapped for the duration of the entire chain. To provide this guarantee the code must no longer be allowed to fall back to the synchronous path, as this will preclude the buffer from being unmapped, i.e. the dma-driver will potentially miss the descriptor with !DMA_COMPL_SKIP_DEST_UNMAP. Cc: Neil Brown <neilb@suse.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Li Zefan authored
On the RCU update side, don't use list_for_each_entry_rcu(). Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 08 Jul, 2008 12 commits
-
-
Haavard Skinnemoen authored
This adds a driver for the Synopsys DesignWare DMA controller (aka DMACA on AVR32 systems). This DMA controller can be found integrated on the AT32AP7000 chip and is primarily meant for peripheral DMA transfers, but can also be used for memory-to-memory transfers.

This patch is based on a driver from David Brownell which was based on an older version of the DMA Engine framework. It also implements the proposed extensions to the DMA Engine API for slave DMA operations.

The dmatest client shows no problems, but there may still be room for improvement performance-wise. DMA slave transfer performance is definitely "good enough"; reading 100 MiB from an SD card running at ~20 MHz yields ~7.2 MiB/s average transfer rate.

Full documentation for this controller can be found in the Synopsys DW AHB DMAC Databook:
http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf

The controller has lots of implementation options, so it's usually a good idea to check the data sheet of the chip it's integrated on as well. The AT32AP7000 data sheet can be found here:
http://www.atmel.com/dyn/products/datasheets.asp?family_id=682

Changes since v4:
* Use client_count instead of dma_chan_is_in_use()
* Add missing include
* Unmap buffers unless client told us not to

Changes since v3:
* Update to latest DMA engine and DMA slave APIs
* Embed the hw descriptor into the sw descriptor
* Clean up and update MODULE_DESCRIPTION, copyright date, etc.

Changes since v2:
* Dequeue all pending transfers in terminate_all()
* Rename dw_dmac.h -> dw_dmac_regs.h
* Define and use controller-specific dma_slave data
* Fix up a few outdated comments
* Define hardware registers as structs (doesn't generate better code, unfortunately, but it looks nicer)
* Get number of channels from platform_data instead of hardcoding it based on CONFIG_WHATEVER_CPU
* Give slave clients exclusive access to the channel

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Haavard Skinnemoen authored
This patch adds the necessary interfaces to the DMA Engine framework to use functionality found on most embedded DMA controllers: DMA from and to I/O registers with hardware handshaking.

In this context, hardware handshaking means that the peripheral that owns the I/O registers in question is able to tell the DMA controller when more data is available for reading, or when there is room for more data to be written. This usually happens internally on the chip, but these signals may also be exported outside the chip for things like IDE DMA, etc.

A new struct dma_slave is introduced. This contains information that the DMA engine driver needs to set up slave transfers to and from a slave device. Most engines supporting DMA slave transfers will want to extend this structure with controller-specific parameters. This additional information is usually passed from the platform/board code through the client driver.

A "slave" pointer is added to the dma_client struct. This must point to a valid dma_slave structure iff the DMA_SLAVE capability is requested. The DMA engine driver may use this information in its device_alloc_chan_resources hook to configure the DMA controller for slave transfers from and to the given slave device.

A new operation for preparing slave DMA transfers is added to struct dma_device. This takes a scatterlist and returns a single descriptor representing the whole transfer. Another new operation for terminating all pending transfers is added as well. The latter is needed because there may be errors outside the scope of the DMA Engine framework that may require DMA operations to be terminated prematurely.

DMA Engine drivers may extend the dma_device, dma_chan and/or dma_slave_descriptor structures to allow controller-specific operations. The client driver can detect such extensions by looking at the DMA Engine's struct device, or it can request a specific DMA Engine device by setting the dma_dev field in struct dma_slave.

dmaslave interface changes since v4:
* Fix checkpatch errors
* Fix changelog (there are no slave descriptors anymore)

dmaslave interface changes since v3:
* Use dma_data_direction instead of a new enum
* Submit slave transfers as scatterlists
* Remove the DMA slave descriptor struct

dmaslave interface changes since v2:
* Add a dma_dev field to struct dma_slave. If set, the client can only be bound to the DMA controller that corresponds to this device. This allows controller-specific extensions of the dma_slave structure; if the device matches, the controller may safely assume its extensions are present.
* Move reg_width into struct dma_slave as there are currently no users that need to be able to set the width on a per-transfer basis.

dmaslave interface changes since v1:
* Drop the set_direction and set_width descriptor hooks. Pass the direction and width to the prep function instead.
* Declare a dma_slave struct with fixed information about a slave, i.e. register addresses, handshake interfaces and such.
* Add a pointer to a dma_slave struct to dma_client. Can be NULL if the DMA_SLAVE capability isn't requested.
* Drop the set_slave device hook since the alloc_chan_resources hook now has enough information to set up the channel for slave transfers.

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
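A minimal sketch of the client-side setup described above; the field and call names below are assumed from this description rather than quoted from the headers:

    /* hypothetical slave client: describe the slave device, then request
     * a channel with the DMA_SLAVE capability */
    struct dma_slave slave = {
        .dma_dev = &pdev->dev,    /* optional: bind only to this controller */
    };

    client->slave = &slave;       /* must be valid iff DMA_SLAVE is requested */
    dma_cap_set(DMA_SLAVE, client->cap_mask);
    dma_async_client_register(client);

The DMA engine driver can then inspect client->slave in its device_alloc_chan_resources hook, as noted above.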
-
Dan Williams authored
In some cases client code may need the dma-driver to skip the unmap of source and/or destination buffers. Setting these flags tells the driver to skip the unmap step. In this regard async_xor is currently broken in that it allows the destination buffer to be unmapped while an operation is still in progress, i.e. when the number of sources exceeds the hardware channel's maximum (fixed in a subsequent patch). Acked-by: Saeed Bishara <saeed@marvell.com> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
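A sketch of how a client might pass the new flags when preparing an operation; the prep call shown is illustrative, and only DMA_COMPL_SKIP_DEST_UNMAP is named in the surrounding entries:

    /* keep the xor destination mapped: a dependent operation in the chain
     * still needs it, so the client takes over unmapping it later */
    unsigned long flags = DMA_CTRL_ACK | DMA_COMPL_SKIP_DEST_UNMAP;

    tx = device->device_prep_dma_xor(chan, dma_dest, dma_src, src_cnt,
                                     len, flags);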
-
Haavard Skinnemoen authored
A DMA controller capable of doing slave transfers may need to know a few things about the slave when preparing the channel. We don't want to add this information to struct dma_channel since the channel hasn't yet been bound to a client at this point. Instead, pass a reference to the client requesting the channel to the driver's device_alloc_chan_resources hook so that it can pick the necessary information from the dma_client struct by itself. [dan.j.williams@intel.com: fixed up fsldma and mv_xor] Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Haavard Skinnemoen authored
This client tests DMA memcpy using various lengths and various offsets into the source and destination buffers. It will initialize both buffers with a repeatable pattern and verify that the DMA engine copies the requested region and nothing more. It will also verify that the bytes aren't swapped around, and that the source buffer isn't modified.

The dmatest module can be configured to test a specific device or a specific channel. It can also test multiple channels at the same time, and it can start multiple threads competing for the same channel.

Changes since v2:
* Support testing multiple channels at the same time
* Support testing with multiple threads competing for the same channel
* Use counting test patterns in order to catch byte ordering issues

Changes since v1:
* Remove extra dashes around "help"
* Remove "default n" from Kconfig
* Turn TEST_BUF_SIZE into a module parameter
* Return DMA_NAK instead of DMA_DUP
* Print unhandled events
* Support testing specific channels and devices
* Move to the end of the Makefile

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Saeed Bishara authored
The XOR engine found in Marvell's SoCs and system controllers provides XOR and DMA operations, iSCSI CRC32C calculation, memory initialization, and memory ECC error cleanup support.

This driver implements the DMA engine API and supports the following capabilities:
- memcpy
- xor
- memset

The XOR engine can be used by DMA engine clients implemented in the kernel; one of those clients is the RAID module. In that case, I observed a 20% improvement in raid5 write throughput and a 40% decrease in CPU utilization when doing array construction; those results were obtained on a 5182 running at 500 MHz. When enabling the NET DMA client, the performance decreased, so for now it is recommended to keep this client off.

Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Kay Sievers authored
Since 43cc71ee, the platform modalias is prefixed with "platform:". Add MODULE_ALIAS() to most of the hotpluggable platform drivers, to re-enable auto loading. Cc: <stable@kernel.org> Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
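The per-driver change is a one-liner; "foo-bar" below is a placeholder for a driver's platform device name:

    /* matches the "platform:foo-bar" modalias emitted since 43cc71ee,
     * so udev/modprobe can autoload the module again */
    MODULE_ALIAS("platform:foo-bar");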
-
Dan Williams authored
Haavard's dma-slave interface would like to test for exclusive access to a channel. The standard channel refcounting is not sufficient in that it tracks more than just client references; it is also inaccurate, as reference counts are per-cpu until the channel is removed. This change also enables a future fix to deallocate resources when a client declines to use a capable channel. Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
The dependency is redundant since all drivers set their specific arch dependencies. The NET_DMA option is modified to be enabled only on platforms where it is known to have a positive effect. HAS_DMA is added as an explicit dependency for the DMADEVICES menu. Acked-by: Adrian Bunk <bunk@kernel.org> Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Haavard Skinnemoen authored
Set the 'parent' field of channel class devices to point to the physical DMA device initialized by the DMA engine driver. This allows drivers to use chan->dev.parent for syncing DMA buffers and adds a 'device' symlink to the real device in /sys/class/dma/dmaXchanY. Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
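In dma_async_device_register() this presumably boils down to a single assignment along these lines (a sketch):

    /* parent the channel's class device to the real DMA device so clients
     * can hand chan->dev.parent to the DMA mapping API */
    chan->dev.parent = device->dev;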
-
Dan Williams authored
commit 636bdeaa 'dmaengine: ack to flags: make use of the unused bits in the 'ack' field' missed an ->ack conversion in crypto/async_tx/async_memset.c Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Li Yang authored
Signed-off-by: Li Yang <leoli@freescale.com> Acked-by: Zhang Wei <zw@zh-kernel.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 28 May, 2008 13 commits
-
-
David Howells authored
> +#define ARCH_KMALLOC_MINALIGN (sizeof(long) * 2)
> +#define ARCH_SLAB_MINALIGN (sizeof(long) * 2)

This doesn't work if SLAB is selected and slab debugging is enabled, as these are passed to the preprocessor, and the preprocessor doesn't understand sizeof.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
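The fix presumably switches to plain integer constants that the preprocessor can evaluate, roughly:

    /* 8 bytes keeps load-/store-double operands aligned and can be used
     * in #if expressions, unlike (sizeof(long) * 2) */
    #define ARCH_KMALLOC_MINALIGN    8
    #define ARCH_SLAB_MINALIGN       8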
-
git://git.kernel.dk/linux-2.6-block
Linus Torvalds authored
* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
  cfq-iosched: fix RCU problem in cfq_cic_lookup()
  block: make blktrace use per-cpu buffers for message notes
  Added in elevator switch message to blktrace stream
  Added in MESSAGE notes for blktraces
  block: reorder cfq_queue to save space on 64bit builds
  block: Move the second call to get_request to the end of the loop
  splice: handle try_to_release_page() failure
  splice: fix sendfile() issue with relay
-
David Howells authored
Specify the minimum slab/kmalloc alignment to be 8 bytes. This fixes a crash when SLOB is selected as the memory allocator. The FRV arch needs this so that it can use the load- and store-double instructions without faulting. By default SLOB sets the minimum to be 4 bytes. Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Vegard Nossum authored
Fix a typo in the header guard of asm/ipc.h. Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
cfq_cic_lookup() needs to properly protect ioc->ioc_data before dereferencing it and also exclude updaters of ioc->ioc_data as well. Also add a number of comments documenting why the existing RCU usage is OK. Thanks a lot to "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> for review and comments! Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
Currently it uses a single static char array, but that risks being corrupted when multiple users issue message notes at the same time. Make the buffers dynamically allocated when the trace is set up and make them per-cpu instead. The default max message size of 1k is also very large; the interface is mainly for small text notes, so shrink it to 128 bytes. Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
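A sketch of the setup-time allocation this implies; the msg_data field name and the single-argument __alloc_percpu() call are assumptions based on the description, not quoted from the patch:

    /* one small message buffer per CPU instead of one shared static array */
    bt->msg_data = __alloc_percpu(BLK_TN_MAX_MSG);    /* 128 bytes per CPU */
    if (!bt->msg_data)
        goto err;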
-
Alan D. Brunelle authored
Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Alan D. Brunelle authored
Allows messages to be inserted into blktrace streams. Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Richard Kennedy authored
Saves 8 bytes of padding and increases objects/slab from 30 to 32 on my AMD64 config. Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Zhang, Yanmin authored
In function get_request_wait, the second call to get_request could be moved to the end of the while loop, because if the first call to get_request fails, the second call will also fail without sleeping. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
splice currently assumes that try_to_release_page() always succeeds, but it can return failure. If it does, we cannot steal the page. Acked-by: Mingming Cao <cmm@us.ibm.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
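The shape of the fix is presumably a return-value check along these lines (sketch; the error label is hypothetical):

    /* try_to_release_page() may refuse to drop the page's private data;
     * if so, the page cannot be stolen */
    if (!try_to_release_page(page, GFP_KERNEL))
        goto out_unlock;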
-
Tom Zanussi authored
Splice isn't always incrementing the ppos correctly, which broke relay splice. Signed-off-by: Tom Zanussi <zanussi@comcast.net> Tested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6:
  pciehp: add message about pciehp_slot_with_bus option
  pci hotplug core: add check of duplicate slot name
  pciehp: move msleep after power off
  pciehp: poll cmd completion if hotplug interrupt is disabled
  pciehp: fix slow probing
  pciehp: fix NULL dereference in interrupt handler
  shpchp: add message about shpchp_slot_with_bus option
  PCI: don't enable ASPM on devices with mixed PCIe/PCI functions
-
- 27 May, 2008 10 commits
-
-
Kenji Kaneshige authored
Some (broken?) platforms assign the same slot name to multiple hotplug slots. On such systems, slot initialization would fail because of the name collision. The pciehp driver already has a "slot_with_bus" module option which adds the bus number into the slot name. This patch adds a message about this module option that will be displayed when a slot name collision is detected. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
Fix the following errors reported by Jan C. Nordholz in http://bugzilla.kernel.org/show_bug.cgi?id=10751.

kobject_add_internal failed for 2 with -EEXIST, don't try to register things with the same name in the same directory.
Pid: 1, comm: swapper Tainted: G W 2.6.26-rc3 #1
 [<c0266980>] kobject_add_internal+0x140/0x190
 [<c0266afd>] kobject_init_and_add+0x2d/0x40
 [<c027bc91>] pci_hp_register+0x81/0x2f0
 [<c027fd07>] pciehp_probe+0x1a7/0x470
 [<c01b3b84>] sysfs_add_one+0x44/0xa0
 [<c01b3c1f>] sysfs_addrm_start+0x3f/0xb0
 [<c01b497a>] sysfs_create_link+0x8a/0xf0
 [<c0279570>] pcie_port_probe_service+0x50/0x80
 [<c02e0545>] driver_sysfs_add+0x55/0x70
 [<c02e0662>] driver_probe_device+0x82/0x180
 [<c02e07cc>] __driver_attach+0x6c/0x70
 [<c02dfe0a>] bus_for_each_dev+0x3a/0x60
 [<c05db2d0>] pcied_init+0x0/0x80
 [<c02e04e6>] driver_attach+0x16/0x20
 [<c02e0760>] __driver_attach+0x0/0x70
 [<c02e0341>] bus_add_driver+0x1a1/0x220
 [<c05db2d0>] pcied_init+0x0/0x80
 [<c02e09cd>] driver_register+0x4d/0x120
 [<c05db050>] ibm_acpiphp_init+0x0/0x190
 [<c0125aab>] printk+0x1b/0x20
 [<c05db2d0>] pcied_init+0x0/0x80
 [<c05db2de>] pcied_init+0xe/0x80
 [<c05c751a>] kernel_init+0x10a/0x300
 [<c0120138>] schedule_tail+0x18/0x50
 [<c0103b9a>] ret_from_fork+0x6/0x1c
 [<c05c7410>] kernel_init+0x0/0x300
 [<c05c7410>] kernel_init+0x0/0x300
 [<c010485b>] kernel_thread_helper+0x7/0x1c
 =======================
pci_hotplug: Unable to register kobject '2'<3>pciehp: pci_hp_register failed with error -22

A slot with the same name can be registered multiple times if the shpchp or pciehp driver is loaded after acpiphp is loaded, because the ACPI based hotplug driver and the native OS hotplug driver try to handle the same physical slot. In this case, the current pci_hotplug core will call kobject_init_and_add() multiple times with the same name. This is the cause of the problem. To fix it, this patch adds a check to pci_hp_register() to see whether a slot with the same name is already registered.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
According to the PCI Express specification, we must wait for at least 1 second after turning power off before taking any action that relies on power having been removed from the slot/adapter. For this, the current pciehp waits for 1 second after issuing the power off command in the hpc_power_off_slot() function. But waiting for 1 second in hpc_power_off_slot() can slow down pciehp probing, because the pciehp probe code calls hpc_power_off_slot() just in case even when the slot is not occupied. We don't need to wait for 1 second at pciehp probe time because no action is taken on that empty slot. So move the 1 second wait from hpc_power_off_slot() to its caller. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
Fix an improperly long wait for command completion in pciehp probing. As described in the PCI Express specification, software notification is not generated for a command (a write to the Slot Control register) that disables software notification of command completed events. Since the pciehp driver doesn't take this into account, such a command is issued during pciehp probing, and it causes an improperly long wait for command completion. This patch changes the pciehp driver to take such commands into account. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
Fix the "pciehp probing slow" problem reported from Jan C. Nordholz in http://bugzilla.kernel.org/show_bug.cgi?id=10751. The command completed bit in Slot Status register applies only to commands issued to control the attention indicator, power indicator, power controller, or electromechanical interlock. However, writes to other parts of the Slot Control register would end up writing to the control fields. Hence, any write to Slot Control register is considered as a command. However, if the controller doesn't support any of attention indicator, power indicator, power controller and electromechanical interlock, command completed bit would not set in writing to Slot Control register. In this case, we should not wait for command completed bit set, otherwise all commands would be considered not completed in timeout seconds (1 sec.). The cause of the problem is pciehp driver didn't take this situation into account. This patch changes pciehp to take it into account. This patch also add the check for "No Command Completed Support" bit in Slot Capability register. If it is set, we should not wait for command completed bit set as well. This problem seems to be revealed by the commit c27fb883 that fixed the bug that pciehp did not wait for command completed properly (pciehp just ignored the command completion event). Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
Fix the following NULL dereference problem reported from Pierre Ossman and Ingo Molnar.

pciehp: HPC vendor_id 8086 device_id 27d0 ss_vid 0 ss_did 0
pciehp: pciehp_find_slot: slot (device=0x0) not found
BUG: unable to handle kernel NULL pointer dereference at 0000000000000070
IP: [<ffffffff80494a8b>] pciehp_handle_presence_change+0x7e/0x113
PGD 0
Oops: 0000 [1]
CPU 0
Modules linked in:
Pid: 1, comm: swapper Tainted: G W 2.6.26-rc3-sched-devel.git-00001-g2b99b26-dirty #170
RIP: 0010:[<ffffffff80494a8b>] [<ffffffff80494a8b>] pciehp_handle_presence_change+0x7e/0x113
RSP: 0000:ffff81003f83fbb0 EFLAGS: 00010046
RAX: 0000000000000039 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000046
RBP: ffff81003f83fbd0 R08: 0000000000000001 R09: ffffffff80245103
R10: 0000000000000020 R11: 0000000000000000 R12: ffff81003ea53a30
R13: 0000000000000000 R14: 0000000000000011 R15: ffffffff80495926
FS: 0000000000000000(0000) GS:ffffffff80be7400(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 0000000000000070 CR3: 0000000000201000 CR4: 00000000000006a0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 1, threadinfo ffff81003f83e000, task ffff81003f840000)
Stack: 0000000000000008 ffff81003f83fbf6 ffff81003ea53a30 0000000000000008
 ffff81003f83fc10 ffffffff80495ab4 0000000000000011 0000000000000002
 0000000000000202 0000000000000202 00000000fffffff4 ffff81003ea53a30
Call Trace:
 [<ffffffff80495ab4>] pcie_isr+0x18e/0x1bc
 [<ffffffff80260831>] request_irq+0x106/0x12f
 [<ffffffff80495fb6>] pcie_init+0x15e/0x6cc
 [<ffffffff804933a3>] pciehp_probe+0x64/0x541
 [<ffffffff8048f4e7>] pcie_port_probe_service+0x4c/0x76
 [<ffffffff8054af70>] driver_probe_device+0xd4/0x1f0
 [<ffffffff8054b108>] __driver_attach+0x7c/0x7e
 [<ffffffff8054b08c>] ? __driver_attach+0x0/0x7e
 [<ffffffff8054a4b6>] bus_for_each_dev+0x53/0x7d
 [<ffffffff8054ad3c>] driver_attach+0x1c/0x1e
 [<ffffffff8054a9c2>] bus_add_driver+0xdd/0x25b
 [<ffffffff80c09d3d>] ? pcied_init+0x0/0x8b
 [<ffffffff8054b288>] driver_register+0x5f/0x13e
 [<ffffffff80c09d3d>] ? pcied_init+0x0/0x8b
 [<ffffffff8048f441>] pcie_port_service_register+0x47/0x49
 [<ffffffff80c09d52>] pcied_init+0x15/0x8b
 [<ffffffff80bf3938>] kernel_init+0x75/0x243
 [<ffffffff808639d2>] ? _spin_unlock_irq+0x2b/0x3a
 [<ffffffff80228d1f>] ? finish_task_switch+0x57/0x9a
 [<ffffffff8020c258>] child_rip+0xa/0x12
 [<ffffffff8020bcec>] ? restore_args+0x0/0x30
 [<ffffffff80bf38c3>] ? kernel_init+0x0/0x243
 [<ffffffff8020c24e>] ? child_rip+0x0/0x12
Code: 83 80 00 00 00 48 39 f0 75 e1 0f b6 c9 48 c7 c2 00 0e 8d 80 48 c7 c6 8a 60 a6 80 48 c7 c7 10 db a8 80 31 c0 e8 3f 8d d9 ff 31 db <48> 8b 43 70 48 8d 75 ef 48 89 df ff 50 30 80 7d ef 00 74 37 48
RIP [<ffffffff80494a8b>] pciehp_handle_presence_change+0x7e/0x113
 RSP <ffff81003f83fbb0>
CR2: 0000000000000070
Kernel panic - not syncing: Fatal exception

The situation under which it occurs is hardware and timing related: it appears to happen on a system that has PCI hotplug hardware but no active hotplug cards, when another interrupt on the same (shared) IRQ line arrives too early, before the hotplug-slot entry has been set up - as triggered by CONFIG_DEBUG_SHIRQ=y.

This patch contains the following two fixes.
(1) Clear all event bits in the Slot Status register to prevent the pciehp driver from detecting spurious events that occurred before pciehp was loaded.
(2) Add a check of whether slot initialization has already been done. This is a short term fix; we need more structural fixes to install the interrupt handler after slot initialization is done.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Kenji Kaneshige authored
Some (broken?) platforms assign the same slot name to multiple hotplug slots. On such systems, slot initialization would fail because of the name collision. The shpchp driver already has a "slot_with_bus" module option which adds the bus number into the slot name. This patch adds a message about this module option that will be displayed when a slot name collision is detected. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
  avr32: Fix cpufreq oops when ondemand governor is default
  avr32: Update defconfigs
  avr32: export strnlen_user
  avr32: export copy_page
-
David Woodhouse authored
There's a reason why using C99 initialisers even in the supposedly trivial structs is a good idea. Signed-off-by: Carl-Daniel Hailfinger <c-d.hailfinger.devel.2006@gmx.net> Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
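The point in miniature, with a made-up struct rather than the one touched by this commit: positional initialisers silently mis-assign fields when the layout changes, designated initialisers do not.

    struct callbacks {
        void (*add)(int);
        void (*remove)(int);    /* imagine a new field inserted above this later */
    };

    static void my_add(int x) { }
    static void my_remove(int x) { }

    /* fragile: depends on field order */
    static struct callbacks a = { my_add, my_remove };

    /* robust: names the fields explicitly */
    static struct callbacks b = {
        .add    = my_add,
        .remove = my_remove,
    };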
-
Haavard Skinnemoen authored
Move the AP7 cpufreq init to late_initcall() so that we don't try to bring up cpufreq until the governor is ready. x86 also uses late_initcall() for this. Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
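A sketch of the initcall-level change; the function and driver names are placeholders, not the actual avr32 symbols:

    /* registering the cpufreq driver at late_initcall() ensures the default
     * (ondemand) governor has already registered, avoiding the oops */
    static int __init my_cpufreq_init(void)
    {
        return cpufreq_register_driver(&my_cpufreq_driver);    /* hypothetical */
    }
    late_initcall(my_cpufreq_init);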
-