Commit a0d3c7c5 authored by Linus Torvalds

Merge tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time round the update brings in following changes:

   - new tegra driver for ADMA device

   - support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI
     Central Direct Memory Access Engine and few updates to this driver

   - new cyclic capability to sun6i and few updates

   - slave-sg support in bcm2835

   - updates to many drivers like designware, hsu, mv_xor, pxa, edma,
     qcom_hidma & bam"

* tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (84 commits)
  dmaengine: ioatdma: disable relaxed ordering for ioatdma
  dmaengine: of_dma: approximate an average distribution
  dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
  dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
  dmaengine: qcom_hidma: add support for object hierarchy
  dmaengine: qcom_hidma: add debugfs hooks
  dmaengine: qcom_hidma: implement lower level hardware interface
  dmaengine: vdma: Add clock support
  Documentation: DT: vdma: Add clock support for dmas
  dmaengine: vdma: Add config structure to differentiate dmas
  MAINTAINERS: Update Tegra DMA maintainers
  dmaengine: tegra-adma: Add support for Tegra210 ADMA
  Documentation: DT: Add binding documentation for NVIDIA ADMA
  dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
  Documentation: DT: vdma: update binding doc for AXI CDMA
  dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  Documentation: DT: vdma: update binding doc for AXI DMA
  dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
  dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
  dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
  ...
parents ec67b14c f9114a54
What: /sys/devices/platform/hidma-*/chid
/sys/devices/platform/QCOM8061:*/chid
Date: Dec 2015
KernelVersion: 4.4
Contact: "Sinan Kaya <okaya@codeaurora.org>"
Description:
Contains the ID of the channel within the HIDMA instance.
It is used to associate a given HIDMA channel with the
priority and weight calls in the management interface.
...@@ -12,6 +12,10 @@ Required properties:
 - reg: Should contain DMA registers location and length.
 - interrupts: Should contain the DMA interrupts associated
   to the DMA channels in ascending order.
+- interrupt-names: Should contain the names of the interrupts
+  in the form "dmaXX".
+  Use "dma-shared-all" for the common interrupt line
+  that is shared by all dma channels.
 - #dma-cells: Must be <1>, the cell in the dmas property of the
   client device represents the DREQ number.
 - brcm,dma-channel-mask: Bit mask representing the channels
...@@ -34,13 +38,35 @@ dma: dma@7e007000 {
 		     <1 24>,
 		     <1 25>,
 		     <1 26>,
+		     /* dma channel 11-14 share one irq */
 		     <1 27>,
+		     <1 27>,
+		     <1 27>,
+		     <1 27>,
+		     /* unused shared irq for all channels */
 		     <1 28>;
+	interrupt-names = "dma0",
+			  "dma1",
+			  "dma2",
+			  "dma3",
+			  "dma4",
+			  "dma5",
+			  "dma6",
+			  "dma7",
+			  "dma8",
+			  "dma9",
+			  "dma10",
+			  "dma11",
+			  "dma12",
+			  "dma13",
+			  "dma14",
+			  "dma-shared-all";
 	#dma-cells = <1>;
 	brcm,dma-channel-mask = <0x7f35>;
 };

DMA clients connected to the BCM2835 DMA controller must use the format
described in the dma.txt file, using a two-cell specifier for each channel.
......
* Marvell XOR engines

Required properties:
-- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
+- compatible: Should be one of the following:
+  - "marvell,orion-xor"
+  - "marvell,armada-380-xor"
+  - "marvell,armada-3700-xor"
 - reg: Should contain registers location and length (two sets)
   the first set is the low registers, the second set the high
   registers for the XOR engine.
......
* NVIDIA Tegra Audio DMA (ADMA) controller

The Tegra Audio DMA controller is used for transferring data
between system memory and the Audio Processing Engine (APE).
Required properties:
- compatible: Must be "nvidia,tegra210-adma".
- reg: Should contain DMA registers location and length. This should be
a single entry that includes all of the per-channel registers in one
contiguous bank.
- interrupt-parent: Phandle to the interrupt parent controller.
- interrupts: Should contain all of the per-channel DMA interrupts in
ascending order with respect to the DMA channel index.
- clocks: Must contain one entry for the ADMA module clock
(TEGRA210_CLK_D_AUDIO).
- clock-names: Must contain the name "d_audio" for the corresponding
'clocks' entry.
- #dma-cells : Must be 1. The first cell denotes the receive/transmit
request number and should be between 1 and the maximum number of
requests supported. This value corresponds to the RX/TX_REQUEST_SELECT
fields in the ADMA_CHn_CTRL register.
Example:
adma: dma@702e2000 {
compatible = "nvidia,tegra210-adma";
reg = <0x0 0x702e2000 0x0 0x2000>;
interrupt-parent = <&tegra_agic>;
interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&tegra_car TEGRA210_CLK_D_AUDIO>;
clock-names = "d_audio";
#dma-cells = <1>;
};
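A consumer then passes the RX/TX request-select number in the single dma cell. A hypothetical client fragment (node name, unit address and request numbers are made up for illustration; the real values come from the ADMA_CHn_CTRL request mapping for the client in question):

```
audio@702d0000 {
	...
	dmas = <&adma 1>, <&adma 2>;
	dma-names = "rx", "tx";
};
```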
...@@ -13,6 +13,8 @@ Required properties:
 - clock-names: must contain "bam_clk" entry
 - qcom,ee : indicates the active Execution Environment identifier (0-7) used in
   the secure world.
+- qcom,controlled-remotely : optional, indicates that the bam is controlled by
+  the remote processor, i.e. the execution environment.

Example:
......
...@@ -13,6 +13,11 @@ Required properties:
 - chan_priority: priority of channels. 0 (default): increase from chan 0->n, 1:
   increase from chan n->0
 - block_size: Maximum block size supported by the controller
+- data-width: Maximum data width supported by hardware per AHB master
+  (in bytes, power of 2)
+
+Deprecated properties:
 - data_width: Maximum data width supported by hardware per AHB master
   (0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
...@@ -38,7 +43,7 @@ Example:
 	chan_allocation_order = <1>;
 	chan_priority = <1>;
 	block_size = <0xfff>;
-	data_width = <3 3>;
+	data-width = <8 8>;
 };
DMA clients connected to the Designware DMA controller must use the format

...@@ -47,8 +52,8 @@ The four cells in order are:
 1. A phandle pointing to the DMA controller
 2. The DMA request line number
-3. Source master for transfers on allocated channel
-4. Destination master for transfers on allocated channel
+3. Memory master for transfers on allocated channel
+4. Peripheral master for transfers on allocated channel

Example:
......
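The elided example aside, a hypothetical client using the four-cell specifier (phandle, request line, memory master, peripheral master; all numbers made up for illustration):

```
serial@e0000000 {
	...
	dmas = <&dwdma0 2 1 0>, <&dwdma0 3 1 0>;
	dma-names = "tx", "rx";
};
```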
...@@ -3,18 +3,44 @@ It can be configured to have one channel or two channels. If configured
 as two channels, one is to transmit to the video device and another is
 to receive from the video device.

+The Xilinx AXI DMA engine does transfers between memory and AXI4 stream
+target devices. It can be configured to have one channel or two channels.
+If configured as two channels, one is to transmit to the device and another
+is to receive from the device.
+
+The Xilinx AXI CDMA engine does transfers between a memory-mapped source
+address and a memory-mapped destination address.
+
 Required properties:
-- compatible: Should be "xlnx,axi-vdma-1.00.a"
+- compatible: Should be "xlnx,axi-vdma-1.00.a", "xlnx,axi-dma-1.00.a" or
+  "xlnx,axi-cdma-1.00.a"
 - #dma-cells: Should be <1>, see "dmas" property below
 - reg: Should contain VDMA registers location and length.
+- xlnx,addrwidth: Should be the vdma addressing size in bits (e.g. 32 bits).
+- dma-ranges: Should be as the following: <dma_addr cpu_addr max_len>.
 - dma-channel child node: Should have at least one channel and can have up to
   two channels per device. This node specifies the properties of each
   DMA channel (see child node properties below).
+- clocks: Input clock specifier. Refer to common clock bindings.
+- clock-names: List of input clocks
+  For VDMA:
+  Required elements: "s_axi_lite_aclk"
+  Optional elements: "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
+  "m_axis_mm2s_aclk", "s_axis_s2mm_aclk"
+  For CDMA:
+  Required elements: "s_axi_lite_aclk", "m_axi_aclk"
+  For AXIDMA:
+  Required elements: "s_axi_lite_aclk"
+  Optional elements: "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
+  "m_axi_sg_aclk"
+
+Required properties for VDMA:
+- xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.

 Optional properties:
 - xlnx,include-sg: Tells whether the hardware is configured for
   scatter-gather mode.
+Optional properties for VDMA:
 - xlnx,flush-fsync: Tells which channel to Flush on Frame sync.
   It takes the following values:
   {1}, flush both channels
...@@ -31,6 +57,7 @@ Required child node properties:

 Optional child node properties:
 - xlnx,include-dre: Tells whether the hardware is configured for the
   Data Realignment Engine.
+Optional child node properties for VDMA:
 - xlnx,genlock-mode: Tells whether Genlock synchronization is
   enabled/disabled in hardware.
...@@ -41,8 +68,13 @@ axi_vdma_0: axivdma@40030000 {
 	compatible = "xlnx,axi-vdma-1.00.a";
 	#dma_cells = <1>;
 	reg = < 0x40030000 0x10000 >;
+	dma-ranges = <0x00000000 0x00000000 0x40000000>;
 	xlnx,num-fstores = <0x8>;
 	xlnx,flush-fsync = <0x1>;
+	xlnx,addrwidth = <0x20>;
+	clocks = <&clk 0>, <&clk 1>, <&clk 2>, <&clk 3>, <&clk 4>;
+	clock-names = "s_axi_lite_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
+		      "m_axis_mm2s_aclk", "s_axis_s2mm_aclk";
 	dma-channel@40030000 {
 		compatible = "xlnx,axi-vdma-mm2s-channel";
 		interrupts = < 0 54 4 >;
......
...@@ -11017,10 +11017,11 @@ M: Prashant Gaikwad <pgaikwad@nvidia.com>
 S:	Supported
 F:	drivers/clk/tegra/

-TEGRA DMA DRIVER
+TEGRA DMA DRIVERS
 M:	Laxman Dewangan <ldewangan@nvidia.com>
+M:	Jon Hunter <jonathanh@nvidia.com>
 S:	Supported
-F:	drivers/dma/tegra20-apb-dma.c
+F:	drivers/dma/tegra*

TEGRA I2C DRIVER
M:	Laxman Dewangan <ldewangan@nvidia.com>
......
...@@ -126,7 +126,7 @@ dma@FE000000 {
 	chan_allocation_order = <0>;
 	chan_priority = <1>;
 	block_size = <0x7ff>;
-	data_width = <2>;
+	data-width = <4>;
 	clocks = <&ahb_clk>;
 	clock-names = "hclk";
 };
......
...@@ -48,9 +48,29 @@ dma: dma@7e007000 {
 		     <1 24>,
 		     <1 25>,
 		     <1 26>,
+		     /* dma channel 11-14 share one irq */
 		     <1 27>,
+		     <1 27>,
+		     <1 27>,
+		     <1 27>,
+		     /* unused shared irq for all channels */
 		     <1 28>;
+	interrupt-names = "dma0",
+			  "dma1",
+			  "dma2",
+			  "dma3",
+			  "dma4",
+			  "dma5",
+			  "dma6",
+			  "dma7",
+			  "dma8",
+			  "dma9",
+			  "dma10",
+			  "dma11",
+			  "dma12",
+			  "dma13",
+			  "dma14",
+			  "dma-shared-all";
 	#dma-cells = <1>;
 	brcm,dma-channel-mask = <0x7f35>;
 };
......
...@@ -117,7 +117,7 @@ dwdma0: dma@ea800000 {
 	chan_priority = <1>;
 	block_size = <0xfff>;
 	dma-masters = <2>;
-	data_width = <3 3>;
+	data-width = <8 8>;
 };

dma@eb000000 {

...@@ -133,7 +133,7 @@ dma@eb000000 {
 	chan_allocation_order = <1>;
 	chan_priority = <1>;
 	block_size = <0xfff>;
-	data_width = <3 3>;
+	data-width = <8 8>;
 };

fsmc: flash@b0000000 {
......
...@@ -1365,8 +1365,8 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
 	slave->dma_dev = &dw_dmac0_device.dev;
 	slave->src_id = 0;
 	slave->dst_id = 1;
-	slave->src_master = 1;
-	slave->dst_master = 0;
+	slave->m_master = 1;
+	slave->p_master = 0;

 	data->dma_slave = slave;
 	data->dma_filter = at32_mci_dma_filter;

...@@ -2061,16 +2061,16 @@ at32_add_device_ac97c(unsigned int id, struct ac97c_platform_data *data,
 	if (flags & AC97C_CAPTURE) {
 		rx_dws->dma_dev = &dw_dmac0_device.dev;
 		rx_dws->src_id = 3;
-		rx_dws->src_master = 0;
-		rx_dws->dst_master = 1;
+		rx_dws->m_master = 0;
+		rx_dws->p_master = 1;
 	}

 	/* Check if DMA slave interface for playback should be configured. */
 	if (flags & AC97C_PLAYBACK) {
 		tx_dws->dma_dev = &dw_dmac0_device.dev;
 		tx_dws->dst_id = 4;
-		tx_dws->src_master = 0;
-		tx_dws->dst_master = 1;
+		tx_dws->m_master = 0;
+		tx_dws->p_master = 1;
 	}

 	if (platform_device_add_data(pdev, data,

...@@ -2141,8 +2141,8 @@ at32_add_device_abdac(unsigned int id, struct atmel_abdac_pdata *data)
 	dws->dma_dev = &dw_dmac0_device.dev;
 	dws->dst_id = 2;
-	dws->src_master = 0;
-	dws->dst_master = 1;
+	dws->m_master = 0;
+	dws->p_master = 1;

 	if (platform_device_add_data(pdev, data,
 				     sizeof(struct atmel_abdac_pdata)))
......
...@@ -201,8 +201,8 @@ static struct sata_dwc_host_priv host_pvt;
 static struct dw_dma_slave sata_dwc_dma_dws = {
 	.src_id = 0,
 	.dst_id = 0,
-	.src_master = 0,
-	.dst_master = 1,
+	.m_master = 1,
+	.p_master = 0,
 };

 /*

...@@ -1248,7 +1248,7 @@ static int sata_dwc_probe(struct platform_device *ofdev)
 	hsdev->dma->dev = &ofdev->dev;

 	/* Initialize AHB DMAC */
-	err = dw_dma_probe(hsdev->dma, NULL);
+	err = dw_dma_probe(hsdev->dma);
 	if (err)
 		goto error_dma_iomap;
......
...@@ -332,7 +332,7 @@ config MPC512X_DMA
 config MV_XOR
 	bool "Marvell XOR engine support"
-	depends on PLAT_ORION
+	depends on PLAT_ORION || ARCH_MVEBU || COMPILE_TEST
 	select DMA_ENGINE
 	select DMA_ENGINE_RAID
 	select ASYNC_TX_ENABLE_CHANNEL_SWITCH

...@@ -467,6 +467,20 @@ config TEGRA20_APB_DMA
 	  This DMA controller transfers data from memory to peripheral fifo
 	  or vice versa. It does not support memory to memory data transfer.

+config TEGRA210_ADMA
+	bool "NVIDIA Tegra210 ADMA support"
+	depends on ARCH_TEGRA_210_SOC
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	select PM_CLK
+	help
+	  Support for the NVIDIA Tegra210 ADMA controller driver. The
+	  DMA controller has multiple DMA channels and is used to service
+	  various audio clients in the Tegra210 audio processing engine
+	  (APE). This DMA controller transfers data from memory to
+	  peripheral and vice versa. It does not support memory to
+	  memory data transfer.
+
 config TIMB_DMA
 	tristate "Timberdale FPGA DMA support"
 	depends on MFD_TIMBERDALE

...@@ -507,7 +521,7 @@ config XGENE_DMA
 config XILINX_VDMA
 	tristate "Xilinx AXI VDMA Engine"
-	depends on (ARCH_ZYNQ || MICROBLAZE)
+	depends on (ARCH_ZYNQ || MICROBLAZE || ARM64)
 	select DMA_ENGINE
 	help
 	  Enable support for Xilinx AXI VDMA Soft IP.
......
...@@ -59,6 +59,7 @@ obj-$(CONFIG_STM32_DMA) += stm32-dma.o
 obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
 obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
 obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
+obj-$(CONFIG_TEGRA210_ADMA) += tegra210-adma.o
 obj-$(CONFIG_TIMB_DMA) += timb_dma.o
 obj-$(CONFIG_TI_CPPI41) += cppi41.o
 obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
......
...@@ -107,16 +107,20 @@ struct pl08x_driver_data;

 /**
  * struct vendor_data - vendor-specific config parameters for PL08x derivatives
  * @channels: the number of channels available in this variant
+ * @signals: the number of request signals available from the hardware
  * @dualmaster: whether this version supports dual AHB masters or not.
  * @nomadik: whether the channels have Nomadik security extension bits
  *	that need to be checked for permission before use and some registers are
  *	missing
  * @pl080s: whether this version is a PL080S, which has separate register and
  *	LLI word for transfer size.
+ * @max_transfer_size: the maximum single element transfer size for this
+ *	PL08x variant.
  */
 struct vendor_data {
 	u8 config_offset;
 	u8 channels;
+	u8 signals;
 	bool dualmaster;
 	bool nomadik;
 	bool pl080s;
...@@ -235,7 +239,7 @@ struct pl08x_dma_chan {
 	struct virt_dma_chan vc;
 	struct pl08x_phy_chan *phychan;
 	const char *name;
-	const struct pl08x_channel_data *cd;
+	struct pl08x_channel_data *cd;
 	struct dma_slave_config cfg;
 	struct pl08x_txd *at;
 	struct pl08x_driver_data *host;
...@@ -1909,6 +1913,12 @@ static int pl08x_dma_init_virtual_channels(struct pl08x_driver_data *pl08x,
 		if (slave) {
 			chan->cd = &pl08x->pd->slave_channels[i];
+			/*
+			 * Some implementations have muxed signals, whereas some
+			 * use a mux in front of the signals and need dynamic
+			 * assignment of signals.
+			 */
+			chan->signal = i;
 			pl08x_dma_slave_init(chan);
 		} else {
 			chan->cd = &pl08x->pd->memcpy_channel;
...@@ -2050,40 +2060,33 @@ static struct dma_chan *pl08x_of_xlate(struct of_phandle_args *dma_spec,
 				       struct of_dma *ofdma)
 {
 	struct pl08x_driver_data *pl08x = ofdma->of_dma_data;
-	struct pl08x_channel_data *data;
-	struct pl08x_dma_chan *chan;
 	struct dma_chan *dma_chan;
+	struct pl08x_dma_chan *plchan;

 	if (!pl08x)
 		return NULL;

-	if (dma_spec->args_count != 2)
+	if (dma_spec->args_count != 2) {
+		dev_err(&pl08x->adev->dev,
+			"DMA channel translation requires two cells\n");
 		return NULL;
+	}

 	dma_chan = pl08x_find_chan_id(pl08x, dma_spec->args[0]);
-	if (dma_chan)
-		return dma_get_slave_channel(dma_chan);
-
-	chan = devm_kzalloc(pl08x->slave.dev, sizeof(*chan) + sizeof(*data),
-			    GFP_KERNEL);
-	if (!chan)
+	if (!dma_chan) {
+		dev_err(&pl08x->adev->dev,
+			"DMA slave channel not found\n");
 		return NULL;
+	}

-	data = (void *)&chan[1];
-	data->bus_id = "(none)";
-	data->periph_buses = dma_spec->args[1];
-
-	chan->cd = data;
-	chan->host = pl08x;
-	chan->slave = true;
-	chan->name = data->bus_id;
-	chan->state = PL08X_CHAN_IDLE;
-	chan->signal = dma_spec->args[0];
-	chan->vc.desc_free = pl08x_desc_free;
-	vchan_init(&chan->vc, &pl08x->slave);
-
-	return dma_get_slave_channel(&chan->vc.chan);
+	plchan = to_pl08x_chan(dma_chan);
+	dev_dbg(&pl08x->adev->dev,
+		"translated channel for signal %d\n",
+		dma_spec->args[0]);
+
+	/* Augment channel data for applicable AHB buses */
+	plchan->cd->periph_buses = dma_spec->args[1];
+	return dma_get_slave_channel(dma_chan);
 }
 static int pl08x_of_probe(struct amba_device *adev,

...@@ -2091,9 +2094,11 @@ static int pl08x_of_probe(struct amba_device *adev,
 			  struct device_node *np)
 {
 	struct pl08x_platform_data *pd;
+	struct pl08x_channel_data *chanp = NULL;
 	u32 cctl_memcpy = 0;
 	u32 val;
 	int ret;
+	int i;

 	pd = devm_kzalloc(&adev->dev, sizeof(*pd), GFP_KERNEL);
 	if (!pd)

...@@ -2195,6 +2200,27 @@ static int pl08x_of_probe(struct amba_device *adev,
 	/* Use the buses that can access memory, obviously */
 	pd->memcpy_channel.periph_buses = pd->mem_buses;

+	/*
+	 * Allocate channel data for all possible slave channels (one
+	 * for each possible signal); channels will then be allocated
+	 * for a device and have their AHB interfaces set up at
+	 * translation time.
+	 */
+	chanp = devm_kcalloc(&adev->dev,
+			     pl08x->vd->signals,
+			     sizeof(struct pl08x_channel_data),
+			     GFP_KERNEL);
+	if (!chanp)
+		return -ENOMEM;
+
+	pd->slave_channels = chanp;
+	for (i = 0; i < pl08x->vd->signals; i++) {
+		/* chanp->periph_buses will be assigned at translation */
+		chanp->bus_id = kasprintf(GFP_KERNEL, "slave%d", i);
+		chanp++;
+	}
+	pd->num_slave_channels = pl08x->vd->signals;
+
 	pl08x->pd = pd;

 	return of_dma_controller_register(adev->dev.of_node, pl08x_of_xlate,
...@@ -2234,6 +2260,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
 		goto out_no_pl08x;
 	}

+	/* Assign useful pointers to the driver state */
+	pl08x->adev = adev;
+	pl08x->vd = vd;
+
 	/* Initialize memcpy engine */
 	dma_cap_set(DMA_MEMCPY, pl08x->memcpy.cap_mask);
 	pl08x->memcpy.dev = &adev->dev;

...@@ -2284,10 +2314,6 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
 		}
 	}

-	/* Assign useful pointers to the driver state */
-	pl08x->adev = adev;
-	pl08x->vd = vd;
-
 	/* By default, AHB1 only. If dualmaster, from platform */
 	pl08x->lli_buses = PL08X_AHB1;
 	pl08x->mem_buses = PL08X_AHB1;
...@@ -2438,6 +2464,7 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
 static struct vendor_data vendor_pl080 = {
 	.config_offset = PL080_CH_CONFIG,
 	.channels = 8,
+	.signals = 16,
 	.dualmaster = true,
 	.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,
 };

...@@ -2445,6 +2472,7 @@ static struct vendor_data vendor_pl080 = {
 static struct vendor_data vendor_nomadik = {
 	.config_offset = PL080_CH_CONFIG,
 	.channels = 8,
+	.signals = 32,
 	.dualmaster = true,
 	.nomadik = true,
 	.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,

...@@ -2453,6 +2481,7 @@ static struct vendor_data vendor_nomadik = {
 static struct vendor_data vendor_pl080s = {
 	.config_offset = PL080S_CH_CONFIG,
 	.channels = 8,
+	.signals = 32,
 	.pl080s = true,
 	.max_transfer_size = PL080S_CONTROL_TRANSFER_SIZE_MASK,
 };

...@@ -2460,6 +2489,7 @@ static struct vendor_data vendor_pl080s = {
 static struct vendor_data vendor_pl081 = {
 	.config_offset = PL080_CH_CONFIG,
 	.channels = 2,
+	.signals = 16,
 	.dualmaster = false,
 	.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,
 };
......
......
...@@ -289,7 +289,7 @@ enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
 	do {
 		status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
 		if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
-			pr_err("%s: timeout!\n", __func__);
+			dev_err(chan->device->dev, "%s: timeout!\n", __func__);
 			return DMA_ERROR;
 		}
 		if (status != DMA_IN_PROGRESS)
...@@ -482,7 +482,8 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
 	device = chan->device;

 	/* check if the channel supports slave transactions */
-	if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
+	if (!(test_bit(DMA_SLAVE, device->cap_mask.bits) ||
+	      test_bit(DMA_CYCLIC, device->cap_mask.bits)))
 		return -ENXIO;

 	/*
...@@ -518,7 +519,7 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
 	struct dma_chan *chan;

 	if (mask && !__dma_device_satisfies_mask(dev, mask)) {
-		pr_debug("%s: wrong capabilities\n", __func__);
+		dev_dbg(dev->dev, "%s: wrong capabilities\n", __func__);
 		return NULL;
 	}
 	/* devices with multiple channels need special handling as we need to

...@@ -533,12 +534,12 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
 	list_for_each_entry(chan, &dev->channels, device_node) {
 		if (chan->client_count) {
-			pr_debug("%s: %s busy\n",
+			dev_dbg(dev->dev, "%s: %s busy\n",
 				 __func__, dma_chan_name(chan));
 			continue;
 		}
 		if (fn && !fn(chan, fn_param)) {
-			pr_debug("%s: %s filter said false\n",
+			dev_dbg(dev->dev, "%s: %s filter said false\n",
 				 __func__, dma_chan_name(chan));
 			continue;
 		}
...@@ -567,11 +568,12 @@ static struct dma_chan *find_candidate(struct dma_device *device, ...@@ -567,11 +568,12 @@ static struct dma_chan *find_candidate(struct dma_device *device,
if (err) { if (err) {
if (err == -ENODEV) { if (err == -ENODEV) {
pr_debug("%s: %s module removed\n", __func__, dev_dbg(device->dev, "%s: %s module removed\n",
dma_chan_name(chan)); __func__, dma_chan_name(chan));
list_del_rcu(&device->global_node); list_del_rcu(&device->global_node);
} else } else
pr_debug("%s: failed to get %s: (%d)\n", dev_dbg(device->dev,
"%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err); __func__, dma_chan_name(chan), err);
if (--device->privatecnt == 0) if (--device->privatecnt == 0)
...@@ -602,7 +604,8 @@ struct dma_chan *dma_get_slave_channel(struct dma_chan *chan) ...@@ -602,7 +604,8 @@ struct dma_chan *dma_get_slave_channel(struct dma_chan *chan)
device->privatecnt++; device->privatecnt++;
err = dma_chan_get(chan); err = dma_chan_get(chan);
if (err) { if (err) {
pr_debug("%s: failed to get %s: (%d)\n", dev_dbg(chan->device->dev,
"%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err); __func__, dma_chan_name(chan), err);
chan = NULL; chan = NULL;
if (--device->privatecnt == 0) if (--device->privatecnt == 0)
...@@ -814,7 +817,8 @@ void dmaengine_get(void) ...@@ -814,7 +817,8 @@ void dmaengine_get(void)
list_del_rcu(&device->global_node); list_del_rcu(&device->global_node);
break; break;
} else if (err) } else if (err)
pr_debug("%s: failed to get %s: (%d)\n", dev_dbg(chan->device->dev,
"%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err); __func__, dma_chan_name(chan), err);
} }
} }
...@@ -862,12 +866,12 @@ static bool device_has_all_tx_types(struct dma_device *device) ...@@ -862,12 +866,12 @@ static bool device_has_all_tx_types(struct dma_device *device)
return false; return false;
#endif #endif
#if defined(CONFIG_ASYNC_MEMCPY) || defined(CONFIG_ASYNC_MEMCPY_MODULE) #if IS_ENABLED(CONFIG_ASYNC_MEMCPY)
if (!dma_has_cap(DMA_MEMCPY, device->cap_mask)) if (!dma_has_cap(DMA_MEMCPY, device->cap_mask))
return false; return false;
#endif #endif
#if defined(CONFIG_ASYNC_XOR) || defined(CONFIG_ASYNC_XOR_MODULE) #if IS_ENABLED(CONFIG_ASYNC_XOR)
if (!dma_has_cap(DMA_XOR, device->cap_mask)) if (!dma_has_cap(DMA_XOR, device->cap_mask))
return false; return false;
...@@ -877,7 +881,7 @@ static bool device_has_all_tx_types(struct dma_device *device) ...@@ -877,7 +881,7 @@ static bool device_has_all_tx_types(struct dma_device *device)
#endif #endif
#endif #endif
#if defined(CONFIG_ASYNC_PQ) || defined(CONFIG_ASYNC_PQ_MODULE) #if IS_ENABLED(CONFIG_ASYNC_PQ)
if (!dma_has_cap(DMA_PQ, device->cap_mask)) if (!dma_has_cap(DMA_PQ, device->cap_mask))
return false; return false;
...@@ -1222,7 +1226,8 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) ...@@ -1222,7 +1226,8 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
while (tx->cookie == -EBUSY) { while (tx->cookie == -EBUSY) {
if (time_after_eq(jiffies, dma_sync_wait_timeout)) { if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
pr_err("%s timeout waiting for descriptor submission\n", dev_err(tx->chan->device->dev,
"%s timeout waiting for descriptor submission\n",
__func__); __func__);
return DMA_ERROR; return DMA_ERROR;
} }
......
[one file's diff collapsed in the web view]
@@ -17,8 +17,8 @@
 static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
 {
+	const struct dw_dma_platform_data *pdata = (void *)pid->driver_data;
 	struct dw_dma_chip *chip;
-	struct dw_dma_platform_data *pdata = (void *)pid->driver_data;
 	int ret;
 	ret = pcim_enable_device(pdev);
@@ -49,8 +49,9 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
 	chip->dev = &pdev->dev;
 	chip->regs = pcim_iomap_table(pdev)[0];
 	chip->irq = pdev->irq;
+	chip->pdata = pdata;
-	ret = dw_dma_probe(chip, pdata);
+	ret = dw_dma_probe(chip);
 	if (ret)
 		return ret;
...
@@ -42,13 +42,13 @@ static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec,
 	slave.src_id = dma_spec->args[0];
 	slave.dst_id = dma_spec->args[0];
-	slave.src_master = dma_spec->args[1];
-	slave.dst_master = dma_spec->args[2];
+	slave.m_master = dma_spec->args[1];
+	slave.p_master = dma_spec->args[2];
 	if (WARN_ON(slave.src_id >= DW_DMA_MAX_NR_REQUESTS ||
		    slave.dst_id >= DW_DMA_MAX_NR_REQUESTS ||
-		    slave.src_master >= dw->nr_masters ||
-		    slave.dst_master >= dw->nr_masters))
+		    slave.m_master >= dw->pdata->nr_masters ||
+		    slave.p_master >= dw->pdata->nr_masters))
 		return NULL;
 	dma_cap_zero(cap);
@@ -66,8 +66,8 @@ static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
		.dma_dev = dma_spec->dev,
		.src_id = dma_spec->slave_id,
		.dst_id = dma_spec->slave_id,
-		.src_master = 1,
-		.dst_master = 0,
+		.m_master = 0,
+		.p_master = 1,
	};
 	return dw_dma_filter(chan, &slave);
@@ -103,6 +103,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
 	struct device_node *np = pdev->dev.of_node;
 	struct dw_dma_platform_data *pdata;
 	u32 tmp, arr[DW_DMA_MAX_NR_MASTERS];
+	u32 nr_masters;
 	u32 nr_channels;
 	if (!np) {
@@ -110,6 +111,11 @@ dw_dma_parse_dt(struct platform_device *pdev)
 		return NULL;
 	}
+	if (of_property_read_u32(np, "dma-masters", &nr_masters))
+		return NULL;
+	if (nr_masters < 1 || nr_masters > DW_DMA_MAX_NR_MASTERS)
+		return NULL;
 	if (of_property_read_u32(np, "dma-channels", &nr_channels))
 		return NULL;
@@ -117,6 +123,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
 	if (!pdata)
 		return NULL;
+	pdata->nr_masters = nr_masters;
 	pdata->nr_channels = nr_channels;
 	if (of_property_read_bool(np, "is_private"))
@@ -131,17 +138,13 @@ dw_dma_parse_dt(struct platform_device *pdev)
 	if (!of_property_read_u32(np, "block_size", &tmp))
 		pdata->block_size = tmp;
-	if (!of_property_read_u32(np, "dma-masters", &tmp)) {
-		if (tmp > DW_DMA_MAX_NR_MASTERS)
-			return NULL;
-		pdata->nr_masters = tmp;
-	}
-	if (!of_property_read_u32_array(np, "data_width", arr,
-				pdata->nr_masters))
-		for (tmp = 0; tmp < pdata->nr_masters; tmp++)
+	if (!of_property_read_u32_array(np, "data-width", arr, nr_masters)) {
+		for (tmp = 0; tmp < nr_masters; tmp++)
 			pdata->data_width[tmp] = arr[tmp];
+	} else if (!of_property_read_u32_array(np, "data_width", arr, nr_masters)) {
+		for (tmp = 0; tmp < nr_masters; tmp++)
+			pdata->data_width[tmp] = BIT(arr[tmp] & 0x07);
+	}
 	return pdata;
 }
@@ -158,7 +161,7 @@ static int dw_probe(struct platform_device *pdev)
 	struct dw_dma_chip *chip;
 	struct device *dev = &pdev->dev;
 	struct resource *mem;
-	struct dw_dma_platform_data *pdata;
+	const struct dw_dma_platform_data *pdata;
 	int err;
 	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
@@ -183,6 +186,7 @@ static int dw_probe(struct platform_device *pdev)
 	pdata = dw_dma_parse_dt(pdev);
 	chip->dev = dev;
+	chip->pdata = pdata;
 	chip->clk = devm_clk_get(chip->dev, "hclk");
 	if (IS_ERR(chip->clk))
@@ -193,7 +197,7 @@ static int dw_probe(struct platform_device *pdev)
 	pm_runtime_enable(&pdev->dev);
-	err = dw_dma_probe(chip, pdata);
+	err = dw_dma_probe(chip);
 	if (err)
 		goto err_dw_dma_probe;
...
@@ -114,10 +114,6 @@ struct dw_dma_regs {
 #define dma_writel_native writel
 #endif
-/* To access the registers in early stage of probe */
-#define dma_read_byaddr(addr, name) \
-	dma_readl_native((addr) + offsetof(struct dw_dma_regs, name))
 /* Bitfields in DW_PARAMS */
 #define DW_PARAMS_NR_CHAN	8	/* number of channels */
 #define DW_PARAMS_NR_MASTER	11	/* number of AHB masters */
@@ -143,6 +139,10 @@ enum dw_dma_msize {
	DW_DMA_MSIZE_256,
 };
+/* Bitfields in LLP */
+#define DWC_LLP_LMS(x)		((x) & 3)	/* list master select */
+#define DWC_LLP_LOC(x)		((x) & ~3)	/* next lli */
 /* Bitfields in CTL_LO */
 #define DWC_CTLL_INT_EN		(1 << 0)	/* irqs enabled? */
 #define DWC_CTLL_DST_WIDTH(n)	((n)<<1)	/* bytes per element */
@@ -216,6 +216,8 @@ enum dw_dma_msize {
 enum dw_dmac_flags {
	DW_DMA_IS_CYCLIC = 0,
	DW_DMA_IS_SOFT_LLP = 1,
+	DW_DMA_IS_PAUSED = 2,
+	DW_DMA_IS_INITIALIZED = 3,
 };
 struct dw_dma_chan {
@@ -224,8 +226,6 @@ struct dw_dma_chan {
	u8				mask;
	u8				priority;
	enum dma_transfer_direction	direction;
-	bool				paused;
-	bool				initialized;
	/* software emulation of the LLP transfers */
	struct list_head		*tx_node_active;
@@ -236,8 +236,6 @@ struct dw_dma_chan {
	unsigned long			flags;
	struct list_head		active_list;
	struct list_head		queue;
-	struct list_head		free_list;
-	u32				residue;
	struct dw_cyclic_desc		*cdesc;
	unsigned int			descs_allocated;
@@ -249,8 +247,8 @@ struct dw_dma_chan {
	/* custom slave configuration */
	u8				src_id;
	u8				dst_id;
-	u8				src_master;
-	u8				dst_master;
+	u8				m_master;
+	u8				p_master;
	/* configuration passed via .device_config */
	struct dma_slave_config		dma_sconfig;
@@ -283,9 +281,8 @@ struct dw_dma {
	u8				all_chan_mask;
	u8				in_use;
-	/* hardware configuration */
-	unsigned char			nr_masters;
-	unsigned char			data_width[DW_DMA_MAX_NR_MASTERS];
+	/* platform data */
+	struct dw_dma_platform_data	*pdata;
 };
 static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
@@ -308,32 +305,51 @@ static inline struct dw_dma *to_dw_dma(struct dma_device *ddev)
	return container_of(ddev, struct dw_dma, dma);
 }
+#ifdef CONFIG_DW_DMAC_BIG_ENDIAN_IO
+typedef __be32 __dw32;
+#else
+typedef __le32 __dw32;
+#endif
 /* LLI == Linked List Item; a.k.a. DMA block descriptor */
 struct dw_lli {
	/* values that are not changed by hardware */
-	u32		sar;
-	u32		dar;
-	u32		llp;		/* chain to next lli */
-	u32		ctllo;
+	__dw32		sar;
+	__dw32		dar;
+	__dw32		llp;		/* chain to next lli */
+	__dw32		ctllo;
	/* values that may get written back: */
-	u32		ctlhi;
+	__dw32		ctlhi;
	/* sstat and dstat can snapshot peripheral register state.
	 * silicon config may discard either or both...
	 */
-	u32		sstat;
-	u32		dstat;
 };
+	__dw32		sstat;
+	__dw32		dstat;
+};
 struct dw_desc {
	/* FIRST values the hardware uses */
	struct dw_lli			lli;
+#ifdef CONFIG_DW_DMAC_BIG_ENDIAN_IO
+#define lli_set(d, reg, v)	((d)->lli.reg |= cpu_to_be32(v))
+#define lli_clear(d, reg, v)	((d)->lli.reg &= ~cpu_to_be32(v))
+#define lli_read(d, reg)	be32_to_cpu((d)->lli.reg)
+#define lli_write(d, reg, v)	((d)->lli.reg = cpu_to_be32(v))
+#else
+#define lli_set(d, reg, v)	((d)->lli.reg |= cpu_to_le32(v))
+#define lli_clear(d, reg, v)	((d)->lli.reg &= ~cpu_to_le32(v))
+#define lli_read(d, reg)	le32_to_cpu((d)->lli.reg)
+#define lli_write(d, reg, v)	((d)->lli.reg = cpu_to_le32(v))
+#endif
	/* THEN values for driver housekeeping */
	struct list_head		desc_node;
	struct list_head		tx_list;
	struct dma_async_tx_descriptor	txd;
	size_t				len;
	size_t				total_len;
+	u32				residue;
 };
 #define to_dw_desc(h)	list_entry(h, struct dw_desc, desc_node)
...
@@ -1537,8 +1537,17 @@ static irqreturn_t dma_ccerr_handler(int irq, void *data)
 	dev_vdbg(ecc->dev, "dma_ccerr_handler\n");
-	if (!edma_error_pending(ecc))
+	if (!edma_error_pending(ecc)) {
+		/*
+		 * The registers indicate no pending error event but the irq
+		 * handler has been called.
+		 * Ask eDMA to re-evaluate the error registers.
+		 */
+		dev_err(ecc->dev, "%s: Error interrupt without error event!\n",
+			__func__);
+		edma_write(ecc, EDMA_EEVAL, 1);
 		return IRQ_NONE;
+	}
 	while (1) {
		/* Event missed register(s) */
...
@@ -462,13 +462,12 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor(struct fsldma_chan *chan)
 	struct fsl_desc_sw *desc;
 	dma_addr_t pdesc;
-	desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
+	desc = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
 	if (!desc) {
		chan_dbg(chan, "out of memory for link descriptor\n");
		return NULL;
	}
-	memset(desc, 0, sizeof(*desc));
 	INIT_LIST_HEAD(&desc->tx_list);
 	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
 	desc->async_tx.tx_submit = fsl_dma_tx_submit;
...
@@ -77,8 +77,8 @@ static void hsu_dma_chan_start(struct hsu_dma_chan *hsuc)
 	hsu_chan_writel(hsuc, HSU_CH_MTSR, mtsr);
 	/* Set descriptors */
-	count = (desc->nents - desc->active) % HSU_DMA_CHAN_NR_DESC;
-	for (i = 0; i < count; i++) {
+	count = desc->nents - desc->active;
+	for (i = 0; i < count && i < HSU_DMA_CHAN_NR_DESC; i++) {
		hsu_chan_writel(hsuc, HSU_CH_DxSAR(i), desc->sg[i].addr);
		hsu_chan_writel(hsuc, HSU_CH_DxTSR(i), desc->sg[i].len);
@@ -160,7 +160,7 @@ irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip, unsigned short nr)
 		return IRQ_NONE;
 	/* Timeout IRQ, need wait some time, see Errata 2 */
-	if (hsuc->direction == DMA_DEV_TO_MEM && (sr & HSU_CH_SR_DESCTO_ANY))
+	if (sr & HSU_CH_SR_DESCTO_ANY)
 		udelay(2);
 	sr &= ~HSU_CH_SR_DESCTO_ANY;
@@ -420,6 +420,8 @@ int hsu_dma_probe(struct hsu_dma_chip *chip)
 	hsu->dma.dev = chip->dev;
+	dma_set_max_seg_size(hsu->dma.dev, HSU_CH_DxTSR_MASK);
 	ret = dma_async_device_register(&hsu->dma);
 	if (ret)
		return ret;
...
@@ -58,6 +58,10 @@
 #define HSU_CH_DCR_CHEI		BIT(23)
 #define HSU_CH_DCR_CHTOI(x)	BIT(24 + (x))
+/* Bits in HSU_CH_DxTSR */
+#define HSU_CH_DxTSR_MASK	GENMASK(15, 0)
+#define HSU_CH_DxTSR_TSR(x)	((x) & HSU_CH_DxTSR_MASK)
 struct hsu_dma_sg {
	dma_addr_t addr;
	unsigned int len;
...
@@ -690,12 +690,11 @@ static int ioat_alloc_chan_resources(struct dma_chan *c)
 	/* allocate a completion writeback area */
 	/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
 	ioat_chan->completion =
-		dma_pool_alloc(ioat_chan->ioat_dma->completion_pool,
-			       GFP_KERNEL, &ioat_chan->completion_dma);
+		dma_pool_zalloc(ioat_chan->ioat_dma->completion_pool,
+				GFP_KERNEL, &ioat_chan->completion_dma);
 	if (!ioat_chan->completion)
		return -ENOMEM;
-	memset(ioat_chan->completion, 0, sizeof(*ioat_chan->completion));
 	writel(((u64)ioat_chan->completion_dma) & 0x00000000FFFFFFFF,
	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
 	writel(((u64)ioat_chan->completion_dma) >> 32,
@@ -1074,6 +1073,7 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
 	struct ioatdma_chan *ioat_chan;
 	bool is_raid_device = false;
 	int err;
+	u16 val16;
 	dma = &ioat_dma->dma_dev;
 	dma->device_prep_dma_memcpy = ioat_dma_prep_memcpy_lock;
@@ -1173,6 +1173,17 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
 	if (dca)
		ioat_dma->dca = ioat_dca_init(pdev, ioat_dma->reg_base);
+	/* disable relaxed ordering */
+	err = pcie_capability_read_word(pdev, IOAT_DEVCTRL_OFFSET, &val16);
+	if (err)
+		return err;
+	/* clear relaxed ordering enable */
+	val16 &= ~IOAT_DEVCTRL_ROE;
+	err = pcie_capability_write_word(pdev, IOAT_DEVCTRL_OFFSET, val16);
+	if (err)
+		return err;
 	return 0;
 }
...
@@ -26,6 +26,13 @@
 #define IOAT_PCI_CHANERR_INT_OFFSET		0x180
 #define IOAT_PCI_CHANERRMASK_INT_OFFSET		0x184
+/* PCIe config registers */
+/* EXPCAPID + N */
+#define IOAT_DEVCTRL_OFFSET	0x8
+/* relaxed ordering enable */
+#define IOAT_DEVCTRL_ROE	0x10
 /* MMIO Device Registers */
 #define IOAT_CHANCNT_OFFSET			0x00	/*  8-bit */
...
@@ -364,13 +364,12 @@ mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)
 	struct mmp_pdma_desc_sw *desc;
 	dma_addr_t pdesc;
-	desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
+	desc = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
 	if (!desc) {
		dev_err(chan->dev, "out of memory for link descriptor\n");
		return NULL;
	}
-	memset(desc, 0, sizeof(*desc));
 	INIT_LIST_HEAD(&desc->tx_list);
 	dma_async_tx_descriptor_init(&desc->async_tx, &chan->chan);
 	/* each desc has submit */
...
@@ -3,6 +3,7 @@
  * Copyright (C) Semihalf 2009
  * Copyright (C) Ilya Yanok, Emcraft Systems 2010
  * Copyright (C) Alexander Popov, Promcontroller 2014
+ * Copyright (C) Mario Six, Guntermann & Drunck GmbH, 2016
  *
  * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
  * (defines, structures and comments) was taken from MPC5121 DMA driver
@@ -26,18 +27,19 @@
  */
 /*
- * MPC512x and MPC8308 DMA driver. It supports
- * memory to memory data transfers (tested using dmatest module) and
- * data transfers between memory and peripheral I/O memory
- * by means of slave scatter/gather with these limitations:
- *  - chunked transfers (described by s/g lists with more than one item)
- *    are refused as long as proper support for scatter/gather is missing;
- *  - transfers on MPC8308 always start from software as this SoC appears
- *    not to have external request lines for peripheral flow control;
- *  - only peripheral devices with 4-byte FIFO access register are supported;
- *  - minimal memory <-> I/O memory transfer chunk is 4 bytes and consequently
- *    source and destination addresses must be 4-byte aligned
- *    and transfer size must be aligned on (4 * maxburst) boundary;
+ * MPC512x and MPC8308 DMA driver. It supports memory to memory data transfers
+ * (tested using dmatest module) and data transfers between memory and
+ * peripheral I/O memory by means of slave scatter/gather with these
+ * limitations:
+ *  - chunked transfers (described by s/g lists with more than one item) are
+ *    refused as long as proper support for scatter/gather is missing
+ *  - transfers on MPC8308 always start from software as this SoC does not have
+ *    external request lines for peripheral flow control
+ *  - memory <-> I/O memory transfer chunks of sizes of 1, 2, 4, 16 (for
+ *    MPC512x), and 32 bytes are supported, and, consequently, source
+ *    addresses and destination addresses must be aligned accordingly;
+ *    furthermore, for MPC512x SoCs, the transfer size must be aligned on
+ *    (chunk size * maxburst)
  */
 #include <linux/module.h>
@@ -213,8 +215,10 @@ struct mpc_dma_chan {
	/* Settings for access to peripheral FIFO */
	dma_addr_t			src_per_paddr;
	u32				src_tcd_nunits;
+	u8				swidth;
	dma_addr_t			dst_per_paddr;
	u32				dst_tcd_nunits;
+	u8				dwidth;
	/* Lock for this structure */
	spinlock_t			lock;
@@ -247,6 +251,7 @@ static inline struct mpc_dma_chan *dma_chan_to_mpc_dma_chan(struct dma_chan *c)
 static inline struct mpc_dma *dma_chan_to_mpc_dma(struct dma_chan *c)
 {
	struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(c);
+
	return container_of(mchan, struct mpc_dma, channels[c->chan_id]);
 }
@@ -446,20 +451,15 @@ static void mpc_dma_tasklet(unsigned long data)
 	if (es & MPC_DMA_DMAES_SAE)
		dev_err(mdma->dma.dev, "- Source Address Error\n");
 	if (es & MPC_DMA_DMAES_SOE)
-		dev_err(mdma->dma.dev, "- Source Offset"
-						" Configuration Error\n");
+		dev_err(mdma->dma.dev, "- Source Offset Configuration Error\n");
 	if (es & MPC_DMA_DMAES_DAE)
-		dev_err(mdma->dma.dev, "- Destination Address"
-								" Error\n");
+		dev_err(mdma->dma.dev, "- Destination Address Error\n");
 	if (es & MPC_DMA_DMAES_DOE)
-		dev_err(mdma->dma.dev, "- Destination Offset"
-						" Configuration Error\n");
+		dev_err(mdma->dma.dev, "- Destination Offset Configuration Error\n");
 	if (es & MPC_DMA_DMAES_NCE)
-		dev_err(mdma->dma.dev, "- NBytes/Citter"
-						" Configuration Error\n");
+		dev_err(mdma->dma.dev, "- NBytes/Citter Configuration Error\n");
 	if (es & MPC_DMA_DMAES_SGE)
-		dev_err(mdma->dma.dev, "- Scatter/Gather"
-						" Configuration Error\n");
+		dev_err(mdma->dma.dev, "- Scatter/Gather Configuration Error\n");
 	if (es & MPC_DMA_DMAES_SBE)
		dev_err(mdma->dma.dev, "- Source Bus Error\n");
 	if (es & MPC_DMA_DMAES_DBE)
@@ -518,8 +518,8 @@ static int mpc_dma_alloc_chan_resources(struct dma_chan *chan)
 	for (i = 0; i < MPC_DMA_DESCRIPTORS; i++) {
		mdesc = kzalloc(sizeof(struct mpc_dma_desc), GFP_KERNEL);
		if (!mdesc) {
-			dev_notice(mdma->dma.dev, "Memory allocation error. "
-					"Allocated only %u descriptors\n", i);
+			dev_notice(mdma->dma.dev,
+				"Memory allocation error. Allocated only %u descriptors\n", i);
			break;
		}
@@ -684,6 +684,15 @@ mpc_dma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
 	return &mdesc->desc;
 }
+inline u8 buswidth_to_dmatsize(u8 buswidth)
+{
+	u8 res;
+
+	for (res = 0; buswidth > 1; buswidth /= 2)
+		res++;
+	return res;
+}
 static struct dma_async_tx_descriptor *
 mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction direction,
@@ -742,26 +751,40 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
		memset(tcd, 0, sizeof(struct mpc_dma_tcd));
-		if (!IS_ALIGNED(sg_dma_address(sg), 4))
-			goto err_prep;
		if (direction == DMA_DEV_TO_MEM) {
			tcd->saddr = per_paddr;
			tcd->daddr = sg_dma_address(sg);
+			if (!IS_ALIGNED(sg_dma_address(sg), mchan->dwidth))
+				goto err_prep;
			tcd->soff = 0;
-			tcd->doff = 4;
+			tcd->doff = mchan->dwidth;
		} else {
			tcd->saddr = sg_dma_address(sg);
			tcd->daddr = per_paddr;
-			tcd->soff = 4;
+			if (!IS_ALIGNED(sg_dma_address(sg), mchan->swidth))
+				goto err_prep;
+			tcd->soff = mchan->swidth;
			tcd->doff = 0;
		}
-		tcd->ssize = MPC_DMA_TSIZE_4;
-		tcd->dsize = MPC_DMA_TSIZE_4;
+		tcd->ssize = buswidth_to_dmatsize(mchan->swidth);
+		tcd->dsize = buswidth_to_dmatsize(mchan->dwidth);
+		if (mdma->is_mpc8308) {
+			tcd->nbytes = sg_dma_len(sg);
+			if (!IS_ALIGNED(tcd->nbytes, mchan->swidth))
+				goto err_prep;
+			/* No major loops for MPC8303 */
+			tcd->biter = 1;
+			tcd->citer = 1;
+		} else {
			len = sg_dma_len(sg);
-			tcd->nbytes = tcd_nunits * 4;
+			tcd->nbytes = tcd_nunits * tcd->ssize;
			if (!IS_ALIGNED(len, tcd->nbytes))
				goto err_prep;
@@ -775,6 +798,7 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
			tcd->biter_linkch = iter >> 9;
			tcd->citer = tcd->biter;
			tcd->citer_linkch = tcd->biter_linkch;
+		}
		tcd->e_sg = 0;
		tcd->d_req = 1;
@@ -796,40 +820,62 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	return NULL;
 }
+inline bool is_buswidth_valid(u8 buswidth, bool is_mpc8308)
+{
+	switch (buswidth) {
+	case 16:
+		if (is_mpc8308)
+			return false;
+	case 1:
+	case 2:
+	case 4:
+	case 32:
+		break;
+	default:
+		return false;
+	}
+
+	return true;
+}
 static int mpc_dma_device_config(struct dma_chan *chan,
 				 struct dma_slave_config *cfg)
 {
 	struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
+	struct mpc_dma *mdma = dma_chan_to_mpc_dma(&mchan->chan);
 	unsigned long flags;
 
 	/*
 	 * Software constraints:
-	 * - only transfers between a peripheral device and
-	 *   memory are supported;
-	 * - only peripheral devices with 4-byte FIFO access register
-	 *   are supported;
-	 * - minimal transfer chunk is 4 bytes and consequently
-	 *   source and destination addresses must be 4-byte aligned
-	 *   and transfer size must be aligned on (4 * maxburst)
-	 *   boundary;
-	 * - during the transfer RAM address is being incremented by
-	 *   the size of minimal transfer chunk;
-	 * - peripheral port's address is constant during the transfer.
+	 * - only transfers between a peripheral device and memory are
+	 *   supported
+	 * - transfer chunk sizes of 1, 2, 4, 16 (for MPC512x), and 32 bytes
+	 *   are supported, and, consequently, source addresses and
+	 *   destination addresses must be aligned accordingly; furthermore,
+	 *   for MPC512x SoCs, the transfer size must be aligned on (chunk
+	 *   size * maxburst) boundary
+	 * - during the transfer, the RAM address is incremented by the size
+	 *   of transfer chunk
+	 * - the peripheral port's address is constant during the transfer.
 	 */
 
-	if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
-	    cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
-	    !IS_ALIGNED(cfg->src_addr, 4) ||
-	    !IS_ALIGNED(cfg->dst_addr, 4)) {
+	if (!IS_ALIGNED(cfg->src_addr, cfg->src_addr_width) ||
+	    !IS_ALIGNED(cfg->dst_addr, cfg->dst_addr_width)) {
 		return -EINVAL;
 	}
 
+	if (!is_buswidth_valid(cfg->src_addr_width, mdma->is_mpc8308) ||
+	    !is_buswidth_valid(cfg->dst_addr_width, mdma->is_mpc8308))
+		return -EINVAL;
+
 	spin_lock_irqsave(&mchan->lock, flags);
 
 	mchan->src_per_paddr = cfg->src_addr;
 	mchan->src_tcd_nunits = cfg->src_maxburst;
+	mchan->swidth = cfg->src_addr_width;
 	mchan->dst_per_paddr = cfg->dst_addr;
 	mchan->dst_tcd_nunits = cfg->dst_maxburst;
+	mchan->dwidth = cfg->dst_addr_width;
 
 	/* Apply defaults */
 	if (mchan->src_tcd_nunits == 0)
...@@ -875,7 +921,6 @@ static int mpc_dma_probe(struct platform_device *op)
 
 	mdma = devm_kzalloc(dev, sizeof(struct mpc_dma), GFP_KERNEL);
 	if (!mdma) {
-		dev_err(dev, "Memory exhausted!\n");
 		retval = -ENOMEM;
 		goto err;
 	}
...@@ -999,7 +1044,8 @@ static int mpc_dma_probe(struct platform_device *op)
 		out_be32(&mdma->regs->dmaerrl, 0xFFFF);
 	} else {
 		out_be32(&mdma->regs->dmacr, MPC_DMA_DMACR_EDCG |
-					MPC_DMA_DMACR_ERGA | MPC_DMA_DMACR_ERCA);
+					MPC_DMA_DMACR_ERGA |
+					MPC_DMA_DMACR_ERCA);
 
 		/* Disable hardware DMA requests */
 		out_be32(&mdma->regs->dmaerqh, 0);
...
...@@ -31,6 +31,12 @@
 #include "dmaengine.h"
 #include "mv_xor.h"
 
+enum mv_xor_type {
+	XOR_ORION,
+	XOR_ARMADA_38X,
+	XOR_ARMADA_37XX,
+};
+
 enum mv_xor_mode {
 	XOR_MODE_IN_REG,
 	XOR_MODE_IN_DESC,
...@@ -477,7 +483,7 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
 	BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
 
 	dev_dbg(mv_chan_to_devp(mv_chan),
-		"%s src_cnt: %d len: %u dest %pad flags: %ld\n",
+		"%s src_cnt: %d len: %zu dest %pad flags: %ld\n",
 		__func__, src_cnt, len, &dest, flags);
 
 	sw_desc = mv_chan_alloc_slot(mv_chan);
...@@ -933,7 +939,7 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
 static struct mv_xor_chan *
 mv_xor_channel_add(struct mv_xor_device *xordev,
 		   struct platform_device *pdev,
-		   int idx, dma_cap_mask_t cap_mask, int irq, int op_in_desc)
+		   int idx, dma_cap_mask_t cap_mask, int irq)
 {
 	int ret = 0;
 	struct mv_xor_chan *mv_chan;
...@@ -945,7 +951,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 	mv_chan->idx = idx;
 	mv_chan->irq = irq;
-	mv_chan->op_in_desc = op_in_desc;
+	if (xordev->xor_type == XOR_ORION)
+		mv_chan->op_in_desc = XOR_MODE_IN_REG;
+	else
+		mv_chan->op_in_desc = XOR_MODE_IN_DESC;
 
 	dma_dev = &mv_chan->dmadev;
...@@ -1085,6 +1094,33 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
 	writel(0, base + WINDOW_OVERRIDE_CTRL(1));
 }
+static void
+mv_xor_conf_mbus_windows_a3700(struct mv_xor_device *xordev)
+{
+	void __iomem *base = xordev->xor_high_base;
+	u32 win_enable = 0;
+	int i;
+
+	for (i = 0; i < 8; i++) {
+		writel(0, base + WINDOW_BASE(i));
+		writel(0, base + WINDOW_SIZE(i));
+		if (i < 4)
+			writel(0, base + WINDOW_REMAP_HIGH(i));
+	}
+	/*
+	 * For Armada3700 open default 4GB Mbus window. The dram
+	 * related configuration are done at AXIS level.
+	 */
+	writel(0xffff0000, base + WINDOW_SIZE(0));
+	win_enable |= 1;
+	win_enable |= 3 << 16;
+
+	writel(win_enable, base + WINDOW_BAR_ENABLE(0));
+	writel(win_enable, base + WINDOW_BAR_ENABLE(1));
+	writel(0, base + WINDOW_OVERRIDE_CTRL(0));
+	writel(0, base + WINDOW_OVERRIDE_CTRL(1));
+}
+
 /*
  * Since this XOR driver is basically used only for RAID5, we don't
  * need to care about synchronizing ->suspend with DMA activity,
...@@ -1129,6 +1165,11 @@ static int mv_xor_resume(struct platform_device *dev)
 				      XOR_INTR_MASK(mv_chan));
 	}
 
+	if (xordev->xor_type == XOR_ARMADA_37XX) {
+		mv_xor_conf_mbus_windows_a3700(xordev);
+		return 0;
+	}
+
 	dram = mv_mbus_dram_info();
 	if (dram)
 		mv_xor_conf_mbus_windows(xordev, dram);
...@@ -1137,8 +1178,9 @@ static int mv_xor_resume(struct platform_device *dev)
 }
 
 static const struct of_device_id mv_xor_dt_ids[] = {
-	{ .compatible = "marvell,orion-xor", .data = (void *)XOR_MODE_IN_REG },
-	{ .compatible = "marvell,armada-380-xor", .data = (void *)XOR_MODE_IN_DESC },
+	{ .compatible = "marvell,orion-xor", .data = (void *)XOR_ORION },
+	{ .compatible = "marvell,armada-380-xor", .data = (void *)XOR_ARMADA_38X },
+	{ .compatible = "marvell,armada-3700-xor", .data = (void *)XOR_ARMADA_37XX },
 	{},
 };
...@@ -1152,7 +1194,6 @@ static int mv_xor_probe(struct platform_device *pdev)
 	struct resource *res;
 	unsigned int max_engines, max_channels;
 	int i, ret;
-	int op_in_desc;
 
 	dev_notice(&pdev->dev, "Marvell shared XOR driver\n");
...@@ -1180,12 +1221,30 @@ static int mv_xor_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, xordev);
 
+	/*
+	 * We need to know which type of XOR device we use before
+	 * setting up. In non-dt case it can only be the legacy one.
+	 */
+	xordev->xor_type = XOR_ORION;
+	if (pdev->dev.of_node) {
+		const struct of_device_id *of_id =
+			of_match_device(mv_xor_dt_ids,
+					&pdev->dev);
+
+		xordev->xor_type = (uintptr_t)of_id->data;
+	}
+
 	/*
 	 * (Re-)program MBUS remapping windows if we are asked to.
 	 */
-	dram = mv_mbus_dram_info();
-	if (dram)
-		mv_xor_conf_mbus_windows(xordev, dram);
+	if (xordev->xor_type == XOR_ARMADA_37XX) {
+		mv_xor_conf_mbus_windows_a3700(xordev);
+	} else {
+		dram = mv_mbus_dram_info();
+		if (dram)
+			mv_xor_conf_mbus_windows(xordev, dram);
+	}
 
 	/* Not all platforms can gate the clock, so it is not
 	 * an error if the clock does not exists.
...@@ -1199,9 +1258,13 @@ static int mv_xor_probe(struct platform_device *pdev)
 	 * order for async_tx to perform well. So we limit the number
 	 * of engines and channels so that we take into account this
 	 * constraint. Note that we also want to use channels from
-	 * separate engines when possible.
+	 * separate engines when possible. For dual-CPU Armada 3700
+	 * SoC with single XOR engine allow using its both channels.
 	 */
 	max_engines = num_present_cpus();
-	max_channels = min_t(unsigned int,
-			     MV_XOR_MAX_CHANNELS,
-			     DIV_ROUND_UP(num_present_cpus(), 2));
+	if (xordev->xor_type == XOR_ARMADA_37XX)
+		max_channels = num_present_cpus();
+	else
+		max_channels = min_t(unsigned int,
+				     MV_XOR_MAX_CHANNELS,
+				     DIV_ROUND_UP(num_present_cpus(), 2));
...@@ -1212,15 +1275,11 @@ static int mv_xor_probe(struct platform_device *pdev)
 	if (pdev->dev.of_node) {
 		struct device_node *np;
 		int i = 0;
-		const struct of_device_id *of_id =
-			of_match_device(mv_xor_dt_ids,
-					&pdev->dev);
 
 		for_each_child_of_node(pdev->dev.of_node, np) {
 			struct mv_xor_chan *chan;
 			dma_cap_mask_t cap_mask;
 			int irq;
-			op_in_desc = (int)of_id->data;
 
 			if (i >= max_channels)
 				continue;
...@@ -1237,7 +1296,7 @@ static int mv_xor_probe(struct platform_device *pdev)
 			}
 
 			chan = mv_xor_channel_add(xordev, pdev, i,
-						  cap_mask, irq, op_in_desc);
+						  cap_mask, irq);
 			if (IS_ERR(chan)) {
 				ret = PTR_ERR(chan);
 				irq_dispose_mapping(irq);
...@@ -1266,8 +1325,7 @@ static int mv_xor_probe(struct platform_device *pdev)
 		}
 
 		chan = mv_xor_channel_add(xordev, pdev, i,
-					  cd->cap_mask, irq,
-					  XOR_MODE_IN_REG);
+					  cd->cap_mask, irq);
 		if (IS_ERR(chan)) {
 			ret = PTR_ERR(chan);
 			goto err_channel_add;
...
...@@ -85,6 +85,7 @@ struct mv_xor_device {
 	void __iomem	     *xor_high_base;
 	struct clk	     *clk;
 	struct mv_xor_chan   *channels[MV_XOR_MAX_CHANNELS];
+	int		     xor_type;
 };
 
 /**
...
...@@ -240,8 +240,9 @@ struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
 	struct of_phandle_args	dma_spec;
 	struct of_dma		*ofdma;
 	struct dma_chan		*chan;
-	int count, i;
+	int count, i, start;
 	int ret_no_channel = -ENODEV;
+	static atomic_t last_index;
 
 	if (!np || !name) {
 		pr_err("%s: not enough information provided\n", __func__);
...@@ -259,8 +260,15 @@ struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
 		return ERR_PTR(-ENODEV);
 	}
 
+	/*
+	 * approximate an average distribution across multiple
+	 * entries with the same name
+	 */
+	start = atomic_inc_return(&last_index);
+
 	for (i = 0; i < count; i++) {
-		if (of_dma_match_channel(np, name, i, &dma_spec))
+		if (of_dma_match_channel(np, name,
+					 (i + start) % count,
+					 &dma_spec))
 			continue;
 
 		mutex_lock(&of_dma_lock);
...
...@@ -117,6 +117,7 @@ struct pxad_chan {
 	/* protected by vc->lock */
 	struct pxad_phy		*phy;
 	struct dma_pool		*desc_pool;	/* Descriptors pool */
+	dma_cookie_t		bus_error;
 };
 
 struct pxad_device {
...@@ -563,6 +564,7 @@ static void pxad_launch_chan(struct pxad_chan *chan,
 			return;
 		}
 	}
+	chan->bus_error = 0;
 
 	/*
 	 * Program the descriptor's address into the DMA controller,
...@@ -666,6 +668,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
 	struct virt_dma_desc *vd, *tmp;
 	unsigned int dcsr;
 	unsigned long flags;
+	dma_cookie_t last_started = 0;
 
 	BUG_ON(!chan);
...@@ -678,6 +681,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
 		dev_dbg(&chan->vc.chan.dev->device,
 			"%s(): checking txd %p[%x]: completed=%d\n",
 			__func__, vd, vd->tx.cookie, is_desc_completed(vd));
+		last_started = vd->tx.cookie;
 		if (to_pxad_sw_desc(vd)->cyclic) {
 			vchan_cyclic_callback(vd);
 			break;
...@@ -690,7 +694,12 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
 		}
 	}
 
-	if (dcsr & PXA_DCSR_STOPSTATE) {
+	if (dcsr & PXA_DCSR_BUSERR) {
+		chan->bus_error = last_started;
+		phy_disable(phy);
+	}
+
+	if (!chan->bus_error && dcsr & PXA_DCSR_STOPSTATE) {
 		dev_dbg(&chan->vc.chan.dev->device,
 			"%s(): channel stopped, submitted_empty=%d issued_empty=%d",
 			__func__,
...@@ -1249,6 +1258,9 @@ static enum dma_status pxad_tx_status(struct dma_chan *dchan,
 	struct pxad_chan *chan = to_pxad_chan(dchan);
 	enum dma_status ret;
 
+	if (cookie == chan->bus_error)
+		return DMA_ERROR;
+
 	ret = dma_cookie_status(dchan, cookie, txstate);
 	if (likely(txstate && (ret != DMA_ERROR)))
 		dma_set_residue(txstate, pxad_residue(chan, cookie));
...@@ -1321,7 +1333,7 @@ static int pxad_init_phys(struct platform_device *op,
 	return 0;
 }
 
-static const struct of_device_id const pxad_dt_ids[] = {
+static const struct of_device_id pxad_dt_ids[] = {
 	{ .compatible = "marvell,pdma-1.0", },
 	{}
 };
...
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
 hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) += hdma.o
+hdma-objs := hidma_ll.o hidma.o hidma_dbg.o
...@@ -342,7 +342,7 @@ static const struct reg_offset_data bam_v1_7_reg_info[] = {
 #define BAM_DESC_FIFO_SIZE	SZ_32K
 #define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
-#define BAM_MAX_DATA_SIZE	(SZ_32K - 8)
+#define BAM_FIFO_SIZE	(SZ_32K - 8)
 
 struct bam_chan {
 	struct virt_dma_chan vc;
...@@ -387,6 +387,7 @@ struct bam_device {
 	/* execution environment ID, from DT */
 	u32 ee;
+	bool controlled_remotely;
 
 	const struct reg_offset_data *layout;
...@@ -458,7 +459,7 @@ static void bam_chan_init_hw(struct bam_chan *bchan,
 	 */
 	writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
			bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
-	writel_relaxed(BAM_DESC_FIFO_SIZE,
+	writel_relaxed(BAM_FIFO_SIZE,
			bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
 
 	/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
...@@ -604,7 +605,7 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 
 	/* calculate number of required entries */
 	for_each_sg(sgl, sg, sg_len, i)
-		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
+		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_FIFO_SIZE);
 
 	/* allocate enough room to accomodate the number of entries */
 	async_desc = kzalloc(sizeof(*async_desc) +
...@@ -635,10 +636,10 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 			desc->addr = cpu_to_le32(sg_dma_address(sg) +
						 curr_offset);
 
-			if (remainder > BAM_MAX_DATA_SIZE) {
-				desc->size = cpu_to_le16(BAM_MAX_DATA_SIZE);
-				remainder -= BAM_MAX_DATA_SIZE;
-				curr_offset += BAM_MAX_DATA_SIZE;
+			if (remainder > BAM_FIFO_SIZE) {
+				desc->size = cpu_to_le16(BAM_FIFO_SIZE);
+				remainder -= BAM_FIFO_SIZE;
+				curr_offset += BAM_FIFO_SIZE;
 			} else {
 				desc->size = cpu_to_le16(remainder);
 				remainder = 0;
...@@ -801,13 +802,17 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
 	if (srcs & P_IRQ)
 		tasklet_schedule(&bdev->task);
 
-	if (srcs & BAM_IRQ)
+	if (srcs & BAM_IRQ) {
 		clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
 
-	/* don't allow reorder of the various accesses to the BAM registers */
-	mb();
+		/*
+		 * don't allow reorder of the various accesses to the BAM
+		 * registers
+		 */
+		mb();
 
-	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+		writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+	}
 
 	return IRQ_HANDLED;
 }
...@@ -1038,6 +1043,9 @@ static int bam_init(struct bam_device *bdev)
 	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
 	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
 
+	if (bdev->controlled_remotely)
+		return 0;
+
 	/* s/w reset bam */
 	/* after reset all pipes are disabled and idle */
 	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
...@@ -1125,6 +1133,9 @@ static int bam_dma_probe(struct platform_device *pdev)
 		return ret;
 	}
 
+	bdev->controlled_remotely = of_property_read_bool(pdev->dev.of_node,
+						"qcom,controlled-remotely");
+
 	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
 	if (IS_ERR(bdev->bamclk))
 		return PTR_ERR(bdev->bamclk);
...@@ -1163,7 +1174,7 @@ static int bam_dma_probe(struct platform_device *pdev)
 	/* set max dma segment size */
 	bdev->common.dev = bdev->dev;
 	bdev->common.dev->dma_parms = &bdev->dma_parms;
-	ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
+	ret = dma_set_max_seg_size(bdev->common.dev, BAM_FIFO_SIZE);
 	if (ret) {
 		dev_err(bdev->dev, "cannot set maximum segment size\n");
 		goto err_bam_channel_exit;
...@@ -1234,6 +1245,9 @@ static int bam_dma_remove(struct platform_device *pdev)
 		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
 		tasklet_kill(&bdev->channels[i].vc.task);
 
+		if (!bdev->channels[i].fifo_virt)
+			continue;
+
 		dma_free_wc(bdev->dev, BAM_DESC_FIFO_SIZE,
 			    bdev->channels[i].fifo_virt,
 			    bdev->channels[i].fifo_phys);
...
 /*
  * Qualcomm Technologies HIDMA DMA engine interface
  *
- * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 and
...@@ -404,7 +404,7 @@ static int hidma_terminate_channel(struct dma_chan *chan)
 	spin_unlock_irqrestore(&mchan->lock, irqflags);
 
 	/* this suspends the existing transfer */
-	rc = hidma_ll_pause(dmadev->lldev);
+	rc = hidma_ll_disable(dmadev->lldev);
 	if (rc) {
 		dev_err(dmadev->ddev.dev, "channel did not pause\n");
 		goto out;
...@@ -427,7 +427,7 @@ static int hidma_terminate_channel(struct dma_chan *chan)
 		list_move(&mdesc->node, &mchan->free);
 	}
 
-	rc = hidma_ll_resume(dmadev->lldev);
+	rc = hidma_ll_enable(dmadev->lldev);
 out:
 	pm_runtime_mark_last_busy(dmadev->ddev.dev);
 	pm_runtime_put_autosuspend(dmadev->ddev.dev);
...@@ -488,7 +488,7 @@ static int hidma_pause(struct dma_chan *chan)
 	dmadev = to_hidma_dev(mchan->chan.device);
 	if (!mchan->paused) {
 		pm_runtime_get_sync(dmadev->ddev.dev);
-		if (hidma_ll_pause(dmadev->lldev))
+		if (hidma_ll_disable(dmadev->lldev))
 			dev_warn(dmadev->ddev.dev, "channel did not stop\n");
 		mchan->paused = true;
 		pm_runtime_mark_last_busy(dmadev->ddev.dev);
...@@ -507,7 +507,7 @@ static int hidma_resume(struct dma_chan *chan)
 	dmadev = to_hidma_dev(mchan->chan.device);
 	if (mchan->paused) {
 		pm_runtime_get_sync(dmadev->ddev.dev);
-		rc = hidma_ll_resume(dmadev->lldev);
+		rc = hidma_ll_enable(dmadev->lldev);
 		if (!rc)
 			mchan->paused = false;
 		else
...@@ -530,6 +530,43 @@ static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
 	return hidma_ll_inthandler(chirq, lldev);
 }
 
+static ssize_t hidma_show_values(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_dev *mdev = platform_get_drvdata(pdev);
+
+	buf[0] = 0;
+	if (strcmp(attr->attr.name, "chid") == 0)
+		sprintf(buf, "%d\n", mdev->chidx);
+
+	return strlen(buf);
+}
+
+static int hidma_create_sysfs_entry(struct hidma_dev *dev, char *name,
+				    int mode)
+{
+	struct device_attribute *attrs;
+	char *name_copy;
+
+	attrs = devm_kmalloc(dev->ddev.dev, sizeof(struct device_attribute),
+			     GFP_KERNEL);
+	if (!attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(dev->ddev.dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	attrs->attr.name = name_copy;
+	attrs->attr.mode = mode;
+	attrs->show = hidma_show_values;
+	sysfs_attr_init(&attrs->attr);
+
+	return device_create_file(dev->ddev.dev, attrs);
+}
+
 static int hidma_probe(struct platform_device *pdev)
 {
 	struct hidma_dev *dmadev;
...@@ -644,6 +681,8 @@ static int hidma_probe(struct platform_device *pdev)
 	dmadev->irq = chirq;
 	tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
+	hidma_debug_init(dmadev);
+	hidma_create_sysfs_entry(dmadev, "chid", S_IRUGO);
 	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
 	platform_set_drvdata(pdev, dmadev);
 	pm_runtime_mark_last_busy(dmadev->ddev.dev);
...@@ -651,6 +690,7 @@ static int hidma_probe(struct platform_device *pdev)
 	return 0;
 
 uninit:
+	hidma_debug_uninit(dmadev);
 	hidma_ll_uninit(dmadev->lldev);
 dmafree:
 	if (dmadev)
...@@ -668,6 +708,7 @@ static int hidma_remove(struct platform_device *pdev)
 	pm_runtime_get_sync(dmadev->ddev.dev);
 	dma_async_device_unregister(&dmadev->ddev);
 	devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);
+	hidma_debug_uninit(dmadev);
 	hidma_ll_uninit(dmadev->lldev);
 	hidma_free(dmadev);
...@@ -689,7 +730,6 @@ static const struct of_device_id hidma_match[] = {
 	{.compatible = "qcom,hidma-1.0",},
 	{},
 };
-
 MODULE_DEVICE_TABLE(of, hidma_match);
 
 static struct platform_driver hidma_driver = {
...
 /*
  * Qualcomm Technologies HIDMA data structures
  *
- * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 and
...@@ -20,32 +20,29 @@
 #include <linux/interrupt.h>
 #include <linux/dmaengine.h>
 
-#define TRE_SIZE			32 /* each TRE is 32 bytes */
-#define TRE_CFG_IDX			0
-#define TRE_LEN_IDX			1
-#define TRE_SRC_LOW_IDX			2
-#define TRE_SRC_HI_IDX			3
-#define TRE_DEST_LOW_IDX		4
-#define TRE_DEST_HI_IDX			5
-
-struct hidma_tx_status {
-	u8 err_info;			/* error record in this transfer */
-	u8 err_code;			/* completion code */
-};
+#define HIDMA_TRE_SIZE			32 /* each TRE is 32 bytes */
+#define HIDMA_TRE_CFG_IDX		0
+#define HIDMA_TRE_LEN_IDX		1
+#define HIDMA_TRE_SRC_LOW_IDX		2
+#define HIDMA_TRE_SRC_HI_IDX		3
+#define HIDMA_TRE_DEST_LOW_IDX		4
+#define HIDMA_TRE_DEST_HI_IDX		5
 
 struct hidma_tre {
 	atomic_t allocated;		/* if this channel is allocated	    */
 	bool queued;			/* flag whether this is pending     */
 	u16 status;			/* status			    */
-	u32 chidx;			/* index of the tre		    */
+	u32 idx;			/* index of the tre		    */
 	u32 dma_sig;			/* signature of the tre		    */
 	const char *dev_name;		/* name of the device		    */
 	void (*callback)(void *data);	/* requester callback		    */
 	void *data;			/* Data associated with this channel*/
 	struct hidma_lldev *lldev;	/* lldma device pointer		    */
-	u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy */
+	u32 tre_local[HIDMA_TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy */
 	u32 tre_index;			/* the offset where this was written*/
 	u32 int_flags;			/* interrupt flags		    */
+	u8 err_info;			/* error record in this transfer    */
+	u8 err_code;			/* completion code		    */
 };
 
 struct hidma_lldev {
...@@ -61,22 +58,21 @@ struct hidma_lldev {
 	void __iomem *evca;		/* Event Channel address	  */
 	struct hidma_tre
 		**pending_tre_list;	/* Pointers to pending TREs	  */
-	struct hidma_tx_status
-		*tx_status_list;	/* Pointers to pending TREs status*/
 	s32 pending_tre_count;		/* Number of TREs pending	  */
 
 	void *tre_ring;			/* TRE ring			  */
-	dma_addr_t tre_ring_handle;	/* TRE ring to be shared with HW  */
+	dma_addr_t tre_dma;		/* TRE ring to be shared with HW  */
 	u32 tre_ring_size;		/* Byte size of the ring	  */
 	u32 tre_processed_off;		/* last processed TRE		  */
 
 	void *evre_ring;		/* EVRE ring			  */
-	dma_addr_t evre_ring_handle;	/* EVRE ring to be shared with HW */
+	dma_addr_t evre_dma;		/* EVRE ring to be shared with HW */
 	u32 evre_ring_size;		/* Byte size of the ring	  */
 	u32 evre_processed_off;		/* last processed EVRE		  */
 	u32 tre_write_offset;		/* TRE write location		  */
 	struct tasklet_struct task;	/* task delivering notifications  */
-	struct tasklet_struct rst_task;	/* task to reset HW		  */
 	DECLARE_KFIFO_PTR(handoff_fifo,
 		struct hidma_tre *);	/* pending TREs FIFO		  */
 };
...@@ -145,8 +141,8 @@ enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
 bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
 void hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
 void hidma_ll_start(struct hidma_lldev *llhndl);
-int hidma_ll_pause(struct hidma_lldev *llhndl);
-int hidma_ll_resume(struct hidma_lldev *llhndl);
+int hidma_ll_disable(struct hidma_lldev *lldev);
+int hidma_ll_enable(struct hidma_lldev *llhndl);
 void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
 	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
 int hidma_ll_setup(struct hidma_lldev *lldev);
...@@ -157,4 +153,6 @@ int hidma_ll_uninit(struct hidma_lldev *llhndl);
 irqreturn_t hidma_ll_inthandler(int irq, void *arg);
 void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
 	u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
 #endif
/*
* Qualcomm Technologies HIDMA debug file
*
* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/list.h>
#include <linux/pm_runtime.h>
#include "hidma.h"
static void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
{
struct hidma_lldev *lldev = llhndl;
struct hidma_tre *tre;
u32 length;
dma_addr_t src_start;
dma_addr_t dest_start;
u32 *tre_local;
if (tre_ch >= lldev->nr_tres) {
dev_err(lldev->dev, "invalid TRE number in chstats:%d", tre_ch);
return;
}
tre = &lldev->trepool[tre_ch];
seq_printf(s, "------Channel %d -----\n", tre_ch);
seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
seq_printf(s, "queued = 0x%x\n", tre->queued);
seq_printf(s, "err_info = 0x%x\n", tre->err_info);
seq_printf(s, "err_code = 0x%x\n", tre->err_code);
seq_printf(s, "status = 0x%x\n", tre->status);
seq_printf(s, "idx = 0x%x\n", tre->idx);
seq_printf(s, "dma_sig = 0x%x\n", tre->dma_sig);
seq_printf(s, "dev_name=%s\n", tre->dev_name);
seq_printf(s, "callback=%p\n", tre->callback);
seq_printf(s, "data=%p\n", tre->data);
seq_printf(s, "tre_index = 0x%x\n", tre->tre_index);
tre_local = &tre->tre_local[0];
src_start = tre_local[HIDMA_TRE_SRC_LOW_IDX];
src_start = ((u64) (tre_local[HIDMA_TRE_SRC_HI_IDX]) << 32) + src_start;
dest_start = tre_local[HIDMA_TRE_DEST_LOW_IDX];
dest_start += ((u64) (tre_local[HIDMA_TRE_DEST_HI_IDX]) << 32);
length = tre_local[HIDMA_TRE_LEN_IDX];
seq_printf(s, "src=%pap\n", &src_start);
seq_printf(s, "dest=%pap\n", &dest_start);
seq_printf(s, "length = 0x%x\n", length);
}
static void hidma_ll_devstats(struct seq_file *s, void *llhndl)
{
struct hidma_lldev *lldev = llhndl;
seq_puts(s, "------Device -----\n");
seq_printf(s, "lldev init = 0x%x\n", lldev->initialized);
seq_printf(s, "trch_state = 0x%x\n", lldev->trch_state);
seq_printf(s, "evch_state = 0x%x\n", lldev->evch_state);
seq_printf(s, "chidx = 0x%x\n", lldev->chidx);
seq_printf(s, "nr_tres = 0x%x\n", lldev->nr_tres);
seq_printf(s, "trca=%p\n", lldev->trca);
seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_dma);
seq_printf(s, "tre_ring_size = 0x%x\n", lldev->tre_ring_size);
seq_printf(s, "tre_processed_off = 0x%x\n", lldev->tre_processed_off);
seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
seq_printf(s, "evca=%p\n", lldev->evca);
seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_dma);
seq_printf(s, "evre_ring_size = 0x%x\n", lldev->evre_ring_size);
seq_printf(s, "evre_processed_off = 0x%x\n", lldev->evre_processed_off);
seq_printf(s, "tre_write_offset = 0x%x\n", lldev->tre_write_offset);
}
/*
* hidma_chan_stats: display HIDMA channel statistics
*
* Display the statistics for the current HIDMA virtual channel device.
*/
static int hidma_chan_stats(struct seq_file *s, void *unused)
{
struct hidma_chan *mchan = s->private;
struct hidma_desc *mdesc;
struct hidma_dev *dmadev = mchan->dmadev;
pm_runtime_get_sync(dmadev->ddev.dev);
seq_printf(s, "paused=%u\n", mchan->paused);
seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
seq_puts(s, "prepared\n");
list_for_each_entry(mdesc, &mchan->prepared, node)
hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
seq_puts(s, "active\n");
list_for_each_entry(mdesc, &mchan->active, node)
hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
seq_puts(s, "completed\n");
list_for_each_entry(mdesc, &mchan->completed, node)
hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
hidma_ll_devstats(s, mchan->dmadev->lldev);
pm_runtime_mark_last_busy(dmadev->ddev.dev);
pm_runtime_put_autosuspend(dmadev->ddev.dev);
return 0;
}
/*
* hidma_dma_info: display HIDMA device info
*
* Display the info for the current HIDMA device.
*/
static int hidma_dma_info(struct seq_file *s, void *unused)
{
struct hidma_dev *dmadev = s->private;
resource_size_t sz;
seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
seq_printf(s, "dev_trca_phys=%pa\n", &dmadev->trca_resource->start);
sz = resource_size(dmadev->trca_resource);
seq_printf(s, "dev_trca_size=%pa\n", &sz);
seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
seq_printf(s, "dev_evca_phys=%pa\n", &dmadev->evca_resource->start);
sz = resource_size(dmadev->evca_resource);
seq_printf(s, "dev_evca_size=%pa\n", &sz);
return 0;
}
static int hidma_chan_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, hidma_chan_stats, inode->i_private);
}
static int hidma_dma_info_open(struct inode *inode, struct file *file)
{
return single_open(file, hidma_dma_info, inode->i_private);
}
static const struct file_operations hidma_chan_fops = {
.open = hidma_chan_stats_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static const struct file_operations hidma_dma_fops = {
.open = hidma_dma_info_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
void hidma_debug_uninit(struct hidma_dev *dmadev)
{
debugfs_remove_recursive(dmadev->debugfs);
debugfs_remove_recursive(dmadev->stats);
}
int hidma_debug_init(struct hidma_dev *dmadev)
{
int rc = 0;
int chidx = 0;
struct list_head *position = NULL;
dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev), NULL);
if (!dmadev->debugfs) {
rc = -ENODEV;
return rc;
}
/* walk through the virtual channel list */
list_for_each(position, &dmadev->ddev.channels) {
struct hidma_chan *chan;
chan = list_entry(position, struct hidma_chan,
chan.device_node);
sprintf(chan->dbg_name, "chan%d", chidx);
chan->debugfs = debugfs_create_dir(chan->dbg_name,
dmadev->debugfs);
if (!chan->debugfs) {
rc = -ENOMEM;
goto cleanup;
}
chan->stats = debugfs_create_file("stats", S_IRUGO,
chan->debugfs, chan,
&hidma_chan_fops);
if (!chan->stats) {
rc = -ENOMEM;
goto cleanup;
}
chidx++;
}
dmadev->stats = debugfs_create_file("stats", S_IRUGO,
dmadev->debugfs, dmadev,
&hidma_dma_fops);
if (!dmadev->stats) {
rc = -ENOMEM;
goto cleanup;
}
return 0;
cleanup:
hidma_debug_uninit(dmadev);
return rc;
}
 /*
  * Qualcomm Technologies HIDMA DMA engine Management interface
  *
- * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 and
@@ -17,13 +17,14 @@
 #include <linux/acpi.h>
 #include <linux/of.h>
 #include <linux/property.h>
-#include <linux/interrupt.h>
-#include <linux/platform_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
 #include <linux/module.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
 #include <linux/bitops.h>
+#include <linux/dma-mapping.h>
 #include "hidma_mgmt.h"
@@ -298,5 +299,109 @@ static struct platform_driver hidma_mgmt_driver = {
	},
 };
-module_platform_driver(hidma_mgmt_driver);
+#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
static int object_counter;
static int __init hidma_mgmt_of_populate_channels(struct device_node *np)
{
struct platform_device *pdev_parent = of_find_device_by_node(np);
struct platform_device_info pdevinfo;
struct of_phandle_args out_irq;
struct device_node *child;
struct resource *res;
const __be32 *cell;
int ret = 0, size, i, num;
u64 addr, addr_size;
for_each_available_child_of_node(np, child) {
struct resource *res_iter;
struct platform_device *new_pdev;
cell = of_get_property(child, "reg", &size);
if (!cell) {
ret = -EINVAL;
goto out;
}
size /= sizeof(*cell);
num = size /
(of_n_addr_cells(child) + of_n_size_cells(child)) + 1;
/* allocate a resource array */
res = kcalloc(num, sizeof(*res), GFP_KERNEL);
if (!res) {
ret = -ENOMEM;
goto out;
}
/* read each reg value */
i = 0;
res_iter = res;
while (i < size) {
addr = of_read_number(&cell[i],
of_n_addr_cells(child));
i += of_n_addr_cells(child);
addr_size = of_read_number(&cell[i],
of_n_size_cells(child));
i += of_n_size_cells(child);
res_iter->start = addr;
res_iter->end = res_iter->start + addr_size - 1;
res_iter->flags = IORESOURCE_MEM;
res_iter++;
}
ret = of_irq_parse_one(child, 0, &out_irq);
if (ret)
goto out;
res_iter->start = irq_create_of_mapping(&out_irq);
res_iter->name = "hidma event irq";
res_iter->flags = IORESOURCE_IRQ;
memset(&pdevinfo, 0, sizeof(pdevinfo));
pdevinfo.fwnode = &child->fwnode;
pdevinfo.parent = pdev_parent ? &pdev_parent->dev : NULL;
pdevinfo.name = child->name;
pdevinfo.id = object_counter++;
pdevinfo.res = res;
pdevinfo.num_res = num;
pdevinfo.data = NULL;
pdevinfo.size_data = 0;
pdevinfo.dma_mask = DMA_BIT_MASK(64);
new_pdev = platform_device_register_full(&pdevinfo);
if (!new_pdev) {
ret = -ENODEV;
goto out;
}
of_dma_configure(&new_pdev->dev, child);
kfree(res);
res = NULL;
}
out:
kfree(res);
return ret;
}
#endif
static int __init hidma_mgmt_init(void)
{
#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
struct device_node *child;
for (child = of_find_matching_node(NULL, hidma_mgmt_match); child;
child = of_find_matching_node(child, hidma_mgmt_match)) {
/* device tree based firmware here */
hidma_mgmt_of_populate_channels(child);
of_node_put(child);
}
#endif
platform_driver_register(&hidma_mgmt_driver);
return 0;
}
module_init(hidma_mgmt_init);
 MODULE_LICENSE("GPL v2");
@@ -54,6 +54,7 @@
 #define TEGRA_APBDMA_CSR_ONCE			BIT(27)
 #define TEGRA_APBDMA_CSR_FLOW			BIT(21)
 #define TEGRA_APBDMA_CSR_REQ_SEL_SHIFT		16
+#define TEGRA_APBDMA_CSR_REQ_SEL_MASK		0x1F
 #define TEGRA_APBDMA_CSR_WCOUNT_MASK		0xFFFC

 /* STATUS register */
@@ -114,6 +115,8 @@
 /* Channel base address offset from APBDMA base address */
 #define TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET	0x1000

+#define TEGRA_APBDMA_SLAVE_ID_INVALID	(TEGRA_APBDMA_CSR_REQ_SEL_MASK + 1)
+
 struct tegra_dma;

 /*
@@ -353,8 +356,11 @@ static int tegra_dma_slave_config(struct dma_chan *dc,
	}
	memcpy(&tdc->dma_sconfig, sconfig, sizeof(*sconfig));
-	if (!tdc->slave_id)
+	if (tdc->slave_id == TEGRA_APBDMA_SLAVE_ID_INVALID) {
+		if (sconfig->slave_id > TEGRA_APBDMA_CSR_REQ_SEL_MASK)
+			return -EINVAL;
		tdc->slave_id = sconfig->slave_id;
+	}
	tdc->config_init = true;
	return 0;
 }
@@ -1236,7 +1242,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
	}
	pm_runtime_put(tdma->dev);

-	tdc->slave_id = 0;
+	tdc->slave_id = TEGRA_APBDMA_SLAVE_ID_INVALID;
 }

 static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
@@ -1246,6 +1252,11 @@ static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
	struct dma_chan *chan;
	struct tegra_dma_channel *tdc;

+	if (dma_spec->args[0] > TEGRA_APBDMA_CSR_REQ_SEL_MASK) {
+		dev_err(tdma->dev, "Invalid slave id: %d\n", dma_spec->args[0]);
+		return NULL;
+	}
+
	chan = dma_get_any_slave_channel(&tdma->dma_dev);
	if (!chan)
		return NULL;
@@ -1389,6 +1400,7 @@ static int tegra_dma_probe(struct platform_device *pdev)
				&tdma->dma_dev.channels);
		tdc->tdma = tdma;
		tdc->id = i;
+		tdc->slave_id = TEGRA_APBDMA_SLAVE_ID_INVALID;

		tasklet_init(&tdc->tasklet, tegra_dma_tasklet,
			     (unsigned long)tdc);
...
@@ -144,16 +144,16 @@ static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
		struct dw_dma_slave *slave = c->tx_param;

		slave->dma_dev = &dma_dev->dev;
-		slave->src_master = 1;
-		slave->dst_master = 0;
+		slave->m_master = 0;
+		slave->p_master = 1;
	}

	if (c->rx_param) {
		struct dw_dma_slave *slave = c->rx_param;

		slave->dma_dev = &dma_dev->dev;
-		slave->src_master = 1;
-		slave->dst_master = 0;
+		slave->m_master = 0;
+		slave->p_master = 1;
	}

	spi_pdata.dma_filter = lpss_dma_filter;
...
@@ -1454,13 +1454,13 @@ byt_serial_setup(struct serial_private *priv,
		return -EINVAL;
	}

-	rx_param->src_master = 1;
-	rx_param->dst_master = 0;
+	rx_param->m_master = 0;
+	rx_param->p_master = 1;

	dma->rxconf.src_maxburst = 16;

-	tx_param->src_master = 1;
-	tx_param->dst_master = 0;
+	tx_param->m_master = 0;
+	tx_param->p_master = 1;

	dma->txconf.dst_maxburst = 16;
...
@@ -86,7 +86,7 @@ struct pl08x_channel_data {
  * @mem_buses: buses which memory can be accessed from: PL08X_AHB1 | PL08X_AHB2
  */
 struct pl08x_platform_data {
-	const struct pl08x_channel_data *slave_channels;
+	struct pl08x_channel_data *slave_channels;
	unsigned int num_slave_channels;
	struct pl08x_channel_data memcpy_channel;
	int (*get_xfer_signal)(const struct pl08x_channel_data *);
...
@@ -27,6 +27,7 @@ struct dw_dma;
  * @regs: memory mapped I/O space
  * @clk: hclk clock
  * @dw: struct dw_dma that is filed by dw_dma_probe()
+ * @pdata: pointer to platform data
  */
 struct dw_dma_chip {
	struct device	*dev;
@@ -34,10 +35,12 @@ struct dw_dma_chip {
	void __iomem	*regs;
	struct clk	*clk;
	struct dw_dma	*dw;
+	const struct dw_dma_platform_data	*pdata;
 };

 /* Export to the platform drivers */
-int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata);
+int dw_dma_probe(struct dw_dma_chip *chip);
 int dw_dma_remove(struct dw_dma_chip *chip);

 /* DMA API extensions */
...
@@ -41,6 +41,20 @@ struct xilinx_vdma_config {
	int ext_fsync;
 };

+/**
+ * enum xdma_ip_type: DMA IP type.
+ *
+ * XDMA_TYPE_AXIDMA: Axi dma ip.
+ * XDMA_TYPE_CDMA: Axi cdma ip.
+ * XDMA_TYPE_VDMA: Axi vdma ip.
+ *
+ */
+enum xdma_ip_type {
+	XDMA_TYPE_AXIDMA = 0,
+	XDMA_TYPE_CDMA,
+	XDMA_TYPE_VDMA,
+};
+
 int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
				   struct xilinx_vdma_config *cfg);
...
@@ -21,15 +21,15 @@
  * @dma_dev: required DMA master device
  * @src_id: src request line
  * @dst_id: dst request line
- * @src_master: src master for transfers on allocated channel.
- * @dst_master: dest master for transfers on allocated channel.
+ * @m_master: memory master for transfers on allocated channel
+ * @p_master: peripheral master for transfers on allocated channel
  */
 struct dw_dma_slave {
	struct device		*dma_dev;
	u8			src_id;
	u8			dst_id;
-	u8			src_master;
-	u8			dst_master;
+	u8			m_master;
+	u8			p_master;
 };

 /**
@@ -43,7 +43,7 @@ struct dw_dma_slave {
  * @block_size: Maximum block size supported by the controller
  * @nr_masters: Number of AHB masters supported by the controller
  * @data_width: Maximum data width supported by hardware per AHB master
- *		(0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
+ *		(in bytes, power of 2)
  */
 struct dw_dma_platform_data {
	unsigned int	nr_channels;
@@ -55,7 +55,7 @@ struct dw_dma_platform_data {
 #define CHAN_PRIORITY_ASCENDING		0	/* chan0 highest */
 #define CHAN_PRIORITY_DESCENDING	1	/* chan7 highest */
	unsigned char	chan_priority;
-	unsigned short	block_size;
+	unsigned int	block_size;
	unsigned char	nr_masters;
	unsigned char	data_width[DW_DMA_MAX_NR_MASTERS];
 };
...