Commit 35271227 authored by Linus Torvalds

Merge tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have aded a new capability for scatter-gathered memset
  using dmaengine APIs.  This is supported in xdmac & hdmac drivers

  We have added support for reusing descriptors for examples like video
  buffers etc.  Driver will follow

  The behaviour of descriptor ack has been clarified and documented

  New devices added are:
   - dma controller in sun[457]i SoCs
   - lpc18xx dmamux
   - ZTE ZX296702 dma controller
   - Analog Devices AXI-DMAC DMA controller
   - eDMA support for dma-crossbar
   - imx6sx support in imx-sdma driver
   - imx-sdma device to device support

  Other:
   - jz4780 fixes
   - ioatdma: large refactor and cleanup, removing the deprecated ioat v1
     and v2 support, plus fixes
   - ACPI support in X-Gene DMA engine driver
   - ipu irq fixes
   - mvxor fixes
   - minor fixes spread throughout drivers"
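
For context on the new scatter-gather memset capability mentioned above, here is a
minimal, hypothetical client-side sketch (not taken from this merge). It assumes
"chan" is a dmaengine channel whose device advertises the DMA_MEMSET_SG capability
and therefore provides the device_prep_dma_memset_sg callback added by this series:

	/* Hypothetical client sketch: zero a mapped scatterlist using the
	 * new scatter-gather memset capability.  'chan', 'sgl' and 'sg_len'
	 * are assumed to have been set up by the caller.
	 */
	struct dma_async_tx_descriptor *txd;

	txd = chan->device->device_prep_dma_memset_sg(chan, sgl, sg_len,
						      0x00, DMA_PREP_INTERRUPT);
	if (txd) {
		dmaengine_submit(txd);
		dma_async_issue_pending(chan);
	}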

[ The Kconfig and Makefile entries got re-sorted alphabetically, and I
  handled the conflict with the new Intel integrated IDMA driver by
  slightly mis-sorting it on purpose: "IDMA64" got sorted after "IMX" in
  order to keep the Intel entries together.  I think it might be a good
  idea to just rename the IDMA64 config entry to INTEL_IDMA64 to make
  the sorting be a true sort, not this mishmash.

  Also, this merge disables the COMPILE_TEST for the sun4i DMA
  controller, because it does not compile cleanly at all.     - Linus ]

* tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (89 commits)
  dmaengine: ioatdma: add Broadwell EP ioatdma PCI dev IDs
  dmaengine :ipu: change ipu_irq_handler() to remove compile warning
  dmaengine: ioatdma: Fix variable array length
  dmaengine: ioatdma: fix sparse "error" with prep lock
  dmaengine: hdmac: Add memset capabilities
  dmaengine: sort the sh Makefile
  dmaengine: sort the sh Kconfig
  dmaengine: sort the dw Kconfig
  dmaengine: sort the Kconfig
  dmaengine: sort the makefile
  drivers/dma: make mv_xor.c driver explicitly non-modular
  dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller
  devicetree: Add bindings documentation for Analog Devices AXI-DMAC
  dmaengine: xgene-dma: Fix the lock to allow client for further submission of requests
  dmaengine: ioatdma: fix coccinelle warning
  dmaengine: ioatdma: fix zero day warning on incompatible pointer type
  dmaengine: tegra-apb: Simplify locking for device using global pause
  dmaengine: tegra-apb: Remove unnecessary return statements and variables
  dmaengine: tegra-apb: Avoid unnecessary channel base address calculation
  dmaengine: tegra-apb: Remove unused variables
  ...
parents 88a99886 ab98193d
Analog Devices AXI-DMAC DMA controller
Required properties:
- compatible: Must be "adi,axi-dmac-1.00.a".
- reg: Specification for the controller's memory-mapped register map.
- interrupts: Specification for the controller's interrupt.
- clocks: Phandle and specifier for the controller's AXI interface clock.
- #dma-cells: Must be 1.
Required sub-nodes:
- adi,channels: This sub-node must contain a sub-node for each DMA channel. For
the channel sub-nodes the following bindings apply. They must match the
configuration options of the peripheral as it was instantiated.
Required properties for adi,channels sub-node:
- #size-cells: Must be 0
- #address-cells: Must be 1
Required channel sub-node properties:
- reg: Which channel this node refers to.
- adi,length-width: Width of the DMA transfer length register.
- adi,source-bus-width,
adi,destination-bus-width: Width of the source or destination bus in bits.
- adi,source-bus-type,
adi,destination-bus-type: Type of the source or destination bus. Must be one
of the following:
0 (AXI_DMAC_TYPE_AXI_MM): Memory mapped AXI interface
1 (AXI_DMAC_TYPE_AXI_STREAM): Streaming AXI interface
2 (AXI_DMAC_TYPE_AXI_FIFO): FIFO interface
Optional channel properties:
- adi,cyclic: Must be set if the channel supports hardware cyclic DMA
transfers.
- adi,2d: Must be set if the channel supports hardware 2D DMA transfers.
DMA clients connected to the AXI-DMAC DMA controller must use the format
described in dma.txt, using a one-cell specifier. The value of the
specifier is the DMA channel index; a hypothetical client node is shown
after the controller example below.
Example:
dma: dma@7c420000 {
	compatible = "adi,axi-dmac-1.00.a";
	reg = <0x7c420000 0x10000>;
	interrupts = <0 57 0>;
	clocks = <&clkc 16>;
	#dma-cells = <1>;

	adi,channels {
		#size-cells = <0>;
		#address-cells = <1>;

		dma-channel@0 {
			reg = <0>;
			adi,source-bus-width = <32>;
			adi,source-bus-type = <ADI_AXI_DMAC_TYPE_MM_AXI>;
			adi,destination-bus-width = <64>;
			adi,destination-bus-type = <ADI_AXI_DMAC_TYPE_FIFO>;
		};
	};
};
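
A minimal, hypothetical client node (not taken from an in-tree device tree)
illustrating the one-cell specifier described above, requesting channel 0 of the
controller:

	peripheral@79000000 {
		/* Hypothetical DMA client; only the DMA properties matter here. */
		compatible = "vendor,example-peripheral";
		reg = <0x79000000 0x1000>;
		dmas = <&dma 0>;
		dma-names = "rx";
	};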
* ARM PrimeCell PL080 and PL081 and derivatives DMA controller
Required properties:
- compatible: "arm,pl080", "arm,primecell";
"arm,pl081", "arm,primecell";
- reg: Address range of the PL08x registers
- interrupts: The PL08x interrupt number
- clocks: The clock running the IP core
- clock-names: Must contain "apb_pclk"
- lli-bus-interface-ahb1: if AHB master 1 is eligible for fetching LLIs
- lli-bus-interface-ahb2: if AHB master 2 is eligible for fetching LLIs
- mem-bus-interface-ahb1: if AHB master 1 is eligible for fetching memory contents
- mem-bus-interface-ahb2: if AHB master 2 is eligible for fetching memory contents
- #dma-cells: must be <2>. First cell should contain the DMA request,
second cell should contain either 1 or 2 depending on
which AHB master that is used.
Optional properties:
- dma-channels: contains the total number of DMA channels supported by the DMAC
- dma-requests: contains the total number of DMA requests supported by the DMAC
- memcpy-burst-size: the size of the bursts for memcpy: 1, 4, 8, 16, 32,
  64, 128 or 256 bytes are legal values
- memcpy-bus-width: the bus width used for memcpy: 8, 16 or 32 are legal
values
Clients
Required properties:
- dmas: List of DMA controller phandle, request channel and AHB master id
- dma-names: Names of the aforementioned requested channels
Example:
dmac0: dma-controller@10130000 {
	compatible = "arm,pl080", "arm,primecell";
	reg = <0x10130000 0x1000>;
	interrupt-parent = <&vica>;
	interrupts = <15>;
	clocks = <&hclkdma0>;
	clock-names = "apb_pclk";
	lli-bus-interface-ahb1;
	lli-bus-interface-ahb2;
	mem-bus-interface-ahb2;
	memcpy-burst-size = <256>;
	memcpy-bus-width = <32>;
	#dma-cells = <2>;
};

device@40008000 {
	...
	dmas = <&dmac0 0 2
		&dmac0 1 2>;
	dma-names = "tx", "rx";
	...
};
NXP LPC18xx/43xx DMA MUX (DMA request router)
Required properties:
- compatible: "nxp,lpc1850-dmamux"
- reg: Memory map for accessing module
- #dma-cells: Should be set to <3>.
* 1st cell contains the master DMA request signal
* 2nd cell contains the mux value (0-3) for the peripheral
* 3rd cell contains either 1 or 2 depending on the AHB
master used.
- dma-requests: Number of DMA requests for the mux
- dma-masters: phandle pointing to the DMA controller
The DMA controller node needs to have the following properties:
- dma-requests: Number of DMA requests the controller can handle
Example:
dmac: dma@40002000 {
	compatible = "nxp,lpc1850-gpdma", "arm,pl080", "arm,primecell";
	arm,primecell-periphid = <0x00041080>;
	reg = <0x40002000 0x1000>;
	interrupts = <2>;
	clocks = <&ccu1 CLK_CPU_DMA>;
	clock-names = "apb_pclk";
	#dma-cells = <2>;
	dma-channels = <8>;
	dma-requests = <16>;
	lli-bus-interface-ahb1;
	lli-bus-interface-ahb2;
	mem-bus-interface-ahb1;
	mem-bus-interface-ahb2;
	memcpy-burst-size = <256>;
	memcpy-bus-width = <32>;
};

dmamux: dma-mux {
	compatible = "nxp,lpc1850-dmamux";
	#dma-cells = <3>;
	dma-requests = <64>;
	dma-masters = <&dmac>;
};

uart0: serial@40081000 {
	compatible = "nxp,lpc1850-uart", "ns16550a";
	reg = <0x40081000 0x1000>;
	reg-shift = <2>;
	interrupts = <24>;
	clocks = <&ccu2 CLK_APB0_UART0>, <&ccu1 CLK_CPU_UART0>;
	clock-names = "uartclk", "reg";
	dmas = <&dmamux 1 1 2
		&dmamux 2 1 2>;
	dma-names = "tx", "rx";
};
......@@ -12,10 +12,13 @@ XOR engine has. Those sub-nodes have the following required
properties:
- interrupts: interrupt of the XOR channel
And the following optional properties:
The sub-nodes used to contain one or several of the following
properties, but they are now deprecated:
- dmacap,memcpy to indicate that the XOR channel is capable of memcpy operations
- dmacap,memset to indicate that the XOR channel is capable of memset operations
- dmacap,xor to indicate that the XOR channel is capable of xor operations
- dmacap,interrupt to indicate that the XOR channel is capable of
generating interrupts
Example:
......@@ -28,13 +31,8 @@ xor@d0060900 {
xor00 {
interrupts = <51>;
dmacap,memcpy;
dmacap,xor;
};
xor01 {
interrupts = <52>;
dmacap,memcpy;
dmacap,xor;
dmacap,memset;
};
};
Allwinner A10 DMA Controller
This driver follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Must be "allwinner,sun4i-a10-dma"
- reg: Should contain the registers base address and length
- interrupts: Should contain a reference to the interrupt used by this device
- clocks: Should contain a reference to the parent AHB clock
- #dma-cells : Should be 2, first cell denoting normal or dedicated dma,
second cell holding the request line number.
Example:
dma: dma-controller@01c02000 {
	compatible = "allwinner,sun4i-a10-dma";
	reg = <0x01c02000 0x1000>;
	interrupts = <27>;
	clocks = <&ahb_gates 6>;
	#dma-cells = <2>;
};
Clients:
DMA clients connected to the Allwinner A10 DMA controller must use the
format described in the dma.txt file, using a three-cell specifier for
each channel: a phandle plus two integer cells.
The three cells in order are:
1. A phandle pointing to the DMA controller.
2. Whether it is using normal (0) or dedicated (1) channels
3. The port ID as specified in the datasheet
Example:
spi2: spi@01c17000 {
	compatible = "allwinner,sun4i-a10-spi";
	reg = <0x01c17000 0x1000>;
	interrupts = <0 12 4>;
	clocks = <&ahb_gates 22>, <&spi2_clk>;
	clock-names = "ahb", "mod";
	dmas = <&dma 1 29>, <&dma 1 28>;
	dma-names = "rx", "tx";
	status = "disabled";
	#address-cells = <1>;
	#size-cells = <0>;
};
* ZTE ZX296702 DMA controller
Required properties:
- compatible: Should be "zte,zx296702-dma"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain the one interrupt shared by all channels
- #dma-cells: see dma.txt; should be 1, the cell holding the request line number
- dma-channels: physical channels supported
- dma-requests: virtual channels supported; each virtual channel
has a specific request line
- clocks: clock required by the controller
Example:
Controller:
dma: dma-controller@0x09c00000 {
	compatible = "zte,zx296702-dma";
	reg = <0x09c00000 0x1000>;
	clocks = <&topclk ZX296702_DMA_ACLK>;
	interrupts = <GIC_SPI 66 IRQ_TYPE_LEVEL_HIGH>;
	#dma-cells = <1>;
	dma-channels = <24>;
	dma-requests = <24>;
};
Client:
Use the specific request line passed from the DMA controller.
For example, the spdif0 tx channel request line is 4:
spdif0: spdif0@0b004000 {
	#sound-dai-cells = <0>;
	compatible = "zte,zx296702-spdif";
	reg = <0x0b004000 0x1000>;
	clocks = <&lsp0clk ZX296702_SPDIF0_DIV>;
	clock-names = "tx";
	interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
	dmas = <&dma 4>;
	dma-names = "tx";
};
......@@ -345,12 +345,29 @@ where to put them)
that abstracts it away.
* DMA_CTRL_ACK
- If set, the transfer can be reused after being completed.
- There is a guarantee the transfer won't be freed until it is acked
by async_tx_ack().
- If clear, the descriptor cannot be reused by the provider until the
client acknowledges receipt, i.e. has had a chance to establish any
dependency chains
- This can be acked by invoking async_tx_ack()
- If set, it does not mean the descriptor can be reused
* DMA_CTRL_REUSE
- If set, the descriptor can be reused after being completed. It should
not be freed by provider if this flag is set.
- The descriptor should be prepared for reuse by invoking
dmaengine_desc_set_reuse() which will set DMA_CTRL_REUSE.
- dmaengine_desc_set_reuse() will succeed only when the channel supports
reusable descriptors, as exhibited by its capabilities
- As a consequence, if a device driver wants to skip the dma_map_sg() and
dma_unmap_sg() in between 2 transfers, because the DMA'd data wasn't used,
it can resubmit the transfer right after its completion.
- A descriptor can be freed in a few ways (see the sketch after this list):
- Clearing DMA_CTRL_REUSE by invoking dmaengine_desc_clear_reuse()
and submitting it for the last transaction
- Explicitly invoking dmaengine_desc_free(); this can succeed only
when DMA_CTRL_REUSE is already set
- Terminating the channel
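
To make the re-use rules above concrete, here is a minimal sketch of a client
re-using one descriptor for repeated, identical transfers. It is illustrative only
and assumes a slave channel whose capabilities report descriptor re-use support;
it uses only the helpers named above plus the standard slave-prep and submit APIs:

	/* Sketch only: re-use one descriptor for repeated, identical transfers.
	 * 'chan', 'buf' and 'len' are assumed to be set up by the client.
	 */
	struct dma_async_tx_descriptor *txd;

	txd = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					  DMA_PREP_INTERRUPT);
	if (!txd || dmaengine_desc_set_reuse(txd))
		return;		/* channel does not support descriptor re-use */

	dmaengine_submit(txd);
	dma_async_issue_pending(chan);

	/* ... on completion, txd may be submitted again without re-preparing
	 * or re-mapping the buffer ...
	 */

	/* When done, either clear DMA_CTRL_REUSE before the final submission
	 * or free the descriptor explicitly:
	 */
	dmaengine_desc_free(txd);
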
General Design Notes
--------------------
......
......@@ -735,6 +735,12 @@ X: drivers/iio/*/adjd*
F: drivers/staging/iio/*/ad*
F: staging/iio/trigger/iio-trig-bfin-timer.c
ANALOG DEVICES INC DMA DRIVERS
M: Lars-Peter Clausen <lars@metafoo.de>
W: http://ez.analog.com/community/linux-device-drivers
S: Supported
F: drivers/dma/dma-axi-dmac.c
ANDROID DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Arve Hjønnevåg <arve@android.com>
......
#dmaengine debug flags
subdir-ccflags-$(CONFIG_DMADEVICES_DEBUG) := -DDEBUG
subdir-ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
#core
obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
obj-$(CONFIG_DMA_ACPI) += acpi-dma.o
obj-$(CONFIG_DMA_OF) += of-dma.o
#dmatest
obj-$(CONFIG_DMATEST) += dmatest.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_FSL_DMA) += fsldma.o
obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_MV_XOR) += mv_xor.o
obj-$(CONFIG_IDMA64) += idma64.o
obj-$(CONFIG_DW_DMAC_CORE) += dw/
#devices
obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
obj-$(CONFIG_MX3_IPU) += ipu/
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_DMA_SUN4I) += sun4i-dma.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
obj-$(CONFIG_DW_DMAC_CORE) += dw/
obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_FSL_DMA) += fsldma.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_FSL_RAID) += fsl_raid.o
obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_LPC18XX_DMAMUX) += lpc18xx-dmamux.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
obj-$(CONFIG_MV_XOR) += mv_xor.o
obj-$(CONFIG_MXS_DMA) += mxs-dma.o
obj-$(CONFIG_MX3_IPU) += ipu/
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_PCH_DMA) += pch_dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PCH_DMA) += pch_dma.o
obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_TI_CPPI41) += cppi41.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
obj-$(CONFIG_FSL_RAID) += fsl_raid.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-y += xilinx/
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
obj-y += xilinx/
......@@ -83,6 +83,8 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/pm_runtime.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
......@@ -2030,10 +2032,188 @@ static inline void init_pl08x_debugfs(struct pl08x_driver_data *pl08x)
}
#endif
#ifdef CONFIG_OF
static struct dma_chan *pl08x_find_chan_id(struct pl08x_driver_data *pl08x,
u32 id)
{
struct pl08x_dma_chan *chan;
list_for_each_entry(chan, &pl08x->slave.channels, vc.chan.device_node) {
if (chan->signal == id)
return &chan->vc.chan;
}
return NULL;
}
static struct dma_chan *pl08x_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct pl08x_driver_data *pl08x = ofdma->of_dma_data;
struct pl08x_channel_data *data;
struct pl08x_dma_chan *chan;
struct dma_chan *dma_chan;
if (!pl08x)
return NULL;
if (dma_spec->args_count != 2)
return NULL;
dma_chan = pl08x_find_chan_id(pl08x, dma_spec->args[0]);
if (dma_chan)
return dma_get_slave_channel(dma_chan);
chan = devm_kzalloc(pl08x->slave.dev, sizeof(*chan) + sizeof(*data),
GFP_KERNEL);
if (!chan)
return NULL;
data = (void *)&chan[1];
data->bus_id = "(none)";
data->periph_buses = dma_spec->args[1];
chan->cd = data;
chan->host = pl08x;
chan->slave = true;
chan->name = data->bus_id;
chan->state = PL08X_CHAN_IDLE;
chan->signal = dma_spec->args[0];
chan->vc.desc_free = pl08x_desc_free;
vchan_init(&chan->vc, &pl08x->slave);
return dma_get_slave_channel(&chan->vc.chan);
}
static int pl08x_of_probe(struct amba_device *adev,
struct pl08x_driver_data *pl08x,
struct device_node *np)
{
struct pl08x_platform_data *pd;
u32 cctl_memcpy = 0;
u32 val;
int ret;
pd = devm_kzalloc(&adev->dev, sizeof(*pd), GFP_KERNEL);
if (!pd)
return -ENOMEM;
/* Eligible bus masters for fetching LLIs */
if (of_property_read_bool(np, "lli-bus-interface-ahb1"))
pd->lli_buses |= PL08X_AHB1;
if (of_property_read_bool(np, "lli-bus-interface-ahb2"))
pd->lli_buses |= PL08X_AHB2;
if (!pd->lli_buses) {
dev_info(&adev->dev, "no bus masters for LLIs stated, assume all\n");
pd->lli_buses |= PL08X_AHB1 | PL08X_AHB2;
}
/* Eligible bus masters for memory access */
if (of_property_read_bool(np, "mem-bus-interface-ahb1"))
pd->mem_buses |= PL08X_AHB1;
if (of_property_read_bool(np, "mem-bus-interface-ahb2"))
pd->mem_buses |= PL08X_AHB2;
if (!pd->mem_buses) {
dev_info(&adev->dev, "no bus masters for memory stated, assume all\n");
pd->mem_buses |= PL08X_AHB1 | PL08X_AHB2;
}
/* Parse the memcpy channel properties */
ret = of_property_read_u32(np, "memcpy-burst-size", &val);
if (ret) {
dev_info(&adev->dev, "no memcpy burst size specified, using 1 byte\n");
val = 1;
}
switch (val) {
default:
dev_err(&adev->dev, "illegal burst size for memcpy, set to 1\n");
/* Fall through */
case 1:
cctl_memcpy |= PL080_BSIZE_1 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_1 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 4:
cctl_memcpy |= PL080_BSIZE_4 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_4 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 8:
cctl_memcpy |= PL080_BSIZE_8 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_8 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 16:
cctl_memcpy |= PL080_BSIZE_16 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_16 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 32:
cctl_memcpy |= PL080_BSIZE_32 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_32 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 64:
cctl_memcpy |= PL080_BSIZE_64 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_64 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 128:
cctl_memcpy |= PL080_BSIZE_128 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_128 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 256:
cctl_memcpy |= PL080_BSIZE_256 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_256 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
}
ret = of_property_read_u32(np, "memcpy-bus-width", &val);
if (ret) {
dev_info(&adev->dev, "no memcpy bus width specified, using 8 bits\n");
val = 8;
}
switch (val) {
default:
dev_err(&adev->dev, "illegal bus width for memcpy, set to 8 bits\n");
/* Fall through */
case 8:
cctl_memcpy |= PL080_WIDTH_8BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_8BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
case 16:
cctl_memcpy |= PL080_WIDTH_16BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_16BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
case 32:
cctl_memcpy |= PL080_WIDTH_32BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_32BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
}
/* This is currently the only thing making sense */
cctl_memcpy |= PL080_CONTROL_PROT_SYS;
/* Set up memcpy channel */
pd->memcpy_channel.bus_id = "memcpy";
pd->memcpy_channel.cctl_memcpy = cctl_memcpy;
/* Use the buses that can access memory, obviously */
pd->memcpy_channel.periph_buses = pd->mem_buses;
pl08x->pd = pd;
return of_dma_controller_register(adev->dev.of_node, pl08x_of_xlate,
pl08x);
}
#else
static inline int pl08x_of_probe(struct amba_device *adev,
struct pl08x_driver_data *pl08x,
struct device_node *np)
{
return -EINVAL;
}
#endif
static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
{
struct pl08x_driver_data *pl08x;
const struct vendor_data *vd = id->data;
struct device_node *np = adev->dev.of_node;
u32 tsfr_size;
int ret = 0;
int i;
......@@ -2093,9 +2273,15 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
/* Get the platform data */
pl08x->pd = dev_get_platdata(&adev->dev);
if (!pl08x->pd) {
dev_err(&adev->dev, "no platform data supplied\n");
ret = -EINVAL;
goto out_no_platdata;
if (np) {
ret = pl08x_of_probe(adev, pl08x, np);
if (ret)
goto out_no_platdata;
} else {
dev_err(&adev->dev, "no platform data supplied\n");
ret = -EINVAL;
goto out_no_platdata;
}
}
/* Assign useful pointers to the driver state */
......
......@@ -448,6 +448,7 @@ static void
atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
{
struct dma_async_tx_descriptor *txd = &desc->txd;
struct at_dma *atdma = to_at_dma(atchan->chan_common.device);
dev_vdbg(chan2dev(&atchan->chan_common),
"descriptor %u complete\n", txd->cookie);
......@@ -456,6 +457,13 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
if (!atc_chan_is_cyclic(atchan))
dma_cookie_complete(txd);
/* If the transfer was a memset, free our temporary buffer */
if (desc->memset) {
dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
desc->memset_paddr);
desc->memset = false;
}
/* move children to free_list */
list_splice_init(&desc->tx_list, &atchan->free_list);
/* move myself to free_list */
......@@ -717,14 +725,14 @@ atc_prep_dma_interleaved(struct dma_chan *chan,
size_t len = 0;
int i;
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
dev_info(chan2dev(chan),
"%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags);
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
/*
* The controller can only "skip" X bytes every Y bytes, so we
* need to make sure we are given a template that fit that
......@@ -873,6 +881,93 @@ atc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
return NULL;
}
/**
* atc_prep_dma_memset - prepare a memset operation
* @chan: the channel to prepare operation on
* @dest: operation virtual destination address
* @value: value to set memory buffer to
* @len: operation length
* @flags: tx descriptor status flags
*/
static struct dma_async_tx_descriptor *
atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
size_t len, unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
struct at_desc *desc = NULL;
size_t xfer_count;
u32 ctrla;
u32 ctrlb;
dev_vdbg(chan2dev(chan), "%s: d0x%x v0x%x l0x%zx f0x%lx\n", __func__,
dest, value, len, flags);
if (unlikely(!len)) {
dev_dbg(chan2dev(chan), "%s: length is zero!\n", __func__);
return NULL;
}
if (!is_dma_fill_aligned(chan->device, dest, 0, len)) {
dev_dbg(chan2dev(chan), "%s: buffer is not aligned\n",
__func__);
return NULL;
}
xfer_count = len >> 2;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n",
__func__);
return NULL;
}
ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
| ATC_SRC_ADDR_MODE_FIXED
| ATC_DST_ADDR_MODE_INCR
| ATC_FC_MEM2MEM;
ctrla = ATC_SRC_WIDTH(2) |
ATC_DST_WIDTH(2);
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan), "%s: can't get a descriptor\n",
__func__);
return NULL;
}
desc->memset_vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC,
&desc->memset_paddr);
if (!desc->memset_vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
goto err_put_desc;
}
*desc->memset_vaddr = value;
desc->memset = true;
desc->lli.saddr = desc->memset_paddr;
desc->lli.daddr = dest;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->txd.cookie = -EBUSY;
desc->len = len;
desc->total_len = len;
/* set end-of-link on the descriptor */
set_desc_eol(desc);
desc->txd.flags = flags;
return &desc->txd;
err_put_desc:
atc_desc_put(atchan, desc);
return NULL;
}
/**
* atc_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
......@@ -1755,6 +1850,8 @@ static int __init at_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMSET, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_PRIVATE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask);
......@@ -1818,7 +1915,16 @@ static int __init at_dma_probe(struct platform_device *pdev)
if (!atdma->dma_desc_pool) {
dev_err(&pdev->dev, "No memory for descriptors dma pool\n");
err = -ENOMEM;
goto err_pool_create;
goto err_desc_pool_create;
}
/* create a pool of consistent memory blocks for memset blocks */
atdma->memset_pool = dma_pool_create("at_hdmac_memset_pool",
&pdev->dev, sizeof(int), 4, 0);
if (!atdma->memset_pool) {
dev_err(&pdev->dev, "No memory for memset dma pool\n");
err = -ENOMEM;
goto err_memset_pool_create;
}
/* clear any pending interrupt */
......@@ -1864,6 +1970,11 @@ static int __init at_dma_probe(struct platform_device *pdev)
if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy;
if (dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask)) {
atdma->dma_common.device_prep_dma_memset = atc_prep_dma_memset;
atdma->dma_common.fill_align = DMAENGINE_ALIGN_4_BYTES;
}
if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)) {
atdma->dma_common.device_prep_slave_sg = atc_prep_slave_sg;
/* controller can do slave DMA: can trigger cyclic transfers */
......@@ -1884,8 +1995,9 @@ static int __init at_dma_probe(struct platform_device *pdev)
dma_writel(atdma, EN, AT_DMA_ENABLE);
dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s), %d channels\n",
dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s%s), %d channels\n",
dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask) ? "cpy " : "",
dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask) ? "set " : "",
dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ? "slave " : "",
dma_has_cap(DMA_SG, atdma->dma_common.cap_mask) ? "sg-cpy " : "",
plat_dat->nr_channels);
......@@ -1910,8 +2022,10 @@ static int __init at_dma_probe(struct platform_device *pdev)
err_of_dma_controller_register:
dma_async_device_unregister(&atdma->dma_common);
dma_pool_destroy(atdma->memset_pool);
err_memset_pool_create:
dma_pool_destroy(atdma->dma_desc_pool);
err_pool_create:
err_desc_pool_create:
free_irq(platform_get_irq(pdev, 0), atdma);
err_irq:
clk_disable_unprepare(atdma->clk);
......@@ -1936,6 +2050,7 @@ static int at_dma_remove(struct platform_device *pdev)
at_dma_off(atdma);
dma_async_device_unregister(&atdma->dma_common);
dma_pool_destroy(atdma->memset_pool);
dma_pool_destroy(atdma->dma_desc_pool);
free_irq(platform_get_irq(pdev, 0), atdma);
......
......@@ -200,6 +200,11 @@ struct at_desc {
size_t boundary;
size_t dst_hole;
size_t src_hole;
/* Memset temporary buffer */
bool memset;
dma_addr_t memset_paddr;
int *memset_vaddr;
};
static inline struct at_desc *
......@@ -330,6 +335,7 @@ struct at_dma {
u8 all_chan_mask;
struct dma_pool *dma_desc_pool;
struct dma_pool *memset_pool;
/* AT THE END channels table */
struct at_dma_chan chan[0];
};
......
......@@ -625,12 +625,12 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *first = NULL, *prev = NULL;
struct scatterlist *sg;
int i;
unsigned int xfer_size = 0;
unsigned long irqflags;
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *first = NULL, *prev = NULL;
struct scatterlist *sg;
int i;
unsigned int xfer_size = 0;
unsigned long irqflags;
struct dma_async_tx_descriptor *ret = NULL;
if (!sgl)
......@@ -797,10 +797,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
list_add_tail(&desc->desc_node, &first->descs_list);
}
prev->lld.mbr_nda = first->tx_dma_desc.phys;
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
at_xdmac_queue_desc(chan, prev, first);
first->tx_dma_desc.flags = flags;
first->xfer_size = buf_len;
first->direction = direction;
......@@ -1135,7 +1132,7 @@ static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan,
* SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction.
*/
u32 chan_cc = AT_XDMAC_CC_DAM_INCREMENTED_AM
u32 chan_cc = AT_XDMAC_CC_DAM_UBS_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
......@@ -1203,6 +1200,168 @@ at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
return &desc->tx_dma_desc;
}
static struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memset_sg(struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, int value,
unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc, *pdesc = NULL,
*ppdesc = NULL, *first = NULL;
struct scatterlist *sg, *psg = NULL, *ppsg = NULL;
size_t stride = 0, pstride = 0, len = 0;
int i;
if (!sgl)
return NULL;
dev_dbg(chan2dev(chan), "%s: sg_len=%d, value=0x%x, flags=0x%lx\n",
__func__, sg_len, value, flags);
/* Prepare descriptors. */
for_each_sg(sgl, sg, sg_len, i) {
dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n",
__func__, sg_dma_address(sg), sg_dma_len(sg),
value, flags);
desc = at_xdmac_memset_create_desc(chan, atchan,
sg_dma_address(sg),
sg_dma_len(sg),
value);
if (!desc && first)
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
if (!first)
first = desc;
/* Update our strides */
pstride = stride;
if (psg)
stride = sg_dma_address(sg) -
(sg_dma_address(psg) + sg_dma_len(psg));
/*
* The scatterlist API gives us only the address and
* length of each element.
*
* Unfortunately, we don't have the stride, which we
* will need to compute.
*
* That makes us end up in a situation like this one:
* len stride len stride len
* +-------+ +-------+ +-------+
* | N-2 | | N-1 | | N |
* +-------+ +-------+ +-------+
*
* We need all these three elements (N-2, N-1 and N)
* to actually take the decision on whether we need to
* queue N-1 or reuse N-2.
*
* We will only consider N if it is the last element.
*/
if (ppdesc && pdesc) {
if ((stride == pstride) &&
(sg_dma_len(ppsg) == sg_dma_len(psg))) {
dev_dbg(chan2dev(chan),
"%s: desc 0x%p can be merged with desc 0x%p\n",
__func__, pdesc, ppdesc);
/*
* Increment the block count of the
* N-2 descriptor
*/
at_xdmac_increment_block_count(chan, ppdesc);
ppdesc->lld.mbr_dus = stride;
/*
* Put back the N-1 descriptor in the
* free descriptor list
*/
list_add_tail(&pdesc->desc_node,
&atchan->free_descs_list);
/*
* Make our N-1 descriptor pointer
* point to the N-2 since they were
* actually merged.
*/
pdesc = ppdesc;
/*
* Rule out the case where we don't have
* pstride computed yet (our second sg
* element)
*
* We also want to catch the case where there
* would be a negative stride,
*/
} else if (pstride ||
sg_dma_address(sg) < sg_dma_address(psg)) {
/*
* Queue the N-1 descriptor after the
* N-2
*/
at_xdmac_queue_desc(chan, ppdesc, pdesc);
/*
* Add the N-1 descriptor to the list
* of the descriptors used for this
* transfer
*/
list_add_tail(&desc->desc_node,
&first->descs_list);
dev_dbg(chan2dev(chan),
"%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
}
}
/*
* If we are the last element, just see if we have the
* same size as the previous element.
*
* If so, we can merge it with the previous descriptor
* since we don't care about the stride anymore.
*/
if ((i == (sg_len - 1)) &&
sg_dma_len(ppsg) == sg_dma_len(psg)) {
dev_dbg(chan2dev(chan),
"%s: desc 0x%p can be merged with desc 0x%p\n",
__func__, desc, pdesc);
/*
* Increment the block count of the N-1
* descriptor
*/
at_xdmac_increment_block_count(chan, pdesc);
pdesc->lld.mbr_dus = stride;
/*
* Put back the N descriptor in the free
* descriptor list
*/
list_add_tail(&desc->desc_node,
&atchan->free_descs_list);
}
/* Update our descriptors */
ppdesc = pdesc;
pdesc = desc;
/* Update our scatter pointers */
ppsg = psg;
psg = sg;
len += sg_dma_len(sg);
}
first->tx_dma_desc.cookie = -EBUSY;
first->tx_dma_desc.flags = flags;
first->xfer_size = len;
return &first->tx_dma_desc;
}
static enum dma_status
at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
struct dma_tx_state *txstate)
......@@ -1736,6 +1895,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
dma_cap_set(DMA_INTERLEAVE, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET_SG, atxdmac->dma.cap_mask);
dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask);
/*
* Without DMA_PRIVATE the driver is not able to allocate more than
......@@ -1751,6 +1911,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->dma.device_prep_interleaved_dma = at_xdmac_prep_interleaved;
atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy;
atxdmac->dma.device_prep_dma_memset = at_xdmac_prep_dma_memset;
atxdmac->dma.device_prep_dma_memset_sg = at_xdmac_prep_dma_memset_sg;
atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg;
atxdmac->dma.device_config = at_xdmac_device_config;
atxdmac->dma.device_pause = at_xdmac_device_pause;
......
......@@ -2730,7 +2730,7 @@ static int __init coh901318_probe(struct platform_device *pdev)
* This controller can only access address at even 32bit boundaries,
* i.e. 2^2
*/
base->dma_memcpy.copy_align = 2;
base->dma_memcpy.copy_align = DMAENGINE_ALIGN_4_BYTES;
err = dma_async_device_register(&base->dma_memcpy);
if (err)
......
/*
* Driver for the Analog Devices AXI-DMAC core
*
* Copyright 2013-2015 Analog Devices Inc.
* Author: Lars-Peter Clausen <lars@metafoo.de>
*
* Licensed under the GPL-2.
*/
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <dt-bindings/dma/axi-dmac.h>
#include "dmaengine.h"
#include "virt-dma.h"
/*
* The AXI-DMAC is a soft IP core that is used in FPGA designs. The core has
* various instantiation parameters which decide the exact feature set supported
* by the core.
*
* Each channel of the core has a source interface and a destination interface.
* The number of channels and the type of the channel interfaces is selected at
* configuration time. An interface can either be connected to a central memory
* interconnect, which allows access to system memory, or it can be connected to
* a dedicated bus which is directly connected to a data port on a peripheral.
* Given that those are configuration options of the core that are selected when
* it is instantiated this means that they can not be changed by software at
* runtime. By extension this means that each channel is uni-directional. It can
* either be device to memory or memory to device, but not both. Also since the
* device side is a dedicated data bus only connected to a single peripheral
* there is no address that can or needs to be configured for the device side.
*/
#define AXI_DMAC_REG_IRQ_MASK 0x80
#define AXI_DMAC_REG_IRQ_PENDING 0x84
#define AXI_DMAC_REG_IRQ_SOURCE 0x88
#define AXI_DMAC_REG_CTRL 0x400
#define AXI_DMAC_REG_TRANSFER_ID 0x404
#define AXI_DMAC_REG_START_TRANSFER 0x408
#define AXI_DMAC_REG_FLAGS 0x40c
#define AXI_DMAC_REG_DEST_ADDRESS 0x410
#define AXI_DMAC_REG_SRC_ADDRESS 0x414
#define AXI_DMAC_REG_X_LENGTH 0x418
#define AXI_DMAC_REG_Y_LENGTH 0x41c
#define AXI_DMAC_REG_DEST_STRIDE 0x420
#define AXI_DMAC_REG_SRC_STRIDE 0x424
#define AXI_DMAC_REG_TRANSFER_DONE 0x428
#define AXI_DMAC_REG_ACTIVE_TRANSFER_ID 0x42c
#define AXI_DMAC_REG_STATUS 0x430
#define AXI_DMAC_REG_CURRENT_SRC_ADDR 0x434
#define AXI_DMAC_REG_CURRENT_DEST_ADDR 0x438
#define AXI_DMAC_CTRL_ENABLE BIT(0)
#define AXI_DMAC_CTRL_PAUSE BIT(1)
#define AXI_DMAC_IRQ_SOT BIT(0)
#define AXI_DMAC_IRQ_EOT BIT(1)
#define AXI_DMAC_FLAG_CYCLIC BIT(0)
struct axi_dmac_sg {
dma_addr_t src_addr;
dma_addr_t dest_addr;
unsigned int x_len;
unsigned int y_len;
unsigned int dest_stride;
unsigned int src_stride;
unsigned int id;
};
struct axi_dmac_desc {
struct virt_dma_desc vdesc;
bool cyclic;
unsigned int num_submitted;
unsigned int num_completed;
unsigned int num_sgs;
struct axi_dmac_sg sg[];
};
struct axi_dmac_chan {
struct virt_dma_chan vchan;
struct axi_dmac_desc *next_desc;
struct list_head active_descs;
enum dma_transfer_direction direction;
unsigned int src_width;
unsigned int dest_width;
unsigned int src_type;
unsigned int dest_type;
unsigned int max_length;
unsigned int align_mask;
bool hw_cyclic;
bool hw_2d;
};
struct axi_dmac {
void __iomem *base;
int irq;
struct clk *clk;
struct dma_device dma_dev;
struct axi_dmac_chan chan;
struct device_dma_parameters dma_parms;
};
static struct axi_dmac *chan_to_axi_dmac(struct axi_dmac_chan *chan)
{
return container_of(chan->vchan.chan.device, struct axi_dmac,
dma_dev);
}
static struct axi_dmac_chan *to_axi_dmac_chan(struct dma_chan *c)
{
return container_of(c, struct axi_dmac_chan, vchan.chan);
}
static struct axi_dmac_desc *to_axi_dmac_desc(struct virt_dma_desc *vdesc)
{
return container_of(vdesc, struct axi_dmac_desc, vdesc);
}
static void axi_dmac_write(struct axi_dmac *axi_dmac, unsigned int reg,
unsigned int val)
{
writel(val, axi_dmac->base + reg);
}
static int axi_dmac_read(struct axi_dmac *axi_dmac, unsigned int reg)
{
return readl(axi_dmac->base + reg);
}
static int axi_dmac_src_is_mem(struct axi_dmac_chan *chan)
{
return chan->src_type == AXI_DMAC_BUS_TYPE_AXI_MM;
}
static int axi_dmac_dest_is_mem(struct axi_dmac_chan *chan)
{
return chan->dest_type == AXI_DMAC_BUS_TYPE_AXI_MM;
}
static bool axi_dmac_check_len(struct axi_dmac_chan *chan, unsigned int len)
{
if (len == 0 || len > chan->max_length)
return false;
if ((len & chan->align_mask) != 0) /* Not aligned */
return false;
return true;
}
static bool axi_dmac_check_addr(struct axi_dmac_chan *chan, dma_addr_t addr)
{
if ((addr & chan->align_mask) != 0) /* Not aligned */
return false;
return true;
}
static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
{
struct axi_dmac *dmac = chan_to_axi_dmac(chan);
struct virt_dma_desc *vdesc;
struct axi_dmac_desc *desc;
struct axi_dmac_sg *sg;
unsigned int flags = 0;
unsigned int val;
val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
if (val) /* Queue is full, wait for the next SOT IRQ */
return;
desc = chan->next_desc;
if (!desc) {
vdesc = vchan_next_desc(&chan->vchan);
if (!vdesc)
return;
list_move_tail(&vdesc->node, &chan->active_descs);
desc = to_axi_dmac_desc(vdesc);
}
sg = &desc->sg[desc->num_submitted];
desc->num_submitted++;
if (desc->num_submitted == desc->num_sgs)
chan->next_desc = NULL;
else
chan->next_desc = desc;
sg->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
if (axi_dmac_dest_is_mem(chan)) {
axi_dmac_write(dmac, AXI_DMAC_REG_DEST_ADDRESS, sg->dest_addr);
axi_dmac_write(dmac, AXI_DMAC_REG_DEST_STRIDE, sg->dest_stride);
}
if (axi_dmac_src_is_mem(chan)) {
axi_dmac_write(dmac, AXI_DMAC_REG_SRC_ADDRESS, sg->src_addr);
axi_dmac_write(dmac, AXI_DMAC_REG_SRC_STRIDE, sg->src_stride);
}
/*
* If the hardware supports cyclic transfers and there is no callback to
* call, enable hw cyclic mode to avoid unnecessary interrupts.
*/
if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback)
flags |= AXI_DMAC_FLAG_CYCLIC;
axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->x_len - 1);
axi_dmac_write(dmac, AXI_DMAC_REG_Y_LENGTH, sg->y_len - 1);
axi_dmac_write(dmac, AXI_DMAC_REG_FLAGS, flags);
axi_dmac_write(dmac, AXI_DMAC_REG_START_TRANSFER, 1);
}
static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan)
{
return list_first_entry_or_null(&chan->active_descs,
struct axi_dmac_desc, vdesc.node);
}
static void axi_dmac_transfer_done(struct axi_dmac_chan *chan,
unsigned int completed_transfers)
{
struct axi_dmac_desc *active;
struct axi_dmac_sg *sg;
active = axi_dmac_active_desc(chan);
if (!active)
return;
if (active->cyclic) {
vchan_cyclic_callback(&active->vdesc);
} else {
do {
sg = &active->sg[active->num_completed];
if (!(BIT(sg->id) & completed_transfers))
break;
active->num_completed++;
if (active->num_completed == active->num_sgs) {
list_del(&active->vdesc.node);
vchan_cookie_complete(&active->vdesc);
active = axi_dmac_active_desc(chan);
}
} while (active);
}
}
static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid)
{
struct axi_dmac *dmac = devid;
unsigned int pending;
pending = axi_dmac_read(dmac, AXI_DMAC_REG_IRQ_PENDING);
axi_dmac_write(dmac, AXI_DMAC_REG_IRQ_PENDING, pending);
spin_lock(&dmac->chan.vchan.lock);
/* One or more transfers have finished */
if (pending & AXI_DMAC_IRQ_EOT) {
unsigned int completed;
completed = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_DONE);
axi_dmac_transfer_done(&dmac->chan, completed);
}
/* Space has become available in the descriptor queue */
if (pending & AXI_DMAC_IRQ_SOT)
axi_dmac_start_transfer(&dmac->chan);
spin_unlock(&dmac->chan.vchan.lock);
return IRQ_HANDLED;
}
static int axi_dmac_terminate_all(struct dma_chan *c)
{
struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
struct axi_dmac *dmac = chan_to_axi_dmac(chan);
unsigned long flags;
LIST_HEAD(head);
spin_lock_irqsave(&chan->vchan.lock, flags);
axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, 0);
chan->next_desc = NULL;
vchan_get_all_descriptors(&chan->vchan, &head);
list_splice_tail_init(&chan->active_descs, &head);
spin_unlock_irqrestore(&chan->vchan.lock, flags);
vchan_dma_desc_free_list(&chan->vchan, &head);
return 0;
}
static void axi_dmac_issue_pending(struct dma_chan *c)
{
struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
struct axi_dmac *dmac = chan_to_axi_dmac(chan);
unsigned long flags;
axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, AXI_DMAC_CTRL_ENABLE);
spin_lock_irqsave(&chan->vchan.lock, flags);
if (vchan_issue_pending(&chan->vchan))
axi_dmac_start_transfer(chan);
spin_unlock_irqrestore(&chan->vchan.lock, flags);
}
static struct axi_dmac_desc *axi_dmac_alloc_desc(unsigned int num_sgs)
{
struct axi_dmac_desc *desc;
desc = kzalloc(sizeof(struct axi_dmac_desc) +
sizeof(struct axi_dmac_sg) * num_sgs, GFP_NOWAIT);
if (!desc)
return NULL;
desc->num_sgs = num_sgs;
return desc;
}
static struct dma_async_tx_descriptor *axi_dmac_prep_slave_sg(
struct dma_chan *c, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
{
struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
struct axi_dmac_desc *desc;
struct scatterlist *sg;
unsigned int i;
if (direction != chan->direction)
return NULL;
desc = axi_dmac_alloc_desc(sg_len);
if (!desc)
return NULL;
for_each_sg(sgl, sg, sg_len, i) {
if (!axi_dmac_check_addr(chan, sg_dma_address(sg)) ||
!axi_dmac_check_len(chan, sg_dma_len(sg))) {
kfree(desc);
return NULL;
}
if (direction == DMA_DEV_TO_MEM)
desc->sg[i].dest_addr = sg_dma_address(sg);
else
desc->sg[i].src_addr = sg_dma_address(sg);
desc->sg[i].x_len = sg_dma_len(sg);
desc->sg[i].y_len = 1;
}
desc->cyclic = false;
return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}
static struct dma_async_tx_descriptor *axi_dmac_prep_dma_cyclic(
struct dma_chan *c, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags)
{
struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
struct axi_dmac_desc *desc;
unsigned int num_periods, i;
if (direction != chan->direction)
return NULL;
if (!axi_dmac_check_len(chan, buf_len) ||
!axi_dmac_check_addr(chan, buf_addr))
return NULL;
if (period_len == 0 || buf_len % period_len)
return NULL;
num_periods = buf_len / period_len;
desc = axi_dmac_alloc_desc(num_periods);
if (!desc)
return NULL;
for (i = 0; i < num_periods; i++) {
if (direction == DMA_DEV_TO_MEM)
desc->sg[i].dest_addr = buf_addr;
else
desc->sg[i].src_addr = buf_addr;
desc->sg[i].x_len = period_len;
desc->sg[i].y_len = 1;
buf_addr += period_len;
}
desc->cyclic = true;
return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}
static struct dma_async_tx_descriptor *axi_dmac_prep_interleaved(
struct dma_chan *c, struct dma_interleaved_template *xt,
unsigned long flags)
{
struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
struct axi_dmac_desc *desc;
size_t dst_icg, src_icg;
if (xt->frame_size != 1)
return NULL;
if (xt->dir != chan->direction)
return NULL;
if (axi_dmac_src_is_mem(chan)) {
if (!xt->src_inc || !axi_dmac_check_addr(chan, xt->src_start))
return NULL;
}
if (axi_dmac_dest_is_mem(chan)) {
if (!xt->dst_inc || !axi_dmac_check_addr(chan, xt->dst_start))
return NULL;
}
dst_icg = dmaengine_get_dst_icg(xt, &xt->sgl[0]);
src_icg = dmaengine_get_src_icg(xt, &xt->sgl[0]);
if (chan->hw_2d) {
if (!axi_dmac_check_len(chan, xt->sgl[0].size) ||
!axi_dmac_check_len(chan, xt->numf))
return NULL;
if (xt->sgl[0].size + dst_icg > chan->max_length ||
xt->sgl[0].size + src_icg > chan->max_length)
return NULL;
} else {
if (dst_icg != 0 || src_icg != 0)
return NULL;
if (chan->max_length / xt->sgl[0].size < xt->numf)
return NULL;
if (!axi_dmac_check_len(chan, xt->sgl[0].size * xt->numf))
return NULL;
}
desc = axi_dmac_alloc_desc(1);
if (!desc)
return NULL;
if (axi_dmac_src_is_mem(chan)) {
desc->sg[0].src_addr = xt->src_start;
desc->sg[0].src_stride = xt->sgl[0].size + src_icg;
}
if (axi_dmac_dest_is_mem(chan)) {
desc->sg[0].dest_addr = xt->dst_start;
desc->sg[0].dest_stride = xt->sgl[0].size + dst_icg;
}
if (chan->hw_2d) {
desc->sg[0].x_len = xt->sgl[0].size;
desc->sg[0].y_len = xt->numf;
} else {
desc->sg[0].x_len = xt->sgl[0].size * xt->numf;
desc->sg[0].y_len = 1;
}
return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}
static void axi_dmac_free_chan_resources(struct dma_chan *c)
{
vchan_free_chan_resources(to_virt_chan(c));
}
static void axi_dmac_desc_free(struct virt_dma_desc *vdesc)
{
kfree(container_of(vdesc, struct axi_dmac_desc, vdesc));
}
/*
* The configuration stored in the devicetree matches the configuration
* parameters of the peripheral instance and allows the driver to know which
* features are implemented and how it should behave.
*/
static int axi_dmac_parse_chan_dt(struct device_node *of_chan,
struct axi_dmac_chan *chan)
{
u32 val;
int ret;
ret = of_property_read_u32(of_chan, "reg", &val);
if (ret)
return ret;
/* We only support 1 channel for now */
if (val != 0)
return -EINVAL;
ret = of_property_read_u32(of_chan, "adi,source-bus-type", &val);
if (ret)
return ret;
if (val > AXI_DMAC_BUS_TYPE_FIFO)
return -EINVAL;
chan->src_type = val;
ret = of_property_read_u32(of_chan, "adi,destination-bus-type", &val);
if (ret)
return ret;
if (val > AXI_DMAC_BUS_TYPE_FIFO)
return -EINVAL;
chan->dest_type = val;
ret = of_property_read_u32(of_chan, "adi,source-bus-width", &val);
if (ret)
return ret;
chan->src_width = val / 8;
ret = of_property_read_u32(of_chan, "adi,destination-bus-width", &val);
if (ret)
return ret;
chan->dest_width = val / 8;
ret = of_property_read_u32(of_chan, "adi,length-width", &val);
if (ret)
return ret;
if (val >= 32)
chan->max_length = UINT_MAX;
else
chan->max_length = (1ULL << val) - 1;
chan->align_mask = max(chan->dest_width, chan->src_width) - 1;
if (axi_dmac_dest_is_mem(chan) && axi_dmac_src_is_mem(chan))
chan->direction = DMA_MEM_TO_MEM;
else if (!axi_dmac_dest_is_mem(chan) && axi_dmac_src_is_mem(chan))
chan->direction = DMA_MEM_TO_DEV;
else if (axi_dmac_dest_is_mem(chan) && !axi_dmac_src_is_mem(chan))
chan->direction = DMA_DEV_TO_MEM;
else
chan->direction = DMA_DEV_TO_DEV;
chan->hw_cyclic = of_property_read_bool(of_chan, "adi,cyclic");
chan->hw_2d = of_property_read_bool(of_chan, "adi,2d");
return 0;
}
static int axi_dmac_probe(struct platform_device *pdev)
{
struct device_node *of_channels, *of_chan;
struct dma_device *dma_dev;
struct axi_dmac *dmac;
struct resource *res;
int ret;
dmac = devm_kzalloc(&pdev->dev, sizeof(*dmac), GFP_KERNEL);
if (!dmac)
return -ENOMEM;
dmac->irq = platform_get_irq(pdev, 0);
if (dmac->irq <= 0)
return -EINVAL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dmac->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(dmac->base))
return PTR_ERR(dmac->base);
dmac->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(dmac->clk))
return PTR_ERR(dmac->clk);
INIT_LIST_HEAD(&dmac->chan.active_descs);
of_channels = of_get_child_by_name(pdev->dev.of_node, "adi,channels");
if (of_channels == NULL)
return -ENODEV;
for_each_child_of_node(of_channels, of_chan) {
ret = axi_dmac_parse_chan_dt(of_chan, &dmac->chan);
if (ret) {
of_node_put(of_chan);
of_node_put(of_channels);
return -EINVAL;
}
}
of_node_put(of_channels);
pdev->dev.dma_parms = &dmac->dma_parms;
dma_set_max_seg_size(&pdev->dev, dmac->chan.max_length);
dma_dev = &dmac->dma_dev;
dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
dma_cap_set(DMA_CYCLIC, dma_dev->cap_mask);
dma_dev->device_free_chan_resources = axi_dmac_free_chan_resources;
dma_dev->device_tx_status = dma_cookie_status;
dma_dev->device_issue_pending = axi_dmac_issue_pending;
dma_dev->device_prep_slave_sg = axi_dmac_prep_slave_sg;
dma_dev->device_prep_dma_cyclic = axi_dmac_prep_dma_cyclic;
dma_dev->device_prep_interleaved_dma = axi_dmac_prep_interleaved;
dma_dev->device_terminate_all = axi_dmac_terminate_all;
dma_dev->dev = &pdev->dev;
dma_dev->chancnt = 1;
dma_dev->src_addr_widths = BIT(dmac->chan.src_width);
dma_dev->dst_addr_widths = BIT(dmac->chan.dest_width);
dma_dev->directions = BIT(dmac->chan.direction);
dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
INIT_LIST_HEAD(&dma_dev->channels);
dmac->chan.vchan.desc_free = axi_dmac_desc_free;
vchan_init(&dmac->chan.vchan, dma_dev);
ret = clk_prepare_enable(dmac->clk);
if (ret < 0)
return ret;
axi_dmac_write(dmac, AXI_DMAC_REG_IRQ_MASK, 0x00);
ret = dma_async_device_register(dma_dev);
if (ret)
goto err_clk_disable;
ret = of_dma_controller_register(pdev->dev.of_node,
of_dma_xlate_by_chan_id, dma_dev);
if (ret)
goto err_unregister_device;
ret = request_irq(dmac->irq, axi_dmac_interrupt_handler, 0,
dev_name(&pdev->dev), dmac);
if (ret)
goto err_unregister_of;
platform_set_drvdata(pdev, dmac);
return 0;
err_unregister_of:
of_dma_controller_free(pdev->dev.of_node);
err_unregister_device:
dma_async_device_unregister(&dmac->dma_dev);
err_clk_disable:
clk_disable_unprepare(dmac->clk);
return ret;
}
static int axi_dmac_remove(struct platform_device *pdev)
{
struct axi_dmac *dmac = platform_get_drvdata(pdev);
of_dma_controller_free(pdev->dev.of_node);
free_irq(dmac->irq, dmac);
tasklet_kill(&dmac->chan.vchan.task);
dma_async_device_unregister(&dmac->dma_dev);
clk_disable_unprepare(dmac->clk);
return 0;
}
static const struct of_device_id axi_dmac_of_match_table[] = {
{ .compatible = "adi,axi-dmac-1.00.a" },
{ },
};
static struct platform_driver axi_dmac_driver = {
.driver = {
.name = "dma-axi-dmac",
.of_match_table = axi_dmac_of_match_table,
},
.probe = axi_dmac_probe,
.remove = axi_dmac_remove,
};
module_platform_driver(axi_dmac_driver);
MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
MODULE_DESCRIPTION("DMA controller driver for the AXI-DMAC controller");
MODULE_LICENSE("GPL v2");
......@@ -145,7 +145,8 @@ struct jz4780_dma_dev {
struct jz4780_dma_chan chan[JZ_DMA_NR_CHANNELS];
};
struct jz4780_dma_data {
struct jz4780_dma_filter_data {
struct device_node *of_node;
uint32_t transfer_type;
int channel;
};
......@@ -214,11 +215,25 @@ static void jz4780_dma_desc_free(struct virt_dma_desc *vdesc)
kfree(desc);
}
static uint32_t jz4780_dma_transfer_size(unsigned long val, int *ord)
static uint32_t jz4780_dma_transfer_size(unsigned long val, uint32_t *shift)
{
*ord = ffs(val) - 1;
int ord = ffs(val) - 1;
switch (*ord) {
/*
* 8 byte transfer sizes unsupported so fall back on 4. If it's larger
* than the maximum, just limit it. It is perfectly safe to fall back
* in this way since we won't exceed the maximum burst size supported
* by the device, the only effect is reduced efficiency. This is better
* than refusing to perform the request at all.
*/
if (ord == 3)
ord = 2;
else if (ord > 7)
ord = 7;
*shift = ord;
switch (ord) {
case 0:
return JZ_DMA_SIZE_1_BYTE;
case 1:
......@@ -231,20 +246,17 @@ static uint32_t jz4780_dma_transfer_size(unsigned long val, int *ord)
return JZ_DMA_SIZE_32_BYTE;
case 6:
return JZ_DMA_SIZE_64_BYTE;
case 7:
return JZ_DMA_SIZE_128_BYTE;
default:
return -EINVAL;
return JZ_DMA_SIZE_128_BYTE;
}
}
static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
static int jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
struct jz4780_dma_hwdesc *desc, dma_addr_t addr, size_t len,
enum dma_transfer_direction direction)
{
struct dma_slave_config *config = &jzchan->config;
uint32_t width, maxburst, tsz;
int ord;
if (direction == DMA_MEM_TO_DEV) {
desc->dcm = JZ_DMA_DCM_SAI;
......@@ -271,8 +283,8 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
* divisible by the transfer size, and we must not use more than the
* maximum burst specified by the user.
*/
tsz = jz4780_dma_transfer_size(addr | len | (width * maxburst), &ord);
jzchan->transfer_shift = ord;
tsz = jz4780_dma_transfer_size(addr | len | (width * maxburst),
&jzchan->transfer_shift);
switch (width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE:
......@@ -289,12 +301,14 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
desc->dcm |= width << JZ_DMA_DCM_SP_SHIFT;
desc->dcm |= width << JZ_DMA_DCM_DP_SHIFT;
desc->dtc = len >> ord;
desc->dtc = len >> jzchan->transfer_shift;
return 0;
}
static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
enum dma_transfer_direction direction, unsigned long flags)
enum dma_transfer_direction direction, unsigned long flags,
void *context)
{
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_desc *desc;
......@@ -307,12 +321,11 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg(
for (i = 0; i < sg_len; i++) {
err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i],
sg_dma_address(&sgl[i]),
sg_dma_len(&sgl[i]),
direction);
sg_dma_address(&sgl[i]),
sg_dma_len(&sgl[i]),
direction);
if (err < 0)
return ERR_PTR(err);
return NULL;
desc->desc[i].dcm |= JZ_DMA_DCM_TIE;
......@@ -354,9 +367,9 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_cyclic(
for (i = 0; i < periods; i++) {
err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i], buf_addr,
period_len, direction);
period_len, direction);
if (err < 0)
return ERR_PTR(err);
return NULL;
buf_addr += period_len;
......@@ -390,15 +403,13 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_desc *desc;
uint32_t tsz;
int ord;
desc = jz4780_dma_desc_alloc(jzchan, 1, DMA_MEMCPY);
if (!desc)
return NULL;
tsz = jz4780_dma_transfer_size(dest | src | len, &ord);
if (tsz < 0)
return ERR_PTR(tsz);
tsz = jz4780_dma_transfer_size(dest | src | len,
&jzchan->transfer_shift);
desc->desc[0].dsa = src;
desc->desc[0].dta = dest;
@@ -407,7 +418,7 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
tsz << JZ_DMA_DCM_TSZ_SHIFT |
JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_SP_SHIFT |
JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_DP_SHIFT;
desc->desc[0].dtc = len >> ord;
desc->desc[0].dtc = len >> jzchan->transfer_shift;
return vchan_tx_prep(&jzchan->vchan, &desc->vdesc, flags);
}
@@ -484,8 +495,9 @@ static void jz4780_dma_issue_pending(struct dma_chan *chan)
spin_unlock_irqrestore(&jzchan->vchan.lock, flags);
}
static int jz4780_dma_terminate_all(struct jz4780_dma_chan *jzchan)
static int jz4780_dma_terminate_all(struct dma_chan *chan)
{
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan);
unsigned long flags;
LIST_HEAD(head);
@@ -507,9 +519,11 @@ static int jz4780_dma_terminate_all(struct jz4780_dma_chan *jzchan)
return 0;
}
static int jz4780_dma_slave_config(struct jz4780_dma_chan *jzchan,
const struct dma_slave_config *config)
static int jz4780_dma_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
if ((config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
|| (config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES))
return -EINVAL;
@@ -567,8 +581,8 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
txstate->residue = 0;
if (vdesc && jzchan->desc && vdesc == &jzchan->desc->vdesc
&& jzchan->desc->status & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT))
status = DMA_ERROR;
&& jzchan->desc->status & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT))
status = DMA_ERROR;
spin_unlock_irqrestore(&jzchan->vchan.lock, flags);
return status;
@@ -671,7 +685,10 @@ static bool jz4780_dma_filter_fn(struct dma_chan *chan, void *param)
{
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan);
struct jz4780_dma_data *data = param;
struct jz4780_dma_filter_data *data = param;
if (jzdma->dma_device.dev->of_node != data->of_node)
return false;
if (data->channel > -1) {
if (data->channel != jzchan->id)
@@ -690,11 +707,12 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec,
{
struct jz4780_dma_dev *jzdma = ofdma->of_dma_data;
dma_cap_mask_t mask = jzdma->dma_device.cap_mask;
struct jz4780_dma_data data;
struct jz4780_dma_filter_data data;
if (dma_spec->args_count != 2)
return NULL;
data.of_node = ofdma->of_node;
data.transfer_type = dma_spec->args[0];
data.channel = dma_spec->args[1];
@@ -713,9 +731,14 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec,
data.channel);
return NULL;
}
}
return dma_request_channel(mask, jz4780_dma_filter_fn, &data);
jzdma->chan[data.channel].transfer_type = data.transfer_type;
return dma_get_slave_channel(
&jzdma->chan[data.channel].vchan.chan);
} else {
return dma_request_channel(mask, jz4780_dma_filter_fn, &data);
}
}
static int jz4780_dma_probe(struct platform_device *pdev)
@@ -743,23 +766,26 @@ static int jz4780_dma_probe(struct platform_device *pdev)
if (IS_ERR(jzdma->base))
return PTR_ERR(jzdma->base);
jzdma->irq = platform_get_irq(pdev, 0);
if (jzdma->irq < 0) {
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
dev_err(dev, "failed to get IRQ: %d\n", ret);
return jzdma->irq;
return ret;
}
ret = devm_request_irq(dev, jzdma->irq, jz4780_dma_irq_handler, 0,
dev_name(dev), jzdma);
jzdma->irq = ret;
ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev),
jzdma);
if (ret) {
dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq);
return -EINVAL;
return ret;
}
jzdma->clk = devm_clk_get(dev, NULL);
if (IS_ERR(jzdma->clk)) {
dev_err(dev, "failed to get clock\n");
return PTR_ERR(jzdma->clk);
ret = PTR_ERR(jzdma->clk);
goto err_free_irq;
}
clk_prepare_enable(jzdma->clk);
@@ -775,13 +801,13 @@ static int jz4780_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_CYCLIC, dd->cap_mask);
dd->dev = dev;
dd->copy_align = 2; /* 2^2 = 4 byte alignment */
dd->copy_align = DMAENGINE_ALIGN_4_BYTES;
dd->device_alloc_chan_resources = jz4780_dma_alloc_chan_resources;
dd->device_free_chan_resources = jz4780_dma_free_chan_resources;
dd->device_prep_slave_sg = jz4780_dma_prep_slave_sg;
dd->device_prep_dma_cyclic = jz4780_dma_prep_dma_cyclic;
dd->device_prep_dma_memcpy = jz4780_dma_prep_dma_memcpy;
dd->device_config = jz4780_dma_slave_config;
dd->device_config = jz4780_dma_config;
dd->device_terminate_all = jz4780_dma_terminate_all;
dd->device_tx_status = jz4780_dma_tx_status;
dd->device_issue_pending = jz4780_dma_issue_pending;
@@ -790,7 +816,6 @@ static int jz4780_dma_probe(struct platform_device *pdev)
dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
/*
* Enable DMA controller, mark all channels as not programmable.
* Also set the FMSC bit - it increases MSC performance, so it makes
@@ -832,15 +857,24 @@ static int jz4780_dma_probe(struct platform_device *pdev)
err_disable_clk:
clk_disable_unprepare(jzdma->clk);
err_free_irq:
free_irq(jzdma->irq, jzdma);
return ret;
}
static int jz4780_dma_remove(struct platform_device *pdev)
{
struct jz4780_dma_dev *jzdma = platform_get_drvdata(pdev);
int i;
of_dma_controller_free(pdev->dev.of_node);
devm_free_irq(&pdev->dev, jzdma->irq, jzdma);
free_irq(jzdma->irq, jzdma);
for (i = 0; i < JZ_DMA_NR_CHANNELS; i++)
tasklet_kill(&jzdma->chan[i].vchan.task);
dma_async_device_unregister(&jzdma->dma_device);
return 0;
}
......
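The transfer-size clamping added in the jz4780_dma_transfer_size() hunk above is easiest to see in isolation. Below is a minimal userspace sketch of the same shift calculation; this is not the driver code, and jz_transfer_shift plus the example operands are made up purely for illustration.

#include <stdio.h>
#include <strings.h>	/* ffs() */

/*
 * Same idea as the hunk above: the lowest set bit of
 * addr | len | (width * maxburst) is the largest power of two that
 * divides all of them, and unsupported orders are clamped rather
 * than rejected.
 */
static unsigned int jz_transfer_shift(unsigned int val)
{
	int ord = ffs(val) - 1;

	if (ord == 3)		/* 8-byte transfers unsupported, fall back to 4 */
		ord = 2;
	else if (ord > 7)	/* cap at the 128-byte maximum */
		ord = 7;

	return ord;
}

int main(void)
{
	/* 0x1000-aligned buffer, 96-byte length, 4-byte width, maxburst 8 */
	unsigned int shift = jz_transfer_shift(0x1000 | 96 | (4 * 8));

	printf("shift=%u, dtc=%u\n", shift, 96u >> shift);	/* shift=5, dtc=3 */
	return 0;
}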
@@ -6,6 +6,9 @@ config DW_DMAC_CORE
tristate
select DMA_ENGINE
config DW_DMAC_BIG_ENDIAN_IO
bool
config DW_DMAC
tristate "Synopsys DesignWare AHB DMA platform driver"
select DW_DMAC_CORE
@@ -23,6 +26,3 @@ config DW_DMAC_PCI
Support the Synopsys DesignWare AHB DMA controller on the
platforms that enumerate it as a PCI device. For example,
Intel Medfield has integrated this GPDMA controller.
config DW_DMAC_BIG_ENDIAN_IO
bool
@@ -1000,7 +1000,7 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
* code using dma memcpy must make sure alignment of
* length is at dma->copy_align boundary.
*/
dma->copy_align = DMA_SLAVE_BUSWIDTH_4_BYTES;
dma->copy_align = DMAENGINE_ALIGN_4_BYTES;
INIT_LIST_HEAD(&dma->channels);
}
......
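The edma hunk swaps a raw buswidth constant for the dmaengine alignment enum: copy_align holds a power-of-two shift count, not a byte count, so 4-byte alignment is the value 2, matching the "2^2 = 4 byte alignment" comment replaced in the jz4780 hunk earlier. A small sketch of the check a memcpy user is expected to make; copy_param_aligned is a hypothetical helper, not a dmaengine API.

#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

/* copy_align holds log2(alignment); 4-byte alignment means copy_align == 2. */
static bool copy_param_aligned(uintptr_t src, uintptr_t dest, size_t len,
			       unsigned int copy_align)
{
	uintptr_t mask = ((uintptr_t)1 << copy_align) - 1;

	return ((src | dest | (uintptr_t)len) & mask) == 0;
}

int main(void)
{
	printf("%d\n", copy_param_aligned(0x1000, 0x2000, 96, 2));	/* 1: all multiples of 4 */
	printf("%d\n", copy_param_aligned(0x1000, 0x2002, 96, 2));	/* 0: dest misaligned */
	return 0;
}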
@@ -99,21 +99,13 @@ static void hsu_dma_chan_start(struct hsu_dma_chan *hsuc)
static void hsu_dma_stop_channel(struct hsu_dma_chan *hsuc)
{
unsigned long flags;
spin_lock_irqsave(&hsuc->lock, flags);
hsu_chan_disable(hsuc);
hsu_chan_writel(hsuc, HSU_CH_DCR, 0);
spin_unlock_irqrestore(&hsuc->lock, flags);
}
static void hsu_dma_start_channel(struct hsu_dma_chan *hsuc)
{
unsigned long flags;
spin_lock_irqsave(&hsuc->lock, flags);
hsu_dma_chan_start(hsuc);
spin_unlock_irqrestore(&hsuc->lock, flags);
}
static void hsu_dma_start_transfer(struct hsu_dma_chan *hsuc)
@@ -139,9 +131,9 @@ static u32 hsu_dma_chan_get_sr(struct hsu_dma_chan *hsuc)
unsigned long flags;
u32 sr;
spin_lock_irqsave(&hsuc->lock, flags);
spin_lock_irqsave(&hsuc->vchan.lock, flags);
sr = hsu_chan_readl(hsuc, HSU_CH_SR);
spin_unlock_irqrestore(&hsuc->lock, flags);
spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
return sr;
}
@@ -273,14 +265,11 @@ static size_t hsu_dma_active_desc_size(struct hsu_dma_chan *hsuc)
struct hsu_dma_desc *desc = hsuc->desc;
size_t bytes = hsu_dma_desc_size(desc);
int i;
unsigned long flags;
spin_lock_irqsave(&hsuc->lock, flags);
i = desc->active % HSU_DMA_CHAN_NR_DESC;
do {
bytes += hsu_chan_readl(hsuc, HSU_CH_DxTSR(i));
} while (--i >= 0);
spin_unlock_irqrestore(&hsuc->lock, flags);
return bytes;
}
@@ -327,24 +316,6 @@ static int hsu_dma_slave_config(struct dma_chan *chan,
return 0;
}
static void hsu_dma_chan_deactivate(struct hsu_dma_chan *hsuc)
{
unsigned long flags;
spin_lock_irqsave(&hsuc->lock, flags);
hsu_chan_disable(hsuc);
spin_unlock_irqrestore(&hsuc->lock, flags);
}
static void hsu_dma_chan_activate(struct hsu_dma_chan *hsuc)
{
unsigned long flags;
spin_lock_irqsave(&hsuc->lock, flags);
hsu_chan_enable(hsuc);
spin_unlock_irqrestore(&hsuc->lock, flags);
}
static int hsu_dma_pause(struct dma_chan *chan)
{
struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
@@ -352,7 +323,7 @@ static int hsu_dma_pause(struct dma_chan *chan)
spin_lock_irqsave(&hsuc->vchan.lock, flags);
if (hsuc->desc && hsuc->desc->status == DMA_IN_PROGRESS) {
hsu_dma_chan_deactivate(hsuc);
hsu_chan_disable(hsuc);
hsuc->desc->status = DMA_PAUSED;
}
spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
@@ -368,7 +339,7 @@ static int hsu_dma_resume(struct dma_chan *chan)
spin_lock_irqsave(&hsuc->vchan.lock, flags);
if (hsuc->desc && hsuc->desc->status == DMA_PAUSED) {
hsuc->desc->status = DMA_IN_PROGRESS;
hsu_dma_chan_activate(hsuc);
hsu_chan_enable(hsuc);
}
spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
@@ -441,8 +412,6 @@ int hsu_dma_probe(struct hsu_dma_chip *chip)
hsuc->direction = (i & 0x1) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
hsuc->reg = addr + i * HSU_DMA_CHAN_LENGTH;
spin_lock_init(&hsuc->lock);
}
dma_cap_set(DMA_SLAVE, hsu->dma.cap_mask);
......
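The hsu hunks delete the thin wrappers whose only job was to take the per-channel hsuc->lock: pause/resume already run under vchan.lock, so they now call hsu_chan_disable()/hsu_chan_enable() directly, and hsu_dma_chan_get_sr() moves to that same lock. A schematic sketch of the pattern follows, with a pthread mutex standing in for the virt-dma channel lock and a plain field standing in for the DCR register; the names are illustrative, not the driver's API.

#include <pthread.h>
#include <stdio.h>

struct chan {
	pthread_mutex_t vchan_lock;	/* the lock every caller already takes */
	unsigned int dcr;		/* stands in for the control register */
};

/* No private channel lock any more: the caller's vchan_lock serialises us. */
static void chan_disable(struct chan *c) { c->dcr &= ~1u; }
static void chan_enable(struct chan *c)  { c->dcr |= 1u; }

static void chan_pause(struct chan *c)
{
	pthread_mutex_lock(&c->vchan_lock);
	chan_disable(c);		/* previously a second, lock-taking wrapper */
	pthread_mutex_unlock(&c->vchan_lock);
}

static void chan_resume(struct chan *c)
{
	pthread_mutex_lock(&c->vchan_lock);
	chan_enable(c);
	pthread_mutex_unlock(&c->vchan_lock);
}

int main(void)
{
	struct chan c = { .vchan_lock = PTHREAD_MUTEX_INITIALIZER, .dcr = 1 };

	chan_pause(&c);
	chan_resume(&c);
	printf("dcr = %u\n", c.dcr);	/* -> 1 */
	return 0;
}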