Commit ce615f5c authored by Linus Torvalds

Merge tag 'dmaengine-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
 "Core:
   - Support out of order dma completion
   - Support for repeating transaction

  New controllers:
   - Support for Actions S700 DMA engine
   - Renesas R8A774E1, r8a7742 controller binding
   - New driver for Xilinx DPDMA controller

  Other:
   - Support of out of order dma completion in idxd driver
   - W=1 warning cleanup of subsystem
   - Updates to ti-k3-dma, dw, idxd drivers"

* tag 'dmaengine-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (68 commits)
  dmaengine: dw: Don't include unneeded header to platform data header
  dmaengine: Actions: Add support for S700 DMA engine
  dmaengine: Actions: get rid of bit fields from dma descriptor
  dt-bindings: dmaengine: convert Actions Semi Owl SoCs bindings to yaml
  dmaengine: idxd: add missing invalid flags field to completion
  dmaengine: dw: Initialize max_sg_burst capability
  dmaengine: dw: Introduce max burst length hw config
  dmaengine: dw: Initialize min and max burst DMA device capability
  dmaengine: dw: Set DMA device max segment size parameter
  dmaengine: dw: Take HC_LLP flag into account for noLLP auto-config
  dmaengine: Introduce DMA-device device_caps callback
  dmaengine: Introduce max SG burst capability
  dmaengine: Introduce min burst length capability
  dt-bindings: dma: dw: Add max burst transaction length property
  dt-bindings: dma: dw: Convert DW DMAC to DT binding
  dmaengine: ti: k3-udma: Query throughput level information from hardware
  dmaengine: ti: k3-udma: Use defines for capabilities register parsing
  dmaengine: xilinx: dpdma: Fix kerneldoc warning
  dmaengine: xilinx: dpdma: add missing kernel doc
  dmaengine: xilinx: dpdma: remove comparison of unsigned expression
  ...
parents 81e11336 00043a26
What:		/sys/bus/dsa/devices/dsa<m>/version
Date:		Apr 15, 2020
KernelVersion:	5.8.0
Contact:	dmaengine@vger.kernel.org
Description:	The hardware version number.

What:		/sys/bus/dsa/devices/dsa<m>/cdev_major
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The major number that the character device driver assigned to
		this device.

What:		/sys/bus/dsa/devices/dsa<m>/errors
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The error information for this device.

What:		/sys/bus/dsa/devices/dsa<m>/max_batch_size
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The largest number of work descriptors in a batch.

What:		/sys/bus/dsa/devices/dsa<m>/max_work_queues_size
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The maximum work queue size supported by this device.

What:		/sys/bus/dsa/devices/dsa<m>/max_engines
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The maximum number of engines supported by this device.

What:		/sys/bus/dsa/devices/dsa<m>/max_groups
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The maximum number of groups can be created under this device.

What:		/sys/bus/dsa/devices/dsa<m>/max_tokens
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
...@@ -50,7 +50,7 @@ Description: The total number of bandwidth tokens supported by this device.
		implementation, and these resources are allocated by engines to
		support operations.

What:		/sys/bus/dsa/devices/dsa<m>/max_transfer_size
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
...@@ -58,57 +58,57 @@ Description: The number of bytes to be read from the source address to
		perform the operation. The maximum transfer size is dependent on
		the workqueue the descriptor was submitted to.

What:		/sys/bus/dsa/devices/dsa<m>/max_work_queues
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The maximum work queue number that this device supports.

What:		/sys/bus/dsa/devices/dsa<m>/numa_node
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The numa node number for this device.

What:		/sys/bus/dsa/devices/dsa<m>/op_cap
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The operation capability bit mask specify the operation types
		supported by the this device.

What:		/sys/bus/dsa/devices/dsa<m>/state
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The state information of this device. It can be either enabled
		or disabled.

What:		/sys/bus/dsa/devices/dsa<m>/group<m>.<n>
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The assigned group under this device.

What:		/sys/bus/dsa/devices/dsa<m>/engine<m>.<n>
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The assigned engine under this device.

What:		/sys/bus/dsa/devices/dsa<m>/wq<m>.<n>
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The assigned work queue under this device.

What:		/sys/bus/dsa/devices/dsa<m>/configurable
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	To indicate if this device is configurable or not.

What:		/sys/bus/dsa/devices/dsa<m>/token_limit
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
...@@ -116,19 +116,19 @@ Description: The maximum number of bandwidth tokens that may be in use at
		one time by operations that access low bandwidth memory in the
		device.

What:		/sys/bus/dsa/devices/wq<m>.<n>/group_id
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The group id that this work queue belongs to.

What:		/sys/bus/dsa/devices/wq<m>.<n>/size
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The work queue size for this work queue.

What:		/sys/bus/dsa/devices/wq<m>.<n>/type
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
...@@ -136,20 +136,20 @@ Description: The type of this work queue, it can be "kernel" type for work
		queue usages in the kernel space or "user" type for work queue
		usages by applications in user space.

What:		/sys/bus/dsa/devices/wq<m>.<n>/cdev_minor
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The minor number assigned to this work queue by the character
		device driver.

What:		/sys/bus/dsa/devices/wq<m>.<n>/mode
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The work queue mode type for this work queue.

What:		/sys/bus/dsa/devices/wq<m>.<n>/priority
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
...@@ -157,20 +157,20 @@ Description: The priority value of this work queue, it is a vlue relative to
		other work queue in the same group to control quality of service
		for dispatching work from multiple workqueues in the same group.

What:		/sys/bus/dsa/devices/wq<m>.<n>/state
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The current state of the work queue.

What:		/sys/bus/dsa/devices/wq<m>.<n>/threshold
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
Description:	The number of entries in this work queue that may be filled
		via a limited portal.

What:		/sys/bus/dsa/devices/engine<m>.<n>/group_id
Date:		Oct 25, 2019
KernelVersion:	5.6.0
Contact:	dmaengine@vger.kernel.org
......
...@@ -16,6 +16,7 @@ Optional properties:
- dma-channels: contains the total number of DMA channels supported by the DMAC
- dma-requests: contains the total number of DMA requests supported by the DMAC
- arm,pl330-broken-no-flushp: quirk for avoiding to execute DMAFLUSHP
- arm,pl330-periph-burst: quirk for performing burst transfer only
- resets: contains an entry for each entry in reset-names.
  See ../reset/reset.txt for details.
- reset-names: must contain at least "dma", and optional is "dma-ocp".
......
* Actions Semi Owl SoCs DMA controller
This binding follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Should be "actions,s900-dma".
- reg: Should contain DMA registers location and length.
- interrupts: Should contain 4 interrupts shared by all channel.
- #dma-cells: Must be <1>. Used to represent the number of integer
cells in the dmas property of client device.
- dma-channels: Physical channels supported.
- dma-requests: Number of DMA request signals supported by the controller.
Refer to Documentation/devicetree/bindings/dma/dma.txt
- clocks: Phandle and Specifier of the clock feeding the DMA controller.
Example:
Controller:
dma: dma-controller@e0260000 {
compatible = "actions,s900-dma";
reg = <0x0 0xe0260000 0x0 0x1000>;
interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 60 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <1>;
dma-channels = <12>;
dma-requests = <46>;
clocks = <&clock CLK_DMAC>;
};
Client:
DMA clients connected to the Actions Semi Owl SoCs DMA controller must
use the format described in the dma.txt file, using a two-cell specifier
for each channel.
The two cells in order are:
1. A phandle pointing to the DMA controller.
2. The channel id.
uart5: serial@e012a000 {
...
dma-names = "tx", "rx";
dmas = <&dma 26>, <&dma 27>;
...
};
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/owl-dma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Actions Semi Owl SoCs DMA controller

description: |
  The OWL DMA is a general-purpose direct memory access controller capable of
  supporting 10 and 12 independent DMA channels for S700 and S900 SoCs
  respectively.

maintainers:
  - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

allOf:
  - $ref: "dma-controller.yaml#"

properties:
  compatible:
    enum:
      - actions,s900-dma
      - actions,s700-dma

  reg:
    maxItems: 1

  interrupts:
    description:
      controller supports 4 interrupts, which are freely assignable to the
      DMA channels.
    maxItems: 4

  "#dma-cells":
    const: 1

  dma-channels:
    maximum: 12

  dma-requests:
    maximum: 46

  clocks:
    maxItems: 1
    description:
      Phandle and Specifier of the clock feeding the DMA controller.

  power-domains:
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - "#dma-cells"
  - dma-channels
  - dma-requests
  - clocks

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    dma: dma-controller@e0260000 {
        compatible = "actions,s900-dma";
        reg = <0xe0260000 0x1000>;
        interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 60 IRQ_TYPE_LEVEL_HIGH>;
        #dma-cells = <1>;
        dma-channels = <12>;
        dma-requests = <46>;
        clocks = <&clock 22>;
    };

...
...@@ -23,6 +23,7 @@ properties:
          - renesas,dmac-r8a774a1 # RZ/G2M
          - renesas,dmac-r8a774b1 # RZ/G2N
          - renesas,dmac-r8a774c0 # RZ/G2E
          - renesas,dmac-r8a774e1 # RZ/G2H
          - renesas,dmac-r8a7790  # R-Car H2
          - renesas,dmac-r8a7791  # R-Car M2-W
          - renesas,dmac-r8a7792  # R-Car V2H
......
...@@ -16,6 +16,7 @@ properties:
  compatible:
    items:
      - enum:
          - renesas,r8a7742-usb-dmac  # RZ/G1H
          - renesas,r8a7743-usb-dmac  # RZ/G1M
          - renesas,r8a7744-usb-dmac  # RZ/G1N
          - renesas,r8a7745-usb-dmac  # RZ/G1E
...@@ -23,6 +24,7 @@ properties:
          - renesas,r8a774a1-usb-dmac # RZ/G2M
          - renesas,r8a774b1-usb-dmac # RZ/G2N
          - renesas,r8a774c0-usb-dmac # RZ/G2E
          - renesas,r8a774e1-usb-dmac # RZ/G2H
          - renesas,r8a7790-usb-dmac  # R-Car H2
          - renesas,r8a7791-usb-dmac  # R-Car M2-W
          - renesas,r8a7793-usb-dmac  # R-Car M2-N
......
# SPDX-License-Identifier: GPL-2.0-only
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/snps,dma-spear1340.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Synopsys Designware DMA Controller

maintainers:
  - Viresh Kumar <vireshk@kernel.org>
  - Andy Shevchenko <andriy.shevchenko@linux.intel.com>

allOf:
  - $ref: "dma-controller.yaml#"

properties:
  compatible:
    const: snps,dma-spear1340

  "#dma-cells":
    const: 3
    description: |
      First cell is a phandle pointing to the DMA controller. Second one is
      the DMA request line number. Third cell is the memory master identifier
      for transfers on dynamically allocated channel. Fourth cell is the
      peripheral master identifier for transfers on an allocated channel.

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  clocks:
    maxItems: 1

  clock-names:
    description: AHB interface reference clock.
    const: hclk

  dma-channels:
    description: |
      Number of DMA channels supported by the controller. In case if
      not specified the driver will try to auto-detect this and
      the rest of the optional parameters.
    minimum: 1
    maximum: 8

  dma-requests:
    minimum: 1
    maximum: 16

  dma-masters:
    $ref: /schemas/types.yaml#definitions/uint32
    description: |
      Number of DMA masters supported by the controller. In case if
      not specified the driver will try to auto-detect this and
      the rest of the optional parameters.
    minimum: 1
    maximum: 4

  chan_allocation_order:
    $ref: /schemas/types.yaml#definitions/uint32
    description: |
      DMA channels allocation order specifier. Zero means ascending order
      (first free allocated), while one - descending (last free allocated).
    default: 0
    enum: [0, 1]

  chan_priority:
    $ref: /schemas/types.yaml#definitions/uint32
    description: |
      DMA channels priority order. Zero means ascending channels priority
      so the very first channel has the highest priority. While 1 means
      descending priority (the last channel has the highest priority).
    default: 0
    enum: [0, 1]

  block_size:
    $ref: /schemas/types.yaml#definitions/uint32
    description: Maximum block size supported by the DMA controller.
    enum: [3, 7, 15, 31, 63, 127, 255, 511, 1023, 2047, 4095]

  data-width:
    $ref: /schemas/types.yaml#/definitions/uint32-array
    description: Data bus width per each DMA master in bytes.
    items:
      maxItems: 4
      items:
        enum: [4, 8, 16, 32]

  data_width:
    $ref: /schemas/types.yaml#/definitions/uint32-array
    deprecated: true
    description: |
      Data bus width per each DMA master in (2^n * 8) bits. This property is
      deprecated. It' usage is discouraged in favor of data-width one. Moreover
      the property incorrectly permits to define data-bus width of 8 and 16
      bits, which is impossible in accordance with DW DMAC IP-core data book.
    items:
      maxItems: 4
      items:
        enum:
          - 0 # 8 bits
          - 1 # 16 bits
          - 2 # 32 bits
          - 3 # 64 bits
          - 4 # 128 bits
          - 5 # 256 bits
        default: 0

  multi-block:
    $ref: /schemas/types.yaml#/definitions/uint32-array
    description: |
      LLP-based multi-block transfer supported by hardware per
      each DMA channel.
    items:
      maxItems: 8
      items:
        enum: [0, 1]
        default: 1

  snps,max-burst-len:
    $ref: /schemas/types.yaml#/definitions/uint32-array
    description: |
      Maximum length of the burst transactions supported by the controller.
      This property defines the upper limit of the run-time burst setting
      (CTLx.SRC_MSIZE/CTLx.DST_MSIZE fields) so the allowed burst length
      will be from 1 to max-burst-len words. It's an array property with one
      cell per channel in the units determined by the value set in the
      CTLx.SRC_TR_WIDTH/CTLx.DST_TR_WIDTH fields (data width).
    items:
      maxItems: 8
      items:
        enum: [4, 8, 16, 32, 64, 128, 256]
        default: 256

  snps,dma-protection-control:
    $ref: /schemas/types.yaml#definitions/uint32
    description: |
      Bits one-to-one passed to the AHB HPROT[3:1] bus. Each bit setting
      indicates the following features: bit 0 - privileged mode,
      bit 1 - DMA is bufferable, bit 2 - DMA is cacheable.
    default: 0
    minimum: 0
    maximum: 7

unevaluatedProperties: false

required:
  - compatible
  - "#dma-cells"
  - reg
  - interrupts

examples:
  - |
    dma-controller@fc000000 {
      compatible = "snps,dma-spear1340";
      reg = <0xfc000000 0x1000>;
      interrupt-parent = <&vic1>;
      interrupts = <12>;
      dma-channels = <8>;
      dma-requests = <16>;
      dma-masters = <4>;
      #dma-cells = <3>;
      chan_allocation_order = <1>;
      chan_priority = <1>;
      block_size = <0xfff>;
      data-width = <8 8>;
      multi-block = <0 0 0 0 0 0 0 0>;
      snps,max-burst-len = <16 16 4 4 4 4 4 4>;
    };

...
* Synopsys Designware DMA Controller
Required properties:
- compatible: "snps,dma-spear1340"
- reg: Address range of the DMAC registers
- interrupt: Should contain the DMAC interrupt number
- dma-channels: Number of channels supported by hardware
- dma-requests: Number of DMA request lines supported, up to 16
- dma-masters: Number of AHB masters supported by the controller
- #dma-cells: must be <3>
- chan_allocation_order: order of allocation of channel, 0 (default): ascending,
1: descending
- chan_priority: priority of channels. 0 (default): increase from chan 0->n, 1:
increase from chan n->0
- block_size: Maximum block size supported by the controller
- data-width: Maximum data width supported by hardware per AHB master
(in bytes, power of 2)
Deprecated properties:
- data_width: Maximum data width supported by hardware per AHB master
(0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
Optional properties:
- multi-block: Multi block transfers supported by hardware. Array property with
one cell per channel. 0: not supported, 1 (default): supported.
- snps,dma-protection-control: AHB HPROT[3:1] protection setting.
The default value is 0 (for non-cacheable, non-buffered,
unprivileged data access).
Refer to include/dt-bindings/dma/dw-dmac.h for possible values.
Example:
dmahost: dma@fc000000 {
compatible = "snps,dma-spear1340";
reg = <0xfc000000 0x1000>;
interrupt-parent = <&vic1>;
interrupts = <12>;
dma-channels = <8>;
dma-requests = <16>;
dma-masters = <2>;
#dma-cells = <3>;
chan_allocation_order = <1>;
chan_priority = <1>;
block_size = <0xfff>;
data-width = <8 8>;
};
DMA clients connected to the Designware DMA controller must use the format
described in the dma.txt file, using a four-cell specifier for each channel.
The four cells in order are:
1. A phandle pointing to the DMA controller
2. The DMA request line number
3. Memory master for transfers on allocated channel
4. Peripheral master for transfers on allocated channel
Example:
serial@e0000000 {
compatible = "arm,pl011", "arm,primecell";
reg = <0xe0000000 0x1000>;
interrupts = <0 35 0x4>;
dmas = <&dmahost 12 0 1>,
<&dmahost 13 1 0>;
dma-names = "rx", "rx";
};
...@@ -239,6 +239,22 @@ Currently, the types available are:
want to transfer a portion of uncompressed data directly to the
display to print it
- DMA_COMPLETION_NO_ORDER
- The device does not support in order completion.
- The driver should return DMA_OUT_OF_ORDER for device_tx_status if
the device is setting this capability.
- All cookie tracking and checking API should be treated as invalid if
the device exports this capability.
- At this point, this is incompatible with polling option for dmatest.
- If this cap is set, the user is recommended to provide an unique
identifier for each descriptor sent to the DMA device in order to
properly track the completion.
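A minimal client-side sketch of what this implies (not part of this merge; the
helper names are made up): the client checks the capability once and stops
relying on cookie ordering whenever the provider reports DMA_OUT_OF_ORDER,
falling back to per-descriptor completion callbacks instead.

#include <linux/dmaengine.h>

/* Hypothetical helpers, for illustration only. */
static bool chan_completes_in_order(struct dma_chan *chan)
{
	/* With DMA_COMPLETION_NO_ORDER set, cookie ordering is meaningless. */
	return !dma_has_cap(DMA_COMPLETION_NO_ORDER, chan->device->cap_mask);
}

static void check_cookie(struct dma_chan *chan, dma_cookie_t cookie)
{
	enum dma_status status = dmaengine_tx_status(chan, cookie, NULL);

	if (status == DMA_OUT_OF_ORDER) {
		/*
		 * The provider (e.g. idxd) completes descriptors out of
		 * order; wait for the per-descriptor callback instead of
		 * polling the cookie.
		 */
		return;
	}

	if (status == DMA_COMPLETE)
		pr_debug("cookie %d is complete\n", cookie);
}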
- DMA_REPEAT
- The device supports repeated transfers. A repeated transfer, indicated by
...@@ -420,6 +436,9 @@ supported.
- In the case of a cyclic transfer, it should only take into
account the current period.
- Should return DMA_OUT_OF_ORDER if the device does not support in order
completion and is completing the operation out of order.
- This function can be called in an interrupt context.
- device_config
...@@ -509,7 +528,7 @@ dma_cookie_t
DMA_CTRL_ACK
- If clear, the descriptor cannot be reused by provider until the
client acknowledges receipt, i.e. has a chance to establish any
dependency chains
- This can be acked by invoking async_tx_ack()
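For reference, a small sketch (not from this merge; the helper name is
illustrative) of the two ways a client typically acknowledges a descriptor so
the provider may reuse it: setting DMA_CTRL_ACK at prep time, or calling
async_tx_ack() afterwards.

#include <linux/dmaengine.h>

/* Hypothetical helper, for illustration only. */
static int submit_acked_memcpy(struct dma_chan *chan, dma_addr_t dst,
			       dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;

	/*
	 * DMA_CTRL_ACK set at prep time lets the provider recycle the
	 * descriptor as soon as it completes.
	 */
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	/* async_tx_ack(tx); would have the same effect after the fact. */

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}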
......
...@@ -11296,6 +11296,19 @@ W: http://www.monstr.eu/fdt/
T: git git://git.monstr.eu/linux-2.6-microblaze.git
F: arch/microblaze/

MICROCHIP AT91 DMA DRIVERS
M: Ludovic Desroches <ludovic.desroches@microchip.com>
M: Tudor Ambarus <tudor.ambarus@microchip.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: dmaengine@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/dma/atmel-dma.txt
F: drivers/dma/at_hdmac.c
F: drivers/dma/at_hdmac_regs.h
F: drivers/dma/at_xdmac.c
F: include/dt-bindings/dma/at91.h
F: include/linux/platform_data/dma-atmel.h

MICROCHIP AT91 SERIAL DRIVER
M: Richard Genoud <richard.genoud@gmail.com>
S: Maintained
...@@ -11324,17 +11337,6 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: sound/soc/atmel

MICROCHIP DMA DRIVER
M: Ludovic Desroches <ludovic.desroches@microchip.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: dmaengine@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/dma/atmel-dma.txt
F: drivers/dma/at_hdmac.c
F: drivers/dma/at_hdmac_regs.h
F: include/dt-bindings/dma/at91.h
F: include/linux/platform_data/dma-atmel.h

MICROCHIP ECC DRIVER
M: Tudor Ambarus <tudor.ambarus@microchip.com>
L: linux-crypto@vger.kernel.org
...@@ -11470,13 +11472,6 @@ L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/net/wireless/microchip/wilc1000/

MICROCHIP XDMA DRIVER
M: Ludovic Desroches <ludovic.desroches@microchip.com>
L: linux-arm-kernel@lists.infradead.org
L: dmaengine@vger.kernel.org
S: Supported
F: drivers/dma/at_xdmac.c

MICROSEMI MIPS SOCS
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
......
...@@ -285,8 +285,9 @@ config INTEL_IDMA64
config INTEL_IDXD
	tristate "Intel Data Accelerators support"
	depends on PCI && X86_64
	depends on PCI_MSI
	depends on SBITMAP
	select DMA_ENGINE
	select SBITMAP
	help
	  Enable support for the Intel(R) data accelerators present
	  in Intel Xeon CPU.
......
...@@ -358,19 +358,12 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
{
	struct acpi_dma_parser_data pdata;
	struct acpi_dma_spec *dma_spec = &pdata.dma_spec;
	struct acpi_device *adev = ACPI_COMPANION(dev);
	struct list_head resource_list;
	struct acpi_device *adev;
	struct acpi_dma *adma;
	struct dma_chan *chan = NULL;
	int found;
	int ret;

	/* Check if the device was enumerated by ACPI */
	if (!dev)
		return ERR_PTR(-ENODEV);

	adev = ACPI_COMPANION(dev);
	if (!adev)
		return ERR_PTR(-ENODEV);

	memset(&pdata, 0, sizeof(pdata));
	pdata.index = index;
...@@ -380,9 +373,11 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
	dma_spec->slave_id = -1;

	INIT_LIST_HEAD(&resource_list);
	ret = acpi_dev_get_resources(adev, &resource_list,
				     acpi_dma_parse_fixed_dma, &pdata);
	acpi_dev_free_resource_list(&resource_list);
	if (ret < 0)
		return ERR_PTR(ret);

	if (dma_spec->slave_id < 0 || dma_spec->chan_id < 0)
		return ERR_PTR(-ENODEV);
......
...@@ -153,7 +153,8 @@ struct msgdma_extended_desc {
 * struct msgdma_sw_desc - implements a sw descriptor
 * @async_tx: support for the async_tx api
 * @hw_desc: assosiated HW descriptor
 * @node: node to move from the free list to the tx list
 * @tx_list: transmit list node
 */
struct msgdma_sw_desc {
	struct dma_async_tx_descriptor async_tx;
...@@ -162,7 +163,7 @@ struct msgdma_sw_desc {
	struct list_head tx_list;
};

/*
 * struct msgdma_device - DMA device structure
 */
struct msgdma_device {
...@@ -258,6 +259,7 @@ static void msgdma_free_desc_list(struct msgdma_device *mdev,
 * @dst: Destination buffer address
 * @src: Source buffer address
 * @len: Transfer length
 * @stride: Read/write stride value to set
 */
static void msgdma_desc_config(struct msgdma_extended_desc *desc,
			       dma_addr_t dst, dma_addr_t src, size_t len,
......
...@@ -656,7 +656,7 @@ static irqreturn_t at_dma_interrupt(int irq, void *dev_id)
/**
 * atc_tx_submit - set the prepared descriptor(s) to be executed by the engine
 * @tx: descriptor at the head of the transaction chain
 *
 * Queue chain if DMA engine is working already
 *
...@@ -1196,7 +1196,7 @@ atc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
	return NULL;
}

/*
 * atc_dma_cyclic_check_values
 * Check for too big/unaligned periods and unaligned DMA buffer
 */
...@@ -1217,7 +1217,7 @@ atc_dma_cyclic_check_values(unsigned int reg_width, dma_addr_t buf_addr,
	return -EINVAL;
}

/*
 * atc_dma_cyclic_fill_desc - Fill one period descriptor
 */
static int
......
...@@ -592,13 +592,25 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
	caps->src_addr_widths = device->src_addr_widths;
	caps->dst_addr_widths = device->dst_addr_widths;
	caps->directions = device->directions;
	caps->min_burst = device->min_burst;
	caps->max_burst = device->max_burst;
	caps->max_sg_burst = device->max_sg_burst;
	caps->residue_granularity = device->residue_granularity;
	caps->descriptor_reuse = device->descriptor_reuse;
	caps->cmd_pause = !!device->device_pause;
	caps->cmd_resume = !!device->device_resume;
	caps->cmd_terminate = !!device->device_terminate_all;

	/*
	 * DMA engine device might be configured with non-uniformly
	 * distributed slave capabilities per device channels. In this
	 * case the corresponding driver may provide the device_caps
	 * callback to override the generic capabilities with
	 * channel-specific ones.
	 */
	if (device->device_caps)
		device->device_caps(chan, caps);

	return 0;
}
EXPORT_SYMBOL_GPL(dma_get_slave_caps);
......
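A short consumer-side sketch (not part of this merge; the helper is
hypothetical) showing how the newly exported min_burst, max_burst and
max_sg_burst capabilities reach a slave client through dma_get_slave_caps(),
after any channel-specific device_caps override has been applied:

#include <linux/dmaengine.h>

static int clamp_burst_to_caps(struct dma_chan *chan,
			       struct dma_slave_config *cfg)
{
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;

	/*
	 * device_caps (if the driver provides it) has already refined
	 * these values for this particular channel.
	 */
	if (caps.max_burst)
		cfg->src_maxburst = min(cfg->src_maxburst, caps.max_burst);
	if (caps.min_burst)
		cfg->src_maxburst = max(cfg->src_maxburst, caps.min_burst);

	/* caps.max_sg_burst == 0 means "unlimited SG entries per transfer". */
	return 0;
}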
...@@ -829,7 +829,10 @@ static int dmatest_func(void *data)
			result("test timed out", total_tests, src->off, dst->off,
			       len, 0);
			goto error_unmap_continue;
		} else if (status != DMA_COMPLETE &&
			   !(dma_has_cap(DMA_COMPLETION_NO_ORDER,
					 dev->cap_mask) &&
			     status == DMA_OUT_OF_ORDER)) {
			result(status == DMA_ERROR ?
			       "completion error status" :
			       "completion busy status", total_tests, src->off,
...@@ -1007,6 +1010,12 @@ static int dmatest_add_channel(struct dmatest_info *info,
	dtc->chan = chan;
	INIT_LIST_HEAD(&dtc->threads);

	if (dma_has_cap(DMA_COMPLETION_NO_ORDER, dma_dev->cap_mask) &&
	    info->params.polled) {
		info->params.polled = false;
		pr_warn("DMA_COMPLETION_NO_ORDER, polled disabled\n");
	}

	if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask)) {
		if (dmatest == 0) {
			cnt = dmatest_add_threads(info, dtc, DMA_MEMCPY);
......
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DW_DMAC_CORE) += dw_dmac_core.o
dw_dmac_core-y := core.o dw.o idma32.o
dw_dmac_core-$(CONFIG_ACPI) += acpi.o

obj-$(CONFIG_DW_DMAC) += dw_dmac.o
dw_dmac-y := platform.o
dw_dmac-$(CONFIG_ACPI) += acpi.o
dw_dmac-$(CONFIG_OF) += of.o

obj-$(CONFIG_DW_DMAC_PCI) += dw_dmac_pci.o
dw_dmac_pci-y := pci.o
...@@ -41,6 +41,7 @@ void dw_dma_acpi_controller_register(struct dw_dma *dw)
	if (ret)
		dev_err(dev, "could not register acpi_dma_controller\n");
}
EXPORT_SYMBOL_GPL(dw_dma_acpi_controller_register);

void dw_dma_acpi_controller_free(struct dw_dma *dw)
{
...@@ -51,3 +52,4 @@ void dw_dma_acpi_controller_free(struct dw_dma *dw)
	acpi_dma_controller_free(dev);
}
EXPORT_SYMBOL_GPL(dw_dma_acpi_controller_free);
...@@ -786,6 +786,11 @@ static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
	memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig));

	dwc->dma_sconfig.src_maxburst =
		clamp(dwc->dma_sconfig.src_maxburst, 0U, dwc->max_burst);
	dwc->dma_sconfig.dst_maxburst =
		clamp(dwc->dma_sconfig.dst_maxburst, 0U, dwc->max_burst);

	dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst);
	dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst);
...@@ -1037,6 +1042,25 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
	dev_vdbg(chan2dev(chan), "%s: done\n", __func__);
}

static void dwc_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);

	caps->max_burst = dwc->max_burst;

	/*
	 * It might be crucial for some devices to have the hardware
	 * accelerated multi-block transfers supported, aka LLPs in DW DMAC
	 * notation. So if LLPs are supported then max_sg_burst is set to
	 * zero which means unlimited number of SG entries can be handled in a
	 * single DMA transaction, otherwise it's just one SG entry.
	 */
	if (dwc->nollp)
		caps->max_sg_burst = 1;
	else
		caps->max_sg_burst = 0;
}

int do_dma_probe(struct dw_dma_chip *chip)
{
	struct dw_dma *dw = chip->dw;
...@@ -1166,11 +1190,23 @@ int do_dma_probe(struct dw_dma_chip *chip)
			 */
			dwc->block_size =
				(4 << ((pdata->block_size >> 4 * i) & 0xf)) - 1;

			/*
			 * According to the DW DMA databook the true scatter-
			 * gether LLPs aren't available if either multi-block
			 * config is disabled (CHx_MULTI_BLK_EN == 0) or the
			 * LLP register is hard-coded to zeros
			 * (CHx_HC_LLP == 1).
			 */
			dwc->nollp =
				(dwc_params >> DWC_PARAMS_MBLK_EN & 0x1) == 0 ||
				(dwc_params >> DWC_PARAMS_HC_LLP & 0x1) == 1;
			dwc->max_burst =
				(0x4 << (dwc_params >> DWC_PARAMS_MSIZE & 0x7));
		} else {
			dwc->block_size = pdata->block_size;
			dwc->nollp = !pdata->multi_block[i];
			dwc->max_burst = pdata->max_burst[i] ?: DW_DMA_MAX_BURST;
		}
	}
...@@ -1193,6 +1229,7 @@ int do_dma_probe(struct dw_dma_chip *chip)
	dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;
	dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;

	dw->dma.device_caps = dwc_caps;
	dw->dma.device_config = dwc_config;
	dw->dma.device_pause = dwc_pause;
	dw->dma.device_resume = dwc_resume;
...@@ -1202,12 +1239,21 @@ int do_dma_probe(struct dw_dma_chip *chip)
	dw->dma.device_issue_pending = dwc_issue_pending;

	/* DMA capabilities */
	dw->dma.min_burst = DW_DMA_MIN_BURST;
	dw->dma.max_burst = DW_DMA_MAX_BURST;
	dw->dma.src_addr_widths = DW_DMA_BUSWIDTHS;
	dw->dma.dst_addr_widths = DW_DMA_BUSWIDTHS;
	dw->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
			     BIT(DMA_MEM_TO_MEM);
	dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

	/*
	 * For now there is no hardware with non uniform maximum block size
	 * across all of the device channels, so we set the maximum segment
	 * size as the block size found for the very first channel.
	 */
	dma_set_max_seg_size(dw->dma.dev, dw->chan[0].block_size);

	err = dma_async_device_register(&dw->dma);
	if (err)
		goto err_dma_register;
......
...@@ -98,6 +98,11 @@ struct dw_dma_platform_data *dw_dma_parse_dt(struct platform_device *pdev)
			pdata->multi_block[tmp] = 1;
	}

	if (of_property_read_u32_array(np, "snps,max-burst-len", pdata->max_burst,
				       nr_channels)) {
		memset32(pdata->max_burst, DW_DMA_MAX_BURST, nr_channels);
	}

	if (!of_property_read_u32(np, "snps,dma-protection-control", &tmp)) {
		if (tmp > CHAN_PROTCTL_MASK)
			return NULL;
......
...@@ -60,6 +60,8 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
	if (ret)
		return ret;

	dw_dma_acpi_controller_register(chip->dw);

	pci_set_drvdata(pdev, data);

	return 0;
...@@ -71,6 +73,8 @@ static void dw_pci_remove(struct pci_dev *pdev)
	struct dw_dma_chip *chip = data->chip;
	int ret;

	dw_dma_acpi_controller_free(chip->dw);

	ret = data->remove(chip);
	if (ret)
		dev_warn(&pdev->dev, "can't remove device properly: %d\n", ret);
......
...@@ -125,6 +125,8 @@ struct dw_dma_regs {
/* Bitfields in DWC_PARAMS */
#define DWC_PARAMS_MBLK_EN	11	/* multi block transfer */
#define DWC_PARAMS_HC_LLP	13	/* set LLP register to zero */
#define DWC_PARAMS_MSIZE	16	/* max group transaction size */

/* bursts size */
enum dw_dma_msize {
...@@ -283,6 +285,7 @@ struct dw_dma_chan {
	/* hardware configuration */
	unsigned int	block_size;
	bool		nollp;
	u32		max_burst;

	/* custom slave configuration */
	struct dw_dma_slave	dws;
......
...@@ -147,6 +147,7 @@ struct ep93xx_dma_desc {
 *                is set via .device_config before slave operation is
 *                prepared
 * @runtime_ctrl: M2M runtime values for the control register.
 * @slave_config: slave configuration
 *
 * As EP93xx DMA controller doesn't support real chained DMA descriptors we
 * will have slightly different scheme here: @active points to a head of
...@@ -187,6 +188,7 @@ struct ep93xx_dma_chan {
 * @dma_dev: holds the dmaengine device
 * @m2m: is this an M2M or M2P device
 * @hw_setup: method which sets the channel up for operation
 * @hw_synchronize: synchronizes DMA channel termination to current context
 * @hw_shutdown: shuts the channel down and flushes whatever is left
 * @hw_submit: pushes active descriptor(s) to the hardware
 * @hw_interrupt: handle the interrupt
......
...@@ -56,7 +56,7 @@
/* Registers for bit and genmask */
#define FSL_QDMA_CQIDR_SQT		BIT(15)
#define QDMA_CCDF_FORMAT		BIT(29)
#define QDMA_CCDF_SER			BIT(30)
#define QDMA_SG_FIN			BIT(30)
#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
...@@ -110,8 +110,19 @@
#define FSL_QDMA_CMD_DSEN_OFFSET	19
#define FSL_QDMA_CMD_LWC_OFFSET		16

/* Field definition for Descriptor status */
#define QDMA_CCDF_STATUS_RTE		BIT(5)
#define QDMA_CCDF_STATUS_WTE		BIT(4)
#define QDMA_CCDF_STATUS_CDE		BIT(2)
#define QDMA_CCDF_STATUS_SDE		BIT(1)
#define QDMA_CCDF_STATUS_DDE		BIT(0)
#define QDMA_CCDF_STATUS_MASK		(QDMA_CCDF_STATUS_RTE | \
					QDMA_CCDF_STATUS_WTE | \
					QDMA_CCDF_STATUS_CDE | \
					QDMA_CCDF_STATUS_SDE | \
					QDMA_CCDF_STATUS_DDE)

/* Field definition for Descriptor offset */
#define QDMA_CCDF_STATUS		20
#define QDMA_CCDF_OFFSET		20
#define QDMA_SDDF_CMD(x)		(((u64)(x)) << 32)
...@@ -136,7 +147,7 @@
 * @__reserved1:	Reserved field.
 * @cfg8b_w1:		Compound descriptor command queue origin produced
 *			by qDMA and dynamic debug field.
 * @data:		Pointer to the memory 40-bit address, describes DMA
 *			source information and DMA destination information.
 */
struct fsl_qdma_format {
...@@ -243,13 +254,14 @@ qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
static inline void
qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
{
	ccdf->cfg = cpu_to_le32(QDMA_CCDF_FORMAT |
				(offset << QDMA_CCDF_OFFSET));
}

static inline int
qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
{
	return (le32_to_cpu(ccdf->status) & QDMA_CCDF_STATUS_MASK);
}

static inline void
...@@ -618,6 +630,7 @@ fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
{
	bool duplicate;
	u32 reg, i, count;
	u8 completion_status;
	struct fsl_qdma_queue *temp_queue;
	struct fsl_qdma_format *status_addr;
	struct fsl_qdma_comp *fsl_comp = NULL;
...@@ -677,6 +690,8 @@ fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
		}
		list_del(&fsl_comp->list);

		completion_status = qdma_ccdf_get_status(status_addr);

		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
		reg |= FSL_QDMA_BSQMR_DI;
		qdma_desc_addr_set64(status_addr, 0x0);
...@@ -686,6 +701,31 @@ fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma,
		qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
		spin_unlock(&temp_queue->queue_lock);

		/* The completion_status is evaluated here
		 * (outside of spin lock)
		 */
		if (completion_status) {
			/* A completion error occurred! */
			if (completion_status & QDMA_CCDF_STATUS_WTE) {
				/* Write transaction error */
				fsl_comp->vdesc.tx_result.result =
					DMA_TRANS_WRITE_FAILED;
			} else if (completion_status & QDMA_CCDF_STATUS_RTE) {
				/* Read transaction error */
				fsl_comp->vdesc.tx_result.result =
					DMA_TRANS_READ_FAILED;
			} else {
				/* Command/source/destination
				 * description error
				 */
				fsl_comp->vdesc.tx_result.result =
					DMA_TRANS_ABORTED;
				dev_err(fsl_qdma->dma_dev.dev,
					"DMA status descriptor error %x\n",
					completion_status);
			}
		}

		spin_lock(&fsl_comp->qchan->vchan.lock);
		vchan_cookie_complete(&fsl_comp->vdesc);
		fsl_comp->qchan->status = DMA_COMPLETE;
...@@ -700,11 +740,22 @@ static irqreturn_t fsl_qdma_error_handler(int irq, void *dev_id)
	unsigned int intr;
	struct fsl_qdma_engine *fsl_qdma = dev_id;
	void __iomem *status = fsl_qdma->status_base;
	unsigned int decfdw0r;
	unsigned int decfdw1r;
	unsigned int decfdw2r;
	unsigned int decfdw3r;

	intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);

	if (intr) {
		decfdw0r = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW0R);
		decfdw1r = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW1R);
		decfdw2r = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW2R);
		decfdw3r = qdma_readl(fsl_qdma, status + FSL_QDMA_DECFDW3R);
		dev_err(fsl_qdma->dma_dev.dev,
			"DMA transaction error! (%x: %x-%x-%x-%x)\n",
			intr, decfdw0r, decfdw1r, decfdw2r, decfdw3r);
	}

	qdma_writel(fsl_qdma, FSL_QDMA_DEDR_CLEAR, status + FSL_QDMA_DEDR);

	return IRQ_HANDLED;
......
...@@ -511,7 +511,6 @@ static int hisi_dma_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	struct device *dev = &pdev->dev;
	struct hisi_dma_dev *hdma_dev;
	struct dma_device *dma_dev;
	size_t dev_size;
	int ret;

	ret = pcim_enable_device(pdev);
...@@ -534,9 +533,7 @@ static int hisi_dma_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	if (ret)
		return ret;

	dev_size = sizeof(struct hisi_dma_chan) * HISI_DMA_CHAN_NUM +
		   sizeof(*hdma_dev);
	hdma_dev = devm_kzalloc(dev, struct_size(hdma_dev, chan, HISI_DMA_CHAN_NUM), GFP_KERNEL);
	if (!hdma_dev)
		return -EINVAL;
......
...@@ -115,6 +115,9 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
	dev_dbg(dev, "%s called\n", __func__);
	filep->private_data = NULL;

	/* Wait for in-flight operations to complete. */
	idxd_wq_drain(wq);

	kfree(ctx);
	mutex_lock(&wq->wq_lock);
	idxd_wq_put(wq);
......
...@@ -133,7 +133,7 @@ static enum dma_status idxd_dma_tx_status(struct dma_chan *dma_chan,
					  dma_cookie_t cookie,
					  struct dma_tx_state *txstate)
{
	return DMA_OUT_OF_ORDER;
}

/*
...@@ -174,6 +174,7 @@ int idxd_register_dma_device(struct idxd_device *idxd)
	INIT_LIST_HEAD(&dma->channels);
	dma->dev = &idxd->pdev->dev;

	dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
	dma->device_release = idxd_dma_release;

	if (idxd->hw.opcap.bits[0] & IDXD_OPCAP_MEMMOVE) {
......
...@@ -104,7 +104,6 @@ struct idxd_wq {
	enum idxd_wq_state state;
	unsigned long flags;
	union wqcfg wqcfg;
	atomic_t dq_count;	/* dedicated queue flow control */
	u32 vec_ptr;		/* interrupt steering */
	struct dsa_hw_desc **hw_descs;
	int num_descs;
...@@ -112,10 +111,8 @@ struct idxd_wq {
	dma_addr_t compls_addr;
	int compls_size;
	struct idxd_desc **descs;
	struct sbitmap_queue sbq;
	struct dma_chan dma_chan;
	struct percpu_rw_semaphore submit_lock;
	wait_queue_head_t submit_waitq;
	char name[WQ_NAME_SIZE + 1];
};
...@@ -145,6 +142,7 @@ enum idxd_device_state {
enum idxd_device_flag {
	IDXD_FLAG_CONFIGURABLE = 0,
	IDXD_FLAG_CMD_RUNNING,
};

struct idxd_device {
...@@ -161,6 +159,7 @@ struct idxd_device {
	void __iomem *reg_base;

	spinlock_t dev_lock;	/* spinlock for device */
	struct completion *cmd_done;
	struct idxd_group *groups;
	struct idxd_wq *wqs;
	struct idxd_engine *engines;
...@@ -183,12 +182,14 @@ struct idxd_device {
	int nr_tokens;		/* non-reserved tokens */

	union sw_err_reg sw_err;
	wait_queue_head_t cmd_waitq;
	struct msix_entry *msix_entries;
	int num_wq_irqs;
	struct idxd_irq_entry *irq_entries;

	struct dma_device dma_dev;
	struct workqueue_struct *wq;
	struct work_struct work;
};

/* IDXD software descriptor */
...@@ -201,6 +202,7 @@ struct idxd_desc {
	struct llist_node llnode;
	struct list_head list;
	int id;
	int cpu;
	struct idxd_wq *wq;
};
...@@ -271,14 +273,14 @@ irqreturn_t idxd_wq_thread(int irq, void *data);
void idxd_mask_error_interrupts(struct idxd_device *idxd);
void idxd_unmask_error_interrupts(struct idxd_device *idxd);
void idxd_mask_msix_vectors(struct idxd_device *idxd);
void idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id);
void idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id);

/* device control */
void idxd_device_init_reset(struct idxd_device *idxd);
int idxd_device_enable(struct idxd_device *idxd);
int idxd_device_disable(struct idxd_device *idxd);
void idxd_device_reset(struct idxd_device *idxd);
int __idxd_device_reset(struct idxd_device *idxd);
void idxd_device_cleanup(struct idxd_device *idxd);
int idxd_device_config(struct idxd_device *idxd);
void idxd_device_wqs_clear_state(struct idxd_device *idxd);
...@@ -288,6 +290,7 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq);
void idxd_wq_free_resources(struct idxd_wq *wq);
int idxd_wq_enable(struct idxd_wq *wq);
int idxd_wq_disable(struct idxd_wq *wq);
void idxd_wq_drain(struct idxd_wq *wq);
int idxd_wq_map_portal(struct idxd_wq *wq);
void idxd_wq_unmap_portal(struct idxd_wq *wq);
void idxd_wq_disable_cleanup(struct idxd_wq *wq);
......
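Aside: the header changes above swap the command wait queue for a struct completion pointer signalled from the interrupt thread. A minimal sketch of that handshake with hypothetical names; serialization of concurrent commands (which the real driver handles under dev_lock) is elided here.

#include <linux/completion.h>

struct hypot_dev {
	struct completion *cmd_done;
};

static void hypot_issue_cmd_and_wait(struct hypot_dev *d)
{
	DECLARE_COMPLETION_ONSTACK(done);

	d->cmd_done = &done;
	/* ... write the device command register here ... */
	wait_for_completion(&done);
}

/* called from the interrupt thread once the command-complete cause bit is seen */
static void hypot_cmd_irq(struct hypot_dev *d)
{
	complete(d->cmd_done);
}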
...@@ -141,22 +141,12 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) ...@@ -141,22 +141,12 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
return rc; return rc;
} }
static void idxd_wqs_free_lock(struct idxd_device *idxd)
{
int i;
for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i];
percpu_free_rwsem(&wq->submit_lock);
}
}
static int idxd_setup_internals(struct idxd_device *idxd) static int idxd_setup_internals(struct idxd_device *idxd)
{ {
struct device *dev = &idxd->pdev->dev; struct device *dev = &idxd->pdev->dev;
int i; int i;
init_waitqueue_head(&idxd->cmd_waitq);
idxd->groups = devm_kcalloc(dev, idxd->max_groups, idxd->groups = devm_kcalloc(dev, idxd->max_groups,
sizeof(struct idxd_group), GFP_KERNEL); sizeof(struct idxd_group), GFP_KERNEL);
if (!idxd->groups) if (!idxd->groups)
...@@ -181,19 +171,11 @@ static int idxd_setup_internals(struct idxd_device *idxd) ...@@ -181,19 +171,11 @@ static int idxd_setup_internals(struct idxd_device *idxd)
for (i = 0; i < idxd->max_wqs; i++) { for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i]; struct idxd_wq *wq = &idxd->wqs[i];
int rc;
wq->id = i; wq->id = i;
wq->idxd = idxd; wq->idxd = idxd;
mutex_init(&wq->wq_lock); mutex_init(&wq->wq_lock);
atomic_set(&wq->dq_count, 0);
init_waitqueue_head(&wq->submit_waitq);
wq->idxd_cdev.minor = -1; wq->idxd_cdev.minor = -1;
rc = percpu_init_rwsem(&wq->submit_lock);
if (rc < 0) {
idxd_wqs_free_lock(idxd);
return rc;
}
} }
for (i = 0; i < idxd->max_engines; i++) { for (i = 0; i < idxd->max_engines; i++) {
...@@ -201,6 +183,10 @@ static int idxd_setup_internals(struct idxd_device *idxd) ...@@ -201,6 +183,10 @@ static int idxd_setup_internals(struct idxd_device *idxd)
idxd->engines[i].id = i; idxd->engines[i].id = i;
} }
idxd->wq = create_workqueue(dev_name(dev));
if (!idxd->wq)
return -ENOMEM;
return 0; return 0;
} }
...@@ -296,9 +282,7 @@ static int idxd_probe(struct idxd_device *idxd) ...@@ -296,9 +282,7 @@ static int idxd_probe(struct idxd_device *idxd)
int rc; int rc;
dev_dbg(dev, "%s entered and resetting device\n", __func__); dev_dbg(dev, "%s entered and resetting device\n", __func__);
rc = idxd_device_reset(idxd); idxd_device_init_reset(idxd);
if (rc < 0)
return rc;
dev_dbg(dev, "IDXD reset complete\n"); dev_dbg(dev, "IDXD reset complete\n");
idxd_read_caps(idxd); idxd_read_caps(idxd);
...@@ -433,11 +417,8 @@ static void idxd_shutdown(struct pci_dev *pdev) ...@@ -433,11 +417,8 @@ static void idxd_shutdown(struct pci_dev *pdev)
int rc, i; int rc, i;
struct idxd_irq_entry *irq_entry; struct idxd_irq_entry *irq_entry;
int msixcnt = pci_msix_vec_count(pdev); int msixcnt = pci_msix_vec_count(pdev);
unsigned long flags;
spin_lock_irqsave(&idxd->dev_lock, flags);
rc = idxd_device_disable(idxd); rc = idxd_device_disable(idxd);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
if (rc) if (rc)
dev_err(&pdev->dev, "Disabling device failed\n"); dev_err(&pdev->dev, "Disabling device failed\n");
...@@ -453,6 +434,8 @@ static void idxd_shutdown(struct pci_dev *pdev) ...@@ -453,6 +434,8 @@ static void idxd_shutdown(struct pci_dev *pdev)
idxd_flush_pending_llist(irq_entry); idxd_flush_pending_llist(irq_entry);
idxd_flush_work_list(irq_entry); idxd_flush_work_list(irq_entry);
} }
destroy_workqueue(idxd->wq);
} }
static void idxd_remove(struct pci_dev *pdev) static void idxd_remove(struct pci_dev *pdev)
...@@ -462,7 +445,6 @@ static void idxd_remove(struct pci_dev *pdev) ...@@ -462,7 +445,6 @@ static void idxd_remove(struct pci_dev *pdev)
dev_dbg(&pdev->dev, "%s called\n", __func__); dev_dbg(&pdev->dev, "%s called\n", __func__);
idxd_cleanup_sysfs(idxd); idxd_cleanup_sysfs(idxd);
idxd_shutdown(pdev); idxd_shutdown(pdev);
idxd_wqs_free_lock(idxd);
mutex_lock(&idxd_idr_lock); mutex_lock(&idxd_idr_lock);
idr_remove(&idxd_idrs[idxd->type], idxd->id); idr_remove(&idxd_idrs[idxd->type], idxd->id);
mutex_unlock(&idxd_idr_lock); mutex_unlock(&idxd_idr_lock);
......
...@@ -23,16 +23,13 @@ void idxd_device_wqs_clear_state(struct idxd_device *idxd) ...@@ -23,16 +23,13 @@ void idxd_device_wqs_clear_state(struct idxd_device *idxd)
} }
} }
static int idxd_restart(struct idxd_device *idxd) static void idxd_device_reinit(struct work_struct *work)
{ {
int i, rc; struct idxd_device *idxd = container_of(work, struct idxd_device, work);
struct device *dev = &idxd->pdev->dev;
lockdep_assert_held(&idxd->dev_lock); int rc, i;
rc = __idxd_device_reset(idxd);
if (rc < 0)
goto out;
idxd_device_reset(idxd);
rc = idxd_device_config(idxd); rc = idxd_device_config(idxd);
if (rc < 0) if (rc < 0)
goto out; goto out;
...@@ -47,19 +44,16 @@ static int idxd_restart(struct idxd_device *idxd) ...@@ -47,19 +44,16 @@ static int idxd_restart(struct idxd_device *idxd)
if (wq->state == IDXD_WQ_ENABLED) { if (wq->state == IDXD_WQ_ENABLED) {
rc = idxd_wq_enable(wq); rc = idxd_wq_enable(wq);
if (rc < 0) { if (rc < 0) {
dev_warn(&idxd->pdev->dev, dev_warn(dev, "Unable to re-enable wq %s\n",
"Unable to re-enable wq %s\n",
dev_name(&wq->conf_dev)); dev_name(&wq->conf_dev));
} }
} }
} }
return 0; return;
out: out:
idxd_device_wqs_clear_state(idxd); idxd_device_wqs_clear_state(idxd);
idxd->state = IDXD_DEV_HALTED;
return rc;
} }
irqreturn_t idxd_irq_handler(int vec, void *data) irqreturn_t idxd_irq_handler(int vec, void *data)
...@@ -78,7 +72,7 @@ irqreturn_t idxd_misc_thread(int vec, void *data) ...@@ -78,7 +72,7 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
struct device *dev = &idxd->pdev->dev; struct device *dev = &idxd->pdev->dev;
union gensts_reg gensts; union gensts_reg gensts;
u32 cause, val = 0; u32 cause, val = 0;
int i, rc; int i;
bool err = false; bool err = false;
cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET); cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET);
...@@ -117,8 +111,8 @@ irqreturn_t idxd_misc_thread(int vec, void *data) ...@@ -117,8 +111,8 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
} }
if (cause & IDXD_INTC_CMD) { if (cause & IDXD_INTC_CMD) {
/* Driver does use command interrupts */
val |= IDXD_INTC_CMD; val |= IDXD_INTC_CMD;
complete(idxd->cmd_done);
} }
if (cause & IDXD_INTC_OCCUPY) { if (cause & IDXD_INTC_OCCUPY) {
...@@ -145,21 +139,24 @@ irqreturn_t idxd_misc_thread(int vec, void *data) ...@@ -145,21 +139,24 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET); gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
if (gensts.state == IDXD_DEVICE_STATE_HALT) { if (gensts.state == IDXD_DEVICE_STATE_HALT) {
spin_lock_bh(&idxd->dev_lock); idxd->state = IDXD_DEV_HALTED;
if (gensts.reset_type == IDXD_DEVICE_RESET_SOFTWARE) { if (gensts.reset_type == IDXD_DEVICE_RESET_SOFTWARE) {
rc = idxd_restart(idxd); /*
if (rc < 0) * If we need a software reset, we will throw the work
dev_err(&idxd->pdev->dev, * on a system workqueue in order to allow interrupts
"idxd restart failed, device halt."); * for the device command completions.
*/
INIT_WORK(&idxd->work, idxd_device_reinit);
queue_work(idxd->wq, &idxd->work);
} else { } else {
spin_lock_bh(&idxd->dev_lock);
idxd_device_wqs_clear_state(idxd); idxd_device_wqs_clear_state(idxd);
idxd->state = IDXD_DEV_HALTED;
dev_err(&idxd->pdev->dev, dev_err(&idxd->pdev->dev,
"idxd halted, need %s.\n", "idxd halted, need %s.\n",
gensts.reset_type == IDXD_DEVICE_RESET_FLR ? gensts.reset_type == IDXD_DEVICE_RESET_FLR ?
"FLR" : "system reset"); "FLR" : "system reset");
spin_unlock_bh(&idxd->dev_lock);
} }
spin_unlock_bh(&idxd->dev_lock);
} }
out: out:
...@@ -264,8 +261,6 @@ irqreturn_t idxd_wq_thread(int irq, void *data) ...@@ -264,8 +261,6 @@ irqreturn_t idxd_wq_thread(int irq, void *data)
processed = idxd_desc_process(irq_entry); processed = idxd_desc_process(irq_entry);
idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id); idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id);
/* catch anything unprocessed after unmasking */
processed += idxd_desc_process(irq_entry);
if (processed == 0) if (processed == 0)
return IRQ_NONE; return IRQ_NONE;
......
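Aside: as the comment in the halt path explains, software-reset recovery cannot run in the interrupt thread, because the reset sequence itself needs command-completion interrupts to be serviced. A minimal sketch of the defer-to-workqueue pattern, with illustrative names:

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct hypot_dev {
	struct workqueue_struct *wq;
	struct work_struct work;
};

static void hypot_dev_reinit(struct work_struct *work)
{
	struct hypot_dev *d = container_of(work, struct hypot_dev, work);

	/* device reset and reconfiguration run here, in process context,
	 * where command-completion interrupts can still be taken */
	(void)d;
}

static void hypot_handle_halt(struct hypot_dev *d)
{
	INIT_WORK(&d->work, hypot_dev_reinit);
	queue_work(d->wq, &d->work);	/* d->wq created at setup, as in idxd_setup_internals() */
}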
...@@ -8,61 +8,61 @@ ...@@ -8,61 +8,61 @@
#include "idxd.h" #include "idxd.h"
#include "registers.h" #include "registers.h"
struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype) static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu)
{ {
struct idxd_desc *desc; struct idxd_desc *desc;
int idx;
desc = wq->descs[idx];
memset(desc->hw, 0, sizeof(struct dsa_hw_desc));
memset(desc->completion, 0, sizeof(struct dsa_completion_record));
desc->cpu = cpu;
return desc;
}
struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
{
int cpu, idx;
struct idxd_device *idxd = wq->idxd; struct idxd_device *idxd = wq->idxd;
DEFINE_SBQ_WAIT(wait);
struct sbq_wait_state *ws;
struct sbitmap_queue *sbq;
if (idxd->state != IDXD_DEV_ENABLED) if (idxd->state != IDXD_DEV_ENABLED)
return ERR_PTR(-EIO); return ERR_PTR(-EIO);
if (optype == IDXD_OP_BLOCK) sbq = &wq->sbq;
percpu_down_read(&wq->submit_lock); idx = sbitmap_queue_get(sbq, &cpu);
else if (!percpu_down_read_trylock(&wq->submit_lock)) if (idx < 0) {
return ERR_PTR(-EBUSY); if (optype == IDXD_OP_NONBLOCK)
if (!atomic_add_unless(&wq->dq_count, 1, wq->size)) {
int rc;
if (optype == IDXD_OP_NONBLOCK) {
percpu_up_read(&wq->submit_lock);
return ERR_PTR(-EAGAIN); return ERR_PTR(-EAGAIN);
}
percpu_up_read(&wq->submit_lock);
percpu_down_write(&wq->submit_lock);
rc = wait_event_interruptible(wq->submit_waitq,
atomic_add_unless(&wq->dq_count,
1, wq->size) ||
idxd->state != IDXD_DEV_ENABLED);
percpu_up_write(&wq->submit_lock);
if (rc < 0)
return ERR_PTR(-EINTR);
if (idxd->state != IDXD_DEV_ENABLED)
return ERR_PTR(-EIO);
} else { } else {
percpu_up_read(&wq->submit_lock); return __get_desc(wq, idx, cpu);
} }
idx = sbitmap_get(&wq->sbmap, 0, false); ws = &sbq->ws[0];
if (idx < 0) { for (;;) {
atomic_dec(&wq->dq_count); sbitmap_prepare_to_wait(sbq, ws, &wait, TASK_INTERRUPTIBLE);
return ERR_PTR(-EAGAIN); if (signal_pending_state(TASK_INTERRUPTIBLE, current))
break;
idx = sbitmap_queue_get(sbq, &cpu);
if (idx > 0)
break;
schedule();
} }
desc = wq->descs[idx]; sbitmap_finish_wait(sbq, ws, &wait);
memset(desc->hw, 0, sizeof(struct dsa_hw_desc)); if (idx < 0)
memset(desc->completion, 0, sizeof(struct dsa_completion_record)); return ERR_PTR(-EAGAIN);
return desc;
return __get_desc(wq, idx, cpu);
} }
void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc) void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
{ {
atomic_dec(&wq->dq_count); int cpu = desc->cpu;
sbitmap_clear_bit(&wq->sbmap, desc->id); desc->cpu = -1;
wake_up(&wq->submit_waitq); sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
} }
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
......
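Aside: the rewritten idxd_alloc_desc() above replaces the percpu rwsem plus atomic flow-control counter with a single sbitmap_queue. A condensed, hedged sketch of that get/wait/clear pattern, assuming the queue was initialised elsewhere (e.g. with sbitmap_queue_init_node()) and that sbitmap_queue_clear() is called on the free path to wake waiters:

#include <linux/sbitmap.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>

/* returns a free slot index, or a negative value if none could be obtained */
static int hypot_get_slot(struct sbitmap_queue *sbq, bool block, unsigned int *cpu)
{
	DEFINE_SBQ_WAIT(wait);
	struct sbq_wait_state *ws = &sbq->ws[0];
	int idx = sbitmap_queue_get(sbq, cpu);

	if (idx >= 0 || !block)
		return idx;

	for (;;) {
		/* park on the queue's wait state until sbitmap_queue_clear() wakes us */
		sbitmap_prepare_to_wait(sbq, ws, &wait, TASK_INTERRUPTIBLE);
		if (signal_pending_state(TASK_INTERRUPTIBLE, current))
			break;
		idx = sbitmap_queue_get(sbq, cpu);
		if (idx >= 0)
			break;
		schedule();
	}
	sbitmap_finish_wait(sbq, ws, &wait);
	return idx;
}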
...@@ -118,12 +118,11 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -118,12 +118,11 @@ static int idxd_config_bus_probe(struct device *dev)
if (!try_module_get(THIS_MODULE)) if (!try_module_get(THIS_MODULE))
return -ENXIO; return -ENXIO;
spin_lock_irqsave(&idxd->dev_lock, flags);
/* Perform IDXD configuration and enabling */ /* Perform IDXD configuration and enabling */
spin_lock_irqsave(&idxd->dev_lock, flags);
rc = idxd_device_config(idxd); rc = idxd_device_config(idxd);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
if (rc < 0) { if (rc < 0) {
spin_unlock_irqrestore(&idxd->dev_lock, flags);
module_put(THIS_MODULE); module_put(THIS_MODULE);
dev_warn(dev, "Device config failed: %d\n", rc); dev_warn(dev, "Device config failed: %d\n", rc);
return rc; return rc;
...@@ -132,18 +131,15 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -132,18 +131,15 @@ static int idxd_config_bus_probe(struct device *dev)
/* start device */ /* start device */
rc = idxd_device_enable(idxd); rc = idxd_device_enable(idxd);
if (rc < 0) { if (rc < 0) {
spin_unlock_irqrestore(&idxd->dev_lock, flags);
module_put(THIS_MODULE); module_put(THIS_MODULE);
dev_warn(dev, "Device enable failed: %d\n", rc); dev_warn(dev, "Device enable failed: %d\n", rc);
return rc; return rc;
} }
spin_unlock_irqrestore(&idxd->dev_lock, flags);
dev_info(dev, "Device %s enabled\n", dev_name(dev)); dev_info(dev, "Device %s enabled\n", dev_name(dev));
rc = idxd_register_dma_device(idxd); rc = idxd_register_dma_device(idxd);
if (rc < 0) { if (rc < 0) {
spin_unlock_irqrestore(&idxd->dev_lock, flags);
module_put(THIS_MODULE); module_put(THIS_MODULE);
dev_dbg(dev, "Failed to register dmaengine device\n"); dev_dbg(dev, "Failed to register dmaengine device\n");
return rc; return rc;
...@@ -188,8 +184,8 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -188,8 +184,8 @@ static int idxd_config_bus_probe(struct device *dev)
spin_lock_irqsave(&idxd->dev_lock, flags); spin_lock_irqsave(&idxd->dev_lock, flags);
rc = idxd_device_config(idxd); rc = idxd_device_config(idxd);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
if (rc < 0) { if (rc < 0) {
spin_unlock_irqrestore(&idxd->dev_lock, flags);
mutex_unlock(&wq->wq_lock); mutex_unlock(&wq->wq_lock);
dev_warn(dev, "Writing WQ %d config failed: %d\n", dev_warn(dev, "Writing WQ %d config failed: %d\n",
wq->id, rc); wq->id, rc);
...@@ -198,13 +194,11 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -198,13 +194,11 @@ static int idxd_config_bus_probe(struct device *dev)
rc = idxd_wq_enable(wq); rc = idxd_wq_enable(wq);
if (rc < 0) { if (rc < 0) {
spin_unlock_irqrestore(&idxd->dev_lock, flags);
mutex_unlock(&wq->wq_lock); mutex_unlock(&wq->wq_lock);
dev_warn(dev, "WQ %d enabling failed: %d\n", dev_warn(dev, "WQ %d enabling failed: %d\n",
wq->id, rc); wq->id, rc);
return rc; return rc;
} }
spin_unlock_irqrestore(&idxd->dev_lock, flags);
rc = idxd_wq_map_portal(wq); rc = idxd_wq_map_portal(wq);
if (rc < 0) { if (rc < 0) {
...@@ -212,7 +206,6 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -212,7 +206,6 @@ static int idxd_config_bus_probe(struct device *dev)
rc = idxd_wq_disable(wq); rc = idxd_wq_disable(wq);
if (rc < 0) if (rc < 0)
dev_warn(dev, "IDXD wq disable failed\n"); dev_warn(dev, "IDXD wq disable failed\n");
spin_unlock_irqrestore(&idxd->dev_lock, flags);
mutex_unlock(&wq->wq_lock); mutex_unlock(&wq->wq_lock);
return rc; return rc;
} }
...@@ -248,7 +241,6 @@ static void disable_wq(struct idxd_wq *wq) ...@@ -248,7 +241,6 @@ static void disable_wq(struct idxd_wq *wq)
{ {
struct idxd_device *idxd = wq->idxd; struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev; struct device *dev = &idxd->pdev->dev;
unsigned long flags;
int rc; int rc;
mutex_lock(&wq->wq_lock); mutex_lock(&wq->wq_lock);
...@@ -269,9 +261,8 @@ static void disable_wq(struct idxd_wq *wq) ...@@ -269,9 +261,8 @@ static void disable_wq(struct idxd_wq *wq)
idxd_wq_unmap_portal(wq); idxd_wq_unmap_portal(wq);
spin_lock_irqsave(&idxd->dev_lock, flags); idxd_wq_drain(wq);
rc = idxd_wq_disable(wq); rc = idxd_wq_disable(wq);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
idxd_wq_free_resources(wq); idxd_wq_free_resources(wq);
wq->client_count = 0; wq->client_count = 0;
...@@ -287,7 +278,6 @@ static void disable_wq(struct idxd_wq *wq) ...@@ -287,7 +278,6 @@ static void disable_wq(struct idxd_wq *wq)
static int idxd_config_bus_remove(struct device *dev) static int idxd_config_bus_remove(struct device *dev)
{ {
int rc; int rc;
unsigned long flags;
dev_dbg(dev, "%s called for %s\n", __func__, dev_name(dev)); dev_dbg(dev, "%s called for %s\n", __func__, dev_name(dev));
...@@ -313,14 +303,14 @@ static int idxd_config_bus_remove(struct device *dev) ...@@ -313,14 +303,14 @@ static int idxd_config_bus_remove(struct device *dev)
} }
idxd_unregister_dma_device(idxd); idxd_unregister_dma_device(idxd);
spin_lock_irqsave(&idxd->dev_lock, flags);
rc = idxd_device_disable(idxd); rc = idxd_device_disable(idxd);
for (i = 0; i < idxd->max_wqs; i++) { for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i]; struct idxd_wq *wq = &idxd->wqs[i];
mutex_lock(&wq->wq_lock);
idxd_wq_disable_cleanup(wq); idxd_wq_disable_cleanup(wq);
mutex_unlock(&wq->wq_lock);
} }
spin_unlock_irqrestore(&idxd->dev_lock, flags);
module_put(THIS_MODULE); module_put(THIS_MODULE);
if (rc < 0) if (rc < 0)
dev_warn(dev, "Device disable failed\n"); dev_warn(dev, "Device disable failed\n");
......
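Aside: the sysfs probe/remove changes above shrink the dev_lock critical sections to just the config register writes, and move per-wq cleanup under the wq mutex. A small sketch of the resulting locking shape (names hypothetical):

#include <linux/mutex.h>
#include <linux/spinlock.h>

struct hypot_dev {
	spinlock_t dev_lock;
	struct mutex wq_lock;
};

static int hypot_config(struct hypot_dev *d, int (*write_regs)(struct hypot_dev *))
{
	unsigned long flags;
	int rc;

	spin_lock_irqsave(&d->dev_lock, flags);
	rc = write_regs(d);	/* only the config register writes stay under dev_lock */
	spin_unlock_irqrestore(&d->dev_lock, flags);
	return rc;
}

static void hypot_wq_cleanup(struct hypot_dev *d, void (*cleanup)(struct hypot_dev *))
{
	mutex_lock(&d->wq_lock);	/* per-wq state transitions serialize on a mutex instead */
	cleanup(d);
	mutex_unlock(&d->wq_lock);
}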
...@@ -335,7 +335,7 @@ struct sdma_desc { ...@@ -335,7 +335,7 @@ struct sdma_desc {
* @sdma: pointer to the SDMA engine for this channel * @sdma: pointer to the SDMA engine for this channel
* @channel: the channel number, matches dmaengine chan_id + 1 * @channel: the channel number, matches dmaengine chan_id + 1
* @direction: transfer type. Needed for setting SDMA script * @direction: transfer type. Needed for setting SDMA script
* @slave_config Slave configuration * @slave_config: Slave configuration
* @peripheral_type: Peripheral type. Needed for setting SDMA script * @peripheral_type: Peripheral type. Needed for setting SDMA script
* @event_id0: aka dma request line * @event_id0: aka dma request line
* @event_id1: for channels that use 2 events * @event_id1: for channels that use 2 events
...@@ -354,8 +354,10 @@ struct sdma_desc { ...@@ -354,8 +354,10 @@ struct sdma_desc {
* @shp_addr: value for gReg[6] * @shp_addr: value for gReg[6]
* @per_addr: value for gReg[2] * @per_addr: value for gReg[2]
* @status: status of dma channel * @status: status of dma channel
* @context_loaded: ensure context is only loaded once
* @data: specific sdma interface structure * @data: specific sdma interface structure
* @bd_pool: dma_pool for bd * @bd_pool: dma_pool for bd
* @terminate_worker: used to call back into terminate work function
*/ */
struct sdma_channel { struct sdma_channel {
struct virt_dma_chan vc; struct virt_dma_chan vc;
......
...@@ -193,7 +193,7 @@ void ioat_issue_pending(struct dma_chan *c) ...@@ -193,7 +193,7 @@ void ioat_issue_pending(struct dma_chan *c)
/** /**
* ioat_update_pending - log pending descriptors * ioat_update_pending - log pending descriptors
* @ioat: ioat+ channel * @ioat_chan: ioat+ channel
* *
* Check if the number of unsubmitted descriptors has exceeded the * Check if the number of unsubmitted descriptors has exceeded the
* watermark. Called with prep_lock held * watermark. Called with prep_lock held
...@@ -457,7 +457,7 @@ ioat_alloc_ring(struct dma_chan *c, int order, gfp_t flags) ...@@ -457,7 +457,7 @@ ioat_alloc_ring(struct dma_chan *c, int order, gfp_t flags)
/** /**
* ioat_check_space_lock - verify space and grab ring producer lock * ioat_check_space_lock - verify space and grab ring producer lock
* @ioat: ioat,3 channel (ring) to operate on * @ioat_chan: ioat,3 channel (ring) to operate on
* @num_descs: allocation length * @num_descs: allocation length
*/ */
int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs) int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs)
...@@ -585,7 +585,8 @@ desc_get_errstat(struct ioatdma_chan *ioat_chan, struct ioat_ring_ent *desc) ...@@ -585,7 +585,8 @@ desc_get_errstat(struct ioatdma_chan *ioat_chan, struct ioat_ring_ent *desc)
/** /**
* __cleanup - reclaim used descriptors * __cleanup - reclaim used descriptors
* @ioat: channel (ring) to clean * @ioat_chan: channel (ring) to clean
* @phys_complete: zeroed (or not) completion address (from status)
*/ */
static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete) static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete)
{ {
......
...@@ -602,7 +602,7 @@ static void ioat_enumerate_channels(struct ioatdma_device *ioat_dma) ...@@ -602,7 +602,7 @@ static void ioat_enumerate_channels(struct ioatdma_device *ioat_dma)
/** /**
* ioat_free_chan_resources - release all the descriptors * ioat_free_chan_resources - release all the descriptors
* @chan: the channel to be cleaned * @c: the channel to be cleaned
*/ */
static void ioat_free_chan_resources(struct dma_chan *c) static void ioat_free_chan_resources(struct dma_chan *c)
{ {
......
...@@ -406,8 +406,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan); ...@@ -406,8 +406,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan);
/** /**
* iop_adma_alloc_chan_resources - returns the number of allocated descriptors * iop_adma_alloc_chan_resources - returns the number of allocated descriptors
* @chan - allocate descriptor resources for this channel * @chan: allocate descriptor resources for this channel
* @client - current client requesting the channel be ready for requests
* *
* Note: We keep the slots for 1 operation on iop_chan->chain at all times. To * Note: We keep the slots for 1 operation on iop_chan->chain at all times. To
* avoid deadlock, via async_xor, num_descs_in_pool must at a minimum be * avoid deadlock, via async_xor, num_descs_in_pool must at a minimum be
......
...@@ -107,10 +107,10 @@ enum mtk_hsdma_vdesc_flag { ...@@ -107,10 +107,10 @@ enum mtk_hsdma_vdesc_flag {
* struct mtk_hsdma_pdesc - This is the struct holding info describing physical * struct mtk_hsdma_pdesc - This is the struct holding info describing physical
* descriptor (PD) and its placement must be kept at * descriptor (PD) and its placement must be kept at
* 4-bytes alignment in little endian order. * 4-bytes alignment in little endian order.
* @desc[1-4]: The control pad used to indicate hardware how to * @desc1: | The control pad used to indicate hardware how to
* deal with the descriptor such as source and * @desc2: | deal with the descriptor such as source and
* destination address and data length. The maximum * @desc3: | destination address and data length. The maximum
* data length each pdesc can handle is 0x3f80 bytes * @desc4: | data length each pdesc can handle is 0x3f80 bytes
*/ */
struct mtk_hsdma_pdesc { struct mtk_hsdma_pdesc {
__le32 desc1; __le32 desc1;
......
...@@ -290,7 +290,7 @@ static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan) ...@@ -290,7 +290,7 @@ static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
spin_unlock_irqrestore(&pdev->phy_lock, flags); spin_unlock_irqrestore(&pdev->phy_lock, flags);
} }
/** /*
* start_pending_queue - transfer any pending transactions * start_pending_queue - transfer any pending transactions
* pending list ==> running list * pending list ==> running list
*/ */
...@@ -381,7 +381,7 @@ mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan) ...@@ -381,7 +381,7 @@ mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)
return desc; return desc;
} }
/** /*
* mmp_pdma_alloc_chan_resources - Allocate resources for DMA channel. * mmp_pdma_alloc_chan_resources - Allocate resources for DMA channel.
* *
* This function will create a dma pool for descriptor allocation. * This function will create a dma pool for descriptor allocation.
...@@ -854,7 +854,7 @@ static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan, ...@@ -854,7 +854,7 @@ static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
return ret; return ret;
} }
/** /*
* mmp_pdma_issue_pending - Issue the DMA start command * mmp_pdma_issue_pending - Issue the DMA start command
* pending list ==> running list * pending list ==> running list
*/ */
...@@ -1060,7 +1060,7 @@ static int mmp_pdma_probe(struct platform_device *op) ...@@ -1060,7 +1060,7 @@ static int mmp_pdma_probe(struct platform_device *op)
pdev->dma_channels = dma_channels; pdev->dma_channels = dma_channels;
for (i = 0; i < dma_channels; i++) { for (i = 0; i < dma_channels; i++) {
if (platform_get_irq(op, i) > 0) if (platform_get_irq_optional(op, i) > 0)
irq_num++; irq_num++;
} }
......
...@@ -682,7 +682,7 @@ static int mmp_tdma_probe(struct platform_device *pdev) ...@@ -682,7 +682,7 @@ static int mmp_tdma_probe(struct platform_device *pdev)
if (irq_num != chan_num) { if (irq_num != chan_num) {
irq = platform_get_irq(pdev, 0); irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(&pdev->dev, irq, ret = devm_request_irq(&pdev->dev, irq,
mmp_tdma_int_handler, 0, "tdma", tdev); mmp_tdma_int_handler, IRQF_SHARED, "tdma", tdev);
if (ret) if (ret)
return ret; return ret;
} }
......
...@@ -135,9 +135,11 @@ struct mv_xor_v2_descriptor { ...@@ -135,9 +135,11 @@ struct mv_xor_v2_descriptor {
/** /**
* struct mv_xor_v2_device - implements a xor device * struct mv_xor_v2_device - implements a xor device
* @lock: lock for the engine * @lock: lock for the engine
* @clk: reference to the 'core' clock
* @reg_clk: reference to the 'reg' clock
* @dma_base: memory mapped DMA register base * @dma_base: memory mapped DMA register base
* @glob_base: memory mapped global register base * @glob_base: memory mapped global register base
* @irq_tasklet: * @irq_tasklet: tasklet used for IRQ handling call-backs
* @free_sw_desc: linked list of free SW descriptors * @free_sw_desc: linked list of free SW descriptors
* @dmadev: dma device * @dmadev: dma device
* @dmachan: dma channel * @dmachan: dma channel
...@@ -146,6 +148,8 @@ struct mv_xor_v2_descriptor { ...@@ -146,6 +148,8 @@ struct mv_xor_v2_descriptor {
* @sw_desq: SW descriptors queue * @sw_desq: SW descriptors queue
* @desc_size: HW descriptor size * @desc_size: HW descriptor size
* @npendings: number of pending descriptors (for which tx_submit has * @npendings: number of pending descriptors (for which tx_submit has
* @hw_queue_idx: HW queue index
* @msi_desc: local interrupt descriptor information
* been called, but not yet issue_pending) * been called, but not yet issue_pending)
*/ */
struct mv_xor_v2_device { struct mv_xor_v2_device {
......
...@@ -144,6 +144,7 @@ struct nbpf_link_desc { ...@@ -144,6 +144,7 @@ struct nbpf_link_desc {
* @async_tx: dmaengine object * @async_tx: dmaengine object
* @user_wait: waiting for a user ack * @user_wait: waiting for a user ack
* @length: total transfer length * @length: total transfer length
* @chan: associated DMAC channel
* @sg: list of hardware descriptors, represented by struct nbpf_link_desc * @sg: list of hardware descriptors, represented by struct nbpf_link_desc
* @node: member in channel descriptor lists * @node: member in channel descriptor lists
*/ */
...@@ -174,13 +175,17 @@ struct nbpf_desc_page { ...@@ -174,13 +175,17 @@ struct nbpf_desc_page {
/** /**
* struct nbpf_channel - one DMAC channel * struct nbpf_channel - one DMAC channel
* @dma_chan: standard dmaengine channel object * @dma_chan: standard dmaengine channel object
* @tasklet: channel specific tasklet used for callbacks
* @base: register address base * @base: register address base
* @nbpf: DMAC * @nbpf: DMAC
* @name: IRQ name * @name: IRQ name
* @irq: IRQ number * @irq: IRQ number
* @slave_addr: address for slave DMA * @slave_src_addr: source address for slave DMA
* @slave_width:slave data size in bytes * @slave_src_width: source slave data size in bytes
* @slave_burst:maximum slave burst size in bytes * @slave_src_burst: maximum source slave burst size in bytes
* @slave_dst_addr: destination address for slave DMA
* @slave_dst_width: destination slave data size in bytes
* @slave_dst_burst: maximum destination slave burst size in bytes
* @terminal: DMA terminal, assigned to this channel * @terminal: DMA terminal, assigned to this channel
* @dmarq_cfg: DMA request line configuration - high / low, edge / level for NBPF_CHAN_CFG * @dmarq_cfg: DMA request line configuration - high / low, edge / level for NBPF_CHAN_CFG
* @flags: configuration flags from DT * @flags: configuration flags from DT
...@@ -191,6 +196,8 @@ struct nbpf_desc_page { ...@@ -191,6 +196,8 @@ struct nbpf_desc_page {
* @active: list of descriptors, scheduled for processing * @active: list of descriptors, scheduled for processing
* @done: list of completed descriptors, waiting post-processing * @done: list of completed descriptors, waiting post-processing
* @desc_page: list of additionally allocated descriptor pages - if any * @desc_page: list of additionally allocated descriptor pages - if any
* @running: linked descriptor of running transaction
* @paused: are translations on this channel paused?
*/ */
struct nbpf_channel { struct nbpf_channel {
struct dma_chan dma_chan; struct dma_chan dma_chan;
......
...@@ -46,7 +46,7 @@ static struct of_dma *of_dma_find_controller(struct of_phandle_args *dma_spec) ...@@ -46,7 +46,7 @@ static struct of_dma *of_dma_find_controller(struct of_phandle_args *dma_spec)
/** /**
* of_dma_router_xlate - translation function for router devices * of_dma_router_xlate - translation function for router devices
* @dma_spec: pointer to DMA specifier as found in the device tree * @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data (router information) * @ofdma: pointer to DMA controller data (router information)
* *
* The function creates new dma_spec to be passed to the router driver's * The function creates new dma_spec to be passed to the router driver's
* of_dma_route_allocate() function to prepare a dma_spec which will be used * of_dma_route_allocate() function to prepare a dma_spec which will be used
...@@ -92,7 +92,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec, ...@@ -92,7 +92,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
* @np: device node of DMA controller * @np: device node of DMA controller
* @of_dma_xlate: translation function which converts a phandle * @of_dma_xlate: translation function which converts a phandle
* arguments list into a dma_chan structure * arguments list into a dma_chan structure
* @data pointer to controller specific data to be used by * @data: pointer to controller specific data to be used by
* translation function * translation function
* *
* Returns 0 on success or appropriate errno value on error. * Returns 0 on success or appropriate errno value on error.
...@@ -295,7 +295,7 @@ EXPORT_SYMBOL_GPL(of_dma_request_slave_channel); ...@@ -295,7 +295,7 @@ EXPORT_SYMBOL_GPL(of_dma_request_slave_channel);
/** /**
* of_dma_simple_xlate - Simple DMA engine translation function * of_dma_simple_xlate - Simple DMA engine translation function
* @dma_spec: pointer to DMA specifier as found in the device tree * @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data * @ofdma: pointer to DMA controller data
* *
* A simple translation function for devices that use a 32-bit value for the * A simple translation function for devices that use a 32-bit value for the
* filter_param when calling the DMA engine dma_request_channel() function. * filter_param when calling the DMA engine dma_request_channel() function.
...@@ -323,7 +323,7 @@ EXPORT_SYMBOL_GPL(of_dma_simple_xlate); ...@@ -323,7 +323,7 @@ EXPORT_SYMBOL_GPL(of_dma_simple_xlate);
/** /**
* of_dma_xlate_by_chan_id - Translate dt property to DMA channel by channel id * of_dma_xlate_by_chan_id - Translate dt property to DMA channel by channel id
* @dma_spec: pointer to DMA specifier as found in the device tree * @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data * @ofdma: pointer to DMA controller data
* *
* This function can be used as the of xlate callback for DMA driver which wants * This function can be used as the of xlate callback for DMA driver which wants
* to match the channel based on the channel id. When using this xlate function * to match the channel based on the channel id. When using this xlate function
......
...@@ -120,30 +120,38 @@ ...@@ -120,30 +120,38 @@
#define BIT_FIELD(val, width, shift, newshift) \ #define BIT_FIELD(val, width, shift, newshift) \
((((val) >> (shift)) & ((BIT(width)) - 1)) << (newshift)) ((((val) >> (shift)) & ((BIT(width)) - 1)) << (newshift))
/* Frame count value is fixed as 1 */
#define FCNT_VAL 0x1
/** /**
* struct owl_dma_lli_hw - Hardware link list for dma transfer * owl_dmadesc_offsets - Describe DMA descriptor, hardware link
* @next_lli: physical address of the next link list * list for dma transfer
* @saddr: source physical address * @OWL_DMADESC_NEXT_LLI: physical address of the next link list
* @daddr: destination physical address * @OWL_DMADESC_SADDR: source physical address
* @flen: frame length * @OWL_DMADESC_DADDR: destination physical address
* @fcnt: frame count * @OWL_DMADESC_FLEN: frame length
* @src_stride: source stride * @OWL_DMADESC_SRC_STRIDE: source stride
* @dst_stride: destination stride * @OWL_DMADESC_DST_STRIDE: destination stride
* @ctrla: dma_mode and linklist ctrl config * @OWL_DMADESC_CTRLA: dma_mode and linklist ctrl config
* @ctrlb: interrupt config * @OWL_DMADESC_CTRLB: interrupt config
* @const_num: data for constant fill * @OWL_DMADESC_CONST_NUM: data for constant fill
*/ */
struct owl_dma_lli_hw { enum owl_dmadesc_offsets {
u32 next_lli; OWL_DMADESC_NEXT_LLI = 0,
u32 saddr; OWL_DMADESC_SADDR,
u32 daddr; OWL_DMADESC_DADDR,
u32 flen:20; OWL_DMADESC_FLEN,
u32 fcnt:12; OWL_DMADESC_SRC_STRIDE,
u32 src_stride; OWL_DMADESC_DST_STRIDE,
u32 dst_stride; OWL_DMADESC_CTRLA,
u32 ctrla; OWL_DMADESC_CTRLB,
u32 ctrlb; OWL_DMADESC_CONST_NUM,
u32 const_num; OWL_DMADESC_SIZE
};
enum owl_dma_id {
S900_DMA,
S700_DMA,
}; };
/** /**
...@@ -153,7 +161,7 @@ struct owl_dma_lli_hw { ...@@ -153,7 +161,7 @@ struct owl_dma_lli_hw {
* @node: node for txd's lli_list * @node: node for txd's lli_list
*/ */
struct owl_dma_lli { struct owl_dma_lli {
struct owl_dma_lli_hw hw; u32 hw[OWL_DMADESC_SIZE];
dma_addr_t phys; dma_addr_t phys;
struct list_head node; struct list_head node;
}; };
...@@ -210,6 +218,7 @@ struct owl_dma_vchan { ...@@ -210,6 +218,7 @@ struct owl_dma_vchan {
* @pchans: array of data for the physical channels * @pchans: array of data for the physical channels
* @nr_vchans: the number of physical channels * @nr_vchans: the number of physical channels
* @vchans: array of data for the physical channels * @vchans: array of data for the physical channels
* @devid: device id based on OWL SoC
*/ */
struct owl_dma { struct owl_dma {
struct dma_device dma; struct dma_device dma;
...@@ -224,6 +233,7 @@ struct owl_dma { ...@@ -224,6 +233,7 @@ struct owl_dma {
unsigned int nr_vchans; unsigned int nr_vchans;
struct owl_dma_vchan *vchans; struct owl_dma_vchan *vchans;
enum owl_dma_id devid;
}; };
static void pchan_update(struct owl_dma_pchan *pchan, u32 reg, static void pchan_update(struct owl_dma_pchan *pchan, u32 reg,
...@@ -313,11 +323,20 @@ static inline u32 llc_hw_ctrlb(u32 int_ctl) ...@@ -313,11 +323,20 @@ static inline u32 llc_hw_ctrlb(u32 int_ctl)
{ {
u32 ctl; u32 ctl;
/*
* Irrespective of the SoC, ctrlb value starts filling from
* bit 18.
*/
ctl = BIT_FIELD(int_ctl, 7, 0, 18); ctl = BIT_FIELD(int_ctl, 7, 0, 18);
return ctl; return ctl;
} }
static u32 llc_hw_flen(struct owl_dma_lli *lli)
{
return lli->hw[OWL_DMADESC_FLEN] & GENMASK(19, 0);
}
static void owl_dma_free_lli(struct owl_dma *od, static void owl_dma_free_lli(struct owl_dma *od,
struct owl_dma_lli *lli) struct owl_dma_lli *lli)
{ {
...@@ -349,8 +368,9 @@ static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd, ...@@ -349,8 +368,9 @@ static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
list_add_tail(&next->node, &txd->lli_list); list_add_tail(&next->node, &txd->lli_list);
if (prev) { if (prev) {
prev->hw.next_lli = next->phys; prev->hw[OWL_DMADESC_NEXT_LLI] = next->phys;
prev->hw.ctrla |= llc_hw_ctrla(OWL_DMA_MODE_LME, 0); prev->hw[OWL_DMADESC_CTRLA] |=
llc_hw_ctrla(OWL_DMA_MODE_LME, 0);
} }
return next; return next;
...@@ -363,8 +383,8 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan, ...@@ -363,8 +383,8 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
struct dma_slave_config *sconfig, struct dma_slave_config *sconfig,
bool is_cyclic) bool is_cyclic)
{ {
struct owl_dma_lli_hw *hw = &lli->hw; struct owl_dma *od = to_owl_dma(vchan->vc.chan.device);
u32 mode; u32 mode, ctrlb;
mode = OWL_DMA_MODE_PW(0); mode = OWL_DMA_MODE_PW(0);
...@@ -405,22 +425,40 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan, ...@@ -405,22 +425,40 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
return -EINVAL; return -EINVAL;
} }
hw->next_lli = 0; /* One link list by default */ lli->hw[OWL_DMADESC_CTRLA] = llc_hw_ctrla(mode,
hw->saddr = src; OWL_DMA_LLC_SAV_LOAD_NEXT |
hw->daddr = dst; OWL_DMA_LLC_DAV_LOAD_NEXT);
hw->fcnt = 1; /* Frame count fixed as 1 */
hw->flen = len; /* Max frame length is 1MB */
hw->src_stride = 0;
hw->dst_stride = 0;
hw->ctrla = llc_hw_ctrla(mode,
OWL_DMA_LLC_SAV_LOAD_NEXT |
OWL_DMA_LLC_DAV_LOAD_NEXT);
if (is_cyclic) if (is_cyclic)
hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_BLOCK); ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_BLOCK);
else else
hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK); ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
lli->hw[OWL_DMADESC_NEXT_LLI] = 0; /* One link list by default */
lli->hw[OWL_DMADESC_SADDR] = src;
lli->hw[OWL_DMADESC_DADDR] = dst;
lli->hw[OWL_DMADESC_SRC_STRIDE] = 0;
lli->hw[OWL_DMADESC_DST_STRIDE] = 0;
if (od->devid == S700_DMA) {
/* Max frame length is 1MB */
lli->hw[OWL_DMADESC_FLEN] = len;
/*
* On S700, word starts from offset 0x1C is shared between
* frame count and ctrlb, where first 12 bits are for frame
* count and rest of 20 bits are for ctrlb.
*/
lli->hw[OWL_DMADESC_CTRLB] = FCNT_VAL | ctrlb;
} else {
/*
* On S900, word starts from offset 0xC is shared between
* frame length (max frame length is 1MB) and frame count,
* where first 20 bits are for frame length and rest of
* 12 bits are for frame count.
*/
lli->hw[OWL_DMADESC_FLEN] = len | FCNT_VAL << 20;
lli->hw[OWL_DMADESC_CTRLB] = ctrlb;
}
return 0; return 0;
} }
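Aside: to make the two layouts described in the comments above concrete, here is a standalone, illustrative example of how the 20-bit frame length and the fixed frame count of 1 end up packed per SoC; it mirrors the assignments in owl_dma_cfg_lli() and is not driver code.

#include <linux/bits.h>
#include <linux/types.h>

#define HYPOT_FCNT_VAL		0x1
#define HYPOT_FLEN_MASK		GENMASK(19, 0)	/* 20-bit frame length */

static void hypot_pack_flen(bool is_s700, u32 len, u32 ctrlb,
			    u32 *flen_word, u32 *ctrlb_word)
{
	if (is_s700) {
		/* S700: frame count shares the ctrlb word, frame length is alone */
		*flen_word  = len & HYPOT_FLEN_MASK;
		*ctrlb_word = HYPOT_FCNT_VAL | ctrlb;
	} else {
		/* S900: frame count sits above the 20-bit frame length */
		*flen_word  = (len & HYPOT_FLEN_MASK) | (HYPOT_FCNT_VAL << 20);
		*ctrlb_word = ctrlb;
	}
}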
...@@ -582,7 +620,7 @@ static irqreturn_t owl_dma_interrupt(int irq, void *dev_id) ...@@ -582,7 +620,7 @@ static irqreturn_t owl_dma_interrupt(int irq, void *dev_id)
global_irq_pending = dma_readl(od, OWL_DMA_IRQ_PD0); global_irq_pending = dma_readl(od, OWL_DMA_IRQ_PD0);
if (chan_irq_pending && !(global_irq_pending & BIT(i))) { if (chan_irq_pending && !(global_irq_pending & BIT(i))) {
dev_dbg(od->dma.dev, dev_dbg(od->dma.dev,
"global and channel IRQ pending match err\n"); "global and channel IRQ pending match err\n");
...@@ -752,7 +790,7 @@ static u32 owl_dma_getbytes_chan(struct owl_dma_vchan *vchan) ...@@ -752,7 +790,7 @@ static u32 owl_dma_getbytes_chan(struct owl_dma_vchan *vchan)
/* Start from the next active node */ /* Start from the next active node */
if (lli->phys == next_lli_phy) { if (lli->phys == next_lli_phy) {
list_for_each_entry(lli, &txd->lli_list, node) list_for_each_entry(lli, &txd->lli_list, node)
bytes += lli->hw.flen; bytes += llc_hw_flen(lli);
break; break;
} }
} }
...@@ -783,7 +821,7 @@ static enum dma_status owl_dma_tx_status(struct dma_chan *chan, ...@@ -783,7 +821,7 @@ static enum dma_status owl_dma_tx_status(struct dma_chan *chan,
if (vd) { if (vd) {
txd = to_owl_txd(&vd->tx); txd = to_owl_txd(&vd->tx);
list_for_each_entry(lli, &txd->lli_list, node) list_for_each_entry(lli, &txd->lli_list, node)
bytes += lli->hw.flen; bytes += llc_hw_flen(lli);
} else { } else {
bytes = owl_dma_getbytes_chan(vchan); bytes = owl_dma_getbytes_chan(vchan);
} }
...@@ -1040,6 +1078,13 @@ static struct dma_chan *owl_dma_of_xlate(struct of_phandle_args *dma_spec, ...@@ -1040,6 +1078,13 @@ static struct dma_chan *owl_dma_of_xlate(struct of_phandle_args *dma_spec,
return chan; return chan;
} }
static const struct of_device_id owl_dma_match[] = {
{ .compatible = "actions,s900-dma", .data = (void *)S900_DMA,},
{ .compatible = "actions,s700-dma", .data = (void *)S700_DMA,},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, owl_dma_match);
static int owl_dma_probe(struct platform_device *pdev) static int owl_dma_probe(struct platform_device *pdev)
{ {
struct device_node *np = pdev->dev.of_node; struct device_node *np = pdev->dev.of_node;
...@@ -1069,6 +1114,8 @@ static int owl_dma_probe(struct platform_device *pdev) ...@@ -1069,6 +1114,8 @@ static int owl_dma_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "dma-channels %d, dma-requests %d\n", dev_info(&pdev->dev, "dma-channels %d, dma-requests %d\n",
nr_channels, nr_requests); nr_channels, nr_requests);
od->devid = (enum owl_dma_id)of_device_get_match_data(&pdev->dev);
od->nr_pchans = nr_channels; od->nr_pchans = nr_channels;
od->nr_vchans = nr_requests; od->nr_vchans = nr_requests;
...@@ -1201,12 +1248,6 @@ static int owl_dma_remove(struct platform_device *pdev) ...@@ -1201,12 +1248,6 @@ static int owl_dma_remove(struct platform_device *pdev)
return 0; return 0;
} }
static const struct of_device_id owl_dma_match[] = {
{ .compatible = "actions,s900-dma", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, owl_dma_match);
static struct platform_driver owl_dma_driver = { static struct platform_driver owl_dma_driver = {
.probe = owl_dma_probe, .probe = owl_dma_probe,
.remove = owl_dma_remove, .remove = owl_dma_remove,
......
...@@ -33,7 +33,8 @@ ...@@ -33,7 +33,8 @@
#define PL330_MAX_PERI 32 #define PL330_MAX_PERI 32
#define PL330_MAX_BURST 16 #define PL330_MAX_BURST 16
#define PL330_QUIRK_BROKEN_NO_FLUSHP BIT(0) #define PL330_QUIRK_BROKEN_NO_FLUSHP BIT(0)
#define PL330_QUIRK_PERIPH_BURST BIT(1)
enum pl330_cachectrl { enum pl330_cachectrl {
CCTRL0, /* Noncacheable and nonbufferable */ CCTRL0, /* Noncacheable and nonbufferable */
...@@ -284,7 +285,7 @@ struct pl330_config { ...@@ -284,7 +285,7 @@ struct pl330_config {
u32 irq_ns; u32 irq_ns;
}; };
/** /*
* Request Configuration. * Request Configuration.
* The PL330 core does not modify this and uses the last * The PL330 core does not modify this and uses the last
* working configuration if the request doesn't provide any. * working configuration if the request doesn't provide any.
...@@ -509,6 +510,10 @@ static struct pl330_of_quirks { ...@@ -509,6 +510,10 @@ static struct pl330_of_quirks {
{ {
.quirk = "arm,pl330-broken-no-flushp", .quirk = "arm,pl330-broken-no-flushp",
.id = PL330_QUIRK_BROKEN_NO_FLUSHP, .id = PL330_QUIRK_BROKEN_NO_FLUSHP,
},
{
.quirk = "arm,pl330-periph-burst",
.id = PL330_QUIRK_PERIPH_BURST,
} }
}; };
...@@ -885,6 +890,12 @@ static inline void _execute_DBGINSN(struct pl330_thread *thrd, ...@@ -885,6 +890,12 @@ static inline void _execute_DBGINSN(struct pl330_thread *thrd,
void __iomem *regs = thrd->dmac->base; void __iomem *regs = thrd->dmac->base;
u32 val; u32 val;
/* If timed out due to halted state-machine */
if (_until_dmac_idle(thrd)) {
dev_err(thrd->dmac->ddma.dev, "DMAC halted!\n");
return;
}
val = (insn[0] << 16) | (insn[1] << 24); val = (insn[0] << 16) | (insn[1] << 24);
if (!as_manager) { if (!as_manager) {
val |= (1 << 0); val |= (1 << 0);
...@@ -895,12 +906,6 @@ static inline void _execute_DBGINSN(struct pl330_thread *thrd, ...@@ -895,12 +906,6 @@ static inline void _execute_DBGINSN(struct pl330_thread *thrd,
val = le32_to_cpu(*((__le32 *)&insn[2])); val = le32_to_cpu(*((__le32 *)&insn[2]));
writel(val, regs + DBGINST1); writel(val, regs + DBGINST1);
/* If timed out due to halted state-machine */
if (_until_dmac_idle(thrd)) {
dev_err(thrd->dmac->ddma.dev, "DMAC halted!\n");
return;
}
/* Get going */ /* Get going */
writel(0, regs + DBGCMD); writel(0, regs + DBGCMD);
} }
...@@ -1183,9 +1188,6 @@ static inline int _ldst_peripheral(struct pl330_dmac *pl330, ...@@ -1183,9 +1188,6 @@ static inline int _ldst_peripheral(struct pl330_dmac *pl330,
{ {
int off = 0; int off = 0;
if (pl330->quirks & PL330_QUIRK_BROKEN_NO_FLUSHP)
cond = BURST;
/* /*
* do FLUSHP at beginning to clear any stale dma requests before the * do FLUSHP at beginning to clear any stale dma requests before the
* first WFP. * first WFP.
...@@ -1209,6 +1211,9 @@ static int _bursts(struct pl330_dmac *pl330, unsigned dry_run, u8 buf[], ...@@ -1209,6 +1211,9 @@ static int _bursts(struct pl330_dmac *pl330, unsigned dry_run, u8 buf[],
int off = 0; int off = 0;
enum pl330_cond cond = BRST_LEN(pxs->ccr) > 1 ? BURST : SINGLE; enum pl330_cond cond = BRST_LEN(pxs->ccr) > 1 ? BURST : SINGLE;
if (pl330->quirks & PL330_QUIRK_PERIPH_BURST)
cond = BURST;
switch (pxs->desc->rqtype) { switch (pxs->desc->rqtype) {
case DMA_MEM_TO_DEV: case DMA_MEM_TO_DEV:
/* fall through */ /* fall through */
...@@ -1231,8 +1236,9 @@ static int _bursts(struct pl330_dmac *pl330, unsigned dry_run, u8 buf[], ...@@ -1231,8 +1236,9 @@ static int _bursts(struct pl330_dmac *pl330, unsigned dry_run, u8 buf[],
} }
/* /*
* transfer dregs with single transfers to peripheral, or a reduced size burst * only the unaligned burst transfers have the dregs.
* for mem-to-mem. * so, still transfer dregs with a reduced size burst
* for mem-to-mem, mem-to-dev or dev-to-mem.
*/ */
static int _dregs(struct pl330_dmac *pl330, unsigned int dry_run, u8 buf[], static int _dregs(struct pl330_dmac *pl330, unsigned int dry_run, u8 buf[],
const struct _xfer_spec *pxs, int transfer_length) const struct _xfer_spec *pxs, int transfer_length)
...@@ -1243,22 +1249,31 @@ static int _dregs(struct pl330_dmac *pl330, unsigned int dry_run, u8 buf[], ...@@ -1243,22 +1249,31 @@ static int _dregs(struct pl330_dmac *pl330, unsigned int dry_run, u8 buf[],
if (transfer_length == 0) if (transfer_length == 0)
return off; return off;
/*
* dregs_len = (total bytes - BURST_TO_BYTE(bursts, ccr)) /
* BRST_SIZE(ccr)
* the dregs len must be smaller than burst len,
* so, for higher efficiency, we can modify CCR
* to use a reduced size burst len for the dregs.
*/
dregs_ccr = pxs->ccr;
dregs_ccr &= ~((0xf << CC_SRCBRSTLEN_SHFT) |
(0xf << CC_DSTBRSTLEN_SHFT));
dregs_ccr |= (((transfer_length - 1) & 0xf) <<
CC_SRCBRSTLEN_SHFT);
dregs_ccr |= (((transfer_length - 1) & 0xf) <<
CC_DSTBRSTLEN_SHFT);
switch (pxs->desc->rqtype) { switch (pxs->desc->rqtype) {
case DMA_MEM_TO_DEV: case DMA_MEM_TO_DEV:
/* fall through */ /* fall through */
case DMA_DEV_TO_MEM: case DMA_DEV_TO_MEM:
off += _ldst_peripheral(pl330, dry_run, &buf[off], pxs, off += _emit_MOV(dry_run, &buf[off], CCR, dregs_ccr);
transfer_length, SINGLE); off += _ldst_peripheral(pl330, dry_run, &buf[off], pxs, 1,
BURST);
break; break;
case DMA_MEM_TO_MEM: case DMA_MEM_TO_MEM:
dregs_ccr = pxs->ccr;
dregs_ccr &= ~((0xf << CC_SRCBRSTLEN_SHFT) |
(0xf << CC_DSTBRSTLEN_SHFT));
dregs_ccr |= (((transfer_length - 1) & 0xf) <<
CC_SRCBRSTLEN_SHFT);
dregs_ccr |= (((transfer_length - 1) & 0xf) <<
CC_DSTBRSTLEN_SHFT);
off += _emit_MOV(dry_run, &buf[off], CCR, dregs_ccr); off += _emit_MOV(dry_run, &buf[off], CCR, dregs_ccr);
off += _ldst_memtomem(dry_run, &buf[off], pxs, 1); off += _ldst_memtomem(dry_run, &buf[off], pxs, 1);
break; break;
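Aside: the new comment above describes rewriting the CCR so the leftover (dregs) beats go out as one reduced-size burst rather than singles. A hypothetical helper spelling out just that bit manipulation; the shift defines mirror the ones used in pl330.c.

#include <linux/types.h>

#define CC_SRCBRSTLEN_SHFT	4
#define CC_DSTBRSTLEN_SHFT	18

static u32 hypot_dregs_ccr(u32 ccr, unsigned int dregs_len)
{
	/* clear both burst-length fields, then program them to dregs_len - 1
	 * so the remaining beats are moved in a single reduced-size burst */
	ccr &= ~((0xf << CC_SRCBRSTLEN_SHFT) | (0xf << CC_DSTBRSTLEN_SHFT));
	ccr |= ((dregs_len - 1) & 0xf) << CC_SRCBRSTLEN_SHFT;
	ccr |= ((dregs_len - 1) & 0xf) << CC_DSTBRSTLEN_SHFT;
	return ccr;
}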
...@@ -2221,9 +2236,7 @@ static bool pl330_prep_slave_fifo(struct dma_pl330_chan *pch, ...@@ -2221,9 +2236,7 @@ static bool pl330_prep_slave_fifo(struct dma_pl330_chan *pch,
static int fixup_burst_len(int max_burst_len, int quirks) static int fixup_burst_len(int max_burst_len, int quirks)
{ {
if (quirks & PL330_QUIRK_BROKEN_NO_FLUSHP) if (max_burst_len > PL330_MAX_BURST)
return 1;
else if (max_burst_len > PL330_MAX_BURST)
return PL330_MAX_BURST; return PL330_MAX_BURST;
else if (max_burst_len < 1) else if (max_burst_len < 1)
return 1; return 1;
...@@ -3128,8 +3141,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -3128,8 +3141,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
pd->dst_addr_widths = PL330_DMA_BUSWIDTHS; pd->dst_addr_widths = PL330_DMA_BUSWIDTHS;
pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
pd->max_burst = ((pl330->quirks & PL330_QUIRK_BROKEN_NO_FLUSHP) ? pd->max_burst = PL330_MAX_BURST;
1 : PL330_MAX_BURST);
ret = dma_async_device_register(pd); ret = dma_async_device_register(pd);
if (ret) { if (ret) {
......
...@@ -381,6 +381,7 @@ struct d40_desc { ...@@ -381,6 +381,7 @@ struct d40_desc {
* struct d40_lcla_pool - LCLA pool settings and data. * struct d40_lcla_pool - LCLA pool settings and data.
* *
* @base: The virtual address of LCLA. 18 bit aligned. * @base: The virtual address of LCLA. 18 bit aligned.
* @dma_addr: DMA address, if mapped
* @base_unaligned: The orignal kmalloc pointer, if kmalloc is used. * @base_unaligned: The orignal kmalloc pointer, if kmalloc is used.
* This pointer is only there for clean-up on error. * This pointer is only there for clean-up on error.
* @pages: The number of pages needed for all physical channels. * @pages: The number of pages needed for all physical channels.
...@@ -534,6 +535,7 @@ struct d40_gen_dmac { ...@@ -534,6 +535,7 @@ struct d40_gen_dmac {
* mode" allocated physical channels. * mode" allocated physical channels.
* @num_log_chans: The number of logical channels. Calculated from * @num_log_chans: The number of logical channels. Calculated from
* num_phy_chans. * num_phy_chans.
* @dma_parms: DMA parameters for the channel
* @dma_both: dma_device channels that can do both memcpy and slave transfers. * @dma_both: dma_device channels that can do both memcpy and slave transfers.
* @dma_slave: dma_device channels that can do only do slave transfers. * @dma_slave: dma_device channels that can do only do slave transfers.
* @dma_memcpy: dma_device channels that can do only do memcpy transfers. * @dma_memcpy: dma_device channels that can do only do memcpy transfers.
......
...@@ -307,7 +307,7 @@ static void set_pchan_interrupt(struct sun4i_dma_dev *priv, ...@@ -307,7 +307,7 @@ static void set_pchan_interrupt(struct sun4i_dma_dev *priv,
spin_unlock_irqrestore(&priv->lock, flags); spin_unlock_irqrestore(&priv->lock, flags);
} }
/** /*
* Execute pending operations on a vchan * Execute pending operations on a vchan
* *
* When given a vchan, this function will try to acquire a suitable * When given a vchan, this function will try to acquire a suitable
...@@ -419,7 +419,7 @@ static int sanitize_config(struct dma_slave_config *sconfig, ...@@ -419,7 +419,7 @@ static int sanitize_config(struct dma_slave_config *sconfig,
return 0; return 0;
} }
/** /*
* Generate a promise, to be used in a normal DMA contract. * Generate a promise, to be used in a normal DMA contract.
* *
* A NDMA promise contains all the information required to program the * A NDMA promise contains all the information required to program the
...@@ -486,7 +486,7 @@ generate_ndma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest, ...@@ -486,7 +486,7 @@ generate_ndma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest,
return NULL; return NULL;
} }
/** /*
* Generate a promise, to be used in a dedicated DMA contract. * Generate a promise, to be used in a dedicated DMA contract.
* *
* A DDMA promise contains all the information required to program the * A DDMA promise contains all the information required to program the
...@@ -543,7 +543,7 @@ generate_ddma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest, ...@@ -543,7 +543,7 @@ generate_ddma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest,
return NULL; return NULL;
} }
/** /*
* Generate a contract * Generate a contract
* *
* Contracts function as DMA descriptors. As our hardware does not support * Contracts function as DMA descriptors. As our hardware does not support
...@@ -565,7 +565,7 @@ static struct sun4i_dma_contract *generate_dma_contract(void) ...@@ -565,7 +565,7 @@ static struct sun4i_dma_contract *generate_dma_contract(void)
return contract; return contract;
} }
/** /*
* Get next promise on a cyclic transfer * Get next promise on a cyclic transfer
* *
* Cyclic contracts contain a series of promises which are executed on a * Cyclic contracts contain a series of promises which are executed on a
...@@ -589,7 +589,7 @@ get_next_cyclic_promise(struct sun4i_dma_contract *contract) ...@@ -589,7 +589,7 @@ get_next_cyclic_promise(struct sun4i_dma_contract *contract)
return promise; return promise;
} }
/** /*
* Free a contract and all its associated promises * Free a contract and all its associated promises
*/ */
static void sun4i_dma_free_contract(struct virt_dma_desc *vd) static void sun4i_dma_free_contract(struct virt_dma_desc *vd)
......
...@@ -186,17 +186,17 @@ static void k3_udma_glue_dump_tx_rt_chn(struct k3_udma_glue_tx_channel *chn, ...@@ -186,17 +186,17 @@ static void k3_udma_glue_dump_tx_rt_chn(struct k3_udma_glue_tx_channel *chn,
struct device *dev = chn->common.dev; struct device *dev = chn->common.dev;
dev_dbg(dev, "=== dump ===> %s\n", mark); dev_dbg(dev, "=== dump ===> %s\n", mark);
dev_dbg(dev, "0x%08X: %08X\n", UDMA_TCHAN_RT_CTL_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_CTL_REG,
xudma_tchanrt_read(chn->udma_tchanx, UDMA_TCHAN_RT_CTL_REG)); xudma_tchanrt_read(chn->udma_tchanx, UDMA_CHAN_RT_CTL_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_TCHAN_RT_PEER_RT_EN_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_PEER_RT_EN_REG,
xudma_tchanrt_read(chn->udma_tchanx, xudma_tchanrt_read(chn->udma_tchanx,
UDMA_TCHAN_RT_PEER_RT_EN_REG)); UDMA_CHAN_RT_PEER_RT_EN_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_TCHAN_RT_PCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_PCNT_REG,
xudma_tchanrt_read(chn->udma_tchanx, UDMA_TCHAN_RT_PCNT_REG)); xudma_tchanrt_read(chn->udma_tchanx, UDMA_CHAN_RT_PCNT_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_TCHAN_RT_BCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_BCNT_REG,
xudma_tchanrt_read(chn->udma_tchanx, UDMA_TCHAN_RT_BCNT_REG)); xudma_tchanrt_read(chn->udma_tchanx, UDMA_CHAN_RT_BCNT_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_TCHAN_RT_SBCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_SBCNT_REG,
xudma_tchanrt_read(chn->udma_tchanx, UDMA_TCHAN_RT_SBCNT_REG)); xudma_tchanrt_read(chn->udma_tchanx, UDMA_CHAN_RT_SBCNT_REG));
} }
static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn) static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
...@@ -381,14 +381,13 @@ int k3_udma_glue_enable_tx_chn(struct k3_udma_glue_tx_channel *tx_chn) ...@@ -381,14 +381,13 @@ int k3_udma_glue_enable_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
u32 txrt_ctl; u32 txrt_ctl;
txrt_ctl = UDMA_PEER_RT_EN_ENABLE; txrt_ctl = UDMA_PEER_RT_EN_ENABLE;
xudma_tchanrt_write(tx_chn->udma_tchanx, xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_CHAN_RT_PEER_RT_EN_REG,
UDMA_TCHAN_RT_PEER_RT_EN_REG,
txrt_ctl); txrt_ctl);
txrt_ctl = xudma_tchanrt_read(tx_chn->udma_tchanx, txrt_ctl = xudma_tchanrt_read(tx_chn->udma_tchanx,
UDMA_TCHAN_RT_CTL_REG); UDMA_CHAN_RT_CTL_REG);
txrt_ctl |= UDMA_CHAN_RT_CTL_EN; txrt_ctl |= UDMA_CHAN_RT_CTL_EN;
xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_TCHAN_RT_CTL_REG, xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_CHAN_RT_CTL_REG,
txrt_ctl); txrt_ctl);
k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn en"); k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn en");
...@@ -400,10 +399,10 @@ void k3_udma_glue_disable_tx_chn(struct k3_udma_glue_tx_channel *tx_chn) ...@@ -400,10 +399,10 @@ void k3_udma_glue_disable_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
{ {
k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn dis1"); k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn dis1");
xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_TCHAN_RT_CTL_REG, 0); xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_CHAN_RT_CTL_REG, 0);
xudma_tchanrt_write(tx_chn->udma_tchanx, xudma_tchanrt_write(tx_chn->udma_tchanx,
UDMA_TCHAN_RT_PEER_RT_EN_REG, 0); UDMA_CHAN_RT_PEER_RT_EN_REG, 0);
k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn dis2"); k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn dis2");
} }
EXPORT_SYMBOL_GPL(k3_udma_glue_disable_tx_chn); EXPORT_SYMBOL_GPL(k3_udma_glue_disable_tx_chn);
...@@ -416,14 +415,14 @@ void k3_udma_glue_tdown_tx_chn(struct k3_udma_glue_tx_channel *tx_chn, ...@@ -416,14 +415,14 @@ void k3_udma_glue_tdown_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn tdown1"); k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn tdown1");
xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_TCHAN_RT_CTL_REG, xudma_tchanrt_write(tx_chn->udma_tchanx, UDMA_CHAN_RT_CTL_REG,
UDMA_CHAN_RT_CTL_EN | UDMA_CHAN_RT_CTL_TDOWN); UDMA_CHAN_RT_CTL_EN | UDMA_CHAN_RT_CTL_TDOWN);
val = xudma_tchanrt_read(tx_chn->udma_tchanx, UDMA_TCHAN_RT_CTL_REG); val = xudma_tchanrt_read(tx_chn->udma_tchanx, UDMA_CHAN_RT_CTL_REG);
while (sync && (val & UDMA_CHAN_RT_CTL_EN)) { while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
val = xudma_tchanrt_read(tx_chn->udma_tchanx, val = xudma_tchanrt_read(tx_chn->udma_tchanx,
UDMA_TCHAN_RT_CTL_REG); UDMA_CHAN_RT_CTL_REG);
udelay(1); udelay(1);
if (i > K3_UDMAX_TDOWN_TIMEOUT_US) { if (i > K3_UDMAX_TDOWN_TIMEOUT_US) {
dev_err(tx_chn->common.dev, "TX tdown timeout\n"); dev_err(tx_chn->common.dev, "TX tdown timeout\n");
...@@ -433,7 +432,7 @@ void k3_udma_glue_tdown_tx_chn(struct k3_udma_glue_tx_channel *tx_chn, ...@@ -433,7 +432,7 @@ void k3_udma_glue_tdown_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
} }
val = xudma_tchanrt_read(tx_chn->udma_tchanx, val = xudma_tchanrt_read(tx_chn->udma_tchanx,
UDMA_TCHAN_RT_PEER_RT_EN_REG); UDMA_CHAN_RT_PEER_RT_EN_REG);
if (sync && (val & UDMA_PEER_RT_EN_ENABLE)) if (sync && (val & UDMA_PEER_RT_EN_ENABLE))
dev_err(tx_chn->common.dev, "TX tdown peer not stopped\n"); dev_err(tx_chn->common.dev, "TX tdown peer not stopped\n");
k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn tdown2"); k3_udma_glue_dump_tx_rt_chn(tx_chn, "txchn tdown2");
...@@ -700,17 +699,17 @@ static void k3_udma_glue_dump_rx_rt_chn(struct k3_udma_glue_rx_channel *chn, ...@@ -700,17 +699,17 @@ static void k3_udma_glue_dump_rx_rt_chn(struct k3_udma_glue_rx_channel *chn,
dev_dbg(dev, "=== dump ===> %s\n", mark); dev_dbg(dev, "=== dump ===> %s\n", mark);
dev_dbg(dev, "0x%08X: %08X\n", UDMA_RCHAN_RT_CTL_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_CTL_REG,
xudma_rchanrt_read(chn->udma_rchanx, UDMA_RCHAN_RT_CTL_REG)); xudma_rchanrt_read(chn->udma_rchanx, UDMA_CHAN_RT_CTL_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_RCHAN_RT_PEER_RT_EN_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_PEER_RT_EN_REG,
xudma_rchanrt_read(chn->udma_rchanx, xudma_rchanrt_read(chn->udma_rchanx,
UDMA_RCHAN_RT_PEER_RT_EN_REG)); UDMA_CHAN_RT_PEER_RT_EN_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_RCHAN_RT_PCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_PCNT_REG,
xudma_rchanrt_read(chn->udma_rchanx, UDMA_RCHAN_RT_PCNT_REG)); xudma_rchanrt_read(chn->udma_rchanx, UDMA_CHAN_RT_PCNT_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_RCHAN_RT_BCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_BCNT_REG,
xudma_rchanrt_read(chn->udma_rchanx, UDMA_RCHAN_RT_BCNT_REG)); xudma_rchanrt_read(chn->udma_rchanx, UDMA_CHAN_RT_BCNT_REG));
dev_dbg(dev, "0x%08X: %08X\n", UDMA_RCHAN_RT_SBCNT_REG, dev_dbg(dev, "0x%08X: %08X\n", UDMA_CHAN_RT_SBCNT_REG,
xudma_rchanrt_read(chn->udma_rchanx, UDMA_RCHAN_RT_SBCNT_REG)); xudma_rchanrt_read(chn->udma_rchanx, UDMA_CHAN_RT_SBCNT_REG));
} }
static int static int
...@@ -1068,13 +1067,12 @@ int k3_udma_glue_enable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) ...@@ -1068,13 +1067,12 @@ int k3_udma_glue_enable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
return -EINVAL; return -EINVAL;
rxrt_ctl = xudma_rchanrt_read(rx_chn->udma_rchanx, rxrt_ctl = xudma_rchanrt_read(rx_chn->udma_rchanx,
UDMA_RCHAN_RT_CTL_REG); UDMA_CHAN_RT_CTL_REG);
rxrt_ctl |= UDMA_CHAN_RT_CTL_EN; rxrt_ctl |= UDMA_CHAN_RT_CTL_EN;
xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_RCHAN_RT_CTL_REG, xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_CHAN_RT_CTL_REG,
rxrt_ctl); rxrt_ctl);
xudma_rchanrt_write(rx_chn->udma_rchanx, xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_CHAN_RT_PEER_RT_EN_REG,
UDMA_RCHAN_RT_PEER_RT_EN_REG,
UDMA_PEER_RT_EN_ENABLE); UDMA_PEER_RT_EN_ENABLE);
k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt en"); k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt en");
...@@ -1087,9 +1085,8 @@ void k3_udma_glue_disable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) ...@@ -1087,9 +1085,8 @@ void k3_udma_glue_disable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt dis1"); k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt dis1");
xudma_rchanrt_write(rx_chn->udma_rchanx, xudma_rchanrt_write(rx_chn->udma_rchanx,
UDMA_RCHAN_RT_PEER_RT_EN_REG, UDMA_CHAN_RT_PEER_RT_EN_REG, 0);
0); xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_CHAN_RT_CTL_REG, 0);
xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_RCHAN_RT_CTL_REG, 0);
k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt dis2"); k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt dis2");
} }
...@@ -1106,14 +1103,14 @@ void k3_udma_glue_tdown_rx_chn(struct k3_udma_glue_rx_channel *rx_chn, ...@@ -1106,14 +1103,14 @@ void k3_udma_glue_tdown_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt tdown1"); k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt tdown1");
xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_RCHAN_RT_PEER_RT_EN_REG, xudma_rchanrt_write(rx_chn->udma_rchanx, UDMA_CHAN_RT_PEER_RT_EN_REG,
UDMA_PEER_RT_EN_ENABLE | UDMA_PEER_RT_EN_TEARDOWN); UDMA_PEER_RT_EN_ENABLE | UDMA_PEER_RT_EN_TEARDOWN);
val = xudma_rchanrt_read(rx_chn->udma_rchanx, UDMA_RCHAN_RT_CTL_REG); val = xudma_rchanrt_read(rx_chn->udma_rchanx, UDMA_CHAN_RT_CTL_REG);
while (sync && (val & UDMA_CHAN_RT_CTL_EN)) { while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
val = xudma_rchanrt_read(rx_chn->udma_rchanx, val = xudma_rchanrt_read(rx_chn->udma_rchanx,
UDMA_RCHAN_RT_CTL_REG); UDMA_CHAN_RT_CTL_REG);
udelay(1); udelay(1);
if (i > K3_UDMAX_TDOWN_TIMEOUT_US) { if (i > K3_UDMAX_TDOWN_TIMEOUT_US) {
dev_err(rx_chn->common.dev, "RX tdown timeout\n"); dev_err(rx_chn->common.dev, "RX tdown timeout\n");
...@@ -1123,7 +1120,7 @@ void k3_udma_glue_tdown_rx_chn(struct k3_udma_glue_rx_channel *rx_chn, ...@@ -1123,7 +1120,7 @@ void k3_udma_glue_tdown_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
} }
val = xudma_rchanrt_read(rx_chn->udma_rchanx, val = xudma_rchanrt_read(rx_chn->udma_rchanx,
UDMA_RCHAN_RT_PEER_RT_EN_REG); UDMA_CHAN_RT_PEER_RT_EN_REG);
if (sync && (val & UDMA_PEER_RT_EN_ENABLE)) if (sync && (val & UDMA_PEER_RT_EN_ENABLE))
dev_err(rx_chn->common.dev, "TX tdown peer not stopped\n"); dev_err(rx_chn->common.dev, "TX tdown peer not stopped\n");
k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt tdown2"); k3_udma_glue_dump_rx_rt_chn(rx_chn, "rxrt tdown2");
......
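The rename works because the TX and RX real-time register blocks share the same layout, so the unified UDMA_CHAN_RT_* offsets can serve both directions; only the accessor (tchanrt vs rchanrt) differs. A minimal sketch, not part of the series, assuming the xudma_*rt helpers and defines shown in these hunks:

/* assumes the UDMA_CHAN_RT_ defines and xudma_ helpers from this series */
#include <linux/types.h>

static void example_pause_tx_chn(struct udma_tchan *tchan)
{
	u32 val = xudma_tchanrt_read(tchan, UDMA_CHAN_RT_CTL_REG);

	xudma_tchanrt_write(tchan, UDMA_CHAN_RT_CTL_REG,
			    val | UDMA_CHAN_RT_CTL_PAUSE);
}

static void example_pause_rx_chn(struct udma_rchan *rchan)
{
	u32 val = xudma_rchanrt_read(rchan, UDMA_CHAN_RT_CTL_REG);

	xudma_rchanrt_write(rchan, UDMA_CHAN_RT_CTL_REG,
			    val | UDMA_CHAN_RT_CTL_PAUSE);
}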
...@@ -121,13 +121,17 @@ XUDMA_GET_RESOURCE_ID(rflow); ...@@ -121,13 +121,17 @@ XUDMA_GET_RESOURCE_ID(rflow);
#define XUDMA_RT_IO_FUNCTIONS(res) \ #define XUDMA_RT_IO_FUNCTIONS(res) \
u32 xudma_##res##rt_read(struct udma_##res *p, int reg) \ u32 xudma_##res##rt_read(struct udma_##res *p, int reg) \
{ \ { \
return udma_##res##rt_read(p, reg); \ if (!p) \
return 0; \
return udma_read(p->reg_rt, reg); \
} \ } \
EXPORT_SYMBOL(xudma_##res##rt_read); \ EXPORT_SYMBOL(xudma_##res##rt_read); \
\ \
void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val) \ void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val) \
{ \ { \
udma_##res##rt_write(p, reg, val); \ if (!p) \
return; \
udma_write(p->reg_rt, reg, val); \
} \ } \
EXPORT_SYMBOL(xudma_##res##rt_write) EXPORT_SYMBOL(xudma_##res##rt_write)
XUDMA_RT_IO_FUNCTIONS(tchan); XUDMA_RT_IO_FUNCTIONS(tchan);
......
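For readers unfamiliar with the macro, XUDMA_RT_IO_FUNCTIONS(tchan) now expands to roughly the following (a sketch of the post-patch expansion; udma_read()/udma_write() and struct udma_tchan are driver-private helpers assumed from the k3-udma core):

u32 xudma_tchanrt_read(struct udma_tchan *p, int reg)
{
	if (!p)
		return 0;
	return udma_read(p->reg_rt, reg);
}
EXPORT_SYMBOL(xudma_tchanrt_read);

void xudma_tchanrt_write(struct udma_tchan *p, int reg, u32 val)
{
	if (!p)
		return;
	udma_write(p->reg_rt, reg, val);
}
EXPORT_SYMBOL(xudma_tchanrt_write);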
...@@ -18,52 +18,41 @@ ...@@ -18,52 +18,41 @@
#define UDMA_RX_FLOW_ID_FW_OES_REG 0x80 #define UDMA_RX_FLOW_ID_FW_OES_REG 0x80
#define UDMA_RX_FLOW_ID_FW_STATUS_REG 0x88 #define UDMA_RX_FLOW_ID_FW_STATUS_REG 0x88
/* TX chan RT regs */ /* TCHANRT/RCHANRT registers */
#define UDMA_TCHAN_RT_CTL_REG 0x0 #define UDMA_CHAN_RT_CTL_REG 0x0
#define UDMA_TCHAN_RT_SWTRIG_REG 0x8 #define UDMA_CHAN_RT_SWTRIG_REG 0x8
#define UDMA_TCHAN_RT_STDATA_REG 0x80 #define UDMA_CHAN_RT_STDATA_REG 0x80
#define UDMA_TCHAN_RT_PEER_REG(i) (0x200 + ((i) * 0x4)) #define UDMA_CHAN_RT_PEER_REG(i) (0x200 + ((i) * 0x4))
#define UDMA_TCHAN_RT_PEER_STATIC_TR_XY_REG \ #define UDMA_CHAN_RT_PEER_STATIC_TR_XY_REG \
UDMA_TCHAN_RT_PEER_REG(0) /* PSI-L: 0x400 */ UDMA_CHAN_RT_PEER_REG(0) /* PSI-L: 0x400 */
#define UDMA_TCHAN_RT_PEER_STATIC_TR_Z_REG \ #define UDMA_CHAN_RT_PEER_STATIC_TR_Z_REG \
UDMA_TCHAN_RT_PEER_REG(1) /* PSI-L: 0x401 */ UDMA_CHAN_RT_PEER_REG(1) /* PSI-L: 0x401 */
#define UDMA_TCHAN_RT_PEER_BCNT_REG \ #define UDMA_CHAN_RT_PEER_BCNT_REG \
UDMA_TCHAN_RT_PEER_REG(4) /* PSI-L: 0x404 */ UDMA_CHAN_RT_PEER_REG(4) /* PSI-L: 0x404 */
#define UDMA_TCHAN_RT_PEER_RT_EN_REG \ #define UDMA_CHAN_RT_PEER_RT_EN_REG \
UDMA_TCHAN_RT_PEER_REG(8) /* PSI-L: 0x408 */ UDMA_CHAN_RT_PEER_REG(8) /* PSI-L: 0x408 */
#define UDMA_TCHAN_RT_PCNT_REG 0x400 #define UDMA_CHAN_RT_PCNT_REG 0x400
#define UDMA_TCHAN_RT_BCNT_REG 0x408 #define UDMA_CHAN_RT_BCNT_REG 0x408
#define UDMA_TCHAN_RT_SBCNT_REG 0x410 #define UDMA_CHAN_RT_SBCNT_REG 0x410
/* RX chan RT regs */ /* UDMA_CAP Registers */
#define UDMA_RCHAN_RT_CTL_REG 0x0 #define UDMA_CAP2_TCHAN_CNT(val) ((val) & 0x1ff)
#define UDMA_RCHAN_RT_SWTRIG_REG 0x8 #define UDMA_CAP2_ECHAN_CNT(val) (((val) >> 9) & 0x1ff)
#define UDMA_RCHAN_RT_STDATA_REG 0x80 #define UDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff)
#define UDMA_CAP3_RFLOW_CNT(val) ((val) & 0x3fff)
#define UDMA_RCHAN_RT_PEER_REG(i) (0x200 + ((i) * 0x4)) #define UDMA_CAP3_HCHAN_CNT(val) (((val) >> 14) & 0x1ff)
#define UDMA_RCHAN_RT_PEER_STATIC_TR_XY_REG \ #define UDMA_CAP3_UCHAN_CNT(val) (((val) >> 23) & 0x1ff)
UDMA_RCHAN_RT_PEER_REG(0) /* PSI-L: 0x400 */
#define UDMA_RCHAN_RT_PEER_STATIC_TR_Z_REG \ /* UDMA_CHAN_RT_CTL_REG */
UDMA_RCHAN_RT_PEER_REG(1) /* PSI-L: 0x401 */
#define UDMA_RCHAN_RT_PEER_BCNT_REG \
UDMA_RCHAN_RT_PEER_REG(4) /* PSI-L: 0x404 */
#define UDMA_RCHAN_RT_PEER_RT_EN_REG \
UDMA_RCHAN_RT_PEER_REG(8) /* PSI-L: 0x408 */
#define UDMA_RCHAN_RT_PCNT_REG 0x400
#define UDMA_RCHAN_RT_BCNT_REG 0x408
#define UDMA_RCHAN_RT_SBCNT_REG 0x410
/* UDMA_TCHAN_RT_CTL_REG/UDMA_RCHAN_RT_CTL_REG */
#define UDMA_CHAN_RT_CTL_EN BIT(31) #define UDMA_CHAN_RT_CTL_EN BIT(31)
#define UDMA_CHAN_RT_CTL_TDOWN BIT(30) #define UDMA_CHAN_RT_CTL_TDOWN BIT(30)
#define UDMA_CHAN_RT_CTL_PAUSE BIT(29) #define UDMA_CHAN_RT_CTL_PAUSE BIT(29)
#define UDMA_CHAN_RT_CTL_FTDOWN BIT(28) #define UDMA_CHAN_RT_CTL_FTDOWN BIT(28)
#define UDMA_CHAN_RT_CTL_ERROR BIT(0) #define UDMA_CHAN_RT_CTL_ERROR BIT(0)
/* UDMA_TCHAN_RT_PEER_RT_EN_REG/UDMA_RCHAN_RT_PEER_RT_EN_REG (PSI-L: 0x408) */ /* UDMA_CHAN_RT_PEER_RT_EN_REG */
#define UDMA_PEER_RT_EN_ENABLE BIT(31) #define UDMA_PEER_RT_EN_ENABLE BIT(31)
#define UDMA_PEER_RT_EN_TEARDOWN BIT(30) #define UDMA_PEER_RT_EN_TEARDOWN BIT(30)
#define UDMA_PEER_RT_EN_PAUSE BIT(29) #define UDMA_PEER_RT_EN_PAUSE BIT(29)
......
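The new UDMA_CAP2_ and UDMA_CAP3_ helpers extract fixed bit fields from the controller's capability registers. A hedged sketch of how a probe path might decode them; the register reads producing cap2/cap3 are assumed to happen elsewhere:

#include <linux/types.h>
#include <linux/printk.h>
/* assumes the UDMA_CAP2_ and UDMA_CAP3_ helpers from this hunk */

static void example_parse_caps(u32 cap2, u32 cap3)
{
	u32 tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2);	/* bits 8:0   */
	u32 echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2);	/* bits 17:9  */
	u32 rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2);	/* bits 26:18 */
	u32 rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3);	/* bits 13:0  */
	u32 hchan_cnt = UDMA_CAP3_HCHAN_CNT(cap3);	/* bits 22:14 */
	u32 uchan_cnt = UDMA_CAP3_UCHAN_CNT(cap3);	/* bits 31:23 */

	pr_info("UDMA: %u tchan, %u echan, %u rchan, %u rflow, %u hchan, %u uchan\n",
		tchan_cnt, echan_cnt, rchan_cnt, rflow_cnt, hchan_cnt, uchan_cnt);
}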
...@@ -287,6 +287,8 @@ struct xgene_dma_chan { ...@@ -287,6 +287,8 @@ struct xgene_dma_chan {
/** /**
* struct xgene_dma - internal representation of an X-Gene DMA device * struct xgene_dma - internal representation of an X-Gene DMA device
* @dev: reference to this device's struct device
* @clk: reference to this device's clock
* @err_irq: DMA error irq number * @err_irq: DMA error irq number
* @ring_num: start id number for DMA ring * @ring_num: start id number for DMA ring
* @csr_dma: base for DMA register access * @csr_dma: base for DMA register access
......
...@@ -214,6 +214,7 @@ struct xilinx_dpdma_tx_desc { ...@@ -214,6 +214,7 @@ struct xilinx_dpdma_tx_desc {
* @lock: lock to access struct xilinx_dpdma_chan * @lock: lock to access struct xilinx_dpdma_chan
* @desc_pool: descriptor allocation pool * @desc_pool: descriptor allocation pool
* @err_task: error IRQ bottom half handler * @err_task: error IRQ bottom half handler
* @desc: References to descriptors being processed
* @desc.pending: Descriptor schedule to the hardware, pending execution * @desc.pending: Descriptor schedule to the hardware, pending execution
* @desc.active: Descriptor being executed by the hardware * @desc.active: Descriptor being executed by the hardware
* @xdev: DPDMA device * @xdev: DPDMA device
...@@ -295,6 +296,7 @@ static inline void dpdma_set(void __iomem *base, u32 offset, u32 set) ...@@ -295,6 +296,7 @@ static inline void dpdma_set(void __iomem *base, u32 offset, u32 set)
/** /**
* xilinx_dpdma_sw_desc_set_dma_addrs - Set DMA addresses in the descriptor * xilinx_dpdma_sw_desc_set_dma_addrs - Set DMA addresses in the descriptor
* @xdev: DPDMA device
* @sw_desc: The software descriptor in which to set DMA addresses * @sw_desc: The software descriptor in which to set DMA addresses
* @prev: The previous descriptor * @prev: The previous descriptor
* @dma_addr: array of dma addresses * @dma_addr: array of dma addresses
...@@ -1070,7 +1072,7 @@ static int xilinx_dpdma_config(struct dma_chan *dchan, ...@@ -1070,7 +1072,7 @@ static int xilinx_dpdma_config(struct dma_chan *dchan,
* Abuse the slave_id to indicate that the channel is part of a video * Abuse the slave_id to indicate that the channel is part of a video
* group. * group.
*/ */
if (chan->id >= ZYNQMP_DPDMA_VIDEO0 && chan->id <= ZYNQMP_DPDMA_VIDEO2) if (chan->id <= ZYNQMP_DPDMA_VIDEO2)
chan->video_group = config->slave_id != 0; chan->video_group = config->slave_id != 0;
spin_unlock_irqrestore(&chan->lock, flags); spin_unlock_irqrestore(&chan->lock, flags);
......
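The dropped lower bound is the W=1 "comparison of unsigned expression" cleanup: ZYNQMP_DPDMA_VIDEO0 is the first, zero-valued channel ID in the DPDMA dt-bindings header and chan->id is unsigned, so the lower-bound test can never be false. A minimal illustration, assuming those binding constants:

#include <linux/types.h>
#include <dt-bindings/dma/xlnx-zynqmp-dpdma.h>	/* ZYNQMP_DPDMA_VIDEO0..VIDEO2 */

static bool example_is_video_channel(unsigned int id)
{
	/*
	 * With VIDEO0 == 0 and id unsigned, "id >= ZYNQMP_DPDMA_VIDEO0" is
	 * always true (and -Wtype-limits warns about it under W=1), so only
	 * the upper bound carries information.
	 */
	return id <= ZYNQMP_DPDMA_VIDEO2;
}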
...@@ -39,6 +39,7 @@ enum dma_status { ...@@ -39,6 +39,7 @@ enum dma_status {
DMA_IN_PROGRESS, DMA_IN_PROGRESS,
DMA_PAUSED, DMA_PAUSED,
DMA_ERROR, DMA_ERROR,
DMA_OUT_OF_ORDER,
}; };
/** /**
...@@ -61,6 +62,7 @@ enum dma_transaction_type { ...@@ -61,6 +62,7 @@ enum dma_transaction_type {
DMA_SLAVE, DMA_SLAVE,
DMA_CYCLIC, DMA_CYCLIC,
DMA_INTERLEAVE, DMA_INTERLEAVE,
DMA_COMPLETION_NO_ORDER,
DMA_REPEAT, DMA_REPEAT,
DMA_LOAD_EOT, DMA_LOAD_EOT,
/* last transaction type for creation of the capabilities mask */ /* last transaction type for creation of the capabilities mask */
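A client of a controller that sets DMA_COMPLETION_NO_ORDER cannot infer completion of earlier cookies from a later one, so each cookie is checked individually and DMA_OUT_OF_ORDER is handled explicitly. A hedged sketch; the helper name is illustrative, not an existing API:

#include <linux/dmaengine.h>

static bool example_poll_one_cookie(struct dma_chan *chan, dma_cookie_t cookie)
{
	enum dma_status status = dmaengine_tx_status(chan, cookie, NULL);

	if (status == DMA_OUT_OF_ORDER) {
		/*
		 * Cookie ordering means nothing here; completion has to be
		 * confirmed by other means (e.g. the descriptor callback).
		 */
		return false;
	}

	return status == DMA_COMPLETE;
}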
...@@ -164,7 +166,7 @@ struct dma_interleaved_template { ...@@ -164,7 +166,7 @@ struct dma_interleaved_template {
* @DMA_PREP_INTERRUPT - trigger an interrupt (callback) upon completion of * @DMA_PREP_INTERRUPT - trigger an interrupt (callback) upon completion of
* this transaction * this transaction
* @DMA_CTRL_ACK - if clear, the descriptor cannot be reused until the client * @DMA_CTRL_ACK - if clear, the descriptor cannot be reused until the client
* acknowledges receipt, i.e. has has a chance to establish any dependency * acknowledges receipt, i.e. has a chance to establish any dependency
* chains * chains
* @DMA_PREP_PQ_DISABLE_P - prevent generation of P while generating Q * @DMA_PREP_PQ_DISABLE_P - prevent generation of P while generating Q
* @DMA_PREP_PQ_DISABLE_Q - prevent generation of Q while generating P * @DMA_PREP_PQ_DISABLE_Q - prevent generation of Q while generating P
...@@ -479,7 +481,11 @@ enum dma_residue_granularity { ...@@ -479,7 +481,11 @@ enum dma_residue_granularity {
* Since the enum dma_transfer_direction is not defined as bit flag for * Since the enum dma_transfer_direction is not defined as bit flag for
* each type, the dma controller should set BIT(<TYPE>) and same * each type, the dma controller should set BIT(<TYPE>) and same
* should be checked by controller as well * should be checked by controller as well
* @min_burst: min burst capability per-transfer
* @max_burst: max burst capability per-transfer * @max_burst: max burst capability per-transfer
* @max_sg_burst: max number of SG list entries executed in a single burst
* DMA transaction with no software intervention for reinitialization. * DMA transaction with no software intervention for reinitialization.
* Zero value means unlimited number of entries.
* @cmd_pause: true, if pause is supported (i.e. for reading residue or * @cmd_pause: true, if pause is supported (i.e. for reading residue or
* for resume later) * for resume later)
* @cmd_resume: true, if resume is supported * @cmd_resume: true, if resume is supported
...@@ -492,7 +498,9 @@ struct dma_slave_caps { ...@@ -492,7 +498,9 @@ struct dma_slave_caps {
u32 src_addr_widths; u32 src_addr_widths;
u32 dst_addr_widths; u32 dst_addr_widths;
u32 directions; u32 directions;
u32 min_burst;
u32 max_burst; u32 max_burst;
u32 max_sg_burst;
bool cmd_pause; bool cmd_pause;
bool cmd_resume; bool cmd_resume;
bool cmd_terminate; bool cmd_terminate;
...@@ -783,7 +791,11 @@ struct dma_filter { ...@@ -783,7 +791,11 @@ struct dma_filter {
* Since the enum dma_transfer_direction is not defined as bit flag for * Since the enum dma_transfer_direction is not defined as bit flag for
* each type, the dma controller should set BIT(<TYPE>) and same * each type, the dma controller should set BIT(<TYPE>) and same
* should be checked by controller as well * should be checked by controller as well
* @min_burst: min burst capability per-transfer
* @max_burst: max burst capability per-transfer * @max_burst: max burst capability per-transfer
* @max_sg_burst: max number of SG list entries executed in a single burst
* DMA transaction with no software intervention for reinitialization. * DMA transaction with no software intervention for reinitialization.
* Zero value means unlimited number of entries.
* @residue_granularity: granularity of the transfer residue reported * @residue_granularity: granularity of the transfer residue reported
* by tx_status * by tx_status
* @device_alloc_chan_resources: allocate resources and return the * @device_alloc_chan_resources: allocate resources and return the
...@@ -803,6 +815,8 @@ struct dma_filter { ...@@ -803,6 +815,8 @@ struct dma_filter {
* be called after period_len bytes have been transferred. * be called after period_len bytes have been transferred.
* @device_prep_interleaved_dma: Transfer expression in a generic way. * @device_prep_interleaved_dma: Transfer expression in a generic way.
* @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address
* @device_caps: May be used to override the generic DMA slave capabilities
* with per-channel specific ones
* @device_config: Pushes a new configuration to a channel, return 0 or an error * @device_config: Pushes a new configuration to a channel, return 0 or an error
* code * code
* @device_pause: Pauses any transfer happening on a channel. Returns * @device_pause: Pauses any transfer happening on a channel. Returns
...@@ -853,7 +867,9 @@ struct dma_device { ...@@ -853,7 +867,9 @@ struct dma_device {
u32 src_addr_widths; u32 src_addr_widths;
u32 dst_addr_widths; u32 dst_addr_widths;
u32 directions; u32 directions;
u32 min_burst;
u32 max_burst; u32 max_burst;
u32 max_sg_burst;
bool descriptor_reuse; bool descriptor_reuse;
enum dma_residue_granularity residue_granularity; enum dma_residue_granularity residue_granularity;
...@@ -901,6 +917,8 @@ struct dma_device { ...@@ -901,6 +917,8 @@ struct dma_device {
struct dma_chan *chan, dma_addr_t dst, u64 data, struct dma_chan *chan, dma_addr_t dst, u64 data,
unsigned long flags); unsigned long flags);
void (*device_caps)(struct dma_chan *chan,
struct dma_slave_caps *caps);
int (*device_config)(struct dma_chan *chan, int (*device_config)(struct dma_chan *chan,
struct dma_slave_config *config); struct dma_slave_config *config);
int (*device_pause)(struct dma_chan *chan); int (*device_pause)(struct dma_chan *chan);
......
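The new device_caps callback lets a controller driver override the generic capabilities, which are filled from struct dma_device, on a per-channel basis. A hedged sketch; "foo_chan" and its fields are hypothetical and not part of this merge:

#include <linux/kernel.h>
#include <linux/dmaengine.h>

struct foo_chan {
	struct dma_chan dchan;
	bool high_throughput;	/* hypothetical per-channel property */
};

static void foo_dma_device_caps(struct dma_chan *chan,
				struct dma_slave_caps *caps)
{
	struct foo_chan *fc = container_of(chan, struct foo_chan, dchan);

	/* caps already hold the dma_device-wide values; adjust them here */
	if (!fc->high_throughput) {
		caps->max_burst = 16;
		caps->max_sg_burst = 1;	/* one SG entry per burst */
	}
}

/* registered at probe time: dma_dev->device_caps = foo_dma_device_caps; */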
...@@ -8,10 +8,15 @@ ...@@ -8,10 +8,15 @@
#ifndef _PLATFORM_DATA_DMA_DW_H #ifndef _PLATFORM_DATA_DMA_DW_H
#define _PLATFORM_DATA_DMA_DW_H #define _PLATFORM_DATA_DMA_DW_H
#include <linux/device.h> #include <linux/bits.h>
#include <linux/types.h>
#define DW_DMA_MAX_NR_MASTERS 4 #define DW_DMA_MAX_NR_MASTERS 4
#define DW_DMA_MAX_NR_CHANNELS 8 #define DW_DMA_MAX_NR_CHANNELS 8
#define DW_DMA_MIN_BURST 1
#define DW_DMA_MAX_BURST 256
struct device;
/** /**
* struct dw_dma_slave - Controller-specific information about a slave * struct dw_dma_slave - Controller-specific information about a slave
...@@ -42,6 +47,8 @@ struct dw_dma_slave { ...@@ -42,6 +47,8 @@ struct dw_dma_slave {
* @data_width: Maximum data width supported by hardware per AHB master * @data_width: Maximum data width supported by hardware per AHB master
* (in bytes, power of 2) * (in bytes, power of 2)
* @multi_block: Multi block transfers supported by hardware per channel. * @multi_block: Multi block transfers supported by hardware per channel.
* @max_burst: Maximum value of burst transaction size supported by hardware
* per channel (in units of CTL.SRC_TR_WIDTH/CTL.DST_TR_WIDTH).
* @protctl: Protection control signals setting per channel. * @protctl: Protection control signals setting per channel.
*/ */
struct dw_dma_platform_data { struct dw_dma_platform_data {
...@@ -56,6 +63,7 @@ struct dw_dma_platform_data { ...@@ -56,6 +63,7 @@ struct dw_dma_platform_data {
unsigned char nr_masters; unsigned char nr_masters;
unsigned char data_width[DW_DMA_MAX_NR_MASTERS]; unsigned char data_width[DW_DMA_MAX_NR_MASTERS];
unsigned char multi_block[DW_DMA_MAX_NR_CHANNELS]; unsigned char multi_block[DW_DMA_MAX_NR_CHANNELS];
u32 max_burst[DW_DMA_MAX_NR_CHANNELS];
#define CHAN_PROTCTL_PRIVILEGED BIT(0) #define CHAN_PROTCTL_PRIVILEGED BIT(0)
#define CHAN_PROTCTL_BUFFERABLE BIT(1) #define CHAN_PROTCTL_BUFFERABLE BIT(1)
#define CHAN_PROTCTL_CACHEABLE BIT(2) #define CHAN_PROTCTL_CACHEABLE BIT(2)
......
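With the new max_burst array, a platform can cap the burst transaction size per channel through dw_dma_platform_data. A hedged sketch using only fields visible in this hunk plus nr_channels/nr_masters; the values are illustrative and stay within [DW_DMA_MIN_BURST, DW_DMA_MAX_BURST]:

#include <linux/platform_data/dma-dw.h>

static struct dw_dma_platform_data example_dw_pdata = {
	.nr_channels = 8,
	.nr_masters  = 2,
	.data_width  = { 4, 4 },			/* bytes per AHB master */
	.multi_block = { 1, 1, 1, 1, 1, 1, 1, 1 },	/* per channel */
	.max_burst   = { 16, 16, 16, 16, 8, 8, 8, 8 },	/* per channel, in
							   CTL.SRC/DST_TR_WIDTH
							   units */
};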
...@@ -181,6 +181,12 @@ struct dsa_completion_record { ...@@ -181,6 +181,12 @@ struct dsa_completion_record {
uint32_t bytes_completed; uint32_t bytes_completed;
uint64_t fault_addr; uint64_t fault_addr;
union { union {
/* common record */
struct {
uint32_t invalid_flags:24;
uint32_t rsvd2:8;
};
uint16_t delta_rec_size; uint16_t delta_rec_size;
uint16_t crc_val; uint16_t crc_val;
......
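The added anonymous struct exposes the invalid-flags information a submitter can inspect when a descriptor is rejected. A hedged user-space sketch; the DSA_COMP_SUCCESS comparison and reporting are illustrative, only the fields shown above are assumed:

#include <stdio.h>
#include <linux/idxd.h>		/* struct dsa_completion_record */

static void example_report_completion(const struct dsa_completion_record *comp)
{
	if (comp->status != DSA_COMP_SUCCESS) {
		fprintf(stderr, "descriptor failed, status 0x%x\n",
			comp->status);
		/* new in this series: which descriptor flags were invalid */
		fprintf(stderr, "invalid flags: 0x%06x\n", comp->invalid_flags);
	}
}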