Commit a5b871c9 authored by Linus Torvalds

Merge tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have a bunch of core changes to support dynamic channels,
  hotplug of controllers, new apis for metadata ops etc along with new
  drivers for Intel data accelerators, TI K3 UDMA, PLX DMA engine and
  hisilicon Kunpeng DMA engine. Also usual assorted updates to drivers.

  Core:
   - Support for dynamic channels
   - Removal of various slave wrappers
   - Make few slave request APIs as private to dmaengine
   - Symlinks between channels and slaves
   - Support for hotplug of controllers
   - Support for metadata_ops for dma_async_tx_descriptor
   - Reporting DMA cached data amount
   - Virtual dma channel locking updates

  New drivers/device/feature support:
   - Driver for Intel data accelerators
   - Driver for TI K3 UDMA
   - Driver for PLX DMA engine
   - Driver for hisilicon Kunpeng DMA engine
   - Support for QorIQ LS1028A eDMA in the fsl-edma driver
   - Support for cyclic dma in sun4i driver
   - Support for X1830 in JZ4780 driver"

* tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (62 commits)
  dmaengine: Create symlinks between DMA channels and slaves
  dmaengine: hisilicon: Add Kunpeng DMA engine support
  dmaengine: idxd: add char driver to expose submission portal to userland
  dmaengine: idxd: connect idxd to dmaengine subsystem
  dmaengine: idxd: add descriptor manipulation routines
  dmaengine: idxd: add sysfs ABI for idxd driver
  dmaengine: idxd: add configuration component of driver
  dmaengine: idxd: Init and probe for Intel data accelerators
  dmaengine: add support to dynamic register/unregister of channels
  dmaengine: break out channel registration
  x86/asm: add iosubmit_cmds512() based on MOVDIR64B CPU instruction
  dmaengine: ti: k3-udma: fix spelling mistake "limted" -> "limited"
  dmaengine: s3c24xx-dma: fix spelling mistake "to" -> "too"
  dmaengine: Move dma_get_{,any_}slave_channel() to private dmaengine.h
  dmaengine: Remove dma_request_slave_channel_compat() wrapper
  dmaengine: Remove dma_device_satisfies_mask() wrapper
  dt-bindings: fsl-imx-sdma: Add i.MX8MM/i.MX8MN/i.MX8MP compatible string
  dmaengine: zynqmp_dma: fix burst length configuration
  dmaengine: sun4i: Add support for cyclic requests with dedicated DMA
  dmaengine: fsl-qdma: fix duplicated argument to &&
  ...
parents 715d1285 71723a96
What: sys/bus/dsa/devices/dsa<m>/cdev_major
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The major number that the character device driver assigned to
this device.
What: sys/bus/dsa/devices/dsa<m>/errors
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The error information for this device.
What: sys/bus/dsa/devices/dsa<m>/max_batch_size
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The largest number of work descriptors in a batch.
What: sys/bus/dsa/devices/dsa<m>/max_work_queues_size
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The maximum work queue size supported by this device.
What: sys/bus/dsa/devices/dsa<m>/max_engines
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The maximum number of engines supported by this device.
What: sys/bus/dsa/devices/dsa<m>/max_groups
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The maximum number of groups that can be created under this device.
What: sys/bus/dsa/devices/dsa<m>/max_tokens
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The total number of bandwidth tokens supported by this device.
The bandwidth tokens represent resources within the DSA
implementation, and these resources are allocated by engines to
support operations.
What: sys/bus/dsa/devices/dsa<m>/max_transfer_size
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The number of bytes to be read from the source address to
perform the operation. The maximum transfer size is dependent on
the workqueue the descriptor was submitted to.
What: sys/bus/dsa/devices/dsa<m>/max_work_queues
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The maximum number of work queues that this device supports.
What: sys/bus/dsa/devices/dsa<m>/numa_node
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The numa node number for this device.
What: sys/bus/dsa/devices/dsa<m>/op_cap
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The operation capability bit mask specifies the operation types
supported by this device.
What: sys/bus/dsa/devices/dsa<m>/state
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The state information of this device. It can be either enabled
or disabled.
What: sys/bus/dsa/devices/dsa<m>/group<m>.<n>
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The assigned group under this device.
What: sys/bus/dsa/devices/dsa<m>/engine<m>.<n>
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The assigned engine under this device.
What: sys/bus/dsa/devices/dsa<m>/wq<m>.<n>
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The assigned work queue under this device.
What: sys/bus/dsa/devices/dsa<m>/configurable
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: Indicates whether this device is configurable.
What: sys/bus/dsa/devices/dsa<m>/token_limit
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The maximum number of bandwidth tokens that may be in use at
one time by operations that access low bandwidth memory in the
device.
What: sys/bus/dsa/devices/wq<m>.<n>/group_id
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The group id that this work queue belongs to.
What: sys/bus/dsa/devices/wq<m>.<n>/size
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The work queue size for this work queue.
What: sys/bus/dsa/devices/wq<m>.<n>/type
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The type of this work queue. It can be "kernel" type for work
queue usage in the kernel space, or "user" type for work queue
usage by applications in user space.
What: sys/bus/dsa/devices/wq<m>.<n>/cdev_minor
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The minor number assigned to this work queue by the character
device driver.
What: sys/bus/dsa/devices/wq<m>.<n>/mode
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The work queue mode type for this work queue.
What: sys/bus/dsa/devices/wq<m>.<n>/priority
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The priority value of this work queue. It is a value relative to
the other work queues in the same group, used to control the quality of
service for dispatching work from multiple work queues in the same group.
What: sys/bus/dsa/devices/wq<m>.<n>/state
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The current state of the work queue.
What: sys/bus/dsa/devices/wq<m>.<n>/threshold
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The number of entries in this work queue that may be filled
via a limited portal.
What: sys/bus/dsa/devices/engine<m>.<n>/group_id
Date: Oct 25, 2019
KernelVersion: 5.6.0
Contact: dmaengine@vger.kernel.org
Description: The group that this engine belongs to.
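For illustration only (not part of the ABI description above): these attributes are plain sysfs text files, so a userspace tool can read them directly. A minimal sketch, assuming a device enumerated as dsa0; the attribute chosen is just an example:

	#include <stdio.h>

	int main(void)
	{
		char buf[64];
		/* Path assumes the first enumerated device; adjust as needed. */
		FILE *f = fopen("/sys/bus/dsa/devices/dsa0/max_batch_size", "r");

		if (!f)
			return 1;
		if (fgets(buf, sizeof(buf), f))
			printf("max_batch_size: %s", buf);
		fclose(f);
		return 0;
	}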
...@@ -10,6 +10,7 @@ Required properties:
- compatible :
	- "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 SoC
	- "fsl,imx7ulp-edma" for eDMA2 used similar to that on i.mx7ulp
	- "fsl,ls1028a-edma" for eDMA used similar to that on Vybrid vf610 SoC
- reg : Specifies base physical address(s) and size of the eDMA registers.
	The 1st region is eDMA control register's address and size.
	The 2nd and the 3rd regions are programmable channel multiplexing
......
...@@ -10,6 +10,9 @@ Required properties:
      "fsl,imx6q-sdma"
      "fsl,imx7d-sdma"
      "fsl,imx8mq-sdma"
      "fsl,imx8mm-sdma"
      "fsl,imx8mn-sdma"
      "fsl,imx8mp-sdma"
  The -to variants should be preferred since they allow to determine the
  correct ROM script addresses needed for the driver to work without additional
  firmware.
......
-* Ingenic JZ4780 DMA Controller
+* Ingenic XBurst DMA Controller

 Required properties:
...@@ -8,10 +8,12 @@ Required properties:
   * ingenic,jz4770-dma
   * ingenic,jz4780-dma
   * ingenic,x1000-dma
+  * ingenic,x1830-dma
 - reg: Should contain the DMA channel registers location and length, followed
   by the DMA controller registers location and length.
 - interrupts: Should contain the interrupt specifier of the DMA controller.
-- clocks: Should contain a clock specifier for the JZ4780/X1000 PDMA clock.
+- clocks: Should contain a clock specifier for the JZ4780/X1000/X1830 PDMA
+  clock.
 - #dma-cells: Must be <2>. Number of integer cells in the dmas property of
   DMA clients (see below).
......
...@@ -30,6 +30,7 @@ Required Properties:
		- "renesas,dmac-r8a7794" (R-Car E2)
		- "renesas,dmac-r8a7795" (R-Car H3)
		- "renesas,dmac-r8a7796" (R-Car M3-W)
		- "renesas,dmac-r8a77961" (R-Car M3-W+)
		- "renesas,dmac-r8a77965" (R-Car M3-N)
		- "renesas,dmac-r8a77970" (R-Car V3M)
		- "renesas,dmac-r8a77980" (R-Car V3H)
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-udma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 NAVSS Unified DMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
description: |
The UDMA-P is intended to perform similar (but significantly upgraded)
functions as the packet-oriented DMA used on previous SoC devices. The UDMA-P
module supports the transmission and reception of various packet types.
The UDMA-P architecture facilitates the segmentation and reassembly of SoC DMA
data structure compliant packets to/from smaller data blocks that are natively
compatible with the specific requirements of each connected peripheral.
Multiple Tx and Rx channels are provided within the DMA which allow multiple
segmentation or reassembly operations to be ongoing. The DMA controller
maintains state information for each of the channels which allows packet
segmentation and reassembly operations to be time division multiplexed between
channels in order to share the underlying DMA hardware. An external DMA
scheduler is used to control the ordering and rate at which this multiplexing
occurs for Transmit operations. The ordering and rate of Receive operations
is indirectly controlled by the order in which blocks are pushed into the DMA
on the Rx PSI-L interface.
The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
channels. Channels in the UDMA-P can be configured to be either Packet-Based
or Third-Party channels on a channel by channel basis.
All transfers within NAVSS are done between PSI-L source and destination
threads.
The peripherals serviced by UDMA can be PSI-L native (sa2ul, cpsw, etc) or
legacy, non PSI-L native peripherals. In the latter case a special, small PDMA
is tasked to act as a bridge between the PSI-L fabric and the legacy
peripheral.
PDMAs can be configured via UDMAP peer registers to match with the
configuration of the legacy peripheral.
allOf:
- $ref: "../dma-controller.yaml#"
properties:
"#dma-cells":
const: 1
description: |
The cell is the PSI-L thread ID of the remote (to UDMAP) end.
Valid ranges for thread ID depends on the data movement direction:
for source thread IDs (rx): 0 - 0x7fff
for destination thread IDs (tx): 0x8000 - 0xffff
Please refer to the device documentation for the PSI-L thread map and also
the PSI-L peripheral chapter for the correct thread ID.
compatible:
enum:
- ti,am654-navss-main-udmap
- ti,am654-navss-mcu-udmap
- ti,j721e-navss-main-udmap
- ti,j721e-navss-mcu-udmap
reg:
maxItems: 3
reg-names:
items:
- const: gcfg
- const: rchanrt
- const: tchanrt
msi-parent: true
ti,sci:
description: phandle to TI-SCI compatible System controller node
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
ti,sci-dev-id:
description: TI-SCI device id of UDMAP
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
ti,ringacc:
description: phandle to the ring accelerator node
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
ti,sci-rm-range-tchan:
description: |
Array of UDMA tchan resource subtypes for resource allocation for this
host
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
# Should be enough
maxItems: 255
ti,sci-rm-range-rchan:
description: |
Array of UDMA rchan resource subtypes for resource allocation for this
host
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
# Should be enough
maxItems: 255
ti,sci-rm-range-rflow:
description: |
Array of UDMA rflow resource subtypes for resource allocation for this
host
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
# Should be enough
maxItems: 255
required:
- compatible
- "#dma-cells"
- reg
- reg-names
- msi-parent
- ti,sci
- ti,sci-dev-id
- ti,ringacc
- ti,sci-rm-range-tchan
- ti,sci-rm-range-rchan
- ti,sci-rm-range-rflow
examples:
- |+
cbass_main {
#address-cells = <2>;
#size-cells = <2>;
cbass_main_navss: navss@30800000 {
compatible = "simple-mfd";
#address-cells = <2>;
#size-cells = <2>;
dma-coherent;
dma-ranges;
ranges;
ti,sci-dev-id = <118>;
main_udmap: dma-controller@31150000 {
compatible = "ti,am654-navss-main-udmap";
reg = <0x0 0x31150000 0x0 0x100>,
<0x0 0x34000000 0x0 0x100000>,
<0x0 0x35000000 0x0 0x100000>;
reg-names = "gcfg", "rchanrt", "tchanrt";
#dma-cells = <1>;
ti,ringacc = <&ringacc>;
msi-parent = <&inta_main_udmass>;
ti,sci = <&dmsc>;
ti,sci-dev-id = <188>;
ti,sci-rm-range-tchan = <0x1>, /* TX_HCHAN */
<0x2>; /* TX_CHAN */
ti,sci-rm-range-rchan = <0x4>, /* RX_HCHAN */
<0x5>; /* RX_CHAN */
ti,sci-rm-range-rflow = <0x6>; /* GP RFLOW */
};
};
mcasp0: mcasp@02B00000 {
dmas = <&main_udmap 0xc400>, <&main_udmap 0x4400>;
dma-names = "tx", "rx";
};
crypto: crypto@4E00000 {
compatible = "ti,sa2ul-crypto";
dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>, <&main_udmap 0x4001>;
dma-names = "tx", "rx1", "rx2";
};
};
* Texas Instruments K3 NavigatorSS Ring Accelerator
The Ring Accelerator (RA) is a machine which converts read/write accesses
from/to a constant address into corresponding read/write accesses from/to a
circular data structure in memory. The RA eliminates the need for each DMA
controller which needs to access ring elements from having to know the current
state of the ring (base address, current offset). The DMA controller
performs a read or write access to a specific address range (which maps to the
source interface on the RA) and the RA replaces the address for the transaction
with a new address which corresponds to the head or tail element of the ring
(head for reads, tail for writes).
The Ring Accelerator is a hardware module that is responsible for accelerating
management of the packet queues. The K3 SoCs can have more than one RA instance.
Required properties:
- compatible : Must be "ti,am654-navss-ringacc";
- reg : Should contain register location and length of the following
named register regions.
- reg-names : should be
"rt" - The RA Ring Real-time Control/Status Registers
"fifos" - The RA Queues Registers
"proxy_gcfg" - The RA Proxy Global Config Registers
"proxy_target" - The RA Proxy Datapath Registers
- ti,num-rings : Number of rings supported by RA
- ti,sci-rm-range-gp-rings : TI-SCI RM subtype for GP ring range
- ti,sci : phandle on TI-SCI compatible System controller node
- ti,sci-dev-id : TI-SCI device id of the ring accelerator
- msi-parent : phandle for "ti,sci-inta" interrupt controller
Optional properties:
- ti,dma-ring-reset-quirk : enable the software workaround for the
  ringacc / udma ring state interoperability issue
Example:
ringacc: ringacc@3c000000 {
compatible = "ti,am654-navss-ringacc";
reg = <0x0 0x3c000000 0x0 0x400000>,
<0x0 0x38000000 0x0 0x400000>,
<0x0 0x31120000 0x0 0x100>,
<0x0 0x33000000 0x0 0x40000>;
reg-names = "rt", "fifos",
"proxy_gcfg", "proxy_target";
ti,num-rings = <818>;
ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */
ti,dma-ring-reset-quirk;
ti,sci = <&dmsc>;
ti,sci-dev-id = <187>;
msi-parent = <&inta_main_udmass>;
};
client:
dma_ipx: dma_ipx@<addr> {
...
ti,ringacc = <&ringacc>;
...
}
...@@ -151,6 +151,93 @@ The details of these operations are:
Note that callbacks will always be invoked from the DMA
engines tasklet, never from interrupt context.
Optional: per descriptor metadata
---------------------------------
DMAengine provides two ways for metadata support.
DESC_METADATA_CLIENT
The metadata buffer is allocated/provided by the client driver and it is
attached to the descriptor.
.. code-block:: c
int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
void *data, size_t len);
DESC_METADATA_ENGINE
The metadata buffer is allocated/managed by the DMA driver. The client
driver can ask for the pointer, maximum size and the currently used size of
the metadata and can directly update or read it.
Because the DMA driver manages the memory area containing the metadata,
clients must make sure that they do not try to access or get the pointer
after their transfer completion callback has run for the descriptor.
If no completion callback has been defined for the transfer, then the
metadata must not be accessed after issue_pending.
In other words: if the aim is to read back metadata after the transfer is
completed, then the client must use the completion callback.
.. code-block:: c
void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
size_t *payload_len, size_t *max_len);
int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
size_t payload_len);
Client drivers can query if a given mode is supported with:
.. code-block:: c
bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
enum dma_desc_metadata_mode mode);
Depending on the mode used, client drivers must follow a different flow.
DESC_METADATA_CLIENT
- DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:
1. prepare the descriptor (dmaengine_prep_*)
construct the metadata in the client's buffer
2. use dmaengine_desc_attach_metadata() to attach the buffer to the
descriptor
3. submit the transfer
- DMA_DEV_TO_MEM:
1. prepare the descriptor (dmaengine_prep_*)
2. use dmaengine_desc_attach_metadata() to attach the buffer to the
descriptor
3. submit the transfer
4. when the transfer is completed, the metadata should be available in the
attached buffer
DESC_METADATA_ENGINE
- DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:
1. prepare the descriptor (dmaengine_prep_*)
2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
engine's metadata area
3. update the metadata at the pointer
4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the
amount of data the client has placed into the metadata buffer
5. submit the transfer
- DMA_DEV_TO_MEM:
1. prepare the descriptor (dmaengine_prep_*)
2. submit the transfer
3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
the pointer to the engine's metadata area
4. read out the metadata from the pointer
.. note::
When DESC_METADATA_ENGINE mode is used the metadata area for the descriptor
is no longer valid after the transfer has been completed (valid up to the
point when the completion callback returns if used).
Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed;
client drivers must use only one of the modes for any given descriptor.
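As an illustration, a condensed client-side sketch of the DESC_METADATA_CLIENT
flow for DMA_MEM_TO_DEV described above. The channel (chan), the mapped payload
buffer (buf_dma/buf_len), the metadata buffer and the callback are placeholders
assumed to be set up by the client driver; this is not taken from an existing
driver.

.. code-block:: c

	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;
	int ret;

	if (!dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
		return -ENOTSUPP;

	/* 1. prepare the descriptor */
	desc = dmaengine_prep_slave_single(chan, buf_dma, buf_len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	desc->callback = client_dma_callback;	/* placeholder callback */
	desc->callback_param = client;

	/* 2. attach the client-owned metadata buffer (already filled in) */
	ret = dmaengine_desc_attach_metadata(desc, client->metadata,
					     client->metadata_len);
	if (ret)
		return ret;

	/* 3. submit the transfer */
	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);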
4. Submit the transaction
   Once the descriptor has been prepared and the callback information
......
...@@ -247,6 +247,54 @@ after each transfer. In case of a ring buffer, they may loop
(DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
are typically fixed.
Per descriptor metadata support
-------------------------------
Some data movement architectures (DMA controller and peripherals) use metadata
associated with a transaction. The DMA controller's role is to transfer the
payload and the metadata alongside.
The metadata itself is not used by the DMA engine, but it contains
parameters, keys, vectors, etc. for the peripheral or from the peripheral.
The DMAengine framework provides a generic way to facilitate metadata for
descriptors. Depending on the architecture, the DMA driver can implement either
or both of the methods and it is up to the client driver to choose which one
to use.
- DESC_METADATA_CLIENT
The metadata buffer is allocated/provided by the client driver and it is
attached (via the dmaengine_desc_attach_metadata() helper) to the descriptor.
From the DMA driver the following is expected for this mode:
- DMA_MEM_TO_DEV / DMA_MEM_TO_MEM
The data from the provided metadata buffer should be prepared for the DMA
controller to be sent alongside of the payload data, either by copying it to a
hardware descriptor or to a closely coupled packet.
- DMA_DEV_TO_MEM
On transfer completion the DMA driver must copy the metadata to the client
provided metadata buffer before notifying the client about the completion.
After the transfer completion, DMA drivers must not touch the metadata
buffer provided by the client.
- DESC_METADATA_ENGINE
The metadata buffer is allocated/managed by the DMA driver. The client driver
can ask for the pointer, maximum size and the currently used size of the
metadata and can directly update or read it. dmaengine_desc_get_metadata_ptr()
and dmaengine_desc_set_metadata_len() are provided as helper functions.
From the DMA driver the following is expected for this mode:
- get_metadata_ptr
Should return a pointer for the metadata buffer, the maximum size of the
metadata buffer and the currently used / valid (if any) bytes in the buffer.
- set_metadata_len
It is called by the client after it has placed the metadata in the buffer
to let the DMA driver know the number of valid bytes provided.
Note: since the client will ask for the metadata pointer in the completion
callback (in DMA_DEV_TO_MEM case) the DMA driver must ensure that the
descriptor is not freed up before the callback is called.
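For orientation, a bare-bones sketch of how a DMA driver could back these
expectations with the dma_descriptor_metadata_ops callbacks. struct my_desc,
to_my_desc() and MY_MAX_METADATA_LEN are illustrative placeholders, not taken
from an existing driver.

.. code-block:: c

	static int my_attach_metadata(struct dma_async_tx_descriptor *desc,
				      void *data, size_t len)
	{
		/* DESC_METADATA_CLIENT: remember the client's buffer. */
		struct my_desc *d = to_my_desc(desc);

		if (len > MY_MAX_METADATA_LEN)
			return -EINVAL;
		d->client_metadata = data;
		d->client_metadata_len = len;
		return 0;
	}

	static void *my_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
					 size_t *payload_len, size_t *max_len)
	{
		/* DESC_METADATA_ENGINE: expose the driver-owned area. */
		struct my_desc *d = to_my_desc(desc);

		*payload_len = d->metadata_used;
		*max_len = MY_MAX_METADATA_LEN;
		return d->metadata;
	}

	static int my_set_metadata_len(struct dma_async_tx_descriptor *desc,
				       size_t payload_len)
	{
		struct my_desc *d = to_my_desc(desc);

		if (payload_len > MY_MAX_METADATA_LEN)
			return -EINVAL;
		d->metadata_used = payload_len;
		return 0;
	}

	static struct dma_descriptor_metadata_ops my_metadata_ops = {
		.attach = my_attach_metadata,
		.get_ptr = my_get_metadata_ptr,
		.set_len = my_set_metadata_len,
	};

The prep callback would then point the descriptor at these ops and the device
would advertise the supported modes, so that the client-side helpers above can
reach them.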
Device operations
-----------------
......
...@@ -8392,6 +8392,14 @@ Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
S: Supported
F: drivers/dma/ioat*

INTEL IADX DRIVER
M: Dave Jiang <dave.jiang@intel.com>
L: dmaengine@vger.kernel.org
S: Supported
F: drivers/dma/idxd/*
F: include/uapi/linux/idxd.h
F: include/linux/idxd.h

INTEL IDLE DRIVER
M: Jacob Pan <jacob.jun.pan@linux.intel.com>
M: Len Brown <lenb@kernel.org>
...@@ -13162,6 +13170,11 @@ S: Maintained
F: drivers/iio/chemical/pms7003.c
F: Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml

PLX DMA DRIVER
M: Logan Gunthorpe <logang@deltatee.com>
S: Maintained
F: drivers/dma/plx_dma.c

PMBUS HARDWARE MONITORING DRIVERS
M: Guenter Roeck <linux@roeck-us.net>
L: linux-hwmon@vger.kernel.org
......
...@@ -399,4 +399,40 @@ extern bool arch_memremap_can_ram_remap(resource_size_t offset,
extern bool phys_mem_access_encrypted(unsigned long phys_addr,
unsigned long size);
/**
* iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
* @__dst: destination, in MMIO space (must be 512-bit aligned)
* @src: source
* @count: number of 512-bit quantities to submit
*
* Submit data from kernel space to MMIO space, in units of 512 bits at a
* time. Order of access is not guaranteed, nor is a memory barrier
* performed afterwards.
*
* Warning: Do not use this helper unless your driver has checked that the CPU
* instruction is supported on the platform.
*/
static inline void iosubmit_cmds512(void __iomem *__dst, const void *src,
size_t count)
{
/*
* Note that this isn't an "on-stack copy", just definition of "dst"
* as a pointer to 64-bytes of stuff that is going to be overwritten.
* In the MOVDIR64B case that may be needed as you can use the
* MOVDIR64B instruction to copy arbitrary memory around. This trick
* lets the compiler know how much gets clobbered.
*/
volatile struct { char _[64]; } *dst = __dst;
const u8 *from = src;
const u8 *end = from + count * 64;
while (from < end) {
/* MOVDIR64B [rdx], rax */
asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
: "=m" (dst)
: "d" (from), "a" (dst));
from += 64;
}
}
#endif /* _ASM_X86_IO_H */
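For illustration, a minimal sketch of how a driver might use the helper above.
The portal is assumed to have been ioremap()ed already, and struct my_cmd /
my_submit() are hypothetical names; the feature check mirrors the warning in
the kerneldoc.

	#include <linux/kernel.h>
	#include <linux/errno.h>
	#include <linux/types.h>
	#include <asm/cpufeature.h>
	#include <asm/io.h>

	/* Hypothetical 64-byte command layout, not a kernel structure. */
	struct my_cmd {
		u8 bytes[64];
	} __aligned(64);

	static int my_submit(void __iomem *portal, const struct my_cmd *cmd)
	{
		/* iosubmit_cmds512() must only run on CPUs with MOVDIR64B. */
		if (!boot_cpu_has(X86_FEATURE_MOVDIR64B))
			return -ENODEV;

		/* One 512-bit quantity: count = 1. */
		iosubmit_cmds512(portal, cmd, 1);
		return 0;
	}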
...@@ -239,6 +239,14 @@ config FSL_RAID
the capability to offload memcpy, xor and pq computation
for raid5/6.
config HISI_DMA
tristate "HiSilicon DMA Engine support"
depends on ARM64 || (COMPILE_TEST && PCI_MSI)
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support HiSilicon Kunpeng DMA engine.
config IMG_MDC_DMA
tristate "IMG MDC support"
depends on MIPS || COMPILE_TEST
...@@ -273,6 +281,19 @@ config INTEL_IDMA64
Enable DMA support for Intel Low Power Subsystem such as found on
Intel Skylake PCH.
config INTEL_IDXD
tristate "Intel Data Accelerators support"
depends on PCI && X86_64
select DMA_ENGINE
select SBITMAP
help
Enable support for the Intel(R) data accelerators present
in Intel Xeon CPU.
Say Y if you have such a platform.
If unsure, say N.
config INTEL_IOATDMA
tristate "Intel I/OAT DMA support"
depends on PCI && X86_64
...@@ -497,6 +518,15 @@ config PXA_DMA
16 to 32 channels for peripheral to memory or memory to memory
transfers.
config PLX_DMA
tristate "PLX ExpressLane PEX Switch DMA Engine Support"
depends on PCI
select DMA_ENGINE
help
Some PLX ExpressLane PCI Switches support additional DMA engines.
These are exposed via extra functions on the switch's
upstream port. Each function exposes one DMA channel.
config SIRF_DMA
tristate "CSR SiRFprimaII/SiRFmarco DMA support"
depends on ARCH_SIRF
......
...@@ -35,12 +35,14 @@ obj-$(CONFIG_FSL_EDMA) += fsl-edma.o fsl-edma-common.o
obj-$(CONFIG_MCF_EDMA) += mcf-edma.o fsl-edma-common.o
obj-$(CONFIG_FSL_QDMA) += fsl-qdma.o
obj-$(CONFIG_FSL_RAID) += fsl_raid.o
obj-$(CONFIG_HISI_DMA) += hisi_dma.o
obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_INTEL_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IDXD) += idxd/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_K3_DMA) += k3dma.o
...@@ -59,6 +61,7 @@ obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_OWL_DMA) += owl-dma.o
obj-$(CONFIG_PCH_DMA) += pch_dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PLX_DMA) += plx_dma.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_RENESAS_DMA) += sh/
......
...@@ -797,10 +797,7 @@ static int bcm2835_dma_terminate_all(struct dma_chan *chan)
 	/* stop DMA activity */
 	if (c->desc) {
-		if (c->desc->vd.tx.flags & DMA_PREP_INTERRUPT)
-			vchan_terminate_vdesc(&c->desc->vd);
-		else
-			vchan_vdesc_fini(&c->desc->vd);
+		vchan_terminate_vdesc(&c->desc->vd);
 		c->desc = NULL;
 		bcm2835_dma_abort(c);
 	}
......
...@@ -830,6 +830,7 @@ static int axi_dmac_probe(struct platform_device *pdev)
 	struct dma_device *dma_dev;
 	struct axi_dmac *dmac;
 	struct resource *res;
+	struct regmap *regmap;
 	int ret;

 	dmac = devm_kzalloc(&pdev->dev, sizeof(*dmac), GFP_KERNEL);
...@@ -921,10 +922,17 @@ static int axi_dmac_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, dmac);

-	devm_regmap_init_mmio(&pdev->dev, dmac->base, &axi_dmac_regmap_config);
+	regmap = devm_regmap_init_mmio(&pdev->dev, dmac->base,
+				       &axi_dmac_regmap_config);
+	if (IS_ERR(regmap)) {
+		ret = PTR_ERR(regmap);
+		goto err_free_irq;
+	}

 	return 0;

+err_free_irq:
+	free_irq(dmac->irq, dmac);
 err_unregister_of:
 	of_dma_controller_free(pdev->dev.of_node);
 err_unregister_device:
......
...@@ -1021,12 +1021,19 @@ static const struct jz4780_dma_soc_data x1000_dma_soc_data = {
	.flags = JZ_SOC_DATA_PROGRAMMABLE_DMA,
};

static const struct jz4780_dma_soc_data x1830_dma_soc_data = {
	.nb_channels = 32,
	.transfer_ord_max = 7,
	.flags = JZ_SOC_DATA_PROGRAMMABLE_DMA,
};

static const struct of_device_id jz4780_dma_dt_match[] = {
	{ .compatible = "ingenic,jz4740-dma", .data = &jz4740_dma_soc_data },
	{ .compatible = "ingenic,jz4725b-dma", .data = &jz4725b_dma_soc_data },
	{ .compatible = "ingenic,jz4770-dma", .data = &jz4770_dma_soc_data },
	{ .compatible = "ingenic,jz4780-dma", .data = &jz4780_dma_soc_data },
	{ .compatible = "ingenic,x1000-dma", .data = &x1000_dma_soc_data },
	{ .compatible = "ingenic,x1830-dma", .data = &x1830_dma_soc_data },
	{},
};
MODULE_DEVICE_TABLE(of, jz4780_dma_dt_match);
......
...@@ -77,6 +77,7 @@ static inline enum dma_status dma_cookie_status(struct dma_chan *chan,
		state->last = complete;
		state->used = used;
		state->residue = 0;
		state->in_flight_bytes = 0;
	}
	return dma_async_is_complete(cookie, complete, used);
}
...@@ -87,6 +88,13 @@ static inline void dma_set_residue(struct dma_tx_state *state, u32 residue)
		state->residue = residue;
}

static inline void dma_set_in_flight_bytes(struct dma_tx_state *state,
					   u32 in_flight_bytes)
{
	if (state)
		state->in_flight_bytes = in_flight_bytes;
}

struct dmaengine_desc_callback {
	dma_async_tx_callback callback;
	dma_async_tx_callback_result callback_result;
...@@ -171,4 +179,7 @@ dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb)
	return (cb->callback) ? true : false;
}

struct dma_chan *dma_get_slave_channel(struct dma_chan *chan);
struct dma_chan *dma_get_any_slave_channel(struct dma_device *device);

#endif
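The in_flight_bytes field added above carries the "DMA cached data amount"
mentioned in the pull summary. A rough sketch of how a driver's
device_tx_status() callback could report it; my_read_residue() and
my_read_in_flight() stand in for reads of controller-specific counters:

	static enum dma_status my_tx_status(struct dma_chan *chan,
					    dma_cookie_t cookie,
					    struct dma_tx_state *txstate)
	{
		enum dma_status ret;

		ret = dma_cookie_status(chan, cookie, txstate);
		if (ret == DMA_COMPLETE || !txstate)
			return ret;

		/* Both values would come from the controller's hardware counters. */
		dma_set_residue(txstate, my_read_residue(chan, cookie));
		dma_set_in_flight_bytes(txstate, my_read_in_flight(chan, cookie));

		return ret;
	}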
...@@ -636,14 +636,10 @@ static int dma_chan_terminate_all(struct dma_chan *dchan)
 	vchan_get_all_descriptors(&chan->vc, &head);

-	/*
-	 * As vchan_dma_desc_free_list can access to desc_allocated list
-	 * we need to call it in vc.lock context.
-	 */
-	vchan_dma_desc_free_list(&chan->vc, &head);
-
 	spin_unlock_irqrestore(&chan->vc.lock, flags);

+	vchan_dma_desc_free_list(&chan->vc, &head);
+
 	dev_vdbg(dchan2dev(dchan), "terminated: %s\n", axi_chan_name(chan));

 	return 0;
......
...@@ -109,10 +109,15 @@ void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan,
	u32 ch = fsl_chan->vchan.chan.chan_id;
	void __iomem *muxaddr;
	unsigned int chans_per_mux, ch_off;
	int endian_diff[4] = {3, 1, -1, -3};
	u32 dmamux_nr = fsl_chan->edma->drvdata->dmamuxs;

	chans_per_mux = fsl_chan->edma->n_chans / dmamux_nr;
	ch_off = fsl_chan->vchan.chan.chan_id % chans_per_mux;

	if (fsl_chan->edma->drvdata->mux_swap)
		ch_off += endian_diff[ch_off % 4];

	muxaddr = fsl_chan->edma->muxbase[ch / chans_per_mux];
	slot = EDMAMUX_CHCFG_SOURCE(slot);
......
...@@ -147,6 +147,7 @@ struct fsl_edma_drvdata {
	enum edma_version version;
	u32 dmamuxs;
	bool has_dmaclk;
	bool mux_swap;
	int (*setup_irq)(struct platform_device *pdev,
			 struct fsl_edma_engine *fsl_edma);
};
......
...@@ -233,6 +233,13 @@ static struct fsl_edma_drvdata vf610_data = {
	.setup_irq = fsl_edma_irq_init,
};

static struct fsl_edma_drvdata ls1028a_data = {
	.version = v1,
	.dmamuxs = DMAMUX_NR,
	.mux_swap = true,
	.setup_irq = fsl_edma_irq_init,
};

static struct fsl_edma_drvdata imx7ulp_data = {
	.version = v3,
	.dmamuxs = 1,
...@@ -242,6 +249,7 @@ static struct fsl_edma_drvdata imx7ulp_data = {
static const struct of_device_id fsl_edma_dt_ids[] = {
	{ .compatible = "fsl,vf610-edma", .data = &vf610_data},
	{ .compatible = "fsl,ls1028a-edma", .data = &ls1028a_data},
	{ .compatible = "fsl,imx7ulp-edma", .data = &imx7ulp_data},
	{ /* sentinel */ }
};
......
...@@ -304,7 +304,7 @@ static void fsl_qdma_free_chan_resources(struct dma_chan *chan)
 	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);

-	if (!fsl_queue->comp_pool && !fsl_queue->comp_pool)
+	if (!fsl_queue->comp_pool && !fsl_queue->desc_pool)
 		return;

 	list_for_each_entry_safe(comp_temp, _comp_temp,
......
obj-$(CONFIG_INTEL_IDXD) += idxd.o
idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/sched/task.h>
#include <linux/intel-svm.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <uapi/linux/idxd.h>
#include "registers.h"
#include "idxd.h"
struct idxd_cdev_context {
const char *name;
dev_t devt;
struct ida minor_ida;
};
/*
* ictx is an array based off of accelerator types. enum idxd_type
* is used as index
*/
static struct idxd_cdev_context ictx[IDXD_TYPE_MAX] = {
{ .name = "dsa" },
};
struct idxd_user_context {
struct idxd_wq *wq;
struct task_struct *task;
unsigned int flags;
};
enum idxd_cdev_cleanup {
CDEV_NORMAL = 0,
CDEV_FAILED,
};
static void idxd_cdev_dev_release(struct device *dev)
{
dev_dbg(dev, "releasing cdev device\n");
kfree(dev);
}
static struct device_type idxd_cdev_device_type = {
.name = "idxd_cdev",
.release = idxd_cdev_dev_release,
};
static inline struct idxd_cdev *inode_idxd_cdev(struct inode *inode)
{
struct cdev *cdev = inode->i_cdev;
return container_of(cdev, struct idxd_cdev, cdev);
}
static inline struct idxd_wq *idxd_cdev_wq(struct idxd_cdev *idxd_cdev)
{
return container_of(idxd_cdev, struct idxd_wq, idxd_cdev);
}
static inline struct idxd_wq *inode_wq(struct inode *inode)
{
return idxd_cdev_wq(inode_idxd_cdev(inode));
}
static int idxd_cdev_open(struct inode *inode, struct file *filp)
{
struct idxd_user_context *ctx;
struct idxd_device *idxd;
struct idxd_wq *wq;
struct device *dev;
struct idxd_cdev *idxd_cdev;
wq = inode_wq(inode);
idxd = wq->idxd;
dev = &idxd->pdev->dev;
idxd_cdev = &wq->idxd_cdev;
dev_dbg(dev, "%s called\n", __func__);
if (idxd_wq_refcount(wq) > 1 && wq_dedicated(wq))
return -EBUSY;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->wq = wq;
filp->private_data = ctx;
idxd_wq_get(wq);
return 0;
}
static int idxd_cdev_release(struct inode *node, struct file *filep)
{
struct idxd_user_context *ctx = filep->private_data;
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
dev_dbg(dev, "%s called\n", __func__);
filep->private_data = NULL;
kfree(ctx);
idxd_wq_put(wq);
return 0;
}
static int check_vma(struct idxd_wq *wq, struct vm_area_struct *vma,
const char *func)
{
struct device *dev = &wq->idxd->pdev->dev;
if ((vma->vm_end - vma->vm_start) > PAGE_SIZE) {
dev_info_ratelimited(dev,
"%s: %s: mapping too large: %lu\n",
current->comm, func,
vma->vm_end - vma->vm_start);
return -EINVAL;
}
return 0;
}
static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct idxd_user_context *ctx = filp->private_data;
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
struct pci_dev *pdev = idxd->pdev;
phys_addr_t base = pci_resource_start(pdev, IDXD_WQ_BAR);
unsigned long pfn;
int rc;
dev_dbg(&pdev->dev, "%s called\n", __func__);
rc = check_vma(wq, vma, __func__);
if (rc < 0)
return rc;
vma->vm_flags |= VM_DONTCOPY;
pfn = (base + idxd_get_wq_portal_full_offset(wq->id,
IDXD_PORTAL_LIMITED)) >> PAGE_SHIFT;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vma->vm_private_data = ctx;
return io_remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE,
vma->vm_page_prot);
}
static __poll_t idxd_cdev_poll(struct file *filp,
struct poll_table_struct *wait)
{
struct idxd_user_context *ctx = filp->private_data;
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
unsigned long flags;
__poll_t out = 0;
poll_wait(filp, &idxd_cdev->err_queue, wait);
spin_lock_irqsave(&idxd->dev_lock, flags);
if (idxd->sw_err.valid)
out = EPOLLIN | EPOLLRDNORM;
spin_unlock_irqrestore(&idxd->dev_lock, flags);
return out;
}
static const struct file_operations idxd_cdev_fops = {
.owner = THIS_MODULE,
.open = idxd_cdev_open,
.release = idxd_cdev_release,
.mmap = idxd_cdev_mmap,
.poll = idxd_cdev_poll,
};
int idxd_cdev_get_major(struct idxd_device *idxd)
{
return MAJOR(ictx[idxd->type].devt);
}
static int idxd_wq_cdev_dev_setup(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
struct idxd_cdev_context *cdev_ctx;
struct device *dev;
int minor, rc;
idxd_cdev->dev = kzalloc(sizeof(*idxd_cdev->dev), GFP_KERNEL);
if (!idxd_cdev->dev)
return -ENOMEM;
dev = idxd_cdev->dev;
dev->parent = &idxd->pdev->dev;
dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
idxd->id, wq->id);
dev->bus = idxd_get_bus_type(idxd);
cdev_ctx = &ictx[wq->idxd->type];
minor = ida_simple_get(&cdev_ctx->minor_ida, 0, MINORMASK, GFP_KERNEL);
if (minor < 0) {
rc = minor;
goto ida_err;
}
dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
dev->type = &idxd_cdev_device_type;
rc = device_register(dev);
if (rc < 0) {
dev_err(&idxd->pdev->dev, "device register failed\n");
put_device(dev);
goto dev_reg_err;
}
idxd_cdev->minor = minor;
return 0;
dev_reg_err:
ida_simple_remove(&cdev_ctx->minor_ida, MINOR(dev->devt));
ida_err:
kfree(dev);
idxd_cdev->dev = NULL;
return rc;
}
static void idxd_wq_cdev_cleanup(struct idxd_wq *wq,
enum idxd_cdev_cleanup cdev_state)
{
struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
struct idxd_cdev_context *cdev_ctx;
cdev_ctx = &ictx[wq->idxd->type];
if (cdev_state == CDEV_NORMAL)
cdev_del(&idxd_cdev->cdev);
device_unregister(idxd_cdev->dev);
/*
* The device_type->release() will be called on the device and free
* the allocated struct device. We can just forget it.
*/
ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
idxd_cdev->dev = NULL;
idxd_cdev->minor = -1;
}
int idxd_wq_add_cdev(struct idxd_wq *wq)
{
struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
struct cdev *cdev = &idxd_cdev->cdev;
struct device *dev;
int rc;
rc = idxd_wq_cdev_dev_setup(wq);
if (rc < 0)
return rc;
dev = idxd_cdev->dev;
cdev_init(cdev, &idxd_cdev_fops);
cdev_set_parent(cdev, &dev->kobj);
rc = cdev_add(cdev, dev->devt, 1);
if (rc) {
dev_dbg(&wq->idxd->pdev->dev, "cdev_add failed: %d\n", rc);
idxd_wq_cdev_cleanup(wq, CDEV_FAILED);
return rc;
}
init_waitqueue_head(&idxd_cdev->err_queue);
return 0;
}
void idxd_wq_del_cdev(struct idxd_wq *wq)
{
idxd_wq_cdev_cleanup(wq, CDEV_NORMAL);
}
int idxd_cdev_register(void)
{
int rc, i;
for (i = 0; i < IDXD_TYPE_MAX; i++) {
ida_init(&ictx[i].minor_ida);
rc = alloc_chrdev_region(&ictx[i].devt, 0, MINORMASK,
ictx[i].name);
if (rc)
return rc;
}
return 0;
}
void idxd_cdev_remove(void)
{
int i;
for (i = 0; i < IDXD_TYPE_MAX; i++) {
unregister_chrdev_region(ictx[i].devt, MINORMASK);
ida_destroy(&ictx[i].minor_ida);
}
}
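For context, the character device registered above is consumed from user space
by opening the per-WQ node and mmap()ing one page of the limited submission
portal. A rough, hypothetical userspace sketch; the /dev path and WQ name are
examples, and actual descriptor submission via MOVDIR64B/ENQCMD is omitted:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* Example node name; the real name depends on device/WQ numbering. */
		int fd = open("/dev/dsa/wq0.0", O_RDWR);
		void *portal;

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* idxd_cdev_mmap() only allows a single page of the limited portal. */
		portal = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, 0);
		if (portal == MAP_FAILED) {
			perror("mmap");
			close(fd);
			return 1;
		}

		/* 64-byte descriptors would be written to 'portal' here. */

		munmap(portal, 4096);
		close(fd);
		return 0;
	}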
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/dmaengine.h>
#include <uapi/linux/idxd.h>
#include "../dmaengine.h"
#include "registers.h"
#include "idxd.h"
static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c)
{
return container_of(c, struct idxd_wq, dma_chan);
}
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type)
{
struct dma_async_tx_descriptor *tx;
struct dmaengine_result res;
int complete = 1;
if (desc->completion->status == DSA_COMP_SUCCESS)
res.result = DMA_TRANS_NOERROR;
else if (desc->completion->status)
res.result = DMA_TRANS_WRITE_FAILED;
else if (comp_type == IDXD_COMPLETE_ABORT)
res.result = DMA_TRANS_ABORTED;
else
complete = 0;
tx = &desc->txd;
if (complete && tx->cookie) {
dma_cookie_complete(tx);
dma_descriptor_unmap(tx);
dmaengine_desc_get_callback_invoke(tx, &res);
tx->callback = NULL;
tx->callback_result = NULL;
}
}
static void op_flag_setup(unsigned long flags, u32 *desc_flags)
{
*desc_flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
if (flags & DMA_PREP_INTERRUPT)
*desc_flags |= IDXD_OP_FLAG_RCI;
}
static inline void set_completion_address(struct idxd_desc *desc,
u64 *compl_addr)
{
*compl_addr = desc->compl_dma;
}
static inline void idxd_prep_desc_common(struct idxd_wq *wq,
struct dsa_hw_desc *hw, char opcode,
u64 addr_f1, u64 addr_f2, u64 len,
u64 compl, u32 flags)
{
struct idxd_device *idxd = wq->idxd;
hw->flags = flags;
hw->opcode = opcode;
hw->src_addr = addr_f1;
hw->dst_addr = addr_f2;
hw->xfer_size = len;
hw->priv = !!(wq->type == IDXD_WQT_KERNEL);
hw->completion_addr = compl;
/*
* Descriptor completion vectors are 1-8 for MSIX. We will round
* robin through the 8 vectors.
*/
wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
hw->int_handle = wq->vec_ptr;
}
static struct dma_async_tx_descriptor *
idxd_dma_submit_memcpy(struct dma_chan *c, dma_addr_t dma_dest,
dma_addr_t dma_src, size_t len, unsigned long flags)
{
struct idxd_wq *wq = to_idxd_wq(c);
u32 desc_flags;
struct idxd_device *idxd = wq->idxd;
struct idxd_desc *desc;
if (wq->state != IDXD_WQ_ENABLED)
return NULL;
if (len > idxd->max_xfer_bytes)
return NULL;
op_flag_setup(flags, &desc_flags);
desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
if (IS_ERR(desc))
return NULL;
idxd_prep_desc_common(wq, desc->hw, DSA_OPCODE_MEMMOVE,
dma_src, dma_dest, len, desc->compl_dma,
desc_flags);
desc->txd.flags = flags;
return &desc->txd;
}
static int idxd_dma_alloc_chan_resources(struct dma_chan *chan)
{
struct idxd_wq *wq = to_idxd_wq(chan);
struct device *dev = &wq->idxd->pdev->dev;
idxd_wq_get(wq);
dev_dbg(dev, "%s: client_count: %d\n", __func__,
idxd_wq_refcount(wq));
return 0;
}
static void idxd_dma_free_chan_resources(struct dma_chan *chan)
{
struct idxd_wq *wq = to_idxd_wq(chan);
struct device *dev = &wq->idxd->pdev->dev;
idxd_wq_put(wq);
dev_dbg(dev, "%s: client_count: %d\n", __func__,
idxd_wq_refcount(wq));
}
static enum dma_status idxd_dma_tx_status(struct dma_chan *dma_chan,
dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
return dma_cookie_status(dma_chan, cookie, txstate);
}
/*
* issue_pending() does not need to do anything since tx_submit() does the job
* already.
*/
static void idxd_dma_issue_pending(struct dma_chan *dma_chan)
{
}
dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
{
struct dma_chan *c = tx->chan;
struct idxd_wq *wq = to_idxd_wq(c);
dma_cookie_t cookie;
int rc;
struct idxd_desc *desc = container_of(tx, struct idxd_desc, txd);
cookie = dma_cookie_assign(tx);
rc = idxd_submit_desc(wq, desc);
if (rc < 0) {
idxd_free_desc(wq, desc);
return rc;
}
return cookie;
}
static void idxd_dma_release(struct dma_device *device)
{
}
int idxd_register_dma_device(struct idxd_device *idxd)
{
struct dma_device *dma = &idxd->dma_dev;
INIT_LIST_HEAD(&dma->channels);
dma->dev = &idxd->pdev->dev;
dma->device_release = idxd_dma_release;
if (idxd->hw.opcap.bits[0] & IDXD_OPCAP_MEMMOVE) {
dma_cap_set(DMA_MEMCPY, dma->cap_mask);
dma->device_prep_dma_memcpy = idxd_dma_submit_memcpy;
}
dma->device_tx_status = idxd_dma_tx_status;
dma->device_issue_pending = idxd_dma_issue_pending;
dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
dma->device_free_chan_resources = idxd_dma_free_chan_resources;
return dma_async_device_register(&idxd->dma_dev);
}
void idxd_unregister_dma_device(struct idxd_device *idxd)
{
dma_async_device_unregister(&idxd->dma_dev);
}
int idxd_register_dma_channel(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
struct dma_device *dma = &idxd->dma_dev;
struct dma_chan *chan = &wq->dma_chan;
int rc;
memset(&wq->dma_chan, 0, sizeof(struct dma_chan));
chan->device = dma;
list_add_tail(&chan->device_node, &dma->channels);
rc = dma_async_device_channel_register(dma, chan);
if (rc < 0)
return rc;
return 0;
}
void idxd_unregister_dma_channel(struct idxd_wq *wq)
{
dma_async_device_channel_unregister(&wq->idxd->dma_dev, &wq->dma_chan);
}
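Since the code above advertises DMA_MEMCPY, any generic dmaengine memcpy
client can drive it. A minimal sketch of such a consumer, assuming the caller
has already DMA-mapped src/dst; example_memcpy() is illustrative and not part
of this driver:

	#include <linux/dmaengine.h>
	#include <linux/errno.h>

	static int example_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
	{
		dma_cap_mask_t mask;
		struct dma_chan *chan;
		struct dma_async_tx_descriptor *tx;
		dma_cookie_t cookie;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);

		/* Any memcpy-capable channel will do, e.g. an idxd work queue. */
		chan = dma_request_channel(mask, NULL, NULL);
		if (!chan)
			return -ENODEV;

		tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
					       DMA_PREP_INTERRUPT);
		if (!tx) {
			dma_release_channel(chan);
			return -EIO;
		}

		cookie = dmaengine_submit(tx);
		dma_async_issue_pending(chan);
		dma_sync_wait(chan, cookie);

		dma_release_channel(chan);
		return 0;
	}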
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
#ifndef _IDXD_H_
#define _IDXD_H_
#include <linux/sbitmap.h>
#include <linux/dmaengine.h>
#include <linux/percpu-rwsem.h>
#include <linux/wait.h>
#include <linux/cdev.h>
#include "registers.h"
#define IDXD_DRIVER_VERSION "1.00"
extern struct kmem_cache *idxd_desc_pool;
#define IDXD_REG_TIMEOUT 50
#define IDXD_DRAIN_TIMEOUT 5000
enum idxd_type {
IDXD_TYPE_UNKNOWN = -1,
IDXD_TYPE_DSA = 0,
IDXD_TYPE_MAX
};
#define IDXD_NAME_SIZE 128
struct idxd_device_driver {
struct device_driver drv;
};
struct idxd_irq_entry {
struct idxd_device *idxd;
int id;
struct llist_head pending_llist;
struct list_head work_list;
};
struct idxd_group {
struct device conf_dev;
struct idxd_device *idxd;
struct grpcfg grpcfg;
int id;
int num_engines;
int num_wqs;
bool use_token_limit;
u8 tokens_allowed;
u8 tokens_reserved;
int tc_a;
int tc_b;
};
#define IDXD_MAX_PRIORITY 0xf
enum idxd_wq_state {
IDXD_WQ_DISABLED = 0,
IDXD_WQ_ENABLED,
};
enum idxd_wq_flag {
WQ_FLAG_DEDICATED = 0,
};
enum idxd_wq_type {
IDXD_WQT_NONE = 0,
IDXD_WQT_KERNEL,
IDXD_WQT_USER,
};
struct idxd_cdev {
struct cdev cdev;
struct device *dev;
int minor;
struct wait_queue_head err_queue;
};
#define IDXD_ALLOCATED_BATCH_SIZE 128U
#define WQ_NAME_SIZE 1024
#define WQ_TYPE_SIZE 10
enum idxd_op_type {
IDXD_OP_BLOCK = 0,
IDXD_OP_NONBLOCK = 1,
};
enum idxd_complete_type {
IDXD_COMPLETE_NORMAL = 0,
IDXD_COMPLETE_ABORT,
};
struct idxd_wq {
void __iomem *dportal;
struct device conf_dev;
struct idxd_cdev idxd_cdev;
struct idxd_device *idxd;
int id;
enum idxd_wq_type type;
struct idxd_group *group;
int client_count;
struct mutex wq_lock; /* mutex for workqueue */
u32 size;
u32 threshold;
u32 priority;
enum idxd_wq_state state;
unsigned long flags;
union wqcfg wqcfg;
atomic_t dq_count; /* dedicated queue flow control */
u32 vec_ptr; /* interrupt steering */
struct dsa_hw_desc **hw_descs;
int num_descs;
struct dsa_completion_record *compls;
dma_addr_t compls_addr;
int compls_size;
struct idxd_desc **descs;
struct sbitmap sbmap;
struct dma_chan dma_chan;
struct percpu_rw_semaphore submit_lock;
wait_queue_head_t submit_waitq;
char name[WQ_NAME_SIZE + 1];
};
struct idxd_engine {
struct device conf_dev;
int id;
struct idxd_group *group;
struct idxd_device *idxd;
};
/* shadow registers */
struct idxd_hw {
u32 version;
union gen_cap_reg gen_cap;
union wq_cap_reg wq_cap;
union group_cap_reg group_cap;
union engine_cap_reg engine_cap;
struct opcap opcap;
};
enum idxd_device_state {
IDXD_DEV_HALTED = -1,
IDXD_DEV_DISABLED = 0,
IDXD_DEV_CONF_READY,
IDXD_DEV_ENABLED,
};
enum idxd_device_flag {
IDXD_FLAG_CONFIGURABLE = 0,
};
struct idxd_device {
enum idxd_type type;
struct device conf_dev;
struct list_head list;
struct idxd_hw hw;
enum idxd_device_state state;
unsigned long flags;
int id;
int major;
struct pci_dev *pdev;
void __iomem *reg_base;
spinlock_t dev_lock; /* spinlock for device */
struct idxd_group *groups;
struct idxd_wq *wqs;
struct idxd_engine *engines;
int num_groups;
u32 msix_perm_offset;
u32 wqcfg_offset;
u32 grpcfg_offset;
u32 perfmon_offset;
u64 max_xfer_bytes;
u32 max_batch_size;
int max_groups;
int max_engines;
int max_tokens;
int max_wqs;
int max_wq_size;
int token_limit;
int nr_tokens; /* non-reserved tokens */
union sw_err_reg sw_err;
struct msix_entry *msix_entries;
int num_wq_irqs;
struct idxd_irq_entry *irq_entries;
struct dma_device dma_dev;
};
/* IDXD software descriptor */
struct idxd_desc {
struct dsa_hw_desc *hw;
dma_addr_t desc_dma;
struct dsa_completion_record *completion;
dma_addr_t compl_dma;
struct dma_async_tx_descriptor txd;
struct llist_node llnode;
struct list_head list;
int id;
struct idxd_wq *wq;
};
#define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev)
#define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
extern struct bus_type dsa_bus_type;
static inline bool wq_dedicated(struct idxd_wq *wq)
{
return test_bit(WQ_FLAG_DEDICATED, &wq->flags);
}
enum idxd_portal_prot {
IDXD_PORTAL_UNLIMITED = 0,
IDXD_PORTAL_LIMITED,
};
static inline int idxd_get_wq_portal_offset(enum idxd_portal_prot prot)
{
return prot * 0x1000;
}
static inline int idxd_get_wq_portal_full_offset(int wq_id,
enum idxd_portal_prot prot)
{
return ((wq_id * 4) << PAGE_SHIFT) + idxd_get_wq_portal_offset(prot);
}
static inline void idxd_set_type(struct idxd_device *idxd)
{
struct pci_dev *pdev = idxd->pdev;
if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0)
idxd->type = IDXD_TYPE_DSA;
else
idxd->type = IDXD_TYPE_UNKNOWN;
}
static inline void idxd_wq_get(struct idxd_wq *wq)
{
wq->client_count++;
}
static inline void idxd_wq_put(struct idxd_wq *wq)
{
wq->client_count--;
}
static inline int idxd_wq_refcount(struct idxd_wq *wq)
{
return wq->client_count;
};
const char *idxd_get_dev_name(struct idxd_device *idxd);
int idxd_register_bus_type(void);
void idxd_unregister_bus_type(void);
int idxd_setup_sysfs(struct idxd_device *idxd);
void idxd_cleanup_sysfs(struct idxd_device *idxd);
int idxd_register_driver(void);
void idxd_unregister_driver(void);
struct bus_type *idxd_get_bus_type(struct idxd_device *idxd);
/* device interrupt control */
irqreturn_t idxd_irq_handler(int vec, void *data);
irqreturn_t idxd_misc_thread(int vec, void *data);
irqreturn_t idxd_wq_thread(int irq, void *data);
void idxd_mask_error_interrupts(struct idxd_device *idxd);
void idxd_unmask_error_interrupts(struct idxd_device *idxd);
void idxd_mask_msix_vectors(struct idxd_device *idxd);
int idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id);
int idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id);
/* device control */
int idxd_device_enable(struct idxd_device *idxd);
int idxd_device_disable(struct idxd_device *idxd);
int idxd_device_reset(struct idxd_device *idxd);
int __idxd_device_reset(struct idxd_device *idxd);
void idxd_device_cleanup(struct idxd_device *idxd);
int idxd_device_config(struct idxd_device *idxd);
void idxd_device_wqs_clear_state(struct idxd_device *idxd);
/* work queue control */
int idxd_wq_alloc_resources(struct idxd_wq *wq);
void idxd_wq_free_resources(struct idxd_wq *wq);
int idxd_wq_enable(struct idxd_wq *wq);
int idxd_wq_disable(struct idxd_wq *wq);
int idxd_wq_map_portal(struct idxd_wq *wq);
void idxd_wq_unmap_portal(struct idxd_wq *wq);
/* submission */
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc);
struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype);
void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc);
/* dmaengine */
int idxd_register_dma_device(struct idxd_device *idxd);
void idxd_unregister_dma_device(struct idxd_device *idxd);
int idxd_register_dma_channel(struct idxd_wq *wq);
void idxd_unregister_dma_channel(struct idxd_wq *wq);
void idxd_parse_completion_status(u8 status, enum dmaengine_tx_result *res);
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type);
dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx);
/* cdev */
int idxd_cdev_register(void);
void idxd_cdev_remove(void);
int idxd_cdev_get_major(struct idxd_device *idxd);
int idxd_wq_add_cdev(struct idxd_wq *wq);
void idxd_wq_del_cdev(struct idxd_wq *wq);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/dmaengine.h>
#include <uapi/linux/idxd.h>
#include "../dmaengine.h"
#include "idxd.h"
#include "registers.h"
void idxd_device_wqs_clear_state(struct idxd_device *idxd)
{
int i;
lockdep_assert_held(&idxd->dev_lock);
for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i];
wq->state = IDXD_WQ_DISABLED;
}
}
static int idxd_restart(struct idxd_device *idxd)
{
int i, rc;
lockdep_assert_held(&idxd->dev_lock);
rc = __idxd_device_reset(idxd);
if (rc < 0)
goto out;
rc = idxd_device_config(idxd);
if (rc < 0)
goto out;
rc = idxd_device_enable(idxd);
if (rc < 0)
goto out;
for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i];
if (wq->state == IDXD_WQ_ENABLED) {
rc = idxd_wq_enable(wq);
if (rc < 0) {
dev_warn(&idxd->pdev->dev,
"Unable to re-enable wq %s\n",
dev_name(&wq->conf_dev));
}
}
}
return 0;
out:
idxd_device_wqs_clear_state(idxd);
idxd->state = IDXD_DEV_HALTED;
return rc;
}
irqreturn_t idxd_irq_handler(int vec, void *data)
{
struct idxd_irq_entry *irq_entry = data;
struct idxd_device *idxd = irq_entry->idxd;
idxd_mask_msix_vector(idxd, irq_entry->id);
return IRQ_WAKE_THREAD;
}
irqreturn_t idxd_misc_thread(int vec, void *data)
{
struct idxd_irq_entry *irq_entry = data;
struct idxd_device *idxd = irq_entry->idxd;
struct device *dev = &idxd->pdev->dev;
union gensts_reg gensts;
u32 cause, val = 0;
int i, rc;
bool err = false;
cause = ioread32(idxd->reg_base + IDXD_INTCAUSE_OFFSET);
if (cause & IDXD_INTC_ERR) {
spin_lock_bh(&idxd->dev_lock);
for (i = 0; i < 4; i++)
idxd->sw_err.bits[i] = ioread64(idxd->reg_base +
IDXD_SWERR_OFFSET + i * sizeof(u64));
iowrite64(IDXD_SWERR_ACK, idxd->reg_base + IDXD_SWERR_OFFSET);
if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) {
int id = idxd->sw_err.wq_idx;
struct idxd_wq *wq = &idxd->wqs[id];
if (wq->type == IDXD_WQT_USER)
wake_up_interruptible(&wq->idxd_cdev.err_queue);
} else {
int i;
for (i = 0; i < idxd->max_wqs; i++) {
struct idxd_wq *wq = &idxd->wqs[i];
if (wq->type == IDXD_WQT_USER)
wake_up_interruptible(&wq->idxd_cdev.err_queue);
}
}
spin_unlock_bh(&idxd->dev_lock);
val |= IDXD_INTC_ERR;
for (i = 0; i < 4; i++)
dev_warn(dev, "err[%d]: %#16.16llx\n",
i, idxd->sw_err.bits[i]);
err = true;
}
if (cause & IDXD_INTC_CMD) {
/* Driver does not use command interrupts */
val |= IDXD_INTC_CMD;
}
if (cause & IDXD_INTC_OCCUPY) {
/* Driver does not utilize occupancy interrupt */
val |= IDXD_INTC_OCCUPY;
}
if (cause & IDXD_INTC_PERFMON_OVFL) {
/*
* Driver does not utilize perfmon counter overflow interrupt
* yet.
*/
val |= IDXD_INTC_PERFMON_OVFL;
}
val ^= cause;
if (val)
dev_warn_once(dev, "Unexpected interrupt cause bits set: %#x\n",
val);
iowrite32(cause, idxd->reg_base + IDXD_INTCAUSE_OFFSET);
if (!err)
return IRQ_HANDLED;
gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
if (gensts.state == IDXD_DEVICE_STATE_HALT) {
spin_lock_bh(&idxd->dev_lock);
if (gensts.reset_type == IDXD_DEVICE_RESET_SOFTWARE) {
rc = idxd_restart(idxd);
if (rc < 0)
dev_err(&idxd->pdev->dev,
"idxd restart failed, device halt.");
} else {
idxd_device_wqs_clear_state(idxd);
idxd->state = IDXD_DEV_HALTED;
dev_err(&idxd->pdev->dev,
"idxd halted, need %s.\n",
gensts.reset_type == IDXD_DEVICE_RESET_FLR ?
"FLR" : "system reset");
}
spin_unlock_bh(&idxd->dev_lock);
}
idxd_unmask_msix_vector(idxd, irq_entry->id);
return IRQ_HANDLED;
}
static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
int *processed)
{
struct idxd_desc *desc, *t;
struct llist_node *head;
int queued = 0;
head = llist_del_all(&irq_entry->pending_llist);
if (!head)
return 0;
llist_for_each_entry_safe(desc, t, head, llnode) {
if (desc->completion->status) {
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL);
idxd_free_desc(desc->wq, desc);
(*processed)++;
} else {
list_add_tail(&desc->list, &irq_entry->work_list);
queued++;
}
}
return queued;
}
static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
int *processed)
{
struct list_head *node, *next;
int queued = 0;
if (list_empty(&irq_entry->work_list))
return 0;
list_for_each_safe(node, next, &irq_entry->work_list) {
struct idxd_desc *desc =
container_of(node, struct idxd_desc, list);
if (desc->completion->status) {
list_del(&desc->list);
/* process and callback */
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL);
idxd_free_desc(desc->wq, desc);
(*processed)++;
} else {
queued++;
}
}
return queued;
}
irqreturn_t idxd_wq_thread(int irq, void *data)
{
struct idxd_irq_entry *irq_entry = data;
int rc, processed = 0, retry = 0;
/*
* There are two lists we are processing. The pending_llist is where
* the submitter adds all the submitted descriptors after sending them to
* the workqueue. It's a lockless singly linked list. The work_list
* is a regular Linux doubly linked list. We are in a scenario of
* multiple producers and a single consumer. The producers are all
* the kernel submitters of descriptors, and the consumer is the
* kernel irq handler thread for the msix vector when using threaded
* irq. To work within the restrictions of llist and remain lockless,
* we take the following steps:
* 1. Iterate through the work_list and process any completed
* descriptors. Delete the completed entries during iteration.
* 2. llist_del_all() from the pending list.
* 3. Iterate through the llist that was deleted from the pending list
* and process the completed entries.
* 4. If an entry is still waiting on hardware, list_add_tail() it to
* the work_list.
* 5. Repeat until no more descriptors.
*/
do {
rc = irq_process_work_list(irq_entry, &processed);
if (rc != 0) {
retry++;
continue;
}
rc = irq_process_pending_llist(irq_entry, &processed);
} while (rc != 0 && retry != 10);
idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id);
if (processed == 0)
return IRQ_NONE;
return IRQ_HANDLED;
}
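The block comment in idxd_wq_thread() above describes a multi-producer, single-consumer scheme built from a lockless llist (pending_llist) and an ordinary list_head (work_list). Below is a stripped-down sketch of that pattern; the demo_* names are invented for illustration and this is not the driver's code.
#include <linux/llist.h>
#include <linux/list.h>
struct demo_desc {
	bool done;			/* set when the descriptor completes */
	struct llist_node llnode;	/* producer side: lockless push */
	struct list_head list;		/* consumer side: private work list */
};
struct demo_irq_entry {
	struct llist_head pending_llist;
	struct list_head work_list;
};
/* Producers: submit from any context without taking a lock. */
static void demo_submit(struct demo_irq_entry *e, struct demo_desc *d)
{
	llist_add(&d->llnode, &e->pending_llist);
}
/* Single consumer (the threaded irq handler in the real driver). */
static void demo_drain(struct demo_irq_entry *e)
{
	struct demo_desc *d, *t;
	struct llist_node *head;
	/* Step 1: retire anything already parked on the work_list. */
	list_for_each_entry_safe(d, t, &e->work_list, list) {
		if (d->done)
			list_del(&d->list);	/* complete + free in real code */
	}
	/* Steps 2-4: detach the whole pending llist, then sort its entries. */
	head = llist_del_all(&e->pending_llist);
	llist_for_each_entry_safe(d, t, head, llnode) {
		if (!d->done)
			list_add_tail(&d->list, &e->work_list);
		/* else: complete + free in real code */
	}
}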
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
#ifndef _IDXD_REGISTERS_H_
#define _IDXD_REGISTERS_H_
/* PCI Config */
#define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25
#define IDXD_MMIO_BAR 0
#define IDXD_WQ_BAR 2
#define IDXD_PORTAL_SIZE 0x4000
/* MMIO Device BAR0 Registers */
#define IDXD_VER_OFFSET 0x00
#define IDXD_VER_MAJOR_MASK 0xf0
#define IDXD_VER_MINOR_MASK 0x0f
#define GET_IDXD_VER_MAJOR(x) (((x) & IDXD_VER_MAJOR_MASK) >> 4)
#define GET_IDXD_VER_MINOR(x) ((x) & IDXD_VER_MINOR_MASK)
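For illustration, a hedged fragment using these macros (the 0x12 reading is hypothetical; idxd is assumed to be a populated struct idxd_device): a version register value of 0x12 decodes to major 1, minor 2.
u32 ver = ioread32(idxd->reg_base + IDXD_VER_OFFSET);	/* e.g. 0x12 */
dev_dbg(&idxd->pdev->dev, "IDXD version %u.%u\n",
	GET_IDXD_VER_MAJOR(ver), GET_IDXD_VER_MINOR(ver));	/* "1.2" for 0x12 */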
union gen_cap_reg {
struct {
u64 block_on_fault:1;
u64 overlap_copy:1;
u64 cache_control_mem:1;
u64 cache_control_cache:1;
u64 rsvd:3;
u64 int_handle_req:1;
u64 dest_readback:1;
u64 drain_readback:1;
u64 rsvd2:6;
u64 max_xfer_shift:5;
u64 max_batch_shift:4;
u64 max_ims_mult:6;
u64 config_en:1;
u64 max_descs_per_engine:8;
u64 rsvd3:24;
};
u64 bits;
} __packed;
#define IDXD_GENCAP_OFFSET 0x10
union wq_cap_reg {
struct {
u64 total_wq_size:16;
u64 num_wqs:8;
u64 rsvd:24;
u64 shared_mode:1;
u64 dedicated_mode:1;
u64 rsvd2:1;
u64 priority:1;
u64 occupancy:1;
u64 occupancy_int:1;
u64 rsvd3:10;
};
u64 bits;
} __packed;
#define IDXD_WQCAP_OFFSET 0x20
union group_cap_reg {
struct {
u64 num_groups:8;
u64 total_tokens:8;
u64 token_en:1;
u64 token_limit:1;
u64 rsvd:46;
};
u64 bits;
} __packed;
#define IDXD_GRPCAP_OFFSET 0x30
union engine_cap_reg {
struct {
u64 num_engines:8;
u64 rsvd:56;
};
u64 bits;
} __packed;
#define IDXD_ENGCAP_OFFSET 0x38
#define IDXD_OPCAP_NOOP 0x0001
#define IDXD_OPCAP_BATCH 0x0002
#define IDXD_OPCAP_MEMMOVE 0x0008
struct opcap {
u64 bits[4];
};
#define IDXD_OPCAP_OFFSET 0x40
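The operation capability table is four u64 words (256 bits), with bit n indicating support for opcode n; the defines above cover the basic opcodes in the low word. A hedged fragment, assuming a struct opcap named opcap has already been read from IDXD_OPCAP_OFFSET:
bool has_batch   = opcap.bits[0] & IDXD_OPCAP_BATCH;	/* opcode 1 */
bool has_memmove = opcap.bits[0] & IDXD_OPCAP_MEMMOVE;	/* opcode 3 */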
#define IDXD_TABLE_OFFSET 0x60
union offsets_reg {
struct {
u64 grpcfg:16;
u64 wqcfg:16;
u64 msix_perm:16;
u64 ims:16;
u64 perfmon:16;
u64 rsvd:48;
};
u64 bits[2];
} __packed;
#define IDXD_GENCFG_OFFSET 0x80
union gencfg_reg {
struct {
u32 token_limit:8;
u32 rsvd:4;
u32 user_int_en:1;
u32 rsvd2:19;
};
u32 bits;
} __packed;
#define IDXD_GENCTRL_OFFSET 0x88
union genctrl_reg {
struct {
u32 softerr_int_en:1;
u32 rsvd:31;
};
u32 bits;
} __packed;
#define IDXD_GENSTATS_OFFSET 0x90
union gensts_reg {
struct {
u32 state:2;
u32 reset_type:2;
u32 rsvd:28;
};
u32 bits;
} __packed;
enum idxd_device_status_state {
IDXD_DEVICE_STATE_DISABLED = 0,
IDXD_DEVICE_STATE_ENABLED,
IDXD_DEVICE_STATE_DRAIN,
IDXD_DEVICE_STATE_HALT,
};
enum idxd_device_reset_type {
IDXD_DEVICE_RESET_SOFTWARE = 0,
IDXD_DEVICE_RESET_FLR,
IDXD_DEVICE_RESET_WARM,
IDXD_DEVICE_RESET_COLD,
};
#define IDXD_INTCAUSE_OFFSET 0x98
#define IDXD_INTC_ERR 0x01
#define IDXD_INTC_CMD 0x02
#define IDXD_INTC_OCCUPY 0x04
#define IDXD_INTC_PERFMON_OVFL 0x08
#define IDXD_CMD_OFFSET 0xa0
union idxd_command_reg {
struct {
u32 operand:20;
u32 cmd:5;
u32 rsvd:6;
u32 int_req:1;
};
u32 bits;
} __packed;
enum idxd_cmd {
IDXD_CMD_ENABLE_DEVICE = 1,
IDXD_CMD_DISABLE_DEVICE,
IDXD_CMD_DRAIN_ALL,
IDXD_CMD_ABORT_ALL,
IDXD_CMD_RESET_DEVICE,
IDXD_CMD_ENABLE_WQ,
IDXD_CMD_DISABLE_WQ,
IDXD_CMD_DRAIN_WQ,
IDXD_CMD_ABORT_WQ,
IDXD_CMD_RESET_WQ,
IDXD_CMD_DRAIN_PASID,
IDXD_CMD_ABORT_PASID,
IDXD_CMD_REQUEST_INT_HANDLE,
};
#define IDXD_CMDSTS_OFFSET 0xa8
union cmdsts_reg {
struct {
u8 err;
u16 result;
u8 rsvd:7;
u8 active:1;
};
u32 bits;
} __packed;
#define IDXD_CMDSTS_ACTIVE 0x80000000
enum idxd_cmdsts_err {
IDXD_CMDSTS_SUCCESS = 0,
IDXD_CMDSTS_INVAL_CMD,
IDXD_CMDSTS_INVAL_WQIDX,
IDXD_CMDSTS_HW_ERR,
/* enable device errors */
IDXD_CMDSTS_ERR_DEV_ENABLED = 0x10,
IDXD_CMDSTS_ERR_CONFIG,
IDXD_CMDSTS_ERR_BUSMASTER_EN,
IDXD_CMDSTS_ERR_PASID_INVAL,
IDXD_CMDSTS_ERR_WQ_SIZE_ERANGE,
IDXD_CMDSTS_ERR_GRP_CONFIG,
IDXD_CMDSTS_ERR_GRP_CONFIG2,
IDXD_CMDSTS_ERR_GRP_CONFIG3,
IDXD_CMDSTS_ERR_GRP_CONFIG4,
/* enable wq errors */
IDXD_CMDSTS_ERR_DEV_NOTEN = 0x20,
IDXD_CMDSTS_ERR_WQ_ENABLED,
IDXD_CMDSTS_ERR_WQ_SIZE,
IDXD_CMDSTS_ERR_WQ_PRIOR,
IDXD_CMDSTS_ERR_WQ_MODE,
IDXD_CMDSTS_ERR_BOF_EN,
IDXD_CMDSTS_ERR_PASID_EN,
IDXD_CMDSTS_ERR_MAX_BATCH_SIZE,
IDXD_CMDSTS_ERR_MAX_XFER_SIZE,
/* disable device errors */
IDXD_CMDSTS_ERR_DIS_DEV_EN = 0x31,
/* disable WQ, drain WQ, abort WQ, reset WQ */
IDXD_CMDSTS_ERR_DEV_NOT_EN,
/* request interrupt handle */
IDXD_CMDSTS_ERR_INVAL_INT_IDX = 0x41,
IDXD_CMDSTS_ERR_NO_HANDLE,
};
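Putting the command and status registers together: a command is built in union idxd_command_reg, written to IDXD_CMD_OFFSET, and its completion observed by polling IDXD_CMDSTS_OFFSET until the active bit clears, after which the err field reports one of the codes above. The fragment below is a simplified sketch assembled only from the definitions in this header, not the driver's actual command routine; locking, timeouts, and int_req handling are omitted.
static int sketch_issue_cmd(void __iomem *reg_base, enum idxd_cmd cmd_code,
			    u32 operand)
{
	union idxd_command_reg cmd = { .cmd = cmd_code, .operand = operand };
	union cmdsts_reg sts;
	iowrite32(cmd.bits, reg_base + IDXD_CMD_OFFSET);
	/* Wait for the device to drop the "command active" indication. */
	do {
		cpu_relax();
		sts.bits = ioread32(reg_base + IDXD_CMDSTS_OFFSET);
	} while (sts.bits & IDXD_CMDSTS_ACTIVE);
	return sts.err == IDXD_CMDSTS_SUCCESS ? 0 : -ENXIO;
}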
#define IDXD_SWERR_OFFSET 0xc0
#define IDXD_SWERR_VALID 0x00000001
#define IDXD_SWERR_OVERFLOW 0x00000002
#define IDXD_SWERR_ACK (IDXD_SWERR_VALID | IDXD_SWERR_OVERFLOW)
union sw_err_reg {
struct {
u64 valid:1;
u64 overflow:1;
u64 desc_valid:1;
u64 wq_idx_valid:1;
u64 batch:1;
u64 fault_rw:1;
u64 priv:1;
u64 rsvd:1;
u64 error:8;
u64 wq_idx:8;
u64 rsvd2:8;
u64 operation:8;
u64 pasid:20;
u64 rsvd3:4;
u64 batch_idx:16;
u64 rsvd4:16;
u64 invalid_flags:32;
u64 fault_addr;
u64 rsvd5;
};
u64 bits[4];
} __packed;
union msix_perm {
struct {
u32 rsvd:2;
u32 ignore:1;
u32 pasid_en:1;
u32 rsvd2:8;
u32 pasid:20;
};
u32 bits;
} __packed;
union group_flags {
struct {
u32 tc_a:3;
u32 tc_b:3;
u32 rsvd:1;
u32 use_token_limit:1;
u32 tokens_reserved:8;
u32 rsvd2:4;
u32 tokens_allowed:8;
u32 rsvd3:4;
};
u32 bits;
} __packed;
struct grpcfg {
u64 wqs[4];
u64 engines;
union group_flags flags;
} __packed;
union wqcfg {
struct {
/* bytes 0-3 */
u16 wq_size;
u16 rsvd;
/* bytes 4-7 */
u16 wq_thresh;
u16 rsvd1;
/* bytes 8-11 */
u32 mode:1; /* shared or dedicated */
u32 bof:1; /* block on fault */
u32 rsvd2:2;
u32 priority:4;
u32 pasid:20;
u32 pasid_en:1;
u32 priv:1;
u32 rsvd3:2;
/* bytes 12-15 */
u32 max_xfer_shift:5;
u32 max_batch_shift:4;
u32 rsvd4:23;
/* bytes 16-19 */
u16 occupancy_inth;
u16 occupancy_table_sel:1;
u16 rsvd5:15;
/* bytes 20-23 */
u16 occupancy_limit;
u16 occupancy_int_en:1;
u16 rsvd6:15;
/* bytes 24-27 */
u16 occupancy;
u16 occupancy_int:1;
u16 rsvd7:12;
u16 mode_support:1;
u16 wq_state:2;
/* bytes 28-31 */
u32 rsvd8;
};
u32 bits[8];
} __packed;
#endif
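union wqcfg is 32 bytes (eight u32 words). A hedged sketch of per-WQ addressing follows, assuming a stride of sizeof(union wqcfg) per work queue and that wqcfg_offset has already been derived from the IDXD_TABLE_OFFSET block; this illustrates the register layout above rather than reproducing the driver's config routine.
/* Sketch: write one WQ's shadow config out to the device, 32 bits at a time. */
static void sketch_write_wqcfg(void __iomem *reg_base, u32 wqcfg_offset,
			       int wq_id, union wqcfg *cfg)
{
	int i;
	for (i = 0; i < 8; i++) {
		u32 off = wqcfg_offset + wq_id * sizeof(union wqcfg) +
			  i * sizeof(u32);
		iowrite32(cfg->bits[i], reg_base + off);
	}
}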
@@ -430,9 +430,10 @@ static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
 	spin_lock_irqsave(&c->vc.lock, flags);
 	vchan_get_all_descriptors(&c->vc, &head);
-	vchan_dma_desc_free_list(&c->vc, &head);
 	spin_unlock_irqrestore(&c->vc.lock, flags);
+	vchan_dma_desc_free_list(&c->vc, &head);
 	return 0;
 }
...
@@ -15,6 +15,8 @@
 #include <linux/of.h>
 #include <linux/of_dma.h>
+#include "dmaengine.h"
 static LIST_HEAD(of_dma_list);
 static DEFINE_MUTEX(of_dma_lock);
...
@@ -674,10 +674,11 @@ static int owl_dma_terminate_all(struct dma_chan *chan)
 	}
 	vchan_get_all_descriptors(&vchan->vc, &head);
-	vchan_dma_desc_free_list(&vchan->vc, &head);
 	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+	vchan_dma_desc_free_list(&vchan->vc, &head);
 	return 0;
 }
...
@@ -155,9 +155,9 @@ static void sf_pdma_free_chan_resources(struct dma_chan *dchan)
 	kfree(chan->desc);
 	chan->desc = NULL;
 	vchan_get_all_descriptors(&chan->vchan, &head);
-	vchan_dma_desc_free_list(&chan->vchan, &head);
 	sf_pdma_disclaim_chan(chan);
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+	vchan_dma_desc_free_list(&chan->vchan, &head);
 }
 static size_t sf_pdma_desc_residue(struct sf_pdma_chan *chan,
@@ -220,8 +220,8 @@ static int sf_pdma_terminate_all(struct dma_chan *dchan)
 	chan->desc = NULL;
 	chan->xfer_err = false;
 	vchan_get_all_descriptors(&chan->vchan, &head);
-	vchan_dma_desc_free_list(&chan->vchan, &head);
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+	vchan_dma_desc_free_list(&chan->vchan, &head);
 	return 0;
 }
...
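All of the hunks above make the same change: descriptors are still collected under the virt-dma channel lock, but vchan_dma_desc_free_list() is now called only after the lock is released, so descriptor callbacks and frees no longer run with the spinlock held. A generic sketch of the resulting pattern (not any one driver's code; mydev is a placeholder, and drivers/dma/virt-dma.h is assumed to be included):
static int mydev_terminate_all(struct dma_chan *chan)
{
	struct virt_dma_chan *vc = to_virt_chan(chan);
	unsigned long flags;
	LIST_HEAD(head);
	spin_lock_irqsave(&vc->lock, flags);
	vchan_get_all_descriptors(vc, &head);	/* detach under the lock */
	spin_unlock_irqrestore(&vc->lock, flags);
	vchan_dma_desc_free_list(vc, &head);	/* free with the lock dropped */
	return 0;
}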
@@ -10,3 +10,4 @@ obj-$(CONFIG_ARCH_OMAP2PLUS) += omap_prm.o
 obj-$(CONFIG_WKUP_M3_IPC) += wkup_m3_ipc.o
 obj-$(CONFIG_TI_SCI_PM_DOMAINS) += ti_sci_pm_domains.o
 obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN) += ti_sci_inta_msi.o
+obj-$(CONFIG_TI_K3_RINGACC) += k3-ringacc.o