Commit 9322af3e authored by Linus Torvalds

Merge tag 'dmaengine-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
 "New support:

    - Qualcomm SDM670, SM6115 and SM6375 GPI controller support

    - Ingenic JZ4755 dmaengine support

    - Removal of iop-adma driver

  Updates:

   - Tegra support for dma-channel-mask

   - at_hdmac cleanup and virt-chan support for this driver"

* tag 'dmaengine-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (46 commits)
  dmaengine: Revert "dmaengine: remove s3c24xx driver"
  dmaengine: tegra: Add support for dma-channel-mask
  dt-bindings: dmaengine: Add dma-channel-mask to Tegra GPCDMA
  dmaengine: idxd: Remove linux/msi.h include
  dt-bindings: dmaengine: qcom: gpi: add compatible for SM6375
  dmaengine: idxd: Fix crc_val field for completion record
  dmaengine: at_hdmac: Convert driver to use virt-dma
  dmaengine: at_hdmac: Remove unused member of at_dma_chan
  dmaengine: at_hdmac: Rename "chan_common" to "dma_chan"
  dmaengine: at_hdmac: Rename "dma_common" to "dma_device"
  dmaengine: at_hdmac: Use bitfield access macros
  dmaengine: at_hdmac: Keep register definitions and structures private to at_hdmac.c
  dmaengine: at_hdmac: Set include entries in alphabetic order
  dmaengine: at_hdmac: Use pm_ptr()
  dmaengine: at_hdmac: Use devm_clk_get()
  dmaengine: at_hdmac: Use devm_platform_ioremap_resource
  dmaengine: at_hdmac: Use devm_kzalloc() and struct_size()
  dmaengine: at_hdmac: Introduce atc_get_llis_residue()
  dmaengine: at_hdmac: s/atc_get_bytes_left/atc_get_residue
  dmaengine: at_hdmac: Pass residue by address to avoid unnecessary implicit casts
  ...
parents 1b6a349a 25483ded
@@ -22,6 +22,7 @@ Date: Oct 25, 2019
KernelVersion:  5.6.0
Contact:        dmaengine@vger.kernel.org
Description:    The largest number of work descriptors in a batch.
+               It's not visible when the device does not support batch.

What:           /sys/bus/dsa/devices/dsa<m>/max_work_queues_size
Date:           Oct 25, 2019
@@ -49,6 +50,8 @@ Description: The total number of read buffers supported by this device.
                The read buffers represent resources within the DSA
                implementation, and these resources are allocated by engines to
                support operations. See DSA spec v1.2 9.2.4 Total Read Buffers.
+               It's not visible when the device does not support Read Buffer
+               allocation control.

What:           /sys/bus/dsa/devices/dsa<m>/max_transfer_size
Date:           Oct 25, 2019
@@ -122,6 +125,8 @@ Contact: dmaengine@vger.kernel.org
Description:    The maximum number of read buffers that may be in use at
                one time by operations that access low bandwidth memory in the
                device. See DSA spec v1.2 9.2.8 GENCFG on Global Read Buffer Limit.
+               It's not visible when the device does not support Read Buffer
+               allocation control.

What:           /sys/bus/dsa/devices/dsa<m>/cmd_status
Date:           Aug 28, 2020
@@ -205,6 +210,7 @@ KernelVersion: 5.10.0
Contact:        dmaengine@vger.kernel.org
Description:    The max batch size for this workqueue. Cannot exceed device
                max batch size. Configurable parameter.
+               It's not visible when the device does not support batch.

What:           /sys/bus/dsa/devices/wq<m>.<n>/ats_disable
Date:           Nov 13, 2020
@@ -250,6 +256,8 @@ KernelVersion: 5.17.0
Contact:        dmaengine@vger.kernel.org
Description:    Enable the use of global read buffer limit for the group. See DSA
                spec v1.2 9.2.18 GRPCFG Use Global Read Buffer Limit.
+               It's not visible when the device does not support Read Buffer
+               allocation control.

What:           /sys/bus/dsa/devices/group<m>.<n>/read_buffers_allowed
Date:           Dec 10, 2021
@@ -258,6 +266,8 @@ Contact: dmaengine@vger.kernel.org
Description:    Indicates max number of read buffers that may be in use at one time
                by all engines in the group. See DSA spec v1.2 9.2.18 GRPCFG Read
                Buffers Allowed.
+               It's not visible when the device does not support Read Buffer
+               allocation control.

What:           /sys/bus/dsa/devices/group<m>.<n>/read_buffers_reserved
Date:           Dec 10, 2021
@@ -266,6 +276,8 @@ Contact: dmaengine@vger.kernel.org
Description:    Indicates the number of Read Buffers reserved for the use of
                engines in the group. See DSA spec v1.2 9.2.18 GRPCFG Read Buffers
                Reserved.
+               It's not visible when the device does not support Read Buffer
+               allocation control.

What:           /sys/bus/dsa/devices/group<m>.<n>/desc_progress_limit
Date:           Sept 14, 2022
......
@@ -18,6 +18,7 @@ properties:
      - enum:
          - ingenic,jz4740-dma
          - ingenic,jz4725b-dma
+         - ingenic,jz4755-dma
          - ingenic,jz4760-dma
          - ingenic,jz4760-bdma
          - ingenic,jz4760-mdma
......
@@ -39,7 +39,7 @@ properties:
      Should contain all of the per-channel DMA interrupts in
      ascending order with respect to the DMA channel index.
    minItems: 1
-   maxItems: 31
+   maxItems: 32

  resets:
    maxItems: 1
@@ -52,6 +52,9 @@ properties:
  dma-coherent: true

+ dma-channel-mask:
+   maxItems: 1
+
required:
  - compatible
  - reg
@@ -60,6 +63,7 @@ required:
  - reset-names
  - "#dma-cells"
  - iommus
+ - dma-channel-mask

additionalProperties: false
@@ -108,5 +112,6 @@ examples:
        #dma-cells = <1>;
        iommus = <&smmu TEGRA186_SID_GPCDMA_0>;
        dma-coherent;
+       dma-channel-mask = <0xfffffffe>;
    };
... ...
@@ -18,14 +18,24 @@ allOf:
properties:
  compatible:
-   enum:
-     - qcom,sc7280-gpi-dma
-     - qcom,sdm845-gpi-dma
-     - qcom,sm6350-gpi-dma
-     - qcom,sm8150-gpi-dma
-     - qcom,sm8250-gpi-dma
-     - qcom,sm8350-gpi-dma
-     - qcom,sm8450-gpi-dma
+   oneOf:
+     - enum:
+         - qcom,sdm845-gpi-dma
+         - qcom,sm6350-gpi-dma
+     - items:
+         - enum:
+             - qcom,sc7280-gpi-dma
+             - qcom,sm6115-gpi-dma
+             - qcom,sm6375-gpi-dma
+             - qcom,sm8350-gpi-dma
+             - qcom,sm8450-gpi-dma
+         - const: qcom,sm6350-gpi-dma
+     - items:
+         - enum:
+             - qcom,sdm670-gpi-dma
+             - qcom,sm8150-gpi-dma
+             - qcom,sm8250-gpi-dma
+         - const: qcom,sdm845-gpi-dma

  reg:
    maxItems: 1
......
@@ -450,6 +450,7 @@ SERDEV
SLAVE DMA ENGINE
  devm_acpi_dma_controller_register()
+ devm_acpi_dma_controller_free()

SPI
  devm_spi_alloc_master()
......
@@ -10460,11 +10460,6 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/intel/
F: include/linux/intel-svm.h

-INTEL IOP-ADMA DMA DRIVER
-R: Dan Williams <dan.j.williams@intel.com>
-S: Odd fixes
-F: drivers/dma/iop-adma.c
-
INTEL IPU3 CSI-2 CIO2 DRIVER
M: Yong Zhi <yong.zhi@intel.com>
M: Sakari Ailus <sakari.ailus@linux.intel.com>
@@ -13629,7 +13624,6 @@ L: dmaengine@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/dma/atmel-dma.txt
F: drivers/dma/at_hdmac.c
-F: drivers/dma/at_hdmac_regs.h
F: drivers/dma/at_xdmac.c
F: include/dt-bindings/dma/at91.h
......
@@ -97,6 +97,7 @@ config AT_HDMAC
        tristate "Atmel AHB DMA support"
        depends on ARCH_AT91
        select DMA_ENGINE
+       select DMA_VIRTUAL_CHANNELS
        help
          Support the Atmel AHB DMA controller.

@@ -357,14 +358,6 @@ config INTEL_IOATDMA
          If unsure, say N.

-config INTEL_IOP_ADMA
-       tristate "Intel IOP32x ADMA support"
-       depends on ARCH_IOP32X || COMPILE_TEST
-       select DMA_ENGINE
-       select ASYNC_TX_ENABLE_CHANNEL_SWITCH
-       help
-         Enable support for the Intel(R) IOP Series RAID engines.
-
config K3_DMA
        tristate "Hisilicon K3 DMA support"
        depends on ARCH_HI3xxx || ARCH_HISI || COMPILE_TEST
......
@@ -44,7 +44,6 @@ obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_INTEL_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-y += idxd/
-obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_LPC18XX_DMAMUX) += lpc18xx-dmamux.o
obj-$(CONFIG_MILBEAUT_HDMAC) += milbeaut-hdmac.o
......
@@ -21,6 +21,12 @@
#define NCHANNELS_MAX   64
#define IRQ_NOUTPUTS    4

+/*
+ * For allocation purposes we split the cache
+ * memory into blocks of fixed size (given in bytes).
+ */
+#define SRAM_BLOCK      2048
+
#define RING_WRITE_SLOT         GENMASK(1, 0)
#define RING_READ_SLOT          GENMASK(5, 4)
#define RING_FULL               BIT(9)
@@ -36,6 +42,9 @@
#define REG_TX_STOP             0x0004
#define REG_RX_START            0x0008
#define REG_RX_STOP             0x000c
+#define REG_IMPRINT             0x0090
+#define REG_TX_SRAM_SIZE        0x0094
+#define REG_RX_SRAM_SIZE        0x0098
#define REG_CHAN_CTL(ch)        (0x8000 + (ch) * 0x200)
#define REG_CHAN_CTL_RST_RINGS  BIT(0)
@@ -53,7 +62,9 @@
#define BUS_WIDTH_FRAME_2_WORDS 0x10
#define BUS_WIDTH_FRAME_4_WORDS 0x20

-#define CHAN_BUFSIZE            0x8000
+#define REG_CHAN_SRAM_CARVEOUT(ch)      (0x8050 + (ch) * 0x200)
+#define CHAN_SRAM_CARVEOUT_SIZE         GENMASK(31, 16)
+#define CHAN_SRAM_CARVEOUT_BASE         GENMASK(15, 0)

#define REG_CHAN_FIFOCTL(ch)    (0x8054 + (ch) * 0x200)
#define CHAN_FIFOCTL_LIMIT      GENMASK(31, 16)
@@ -76,6 +87,8 @@ struct admac_chan {
        struct dma_chan chan;
        struct tasklet_struct tasklet;

+       u32 carveout;
+
        spinlock_t lock;
        struct admac_tx *current_tx;
        int nperiod_acks;
@@ -92,12 +105,24 @@ struct admac_chan {
        struct list_head to_free;
};

+struct admac_sram {
+       u32 size;
+       /*
+        * SRAM_CARVEOUT has 16-bit fields, so the SRAM cannot be larger than
+        * 64K and a 32-bit bitfield over 2K blocks covers it.
+        */
+       u32 allocated;
+};
+
struct admac_data {
        struct dma_device dma;
        struct device *dev;
        __iomem void *base;
        struct reset_control *rstc;

+       struct mutex cache_alloc_lock;
+       struct admac_sram txcache, rxcache;
+
        int irq;
        int irq_index;
        int nchannels;
@@ -118,6 +143,60 @@ struct admac_tx {
        struct list_head node;
};

+static int admac_alloc_sram_carveout(struct admac_data *ad,
+                                    enum dma_transfer_direction dir,
+                                    u32 *out)
+{
+       struct admac_sram *sram;
+       int i, ret = 0, nblocks;
+
+       if (dir == DMA_MEM_TO_DEV)
+               sram = &ad->txcache;
+       else
+               sram = &ad->rxcache;
+
+       mutex_lock(&ad->cache_alloc_lock);
+
+       nblocks = sram->size / SRAM_BLOCK;
+       for (i = 0; i < nblocks; i++)
+               if (!(sram->allocated & BIT(i)))
+                       break;
+
+       if (i < nblocks) {
+               *out = FIELD_PREP(CHAN_SRAM_CARVEOUT_BASE, i * SRAM_BLOCK) |
+                       FIELD_PREP(CHAN_SRAM_CARVEOUT_SIZE, SRAM_BLOCK);
+               sram->allocated |= BIT(i);
+       } else {
+               ret = -EBUSY;
+       }
+
+       mutex_unlock(&ad->cache_alloc_lock);
+
+       return ret;
+}
+
+static void admac_free_sram_carveout(struct admac_data *ad,
+                                    enum dma_transfer_direction dir,
+                                    u32 carveout)
+{
+       struct admac_sram *sram;
+       u32 base = FIELD_GET(CHAN_SRAM_CARVEOUT_BASE, carveout);
+       int i;
+
+       if (dir == DMA_MEM_TO_DEV)
+               sram = &ad->txcache;
+       else
+               sram = &ad->rxcache;
+
+       if (WARN_ON(base >= sram->size))
+               return;
+
+       mutex_lock(&ad->cache_alloc_lock);
+       i = base / SRAM_BLOCK;
+       sram->allocated &= ~BIT(i);
+       mutex_unlock(&ad->cache_alloc_lock);
+}
+
static void admac_modify(struct admac_data *ad, int reg, u32 mask, u32 val)
{
        void __iomem *addr = ad->base + reg;
@@ -466,15 +545,28 @@ static void admac_synchronize(struct dma_chan *chan)
static int admac_alloc_chan_resources(struct dma_chan *chan)
{
        struct admac_chan *adchan = to_admac_chan(chan);
+       struct admac_data *ad = adchan->host;
+       int ret;

        dma_cookie_init(&adchan->chan);
+       ret = admac_alloc_sram_carveout(ad, admac_chan_direction(adchan->no),
+                                       &adchan->carveout);
+       if (ret < 0)
+               return ret;
+
+       writel_relaxed(adchan->carveout,
+                      ad->base + REG_CHAN_SRAM_CARVEOUT(adchan->no));
        return 0;
}

static void admac_free_chan_resources(struct dma_chan *chan)
{
+       struct admac_chan *adchan = to_admac_chan(chan);
+
        admac_terminate_all(chan);
        admac_synchronize(chan);
+       admac_free_sram_carveout(adchan->host, admac_chan_direction(adchan->no),
+                                adchan->carveout);
}

static struct dma_chan *admac_dma_of_xlate(struct of_phandle_args *dma_spec,
@@ -712,6 +804,7 @@ static int admac_probe(struct platform_device *pdev)
        platform_set_drvdata(pdev, ad);
        ad->dev = &pdev->dev;
        ad->nchannels = nchannels;
+       mutex_init(&ad->cache_alloc_lock);

        /*
         * The controller has 4 IRQ outputs. Try them all until
@@ -801,6 +894,13 @@ static int admac_probe(struct platform_device *pdev)
                goto free_irq;
        }

+       ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
+       ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
+
+       dev_info(&pdev->dev, "Audio DMA Controller\n");
+       dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n",
+                readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size);
+
        return 0;

free_irq:
......
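The carveout word written to REG_CHAN_SRAM_CARVEOUT above packs the block's byte offset into bits 15:0 and its size into bits 31:16, per the GENMASK defines in the diff. Below is a minimal, illustrative sketch of that packing, assuming that register layout; the kernel's FIELD_PREP helpers are replaced with plain shifts so the example builds and runs in userspace.

/*
 * Illustrative only: userspace sketch of how a channel's SRAM carveout
 * word is packed (size in bits 31:16, base in bits 15:0), mirroring the
 * defines shown in the diff above.
 */
#include <stdint.h>
#include <stdio.h>

#define SRAM_BLOCK                2048
#define CHAN_SRAM_CARVEOUT_SIZE   0xffff0000u   /* GENMASK(31, 16) */
#define CHAN_SRAM_CARVEOUT_BASE   0x0000ffffu   /* GENMASK(15, 0) */

static uint32_t pack_carveout(unsigned int block_index)
{
        uint32_t base = block_index * SRAM_BLOCK;

        /* Equivalent of FIELD_PREP(SIZE, SRAM_BLOCK) | FIELD_PREP(BASE, base) */
        return ((uint32_t)SRAM_BLOCK << 16) | (base & CHAN_SRAM_CARVEOUT_BASE);
}

int main(void)
{
        /* Block 3 of a hypothetical 16 KiB cache: base 0x1800, size 0x800 */
        printf("carveout word = %#010x\n", pack_carveout(3));
        return 0;
}

Running this for block 3 prints 0x08001800, i.e. a 2 KiB carveout starting 6 KiB into the channel's cache SRAM.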
@@ -1038,6 +1038,13 @@ static const struct jz4780_dma_soc_data jz4725b_dma_soc_data = {
                 JZ_SOC_DATA_BREAK_LINKS,
};

+static const struct jz4780_dma_soc_data jz4755_dma_soc_data = {
+       .nb_channels = 4,
+       .transfer_ord_max = 5,
+       .flags = JZ_SOC_DATA_PER_CHAN_PM | JZ_SOC_DATA_NO_DCKES_DCKEC |
+                JZ_SOC_DATA_BREAK_LINKS,
+};
+
static const struct jz4780_dma_soc_data jz4760_dma_soc_data = {
        .nb_channels = 5,
        .transfer_ord_max = 6,
@@ -1101,6 +1108,7 @@ static const struct jz4780_dma_soc_data x1830_dma_soc_data = {
static const struct of_device_id jz4780_dma_dt_match[] = {
        { .compatible = "ingenic,jz4740-dma", .data = &jz4740_dma_soc_data },
        { .compatible = "ingenic,jz4725b-dma", .data = &jz4725b_dma_soc_data },
+       { .compatible = "ingenic,jz4755-dma", .data = &jz4755_dma_soc_data },
        { .compatible = "ingenic,jz4760-dma", .data = &jz4760_dma_soc_data },
        { .compatible = "ingenic,jz4760-mdma", .data = &jz4760_mdma_soc_data },
        { .compatible = "ingenic,jz4760-bdma", .data = &jz4760_bdma_soc_data },
......
@@ -600,7 +600,7 @@ static int idma64_probe(struct idma64_chip *chip)
        return 0;
}

-static int idma64_remove(struct idma64_chip *chip)
+static void idma64_remove(struct idma64_chip *chip)
{
        struct idma64 *idma64 = chip->idma64;
        unsigned short i;
@@ -618,8 +618,6 @@ static int idma64_remove(struct idma64_chip *chip)
                tasklet_kill(&idma64c->vchan.task);
        }
-
-       return 0;
}

/* ---------------------------------------------------------------------- */
@@ -664,7 +662,9 @@ static int idma64_platform_remove(struct platform_device *pdev)
{
        struct idma64_chip *chip = platform_get_drvdata(pdev);

-       return idma64_remove(chip);
+       idma64_remove(chip);
+
+       return 0;
}

static int __maybe_unused idma64_pm_suspend(struct device *dev)
......
@@ -7,7 +7,6 @@
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/dmaengine.h>
#include <linux/irq.h>
-#include <linux/msi.h>
#include <uapi/linux/idxd.h>
#include "../dmaengine.h"
#include "idxd.h"
......
@@ -528,6 +528,22 @@ static bool idxd_group_attr_progress_limit_invisible(struct attribute *attr,
               !idxd->hw.group_cap.progress_limit;
}

+static bool idxd_group_attr_read_buffers_invisible(struct attribute *attr,
+                                                  struct idxd_device *idxd)
+{
+       /*
+        * Intel IAA does not support Read Buffer allocation control,
+        * make these attributes invisible.
+        */
+       return (attr == &dev_attr_group_use_token_limit.attr ||
+               attr == &dev_attr_group_use_read_buffer_limit.attr ||
+               attr == &dev_attr_group_tokens_allowed.attr ||
+               attr == &dev_attr_group_read_buffers_allowed.attr ||
+               attr == &dev_attr_group_tokens_reserved.attr ||
+               attr == &dev_attr_group_read_buffers_reserved.attr) &&
+               idxd->data->type == IDXD_TYPE_IAX;
+}
+
static umode_t idxd_group_attr_visible(struct kobject *kobj,
                                       struct attribute *attr, int n)
{
@@ -538,6 +554,9 @@ static umode_t idxd_group_attr_visible(struct kobject *kobj,
        if (idxd_group_attr_progress_limit_invisible(attr, idxd))
                return 0;

+       if (idxd_group_attr_read_buffers_invisible(attr, idxd))
+               return 0;
+
        return attr->mode;
}

@@ -1233,6 +1252,14 @@ static bool idxd_wq_attr_op_config_invisible(struct attribute *attr,
               !idxd->hw.wq_cap.op_config;
}

+static bool idxd_wq_attr_max_batch_size_invisible(struct attribute *attr,
+                                                 struct idxd_device *idxd)
+{
+       /* Intel IAA does not support batch processing, make it invisible */
+       return attr == &dev_attr_wq_max_batch_size.attr &&
+              idxd->data->type == IDXD_TYPE_IAX;
+}
+
static umode_t idxd_wq_attr_visible(struct kobject *kobj,
                                    struct attribute *attr, int n)
{
@@ -1243,6 +1270,9 @@ static umode_t idxd_wq_attr_visible(struct kobject *kobj,
        if (idxd_wq_attr_op_config_invisible(attr, idxd))
                return 0;

+       if (idxd_wq_attr_max_batch_size_invisible(attr, idxd))
+               return 0;
+
        return attr->mode;
}

@@ -1533,6 +1563,43 @@ static ssize_t cmd_status_store(struct device *dev, struct device_attribute *att
}
static DEVICE_ATTR_RW(cmd_status);

+static bool idxd_device_attr_max_batch_size_invisible(struct attribute *attr,
+                                                     struct idxd_device *idxd)
+{
+       /* Intel IAA does not support batch processing, make it invisible */
+       return attr == &dev_attr_max_batch_size.attr &&
+              idxd->data->type == IDXD_TYPE_IAX;
+}
+
+static bool idxd_device_attr_read_buffers_invisible(struct attribute *attr,
+                                                    struct idxd_device *idxd)
+{
+       /*
+        * Intel IAA does not support Read Buffer allocation control,
+        * make these attributes invisible.
+        */
+       return (attr == &dev_attr_max_tokens.attr ||
+               attr == &dev_attr_max_read_buffers.attr ||
+               attr == &dev_attr_token_limit.attr ||
+               attr == &dev_attr_read_buffer_limit.attr) &&
+               idxd->data->type == IDXD_TYPE_IAX;
+}
+
+static umode_t idxd_device_attr_visible(struct kobject *kobj,
+                                        struct attribute *attr, int n)
+{
+       struct device *dev = container_of(kobj, struct device, kobj);
+       struct idxd_device *idxd = confdev_to_idxd(dev);
+
+       if (idxd_device_attr_max_batch_size_invisible(attr, idxd))
+               return 0;
+
+       if (idxd_device_attr_read_buffers_invisible(attr, idxd))
+               return 0;
+
+       return attr->mode;
+}
+
static struct attribute *idxd_device_attributes[] = {
        &dev_attr_version.attr,
        &dev_attr_max_groups.attr,
@@ -1560,6 +1627,7 @@ static struct attribute *idxd_device_attributes[] = {
static const struct attribute_group idxd_device_attribute_group = {
        .attrs = idxd_device_attributes,
+       .is_visible = idxd_device_attr_visible,
};

static const struct attribute_group *idxd_attribute_groups[] = {
......
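The idxd changes above (and the matching sysfs ABI notes earlier in this merge) all lean on the same sysfs facility: an attribute group's .is_visible callback, which the core runs once per attribute at registration time and which hides an attribute entirely when it returns 0. A minimal sketch of that pattern outside idxd follows; the example_* names and the capability flag are made up for illustration and are not part of the patch.

/*
 * Minimal sketch (not from the patch): hide a device attribute behind a
 * capability check using an attribute_group's .is_visible callback.
 */
#include <linux/device.h>
#include <linux/sysfs.h>

static bool example_supports_batch;     /* would come from device capabilities */

static ssize_t max_batch_size_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
{
        return sysfs_emit(buf, "%u\n", 1024);
}
static DEVICE_ATTR_RO(max_batch_size);

static umode_t example_attr_visible(struct kobject *kobj,
                                    struct attribute *attr, int n)
{
        /* Returning 0 hides the attribute; attr->mode keeps it as declared */
        if (attr == &dev_attr_max_batch_size.attr && !example_supports_batch)
                return 0;

        return attr->mode;
}

static struct attribute *example_attrs[] = {
        &dev_attr_max_batch_size.attr,
        NULL,
};

static const struct attribute_group example_group = {
        .attrs = example_attrs,
        .is_visible = example_attr_visible,
};

Because the attribute is never created when the capability is absent, no per-attribute guards are needed in the show/store handlers, which is exactly how the idxd patches make the batch and Read Buffer knobs disappear on Intel IAA.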
@@ -33,7 +33,7 @@ MODULE_PARM_DESC(completion_timeout,
static int idle_timeout = 2000;
module_param(idle_timeout, int, 0644);
MODULE_PARM_DESC(idle_timeout,
-               "set ioat idel timeout [msec] (default 2000 [msec])");
+               "set ioat idle timeout [msec] (default 2000 [msec])");
#define IDLE_TIMEOUT msecs_to_jiffies(idle_timeout)
#define COMPLETION_TIMEOUT msecs_to_jiffies(completion_timeout)
......
@@ -2286,9 +2286,14 @@ static int gpi_probe(struct platform_device *pdev)
}

static const struct of_device_id gpi_of_match[] = {
-       { .compatible = "qcom,sc7280-gpi-dma", .data = (void *)0x10000 },
        { .compatible = "qcom,sdm845-gpi-dma", .data = (void *)0x0 },
        { .compatible = "qcom,sm6350-gpi-dma", .data = (void *)0x10000 },
+       /*
+        * Do not grow the list for compatible devices. Instead use
+        * qcom,sdm845-gpi-dma (for ee_offset = 0x0) or qcom,sm6350-gpi-dma
+        * (for ee_offset = 0x10000).
+        */
+       { .compatible = "qcom,sc7280-gpi-dma", .data = (void *)0x10000 },
        { .compatible = "qcom,sm8150-gpi-dma", .data = (void *)0x0 },
        { .compatible = "qcom,sm8250-gpi-dma", .data = (void *)0x0 },
        { .compatible = "qcom,sm8350-gpi-dma", .data = (void *)0x10000 },
......
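The comment added to gpi_of_match above works together with the binding rework earlier in this merge: a newly supported SoC lists its own compatible plus qcom,sdm845-gpi-dma or qcom,sm6350-gpi-dma as a fallback, so the OF core matches one of the existing table entries and the driver picks up the right ee_offset from its match data without the table growing. The sketch below shows that general pattern under stated assumptions; the example_* names are illustrative and this is not the actual gpi.c probe code.

/*
 * Hedged sketch of compatible-fallback matching. A DT node such as
 *   compatible = "qcom,sm6375-gpi-dma", "qcom,sm6350-gpi-dma";
 * matches the "qcom,sm6350-gpi-dma" entry below, whose .data already
 * carries the correct ee_offset.
 */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

static const struct of_device_id example_gpi_match[] = {
        { .compatible = "qcom,sdm845-gpi-dma", .data = (void *)0x0 },
        { .compatible = "qcom,sm6350-gpi-dma", .data = (void *)0x10000 },
        { }
};
MODULE_DEVICE_TABLE(of, example_gpi_match);

static int example_gpi_probe(struct platform_device *pdev)
{
        /* Resolves via the fallback compatible for SoCs not listed above */
        unsigned long ee_offset = (uintptr_t)of_device_get_match_data(&pdev->dev);

        dev_info(&pdev->dev, "ee_offset = %#lx\n", ee_offset);
        return 0;
}

static struct platform_driver example_gpi_driver = {
        .driver = {
                .name = "example-gpi",
                .of_match_table = example_gpi_match,
        },
        .probe = example_gpi_probe,
};
module_platform_driver(example_gpi_driver);
MODULE_LICENSE("GPL");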
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Renesas SuperH DMA Engine support
*
* Copyright (C) 2013 Renesas Electronics, Inc.
*/
#ifndef SHDMA_ARM_H
#define SHDMA_ARM_H
#include "shdma.h"
/* Transmit sizes and respective CHCR register values */
enum {
        XMIT_SZ_8BIT = 0,
        XMIT_SZ_16BIT = 1,
        XMIT_SZ_32BIT = 2,
        XMIT_SZ_64BIT = 7,
        XMIT_SZ_128BIT = 3,
        XMIT_SZ_256BIT = 4,
        XMIT_SZ_512BIT = 5,
};

/* log2(size / 8) - used to calculate number of transfers */
#define SH_DMAE_TS_SHIFT {              \
        [XMIT_SZ_8BIT]          = 0,    \
        [XMIT_SZ_16BIT]         = 1,    \
        [XMIT_SZ_32BIT]         = 2,    \
        [XMIT_SZ_64BIT]         = 3,    \
        [XMIT_SZ_128BIT]        = 4,    \
        [XMIT_SZ_256BIT]        = 5,    \
        [XMIT_SZ_512BIT]        = 6,    \
}
#define TS_LOW_BIT 0x3 /* --xx */
#define TS_HI_BIT 0xc /* xx-- */
#define TS_LOW_SHIFT (3)
#define TS_HI_SHIFT (20 - 2) /* 2 bits for shifted low TS */
#define TS_INDEX2VAL(i) \
((((i) & TS_LOW_BIT) << TS_LOW_SHIFT) |\
(((i) & TS_HI_BIT) << TS_HI_SHIFT))
#define CHCR_TX(xmit_sz) (DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL((xmit_sz)))
#define CHCR_RX(xmit_sz) (DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL((xmit_sz)))
#endif
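As a standalone illustration of the TS_INDEX2VAL() packing in the header above (not part of this pull), the snippet below recomputes the CHCR TS field for two of the transmit-size indices: the low two bits of the index land at register bits 4:3 and the high two bits at bits 21:20. The macros are copied from the header so the example compiles on its own.

/* Illustrative userspace recomputation of TS_INDEX2VAL() */
#include <stdio.h>

#define TS_LOW_BIT      0x3
#define TS_HI_BIT       0xc
#define TS_LOW_SHIFT    3
#define TS_HI_SHIFT     (20 - 2)

#define TS_INDEX2VAL(i) ((((i) & TS_LOW_BIT) << TS_LOW_SHIFT) | \
                         (((i) & TS_HI_BIT) << TS_HI_SHIFT))

int main(void)
{
        /* XMIT_SZ_32BIT = 2 -> 0x10, XMIT_SZ_64BIT = 7 -> 0x100018 */
        printf("32-bit: %#x\n", TS_INDEX2VAL(2));
        printf("64-bit: %#x\n", TS_INDEX2VAL(7));
        return 0;
}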
@@ -161,7 +161,10 @@
#define TEGRA_GPCDMA_BURST_COMPLETION_TIMEOUT   5000 /* 5 msec */

/* Channel base address offset from GPCDMA base address */
-#define TEGRA_GPCDMA_CHANNEL_BASE_ADD_OFFSET    0x20000
+#define TEGRA_GPCDMA_CHANNEL_BASE_ADDR_OFFSET   0x10000
+
+/* Default channel mask reserving channel0 */
+#define TEGRA_GPCDMA_DEFAULT_CHANNEL_MASK       0xfffffffe

struct tegra_dma;
struct tegra_dma_channel;
@@ -246,6 +249,7 @@ struct tegra_dma {
        const struct tegra_dma_chip_data *chip_data;
        unsigned long sid_m2d_reserved;
        unsigned long sid_d2m_reserved;
+       u32 chan_mask;
        void __iomem *base_addr;
        struct device *dev;
        struct dma_device dma_dev;
@@ -1288,7 +1292,7 @@ static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
}

static const struct tegra_dma_chip_data tegra186_dma_chip_data = {
-       .nr_channels = 31,
+       .nr_channels = 32,
        .channel_reg_size = SZ_64K,
        .max_dma_count = SZ_1G,
        .hw_support_pause = false,
@@ -1296,7 +1300,7 @@ static const struct tegra_dma_chip_data tegra186_dma_chip_data = {
};

static const struct tegra_dma_chip_data tegra194_dma_chip_data = {
-       .nr_channels = 31,
+       .nr_channels = 32,
        .channel_reg_size = SZ_64K,
        .max_dma_count = SZ_1G,
        .hw_support_pause = true,
@@ -1304,7 +1308,7 @@ static const struct tegra_dma_chip_data tegra194_dma_chip_data = {
};

static const struct tegra_dma_chip_data tegra234_dma_chip_data = {
-       .nr_channels = 31,
+       .nr_channels = 32,
        .channel_reg_size = SZ_64K,
        .max_dma_count = SZ_1G,
        .hw_support_pause = true,
@@ -1380,15 +1384,28 @@ static int tegra_dma_probe(struct platform_device *pdev)
        }
        stream_id = iommu_spec->ids[0] & 0xffff;

+       ret = device_property_read_u32(&pdev->dev, "dma-channel-mask",
+                                      &tdma->chan_mask);
+       if (ret) {
+               dev_warn(&pdev->dev,
+                        "Missing dma-channel-mask property, using default channel mask %#x\n",
+                        TEGRA_GPCDMA_DEFAULT_CHANNEL_MASK);
+               tdma->chan_mask = TEGRA_GPCDMA_DEFAULT_CHANNEL_MASK;
+       }
+
        INIT_LIST_HEAD(&tdma->dma_dev.channels);
        for (i = 0; i < cdata->nr_channels; i++) {
                struct tegra_dma_channel *tdc = &tdma->channels[i];

+               /* Check for channel mask */
+               if (!(tdma->chan_mask & BIT(i)))
+                       continue;
+
                tdc->irq = platform_get_irq(pdev, i);
                if (tdc->irq < 0)
                        return tdc->irq;

-               tdc->chan_base_offset = TEGRA_GPCDMA_CHANNEL_BASE_ADD_OFFSET +
+               tdc->chan_base_offset = TEGRA_GPCDMA_CHANNEL_BASE_ADDR_OFFSET +
                                        i * cdata->channel_reg_size;
                snprintf(tdc->name, sizeof(tdc->name), "gpcdma.%d", i);
                tdc->tdma = tdma;
@@ -1449,8 +1466,8 @@ static int tegra_dma_probe(struct platform_device *pdev)
                return ret;
        }

-       dev_info(&pdev->dev, "GPC DMA driver register %d channels\n",
-                cdata->nr_channels);
+       dev_info(&pdev->dev, "GPC DMA driver register %lu channels\n",
+                hweight_long(tdma->chan_mask));

        return 0;
}
@@ -1473,6 +1490,9 @@ static int __maybe_unused tegra_dma_pm_suspend(struct device *dev)
        for (i = 0; i < tdma->chip_data->nr_channels; i++) {
                struct tegra_dma_channel *tdc = &tdma->channels[i];

+               if (!(tdma->chan_mask & BIT(i)))
+                       continue;
+
                if (tdc->dma_desc) {
                        dev_err(tdma->dev, "channel %u busy\n", i);
                        return -EBUSY;
@@ -1492,6 +1512,9 @@ static int __maybe_unused tegra_dma_pm_resume(struct device *dev)
        for (i = 0; i < tdma->chip_data->nr_channels; i++) {
                struct tegra_dma_channel *tdc = &tdma->channels[i];

+               if (!(tdma->chan_mask & BIT(i)))
+                       continue;
+
                tegra_dma_program_sid(tdc, tdc->stream_id);
        }
......
@@ -35,7 +35,7 @@ config DMA_OMAP
          DMA engine is found on OMAP and DRA7xx parts.

config TI_K3_UDMA
-       bool "Texas Instruments UDMA support"
+       tristate "Texas Instruments UDMA support"
        depends on ARCH_K3
        depends on TI_SCI_PROTOCOL
        depends on TI_SCI_INTA_IRQCHIP
@@ -48,7 +48,7 @@ config TI_K3_UDMA
          DMA engine is used in AM65x and j721e.

config TI_K3_UDMA_GLUE_LAYER
-       bool "Texas Instruments UDMA Glue layer for non DMAengine users"
+       tristate "Texas Instruments UDMA Glue layer for non DMAengine users"
        depends on ARCH_K3
        depends on TI_K3_UDMA
        help
@@ -56,7 +56,8 @@ config TI_K3_UDMA_GLUE_LAYER
          If unsure, say N.

config TI_K3_PSIL
-       bool
+       tristate
+       default TI_K3_UDMA

config TI_DMA_CROSSBAR
        bool
@@ -4,11 +4,12 @@ obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_TI_K3_UDMA) += k3-udma.o
obj-$(CONFIG_TI_K3_UDMA_GLUE_LAYER) += k3-udma-glue.o
-obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o \
+k3-psil-lib-objs := k3-psil.o \
                            k3-psil-am654.o \
                            k3-psil-j721e.o \
                            k3-psil-j7200.o \
                            k3-psil-am64.o \
                            k3-psil-j721s2.o \
                            k3-psil-am62.o
+obj-$(CONFIG_TI_K3_PSIL) += k3-psil-lib.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
@@ -5,6 +5,7 @@
 */

#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/mutex.h>
@@ -101,3 +102,4 @@ int psil_set_new_ep_config(struct device *dev, const char *name,
        return 0;
}
EXPORT_SYMBOL_GPL(psil_set_new_ep_config);
+MODULE_LICENSE("GPL v2");
@@ -6,6 +6,7 @@
 *
 */

+#include <linux/module.h>
#include <linux/atomic.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
@@ -1436,4 +1437,6 @@ static int __init k3_udma_glue_class_init(void)
{
        return class_register(&k3_udma_glue_devclass);
}
-arch_initcall(k3_udma_glue_class_init);
+
+module_init(k3_udma_glue_class_init);
+MODULE_LICENSE("GPL v2");
@@ -5,6 +5,7 @@
 */

#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
@@ -4335,18 +4336,10 @@ static const struct of_device_id udma_of_match[] = {
                .compatible = "ti,j721e-navss-mcu-udmap",
                .data = &j721e_mcu_data,
        },
-       { /* Sentinel */ },
-};
-
-static const struct of_device_id bcdma_of_match[] = {
        {
                .compatible = "ti,am64-dmss-bcdma",
                .data = &am64_bcdma_data,
        },
-       { /* Sentinel */ },
-};
-
-static const struct of_device_id pktdma_of_match[] = {
        {
                .compatible = "ti,am64-dmss-pktdma",
                .data = &am64_pktdma_data,
@@ -5271,15 +5264,10 @@ static int udma_probe(struct platform_device *pdev)
                return -ENOMEM;

        match = of_match_node(udma_of_match, dev->of_node);
-       if (!match)
-               match = of_match_node(bcdma_of_match, dev->of_node);
-       if (!match) {
-               match = of_match_node(pktdma_of_match, dev->of_node);
        if (!match) {
                dev_err(dev, "No compatible match found\n");
                return -ENODEV;
        }
-       }

        ud->match_data = match->data;
        soc = soc_device_match(k3_soc_devices);
@@ -5511,27 +5499,9 @@ static struct platform_driver udma_driver = {
        },
        .probe = udma_probe,
};
-builtin_platform_driver(udma_driver);

-static struct platform_driver bcdma_driver = {
-       .driver = {
-               .name = "ti-bcdma",
-               .of_match_table = bcdma_of_match,
-               .suppress_bind_attrs = true,
-       },
-       .probe = udma_probe,
-};
-builtin_platform_driver(bcdma_driver);
-
-static struct platform_driver pktdma_driver = {
-       .driver = {
-               .name = "ti-pktdma",
-               .of_match_table = pktdma_of_match,
-               .suppress_bind_attrs = true,
-       },
-       .probe = udma_probe,
-};
-builtin_platform_driver(pktdma_driver);
+module_platform_driver(udma_driver);
+MODULE_LICENSE("GPL v2");

/* Private interfaces to UDMA */
#include "k3-udma-private.c"
@@ -1659,6 +1659,8 @@ static void xilinx_dma_issue_pending(struct dma_chan *dchan)
 * xilinx_dma_device_config - Configure the DMA channel
 * @dchan: DMA channel
 * @config: channel configuration
+ *
+ * Return: 0 always.
 */
static int xilinx_dma_device_config(struct dma_chan *dchan,
                                    struct dma_slave_config *config)
@@ -2924,7 +2926,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 * @xdev: Driver specific device structure
 * @node: Device node
 *
- * Return: 0 always.
+ * Return: '0' on success and failure value on error.
 */
static int xilinx_dma_child_probe(struct xilinx_dma_device *xdev,
                                  struct device_node *node)
......
@@ -730,6 +730,7 @@ struct irq_domain *of_msi_get_domain(struct device *dev,
        return NULL;
}
+EXPORT_SYMBOL_GPL(of_msi_get_domain);

/**
 * of_msi_configure - Set the msi_domain field of a device
......
@@ -295,7 +295,7 @@ struct dsa_completion_record {
        };
        uint32_t delta_rec_size;
-       uint32_t crc_val;
+       uint64_t crc_val;
        /* DIF check & strip */
        struct {
......