- 25 Aug, 2013 8 commits
-
-
Daniel Mack authored
Provide a callback to prepare cyclic DMA transfers. This is for instance needed for audio channel transport. Signed-off-by: Daniel Mack <zonque@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
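For context, a minimal client-side sketch of how a driver would consume such a cyclic transfer through the generic dmaengine API (function and parameter names here are illustrative, not taken from the driver; the channel is assumed to be already requested and slave-configured):

	#include <linux/dmaengine.h>

	/* Split a DMA-mapped ring buffer "buf" into equal periods and have
	 * "period_done" called after each period completes. */
	static int start_cyclic_audio_dma(struct dma_chan *chan, dma_addr_t buf,
					  size_t buf_len, size_t period_len,
					  void (*period_done)(void *), void *arg)
	{
		struct dma_async_tx_descriptor *desc;

		desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
						 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
		if (!desc)
			return -ENOMEM;

		desc->callback = period_done;
		desc->callback_param = arg;

		dmaengine_submit(desc);
		dma_async_issue_pending(chan);
		return 0;
	}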
-
Daniel Mack authored
In order to fully support multiple transactions per channel, we need to ensure we get an interrupt for each completed transaction. That flag bit is also our only way to tell at which descriptor a transaction ends. So, remove the manual clearing of that bit, and then inline the only remaining command that is left in append_pending_queue() for better readability. Signed-off-by: Daniel Mack <zonque@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
Currently, when an interrupt has occurred for a channel, the tasklet worker code will only look at the very last entry in the running list, complete its cookie, and then dispose of the entire running chain. Hence, the first transaction's cookie will never complete. In fact, the interrupt we should handle is the one related to the first descriptor in the chain that has the ENDIRQEN bit set, while the current code completes the second transaction, which is in fact still running. As a result, the driver can't currently handle multiple transactions on one channel, and it's likely that no drivers exist that rely on this feature. Fix this by walking the running chain and looking for the first descriptor that has the interrupt-enable bit set. Only queue descriptors up to that point for completion handling, while leaving the rest intact. Also, only make the channel idle if the list is completely empty after such a cycle. Signed-off-by: Daniel Mack <zonque@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
In the case of a big-endian CPU, we have to convert either all fields in the structure or leave this job to ACPICA. The second choice seems the best. So, let's remove the ugly conversion that is not fully comprehensive anyway. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
ACPI DMA provides a managed function to register the slave DMA controller in the internal container. This patch announces that function in the corresponding documentation file. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rob Landley <rob@landley.net> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Dan Carpenter authored
If "num_disabled" is equal to STEDMA40_MAX_PHYS (32) then we would write one space beyond the end of the pdata->disable_channels[] array. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Fabio Estevam authored
When CONFIG_ARM_LPAE=y the following build warnings are generated:
drivers/dma/ste_dma40.c:3228:2: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3593:5: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]
According to Documentation/printk-formats.txt, '%pa' can be used to properly print 'resource_size_t'. Also, for printing a memory region, '%pr' is more convenient. Reported-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Acked-by: Kevin Hilman <khilman@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
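As a small illustration of the two format specifiers mentioned above (the resource lookup is generic boilerplate, not code from ste_dma40; error handling omitted):

	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

	/* %pa prints a resource_size_t/phys_addr_t -- note it takes the
	 * address of the variable; %pr prints the whole struct resource. */
	dev_info(&pdev->dev, "base address %pa\n", &res->start);
	dev_info(&pdev->dev, "memory region %pr\n", res);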
-
Jingoo Han authored
The driver core clears the driver data to NULL after device_release or on probe failure. Thus, there is no need to manually clear the device driver data to NULL. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
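A sketch of what this means in a remove callback; the foo_* names are hypothetical:

	static int foo_remove(struct platform_device *pdev)
	{
		struct foo_priv *priv = platform_get_drvdata(pdev);

		foo_stop(priv);

		/* No platform_set_drvdata(pdev, NULL) needed here: the driver
		 * core clears drvdata after ->remove() and on probe failure. */
		return 0;
	}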
-
- 19 Aug, 2013 2 commits
-
-
Shawn Guo authored
With all mxs-dma clients moved to use generic DMA helper, the code left from generic DMA binding conversion can be removed now. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Huang Shijie authored
After the patch: "2ccaef05 dma: imx-sdma: make channel0 operations atomic", the "done" completion is not used any more. Just remove it. Signed-off-by: Huang Shijie <b32955@freescale.com> Acked-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 14 Aug, 2013 10 commits
-
-
Julia Lawall authored
Remove unneeded error handling on the result of a call to platform_get_resource when the value is passed to devm_ioremap_resource. A simplified version of the semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@

- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  ... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
  ... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  e = devm_ioremap_resource(e1, res);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
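In C, the resulting pattern looks roughly like this (a generic probe fragment, not code from any particular driver):

	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	/* No NULL check needed: devm_ioremap_resource() validates the
	 * resource itself and returns an ERR_PTR() value on failure. */
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);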
-
Daniel Mack authored
The PXA DMA controller has a DALGN register which allows for byte-aligned DMA transfers. Use it in case any of the transfer descriptors is not aligned to a mask of ~0x7. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
The DMA_SLAVE is currently set twice. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
That helps check the provided runtime information. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
This patch makes the mmp_pdma controller able to provide DMA resources in DT environments by providing a DMA xlate function. of_dma_simple_xlate() isn't used here, because it fails to handle multiple different DMA engines or several instances of the same controller. Instead, a private implementation is provided that makes use of the newly introduced dma_get_slave_channel() call. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
PXA peripherals need to obtain specific DMA request ids which will eventually be stored in the DRCMR register. Currently, clients are expected to store that number inside the slave config block as slave_id, which is unfortunately incompatible with the way DMA resources are handled in DT environments. This patch adds a filter function which stores the filter parameter passed in by of-dma.c into the channel's drcmr register. For backward compatibility, cfg->slave_id is still used if set to a non-zero value. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
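A sketch of the filter-function pattern being described; struct pdma_chan and its drcmr field are illustrative stand-ins, not the driver's actual definitions:

	struct pdma_chan {
		struct dma_chan chan;
		u32 drcmr;		/* request number later written to DRCMR */
	};

	static bool pdma_filter_fn(struct dma_chan *chan, void *param)
	{
		struct pdma_chan *pchan = container_of(chan, struct pdma_chan, chan);

		/* of-dma passes the request line from the DT "dmas" specifier;
		 * a real filter would also check that the channel belongs to
		 * this controller before accepting it. */
		pchan->drcmr = *(u32 *)param;
		return true;
	}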
-
Daniel Mack authored
There's no reason for limiting the maximum transfer length to 0x1000. Take the actual bit mask instead; the PDMA is able to transfer chunks of up to SZ_8K - 1. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
As suggested by Ezequiel García, release the spinlock at the end of the function only, and use a goto for the control flow. Just a minor cleanup. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Daniel Mack authored
The exact same calculation is done twice, so let's factor it out to a macro. Signed-off-by: Daniel Mack <zonque@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Vinod Koul authored
-
- 13 Aug, 2013 7 commits
-
-
Chanho Park authored
This patch adds __pl330_giveback_descs, which gives back descriptors when descriptor allocation fails. It is required to eliminate duplication in pl330_prep_dma_sg, which will be added later. Signed-off-by: Chanho Park <chanho61.park@samsung.com> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Barry Song authored
This patch adds PM ops entries to the sirf-dma driver, so that it can support suspend/resume, hibernation and runtime PM. While suspending, sirf-dma will lose all of its registers, so we save them at suspend and restore them at resume for active channels. Signed-off-by: Barry Song <Baohua.Song@csr.com> Signed-off-by: Rongjun Ying <Rongjun.Ying@csr.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
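A hedged sketch of the wiring involved (callback names and bodies are placeholders; SET_SYSTEM_SLEEP_PM_OPS also covers the hibernation callbacks, and sharing the same callbacks between system sleep and runtime PM is only for brevity here):

	static int sirf_dma_pm_suspend(struct device *dev)
	{
		/* ... save the DMA registers of every active channel ... */
		return 0;
	}

	static int sirf_dma_pm_resume(struct device *dev)
	{
		/* ... restore the saved registers of every active channel ... */
		return 0;
	}

	static const struct dev_pm_ops sirf_dma_pm_ops = {
		SET_SYSTEM_SLEEP_PM_OPS(sirf_dma_pm_suspend, sirf_dma_pm_resume)
		SET_RUNTIME_PM_OPS(sirf_dma_pm_suspend, sirf_dma_pm_resume, NULL)
	};

	/* hooked up via the platform driver: .driver.pm = &sirf_dma_pm_ops */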
-
Jingoo Han authored
Use the wrapper function for retrieving the platform data instead of accessing dev->platform_data directly. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
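A one-line illustration of the change (the platform-data type is hypothetical):

	struct foo_dma_platform_data *pdata;

	/* instead of: pdata = pdev->dev.platform_data; */
	pdata = dev_get_platdata(&pdev->dev);
	if (!pdata)
		return -ENODEV;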
-
Jingoo Han authored
sirfsoc_dma_prep_cyclic() returns a pointer, thus NULL should be used instead of 0 in order to fix the following sparse warning: drivers/dma/sirf-dma.c:598:24: warning: Using plain integer as NULL pointer Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Jingoo Han authored
%p is used, thus NULL should be used instead of 0 in order to fix the following sparse warning: drivers/dma/mv_xor.c:648:9: warning: Using plain integer as NULL pointer Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Jingoo Han authored
mmp_pdma_alloc_descriptor() is used only in this file. Fix the following sparse warning: drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static? Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Zhangfei Gao authored
As suggested by Arnd, add a dma_get_slave_channel interface. A DMA host driver can then hand out the specific channel identified by a request line, rather than relying on a filter function. Host example:

	static struct dma_chan *xx_of_dma_simple_xlate(struct of_phandle_args *dma_spec,
						       struct of_dma *ofdma)
	{
		struct xx_dma_dev *d = ofdma->of_dma_data;
		unsigned int request = dma_spec->args[0];

		if (request > d->dma_requests)
			return NULL;

		return dma_get_slave_channel(&(d->chans[request].vc.chan));
	}

probe:

	of_dma_controller_register((&op->dev)->of_node, xx_of_dma_simple_xlate, d);

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 05 Aug, 2013 13 commits
-
-
Xiang Wang authored
In mmp pdma, phy channels are allocated/freed dynamically. The mapping from DMA request to DMA channel number in DRCMR should be cleared when a phy channel is freed. Otherwise conflicts will happen when: 1. A is using channel 2 and frees it after finishing, but A's DRCMR still maps to channel 2. 2. Now another device B gets channel 2, so B's DRCMR maps to channel 2 too. The datasheet states "Do not map two active requests to the same channel since it produces unpredictable results", and we can observe that during test. Signed-off-by: Xiang Wang <wangx@marvell.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Xiang Wang authored
In mmp pdma, phy channels are allocated/freed dynamically and frequently, but no proper protection is in place. Conflicts will happen when multiple users request phy channels at the same time. Use a spinlock to protect the allocation. Signed-off-by: Xiang Wang <wangx@marvell.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
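A sketch of the pattern; the structure and field names are illustrative, not the driver's internals:

	struct pdma_phy {
		void *owner;		/* NULL while the phy channel is free */
		/* ... */
	};

	struct pdma_device {
		spinlock_t phy_lock;
		int nr_phy;
		struct pdma_phy *phy;
	};

	static struct pdma_phy *lookup_phy_channel(struct pdma_device *d, void *requester)
	{
		struct pdma_phy *phy, *found = NULL;
		unsigned long flags;
		int i;

		/* Search and claim must be one atomic step, otherwise two callers
		 * can pick the same free physical channel concurrently. */
		spin_lock_irqsave(&d->phy_lock, flags);
		for (i = 0; i < d->nr_phy; i++) {
			phy = &d->phy[i];
			if (!phy->owner) {
				phy->owner = requester;
				found = phy;
				break;
			}
		}
		spin_unlock_irqrestore(&d->phy_lock, flags);

		return found;
	}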
-
Viresh Kumar authored
Andy has done enough work on Synopsys Designware DMA Controller and I believe he should Co-Maintain this driver along with me. So this patch. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
To follow usual practice, let's return the DMA_PAUSED status only if dma_cookie_status() returned DMA_IN_PROGRESS. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
There is no point in going through the rest of the function if the first call to dma_cookie_status() returned DMA_SUCCESS. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
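A hedged sketch of a tx_status callback combining this change and the dw_dmac DMA_PAUSED change listed above; dwc_get_residue() and channel_is_paused() are placeholders, and DMA_SUCCESS is the status name used at the time (later renamed DMA_COMPLETE):

	static enum dma_status dwc_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
					     struct dma_tx_state *txstate)
	{
		enum dma_status ret;

		ret = dma_cookie_status(chan, cookie, txstate);
		if (ret == DMA_SUCCESS)
			return ret;		/* done: nothing left to compute */

		/* Only now is scanning the descriptors for the residue worthwhile. */
		dma_set_residue(txstate, dwc_get_residue(chan, cookie));

		/* Report DMA_PAUSED only for a transfer that is actually in flight. */
		if (ret == DMA_IN_PROGRESS && channel_is_paused(chan))
			return DMA_PAUSED;

		return ret;
	}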
-
Andy Shevchenko authored
In the PC world it is quite possible that devices share the same interrupt line. This patch prepares the dw_dmac driver for such cases. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
In general, ~0 does not fit into some integer types. Let's add a helper to make the comparison against that constant properly. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
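Why such a helper matters, as a small sketch (the field width and helper name are hypothetical):

	/* If request_line is a u16, the test "request_line == ~0" is never
	 * true: the u16 promotes to an int in 0..65535 while ~0 is the int
	 * value -1.  Casting ~0 to the field's own type fixes the check. */
	static inline bool request_line_is_unset(u16 request_line)
	{
		return request_line == (u16)~0;
	}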
-
Andy Shevchenko authored
In rare cases (mostly for testing purposes) the dw_dmac driver might be compiled as a module, as might the other LPSS device drivers (I2C, SPI, HSUART). When udev handles the event of those devices appearing, the dw_dmac module may still be missing. This patch fixes that. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
This patch fixes the sparse warning: drivers/dma/acpi-dma.c:76:21: sparse: cast to restricted __le32 Since everything in ACPI tables is little-endian by definition, the types used in practice are uXX. Thus, we have to enforce __leXX if we want to convert them to CPU byte order. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
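The conversion pattern this implies, sketched with a hypothetical field name:

	/* "base_request_line" stands in for an ACPI-table field declared as a
	 * plain u32 by ACPICA but stored little-endian in memory.  The
	 * (__force __le32) cast tells sparse the reinterpretation is
	 * intentional; le32_to_cpu() then swaps bytes on big-endian CPUs. */
	u32 base = le32_to_cpu((__force __le32)grp->base_request_line);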
-
Andy Shevchenko authored
There is no point in going through the rest of the function if the first call to dma_cookie_status() returned DMA_SUCCESS. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
dma_set_residue() sets only the residue value, so the user can't rely on the returned cookie values. This patch standardizes the behaviour. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
It's better to use the generic dma_cookie_status(), which allows the user to get the standard possible return codes independently of the DMAC driver in charge. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Andy Shevchenko authored
According to the dma_cookie_status() description, locking is not required. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Stephen Warren <swarren@wwwdotorg.org> Cc: linux-tegra@vger.kernel.org Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-