- 25 Apr, 2021 1 commit
-
-
Tom Zanussi authored
Implement the IDXD performance monitor capability (named 'perfmon' in the DSA (Data Streaming Accelerator) spec [1]), which supports the collection of information about key events occurring during DSA and IAX (Intel Analytics Accelerator) device execution, to assist in performance tuning and debugging.

The idxd perfmon support is implemented as part of the IDXD driver and interfaces with the Linux perf framework. It has several features in common with the existing uncore pmu support:

- it does not support sampling
- it does not support per-thread counting

However, it also has some unique features not present in the core and uncore support:

- all general-purpose counters are identical, thus there are no event constraints
- operation is always system-wide

While the core perf subsystem assumes that all counters are by default per-cpu, the uncore pmus are socket-scoped and use a cpu mask to restrict counting to one cpu from each socket. IDXD counters use a similar strategy but expand the scope even further; since IDXD counters are system-wide and can be read from any cpu, the IDXD perf driver picks a single cpu to do the work (with cpu hotplug notifiers to choose a different cpu if the chosen one is taken off-line).

More specifically, the perf userspace tool by default opens a counter for each cpu for an event. However, if it finds a cpumask file associated with the pmu under sysfs, as is the case with the uncore pmus, it will open counters only on the cpus specified by the cpumask. Since perfmon only needs to open a single counter per event for a given IDXD device, the perfmon driver creates a sysfs cpumask file for the device and inserts the first cpu of the system into it. When a user uses perf to open an event, perf will open a single counter on the cpu specified by the cpu mask. This amounts to the default system-wide rather than per-cpu counting mentioned previously for perfmon pmu events. In order to keep the cpu mask up-to-date, the driver implements cpu hotplug support for multiple devices, as IDXD usually enumerates and registers more than one idxd device.

The perfmon driver implements basic perfmon hardware capability discovery and configuration, and is initialized by the IDXD driver's probe function. During initialization, the driver retrieves the total number of supported performance counters, the pmu ID, and the device type from the idxd device, and registers itself under the Linux perf framework.

The perf userspace tool can be used to monitor single or multiple events depending on the given configuration, as well as event groups, which are also supported by the perfmon driver. The user configures events using the perf tool command-line interface by specifying the event and corresponding event category, along with an optional set of filters that can be used to restrict counting to specific work queues, traffic classes, page and transfer sizes, and engines (see [1] for specifics).

With the configuration specified by the user, the perf tool issues a system call passing that information to the kernel, which uses it to initialize the specified event(s). The event(s) are opened and started, and following termination of the perf command, they're stopped. At that point, the perfmon driver reads the latest count for the event(s), calculates the difference between the latest counter values and the previously tracked counter values, and displays the final incremental count as the event count for the cycle. An overflow handler registered on the IDXD irq path is used to account for counter overflows, which are signaled by an overflow interrupt.

Below are a couple of examples of perf usage for monitoring DSA events.

The following monitors all events in the 'engine' category. Because no filters are specified, this captures all engine events for the workload, which in this case is 19 iterations of the work generated by the kernel dmatest module. Details describing the events can be found in Appendix D of [1], Performance Monitoring Events, but briefly they are:

  event 0x1:  total input data processed, in 32-byte units
  event 0x2:  total data written, in 32-byte units
  event 0x4:  number of work descriptors that read the source
  event 0x8:  number of work descriptors that write the destination
  event 0x10: number of work descriptors dispatched from batch descriptors
  event 0x20: number of work descriptors dispatched from work queues

  # perf stat -e dsa0/event=0x1,event_category=0x1/,
               dsa0/event=0x2,event_category=0x1/,
               dsa0/event=0x4,event_category=0x1/,
               dsa0/event=0x8,event_category=0x1/,
               dsa0/event=0x10,event_category=0x1/,
               dsa0/event=0x20,event_category=0x1/
    modprobe dmatest channel=dma0chan0 timeout=2000 iterations=19 run=1 wait=1

  Performance counter stats for 'system wide':

            5,332   dsa0/event=0x1,event_category=0x1/
            5,327   dsa0/event=0x2,event_category=0x1/
               19   dsa0/event=0x4,event_category=0x1/
               19   dsa0/event=0x8,event_category=0x1/
                0   dsa0/event=0x10,event_category=0x1/
               19   dsa0/event=0x20,event_category=0x1/

     21.977436186 seconds time elapsed

The command below illustrates filter usage with a simple example. It specifies that MEM_MOVE operations should be counted for the DSA device dsa0 (event 0x8 corresponds to the EV_MEM_MOVE event - Number of Memory Move Descriptors, which is part of event category 0x3 - Operations. The detailed category and event IDs are available in Appendix D, Performance Monitoring Events, of [1]). In addition to the event and event category, a number of filters are also specified (the detailed filter values are available in Chapter 6.4 (Filter Support) of [1]), which restrict counting to only those events that meet all of the filter criteria. In this case, the filters specify that only MEM_MOVE operations that are serviced by work queue wq0, engine engine0, and traffic class tc0, with transfer sizes between 0 and 4k and a page size between 0 and 1G, result in a counter hit; anything else is filtered out and does not appear in the final count. Note that filters are optional - any filter not specified is assumed to be all ones and will pass anything.

  # perf stat -e dsa0/filter_wq=0x1,filter_tc=0x1,filter_sz=0x7,
               filter_eng=0x1,event=0x8,event_category=0x3/
    modprobe dmatest channel=dma0chan0 timeout=2000 iterations=19 run=1 wait=1

  Performance counter stats for 'system wide':

               19   dsa0/filter_wq=0x1,filter_tc=0x1,filter_sz=0x7,
                         filter_eng=0x1,event=0x8,event_category=0x3/

     21.865914091 seconds time elapsed

The output above reflects that the dmatest workload resulted in the counting of 19 MEM_MOVE operation events that met the filter criteria.

[1]: https://software.intel.com/content/www/us/en/develop/download/intel-data-streaming-accelerator-preliminary-architecture-specification.html

[ Based on work originally by Jing Lin. ]

Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Link: https://lore.kernel.org/r/0c5080a7d541904c4ad42b848c76a1ce056ddac7.1619276133.git.zanussi@kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
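As a rough illustration of the cpumask mechanism described above, a PMU driver can expose a read-only sysfs "cpumask" attribute that tells perf which single CPU to open counters on. This is a minimal sketch in kernel C, not the actual idxd perfmon code; the perfmon_cpu_mask variable name is illustrative.

    #include <linux/cpumask.h>
    #include <linux/device.h>

    /* CPU currently designated to carry all counting for the device. */
    static cpumask_t perfmon_cpu_mask;

    /* perf reads this "cpumask" file and opens events only on the listed CPU. */
    static ssize_t cpumask_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            return cpumap_print_to_pagebuf(true, buf, &perfmon_cpu_mask);
    }
    static DEVICE_ATTR_RO(cpumask);

A CPU hotplug callback would then migrate the mask (and any active events) to another online CPU when the designated one goes offline.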
-
- 23 Apr, 2021 8 commits
-
-
Dave Jiang authored
Remove interrupt masking and just let the hard irq handler keep firing for new events. This has less performance impact than the MMIO readback inside pci_msi_{mask,unmask}_irq(); especially on a loaded system, those flushes can get stuck behind large amounts of MMIO writes waiting to be flushed. When a guest kernel is running on top of a VFIO mdev, each mask/unmask also causes a vmexit, which is not desirable. Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/161894523436.3210025.1834640110556139277.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Create a dedicated lock for device command operations. Put the device command operations under this finer-grained lock instead of using the broader idxd->dev_lock. Suggested-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894525685.3210132.16160045731436382560.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Unmask the halt error interrupt so it gets reported to the interrupt handler. When the halt state interrupt is received, quiesce the kernel WQs and unmap the portals to stop submission. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894441167.3202472.9485946398140619501.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Enable IOMMU_DEV_FEAT_SVA before attempting to bind a pasid. This is needed according to the iommu_sva_bind_device() comment. Currently the Intel IOMMU code does this before the bind call, but it really needs to be controlled by the driver. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894440621.3202472.17644507396206848134.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
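A rough sketch of the ordering this implies, assuming the IOMMU SVA API of that era; the error handling and the drvdata argument are simplified, and the pdev variable is illustrative:

    struct iommu_sva *sva;
    int rc;

    /* The driver enables the SVA feature itself before binding a PASID. */
    rc = iommu_dev_enable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
    if (rc < 0)
            return rc;

    sva = iommu_sva_bind_device(&pdev->dev, current->mm, NULL);
    if (IS_ERR(sva)) {
            iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
            return PTR_ERR(sva);
    }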
-
Dave Jiang authored
Convert sprintf() to sysfs_emit() in order to guard against buffer overrun in sysfs output. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894440044.3202472.13926639619695319753.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
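The shape of the conversion, using a hypothetical wq "size" attribute as an example (the field and attribute names are illustrative): sysfs_emit() knows the output buffer is a full page and refuses to write past it, whereas sprintf() trusts the caller.

    static ssize_t wq_size_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);

            /* before (assumed): return sprintf(buf, "%u\n", wq->size); */
            return sysfs_emit(buf, "%u\n", wq->size);
    }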
-
Dave Jiang authored
The DSA spec states that when the Request Interrupt Handle and Release Interrupt Handle command bits are set in the CMDCAP register, these device commands must be supported by the driver. The interrupt handle is programmed in a descriptor. When Request Interrupt Handle is not supported, the interrupt handle is simply the index of the desired entry in the MSI-X table. When the command is supported, the driver must use the command to obtain a handle to be programmed into the submitted descriptor. A requested handle may be revoked; after the handle is revoked, any use of it will result in an Invalid Interrupt Handle error. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894439422.3202472.17579543737810265471.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The read-only configuration mode is defined by the DSA spec as a mode of the device WQ configuration. When GENCAP register bit 31 is set to 0, the device is in RO mode, and the group configuration and some fields of the workqueue configuration registers are read-only and reflect the fixed configuration of the device. Add support for RO mode: the driver loads the values from the registers directly and sets up all the internally cached data structures based on the device configuration. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894438847.3202472.6317563824045432727.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The current submission path has no way to stop submitters on the shutdown path or the wq disable path. This provides a way to quiesce the submission path, modeled after 'struct request_queue' usage of percpu_ref. One of the abilities of percpu reference counting is the ability to stop new references from being taken while awaiting outstanding references to be dropped. On wq shutdown, we want to block any new submissions to the kernel workqueue and quiesce before disabling. The percpu_ref allows us to block any new submissions and wait for any in-flight submission calls to finish submitting to the workqueue. A percpu_ref is embedded in each idxd_wq context to allow control of each individual wq. The wq->wq_active counter is elevated before calling movdir64b() or enqcmds() to submit a descriptor to the wq and dropped once the submission call completes. The submission function is gated by percpu_ref_tryget_live(). On shutdown, once percpu_ref_kill() has been called, any new submission is blocked from acquiring a ref and fails. Once all references are dropped for the wq, shutdown can continue. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161894438293.3202472.14894701611500822232.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
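A condensed sketch of the percpu_ref gating described above, assuming a wq_active ref and a wq_dead completion embedded in the wq context; the names and error codes are illustrative, not the actual idxd code:

    /* Submission path: only proceed while the ref is live. */
    static int idxd_wq_submit(struct idxd_wq *wq, struct dsa_hw_desc *desc)
    {
            if (!percpu_ref_tryget_live(&wq->wq_active))
                    return -ENXIO;                  /* wq is being disabled */

            /* movdir64b()/enqcmds() descriptor submission would go here */

            percpu_ref_put(&wq->wq_active);
            return 0;
    }

    /* Shutdown path: kill the ref, then wait for outstanding submitters. */
    static void idxd_wq_quiesce(struct idxd_wq *wq)
    {
            percpu_ref_kill(&wq->wq_active);        /* new tryget_live() calls now fail */
            wait_for_completion(&wq->wq_dead);      /* signalled by the ref's release() */
    }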
-
- 20 Apr, 2021 15 commits
-
-
Dave Jiang authored
Move all static per-device-type data into an idxd_driver_data structure. The data can be attached to the pci_device_id and handed to the pci probe function. This removes a lot of unnecessary type detection and setup code. Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852988924.2203940.2787590808682466398.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
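Roughly, this is the standard driver_data pattern; the idxd_driver_data fields and the device-ID macros shown here are assumptions for illustration, not the exact driver code:

    static struct idxd_driver_data idxd_driver_data[] = {
            [IDXD_TYPE_DSA] = { .name_prefix = "dsa",
                                .compl_size = sizeof(struct dsa_completion_record) },
            [IDXD_TYPE_IAX] = { .name_prefix = "iax",
                                .compl_size = sizeof(struct iax_completion_record) },
    };

    static const struct pci_device_id idxd_pci_tbl[] = {
            { PCI_DEVICE_DATA(INTEL, DSA_SPR0, &idxd_driver_data[IDXD_TYPE_DSA]) },
            { PCI_DEVICE_DATA(INTEL, IAX_SPR0, &idxd_driver_data[IDXD_TYPE_IAX]) },
            { 0, }
    };

    static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            struct idxd_driver_data *data = (struct idxd_driver_data *)id->driver_data;

            /* data->name_prefix, data->compl_size, etc. replace per-type checks */
            return 0;
    }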
-
Dave Jiang authored
There is no need to have an additional bus for the IAX device. The removal of IAX will change user ABI as /sys/bus/iax will no longer exist; the iax device will be moved to the dsa bus. The device ids for dsa and iax will now be combined rather than unique per device type in order to accommodate the iax devices. This is in preparation for fixing the sub-driver code for idxd. There is no hardware deployment for the Sapphire Rapids platform yet, which means that users have no reason to have developed scripts against this ABI. There is some exposure to released versions of accel-config, but those are being fixed up, and an accel-config upgrade is a reasonable way to get IAX support. As far as accel-config is concerned, IAX support starts when these devices appear under /sys/bus/dsa, and old accel-config just assumes that an empty or missing /sys/bus/iax means a lack of platform support. Fixes: f25b4638 ("dmaengine: idxd: add IAX configuration support in the IDXD driver") Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852988298.2203940.4529909758034944428.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The char device setup and cleanup has device lifetime issues regarding when parts are initialized and cleaned up, and the initialization of the struct device is done incorrectly. device_initialize() needs to be called on the 'struct device' first, and only then can additional attributes be set up. The ->release() function needs to be set via a device_type before dev_set_name() to allow proper cleanup. The change also re-parents the cdev under wq->conf_dev to get natural reference inheritance. No known dependency on the old device path exists. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: 42d279f9 ("dmaengine: idxd: add char driver to expose submission portal to userland") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/161852987721.2203940.1478218825576630810.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
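The ordering being described looks roughly like the fragment below; it is a sketch only, and the idxd_cdev_device_type, the name format, and the surrounding variables are assumptions:

    device_initialize(dev);                     /* must come first */
    dev->parent = &wq->conf_dev;                /* cdev re-parented under the wq */
    dev->type = &idxd_cdev_device_type;         /* carries the ->release() */

    rc = dev_set_name(dev, "wq%u.%u", wq->idxd->id, wq->id);
    if (rc < 0)
            goto err;

    rc = cdev_device_add(&idxd_cdev->cdev, dev);
    if (rc)
            goto err;
    return 0;
    err:
            put_device(dev);                    /* ->release() frees the allocation */
            return rc;

Once ->release() is wired up this way, any failure after device_initialize() can be unwound with a single put_device().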
-
Dave Jiang authored
Remove devm_* allocation and fix group->conf_dev 'struct device' lifetime. Address issues flagged by CONFIG_DEBUG_KOBJECT_RELEASE. Add release functions in order to free the allocated memory at group->conf_dev destruction time. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852987144.2203940.8830315575880047.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Remove devm_* allocation and fix engine->conf_dev 'struct device' lifetime. Address issues flagged by CONFIG_DEBUG_KOBJECT_RELEASE. Add release functions in order to free the allocated memory at engine->conf_dev destruction time. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852986460.2203940.16603218225412118431.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Remove devm_* allocation and fix wq->conf_dev 'struct device' lifetime. Address issues flagged by CONFIG_DEBUG_KOBJECT_RELEASE. Add release functions in order to free the allocated memory for the wq context at device destruction time. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852985907.2203940.6840120734115043753.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The devm managed lifetime is incompatible with 'struct device' objects that reside in the idxd context. This is part of a series that cleans up the idxd driver 'struct device' lifetime. Fix the idxd->conf_dev 'struct device' lifetime and address issues flagged by CONFIG_DEBUG_KOBJECT_RELEASE. Add release functions in order to free the allocated memory at the appropriate time. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852985319.2203940.4650791514462735368.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The idr is only used for a device id, never to look up a context from that id, so switch to a plain ida. Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Reported-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161852984730.2203940.15032482460902003819.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
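The ida variant of the pattern is straightforward; a minimal sketch, where the idxd_ida name and the helper functions are illustrative:

    static DEFINE_IDA(idxd_ida);

    static int idxd_alloc_id(struct idxd_device *idxd)
    {
            /* a unique id is all that is needed; no pointer is stored or looked up */
            idxd->id = ida_alloc(&idxd_ida, GFP_KERNEL);
            return idxd->id < 0 ? idxd->id : 0;
    }

    static void idxd_free_id(struct idxd_device *idxd)
    {
            ida_free(&idxd_ida, idxd->id);
    }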
-
Dave Jiang authored
The devm managed lifetime is incompatible with 'struct device' objects that reside in the idxd context. This is part of a series that cleans up the idxd driver 'struct device' lifetime. Remove pcim_* management of the PCI device and the ioremap of the MMIO BAR and replace them with unmanaged versions, for consistency in removing all the pcim/devm based calls. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/161852984150.2203940.8043988289748519056.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The devm managed lifetime is incompatible with 'struct device' objects that reside in the idxd context. This is part of a series that cleans up the idxd driver 'struct device' lifetime. Remove the devm managed pci interrupt vectors and replace them with unmanaged allocators. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/161852983563.2203940.8116028229124776669.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
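The unmanaged equivalent is a symmetric allocate/free pair; a sketch of what that typically looks like, with the vector counts and error handling as assumptions:

    int msixcnt, rc;

    msixcnt = pci_msix_vec_count(pdev);
    if (msixcnt < 0)
            return msixcnt;

    /* unmanaged allocation: must be paired with pci_free_irq_vectors() */
    rc = pci_alloc_irq_vectors(pdev, msixcnt, msixcnt, PCI_IRQ_MSIX);
    if (rc != msixcnt)
            return -ENOSPC;

    /* ... request_threaded_irq() per vector, then on remove/error path: ... */
    pci_free_irq_vectors(pdev);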
-
Dave Jiang authored
The devm managed lifetime is incompatible with 'struct device' objects that reside in the idxd context. This is part of a series that cleans up the idxd driver 'struct device' lifetime. Remove the embedding of dma_device and dma_chan in idxd, since dmaengine is not the only interface that idxd will use. The freeing of the dma_device will be managed by the ->release() function. Reported-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/161852983001.2203940.14817017492384561719.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Vinod Koul authored
-
YueHaibing authored
commit 765c37d8 ("dmaengine: at_xdmac: rework slave configuration part") left this behind, so it can be removed. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Link: https://lore.kernel.org/r/20210407132543.23652-1-yuehaibing@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Colin Ian King authored
There are calls to idxd_cmd_exec() that pass a null status pointer; however, a recent commit added an assignment to *status that can end up dereferencing a null pointer. The function expects a null status pointer in some cases, as a later assignment to *status already null-checks status first. Fix the issue by null-checking status before making the new assignment. Addresses-Coverity: ("Explicit null dereferenced") Fixes: 89e3becd ("dmaengine: idxd: check device state before issue command") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20210415110654.1941580-1-colin.king@canonical.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
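The shape of the fix is simply guarding the write; the status value shown is illustrative, not the actual constant used:

    /* status may legitimately be NULL; only report through it if provided */
    if (status)
            *status = IDXD_CMDSTS_HW_ERR;   /* error value illustrative */
    return;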
-
Felipe Balbi authored
No functional changes, just adding a new compatible for a different SoC. Signed-off-by: Felipe Balbi <felipe.balbi@microsoft.com> Link: https://lore.kernel.org/r/20210417061951.2105530-2-balbi@kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 12 Apr, 2021 12 commits
-
-
Dave Jiang authored
A pre-release silicon erratum workaround, where wq reset does not clear the WQCFG registers, was leaked into upstream code. Use the wq reset command instead of blasting the MMIO region. This also addresses an issue where we clobber registers on future devices. Fixes: da32b28c ("dmaengine: idxd: cleanup workqueue config after disabling") Reported-by: Shreenivaas Devarajan <shreenivaas.devarajan@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161824330020.881560.16375921906426627033.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Add disabling/clearing of MSIX permission entries on device shutdown to mirror the enabling of the MSIX entries on probe. The current code left MSIX enabled and the pasid entries still programmed at device shutdown. Fixes: 8e50d392 ("dmaengine: idxd: Add shared workqueue support") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161824457969.882533.6020239898682672311.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Hao Fang authored
s/Hisilicon/HiSilicon/g. It should use capital S, according to the official website. Signed-off-by: Hao Fang <fanghao11@huawei.com> Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org> Link: https://lore.kernel.org/r/1617277820-26971-1-git-send-email-fanghao11@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jiapeng Chong authored
Fix the following clang warning: drivers/dma/qcom/hidma.c:94:20: warning: unused function 'to_hidma_desc' [-Wunused-function]. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Link: https://lore.kernel.org/r/1617270816-36400-1-git-send-email-jiapeng.chong@linux.alibaba.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dan Carpenter authored
Add a missing put_device(&pdev->dev) if the call to dma_async_device_register(dma) fails. Fixes: 905ca51e ("dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/YFnq/0IQzixtAbC1@mwanda Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dinghao Liu authored
pm_runtime_get_sync() increases the runtime PM usage counter even when it returns an error, so a pairing decrement is needed to prevent a refcount leak. Fix this by replacing the call with pm_runtime_resume_and_get(), which does not change the runtime PM counter on error. Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20210409082805.23643-1-dinghao.liu@zju.edu.cn Signed-off-by: Vinod Koul <vkoul@kernel.org>
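For comparison, the two forms look like this (a sketch; the surrounding error handling is assumed):

    /* Old pattern (assumed): the usage count is bumped even on failure,
     * so the error path has to drop it explicitly. */
    ret = pm_runtime_get_sync(dev);
    if (ret < 0) {
            pm_runtime_put_noidle(dev);
            return ret;
    }

    /* New pattern: pm_runtime_resume_and_get() drops the reference
     * itself when resume fails, so no extra put is needed. */
    ret = pm_runtime_resume_and_get(dev);
    if (ret < 0)
            return ret;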
-
Lv Yunlong authored
In the first list_for_each_entry() loop of dma_async_device_register(), each chan is taken from the list and passed to __dma_async_device_channel_register(..., chan). chan->local is allocated there by alloc_percpu(), and it is freed by free_percpu(chan->local) when __dma_async_device_channel_register() fails. But after that failure the caller goes to err_out and frees chan->local a second time with free_percpu(). The cause of this problem is forgetting to set chan->local to NULL when it is freed in __dma_async_device_channel_register(). This patch sets chan->local to NULL when the callee fails, to avoid the double free. Fixes: d2fb0a04 ("dmaengine: break out channel registration") Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20210331014458.3944-1-lyl2019@mail.ustc.edu.cn Signed-off-by: Vinod Koul <vkoul@kernel.org>
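The fix itself amounts to a one-line hardening of the callee's error path (sketch):

    /* failure path inside __dma_async_device_channel_register() */
    free_percpu(chan->local);
    chan->local = NULL;     /* keep the caller's err_out path from freeing it again */
    return rc;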
-
Andy Shevchenko authored
Some architectures do not provide devm_*() APIs; hence, make the driver dependent on HAVE_IOMEM. Fixes: dbde5c29 ("dw_dmac: use devm_* functions to simplify code") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Link: https://lore.kernel.org/r/20210324141757.24710-1-andriy.shevchenko@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The WQ size can only be changed when the device is disabled. The current code allows the change when the device is enabled but the wq is disabled; change the check to test the device state instead. Fixes: c52ca478 ("dmaengine: idxd: add configuration component of driver") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161782558755.107710.18138252584838406025.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
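Conceptually, the sysfs store handler switches from checking the wq state to checking the device state; a sketch, with the state names as assumptions:

    /* before (assumed): if (wq->state != IDXD_WQ_DISABLED) return -EPERM; */
    if (idxd->state == IDXD_DEV_ENABLED)
            return -EPERM;  /* size may only change while the device is disabled */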
-
Dave Jiang authored
The operation capability register is 256 bits, but the current output only prints the first 64 bits. Fix the output to show the entire 256 bits. The current code also omits operation caps from IAX devices. Fixes: c52ca478 ("dmaengine: idxd: add configuration component of driver") Reported-by: Lucas Van <lucas.van@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161645624963.2003736.829798666998490151.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
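A sketch of what printing all four 64-bit words of the register might look like; the opcap field layout and the confdev_to_idxd() helper are assumptions for illustration:

    static ssize_t op_cap_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
            struct idxd_device *idxd = confdev_to_idxd(dev);    /* helper assumed */
            int i, pos = 0;

            /* print all four 64-bit words of the 256-bit OPCAP register */
            for (i = 0; i < 4; i++)
                    pos += sysfs_emit_at(buf, pos, "%#llx ", idxd->hw.opcap.bits[i]);

            pos += sysfs_emit_at(buf, pos, "\n");
            return pos;
    }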
-
Dave Jiang authored
The delta_rec_size and crc_val fields in the completion record should be 32 bits, not 16 bits. Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Reported-by: Nikhil Rao <nikhil.rao@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161645618572.2003490.14466173451736323035.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
The current code blindly writes over the SWERR and OVERFLOW bits. Write back only the bits actually read instead, so the driver avoids clobbering an OVERFLOW bit that becomes set after the register is read. Fixes: bfe1d560 ("dmaengine: idxd: Init and probe for Intel data accelerators") Reported-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/161352082229.3511254.1002151220537623503.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 16 Mar, 2021 4 commits
-
-
Gustavo Pimentel authored
Currently, a null check on the pcim_iomap_table() return value is missing, which can lead to a null pointer dereference if the desired BAR wasn't mapped previously. Fix this by adding a null check and returning -ENOMEM. Addresses-Coverity: ("Dereference null return") Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Link: https://lore.kernel.org/r/bc5e6b8632c84660bb6dae454980e9419992ed14.1613674948.git.gustavo.pimentel@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
Revert the applied patch because it caused a regression on the ARC700 platform (32-bit). Fixes: 05655541 ("dmaengine: dw-edma: Fix scatter-gather address calculation") Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Link: https://lore.kernel.org/r/1778422e389fe40032e216b59b1b992c61ec9887.1613674948.git.gustavo.pimentel@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
To keep the code consistent, some comments where the 'dma' keyword was written in lower case now use upper case. Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Link: https://lore.kernel.org/r/8c4b3db90767972a2b4cbb6fa818cf0e9c3d6fe3.1613674948.git.gustavo.pimentel@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
When the driver is compiled as a module, loaded, and then unloaded, the kernel shows a crash log. The crash is caused by the dma_async_device_unregister() call being done after deleting the channels; this patch fixes the issue. Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Link: https://lore.kernel.org/r/4aa850c035cf7ee488f1d3fb6dee0e37be0dce0a.1613674948.git.gustavo.pimentel@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
-