- 27 Jun, 2019 32 commits
-
-
Chandrakanth Patil authored
The driver should enable interrupt coalescing (during driver load and after controller reset) for high IOPS queues by masking the appropriate bits in the IOC INIT frame. Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Aero controllers support a balanced performance mode through the ability to configure queues with different properties. Reply queues with interrupt coalescing enabled are called "high iops reply queues" and reply queues with interrupt coalescing disabled are called "low latency reply queues". The driver configures a combination of high iops and low latency reply queues if: - the HBA is an Aero controller; - the HBA supports 128 MSI-X vectors; - the total CPU count in the system is greater than the high iops queue count; - the driver is loaded with the default max_msix_vectors module parameter; and - the system booted in non-kdump mode. Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
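A minimal C sketch of that gating logic, using hypothetical names (hba_info, HIGH_IOPS_QUEUE_COUNT, default_max_msix_param) rather than the actual megaraid_sas symbols:

    /* Illustrative only: fields and constants are hypothetical stand-ins
     * for the actual megaraid_sas structures. */
    #include <stdbool.h>

    #define HIGH_IOPS_QUEUE_COUNT 8        /* assumed queue count for the sketch */

    struct hba_info {
            bool is_aero;
            int  msix_vectors_supported;   /* reported by the HBA */
            int  online_cpu_count;
            bool default_max_msix_param;   /* max_msix_vectors left at default */
            bool kdump_mode;
    };

    static bool use_balanced_perf_mode(const struct hba_info *hba)
    {
            return hba->is_aero &&
                   hba->msix_vectors_supported == 128 &&
                   hba->online_cpu_count > HIGH_IOPS_QUEUE_COUNT &&
                   hba->default_max_msix_param &&
                   !hba->kdump_mode;
    }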
-
Chandrakanth Patil authored
Added driver support to allow passthrough of MPI toolbox-type MFI commands to firmware, based on firmware capability. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
For RAID5/RAID6 volumes configured behind Aero, the driver performs 64-bit division operations on behalf of firmware, as the controller's ARM CPU is very slow at this division. The driver then calculates the Q-ARM, P-ARM and Log-ARM values and passes them to firmware by writing them to the RAID_CONTEXT. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
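A rough, hypothetical sketch of the idea: the host does the 64-bit divide/modulo and hands the resulting arm indices to firmware via the RAID context. The rotation formula below is a simplified stand-in, not the actual MegaRAID map math:

    #include <stdint.h>

    struct raid_context_sketch {          /* illustrative, not the MPI layout */
            uint8_t log_arm;              /* data arm holding the LBA  */
            uint8_t p_arm;                /* arm holding the P parity  */
            uint8_t q_arm;                /* arm holding the Q parity  */
    };

    static void fill_arm_fields(uint64_t stripe_no, uint32_t data_arms,
                                struct raid_context_sketch *ctx)
    {
            /* 64-bit modulo done on the host CPU, not the controller's ARM */
            uint64_t span = (uint64_t)data_arms + 2;     /* RAID6: data + P + Q */
            uint64_t rot  = stripe_no % span;

            ctx->p_arm   = (uint8_t)((data_arms + 0 + rot) % span);
            ctx->q_arm   = (uint8_t)((data_arms + 1 + rot) % span);
            ctx->log_arm = (uint8_t)(stripe_no % data_arms);
    }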
-
Chandrakanth Patil authored
The RAID1 PCI bandwidth limit algorithm is not applicable to Aero, as it is a PCIe Gen4 adapter. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Issue: This issue applies to the scenario where the JBOD sequence map is unavailable to the driver (memory allocation for the JBOD sequence map failed) but the feature is supported by firmware. If the driver sends a JBOD IO without adding 255 (MAX_PHYSICAL_DEVICES - 1) to the device ID while the underlying firmware supports the JBOD sequence map, the IO will fail. Fix: For JBOD IOs, the driver will not use the RAID map to fetch the devhandle if the JBOD sequence map is unavailable. Instead, the driver sets the devhandle to 0xffff and the target ID to 'device ID + 255 (MAX_PHYSICAL_DEVICES - 1)'. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
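A small C sketch of the fallback described in the fix; the structure and helper names are placeholders, not the actual megaraid_sas code:

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_PHYSICAL_DEVICES 256
    #define INVALID_DEVHANDLE    0xffff

    struct jbod_io_sketch {
            uint16_t dev_handle;
            uint16_t target_id;
    };

    static void build_jbod_io(struct jbod_io_sketch *io, uint16_t device_id,
                              bool seq_map_available, uint16_t seq_map_devhandle)
    {
            if (seq_map_available) {
                    /* Normal path: devhandle comes from the JBOD sequence map. */
                    io->dev_handle = seq_map_devhandle;
                    io->target_id  = device_id;
            } else {
                    /* Map unavailable: let firmware resolve the device. */
                    io->dev_handle = INVALID_DEVHANDLE;
                    io->target_id  = device_id + (MAX_PHYSICAL_DEVICES - 1);
            }
    }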
-
Chandrakanth Patil authored
Firmware does not expect FastPath IO to be sent through the Region Lock (RL) Bypass queue. Although firmware never exposes settings under which FastPath IO could be sent to the RL Bypass queue, it is safer to remove the dead code that directs FastPath IO to the RL Bypass queue. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Issue: Under certain conditions, the controller goes into the FAULT state after IOC INIT is fired to firmware. Such a fault can be recovered through a controller reset. Fix: In the driver probe context, if a firmware fault is observed after IOC INIT, the driver performs a controller reset followed by retry logic for the IOC INIT command. Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
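The retry shape described above, sketched with placeholder routines (send_ioc_init, firmware_in_fault_state and controller_reset are not the real driver functions, and the retry budget is assumed):

    #include <errno.h>

    #define MAX_IOC_INIT_RETRIES 3                 /* assumed retry budget */

    struct hba;                                    /* opaque for the sketch */

    int send_ioc_init(struct hba *hba);            /* 0 on success        */
    int firmware_in_fault_state(struct hba *hba);  /* non-zero if FAULTed */
    int controller_reset(struct hba *hba);         /* 0 on success        */

    static int ioc_init_with_retry(struct hba *hba)
    {
            int retries = 0;

            do {
                    if (send_ioc_init(hba) == 0)
                            return 0;
                    if (!firmware_in_fault_state(hba))
                            break;        /* failure was not a firmware fault */
                    if (controller_reset(hba) != 0)
                            break;        /* the reset itself failed */
            } while (++retries < MAX_IOC_INIT_RETRIES);

            return -ENODEV;
    }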
-
Chandrakanth Patil authored
Issue: There is a possibility of a few DCMDs timing out with the 'reset_mutex' lock held. As part of DCMD timeout handling, the driver calls megasas_reset_fusion, which also tries to acquire the same 'reset_mutex' lock, ending up in a deadlock. Fix: Upon timeout of DCMDs that are fired with the 'reset_mutex' lock held, the driver releases 'reset_mutex' before calling the OCR function and re-acquires the lock after the OCR function returns. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
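A sketch of the lock handling in the fix; the structure is illustrative and do_ocr() stands in for megasas_reset_fusion():

    #include <linux/mutex.h>

    struct instance_sketch {
            struct mutex reset_mutex;
    };

    int do_ocr(struct instance_sketch *inst);   /* stand-in for megasas_reset_fusion() */

    static void handle_dcmd_timeout(struct instance_sketch *inst)
    {
            /*
             * The timed-out DCMD was issued with reset_mutex held.  Drop the
             * lock before the OCR (which acquires the same mutex), then take
             * it back once the OCR returns.
             */
            mutex_unlock(&inst->reset_mutex);
            do_ocr(inst);
            mutex_lock(&inst->reset_mutex);
    }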
-
Chandrakanth Patil authored
On the PowerPC architecture, calling disable_irq_nosync from IRQ context does not provide the required effect. In the current megaraid_sas driver, disable_irq_nosync is called from IRQ context before enabling IRQ polling. Due to the issue seen on PPC, after IRQ polling is disabled and the legacy ISR is re-enabled, we do not see our ISR getting called. Fix: Call disable_irq from the IRQ poll thread context instead of IRQ context. Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
This patch adds support for non-secure Aero adapter PCI IDs. The driver will throw an error message when a non-secure type controller is detected. The purpose of this interface is to avoid interacting with any firmware that is not secured/signed by Broadcom. Any tampering with the firmware component will be detected by the hardware and communicated to the driver so that any further interaction with that component is avoided. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chandrakanth Patil authored
Aero adapters provide an Atomic Request Descriptor as an alternative method for posting an entry onto a request queue. Posting an Atomic Request Descriptor is an atomic operation, providing a safe mechanism for multiple processors on the host to post requests without synchronization. The Atomic Request Descriptor format is identical to the first 32 bits of the Default Request Descriptor and uses only 32 bits. If an Aero adapter supports the Atomic descriptor, the driver should use it for posting IOs and DCMDs to firmware. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Chandrakanth Patil <chandrakanth.patil@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
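A kernel-style sketch of the posting choice; the register layout and the atomic_desc_support flag are illustrative, not the actual megaraid_sas register map:

    #include <linux/kernel.h>
    #include <linux/io.h>
    #include <linux/types.h>

    struct hba_regs_sketch {
            void __iomem *req_desc_lo;      /* 64-bit descriptor, low dword  */
            void __iomem *req_desc_hi;      /* 64-bit descriptor, high dword */
            void __iomem *atomic_req_desc;  /* 32-bit atomic descriptor      */
    };

    static void fire_request(struct hba_regs_sketch *regs, u64 req_desc,
                             bool atomic_desc_support)
    {
            if (atomic_desc_support) {
                    /* One 32-bit MMIO write: atomic, no host-side locking needed. */
                    writel(lower_32_bits(req_desc), regs->atomic_req_desc);
            } else {
                    /* Two 32-bit writes that must not interleave between CPUs. */
                    writel(lower_32_bits(req_desc), regs->req_desc_lo);
                    writel(upper_32_bits(req_desc), regs->req_desc_hi);
            }
    }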
-
Deepak Ukey authored
Added the logic for collecting the IOP log according to the event log size. Signed-off-by: Deepak Ukey <deepak.ukey@microchip.com> Signed-off-by: Viswas G <Viswas.G@microchip.com> Reviewed-by: Jack Wang <jinpu.wang@cloud.ionos.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Deepak Ukey authored
Added support to read the event log size from the MPI configuration table and export it through sysfs. Signed-off-by: Deepak Ukey <deepak.ukey@microchip.com> Signed-off-by: Viswas G <Viswas.G@microchip.com> Reviewed-by: Jack Wang <jinpu.wang@cloud.ionos.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Sreekanth Reddy authored
Enable msix load balance only when combined reply queue mode is disabled on SAS3 and above generation HBA devices. Earlier, msix load balance used to be enabled whenever the number of online CPUs was greater than the number of MSI-X vectors enabled on the HBA. Combined reply queue mode is disabled only on those HBAs which work in shared resources mode, i.e. on SAS3 HBAs it will be <= 8 and on SAS35 HBA devices it will be <= 16. - Before this patch: if the system has 256 logical CPUs and the HBA exposes 128 MSI-X vectors, the driver enables msix load balance. - After this patch: if the system has 256 logical CPUs and the HBA exposes 128 MSI-X vectors, the driver disables msix load balance. - After this patch: if the system has 256 logical CPUs and the HBA exposes 16 MSI-X vectors (because combined reply queue mode is off in HW), the driver enables msix load balance. Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
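A compact sketch of the old versus new check, with illustrative field names rather than the mpt3sas structures:

    #include <stdbool.h>

    struct ioc_sketch {
            bool combined_reply_queue;   /* HW combined reply queue mode enabled */
            int  reply_queue_count;      /* MSI-X vectors enabled on the HBA     */
            int  online_cpus;
    };

    static bool enable_msix_load_balance(const struct ioc_sketch *ioc)
    {
            /* Old: return ioc->online_cpus > ioc->reply_queue_count; */

            /* New: only when combined reply queue mode is disabled. */
            return !ioc->combined_reply_queue &&
                   ioc->online_cpus > ioc->reply_queue_count;
    }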
-
Sreekanth Reddy authored
Even though the 'smp_affinity_enable' module parameter is enabled, if the number of online CPUs is greater than the number of MSI-X vectors enabled on a given HBA, then SMP affinity settings should be disabled only for that HBA. Currently, however, the SMP affinity setting is disabled globally, and hence SMP affinity gets disabled for subsequent HBAs even when the number of MSI-X vectors enabled for those HBAs matches the number of online CPUs. To fix this, define a per-HBA variable smp_affinity_enable. Initially this variable is initialized with the smp_affinity_enable module parameter value. If an HBA has fewer MSI-X vectors configured than the number of online CPUs, then only that HBA's smp_affinity_enable variable is set to zero. Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
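A sketch of the per-HBA copy of the module parameter; the structure and field names are illustrative:

    static int smp_affinity_enable = 1;     /* module parameter (global) */

    struct hba_affinity_sketch {
            int smp_affinity_enable;        /* per-HBA copy */
            int msix_vector_count;
            int online_cpus;
    };

    static void setup_affinity(struct hba_affinity_sketch *hba)
    {
            /* Seed the per-HBA flag from the module parameter... */
            hba->smp_affinity_enable = smp_affinity_enable;

            /* ...and clear it only for this HBA, instead of globally, when
             * the HBA has fewer MSI-X vectors than online CPUs. */
            if (hba->msix_vector_count < hba->online_cpus)
                    hba->smp_affinity_enable = 0;
    }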
-
Sreekanth Reddy authored
When enabling high iops queues, the driver should use the HBA's configured PCIe link speed instead of looking at the maximum supported link speed, i.e. enable high iops queues only if the Aero/Sea HBA's configured PCIe link speed is 16GT/s. Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
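A kernel-style sketch of checking the negotiated link speed via the Link Status register (rather than Link Capabilities, which reports the maximum); tying high iops queues to exactly 16GT/s follows the description above:

    #include <linux/pci.h>

    static bool hba_link_running_at_gen4(struct pci_dev *pdev)
    {
            u16 lnksta = 0;

            /* PCI_EXP_LNKSTA holds the current (configured/negotiated) speed;
             * PCI_EXP_LNKCAP would give the maximum supported speed instead. */
            pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);

            return (lnksta & PCI_EXP_LNKSTA_CLS) == 0x4;    /* 0x4 == 16 GT/s */
    }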
-
Sreekanth Reddy authored
Currently the default perf_mode is set to 'balanced' on Intel architecture machines, and on other machines the default perf_mode is set to 'latency' mode. This CPU architecture check is removed and the default perf_mode is set to 'balanced' on all machines. The user can choose the required performance mode using the perf_mode module parameter. Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Arthur Simchaev authored
The ufs-tool stable release v1.0 is available at: https://github.com/westerndigitalcorporation/ufs-tool Feedback and bug reports, as always, are welcome. Signed-off-by: Arthur Simchaev <Arthur.Simchaev@wdc.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Lin Yi authored
If the cb_arg allocation fails, we must not release the struct orig_io_req refcount before we have taken its refcount. As Saurav said, move the srr_err label down to avoid the unnecessary refcount release and NULL pointer free. Signed-off-by: Lin Yi <teroincn@163.com> Acked-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Lin Yi authored
If the cb_arg allocation fails, we must not release the struct orig_io_req refcount before we have taken its refcount. As Saurav said, move the rec_err label down to avoid the unnecessary refcount release and NULL pointer free. Signed-off-by: Lin Yi <teroincn@163.com> Acked-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Saurav Kashyap authored
Update the driver version to 2.12.10. Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Saurav Kashyap authored
- Reduce the sg_tablesize to 255. - Reduce the MAX BDs the firmware can handle to 255. - Return the IO to the midlayer (ML) if the BD count exceeds 255 after the split. - Correct the size of each BD split to 0xffff. Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
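A small sketch of the split constraints listed above; the macro and function names are illustrative, not the qedf definitions:

    #include <stdint.h>

    #define BD_SPLIT_SZ_SKETCH   0xffff   /* max bytes carried by one BD  */
    #define MAX_BDS_SKETCH       255      /* max BDs the firmware handles */

    /* Returns how many BDs one scatter-gather element splits into, or -1 if
     * the IO must be returned to the midlayer because the BD limit is hit. */
    static int split_sge(uint32_t sge_len, int bds_so_far)
    {
            int frags = (sge_len + BD_SPLIT_SZ_SKETCH - 1) / BD_SPLIT_SZ_SKETCH;

            if (bds_so_far + frags > MAX_BDS_SKETCH)
                    return -1;

            return frags;
    }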
-
Saurav Kashyap authored
If the firmware sends either a cleanup or an abort completion, it means the other one won't be sent. Clear out the flags for the other one as well. Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Saurav Kashyap authored
Separate out the abort and cleanup flags and completions to get a better understanding of what is being processed. Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chad Dupuis authored
In certain tests where the SCSI error handler issues an abort that is already outstanding, we will clean up the command so that the SCSI error handler can proceed. In some of these cases we were seeing a command mismatch:
kernel: scsi host2: bnx2fc: xid:0x42b eh_abort - refcnt = 2
kernel: bnx2fc: eh_abort: io_req (xid = 0x42b) already in abts processing
kernel: scsi host2: bnx2fc: xid:0x42b Entered bnx2fc_initiate_cleanup
kernel: scsi host2: bnx2fc: xid:0x42b CLEANUP io_req xid = 0x80b
kernel: scsi host2: bnx2fc: xid:0x80b cq_compl- cleanup resp rcvd
kernel: scsi host2: bnx2fc: xid:0x42b complete - rx_state = 9
kernel: scsi host2: bnx2fc: xid:0x42b Entered process_cleanup_compl refcnt = 2, cmd_type = 1
kernel: scsi host2: bnx2fc: xid:0x42b scsi_done. err_code = 0x7
kernel: scsi host2: bnx2fc: xid:0x42b sc=ffff8807f93dfb80, result=0x7, retries=0, allowed=5
kernel: ------------[ cut here ]------------
kernel: WARNING: at /root/rpmbuild/BUILD/netxtreme2-7.14.43/obj/default/bnx2fc-2.12.1/driver/bnx2fc_io.c:1347 bnx2fc_eh_abort+0x56f/0x680 [bnx2fc]()
kernel: xid=0x42b refcount=-1
kernel: Modules linked in:
kernel: nls_utf8 isofs sr_mod cdrom tcp_lp dm_round_robin xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 tun bridge ebtable_filter ebtables fuse ip6table_filter ip6_tables iptable_filter bnx2fc(OE) cnic(OE) uio fcoe libfcoe 8021q libfc garp mrp scsi_transport_fc stp llc scsi_tgt vfat fat dm_service_time intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ses enclosure ipmi_ssif i2c_core hpilo hpwdt wmi sg ipmi_devintf pcspkr ipmi_si ipmi_msghandler shpchp acpi_power_meter dm_multipath nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs sd_mod crc_t10dif
kernel: crct10dif_generic bnx2x(OE) crct10dif_pclmul crct10dif_common crc32c_intel mdio ptp pps_core libcrc32c smartpqi scsi_transport_sas fjes uas usb_storage dm_mirror dm_region_hash dm_log dm_mod
kernel: CPU: 9 PID: 2012 Comm: scsi_eh_2 Tainted: G W OE ------------ 3.10.0-514.el7.x86_64 #1
kernel: Hardware name: HPE Synergy 480 Gen10/Synergy 480 Gen10 Compute Module, BIOS I42 03/21/2018
kernel: ffff8807f25a3d98 0000000015e7fa0c ffff8807f25a3d50 ffffffff81685eac
kernel: ffff8807f25a3d88 ffffffff81085820 ffff8807f8e39000 ffff880801ff7468
kernel: ffff880801ff7610 0000000000002002 ffff8807f8e39014 ffff8807f25a3df0
kernel: Call Trace:
kernel: [<ffffffff81685eac>] dump_stack+0x19/0x1b
kernel: [<ffffffff81085820>] warn_slowpath_common+0x70/0xb0
kernel: [<ffffffff810858bc>] warn_slowpath_fmt+0x5c/0x80
kernel: [<ffffffff8168d842>] ? _raw_spin_lock_bh+0x12/0x50
kernel: [<ffffffffa0549e6f>] bnx2fc_eh_abort+0x56f/0x680 [bnx2fc]
kernel: [<ffffffff814570af>] scsi_error_handler+0x59f/0x8b0
kernel: [<ffffffff81456b10>] ? scsi_eh_get_sense+0x250/0x250
kernel: [<ffffffff810b052f>] kthread+0xcf/0xe0
kernel: [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140
kernel: [<ffffffff81696418>] ret_from_fork+0x58/0x90
kernel: [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140
kernel: ---[ end trace 42deb88f2032b111 ]---
The reason that there was a mismatch is that the SCSI command is actually returned from the cleanup handler. In previous testing, the type of cleanup notification we'd get from the CQE did not trigger the code that returned the SCSI command.
To overcome the previous behavior, we would put a reference in bnx2fc_abts_cleanup() to account for the SCSI command. However, in cases where the SCSI command is actually returned, we end up with an extra put. The fix for this is to only take the extra put in bnx2fc_abts_cleanup if the completion for the cleanup times out. Signed-off-by: Chad Dupuis <cdupuis@marvell.com> Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Chad Dupuis authored
For bnx2fc, the source FCoE MAC is stored in the fcoe_port struct in the data_src_mac field. Currently this is set in fcoe_ctlr_recv_flogi, which ends up setting it by simply using fc_fcoe_set_mac(), which only uses the default FCF-MAC. We still want to store the source FCoE MAC in port->data_src_mac, but we want to snoop the FLOGI response payload so as to set it using the following method: 1. If a granted_mac is found, use that. 2. If no granted_mac is there but there is an FCF-MAP from the FCF, then create the MAC from the FCF-MAP and the destination ID from the frame. 3. If there is no FCF-MAP, then use the spec default FCF-MAP and the destination ID from the frame. Signed-off-by: Chad Dupuis <cdupuis@marvell.com> Signed-off-by: Saurav Kashyap <skashyap@marvell.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
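A plain C sketch of the three-way selection; pick_data_src_mac() is a hypothetical helper, and 0x0EFC00 is the spec default FC-MAP used when the FCF does not supply one:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define FC_MAP_DEFAULT_SKETCH 0x0EFC00u    /* spec default FC-MAP */

    static void pick_data_src_mac(uint8_t mac[6],
                                  const uint8_t *granted_mac,  /* NULL if absent */
                                  bool have_fcf_map, uint32_t fcf_map,
                                  uint32_t d_id)               /* 24-bit FC D_ID */
    {
            uint32_t map = have_fcf_map ? fcf_map : FC_MAP_DEFAULT_SKETCH;

            if (granted_mac) {                 /* 1. granted MAC from the FLOGI response */
                    memcpy(mac, granted_mac, 6);
                    return;
            }

            /* 2./3. FC-MAP (from the FCF, or the spec default) in the upper
             * three bytes, destination ID from the frame in the lower three. */
            mac[0] = (map  >> 16) & 0xff;
            mac[1] = (map  >>  8) & 0xff;
            mac[2] =  map         & 0xff;
            mac[3] = (d_id >> 16) & 0xff;
            mac[4] = (d_id >>  8) & 0xff;
            mac[5] =  d_id        & 0xff;
    }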
-
Adrian Hunter authored
Add more Intel PCI IDs. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bean Huo authored
In the case where UPIU/DME request execution fails in the UFS device, ufs_bsg_request() will complete the failed bsg job by calling bsg_job_done(). Meanwhile, it returns this error status to the blk-mq layer, which then triggers blk-mq to complete this request again, causing the following panic:
Call trace:
ll_sc___cmpxchg_case_acq_32+0x4/0x20
complete+0x28/0x70
blk_end_sync_rq+0x24/0x30
blk_mq_end_request+0xb8/0x118
bsg_job_put+0x4c/0x58
bsg_complete+0x20/0x30
blk_done_softirq+0xb4/0xe8
do_softirq+0x154/0x3f0
run_ksoftirqd+0x4c/0x68
smpboot_thread_fn+0x22c/0x268
kthread+0x130/0x138
ret_from_fork+0x10/0x1c
Code: f84107fe d65f03c0 d503201f f9800011 (885ffc10)
---[ end trace d92825bff6326e66 ]---
Kernel panic - not syncing: Fatal exception in interrupt
This patch fixes the issue. The solution is to complete the ufs-bsg job only if no error happened. [mkp: commit description tweak] Fixes: df032bf2 (scsi: ufs: Add a bsg endpoint that supports UPIUs) Signed-off-by: Bean Huo <beanhuo@micron.com> Reviewed-by: Avri Altman <Avri.Altman@wdc.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
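A kernel-style sketch of the shape of the fix: complete the bsg job in the driver only on success, and otherwise let the returned error reach blk-mq so the request is completed exactly once. execute_upiu_or_dme() is a placeholder for the real request execution path:

    #include <linux/bsg-lib.h>

    int execute_upiu_or_dme(struct bsg_job *job);   /* placeholder */

    static int ufs_bsg_request_sketch(struct bsg_job *job)
    {
            int ret = execute_upiu_or_dme(job);

            /* Complete the job here only when it succeeded; on failure,
             * returning ret leaves completion to the block layer. */
            if (!ret)
                    bsg_job_done(job, ret, job->reply_payload_rcv_len);

            return ret;
    }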
-
Bean Huo authored
Change dev_dbg to dev_err so that the error information is printed when a DME command fails. Signed-off-by: Bean Huo <beanhuo@micron.com> Reviewed-by: Avri Altman <Avri.Altman@wdc.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 20 Jun, 2019 8 commits
-
-
Dongli Zhang authored
The 'affinity_hint_set' is not used any longer since commit 0d9f0a52 ("virtio_scsi: use virtio IRQ affinity"). Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Tomas Henzl authored
Use existing macros. No functional change. [mkp: typo] Signed-off-by: Tomas Henzl <thenzl@redhat.com> Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Tomas Henzl authored
Support is easier with all driver parameters visible in sysfs. Also I've replaced a constant with an octal permission. Signed-off-by: Tomas Henzl <thenzl@redhat.com> Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Lee Jones authored
New Qualcomm AArch64 based laptops are now available which use UFS as their primary data storage medium. These devices are supplied with ACPI support out of the box. This patch ensures the Qualcomm UFS driver will be bound when the "QCOM24A5" H/W device is advertised as present. Signed-off-by: Lee Jones <lee.jones@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct MR_PD_CFG_SEQ_NUM_SYNC { ... struct MR_PD_CFG_SEQ seq[1]; } __packed; Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes. So, replace the following form: sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) + (sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1)) with: struct_size(pd_sync, seq, MAX_PHYSICAL_DEVICES - 1) This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Finn Thain authored
A system bus error during a PDMA send operation can result in bytes being lost. Theoretically that could cause the target to remain in DATA OUT phase and the initiator (expecting a phase change) would time-out waiting for the Last Byte Sent flag. Should that happen, fail the transfer so the core driver will stop using PDMA with this target. Cc: Michael Schmitz <schmitzmic@gmail.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Tested-by: Stan Johnson <userm57@yahoo.com> Tested-by: Michael Schmitz <schmitzmic@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Finn Thain authored
Add support for Apple's custom "SCSI DMA" chip. This patch doesn't make use of its DMA capability. Just the PDMA capability is sufficient to improve sequential read throughput by a factor of 5. Cc: Michael Schmitz <schmitzmic@gmail.com> Cc: Joshua Thompson <funaho@jurai.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Tested-by: Stan Johnson <userm57@yahoo.com> Tested-by: Michael Schmitz <schmitzmic@gmail.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Finn Thain authored
A system bus error during a PDMA transfer can mess up the calculation of the transfer residual (the PDMA handshaking hardware lacks a byte counter). This results in data corruption. The algorithm in this patch anticipates a bus error by starting each transfer with a MOVE.B instruction. If a bus error is caught the transfer will be retried. If a bus error is caught later in the transfer (for a MOVE.W instruction) the transfer gets failed and subsequent requests for that target will use PIO instead of PDMA. This avoids the "!REQ and !ACK" error so the severity level of that message is reduced to KERN_DEBUG. Cc: Michael Schmitz <schmitzmic@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: stable@vger.kernel.org # v4.14+ Fixes: 3a0f64bf ("mac_scsi: Fix pseudo DMA implementation") Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Reported-by: Chris Jones <chris@martin-jones.com> Tested-by: Stan Johnson <userm57@yahoo.com> Tested-by: Michael Schmitz <schmitzmic@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-