Commit 9766096d authored by Kashyap Desai, committed by James Bottomley

[SCSI] mptsas: FW event thread and scsi mid layer deadlock in SYNCHRONIZE CACHE command

Normally, in the HBA reset path the MPT driver flushes any existing work in the
current work queue (mpt/0). From the MPT driver's point of view this is mostly a
dummy activity, since the HBA reset turns work queue events off, so queued work
items simply return without doing anything.

But a work item that has already started (is half done) must be allowed to
finish.

Under that condition we are stuck forever in a deadlock between the SCSI
midlayer and the MPT driver: sd_sync_cache() waits forever because the HBA is
not in the running state, and the HBA can never reach the running state because
sd_sync_cache() is being called from the HBA reset context itself.
The new code no longer waits for half-finished work to complete
before returning from the HBA reset.

Once we are out of the HBA reset, the EH thread changes the host state from
recovery to running, and work that was waiting for the HBA to be running can
then finish. The new code turns firmware events back on from another dedicated
work item, the topology rescan.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
parent fea98403
@@ -325,7 +325,6 @@ mptsas_cleanup_fw_event_q(MPT_ADAPTER *ioc)
 {
 	struct fw_event_work *fw_event, *next;
 	struct mptsas_target_reset_event *target_reset_list, *n;
-	u8	flush_q;
 	MPT_SCSI_HOST	*hd = shost_priv(ioc->sh);

 	/* flush the target_reset_list */
@@ -345,15 +344,10 @@ mptsas_cleanup_fw_event_q(MPT_ADAPTER *ioc)
 	    !ioc->fw_event_q || in_interrupt())
 		return;

-	flush_q = 0;
 	list_for_each_entry_safe(fw_event, next, &ioc->fw_event_list, list) {
 		if (cancel_delayed_work(&fw_event->work))
 			mptsas_free_fw_event(ioc, fw_event);
-		else
-			flush_q = 1;
 	}
-	if (flush_q)
-		flush_workqueue(ioc->fw_event_q);
 }
@@ -1279,7 +1273,6 @@ mptsas_ioc_reset(MPT_ADAPTER *ioc, int reset_phase)
 		}
 		mptsas_cleanup_fw_event_q(ioc);
 		mptsas_queue_rescan(ioc);
-		mptsas_fw_event_on(ioc);
 		break;
 	default:
 		break;
@@ -1599,6 +1592,7 @@ mptsas_firmware_event_work(struct work_struct *work)
 		mptsas_scan_sas_topology(ioc);
 		ioc->in_rescan = 0;
 		mptsas_free_fw_event(ioc, fw_event);
+		mptsas_fw_event_on(ioc);
 		return;
 	}