Commit 7de1691a authored by Tomas Elf, committed by Daniel Vetter

drm/i915: Grab execlist spinlock to avoid post-reset concurrency issues.

Grab execlist lock when cleaning up execlist queues after GPU reset to avoid
concurrency problems between the context event interrupt handler and the reset
path immediately following a GPU reset.

* v2 (Chris Wilson):
Do execlist check and use simpler form of spinlock functions.
Signed-off-by: Tomas Elf <tomas.elf@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 51056723
@@ -2753,6 +2753,9 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 	 * are the ones that keep the context and ringbuffer backing objects
 	 * pinned in place.
 	 */
+	if (i915.enable_execlists) {
+		spin_lock_irq(&ring->execlist_lock);
 	while (!list_empty(&ring->execlist_queue)) {
 		struct drm_i915_gem_request *submit_req;
@@ -2766,6 +2769,8 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 		i915_gem_request_unreference(submit_req);
 	}
+		spin_unlock_irq(&ring->execlist_lock);
+	}
 
 	/*
 	 * We must free the requests after all the corresponding objects have
...