Commit 0d73e7a0 authored by Chris Wilson

drm/i915: Mark the device as wedged from the beginning of set-wedged

Reduce the window of opportunity for set-wedged being called
concurrently with reset (after i915_reset() has performed the
i915_gem_unset_wedged()) by moving the set_bit(I915_WEDGED) to before we
complete the inflight requests. When i915_reset() is blocked on a
request, completing that request may allow it to start and begin
resetting the GPU before i915_gem_set_wedged() has finished (and so
before set-wedged has marked the device as wedged). As such,
i915_gem_init_hw() may see a wedged device even from inside
i915_reset().
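
As an aside, the ordering at the heart of the fix can be sketched with
plain C11 atomics in userspace. This is a minimal illustration, not
driver code: wedged and request_done are hypothetical stand-ins for
I915_WEDGED and request completion, and the seq_cst stores play the
role of the set_bit() + smp_mb__after_atomic() pair added below.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_bool wedged;        /* stand-in for I915_WEDGED       */
	static atomic_bool request_done;  /* "the inflight request is done" */

	/* Stand-in for i915_reset() blocked on a request. */
	static void *reset_thread(void *arg)
	{
		while (!atomic_load(&request_done))
			;	/* wait for the request to complete */
		/*
		 * Because wedged was published before request_done, once we
		 * are unblocked we must observe it, and can bail out instead
		 * of racing the reset against set-wedged.
		 */
		if (atomic_load(&wedged))
			puts("reset: device already wedged, aborting");
		return NULL;
	}

	/* Stand-in for i915_gem_set_wedged(). */
	static void *set_wedged_thread(void *arg)
	{
		atomic_store(&wedged, true);       /* mark wedged first...      */
		atomic_store(&request_done, true); /* ...then complete requests */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, reset_thread, NULL);
		pthread_create(&b, NULL, set_wedged_thread, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}

With the stores in this order, any waiter unblocked by the completion is
guaranteed to observe the wedged flag; reversing them reopens the window
this patch closes.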

References: 36703e79 ("drm/i915: Break modeset deadlocks on reset")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180207151350.20883-1-chris@chris-wilson.co.uk
parent 11a18f63
@@ -3205,6 +3205,9 @@ void i915_gem_set_wedged(struct drm_i915_private *i915)
 		intel_engine_dump(engine, &p, "%s\n", engine->name);
 	}
 
+	set_bit(I915_WEDGED, &i915->gpu_error.flags);
+	smp_mb__after_atomic();
+
 	/*
 	 * First, stop submission to hw, but do not yet complete requests by
 	 * rolling the global seqno forward (since this would complete requests
@@ -3244,7 +3247,8 @@ void i915_gem_set_wedged(struct drm_i915_private *i915)
 	for_each_engine(engine, i915, id) {
 		unsigned long flags;
 
-		/* Mark all pending requests as complete so that any concurrent
+		/*
+		 * Mark all pending requests as complete so that any concurrent
 		 * (lockless) lookup doesn't try and wait upon the request as we
 		 * reset it.
 		 */
@@ -3254,7 +3258,6 @@ void i915_gem_set_wedged(struct drm_i915_private *i915)
 		spin_unlock_irqrestore(&engine->timeline->lock, flags);
 	}
 
-	set_bit(I915_WEDGED, &i915->gpu_error.flags);
 	wake_up_all(&i915->gpu_error.reset_queue);
 }