Commit 88a4655e authored by Chris Wilson

drm/i915/gt: Adapt engine_park synchronisation rules for engine_retire

In the next patch, we will introduce a new asynchronous retirement
worker, fed by execlists CS events. Here we may queue a retirement as
soon as a request is submitted to HW (and completes instantly), and we
also want to process that retirement as early as possible and cannot
afford to postpone it (as there may not be another opportunity to
retire it for a few seconds). To allow the new async retirer to run in
parallel with our submission, pull the __i915_request_queue() call
(which passes the request to HW) inside the timelines spinlock so that
the retirement cannot release the timeline before we have completed
the submission.

v2: Actually, to play nicely with engine_retire, we have to raise the
timeline.active_count before releasing the HW. intel_gt_retire_requests()
is still serialised by the outer lock so it cannot see this
intermediate state, and engine_retire is serialised by HW submission.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191125105858.1718307-2-chris@chris-wilson.co.uk
parent de5825be
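
The patch below is an instance of a general publish-under-lock pattern:
a producer that makes work visible to a concurrent reaper must complete
its entire handoff under the same lock the reaper takes, or the reaper
may tear the resource down mid-handoff. A minimal userspace sketch of
that rule (pthreads; all names here are hypothetical stand-ins for
illustration, not the i915 implementation):

    /* sketch.c -- build with: cc sketch.c -pthread */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Stand-ins for timelines->lock and tl->active_count. */
    static pthread_mutex_t timelines_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int active_count;

    /* Producer: the analogue of __queue_and_release_pm(). */
    static void *submit(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&timelines_lock);

            /* 1. Become visible to the retirer (it takes the same lock). */
            atomic_fetch_add(&active_count, 1);

            /* 2. Hand off to "HW"; the work may complete instantly, but
             *    the retirer cannot run until we drop the lock. */
            printf("queued\n");

            /* 3. Only now may new submitters (and retirement) proceed. */
            pthread_mutex_unlock(&timelines_lock);
            return NULL;
    }

    /* Reaper: the analogue of engine_retire()/intel_gt_retire_requests(). */
    static void *retire(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&timelines_lock);
            /* Only a fully queued submission can be observed here, so the
             * counter never underflows. */
            if (atomic_load(&active_count) > 0)
                    atomic_fetch_sub(&active_count, 1);
            pthread_mutex_unlock(&timelines_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, submit, NULL);
            pthread_create(&b, NULL, retire, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return atomic_load(&active_count); /* 0 or 1, never negative */
    }

In the patch itself the role of this mutex is played by timelines->lock,
which intel_gt_retire_requests() and the new engine_retire worker both
take before they can retire a timeline.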
@@ -74,16 +74,33 @@ static inline void __timeline_mark_unlock(struct intel_context *ce,
 
 #endif /* !IS_ENABLED(CONFIG_LOCKDEP) */
 
 static void
-__intel_timeline_enter_and_release_pm(struct intel_timeline *tl,
-                                      struct intel_engine_cs *engine)
+__queue_and_release_pm(struct i915_request *rq,
+                       struct intel_timeline *tl,
+                       struct intel_engine_cs *engine)
 {
         struct intel_gt_timelines *timelines = &engine->gt->timelines;
 
+        GEM_TRACE("%s\n", engine->name);
+
+        /*
+         * We have to serialise all potential retirement paths with our
+         * submission, as we don't want to underflow either the
+         * engine->wakeref.counter or our timeline->active_count.
+         *
+         * Equally, we cannot allow a new submission to start until
+         * after we finish queueing, nor could we allow that submitter
+         * to retire us before we are ready!
+         */
+
         spin_lock(&timelines->lock);
 
+        /* Let intel_gt_retire_requests() retire us (acquired under lock) */
         if (!atomic_fetch_inc(&tl->active_count))
                 list_add_tail(&tl->link, &timelines->active_list);
 
+        /* Hand the request over to HW and so engine_retire() */
+        __i915_request_queue(rq, NULL);
+
+        /* Let new submissions commence (and maybe retire this timeline) */
         __intel_wakeref_defer_park(&engine->wakeref);
 
         spin_unlock(&timelines->lock);
@@ -148,10 +165,8 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 
         rq->sched.attr.priority = I915_PRIORITY_BARRIER;
         __i915_request_commit(rq);
-        __i915_request_queue(rq, NULL);
 
-        /* Expose ourselves to intel_gt_retire_requests() and new submission */
-        __intel_timeline_enter_and_release_pm(ce->timeline, engine);
+        /* Expose ourselves to the world */
+        __queue_and_release_pm(rq, ce->timeline, engine);
 
         result = false;
 
 out_unlock:
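
With the patch applied, timelines->lock covers the whole handoff:
marking the timeline active, passing the request to HW via
__i915_request_queue(), and releasing the wakeref via
__intel_wakeref_defer_park(). Any retirement path taking the same lock
therefore observes either "not yet queued" or "queued and unparked",
never the intermediate state that could underflow
engine->wakeref.counter or timeline->active_count.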