Commit 18afa288 authored by Chris Wilson

Revert "drm/i915: Skip execlists_dequeue() early if the list is empty"

This reverts commit 6c943de6 ("drm/i915: Skip execlists_dequeue()
early if the list is empty").

The validity of using READ_ONCE there depends upon having a mb to
coordinate the assignment of engine->execlist_first inside
submit_request() and checking prior to taking the spinlock in
execlists_dequeue(). We wrote "the update to TASKLET_SCHED incurs a
memory barrier making this cross-cpu checking safe", but failed to
notice that this mb was *conditional* on the execlists being ready, i.e.
there wasn't the required mb when it was most necessary!

We could install an unconditional memory barrier to fixup the
READ_ONCE():

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7dd732cb9f57..1ed164b16d44 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -616,6 +616,7 @@ static void execlists_submit_request(struct drm_i915_gem_request *request)
 
 	if (insert_request(&request->priotree, &engine->execlist_queue)) {
 		engine->execlist_first = &request->priotree.node;
+		smp_wmb();
 		if (execlists_elsp_ready(engine))

But we have opted to remove the race as it should be rarely effective,
and saves us having to explain the necessary memory barriers which we
quite clearly failed at.
Reported-and-tested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Fixes: 6c943de6 ("drm/i915: Skip execlists_dequeue() early if the list is empty")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170329100052.29505-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
parent a79a524e
@@ -662,18 +662,6 @@ static bool i915_guc_dequeue(struct intel_engine_cs *engine)
 	struct rb_node *rb;
 	bool submit = false;
 
-	/* After execlist_first is updated, the tasklet will be rescheduled.
-	 *
-	 * If we are currently running (inside the tasklet) and a third
-	 * party queues a request and so updates engine->execlist_first under
-	 * the spinlock (which we have elided), it will atomically set the
-	 * TASKLET_SCHED flag causing the us to be re-executed and pick up
-	 * the change in state (the update to TASKLET_SCHED incurs a memory
-	 * barrier making this cross-cpu checking safe).
-	 */
-	if (!READ_ONCE(engine->execlist_first))
-		return false;
-
 	spin_lock_irq(&engine->timeline->lock);
 	rb = engine->execlist_first;
 	while (rb) {
@@ -402,18 +402,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	struct rb_node *rb;
 	bool submit = false;
 
-	/* After execlist_first is updated, the tasklet will be rescheduled.
-	 *
-	 * If we are currently running (inside the tasklet) and a third
-	 * party queues a request and so updates engine->execlist_first under
-	 * the spinlock (which we have elided), it will atomically set the
-	 * TASKLET_SCHED flag causing the us to be re-executed and pick up
-	 * the change in state (the update to TASKLET_SCHED incurs a memory
-	 * barrier making this cross-cpu checking safe).
-	 */
-	if (!READ_ONCE(engine->execlist_first))
-		return;
-
 	last = port->request;
 	if (last)
 		/* WaIdleLiteRestore:bdw,skl