Commit 689122dc authored by Chris Wilson

Revert "drm/i915/gt: Wait for new requests in intel_gt_retire_requests()"

From inside an active timeline in the execbuf ioctl, we may try to
reclaim some space in the GGTT. We need GGTT space for all objects on
!full-ppgtt platforms, and for context images everywhere. However, to
free up space in the GGTT we may need to remove some pinned objects
(e.g. context images) that require flushing the idle barriers to remove.
For this we use the big hammer of intel_gt_wait_for_idle().

However, with commit 7936a22d ("drm/i915/gt: Wait for new requests in
intel_gt_retire_requests()") applied, that wait continues spinning if a
timeline is active but lacks requests, as is the case during execbuf
reservation. Spinning forever is quite time consuming, so revert that
commit and start again.
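
For reference, the caller's retry loop looks roughly like the sketch below (paraphrased from intel_gt_wait_for_idle() in intel_gt_requests.c, not a verbatim copy of the tree at this commit). It polls intel_gt_retire_requests_timeout() until that helper reports idle by returning 0, so a helper that reports busy whenever active_list is non-empty, as the reverted commit did, keeps this loop spinning even though there is nothing left to retire:

/*
 * Rough sketch of the retry loop that consumes the retire helper's
 * return value; details may differ from the tree at this commit.
 */
int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
{
	/* If the device is asleep, we have no requests outstanding */
	if (!intel_gt_pm_is_awake(gt))
		return 0;

	/*
	 * Keep retiring until the helper reports idle (0) or the timeout
	 * is consumed.  A timeline that is active but holds no requests
	 * must not be reported as busy, or this loop never terminates.
	 */
	while ((timeout = intel_gt_retire_requests_timeout(gt, timeout)) > 0) {
		cond_resched();
		if (signal_pending(current))
			return -EINTR;
	}

	return timeout;
}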

In practice, the effect commit 7936a22d was trying to achieve is
accomplished by commit 1683d24c ("drm/i915/gt: Move new timelines
to the end of active_list"), so there is no immediate rush to replace
the looping.

Testcase: igt/gem_exec_reloc/basic-range
Fixes: 7936a22d ("drm/i915/gt: Wait for new requests in intel_gt_retire_requests()")
References: 1683d24c ("drm/i915/gt: Move new timelines to the end of active_list")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191121071044.97798-1-chris@chris-wilson.co.uk
parent e18417b4
drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -33,6 +33,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 {
 	struct intel_gt_timelines *timelines = &gt->timelines;
 	struct intel_timeline *tl, *tn;
+	unsigned long active_count = 0;
 	bool interruptible;
 	LIST_HEAD(free);
 
@@ -44,8 +45,10 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 
 	spin_lock(&timelines->lock);
 	list_for_each_entry_safe(tl, tn, &timelines->active_list, link) {
-		if (!mutex_trylock(&tl->mutex))
+		if (!mutex_trylock(&tl->mutex)) {
+			active_count++; /* report busy to caller, try again? */
 			continue;
+		}
 
 		intel_timeline_get(tl);
 		GEM_BUG_ON(!atomic_read(&tl->active_count));
@@ -72,6 +75,8 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 		list_safe_reset_next(tl, tn, link);
 		if (atomic_dec_and_test(&tl->active_count))
 			list_del(&tl->link);
+		else
+			active_count += !!rcu_access_pointer(tl->last_request.fence);
 
 		mutex_unlock(&tl->mutex);
 
@@ -86,7 +91,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 	list_for_each_entry_safe(tl, tn, &free, link)
 		__intel_timeline_free(&tl->kref);
 
-	return list_empty(&timelines->active_list) ? 0 : timeout;
+	return active_count ? timeout : 0;
 }
 
 int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
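
For illustration, the post-revert retirement pass reduces to roughly the following (an abridged sketch: the timeline reference counting, the dropping of timelines->lock across retirement, and the deferred free list are elided). A timeline only counts towards active_count if its mutex was contended or if it still carries a last request after retiring, so a timeline that has merely been entered by execbuf no longer keeps the caller polling:

/* Abridged sketch of intel_gt_retire_requests_timeout() after the revert. */
long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
{
	struct intel_gt_timelines *timelines = &gt->timelines;
	struct intel_timeline *tl, *tn;
	unsigned long active_count = 0;

	spin_lock(&timelines->lock);
	list_for_each_entry_safe(tl, tn, &timelines->active_list, link) {
		if (!mutex_trylock(&tl->mutex)) {
			active_count++; /* report busy to caller, try again? */
			continue;
		}

		atomic_inc(&tl->active_count); /* pin the list element */

		/* ... retire the completed requests on this timeline ... */

		list_safe_reset_next(tl, tn, link);
		if (atomic_dec_and_test(&tl->active_count))
			list_del(&tl->link); /* no users left, drop from the list */
		else
			active_count += !!rcu_access_pointer(tl->last_request.fence);

		mutex_unlock(&tl->mutex);
	}
	spin_unlock(&timelines->lock);

	return active_count ? timeout : 0;
}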