Commit c9c0f5ea authored by Chris Wilson, committed by Daniel Vetter

drm/i915: During shrink_all we only need to idle the GPU

We can forgo an evict-everything here as the shrinker operation itself
will unbind any vma as required. If we explicitly idle the GPU through a
switch to the default context, we not only create a request in an
illegal context (e.g. whilst shrinking during execbuf with a request
already allocated), but switching to the default context will not free
up the memory backing the active contexts - unless in the unlikely
situation that context had already been closed (and just kept alive by
being the current context). The saving is near zero and the danger real.

To compensate for the loss of the forced retire, add a couple of
retire-requests to i915_gem_shrink() - this should help free up any
transient cache from the requests.

Note that the second retire_requests is for the benefit of the
hand-rolled execlist ctx active tracking: We need to manually kick
requests to get those unpinned again. Once that's fixed we can try to
remove this again.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add summary of why we need a pile of retire_requests.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 3abafa53
@@ -86,6 +86,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 	unsigned long count = 0;
 
 	trace_i915_gem_shrink(dev_priv, target, flags);
+	i915_gem_retire_requests(dev_priv->dev);
 
 	/*
 	 * As we may completely rewrite the (un)bound list whilst unbinding
@@ -141,6 +142,8 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 		list_splice(&still_in_list, phase->list);
 	}
 
+	i915_gem_retire_requests(dev_priv->dev);
+
 	return count;
 }
 
@@ -160,7 +163,6 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
  */
 unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv)
 {
-	i915_gem_evict_everything(dev_priv->dev);
 	return i915_gem_shrink(dev_priv, -1UL,
 			       I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
 }
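
For orientation, this is roughly how the touched functions read once the hunks above are applied - a sketch reconstructed from the diff and the commit message, with the body of the shrink phase loop elided, not the verbatim file contents:

unsigned long
i915_gem_shrink(struct drm_i915_private *dev_priv,
		unsigned long target, unsigned flags)
{
	unsigned long count = 0;

	trace_i915_gem_shrink(dev_priv, target, flags);
	i915_gem_retire_requests(dev_priv->dev);	/* new: free transient per-request caches */

	/* ... walk the unbound/bound phases, unbinding vmas and purging objects ... */

	i915_gem_retire_requests(dev_priv->dev);	/* new: kick execlists' hand-rolled ctx active tracking */

	return count;
}

unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv)
{
	/* i915_gem_evict_everything() is gone: the shrinker itself
	 * unbinds any vma as required */
	return i915_gem_shrink(dev_priv, -1UL,
			       I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
}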