Commit fbb30a5c authored by Chris Wilson

drm/i915: Flush to GTT domain all GGTT bound objects after hibernation

Recently I have been applying an optimisation to avoid stalling and
clflushing GGTT objects based on their current binding. That is, we only
set-to-gtt-domain upon first bind. However, on hibernation the objects
remain bound, but they are in the CPU domain. Currently (since commit
975f7ff4 ("drm/i915: Lazily migrate the objects after hibernation"))
we only flush scanout objects, as all other objects are expected to be
flushed prior to use. That breaks down in the face of the runtime
optimisation above, and we need to flush all GGTT-pinned objects
(essentially ringbuffers).
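
To illustrate the failure mode, a minimal standalone sketch (a toy model,
not the i915 code; gem_obj, flush_to_gtt and ggtt_bind are invented names):
the flush is keyed off the binding, and the binding survives hibernation
while the cache state does not.

#include <stdbool.h>
#include <stdio.h>

struct gem_obj {
        bool bound_to_ggtt;     /* PTEs written into the GGTT */
        bool in_gtt_domain;     /* CPU caches flushed for GTT access */
};

static void flush_to_gtt(struct gem_obj *obj)
{
        /* stands in for the stall + clflush of set-to-gtt-domain */
        obj->in_gtt_domain = true;
}

static void ggtt_bind(struct gem_obj *obj)
{
        /* the optimisation: only set-to-gtt-domain upon first bind */
        if (!obj->bound_to_ggtt) {
                flush_to_gtt(obj);
                obj->bound_to_ggtt = true;
        }
}

int main(void)
{
        struct gem_obj ring = { false, false };

        ggtt_bind(&ring);       /* first bind: flushed as expected */

        /* hibernate/resume: the binding survives, but the pages were
         * restored through the CPU and are back in the CPU domain */
        ring.in_gtt_domain = false;

        ggtt_bind(&ring);       /* no-op: still "bound", never reflushed */
        printf("coherent after resume: %s\n",
               ring.in_gtt_domain ? "yes" : "no");      /* prints "no" */
        return 0;
}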

To reduce the burden of extra clflushes, we only flush those objects we
cannot discard from the GGTT. Everything pinned to the scanout, the
current contexts and the ringbuffers will be flushed and rebound. Other
objects, such as inactive contexts, will be left unbound and in the CPU
domain until first use after resuming.
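
The resulting policy can be modelled as below (again a hypothetical sketch
with invented names, mirroring only the 0-on-success return convention of
i915_vma_unbind(): whatever refuses to unbind is rebound and must then be
flushed).

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct vma_sketch {
        bool pinned;    /* e.g. scanout, current context, ringbuffer */
        bool bound;     /* PTEs present in the GGTT */
};

/* mirrors the 0-on-success convention of i915_vma_unbind() */
static int try_unbind(struct vma_sketch *vma)
{
        if (vma->pinned)
                return -EBUSY;
        vma->bound = false;
        return 0;
}

/* returns true when the VMA stays GGTT-bound and so must be flushed */
static bool restore_vma(struct vma_sketch *vma)
{
        if (!try_unbind(vma))
                return false;   /* unbound: left in the CPU domain until use */

        vma->bound = true;      /* rebound in place (PIN_UPDATE in i915) */
        return true;            /* caller does set-to-gtt-domain (clflush) */
}

int main(void)
{
        struct vma_sketch ringbuf = { true, true };
        struct vma_sketch idle_ctx = { false, true };

        printf("ringbuffer needs flush: %d\n", restore_vma(&ringbuf));
        printf("idle context needs flush: %d\n", restore_vma(&idle_ctx));
        return 0;
}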

Fixes: 7abc98fa ("drm/i915: Only change the context object's domain...")
Fixes: 57e88531 ("drm/i915: Use VMA for ringbuffer tracking")
References: https://bugs.freedesktop.org/show_bug.cgi?id=94722
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: David Weinehall <david.weinehall@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909201957.2499-1-chris@chris-wilson.co.uk
parent 7aab2d53
@@ -3237,8 +3237,7 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 {
        struct drm_i915_private *dev_priv = to_i915(dev);
        struct i915_ggtt *ggtt = &dev_priv->ggtt;
-       struct drm_i915_gem_object *obj;
-       struct i915_vma *vma;
+       struct drm_i915_gem_object *obj, *on;
 
        i915_check_and_clear_faults(dev_priv);
 
@@ -3246,20 +3245,32 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
        ggtt->base.clear_range(&ggtt->base, ggtt->base.start, ggtt->base.total,
                               true);
 
-       /* Cache flush objects bound into GGTT and rebind them. */
-       list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+       ggtt->base.closed = true; /* skip rewriting PTE on VMA unbind */
+
+       /* clflush objects bound into the GGTT and rebind them. */
+       list_for_each_entry_safe(obj, on,
+                                &dev_priv->mm.bound_list, global_list) {
+               bool ggtt_bound = false;
+               struct i915_vma *vma;
+
                list_for_each_entry(vma, &obj->vma_list, obj_link) {
                        if (vma->vm != &ggtt->base)
                                continue;
 
+                       if (!i915_vma_unbind(vma))
+                               continue;
+
                        WARN_ON(i915_vma_bind(vma, obj->cache_level,
                                              PIN_UPDATE));
+                       ggtt_bound = true;
                }
 
-               if (obj->pin_display)
+               if (ggtt_bound)
                        WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
        }
 
+       ggtt->base.closed = false;
+
        if (INTEL_INFO(dev)->gen >= 8) {
                if (IS_CHERRYVIEW(dev) || IS_BROXTON(dev))
                        chv_setup_private_ppat(dev_priv);
...