Commit 09137e94 authored by Chris Wilson

drm/i915/gem: Unpin idle contexts from kswapd reclaim

We removed retiring requests from the shrinker in order to decouple the
mutexes from reclaim in preparation for unravelling the struct_mutex.
The impact of not retiring is that we are much less aggressive in making
global objects available for shrinking, as such objects remain pinned
until they are flushed by a heartbeat pulse following the last retired
request along their timeline. In order to ensure that pulse occurs in
time for memory reclamation, we should kick it from kswapd.

The catch is that we have added some flush_work() into the retirement
phase (to ensure that we reach a global idle in a timely manner), but
these flush_work() are not eligible (i.e. do not belong to WQ_MEM_RECLAIM)
for use from inside kswapd. To avoid flushing those workqueues, we teach
the retirer not to do so unless we are actually waiting, and only do the
plain retire from inside the shrinker.

Note that for execlists, we already retire completed contexts as they
are scheduled out, so it should not be keeping global state
unnecessarily pinned. The legacy ringbuffer however...

References: 9e953980 ("drm/i915: Remove waiting & retiring from shrinker paths")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200708173748.32734-1-chris@chris-wilson.co.uk
parent a00eda7d
@@ -13,6 +13,8 @@
 #include <linux/dma-buf.h>
 #include <linux/vmalloc.h>
 
+#include "gt/intel_gt_requests.h"
+
 #include "i915_trace.h"
static bool swap_available(void)
@@ -111,15 +113,6 @@ i915_gem_shrink(struct drm_i915_private *i915,
 	unsigned long count = 0;
 	unsigned long scanned = 0;
 
-	/*
-	 * When shrinking the active list, we should also consider active
-	 * contexts. Active contexts are pinned until they are retired, and
-	 * so can not be simply unbound to retire and unpin their pages. To
-	 * shrink the contexts, we must wait until the gpu is idle and
-	 * completed its switch to the kernel context. In short, we do
-	 * not have a good mechanism for idling a specific context.
-	 */
-
 	trace_i915_gem_shrink(i915, target, shrink);
/*
@@ -133,6 +126,20 @@ i915_gem_shrink(struct drm_i915_private *i915,
 		shrink &= ~I915_SHRINK_BOUND;
 	}
 
+	/*
+	 * When shrinking the active list, we should also consider active
+	 * contexts. Active contexts are pinned until they are retired, and
+	 * so can not be simply unbound to retire and unpin their pages. To
+	 * shrink the contexts, we must wait until the gpu is idle and
+	 * completed its switch to the kernel context. In short, we do
+	 * not have a good mechanism for idling a specific context, but
+	 * what we can do is give them a kick so that we do not keep idle
+	 * contexts around longer than is necessary.
+	 */
+	if (shrink & I915_SHRINK_ACTIVE)
+		/* Retire requests to unpin all idle contexts */
+		intel_gt_retire_requests(&i915->gt);
+
 	/*
 	 * As we may completely rewrite the (un)bound list whilst unbinding
 	 * (due to retiring requests) we have to strictly process only
@@ -31,12 +31,15 @@ static bool engine_active(const struct intel_engine_cs *engine)
 	return !list_empty(&engine->kernel_context->timeline->requests);
 }
 
-static bool flush_submission(struct intel_gt *gt)
+static bool flush_submission(struct intel_gt *gt, long timeout)
 {
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
 	bool active = false;
 
+	if (!timeout)
+		return false;
+
 	if (!intel_gt_pm_is_awake(gt))
 		return false;
@@ -139,7 +142,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
 	if (unlikely(timeout < 0))
 		timeout = -timeout, interruptible = false;
 
-	flush_submission(gt); /* kick the ksoftirqd tasklets */
+	flush_submission(gt, timeout); /* kick the ksoftirqd tasklets */
 
 	spin_lock(&timelines->lock);
 	list_for_each_entry_safe(tl, tn, &timelines->active_list, link) {
 		if (!mutex_trylock(&tl->mutex)) {
@@ -194,7 +197,7 @@ out_active:	spin_lock(&timelines->lock);
 	list_for_each_entry_safe(tl, tn, &free, link)
 		__intel_timeline_free(&tl->kref);
 
-	if (flush_submission(gt)) /* Wait, there's more! */
+	if (flush_submission(gt, timeout)) /* Wait, there's more! */
 		active_count++;
 
 	return active_count ? timeout : 0;