Commit dd8a07f0 authored by José Roberto de Souza, committed by Lucas De Marchi

drm/xe: Skip VMAs pin when requesting signal to the last XE_EXEC

Doing an XE_EXEC with num_batch_buffer == 0 causes the syncs passed as
arguments to be signaled when the last real XE_EXEC completes.
But to do that, the code was first pinning all VMAs in
drm_gpuvm_exec_lock(); this patch removes that pinning, as it is not
required.

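For illustration, a minimal userspace sketch of such a signal-only
submission, assuming the xe uAPI from include/uapi/drm/xe_drm.h; the
fd, queue_id and syncobj_handle parameters below are hypothetical
placeholders, not part of this patch:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/xe_drm.h>

	/* Ask the queue's last in-flight XE_EXEC to signal a syncobj;
	 * num_batch_buffer == 0 means no new work is submitted.
	 */
	static int xe_signal_last_exec(int fd, uint32_t queue_id,
				       uint32_t syncobj_handle)
	{
		struct drm_xe_sync sync = {
			.type = DRM_XE_SYNC_TYPE_SYNCOBJ,
			.flags = DRM_XE_SYNC_FLAG_SIGNAL,
			.handle = syncobj_handle,
		};
		struct drm_xe_exec exec = {
			.exec_queue_id = queue_id,
			.num_syncs = 1,
			.syncs = (uintptr_t)&sync,
			.num_batch_buffer = 0,	/* no new work: signal only */
		};

		return ioctl(fd, DRM_IOCTL_XE_EXEC, &exec);
	}
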
This change also helps Mesa implement recovery from memory
over-commit, where it needs to unbind VMAs that are not needed when
the whole VM can't fit in GPU memory, but can only do the unbinding
once the last XE_EXEC has completed.
So with this change Mesa can get the signal it wants without running
into out-of-memory errors.

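As a sketch of that recovery flow (assuming libdrm's drmSyncobjWait()
and a syncobj handle signaled by a signal-only XE_EXEC as above), Mesa
would wait for the signal before unbinding, e.g. via
DRM_IOCTL_XE_VM_BIND with DRM_XE_VM_BIND_OP_UNMAP:

	#include <stdint.h>
	#include <xf86drm.h>

	/* Block until the last real XE_EXEC has completed; only then
	 * is it safe to unbind VMAs the workload does not need.
	 */
	static int wait_for_last_exec(int fd, uint32_t syncobj_handle)
	{
		return drmSyncobjWait(fd, &syncobj_handle, 1, INT64_MAX,
				      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL,
				      NULL);
	}
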
Fixes: eb9702ad ("drm/xe: Allow num_batch_buffer / num_binds == 0 in IOCTLs")
Cc: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Co-developed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240313171318.121066-1-jose.souza@intel.com
(cherry picked from commit 58480c1c)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
parent d58b4ef6
drivers/gpu/drm/xe/xe_exec.c
@@ -235,6 +235,29 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto err_unlock_list;
 	}
 
+	if (!args->num_batch_buffer) {
+		err = xe_vm_lock(vm, true);
+		if (err)
+			goto err_unlock_list;
+
+		if (!xe_vm_in_lr_mode(vm)) {
+			struct dma_fence *fence;
+
+			fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
+			if (IS_ERR(fence)) {
+				err = PTR_ERR(fence);
+				goto err_unlock_list;
+			}
+			for (i = 0; i < num_syncs; i++)
+				xe_sync_entry_signal(&syncs[i], NULL, fence);
+			xe_exec_queue_last_fence_set(q, vm, fence);
+			dma_fence_put(fence);
+		}
+
+		xe_vm_unlock(vm);
+		goto err_unlock_list;
+	}
+
 	vm_exec.vm = &vm->gpuvm;
 	vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
 	if (xe_vm_in_lr_mode(vm)) {
@@ -254,24 +277,6 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto err_exec;
 	}
 
-	if (!args->num_batch_buffer) {
-		if (!xe_vm_in_lr_mode(vm)) {
-			struct dma_fence *fence;
-
-			fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
-			if (IS_ERR(fence)) {
-				err = PTR_ERR(fence);
-				goto err_exec;
-			}
-			for (i = 0; i < num_syncs; i++)
-				xe_sync_entry_signal(&syncs[i], NULL, fence);
-			xe_exec_queue_last_fence_set(q, vm, fence);
-			dma_fence_put(fence);
-		}
-
-		goto err_exec;
-	}
-
 	if (xe_exec_queue_is_lr(q) && xe_exec_queue_ring_full(q)) {
 		err = -EWOULDBLOCK;	/* Aliased to -EAGAIN */
 		skip_retry = true;