Commit a0862cf2 authored by Nirmoy Das

drm/xe: Refactor default device atomic settings

The default behavior of device atomics depends on the
VM type and the buffer allocation type. Device atomics are
expected to function with all types of allocations for
traditional applications/APIs. Additionally, in compute/SVM
API scenarios with fault mode or LR mode VMs, device atomics
must work with single-region allocations. In all other cases
device atomics should be disabled by default; the same applies
on platforms where we know device atomics don't work on
particular allocation types. (The resulting decision logic is
condensed in the sketch after the commit metadata below.)

v3: Fault mode requires LR mode, so only check for LR mode
    to determine the compute API (Jose).
    Handle migration of SMEM+LMEM BOs to LMEM, where device
    atomics are expected to work (Brian).
v2: Fix platform checks to correct atomics behaviour on PVC.
Acked-by: Michal Mrozek <michal.mrozek@intel.com>
Reviewed-by: Oak Zeng <oak.zeng@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240430162529.21588-6-nirmoy.das@intel.com
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
parent a4b72576
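
For reference, the policy described in the commit message condenses to the
decision logic below. This is a standalone sketch, not code from the patch;
the helper name want_atomic_enable_pte() and its boolean parameters are
hypothetical simplifications of the driver state consulted in the diff that
follows.

#include <stdbool.h>

/*
 * Hypothetical condensation of the device-atomics policy (not from the
 * patch): decide whether the atomic-enable (AE) PTE bit should be set
 * for a binding, given the VM mode and the allocation's placement.
 */
static bool want_atomic_enable_pte(bool vm_in_lr_mode,
                                   bool single_placement,
                                   bool backed_by_devmem,
                                   bool atomics_on_smem_supported)
{
        bool ae;

        if (!vm_in_lr_mode)
                /* traditional API: atomics expected on all allocations */
                ae = true;
        else
                /* compute/SVM API: single-region BOs, or BOs currently in LMEM */
                ae = single_placement || backed_by_devmem;

        /* platform quirk (e.g. PVC): no device atomics on system memory */
        if (!atomics_on_smem_supported && !backed_by_devmem)
                ae = false;

        return ae;
}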
@@ -619,9 +619,40 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
 	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[tile->id];
 	int ret;
 
-	if ((vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) &&
-	    (is_devmem || !IS_DGFX(xe)))
-		xe_walk.default_pte |= XE_USM_PPGTT_PTE_AE;
+	/**
+	 * Default atomic expectations for different allocation scenarios are as follows:
+	 *
+	 * 1. Traditional API: When the VM is not in LR mode:
+	 *    - Device atomics are expected to function with all allocations.
+	 *
+	 * 2. Compute/SVM API: When the VM is in LR mode:
+	 *    - Device atomics are the default behavior when the bo is placed in a single region.
+	 *    - In all other cases device atomics will be disabled with AE=0 until an application
+	 *      requests differently using an ioctl like madvise.
+	 */
+	if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
+		if (xe_vm_in_lr_mode(xe_vma_vm(vma))) {
+			if (bo && xe_bo_has_single_placement(bo))
+				xe_walk.default_pte |= XE_USM_PPGTT_PTE_AE;
+			/**
+			 * If a SMEM+LMEM allocation is backed by SMEM, a device
+			 * atomic will cause a GPU page fault; the BO then gets
+			 * migrated to LMEM, so bind such allocations with
+			 * device atomics enabled.
+			 */
+			else if (is_devmem && !xe_bo_has_single_placement(bo))
+				xe_walk.default_pte |= XE_USM_PPGTT_PTE_AE;
+		} else {
+			xe_walk.default_pte |= XE_USM_PPGTT_PTE_AE;
+		}
+
+		/**
+		 * Unset AE if the platform (PVC) doesn't support it on this
+		 * allocation type.
+		 */
+		if (!xe->info.has_device_atomics_on_smem && !is_devmem)
+			xe_walk.default_pte &= ~XE_USM_PPGTT_PTE_AE;
+	}
 
 	if (is_devmem) {
 		xe_walk.default_pte |= XE_PPGTT_PTE_DM;
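
The xe_bo_has_single_placement() helper used in the hunk above comes from
earlier in this series. A plausible minimal form is shown here only as an
illustration; the exact field layout of struct xe_bo is an assumption, not
quoted from the series.

/*
 * Sketch: a BO has a single placement when its TTM placement list
 * contains exactly one entry (assumed bookkeeping, not verbatim code).
 */
static bool xe_bo_has_single_placement(struct xe_bo *bo)
{
        return bo->placement.num_placement == 1;
}

A single-placement BO can never migrate between SMEM and LMEM, which is why
it is the safe case for enabling device atomics under LR mode.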
@@ -888,7 +888,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	for_each_tile(tile, vm->xe, id)
 		vma->tile_mask |= 0x1 << id;
 
-	if (GRAPHICS_VER(vm->xe) >= 20 || vm->xe->info.platform == XE_PVC)
+	if (vm->xe->info.has_atomic_enable_pte_bit)
 		vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
 
 	vma->pat_index = pat_index;
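
The new has_atomic_enable_pte_bit device-info flag replaces the open-coded
platform check that the removed line performed at every VMA creation. Below
is a sketch of how the flag could be derived once during device setup; where
this assignment actually lives in the driver is an assumption.

/*
 * Sketch: centralize the platform knowledge at probe time so callers
 * such as xe_vma_create() stay platform-agnostic (placement of this
 * assignment is assumed, not taken from the patch).
 */
xe->info.has_atomic_enable_pte_bit = GRAPHICS_VER(xe) >= 20 ||
                                     xe->info.platform == XE_PVC;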