Commit 8613385c authored by Daniel Vetter

dma-fence: Document recoverable page fault implications

Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured it was a good time to try and
document the conclusions a bit. This documentation section explains what the
potential problem is and the remedies we've discussed, roughly ordered from
best to worst.

v2: Linus -> Linux typo (Dave)

v3:
- Make it clear drivers only need to implement one option (Christian)
- Make it clearer that implicit sync is out the window with exclusive
  fences (Christian)
- Add the fairly theoretical option of segmenting the memory (either
  statically or through dynamic checks at runtime for which piece of
  memory is managed how) and explain why it's not a great idea (Felix)

References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Dave Airlie <airlied@gmail.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20210203152921.2429937-1-daniel.vetter@ffwll.ch
parent 67cc24ac
@@ -257,3 +257,79 @@ fences in the kernel. This means:
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.

Recoverable Hardware Page Faults Implications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Modern hardware supports recoverable page faults, which has a lot of
implications for DMA fences.

First, a pending page fault obviously holds up the work that's running on the
accelerator and a memory allocation is usually required to resolve the fault.
But memory allocations are not allowed to gate completion of DMA fences, which
means any workload using recoverable page faults cannot use DMA fences for
synchronization. Synchronization fences controlled by userspace must be used
instead.
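
As an illustration, the dma_fence_begin_signalling()/dma_fence_end_signalling()
annotations can make this rule visible to lockdep. The following is only a
sketch, and the my_driver_* functions are hypothetical:

.. code-block:: c

    #include <linux/dma-fence.h>
    #include <linux/slab.h>

    /* Hypothetical completion path for a job that published a DMA fence. */
    static void my_driver_run_job(struct dma_fence *fence)
    {
        bool cookie = dma_fence_begin_signalling();

        /*
         * Everything up to dma_fence_end_signalling() is part of the fence
         * signalling critical section: no memory allocations and no waiting
         * on userspace fences is allowed here.
         */
        dma_fence_signal(fence);
        dma_fence_end_signalling(cookie);
    }

    /* Hypothetical recoverable-fault path: it has to allocate memory ... */
    static int my_driver_handle_fault(size_t size, void **backing)
    {
        /*
         * ... and GFP_KERNEL allocations can recurse into reclaim, which may
         * in turn wait on DMA fences. Hence this must never run inside a
         * signalling critical section, and jobs relying on it cannot publish
         * DMA fences; they have to use userspace fences instead.
         */
        *backing = kmalloc(size, GFP_KERNEL);
        return *backing ? 0 : -ENOMEM;
    }
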
On GPUs this poses a problem, because current desktop compositor protocols on
Linux rely on DMA fences, which means without an entirely new userspace stack
built on top of userspace fences, they cannot benefit from recoverable page
faults. Specifically this means implicit synchronization will not be possible.
The exception is when page faults are only used as migration hints and never to
on-demand fill a memory request. For now this means recoverable page
faults on GPUs are limited to pure compute workloads.

Furthermore GPUs usually have shared resources between the 3D rendering and
compute side, like compute units or command submission engines. If both a 3D
job with a DMA fence and a compute workload using recoverable page faults are
pending they could deadlock:

- The 3D workload might need to wait for the compute job to finish and release
  hardware resources first.

- The compute workload might be stuck in a page fault, because the memory
  allocation is waiting for the DMA fence of the 3D workload to complete.
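
To make the cycle explicit, here is a purely illustrative sketch; every helper
below is hypothetical and only names the step it stands for:

.. code-block:: c

    /* Hypothetical helpers, declared only to spell out the dependency cycle. */
    extern void wait_for_shared_compute_units(void);
    extern void *allocate_backing_memory(void);
    extern void run_3d_job_and_signal_fence(void);
    extern void resume_faulted_compute_job(void *backing);

    /* Thread A: 3D submission whose DMA fence is already visible to others. */
    static void submit_3d_job(void)
    {
        wait_for_shared_compute_units();  /* blocks: units busy with the fault */
        run_3d_job_and_signal_fence();    /* never reached */
    }

    /* Thread B: resolves the compute job's recoverable page fault. */
    static void handle_compute_page_fault(void)
    {
        /* Reclaim may wait on the 3D job's DMA fence: the cycle is closed. */
        void *backing = allocate_backing_memory();

        resume_faulted_compute_job(backing); /* never reached */
    }
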
There are a few options to prevent this problem, and drivers need to ensure at
least one of them:

- Compute workloads can always be preempted, even when a page fault is pending
  and not yet repaired. Not all hardware supports this.

- DMA fence workloads and workloads which need page fault handling have
  independent hardware resources to guarantee forward progress. This could be
  achieved e.g. through dedicated engines and minimal compute unit
  reservations for DMA fence workloads.

- The reservation approach could be further refined by only reserving the
  hardware resources for DMA fence workloads when they are in-flight. This must
  cover the time from when the DMA fence is visible to other threads up to the
  moment when the fence is completed through dma_fence_signal(), as shown in
  the sketch after this list.

- As a last resort, if the hardware provides no useful reservation mechanics,
  all workloads must be flushed from the GPU when switching between jobs
  requiring DMA fences or jobs requiring page fault handling: This means all DMA
  fences must complete before a compute job with page fault handling can be
  inserted into the scheduler queue. And vice versa, before a DMA fence can be
  made visible anywhere in the system, all compute workloads must be preempted
  to guarantee all pending GPU page faults are flushed.

- Only a fairly theoretical option would be to untangle these dependencies when
  allocating memory to repair hardware page faults, either through separate
  memory blocks or runtime tracking of the full dependency graph of all DMA
  fences. This would have a very wide impact on the kernel, since resolving the
  page fault on the CPU side can itself involve a page fault. It is much more
  feasible and robust to limit the impact of handling hardware page faults to
  the specific driver.
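
A minimal sketch of the refined reservation option above, assuming hypothetical
my_hw_reserve_dma_fence_resources()/my_hw_release_dma_fence_resources() helpers
that carve out engines and compute units; the only point is that the
reservation brackets the entire window in which the fence is visible:

.. code-block:: c

    #include <linux/dma-fence.h>

    /* All my_* helpers below are hypothetical stand-ins. */
    extern void my_hw_reserve_dma_fence_resources(void);
    extern void my_hw_release_dma_fence_resources(void);
    extern void my_driver_publish_and_run_job(struct dma_fence *fence);

    static void my_driver_submit_dma_fence_job(struct dma_fence *fence)
    {
        /* Reserve before the fence becomes visible to any other thread. */
        my_hw_reserve_dma_fence_resources();

        /* Publish the fence (e.g. in a dma_resv) and run the job. */
        my_driver_publish_and_run_job(fence);

        /* The reservation must last until dma_fence_signal() has run. */
        dma_fence_signal(fence);
        my_hw_release_dma_fence_resources();
    }
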
Note that workloads that run on independent hardware like copy engines or other
GPUs do not have any impact on this problem. This allows us to keep using DMA
fences internally in the kernel even for resolving hardware page faults, e.g. by
using copy engines to clear or copy memory needed to resolve the page fault.
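
For example, a fault handler may still block on such an internal DMA fence, as
long as it comes from hardware that can never itself get stuck on a page fault.
A hedged sketch, with my_copy_engine_clear() standing in for whatever clear or
copy interface a driver actually provides:

.. code-block:: c

    #include <linux/dma-fence.h>

    /*
     * Hypothetical: clears the newly allocated backing store through a copy
     * engine and returns that engine's DMA fence.
     */
    extern struct dma_fence *my_copy_engine_clear(void *backing, size_t size);

    static int my_driver_fill_fault(void *backing, size_t size)
    {
        struct dma_fence *fence = my_copy_engine_clear(backing, size);
        long ret;

        /*
         * Waiting here is fine: the copy engine shares nothing with the
         * faulting compute units, so this fence cannot depend on the page
         * fault currently being resolved.
         */
        ret = dma_fence_wait(fence, false);
        dma_fence_put(fence);

        return ret < 0 ? ret : 0;
    }
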
In some ways this page fault problem is a special case of the `Infinite DMA
Fences` discussions: Infinite fences from compute workloads are allowed to
depend on DMA fences, but not the other way around. And the page fault problem
is not even new, because some other CPU thread in userspace might
hit a page fault which holds up a userspace fence - supporting page faults on
GPUs doesn't add anything fundamentally new.