Commit 9e225fb9 authored by Andrey Grodzovsky, committed by Alex Deucher

drm/amdgpu: Prevent race between late signaled fences and GPU reset.

Problem:
After we start handling timed out jobs we assume their fences won't be
signaled, but we cannot be sure of that and sometimes they fire late. We
need to prevent concurrent access to the fence array between
amdgpu_fence_driver_clear_job_fences() during GPU reset and
amdgpu_fence_process() called from a late EOP interrupt.

Fix:
Before accessing the fence array during GPU reset, disable the EOP
interrupt and flush all pending interrupt handlers for the amdgpu
device's interrupt line.
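
For illustration, a minimal sketch of the pattern this fix relies on
(my_dev and clear_job_fences() are hypothetical placeholders, not the
actual amdgpu code): disable_irq() both masks the line and waits for any
handler already running on it to return, so once it comes back the fence
array can be walked without racing a late EOP interrupt.

#include <linux/interrupt.h>

struct my_dev {
	int irq;	/* interrupt line shared with the EOP handler */
};

static void clear_job_fences(struct my_dev *dev)
{
	/* walk and clear the fence array here; this must never run
	 * concurrently with the interrupt handler */
}

static void reset_fences_safely(struct my_dev *dev)
{
	/* masks the line and synchronizes with any in-flight handler,
	 * unlike disable_irq_nosync() */
	disable_irq(dev->irq);
	clear_job_fences(dev);	/* now exclusive vs. a late EOP interrupt */
	enable_irq(dev->irq);	/* unmask; disable/enable calls must balance */
}

A plausible reason for the v2 switch below is that the per-source
amdgpu_irq_get/put refcounting only toggles the interrupt source, while
disable_irq() also synchronizes with a handler that has already started
executing.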

v2: Switch from irq_get/put to full enable/disable_irq for amdgpu
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
parent dd70748e
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4606,6 +4606,8 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		amdgpu_virt_fini_data_exchange(adev);
 	}
 
+	amdgpu_fence_driver_isr_toggle(adev, true);
+
 	/* block all schedulers and reset given job's ring */
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
 		struct amdgpu_ring *ring = adev->rings[i];
@@ -4621,6 +4623,8 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		amdgpu_fence_driver_force_completion(ring);
 	}
 
+	amdgpu_fence_driver_isr_toggle(adev, false);
+
 	if (job && job->vm)
 		drm_sched_increase_karma(&job->base);
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -532,6 +532,24 @@ void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev)
 	}
 }
 
+/* Will either stop and flush handlers for the amdgpu interrupt or re-enable it */
+void amdgpu_fence_driver_isr_toggle(struct amdgpu_device *adev, bool stop)
+{
+	int i;
+
+	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+		struct amdgpu_ring *ring = adev->rings[i];
+
+		if (!ring || !ring->fence_drv.initialized || !ring->fence_drv.irq_src)
+			continue;
+
+		if (stop)
+			disable_irq(adev->irq.irq);
+		else
+			enable_irq(adev->irq.irq);
+	}
+}
+
 void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev)
 {
 	unsigned int i, j;
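
A note on the loop in amdgpu_fence_driver_isr_toggle() above:
disable_irq()/enable_irq() keep a per-line nesting depth, so calling
disable_irq(adev->irq.irq) once for every ring that has an initialized
fence driver is balanced by the matching enable_irq() loop, and the line
is only unmasked again when the depth drops back to zero. A tiny sketch of
that nesting behaviour (the irq variable is hypothetical):

	disable_irq(irq);	/* depth 0 -> 1: line masked, running handlers flushed */
	disable_irq(irq);	/* depth 1 -> 2: still masked */
	enable_irq(irq);	/* depth 2 -> 1: still masked */
	enable_irq(irq);	/* depth 1 -> 0: line unmasked again */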
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -143,6 +143,7 @@ signed long amdgpu_fence_wait_polling(struct amdgpu_ring *ring,
 				      uint32_t wait_seq,
 				      signed long timeout);
 unsigned amdgpu_fence_count_emitted(struct amdgpu_ring *ring);
+void amdgpu_fence_driver_isr_toggle(struct amdgpu_device *adev, bool stop);
 
 /*
  * Rings.