- 23 Jul, 2024 2 commits
-
-
Matthew Brost authored
The size of an array of binds is directly tied to several kmallocs in the KMD, making those kmallocs more likely to fail. Return -ENOBUFS when such an allocation fails. The expected UMD behavior upon receiving -ENOBUFS is to split the array of binds into a series of single binds.

v2:
 - Resend for CI
v3:
 - Resend for CI

Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240723011702.1684013-1-matthew.brost@intel.com
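A minimal userspace-side sketch of the expected fallback; submit_bind_array() and submit_single_bind() are hypothetical stand-ins, not part of the xe uAPI:

/* Hypothetical UMD-side fallback; the submit helpers are illustrative only. */
int do_binds(struct bind *binds, unsigned int n)
{
        int err = submit_bind_array(binds, n);  /* try one array of binds */

        if (err != -ENOBUFS)
                return err;

        /* KMD allocation failed for the array; fall back to single binds */
        for (unsigned int i = 0; i < n; i++) {
                err = submit_single_bind(&binds[i]);
                if (err)
                        return err;
        }

        return 0;
}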
-
Matthew Brost authored
When restoring the children PT entries on a bind failure, the incorrect loop index was used, resulting in PT entries being leaked. This is shown by running xe_vm.bind-array-conflict-error-inject on a VRAM device going into a suspend state after the test completes.

v2:
 - s/childern/children (CI, Matt Auld)

Fixes: a708f650 ("drm/xe: Update PT layer with better error handling")
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240723010230.1652707-1-matthew.brost@intel.com
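A hedged illustration of this class of bug; the names below do not mirror the xe PT code, they only show why the unwind loop needs its own index bounded by the point of failure:

/* Illustrative only; 'entries', 'bind_entry' and 'restore_children' are made up. */
static int bind_all(struct entry *entries, int num_entries)
{
        int i, j, err;

        for (i = 0; i < num_entries; i++) {
                err = bind_entry(&entries[i]);
                if (err)
                        goto unwind;
        }
        return 0;

unwind:
        /* looping on the wrong index here leaks already-bound entries */
        for (j = 0; j < i; j++)
                restore_children(&entries[j]);
        return err;
}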
-
- 22 Jul, 2024 11 commits
-
-
Lucas De Marchi authored
eu_type_to_str() relies on -Wswitch (promoted to an error by -Werror) to make sure it handles all enum values. However, it is perfectly legal to pass an int to that function, so in the end it may end up returning nothing. There is too much implicit knowledge about the initialization of eu_type for a compiler to notice that eu_type is never assigned anything other than those values. Trying to reproduce this issue, none of gcc-9, gcc-10 and gcc-13 triggered it for me, but it was reported on a different system with gcc-10:

  drivers/gpu/drm/xe/xe.o: warning: objtool: xe_gt_topology_dump() falls through to next function xe_gt_topology_init()

These warnings were also reported when building with clang:

  drivers/gpu/drm/xe/xe.o: warning: objtool: xe_gt_topology_dump+0x77: sibling call from callable instruction with modified stack frame
  drivers/gpu/drm/xe/xe.o: warning: objtool: xe_gt_topology_dump() falls through to next function xe_dss_mask_group_ffs()
  drivers/gpu/drm/xe/xe.o: warning: objtool: xe_gt_topology_dump+0x77: can't find jump dest instruction at .text.xe_gt_topology_dump+0xc0

Since that value is not really possible in the real world, just take the simple approach and return NULL.

Fixes: 7108b4a5 ("drm/xe/uapi: Expose SIMD16 EU mask in topology query")
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240719191534.3845469-1-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
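A minimal sketch of the shape of the fix; the enumerator names are assumptions, not copied from the driver:

/* Sketch only; XE_GT_EU_TYPE_SIMD8/SIMD16 are assumed enumerator names. */
static const char *eu_type_to_str(enum xe_gt_eu_type eu_type)
{
        switch (eu_type) {
        case XE_GT_EU_TYPE_SIMD8:
                return "simd8";
        case XE_GT_EU_TYPE_SIMD16:
                return "simd16";
        }

        /* unreachable with valid enum values, but keeps the function total */
        return NULL;
}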
-
Michal Wajdeczko authored
In addition to the NEEDS_64K BO flag, add a similar one to force 2 MiB alignment of buffer objects. Explicitly use this flag during VF LMEM provisioning, as LMTT uses 2 MiB pages and one day we may drop the requirement of allocating pinned objects as contiguous.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240715180538.1418-3-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
In commit 62742d12 ("drm/xe: Normalize bo flags macros"), we normalized all BO flags but XE_BO_NEEDS_64K. Do it now.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240715180538.1418-2-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
There is no point in running those tests on VF devices, as they can't access any of the MOCS registers. Skip testing on a VF device.

[ ] =================== xe_mocs (1 subtest) ====================
[ ] ================ xe_live_mocs_kernel_kunit ================
[ ] [PASSED] 0000:4d:00.0
[ ] [SKIPPED] 0000:4d:00.1
[ ] ============ [PASSED] xe_live_mocs_kernel_kunit ============
[ ] ===================== [PASSED] xe_mocs =====================

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-8-michal.wajdeczko@intel.com
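A hedged sketch of how such a skip typically looks in a KUnit init hook; the hook name and message are illustrative, and test->priv carrying the xe device follows the parameterized-test helpers described further below:

/* Sketch; the actual hook and wording in the xe test may differ. */
static int mocs_live_test_init(struct kunit *test)
{
        struct xe_device *xe = test->priv;

        if (IS_SRIOV_VF(xe))
                kunit_skip(test, "this test is N/A for VF");

        return 0;
}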
-
Michal Wajdeczko authored
Convert xe_mocs live tests to parameterized style.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-7-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Convert xe_migrate live tests to parameterized style.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-6-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Convert xe_dma_buf live tests to parameterized style.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-5-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Convert xe_bo live tests to parameterized style.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-4-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Instead of iterating over available Xe devices within a testcase, without being able to distinguish potential failures from different devices on a system with many Xe devices, introduce helpers that allow treating each Xe device as a parameter for the testcase, like:

  static void bar(struct kunit *test)
  {
        struct xe_device *xe = test->priv;
        ...
  }

  struct kunit_case foo_live_tests[] = {
        KUNIT_CASE_PARAM(bar, xe_pci_live_device_gen_param),
        {}
  };

  struct kunit_suite foo_suite = {
        .name = "foo_live",
        .test_cases = foo_live_tests,
        .init = xe_kunit_helper_xe_device_live_test_init,
  };

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-3-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Typically we want to preserve pointer constness when converting from one xe pointer to another, but in some rare cases, like kunit parameter conversions, we may want to discard this constness. Add a helper that we will use to clearly indicate our intention.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com> #v1
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240720142528.530-2-michal.wajdeczko@intel.com
-
Ohad Sharabi authored
The current implementation uses hardcoded values instead of common defines.

v2:
 - Make the commit a regular commit instead of a fixup commit
 - Slightly modify commit message

Signed-off-by: Ohad Sharabi <osharabi@habana.ai>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240721071335.101234-1-osharabi@habana.ai
-
- 20 Jul, 2024 4 commits
-
-
Matthew Brost authored
Take a PM ref when any G2H are outstanding and drop it when none are outstanding. To safely ensure we have a PM ref while in the GuC CT layer, a PM ref also needs to be held while scheduler messages are pending.

v2:
 - Add outer PM protections to xe_file_close (CI)
v3:
 - Only take PM ref 0->1 and drop on 1->0 (Matthew Auld)
v4:
 - Add assert to G2H increment function
v5:
 - Rebase
v6:
 - Declare xe as local variable in xe_file_close (CI)

Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240719172905.1527927-5-matthew.brost@intel.com
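A hedged sketch of the 0->1 / 1->0 pattern described above; the counter, field names and locking are simplified assumptions and do not reproduce the actual CT-layer code:

/* Sketch with an atomic counter; the real CT code tracks this under its own lock. */
static void g2h_outstanding_inc(struct xe_guc_ct *ct)
{
        /* only the 0 -> 1 transition takes the PM ref */
        if (atomic_inc_return(&ct->g2h_outstanding) == 1)
                xe_pm_runtime_get_noresume(ct_to_xe(ct));
}

static void g2h_outstanding_dec(struct xe_guc_ct *ct)
{
        /* only the 1 -> 0 transition drops it */
        if (atomic_dec_and_test(&ct->g2h_outstanding))
                xe_pm_runtime_put(ct_to_xe(ct));
}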
-
Matthew Brost authored
Avoid GT TLB invalidation timeouts by holding a PM ref while invalidations are in flight.

v2:
 - Drop PM ref before signaling fence (CI)
v3:
 - Move invalidation_fence_signal helper in tlb timeout to previous patch (Matthew Auld)

Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240719172905.1527927-4-matthew.brost@intel.com
-
Matthew Brost authored
Having two methods to wait on GT TLB invalidations is not ideal. Remove xe_gt_tlb_invalidation_wait and only use GT TLB invalidation fences. In addition to two methods being less than ideal, once GT TLB invalidations are coalesced the seqno cannot be assigned during xe_gt_tlb_invalidation_ggtt/range, so xe_gt_tlb_invalidation_wait would not have a seqno to wait on. A fence, however, can be armed and later signaled.

v3:
 - Add explanation about coalescing to commit message
v4:
 - Don't put dma fence if defined on stack (CI)
v5:
 - Initialize ret to zero (CI)
v6:
 - Use invalidation_fence_signal helper in tlb timeout (Matthew Auld)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240719172905.1527927-3-matthew.brost@intel.com
-
Matthew Brost authored
Other layers should not be touching struct xe_gt_tlb_invalidation_fence directly; add a helper for initialization.

v2:
 - Add dma_fence_get and list init to xe_gt_tlb_invalidation_fence_init

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240719172905.1527927-2-matthew.brost@intel.com
-
- 19 Jul, 2024 5 commits
-
-
Michal Wajdeczko authored
We should use the number of actual entries stored in the runtime register buffer, not the maximum number of entries that this buffer can hold; otherwise bsearch() may fail and we may miss the data and wrongly report unexpected access to some registers.

Fixes: 4edadc41 ("drm/xe/vf: Use register values obtained from the PF")
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Piotr Piórkowski <piotr.piorkowski@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240718203155.486-1-michal.wajdeczko@intel.com
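A hedged sketch of the distinction; the structure and field names are illustrative, not the driver's VF runtime layout:

#include <linux/bsearch.h>

/* Sketch: search only what was actually stored, not the buffer capacity. */
static int cmp_reg(const void *key, const void *elem)
{
        const struct vf_runtime_reg *k = key, *e = elem;

        return k->offset < e->offset ? -1 : k->offset > e->offset;
}

static const struct vf_runtime_reg *find_reg(const struct vf_runtime *rt, u32 offset)
{
        struct vf_runtime_reg key = { .offset = offset };

        /* using the capacity (e.g. rt->regs_size) here is the bug being fixed */
        return bsearch(&key, rt->regs, rt->num_regs, sizeof(*rt->regs), cmp_reg);
}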
-
Umesh Nerlige Ramappa authored
xe_file_close triggers an asynchronous queue cleanup and then frees up the xef object. Since queue cleanup flushes all pending jobs, and the KMD stores client usage stats into the xef object after jobs are flushed, we see a use-after-free for the xef object. Resolve this by taking a reference to xef from xe_exec_queue.

While at it, revert an earlier change that contained a partial workaround for this issue.

v2:
 - Take a ref to xef even for the VM bind queue (Matt)
 - Squash patches relevant to that fix and work around (Lucas)
v3:
 - Fix typo (Lucas)

Fixes: ce62827b ("drm/xe: Do not access xe file when updating exec queue run_ticks")
Fixes: 6109f24f ("drm/xe: Add helper to accumulate exec queue runtime")
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1908
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240718210548.3580382-5-umesh.nerlige.ramappa@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
-
Umesh Nerlige Ramappa authored
Take a reference to xef when user creates the VM and put the reference when user destroys the VM.

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240718210548.3580382-4-umesh.nerlige.ramappa@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
-
Umesh Nerlige Ramappa authored
Add ref counting for xe_file.

v2:
 - Add kernel doc for exported functions (Matt)
 - Instead of xe_file_destroy, export the get/put helpers (Lucas)
v3:
 - Fix up the kernel-doc format and description (Matt, Lucas)

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240718210548.3580382-3-umesh.nerlige.ramappa@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
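A hedged sketch of kref-based get/put helpers for xe_file; the member name, destroy callback and cleanup details are assumptions, not the driver's exact code:

/* Sketch only; the xef member layout and cleanup steps are assumed. */
struct xe_file *xe_file_get(struct xe_file *xef)
{
        kref_get(&xef->refcount);
        return xef;
}

static void xe_file_destroy(struct kref *ref)
{
        struct xe_file *xef = container_of(ref, struct xe_file, refcount);

        /* tear down per-client state here, then free the object */
        kfree(xef);
}

void xe_file_put(struct xe_file *xef)
{
        kref_put(&xef->refcount, xe_file_destroy);
}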
-
Umesh Nerlige Ramappa authored
In order to make xe_file ref counted, move destruction of xe_file members to a helper.

v2:
 - Move xe_vm_close_and_put back into xe_file_close (Matt)

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240718210548.3580382-2-umesh.nerlige.ramappa@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
-
- 18 Jul, 2024 12 commits
-
-
Lucas De Marchi authored
PVC, Xe2 and later platforms have 16-wide EUs. We were implicitly reporting for PVC the number of 16-wide EUs without giving userspace any hint that they were different from those on other platforms. Xe2 and later also have 16-wide EUs, but in those cases the reported number would correspond to the 8-wide count. To avoid confusion and make sure the right number is used by userspace depending on the platform, add a new item to the topology query and drop the one that is not available. The new mask reported for both PVC and Xe2 should now match the numbers reported via hwconfig.

v2:
 - Use a different topo item with EU type in its name to report the new mask instead of adding the type itself as the item (Matt Roper)

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: José Roberto de Souza <jose.souza@intel.com>
Acked-by: Mateusz Jablonski <mateusz.jablonski@intel.com>
Acked-by: Wenbin Lu <wenbin.lu@intel.com>
Acked-by: Effie Yu <effie.yu@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240710220446.2169797-1-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
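A hedged userspace-side sketch of consuming the topology query blob; it assumes the DRM_XE_TOPO_EU_PER_DSS / DRM_XE_TOPO_SIMD16_EU_PER_DSS item types and struct drm_xe_query_topology_mask layout from the xe uAPI header, and use_eu_mask() is a hypothetical consumer:

#include <stddef.h>
#include <drm/xe_drm.h>

/* Sketch: walk the topology items and pick whichever EU mask the platform reports. */
static void parse_eu_masks(const char *blob, size_t size)
{
        size_t pos = 0;

        while (pos < size) {
                const struct drm_xe_query_topology_mask *topo =
                        (const struct drm_xe_query_topology_mask *)(blob + pos);

                if (topo->type == DRM_XE_TOPO_EU_PER_DSS ||
                    topo->type == DRM_XE_TOPO_SIMD16_EU_PER_DSS)
                        /* topo->mask holds num_bytes bytes of per-DSS EU bits */
                        use_eu_mask(topo->gt_id, topo->mask, topo->num_bytes);

                pos += sizeof(*topo) + topo->num_bytes;
        }
}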
-
Matthew Brost authored
xe_sync_entry_wait is no longer used; remove it.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240717140429.1396820-2-matthew.brost@intel.com
-
Matthew Brost authored
Fail invalid addresses during user fence creation.

Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240717140429.1396820-1-matthew.brost@intel.com
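A hedged sketch of the kind of validation implied here; the exact checks in the driver may differ:

/* Sketch only; "invalid" is illustrated as NULL or misaligned addresses. */
static int validate_user_fence_addr(u64 addr)
{
        /* require a non-NULL, 8-byte aligned user fence address */
        if (!addr || addr & 0x7)
                return -EINVAL;

        return 0;
}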
-
Nirmoy Das authored
Add traces to the xe pm functions for better debuggability.

v2:
 - Fix indentation and add trace for xe_pm_runtime_get_ioctl

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240717125950.9952-1-nirmoy.das@intel.com
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
-
Uma Shankar authored
As per the recommendation in the workarounds: WA_22019338487

There is an issue with accessing stolen memory pages due to a hardware limitation. Limit the usage of stolen memory for fbdev for LNL+. Don't use the BIOS FB from stolen on LNL+ and assign it from system memory instead.

v2: Corrected the WA number, limited the WA to LNL and adopted the XE_WA framework as suggested by Lucas and Matt.
v3: Introduced the waxxx_display to implement the display side of WA changes on Lunarlake. Used xe_root_mmio_gt and avoid the for loop (suggested by Lucas).
v4: Fixed some nits (Luca)

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Uma Shankar <uma.shankar@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240717082252.3875909-1-uma.shankar@intel.com
-
Akshata Jahagirdar authored
In xe2+ dgfx, we don't need to handle the copying of ccs metadata during migration. This test validates the ccs data post clear and copy during evict/restore operation. Thus, we can skip this test on xe2+ dgfx.

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/57d9df82ad02e53c9b0d2a7d40bb27acce57b927.1721250309.git.akshata.jahagirdar@intel.com
-
Akshata Jahagirdar authored
This part of kunit verifies that:
 - main data is decompressed and ccs data is clear post bo eviction.
 - main data is raw copied and ccs data is clear post bo restore.

v2:
 - Added missing bo_put()/bo_unlock() (Matt Auld)

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1d36d4377c566508e42b3fb80d3fe4a588fd00ca.1721250309.git.akshata.jahagirdar@intel.com
-
Akshata Jahagirdar authored
During eviction (vram -> sysmem), we use the compressed -> uncompressed mapping. During restore (sysmem -> vram), we need to use the uncompressed -> uncompressed mapping. Handle the logic for selecting the compressed identity map for eviction and the uncompressed map for restore operations.

v2:
 - Move check of xe_migrate_ccs_emit() before calling xe_migrate_ccs_copy(). (Nirmoy)

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/79b3a016e686a662ae68c32b5fc7f0f2ac8043e9.1721250309.git.akshata.jahagirdar@intel.com
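A hedged sketch of the selection described above; variable and helper names are illustrative and do not come from xe_migrate.c:

/* Sketch only; names are illustrative. */
static u16 pick_src_pat(bool src_is_vram, bool dst_is_vram, bool may_be_compressed,
                        u16 pat_compressed, u16 pat_uncompressed)
{
        bool evicting = src_is_vram && !dst_is_vram;    /* vram -> sysmem */

        /* Eviction reads VRAM through the compressed identity map so the copy
         * decompresses; restore can't know the original format, so it stays
         * uncompressed -> uncompressed. */
        return (evicting && may_be_compressed) ? pat_compressed : pat_uncompressed;
}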
-
Akshata Jahagirdar authored
Xe2+ has unified compression (exactly one compression mode/format), where compression is now controlled via PAT at PTE level. This simplifies KMD operations, as it can now decompress freely without concern for the buffer's original compression format (unlike DG2, which had multiple compression formats and thus required copying the raw CCS state during VRAM eviction). In addition, mixed VRAM and system memory buffers were not supported with compression enabled.

On Xe2 dGPU, compression is still only supported with VRAM; however, we can now support compression with VRAM and system memory buffers, with GPU access being seamless underneath, so long as the KMD uses compressed -> uncompressed when copying VRAM -> system memory, to decompress it. This also allows CPU access to such buffers, assuming that userspace first decompresses the corresponding pages being accessed. If the pages are already in system memory then the KMD would have already decompressed them. When restoring such buffers with sysmem -> VRAM, the KMD can't easily know which pages were originally compressed, so we always use uncompressed -> uncompressed there.

With this it also means we can drop all the raw CCS handling on such platforms (including needing to allocate extra CCS storage).

In order to support this we now need two different identity mappings for compressed and uncompressed VRAM. In this patch, we set up the additional identity map for the VRAM with compressed pat_index, and then select the appropriate mapping during migration/clear: the compressed -> uncompressed mapping for eviction (vram -> sysmem) and the uncompressed -> uncompressed mapping for restore (sysmem -> vram).

v2: Formatting nits, updated code to match recent changes in xe_migrate_prepare_vm(). (Matt)
v3: Move identity map loop to a helper function. (Matt Brost)
v4: Split helper function into a different patch, and add asserts and nits. (Matt Brost)
v5: Convert the 2 bool arguments of pte_update_size to a flags argument (Matt Brost)
v6: Formatting nits (Matt Brost)

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/b00db5c7267e54260cb6183ba24b15c1e6ae52a3.1721250309.git.akshata.jahagirdar@intel.com
-
Akshata Jahagirdar authored
Add a helper function to program the identity map.

v2: Formatting nits

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/91dc05f05bd33076fb9a9f74f8495b48d2abff53.1721250309.git.akshata.jahagirdar@intel.com
-
Akshata Jahagirdar authored
This test verifies whether the main and CCS data are cleared during bo creation. The motivation for using KUnit instead of IGT is that, although we can verify whether the data is zero following bo creation, we cannot confirm whether the zero value after bo creation is the result of our clear function or simply because the initial data present was already zero.

v2: Updated the mutex_lock and unlock logic, changed out_unlock to out_put. (Matt)
v3: Added missing dma_fence_put(). (Nirmoy)
v4: Rebase.
v5: Add missing bo_put(), bo_unlock() calls. (Matt Auld)

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Acked-by: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/c07603439b88cfc99e78c0e2069327e65d5aa87d.1721250309.git.akshata.jahagirdar@intel.com
-
Akshata Jahagirdar authored
For Xe2 dGPU, we clear the bo by modifying the VRAM using an uncompressed pat index, which then indirectly updates the compression status as uncompressed, i.e. zeroed CCS. So xe_migrate_clear() should be updated for BMG to not emit CCS surf copy commands.

v2: Moved xe_device_needs_ccs_emit() to xe_migrate.c and changed the name to xe_migrate_needs_ccs_emit() since it is very specific to migration. (Matt)

Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/8dd869dd8dda5e17ace28c04f1a48675f5540874.1721250309.git.akshata.jahagirdar@intel.com
-
- 17 Jul, 2024 3 commits
-
-
Matthew Brost authored
When wedging a device we shouldn't be suspending it, as state needed for debug will be lost. Also, this appears not to work, as the below stack trace pops upon trying to resume a wedged device:

[ 304.245044] INFO: task cat:12115 blocked for more than 151 seconds.
[ 304.251333]       Tainted: G        W          6.10.0-rc7-xe+ #3518
[ 304.257617] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 304.265459] task:cat             state:D stack:13384 pid:12115 tgid:12115 ppid:3986   flags:0x00000006
[ 304.265465] Call Trace:
[ 304.265467]  <TASK>
[ 304.265469]  __schedule+0x3c4/0xdf0
[ 304.265478]  schedule+0x3c/0x140
[ 304.265481]  rpm_resume+0x1cc/0x740
[ 304.265484]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 304.265489]  __pm_runtime_resume+0x49/0x80
[ 304.265494]  guc_info+0x6b/0xb0 [xe]
[ 304.265538]  ? __pfx___drm_printfn_seq_file+0x10/0x10
[ 304.265541]  ? __pfx___drm_puts_seq_file+0x10/0x10
[ 304.265545]  seq_read_iter+0x111/0x4c0
[ 304.265551]  seq_read+0xfc/0x140
[ 304.265556]  full_proxy_read+0x58/0x80
[ 304.265560]  vfs_read+0xa7/0x360
[ 304.265563]  ? find_held_lock+0x2b/0x80
[ 304.265568]  ksys_read+0x64/0xe0
[ 304.265571]  do_syscall_64+0x68/0x140
[ 304.265575]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 304.265578] RIP: 0033:0x7f4254d14992
[ 304.265580] RSP: 002b:00007ffc558666f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 304.265583] RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007f4254d14992
[ 304.265584] RDX: 0000000000020000 RSI: 00007f4254ebb000 RDI: 0000000000000003
[ 304.265586] RBP: 00007f4254ebb000 R08: 00007f4254eba010 R09: 00007f4254eba010
[ 304.265587] R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000022000
[ 304.265588] R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000
[ 304.265593]  </TASK>
[ 304.265594] Showing all locks held in the system:
[ 304.265598] 1 lock held by khungtaskd/57:
[ 304.265599]  #0: ffffffff8273b860 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x36/0x1c0
[ 304.265607] 3 locks held by kworker/6:1/90:
[ 304.265610] 1 lock held by in:imklog/547:
[ 304.265611]  #0: ffff88810498cd88 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x76/0xc0
[ 304.265620] 1 lock held by dmesg/1310:

v2:
 - Drop local 'err' variable (Jonathan)

Fixes: 8ed9aaae ("drm/xe: Force wedged state and block GT reset upon any GPU hang")
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240716063902.1390130-2-matthew.brost@intel.com
-
Matthew Brost authored
Wedge the entire device, not just the GT which may have triggered the wedge. To implement this, clean up the layering so xe_device_declare_wedged() calls into the lower layers (GT) to ensure the entire device is wedged.

While we are here, also signal any pending GT TLB invalidations upon wedging the device.

Lastly, short-circuit the reset wait if the device is wedged.

v2:
 - Short circuit reset wait if device is wedged (Local testing)

Fixes: 8ed9aaae ("drm/xe: Force wedged state and block GT reset upon any GPU hang")
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240716063902.1390130-1-matthew.brost@intel.com
-
Alexander Usyskin authored
Add a heci_cscfi support bit for the new CSC engine type. It has the same MMIO offsets as the DG2 GSC but a separate interrupt flow.

Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240708084906.2827024-1-alexander.usyskin@intel.com
-
- 15 Jul, 2024 1 commit
-
-
Michal Wajdeczko authored
Only a limited set of registers is accessible to the VF driver, and the hardware will silently drop writes to inaccessible registers. To improve our VF driver, let's intercept all such writes to warn about unexpected writes on debug builds, or optionally allow providing some substitution (as a potential future extension).

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Gustavo Sousa <gustavo.sousa@intel.com>
Cc: Piotr Piórkowski <piotr.piorkowski@intel.com>
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240713142643.1242-2-michal.wajdeczko@intel.com
-
- 12 Jul, 2024 2 commits
-
-
Tejas Upadhyay authored
Wa_15015404425 asks us to perform four "dummy" writes to a non-existent register offset before every real register read. Although the specific offset of the writes doesn't directly matter, the workaround suggests offset 0x130030 as a good target so that these writes will be easy to recognize and filter out in debugging traces.

V5 (MattR):
 - Avoid negating an equality comparison
V4 (MattR):
 - Use writel and remove xe_reg usage
V3 (MattR):
 - Define dummy reg local to function
 - Avoid tracing dummy writes
 - Update commit message
V2:
 - Add WA to 8/16/32bit reads also - MattR
 - Corrected dummy reg address - MattR
 - Use for loop to avoid mental pause - JaniN

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240709155606.2998941-1-tejas.upadhyay@intel.com
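A hedged sketch of the write loop this describes; the helper name and the MMIO field layout are assumptions, while the count of four and the 0x130030 offset come from the description above:

/* Sketch; helper name and gt->mmio.regs layout are assumed. Offset 0x130030 is
 * the workaround's suggested scratch target, chosen so the writes are easy to
 * filter out of traces. */
#define DUMMY_REG_OFFSET 0x130030

static void mmio_flush_pending_writes(struct xe_gt *gt)
{
        int i;

        /* Wa_15015404425: four dummy writes before every real register read */
        for (i = 0; i < 4; i++)
                writel(0, gt->mmio.regs + DUMMY_REG_OFFSET);
}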
-
Michal Wajdeczko authored
Due to the current design of the BO and VRAM manager, any object with the XE_BO_FLAG_PINNED flag, which the PF driver uses during VF LMEM provisioning, is created with the TTM_PL_FLAG_CONTIGUOUS flag. This may cause VRAM fragmentation that prevents subsequent allocations of larger objects, like fair VF LMEM provisioning.

To avoid such failures, round down the fair VF LMEM provisioning size to the next power of two, to compensate for what xe_ttm_vram_mgr is doing to achieve contiguous allocations.

Fixes: ac6598ae ("drm/xe/pf: Add support to configure SR-IOV VFs")
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240711192320.1198-2-michal.wajdeczko@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
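A hedged sketch of the rounding described above; the function and variable names are illustrative, not the PF provisioning code:

#include <linux/log2.h>
#include <linux/math64.h>

/* Sketch: compute a fair per-VF LMEM share, rounded down to a power of two so
 * the contiguous allocations forced by the VRAM manager are less likely to fail. */
static u64 fair_lmem_share(u64 available, unsigned int num_vfs)
{
        u64 fair = div_u64(available, num_vfs);

        return fair ? rounddown_pow_of_two(fair) : 0;
}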
-