- 21 Dec, 2023 40 commits
-
-
Lucas De Marchi authored
For Xe1 platforms, it's better to follow the way i915 adds the PCI IDs to the header, so it's easier to catch up when there is an update. This brings the same logic applied in commit 2e3c369f ("drm/i915/mtl: Eliminate subplatforms") to the equivalent xe header. The end result of this header for Xe1 platforms is now in sync with i915 as of commit 5032c607 ("drm/i915: ATS-M device ID update"). This can be seen by:

  $ git show 5032c607:include/drm/i915_pciids.h > a.h
  $ git diff --color-words --no-index a.h include/drm/xe_pciids.h

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231121195209.802235-2-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
The name "compute_mode" can be confusing since compute uses either this mode or fault_mode to achieve the long-running semantics, and compute_mode can, moving forward, enable fault_mode under the hood to work around hardware limitations. Also, the name no_dma_fence_mode really refers to what we elsewhere call long-running mode, and that mode, contrary to what its name suggests, allows dma-fences as in-fences.

So, in an attempt to be more consistent, rename

  no_dma_fence_mode -> lr_mode
  compute_mode      -> preempt_fence_mode

and adjust the flags so that

  preempt_fence_mode sets XE_VM_FLAG_LR_MODE
  fault_mode sets XE_VM_FLAG_LR_MODE | XE_VM_FLAG_FAULT_MODE

v2:
- Fix a typo in the commit message (Oak Zeng)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Oak Zeng <oak.zeng@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231127123349.23698-1-thomas.hellstrom@linux.intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
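As a rough illustration of how the two user-visible modes map onto the renamed internal flags, here is a minimal sketch; the helper name and the flag bit values are assumptions for the example, not the driver's actual definitions.

/* Minimal sketch: bit values and the helper are illustrative only. */
#define XE_VM_FLAG_LR_MODE	(1 << 0)	/* long-running semantics */
#define XE_VM_FLAG_FAULT_MODE	(1 << 1)	/* page-fault based binding */

static unsigned int example_vm_flags(bool preempt_fence_mode, bool fault_mode)
{
	unsigned int flags = 0;

	if (preempt_fence_mode)		/* formerly "compute_mode" */
		flags |= XE_VM_FLAG_LR_MODE;
	if (fault_mode)			/* fault mode implies long-running */
		flags |= XE_VM_FLAG_LR_MODE | XE_VM_FLAG_FAULT_MODE;

	return flags;
}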
-
Thomas Hellström authored
xa_alloc_cyclic() returns 1 on successful allocation if wrapping occurs, but the code incorrectly treats that as an error. Fix that. Also, xa_alloc_cyclic() requires xa_init_flags(..., XA_FLAGS_ALLOC), so fix that, and, assuming we don't want a zero ASID, adjust the xa limits at alloc_cyclic time instead of using XA_FLAGS_ALLOC1.

v2:
- On CONFIG_DRM_XE_DEBUG, initialize the cyclic ASID allocation in such a way that the next allocated ASID will be the maximum one, and the one following will cause an ASID wrap (all to have CI test high ASIDs and ASID wraps).
v3:
- Stricter return value checking from xa_alloc_cyclic() (Matthew Auld)

Suggested-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/946
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Ohad Sharabi <osharabi@habana.ai> #v1
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231124153345.97385-5-thomas.hellstrom@linux.intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
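A minimal sketch of the corrected return-value handling, assuming an xarray initialized with xa_init_flags(..., XA_FLAGS_ALLOC); the function name, parameters, and the XE_MAX_ASID bound are illustrative rather than the driver's exact ones.

#include <linux/xarray.h>

#define XE_MAX_ASID	(1u << 20)	/* illustrative upper bound */

static int example_alloc_asid(struct xarray *asid_xa, u32 *next, void *vm,
			      u32 *asid)
{
	int err;

	/* Lower limit of 1 so that ASID 0 is never handed out. */
	err = xa_alloc_cyclic(asid_xa, asid, vm,
			      XA_LIMIT(1, XE_MAX_ASID - 1), next, GFP_KERNEL);
	if (err < 0)
		return err;	/* real failure (e.g. -ENOMEM, -EBUSY) */

	/* err == 1 only means the cyclic counter wrapped: not an error. */
	return 0;
}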
-
Thomas Hellström authored
trace_printk() is not intended for production code. Remove it.

Suggested-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/946
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://patchwork.freedesktop.org/patch/msgid/20231122110359.4087-4-thomas.hellstrom@linux.intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
Using "get" typically refers to obtaining a refcount, which we don't do here, so rename to xe_bo_sg().

Suggested-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/946
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://patchwork.freedesktop.org/patch/msgid/20231122110359.4087-3-thomas.hellstrom@linux.intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
Ensure, using xe_assert(), that the various try_add_<placement> functions don't access the bo placements array out of bounds.

v2:
- Remove the places argument to make sure the xe_assert operates on the array we're actually populating. (Matthew Auld)

Suggested-by: Ohad Sharabi <osharabi@habana.ai>
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/946
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Ohad Sharabi <osharabi@habana.ai> #v1
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231123153158.12779-2-thomas.hellstrom@linux.intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
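The shape of such an assertion, as a hedged sketch: the function signature, the XE_BO_CREATE_VRAM0_BIT flag name, and the placement fields are assumptions for illustration, not the exact driver code.

/* Sketch only: names and fields are illustrative, not the driver's exact code. */
static void example_try_add_vram(struct xe_device *xe, struct xe_bo *bo,
				 u32 bo_flags, u32 *c)
{
	if (bo_flags & XE_BO_CREATE_VRAM0_BIT) {
		/* Never write past the fixed-size placements array. */
		xe_assert(xe, *c < ARRAY_SIZE(bo->placements));

		bo->placements[(*c)++] = (struct ttm_place) {
			.mem_type = XE_PL_VRAM0,
		};
	}
}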
-
Michal Wajdeczko authored
We already print some basic information about the device; also print virtualization information until we expose it elsewhere.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231115073804.1861-3-michal.wajdeczko@intel.com
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Michal Wajdeczko authored
We will be adding support for SR-IOV, and the driver might then be running, in addition to the existing non-virtualized bare-metal mode, in Physical Function (PF) or Virtual Function (VF) mode. Since these additional modes require some changes to the driver, define an enum to represent the different SR-IOV modes and add a function that detects the actual mode at runtime. We start with a forced bare-metal mode, as that is sufficient to enable basic functionality and ensures no impact on existing code.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231115073804.1861-2-michal.wajdeczko@intel.com
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
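A minimal sketch of what such an enum and detection stub could look like; the identifiers follow the commit's wording, but the exact names and the probing logic are assumptions.

/* Illustrative sketch; real detection of PF/VF comes in later patches. */
enum xe_sriov_mode {
	XE_SRIOV_MODE_NONE,	/* bare-metal, no virtualization */
	XE_SRIOV_MODE_PF,	/* SR-IOV Physical Function */
	XE_SRIOV_MODE_VF,	/* SR-IOV Virtual Function */
};

static enum xe_sriov_mode example_sriov_probe_mode(struct xe_device *xe)
{
	/* Forced bare-metal for now, matching the commit's starting point. */
	return XE_SRIOV_MODE_NONE;
}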
-
Michal Wajdeczko authored
The Single Root I/O Virtualization (SR-IOV) extension to the PCI Express (PCIe) specification suite is supported starting from the 12th generation of Intel Graphics processors. Add a device flag that we will use to enable SR-IOV specific code paths and to indicate our readiness to support SR-IOV. We will enable this flag for the specific platforms once all required changes and additions are ready and merged.

Bspec: 52391
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231115073804.1861-1-michal.wajdeczko@intel.com
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Tejas Upadhyay authored
This workaround applies to Xe2_LPM.

V3(MattR):
- Reorder reg and wa placement
- Add base parameter to reg macro for better definition
V2(MattR):
- Change name of register
- Loop for all engines
- Driver permanent WA, applies to all steps

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Tejas Upadhyay authored
This workaround applies to Xe2_LPM as well.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Tejas Upadhyay authored
This workaround applies to Xe2_LPM.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Gustavo Sousa authored
With the current implementation, a preemption or other kind of interruption might happen between xe_mmio_read32() and ktime_get_raw(). Such an interruption (especially in the case of preemption) might be long enough to cause a timeout without giving a chance for a new check on the register value in the next iteration, which would have happened otherwise.

This issue causes some sporadic timeouts in some code paths. As an example, we were experiencing rare timeouts when waiting for PLL unlock for C10/C20 PHYs (see intel_cx0pll_disable()). After debugging, we found out that the PLL unlock was happening within the expected time period (20us), which suggested a bug in xe_mmio_wait32().

To fix the issue, ensure that we do a last check out of the loop if necessary.

This change was tested with the aforementioned PLL unlocking code path. Experiments showed that, before this change, we observed reported timeouts in 54 of 5000 runs; and, after this change, no timeouts were reported in 5000 runs.

v2:
- Prefer an implementation without a barrier (v1 switched the order of xe_mmio_read32() and ktime_get_raw() calls and added a barrier() in between). (Lucas, Rodrigo)

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20231116214000.70573-3-gustavo.sousa@intel.com
Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
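The gist of the fix, as a hedged sketch rather than the driver's exact implementation: after the deadline expires, re-read the register once before reporting -ETIMEDOUT, so a long preemption between the read and the time check cannot produce a false timeout. The simple udelay() here stands in for the real exponential backoff.

static int example_wait_reg32(struct xe_gt *gt, struct xe_reg reg, u32 mask,
			      u32 val, u32 timeout_us)
{
	const ktime_t end = ktime_add_us(ktime_get_raw(), timeout_us);
	u32 read;

	for (;;) {
		read = xe_mmio_read32(gt, reg);
		if ((read & mask) == val)
			return 0;

		/* A long preemption can land right here... */
		if (!ktime_before(ktime_get_raw(), end))
			break;

		udelay(1);	/* real code uses exponential backoff */
	}

	/* ...so give the register one last look before declaring a timeout. */
	read = xe_mmio_read32(gt, reg);
	if ((read & mask) == val)
		return 0;

	return -ETIMEDOUT;
}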
-
Gustavo Sousa authored
This function is big enough, let's move it to a shared compilation unit. While at it, document it.

Here is the output of running bloat-o-meter on the new and old module (execution provided by Lucas):

  $ ./scripts/bloat-o-meter build64/drivers/gpu/drm/xe/xe.ko{.old,}
  add/remove: 2/0 grow/shrink: 0/58 up/down: 554/-15645 (-15091)
  (...) # Lines in between omitted
  Total: Before=2181322, After=2166231, chg -0.69%

The overall reduction in size is not that significant. Nevertheless, keeping the function inline arguably does not bring much benefit either. As noted by Lucas, we would probably benefit from an inline function that did the fast-path check: do an optimistic first check before entering the wait logic, which itself would go to a compilation unit. We might come back to implement this in the future if we have data to justify it.

v2:
- Add note in documentation for @timeout_us regarding the exponential backoff strategy. (Lucas)
- Share output of bloat-o-meter in the commit message. (Lucas)

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20231116214000.70573-2-gustavo.sousa@intel.com
Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
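The fast-path idea mentioned above could look roughly like this; the wrapper name and the xe_mmio_wait32() signature shown here are assumptions for illustration.

/* Sketch of the possible future split: optimistic inline check first. */
static inline int example_mmio_wait32_fast(struct xe_gt *gt, struct xe_reg reg,
					   u32 mask, u32 val, u32 timeout_us)
{
	if ((xe_mmio_read32(gt, reg) & mask) == val)
		return 0;	/* common case: condition already met */

	/* Slow path lives out of line in the shared compilation unit. */
	return xe_mmio_wait32(gt, reg, mask, val, timeout_us, NULL, false);
}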
-
Ruthuvikas Ravikumar authored
This kunit verifies the hardware values of the mocs and l3cc registers against the KMD-programmed values.

v14: Fix CHECK.
v13: Remove ret after forcewake.
v11: Add KUNIT_ASSERT_EQ_MSG for forcewake.
v9/v10: Add forcewake fail.
v8: Remove xe_bo.h and xe_pm.h. Remove mocs and l3cc from live_mocs. Pull debug and err msg for mocs/l3cc out of if else block. Add HAS_LNCF_MOCS.
v7: Correct checkpatch.
v6: Change ssize_t type. Change forcewake domain to XE_FW_GT. Update change of "MOCS registers are multicast on Xe_HP and beyond" patch.
v5: Release forcewake. Remove single statement braces. Fix debug statements.
v4: Drop scratch and vaddr. Fix debug statements. Fix indentation.
v3: Fix checkpatch.
v2: Fix checkpatch.

Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Cc: Matthew D Roper <matthew.d.roper@intel.com>
Reviewed-by: Matthew D Roper <matthew.d.roper@intel.com>
Signed-off-by: Ruthuvikas Ravikumar <ruthuvikas.ravikumar@intel.com>
Link: https://lore.kernel.org/r/20231116215152.2248859-1-ruthuvikas.ravikumar@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Lucas De Marchi authored
When built with W=1, the following warnings show up on modpost:

  MODPOST drivers/gpu/drm/xe/Module.symvers
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_bo_test.o
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_dma_buf_test.o
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_migrate_test.o
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_pci_test.o
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_rtp_test.o
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/gpu/drm/xe/tests/xe_wa_test.o

Add the module description for each of these to fix the warning.

Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
Link: https://lore.kernel.org/r/20231120221904.695630-1-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Haridhar Kalvala authored
ATS-M device ID update.

BSpec: 44477
Signed-off-by: Haridhar Kalvala <haridhar.kalvala@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20231120065507.1543676-1-haridhar.kalvala@intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
José Roberto de Souza authored
These are IDs present in i915 but missing in Xe.

Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
José Roberto de Souza authored
RPL-U is defined as a subplatform, but those PCI IDs were not included in pciidlist, so Xe KMD would never probe devices with those IDs. This follows what i915 does to include RPL-U in its PCI ID probe list.

v2:
- change order to match i915

Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
DRM_XE_VM_BIND_OP_MAP_* IOCTL operations can result in GPUVA unmap, remap, or map operations in vm_bind_ioctl_ops_create. The xe_vma_op.map fields are blindly set, which is incorrect for GPUVA unmap or remap operations. Fix this by only setting xe_vma_op.map for GPUVA map operations. Also restructure vm_bind_ioctl_ops_create a bit to make the code more readable.

Reported-by: Dafna Hirschfeld <dhirschfeld@habana.ai>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Brian Welty <brian.welty@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
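The intent of the fix, as a hedged sketch: only DRM_GPUVA_OP_MAP operations carry meaningful map parameters, so the map fields are left untouched for remap/unmap. The helper, the flag names, and the exact xe_vma_op.map fields here are assumptions for illustration.

static void example_init_vma_op(struct xe_vma_op *op, struct drm_gpuva_op *__op,
				u32 flags, u16 pat_index)
{
	if (__op->op != DRM_GPUVA_OP_MAP)
		return;	/* remap/unmap: op->map is not used */

	op->map.read_only = flags & DRM_XE_VM_BIND_FLAG_READONLY;
	op->map.is_null = flags & DRM_XE_VM_BIND_FLAG_NULL;
	op->map.pat_index = pat_index;
}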
-
Lucas De Marchi authored
After noticing in logs there were still mentions of GEN6 registers, it was clear commit d9b79ad2 ("drm/xe: Drop gen afixes from registers") didn't take care of all the affixes. Some were added later, but there are also constants and strings still using them. Continue the cleanup, removing the remaining ones. To keep it consistent with nearby code, a few other changes are made:

- Remove prefix in INTEL_LEGACY_64B_CONTEXT
- Remove GEN8_CTX_L3LLC_COHERENT since it's unused
- Rename GEN9_FREQ_SCALER to GT_FREQUENCY_SCALER

v2: Use XELP_ as prefix for NUM_MOCS_ENTRIES and remove changes to MOCS_ENTRIES as this is now done as part of a previous commit (Matt Roper)

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231117174049.527192-3-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Lucas De Marchi authored
The mocs documentation was copied from i915 and doesn't match the reality in xe. Reword it so it matches what the code is doing.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231117174049.527192-2-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Lucas De Marchi authored
GEN11_MOCS_ENTRIES dates back to importing the table from the i915 module. The macro was used so that it could be maintained in a single place and platforms would just override it with additional entries. With the platforms supported by xe, each of them just defines an individual table without reusing this define. Move it inside gen12_mocs_desc, which is the only user.

Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://lore.kernel.org/r/20231117174049.527192-1-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
This seems to create a locking inversion with object_name_lock. The lock is held by drm_prime_fd_to_handle when calling our xe_gem_prime_import hook, which might eventually go on to grab the dma-resv lock during the attach. However we also have the opposite locking order in xe_gem_create_ioctl which is holding the dma-resv lock when calling drm_gem_handle_create, which wants to eventually grab object_name_lock:

  -> #1 (reservation_ww_class_mutex){+.+.}-{3:3}:
  <4> [635.739288] lock_acquire+0x169/0x3d0
  <4> [635.739294] __ww_mutex_lock.constprop.0+0x164/0x1e60
  <4> [635.739300] ww_mutex_lock_interruptible+0x42/0x1a0
  <4> [635.739305] drm_gem_shmem_pin+0x4b/0x140 [drm_shmem_helper]
  <4> [635.739317] dma_buf_dynamic_attach+0x101/0x430
  <4> [635.739323] xe_gem_prime_import+0xcc/0x2e0 [xe]
  <4> [635.739499] drm_prime_fd_to_handle_ioctl+0x184/0x2e0 [drm]
  <4> [635.739594] drm_ioctl_kernel+0x16f/0x250 [drm]
  <4> [635.739693] drm_ioctl+0x35e/0x620 [drm]
  <4> [635.739789] __x64_sys_ioctl+0xb7/0xf0
  <4> [635.739794] do_syscall_64+0x3c/0x90
  <4> [635.739799] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
  <4> [635.739805]

  -> #0 (&dev->object_name_lock){+.+.}-{3:3}:
  <4> [635.739813] check_prev_add+0x1ba/0x14a0
  <4> [635.739818] __lock_acquire+0x203e/0x2ff0
  <4> [635.739823] lock_acquire+0x169/0x3d0
  <4> [635.739827] __mutex_lock+0x124/0x1310
  <4> [635.739832] drm_gem_handle_create+0x32/0x50 [drm]
  <4> [635.739927] xe_gem_create_ioctl+0x1d3/0x550 [xe]
  <4> [635.740102] drm_ioctl_kernel+0x16f/0x250 [drm]
  <4> [635.740197] drm_ioctl+0x35e/0x620 [drm]
  <4> [635.740293] __x64_sys_ioctl+0xb7/0xf0
  <4> [635.740297] do_syscall_64+0x3c/0x90
  <4> [635.740302] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
  <4> [635.740307]

It looks like it should be safe to simply drop the dma-resv lock prior to publishing the object when calling drm_gem_handle_create.

Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/743
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Michal Wajdeczko authored
There are only 4 scratch registers, VF_SW_FLAG(0..3), on each GuC. We shouldn't use the non-existing register VF_SW_FLAG(4) for a posting read.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Michal Wajdeczko authored
If GuC responds with the NO_RESPONSE_BUSY message, we extend our timeout while waiting for the actual response, but we wrongly assumed that the next message will be RESPONSE_SUCCESS, missing that we can still get RESPONSE_FAILURE. Change the condition for the expected message type, using only the bits common to RESPONSE_SUCCESS and RESPONSE_FAILURE (as, by ABI design, they differ only in the last bit).

v2: add comment/checks to the code (Matt)

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
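A hedged sketch of the idea, assuming the HXG type encoding where RESPONSE_SUCCESS and RESPONSE_FAILURE differ only in bit 0 of the type field; the helper name is made up, and FIELD_GET comes from <linux/bitfield.h>.

#include <linux/bitfield.h>

/* Accept both response types by ignoring the single differing bit. */
static bool example_is_response(u32 header)
{
	u32 type = FIELD_GET(GUC_HXG_MSG_0_TYPE, header);

	return (type | 1) == GUC_HXG_TYPE_RESPONSE_SUCCESS;
}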
-
Michal Wajdeczko authored
While copying the GuC response from the scratch registers to the buffer, the formula used to identify the next scratch register is broken. Fix it.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
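Roughly, the corrected addressing looks like the sketch below, assuming consecutive 32-bit scratch registers starting at the reply register; the function and variable names are illustrative, not the driver's actual code.

/* Register i lives at reply_reg.addr + i * sizeof(u32), not at addr + i. */
static void example_copy_response(struct xe_gt *gt, struct xe_reg reply_reg,
				  u32 *response_buf, u32 response_len)
{
	u32 i;

	for (i = 1; i < response_len; i++) {
		struct xe_reg next = XE_REG(reply_reg.addr + i * sizeof(u32));

		response_buf[i] = xe_mmio_read32(gt, next);
	}
}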
-
Michal Wajdeczko authored
This variable holds the full length of the message, including the header length, so it should be checked against GUC_CTB_MSG_MAX_LEN.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The workaround database was just updated to extend this workaround to DG2-G11 (whereas previously it applied only to G10 and G12).

Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
Link: https://lore.kernel.org/r/20231115183029.2649992-2-matthew.d.roper@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Rodrigo Vivi authored
Let's bring a bit of clarity to this 'region' field that is part of the vm_bind operation struct. Rename and document it to make it obvious that it is a region instance, not a mask, and that it should only be used with the prefetch operation itself.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
-
Rodrigo Vivi authored
On one hand, the WAIT_OP represents the operation used for waiting, such as ==, !=, > and so on. On the other hand, the mask is applied to the value used for comparison. Split those two to bring clarity to the uAPI.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
-
Rodrigo Vivi authored
Only cosmetic things; no functional change in this patch. Define every flag with (1 << n) and use the singular FLAG name.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
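For illustration only (the names below are made up, not actual uAPI defines), the before/after shape of the change looks like this:

/* before: plural FLAGS name, open-coded hex values */
#define DRM_XE_EXAMPLE_FLAGS_FOO	0x1
#define DRM_XE_EXAMPLE_FLAGS_BAR	0x2

/* after: singular FLAG name, (1 << n) form */
#define DRM_XE_EXAMPLE_FLAG_FOO		(1 << 0)
#define DRM_XE_EXAMPLE_FLAG_BAR		(1 << 1)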
-
Rodrigo Vivi authored
'Usage' gives the impression of telemetry information, as if someone would query it to see how the memory is currently being used, the available size, etc. However, this API is more than that: it is a global view of all the memory regions available in the system, and user space needs this information so it can then use the mem_region masks that are returned for engine access.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
Rodrigo Vivi authored
- 'native' doesn't make much sense on integrated devices.
- 'slow' is not necessarily true and doesn't pair well as the opposite of 'native'.

Instead, let's use 'near' vs 'far'. It makes sense for all the current Intel GPUs and is future proof. Right now, there's absolutely no need to define, among the 'far' memory, which regions are slower, either in terms of latency, number of hops or bandwidth. In case this becomes a requirement in the future, a new query could be added to indicate a certain 'distance' between a given engine and a memory_region. But for now, this fulfills all of the current requirements in the most straightforward way for the userspace drivers.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
Francois Dugast authored
Change rsvd to pad in struct drm_xe_class_instance to prevent the field from being used in the future.

v2: Change from fixup to regular commit because this touches the uAPI (Francois Dugast)

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Francois Dugast authored
Most constants defined in xe_drm.h which can be used for flags are named DRM_XE_*_FLAG_*, which is helpful to identify them. Make this systematic and add _FLAG where it was missing.

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Francois Dugast authored
Most constants defined in xe_drm.h use DRM_XE_ as a prefix, which is helpful to identify the name space. Make this systematic and add this prefix where it was missing.

v2:
- fix vertical alignment of define values
- remove double DRM_ in some variables (José Roberto de Souza)
v3: Rebase

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
During xe_mmio_probe_vram(), we already store the values returned from xe_mmio_tile_vram_size() into the xe_tile structures. There is no need to call xe_mmio_tile_vram_size() again later during setup of the STOLEN region. Just use the values stored in the root tile.

Signed-off-by: Brian Welty <brian.welty@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Aravind Iddamsetty authored
Drop the interrupt event from the PMU, as it is not useful and not being used by any UMD.

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Francois Dugast authored
As part of the uAPI cleanup, remove this constant, which is not used. The number of GTs is provided as num_gt in drm_xe_query_gt_list.

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-