- 21 Dec, 2023 40 commits
-
-
Matt Roper authored
EXECLIST_CONTROL ($enginebase + 0x550) is a write-only register; we shouldn't be trying to read or report it as part of the device error state. Bspec: 45910, 60335 Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://lore.kernel.org/r/20231109194606.1835284-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Pallavi Mishra authored
Pass a valid vm to xe_migrate_update_pgtables. Resolves NPD crash seen with igt@xe_live_ktest@migrate Reviewed-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Alexander Usyskin authored
Configure and enable PVC HECI GSC support. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Haridhar Kalvala authored
Enable Force Dispatch Ends Collection for DG2. BSpec: 46001 Signed-off-by: Haridhar Kalvala <haridhar.kalvala@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231108073351.3998413-1-haridhar.kalvala@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Jonathan Cavitt authored
The spec for this register, like many other interrupt related ones, asks software to write back '1' to clear the serviced bits. Let's respect the spec. v2: - Update commit message - Add missing CC Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com> CC: Daniele Spurio Ceraolo <daniele.ceraolospurio@intel.com> CC: Lucas De Marchi <lucas.demarchi@intel.com> CC: Rodrigo Vivi <rodrigo.vivi@intel.com> CC: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
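As a hedged illustration of the write-1-to-clear semantics described above (the register name and the xe_mmio_* helper usage are assumptions for the sketch, not the exact code from this patch):

    static void service_and_clear_irq(struct xe_gt *gt, struct xe_reg status_reg)
    {
            u32 status = xe_mmio_read32(gt, status_reg);

            if (!status)
                    return;

            /* ... service the interrupt sources indicated in 'status' ... */

            /*
             * Per the spec, write back '1' to each serviced bit to clear it;
             * writing zeroes would leave the bits latched and still pending.
             */
            xe_mmio_write32(gt, status_reg, status);
    }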
-
Koby Elbaz authored
If lmem (VRAM) is not fully initialized, the punit will power down the GT, which will prevent register access from the driver side. Move that check into a dedicated function (xe_verify_lmem_ready) to make the code clearer. Signed-off-by: Koby Elbaz <kelbaz@habana.ai> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20231029175326.626745-1-kelbaz@habana.ai Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
In fault mode, page table binding is deferred until the fault handler. Thus vma->tile_present will be unset unless the VMA is accessed by the GPU. During a later unbind, the logic doesn't account for the fact that the local fence variable will be NULL in this case, leading to NULL being passed into dma_fence_add_callback() and causing a few WARN_ONs to be printed to the console. The fix is simply to hoist the fence variable computation so it is done earlier. Resolves warnings seen with igt@xe_exec_fault_mode@once-invalid-fault Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Gustavo Sousa authored
This workaround applies to all steppings of Xe_LPM+. Implement the KMD part. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231106210655.175109-3-gustavo.sousa@intel.com Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
The "GPL-2.0" SPDX license identifier is deprecated. Update the code to use "GPL-2.0-only" instead. Choose this identifier over "GPL-2.0-or-later" since it's the most restrictive of the two and it's not fully clear that "GPL-2.0" also allows "GPL-2.0-or-later". Cc: Francois Dugast <francois.dugast@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Francois Dugast <francois.dugast@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231107082440.7568-1-thomas.hellstrom@linux.intel.comSigned-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
When processing G2H messages for pagefault or access counters, we queue a work item and call queue_work(). This fails if the worker thread is already queued to run. The expectation is that the worker function will do more than process a single item and return. It needs to either process all pending items or requeue itself if items are pending. But requeuing will add latency and a potential context switch can occur. We don't want to add unnecessary latency, so the worker should process as many faults as it can within a reasonable duration of time. We also do not want to hog the cpu core, so here we execute in a loop and requeue if still running after more than 20 ms. This seems a reasonable framework and is easy to tune further if needed. This resolves issues seen with several igt@xe_exec_fault_mode subtests where the GPU will hang when the KMD ignores a pending pagefault. v2: requeue the worker instead of having an internal processing loop. v3: implement a hybrid model of v1 and v2: run for 20 msec before requeueing if still running v4: only requeue in worker if queue is non-empty (Matt B) Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
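A minimal sketch of the hybrid model described above, assuming illustrative names (struct pf_queue, get_pagefault(), handle_pagefault(), pf_queue_has_pending()) rather than the exact driver symbols:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    #define USM_QUEUE_MAX_RUNTIME_MS	20

    static void pf_queue_work_func(struct work_struct *w)
    {
            struct pf_queue *pf_queue = container_of(w, struct pf_queue, worker);
            unsigned long threshold = jiffies +
                    msecs_to_jiffies(USM_QUEUE_MAX_RUNTIME_MS);
            struct pagefault pf;

            /* Process as many faults as possible within the time budget */
            while (get_pagefault(pf_queue, &pf)) {
                    handle_pagefault(pf_queue, &pf);

                    if (time_after(jiffies, threshold) &&
                        pf_queue_has_pending(pf_queue)) {
                            /*
                             * Budget exhausted with faults still pending:
                             * requeue ourselves instead of hogging the core.
                             */
                            queue_work(pf_queue->wq, w);
                            break;
                    }
            }
    }

Running bounded-but-batched like this avoids both the per-item requeue latency of the v2 approach and the unbounded CPU hogging of the v1 internal loop.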
-
Andrzej Hajda authored
Driver initiated function-reset (FLR) is the highest level of reset that we can trigger from within the driver. In contrast to PCI FLR it doesn't require re-enumeration of PCI BAR. It can be useful in case GT fails to reset. It is also the only way to trigger GSC reset from the driver and can be used in future addition of GSC support. v2: - use regs from xe_regs.h - move the flag to xe.mmio - call flr only on root gt - use BIOS protection check - copy/paste comments from i915 v3: - flr code moved to xe_device.c v4: - needs_flr_on_fini moved to xe_device Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Carlos Santa authored
This error gets printed inside a sandbox with warnings turned on. /mnt/host/source/src/third_party/kernel/v5.15/drivers/gpu/drm/xe/xe_gt_idle_sysfs.c:87:26: error: format string is not a string literal (potentially insecure) [-Werror,-Wformat-security] return sysfs_emit(buff, gtidle->name); ^~~~~~~~~~~~ /mnt/host/source/src/third_party/kernel/v5.15/drivers/gpu/drm/xe/xe_gt_idle_sysfs.c:87:26: note: treat the string as an argument to avoid this return sysfs_emit(buff, gtidle->name); ^ "%s", 1 error generated. CC: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Carlos Santa <carlos.santa@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
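A sketch of the fix suggested by the compiler note, with the surrounding show() callback and dev_to_gtidle() helper assumed for context:

    static ssize_t name_show(struct device *dev,
                             struct device_attribute *attr, char *buff)
    {
            struct xe_gt_idle *gtidle = dev_to_gtidle(dev);    /* helper assumed */

            /* Flagged by -Wformat-security: non-literal format string */
            /* return sysfs_emit(buff, gtidle->name); */

            /* Fix: treat the name as an argument to an explicit format */
            return sysfs_emit(buff, "%s", gtidle->name);
    }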
-
Matt Roper authored
This workaround is primarily implemented by the BIOS. However if the BIOS applies the workaround it will reserve a small piece of our DSM (which should be at the top, right below the WOPCM); we just need to keep that region reserved so that nothing else attempts to re-use it. v2 (Gustavo): - Check for NULL media_gt - Mask bits [5:0] to avoid potential issues in future platforms Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Link: https://lore.kernel.org/r/20231102124855.1940491-1-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
For MTL and above, 16.67 MHz is the scale factor used to calculate the rpX frequencies. v2: Fix review comment (Ashutosh) Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231101163212.1629685-1-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
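For illustration, assuming the usual integer-math form of the 16.67 MHz (i.e. 50/3 MHz) unit versus the older 50 MHz unit; the constant and helper names below are not the exact driver symbols:

    #define GT_FREQUENCY_MULTIPLIER	50
    #define GT_FREQUENCY_SCALER	3

    /* Convert a raw rpX fuse/register field to MHz */
    static u32 decode_freq_mhz(u32 raw, bool mtl_or_later)
    {
            if (mtl_or_later)
                    return raw * GT_FREQUENCY_MULTIPLIER / GT_FREQUENCY_SCALER;

            return raw * GT_FREQUENCY_MULTIPLIER;
    }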
-
Matt Roper authored
The LNCFCMOCS registers no longer exist on Xe2 so there's no need to attempt to program them. Since GLOB_MOCS is the only set of MOCS registers now, it's expected to be used for all platforms (both igpu and dgpu) going forward, so adjust the MOCS programming flags accordingly. v2: - Fix typo (global mocs condition is >=, not >) Bspec: 71582 Reviewed-by: Pallavi Mishra <pallavi.mishra@intel.com> Link: https://lore.kernel.org/r/20231031140536.303746-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
The access counters worker function is fixed to advance the head pointer when dequeuing from the acc_queue. This now matches the similar logic in get_pagefault(). Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com> Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
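A hedged sketch of the dequeue logic after the fix, with the queue layout, field names and message size all assumed (mirroring how get_pagefault() consumes the pf_queue):

    #define ACC_MSG_LEN_DW	8	/* dwords per access-counter message (assumed) */

    static bool get_acc(struct acc_queue *acc_queue, struct acc *acc)
    {
            bool found = false;

            spin_lock(&acc_queue->lock);
            if (acc_queue->head != acc_queue->tail) {
                    decode_acc_msg(acc, &acc_queue->data[acc_queue->head]);
                    /* Advance the head past the consumed message, wrapping */
                    acc_queue->head = (acc_queue->head + ACC_MSG_LEN_DW) %
                            ARRAY_SIZE(acc_queue->data);
                    found = true;
            }
            spin_unlock(&acc_queue->lock);

            return found;
    }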
-
Matthew Brost authored
If a rebind is skipped the tile_present mask needs to be updated for the newly created vma to properly reflect the state of the vma. Reported-by: <christoph.manszewski@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Pallavi Mishra authored
Print CTB info during TLB invalidation timeout event. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
Currently mem_type_to_tile() is being used to access the tile's underlying tile.mem.vram. However, this function makes the assumption that a mem_type will only ever map to a single tile. Now that the TTM VRAM manager contains a pointer to the memory_region, make use of this in xe_bo.c. As such, introduce a helper function res_to_mem_region() to get the ttm_vram_mgr->vram from the BO's resource, and use this to replace usage of mem_type_to_tile(). xe_tile is still needed to choose the migration context, so that part is unchanged; since this is now its only remaining use, the function is renamed to mem_type_to_migrate(). Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
The function is unused, and removing it also allows removing mem_type_to_tile(), which it calls. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
Replace the xe_ttm_vram_mgr.tile pointer with an xe_mem_region pointer instead. The former is currently unused. TTM VRAM regions expose device VRAM, and it is better to store a pointer directly to the xe_mem_region instead of the tile. This allows cleaning up unnecessary usage of xe_tile in xe_bo.c in a later patch. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
Expose power1_max_interval, that is the tau corresponding to PL1, as a custom hwmon attribute. Some bit manipulation is needed because of the format of PKG_PWR_LIM_1_TIME in PACKAGE_RAPL_LIMIT register (1.x * power(2,y)) v2: Get rpm wake ref while accessing power1_max_interval v3: %s/hwmon/xe_hwmon/ v4: - As power1_max_interval is rw attr take lock in read function as well - Refine comment about val to fix point conversion (Andi) - Update kernel version and date in doc v5: Fix review comments (Anshuman) Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-4-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
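To illustrate the bit manipulation, a sketch of decoding PKG_PWR_LIM_1_TIME into an interval: the field encodes 1.x * 2^y, with a 2-bit fractional part 'x' (in quarters) and a 5-bit exponent 'y'. The bit positions and the externally supplied time unit are assumptions, not taken from this patch:

    #include <linux/bitfield.h>

    #define PWR_LIM_1_TIME_X	GENMASK(23, 22)	/* assumed bit positions */
    #define PWR_LIM_1_TIME_Y	GENMASK(21, 17)

    static u64 decode_power1_max_interval(u32 reg_val, u64 time_unit_ms)
    {
            u32 x = FIELD_GET(PWR_LIM_1_TIME_X, reg_val);
            u32 y = FIELD_GET(PWR_LIM_1_TIME_Y, reg_val);

            /* 1.x * 2^y  ==  (4 + x) * 2^y / 4, done in integer math */
            return (4 + (u64)x) * (1ull << y) * time_unit_ms / 4;
    }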
-
Badal Nilawar authored
Take hwmon_lock while accessing hwmon rw attributes. For read-only attributes it's not required to take the lock, as reads are protected by the sysfs layer and are therefore sequential. Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-3-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
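A minimal sketch of the rule described above, with the function and helper names assumed: accesses to rw attributes take hwmon_lock, while plain show() paths for read-only attributes rely on sysfs serialisation and need no lock:

    static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, long value)
    {
            int ret;

            /* rw attribute: serialise against concurrent readers/writers */
            mutex_lock(&hwmon->hwmon_lock);
            ret = write_power_limit_locked(hwmon, value);	/* assumed helper */
            mutex_unlock(&hwmon->hwmon_lock);

            return ret;
    }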
-
Badal Nilawar authored
Add kernel doc and refactor some of the hwmon functions; there is no functionality change. Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-2-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
With things like pipelined evictions, VRAM pages can be marked as free and yet still have some active kernel fences, with the idea that the next caller to allocate the memory will respect them. However it looks like we are missing synchronisation for KMD internal buffers, like page-tables, lrc etc. For userspace objects we should already have the required synchronisation for CPU access via the fault handler, and likewise for GPU access when vm_binding them. To fix this, synchronise against any kernel fences for all KMD objects at creation. This should resolve some severe corruption seen during evictions. v2 (Matt B): - Revamp the comment explaining this. Also mention why USAGE_KERNEL is correct here. v3 (Thomas): - Make sure to use ctx.interruptible for the wait. Testcase: igt@xe-evict-ccs Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/853 Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/855 Reported-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Tested-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
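A rough sketch of the fix under stated assumptions (not the exact patch): after backing store is allocated for a KMD-internal object, wait on the kernel-usage fences attached to its reservation object, using the interruptible mode from the TTM operation context:

    #include <linux/dma-resv.h>

    static int wait_for_kernel_fences(struct ttm_buffer_object *ttm_bo,
                                      struct ttm_operation_ctx *ctx)
    {
            long timeout;

            /*
             * DMA_RESV_USAGE_KERNEL covers kernel-internal activity such as
             * pipelined eviction/migration fences left behind on freed VRAM.
             */
            timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
                                            DMA_RESV_USAGE_KERNEL,
                                            ctx->interruptible,
                                            MAX_SCHEDULE_TIMEOUT);
            return timeout < 0 ? timeout : 0;
    }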
-
Matthew Auld authored
There could be active fences already in the dma-resv for the object prior to clearing. Make sure to input them as dependencies for the clear job. v2 (Matt B): - We can use USAGE_KERNEL here, since it's only the move fences we care about here. Also add a comment. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
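Sketched here with the generic DRM scheduler helper for illustration (the driver's own job/dependency plumbing may differ): only kernel-usage fences, i.e. the move fences, are pulled in as dependencies of the clear job:

    #include <drm/gpu_scheduler.h>
    #include <linux/dma-resv.h>

    static int add_clear_job_deps(struct drm_sched_job *job, struct dma_resv *resv)
    {
            /* Move/clear activity is tracked as USAGE_KERNEL on the resv */
            return drm_sched_job_add_resv_dependencies(job, resv,
                                                       DMA_RESV_USAGE_KERNEL);
    }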
-
Matthew Auld authored
Spec says: "This is a privileged command; it will not be effective (will be converted to a no-op) if executed from within a non-privileged batch buffer." However, here it looks like we are just emitting it inside some bb which was jumped to via the ppGTT, which should be considered a non-privileged address space. It looks like we just need some way of preventing things like the emit_pte() and the later copy/clear from being preempted in between, so instead just emit directly in the ring for migration jobs. Bspec: 45716 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Shekhar Chauhan authored
Add Xe_LPM+ support to an existing workaround. BSpec: 51762 Signed-off-by: Shekhar Chauhan <shekhar.chauhan@intel.com> Link: https://lore.kernel.org/r/20231030150756.1011777-1-shekhar.chauhan@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Priyanka Dandamudi authored
Add a conditional check for the access counter granularity. This check will return -EINVAL if the granularity is beyond 64M, which is a hardware limitation. v2: Defined XE_ACC_GRANULARITY_128K 0, XE_ACC_GRANULARITY_2M 1, XE_ACC_GRANULARITY_16M 2, XE_ACC_GRANULARITY_64M 3 as part of the uAPI, so that userspace can also use them. (Oak) v3: Move uAPI to proper location and give proper documentation. (Brian, Oak) Cc: Oak Zeng <oak.zeng@intel.com> Cc: Janga Rahul Kumar <janga.rahul.kumar@intel.com> Cc: Brian Welty <brian.welty@intel.com> Signed-off-by: Priyanka Dandamudi <priyanka.dandamudi@intel.com> Reviewed-by: Oak Zeng <oak.zeng@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
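The granularity values and the validation read roughly as below; the exact uAPI struct/field carrying the granularity is not shown and the check's function name is an assumption:

    #define XE_ACC_GRANULARITY_128K	0
    #define XE_ACC_GRANULARITY_2M	1
    #define XE_ACC_GRANULARITY_16M	2
    #define XE_ACC_GRANULARITY_64M	3	/* hardware maximum */

    static int xe_acc_granularity_check(u32 granularity)
    {
            /* Anything beyond 64M exceeds the hardware limitation */
            if (granularity > XE_ACC_GRANULARITY_64M)
                    return -EINVAL;

            return 0;
    }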
-
Lucas De Marchi authored
FORCE_SLM_FENCE_SCOPE_TO_TILE and FORCE_UGM_FENCE_SCOPE_TO_TILE are in the upper dword of the LSC_CHICKEN_BIT_0 register. Also, the 14010918519 workaround only applies to early steppings, A*. Eventually those should be dropped, like they were in commit eaeb4b36 ("drm/i915/dg2: Drop pre-production GT workarounds"), so let's make sure they are annotated appropriately. Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Link: https://lore.kernel.org/r/20231024220412.223868-1-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
MTL uses a versionless GSC-enabled binary. v2: don't use the filename to identify the header type (Lucas) v3: fix commit msg (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
On newer platforms the HuC survives reset and stays authenticated, so no need to re-authenticate it. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
On MTL-style multi-gt platforms, the HuC is only available on the media GT, so we need to consider it as not supported on the render GT. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
The GSC-enabled HuC binary starts with a GSC header, which is followed by the legacy-style CSS header and the binary itself. We can parse the GSC headers to find the HuC version and the location of the binary to be used for the DMA transfer. The parsing function has been designed to be re-used for the GSC binary, so the entry names are external parameters (because the GSC uses different ones) and the CSS entry is optional (because the GSC doesn't have it). v2: move new code to uc_fw.c, better comments and error checking, split old code move to separate patch (Lucas), move headers and documentation to uc_fw_abi.h. v3: use 2 separate loops, rework marker check (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
GSC binaries and newer HuC ones use GSC-style headers instead of the CSS. In preparation for adding support for such parsing, split out the current parsing code to its own function, to make it cleaner to add the new paths. The existing doc section has also been renamed to narrow it to CSS-based binaries. v2: new patch in series, split out from next patch for easier reviewing v3: drop unneeded include (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
In the existing code flow, for newer platforms (i.e. graphics version > 1270) the rpX (rp0, rpn and rpe) fused values are read from the Gen6 registers, which is not correct. Unless specified otherwise, the 1270 registers should be valid for 1270+ platforms as well. Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
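Illustratively, the selection boils down to a version check like the one below; the register names and the GRAPHICS_VERx100() gating are assumptions for the sketch:

    static u32 read_rp0_fuse(struct xe_gt *gt)
    {
            struct xe_device *xe = gt_to_xe(gt);

            /* 12.70+ platforms must use the MTL-style rpX fuse registers */
            if (GRAPHICS_VERx100(xe) >= 1270)
                    return xe_mmio_read32(gt, MTL_RP_STATE_CAP);	/* assumed reg */

            return xe_mmio_read32(gt, RP_STATE_CAP);		/* assumed reg */
    }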
-
Matt Roper authored
The MOCS registers should be written in an MCR-specific manner on Xe_HP and beyond to prevent any other driver threads or external firmware from putting the hardware into unicast mode while we initialize the MOCS table. Bspec: 66534, 67609, 71185 Cc: Ruthuvikas Ravikumar <ruthuvikas.ravikumar@intel.com> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Link: https://lore.kernel.org/r/20231023204112.2856331-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
As with DG2/MTL, Xe2 also fails to emit instruction headers for SVG state instructions if no explicit state has been set. The SVG part of the LRC is nearly identical to DG2/MTL; the only change is that 3DSTATE_DRAWING_RECTANGLE has been replaced by 3DSTATE_DRAWING_RECTANGLE_FAST, so we can just re-use the same state table and handle that single instruction when we encounter it. Bspec: 65182 Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-8-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
When recording the default LRC, the expectation is that the hardware's original state settings (both register and instruction) will be written out to the LRC upon first context switch. For many 3DSTATE_* state instructions that don't truly have "default" values, this translates to a simple instruction header (opcodes + dword length) being written to the LRC, followed by an appropriate number of blank dwords as a place holder. When userspace creates a context (which starts as a copy of the default LRC), they'll generally emit real 3DSTATE_* as part of their initialization to select the settings they desire. If they don't emit one of the 3DSTATE instructions, then the zeroed dwords that remain in their LRC image generally translate to various state remaining disabled. This will either be what userspace wants or will lead to very reproducible and easily-debugged problems (rendering glitches, engine hangs). It turns out that a subset of the 3DSTATE instructions, specifically those belonging to the SVG (State Variable - Global) unit, are not only emitting 0's for the instruction's "body" dwords, but also for the instruction header dword if no specific state has been explicitly set before context switch. This means that when the hardware switches to a context that hasn't explicitly provided an appropriate state setting, the hardware will just see a sequence of NOOPs in the spot reserved for that 3DSTATE instruction while executing the LRC, and the actual hardware state setting will unintentionally inherit the configuration used by the previously running context. Now when userspace makes a mistake and forgets to emit an important state instruction they no longer get consistent, easily-reproducible corruption/hangs, but rather erratic behavior where the presence/absence of a problem depends on what other workloads are running on the system and what order the contexts are scheduled on the engine. A specific example of this that came up recently relates to mesh shading. The OpenGL driver was not specifically emitting a 3DSTATE_MESH_CONTROL to disable mesh shading at context init, so on context switch, mesh shading would either be on or off depending on what the previous context had been doing. Vulkan apps _were_ enabling mesh shading, so running a Vulkan app and then context switching to an OpenGL app resulted in mesh shading still unexpectedly being enabled during OpenGL operation, and since other Mesh-related state was not properly initialized for that context a GPU hang was seen. Due to the specific ordering requirements (Vulkan app runs first, followed by OpenGL app), it took additional debug effort to track down the cause of the problem. There are various workarounds related to this behavior, with current implementations handled in the userspace drivers. E.g., Wa_14019789679 and Wa_22018402687. However it's been suggested that the kernel driver can help simplify things here by emitting zeroed SVG state with proper instruction headers as part of our default context creation (i.e., at the same point we apply LRC workarounds). This will help ensure that any future cases where a userspace driver does not emit an important state setting will result in consistent behavior. Bspec: 46261 Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-7-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
On some platforms we need to emit some non-register state while recording an engine class' default LRC. Add the infrastructure to support this; actual per-platform tables will be added in future patches. v2: - Checkpatch whitespace fix - Add extra assertion to ensure num_dw != 0. (Bala) Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-6-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
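A hedged sketch of what such a table-driven emission could look like; the entry layout and helper names are illustrative, not the exact interface added by this patch:

    struct xe_lrc_empty_state {
            u32 instr;	/* raw instruction header: opcode + dword length */
            u16 num_dw;	/* total dwords including the header; must be != 0 */
    };

    /* Emit the header followed by zeroed body dwords as placeholders */
    static u32 *emit_empty_state(u32 *cs, const struct xe_lrc_empty_state *e)
    {
            int i;

            *cs++ = e->instr;
            for (i = 1; i < e->num_dw; i++)
                    *cs++ = 0;

            return cs;
    }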
-