- 21 Mar, 2022 3 commits
-
-
Tvrtko Ursulin authored
On a multi-tile platform, each tile has its own registers + GGTT space, and BAR 0 is extended to cover all of them. Up to four GTs are supported in i915->gt[], with slot zero shadowing the existing i915->gt0 to enable source compatibility with legacy driver paths. A for_each_gt macro is added to iterate over the GTs and will be used by upcoming patches that convert various parts of the driver to be multi-gt aware. Only the primary/root tile is initialized for now; the other tiles will be detected and plugged in by future patches once the necessary infrastructure is in place to handle them. Signed-off-by: Abdiel Janulgue <abdiel.janulgue@gmail.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220318233938.149744-4-andi.shyti@linux.intel.com
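As an illustrative sketch only (not necessarily the exact macro added here), such an iterator could be built on the driver's existing for_each_if() helper, assuming an I915_MAX_GT bound of four:

    #define I915_MAX_GT 4

    #define for_each_gt(gt__, i915__, id__) \
            for ((id__) = 0; \
                 (id__) < I915_MAX_GT; \
                 (id__)++) \
                    for_each_if((gt__) = (i915__)->gt[(id__)])

Callers would then walk all initialised GTs with for_each_gt(gt, i915, id) { ... }; tiles that are not yet plugged in are simply skipped by the NULL check inside for_each_if().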
-
Andi Shyti authored
The "gt_is_root(struct intel_gt *gt)" helper return true if the gt is the root gt, which means that its id is 0. Return false otherwise. Suggested-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220318233938.149744-3-andi.shyti@linux.intel.com
-
Andi Shyti authored
With the upcoming multitile support each tile will have its own local memory. Mark the current LMEM with the suffix '0' to emphasise that it belongs to the root tile. Suggested-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220318233938.149744-2-andi.shyti@linux.intel.com
-
- 18 Mar, 2022 5 commits
-
-
Matthew Brost authored
Add logical mapping for VDBOXes. This mapping is required for split-frame workloads, which otherwise fail with 00000000-F8C53528: [GUC] 0441-INVALID_ENGINE_SUBMIT_MASK ... if the application is using the logical id to reorder the engines and then using it for the batch buffer submission. It's not a big problem on media versions 11 and 12 as they have only 2 instances of VCS and the logical-to-physical mapping is monotonically increasing - if the application is not using the logical id. Changing it for the previous platforms allows the media driver implementation for the next ones (12.50 and above) to be the same, checking the logical id. It should also not introduce any bug for the old versions of userspace not checking the id. The mapping added here is the complete map needed by XEHPSDV. Previous platforms with only 2 instances will just use a partial map and should still work. v2: Remove static from map variable (José) Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> [ Extend the mapping to media versions 11 and 12 and give proper justification in the commit message why ] Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Acked-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220316234538.434357-2-lucas.demarchi@intel.com
-
Lucas De Marchi authored
Earlier versions of commit a5b7ef27 ("drm/i915: Add struct to hold IP version") named "ver" as "arch", and when the field was later renamed the rename was missed in MEDIA_VER_FULL() since it's currently not used. Fixes: a5b7ef27 ("drm/i915: Add struct to hold IP version") Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220316234538.434357-1-lucas.demarchi@intel.com
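For illustration only (a sketch assuming the IP_VER() helper and the media.ver/media.rel fields introduced by the original commit), the corrected macro would read along these lines:

    #define MEDIA_VER_FULL(i915)    IP_VER(INTEL_INFO(i915)->media.ver, \
                                           INTEL_INFO(i915)->media.rel)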
-
Akeem G Abodunrin authored
Starting with DG2, preemption can no longer be controlled by userspace on a per-context basis. Instead, the hardware only allows us to enable or disable preemption on a global, system-wide basis. Also, we lose the ability to specify the preemption granularity (such as batch-level vs command-level vs object-level). v2 (MattR): - Move debugfs interface to a separate patch. (Jani) v3 (MattR): - Drop the debugfs support completely for now. Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> Cc: John Harrison <john.c.harrison@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220318021051.2073847-1-matthew.d.roper@intel.com
-
Rodrigo Vivi authored
In this interface, i915 returns a blob of data which it receives from the GuC software. This blob provides some useful data about the hardware for drivers. The format of this blob will be documented in the Programmer Reference Manuals when released. Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Kenneth Graunke <kenneth.w.graunke@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Slawomir Milczarek <slawomir.milczarek@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Acked-by: Jordan Justen <jordan.l.justen@intel.com> Tested-by: Jordan Justen <jordan.l.justen@intel.com> Acked-by: Jon Bloomfield <jon.bloomfield@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220306232157.1174335-3-jordan.l.justen@intel.com
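A hedged userspace-side sketch of the usual two-call i915 query pattern for fetching such a blob (the query item id is assumed here to be DRM_I915_QUERY_HWCONFIG_BLOB; error handling omitted):

    struct drm_i915_query_item item = {
            .query_id = DRM_I915_QUERY_HWCONFIG_BLOB,
    };
    struct drm_i915_query query = {
            .num_items = 1,
            .items_ptr = (uintptr_t)&item,
    };

    ioctl(fd, DRM_IOCTL_I915_QUERY, &query);        /* length == 0: kernel fills in the blob size */

    void *blob = malloc(item.length);
    item.data_ptr = (uintptr_t)blob;

    ioctl(fd, DRM_IOCTL_I915_QUERY, &query);        /* second call copies the blob into the buffer */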
-
John Harrison authored
Implement support for fetching the hardware description table from the GuC. The call is made twice - once without a destination buffer to query the size and then a second time to fill in the buffer. The table is stored in the GT structure so that it can be fetched once at driver load time. Keeping it inside a GuC structure would mean it would be released and reloaded on a GuC reset (part of a full GT reset). However, the table does not change just because the GT has been reset and the GuC reloaded. Also, dynamic memory allocations inside the reset path are a problem. Note that the table is only available on ADL-P and later platforms. v2 (John's v2 patch): * Move to GT level to avoid memory allocation during reset path (and unnecessary re-read of the table on a reset). v5 (of Jordan's posting): * Various changes made by Jordan and recommended by Michal - Makefile ordering - Adjust "struct intel_guc_hwconfig hwconfig" comment - Set Copyright year to 2022 in intel_guc_hwconfig.c/.h - Drop inline from hwconfig_to_guc() - Replace hwconfig param with guc in __guc_action_get_hwconfig() - Move zero size check into guc_hwconfig_discover_size() - Change comment to say zero size offset/size is needed to get size - Add has_guc_hwconfig to devinfo and drop has_table() - Change drm_err to notice in __uc_init_hw() and use %pe v6 (of Jordan's posting): * Added a couple more small changes recommended by Michal * Merge in John's v2 patch, but note: - Using drm_notice as recommended by Michal - Reverted Michal's suggestion of using devinfo v7 (of Jordan's posting): * Change back to drm_err as preferred by John Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Acked-by: Jon Bloomfield <jon.bloomfield@intel.com> Signed-off-by: Jordan Justen <jordan.l.justen@intel.com> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220306232157.1174335-2-jordan.l.justen@intel.com
-
- 16 Mar, 2022 10 commits
-
-
Matthew Auld authored
On integrated it looks like the GGTT base should always 1:1 map to somewhere within DSM. On discrete the base seems to be pre-programmed with a normal lmem address, and is not 1:1 mapped with the base address. On such devices probe the lmem address directly from the PTE. v2(Ville): - The base is actually the pre-programmed GGTT address, which is then meant to 1:1 map to somewhere inside dsm. In the case of dgpu the base looks to just be some offset within lmem, but this also happens to be the exact dsm start, on dg1. Therefore we should only need to fudge the physical address, before allocating from stolen. - Bail if it's not located in dsm. v3: - Scratch that. There doesn't seem to be any relationship between the base and the PTE address, on at least DG1. Let's instead just grab the lmem address from the PTE itself. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Nirmoy Das <nirmoy.das@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-7-matthew.auld@intel.com
-
CQ Tang authored
When the system does not have a mappable aperture, ggtt->mappable_end=0. In this case if we pass PIN_MAPPABLE when pinning a vma, the pinning code will return -ENOSPC. So conditionally set PIN_MAPPABLE if HAS_GMCH(). Suggested-by: Chris P Wilson <chris.p.wilson@intel.com> Signed-off-by: CQ Tang <cq.tang@intel.com> Cc: Radhakrishna Sripada <radhakrishna.sripada@intel.com> Cc: Ap Kamal <kamal.ap@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-6-matthew.auld@intel.com
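A simplified sketch of the resulting flag selection (the surrounding pin call is illustrative; the exact call site and additional flags will differ):

    unsigned int flags = 0;

    /* Only ask for a mappable GGTT offset where an aperture actually exists. */
    if (HAS_GMCH(i915))
            flags |= PIN_MAPPABLE;

    vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, flags);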
-
Matthew Auld authored
For the ttm backend we can use the existing placement fields fpfn and lpfn to force the allocator to place the object at the requested offset, potentially evicting stuff if the spot is currently occupied. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-5-matthew.auld@intel.com
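A sketch of how a fixed offset translates into a TTM placement, assuming the offset and size are page aligned:

    struct ttm_place place = {};

    /* Constrain the allocator to exactly the pages [offset, offset + size). */
    place.fpfn = offset >> PAGE_SHIFT;
    place.lpfn = place.fpfn + (size >> PAGE_SHIFT);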
-
Matthew Auld authored
Add a generic interface for allocating an object at some specific offset, and convert stolen over. Later we will want to hook this up to different backends. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-4-matthew.auld@intel.com
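A sketch of what such an interface could look like; the function name and parameter order here are assumptions for illustration:

    struct drm_i915_gem_object *
    i915_gem_object_create_region_at(struct intel_memory_region *mem,
                                     resource_size_t offset,
                                     resource_size_t size,
                                     unsigned int flags);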
-
Matthew Auld authored
Keep the behaviour consistent with normal lmem, where we assume that CPU access is required by default. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-3-matthew.auld@intel.com
-
Akeem G Abodunrin authored
On client platforms with a reduced LMEM BAR, we should be able to continue with driver load with a reduced io_size. Instead of using the BAR size to determine how large stolen should be, we should use the ADDR_RANGE register to figure this out (at least on platforms like DG2). For simplicity we don't attempt to support partially mappable stolen. v2: rearrange the io_mapping_init_wc slightly, since the stolen setup might result in reduced io_size. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Co-developed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-2-matthew.auld@intel.com
-
Matthew Auld authored
Just pass along the probed io_size. The backend should be able to utilize the entire range here, even if some of it is non-mappable. It does leave open what to do with stolen local-memory. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-1-matthew.auld@intel.com
-
Matt Roper authored
Upcoming patches will need to steer writes to multicast registers as well as reading them. Although the setting of the 'multicast' bit should only really matter for write operations (reads always operate in a unicast manner and give us the result from one specific instance), Wa_22013088509 suggests that we leave the multicast bit enabled when performing read operations, so we follow suit here. Cc: Harish Chegondi <harish.chegondi@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220314234203.799268-4-matthew.d.roper@intel.com
-
Daniele Ceraolo Spurio authored
GuC has its own steering mechanism and can't use the default set by i915, so we need to provide the steering information that the FW will need to save/restore registers while processing an engine reset. The GuC interface allows us to do so as part of the register save/restore list and it requires us to specify the steering for all multicast registers, even those that would be covered by the default setting for CPU access. Given that we do not distinguish between registers that do not need steering and registers that are guaranteed to work with the default steering, we set the steering for all entries in the GuC list that do not require a special steering (e.g. mslice) to the default settings; this will cost us a few extra writes during engine reset but allows us to keep the steering logic simple. Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220314234203.799268-3-matthew.d.roper@intel.com
-
Matt Roper authored
Add a new 'steering' node in each gt's debugfs directory that tells whether we're using explicit steering for various types of MCR ranges and, if so, what MMIO ranges it applies to. We're going to be transitioning away from implicit steering, even for slice/dss steering soon, so the information reported here will become increasingly valuable once that happens. v2: - Adding missing 'static' on intel_steering_types[] (Jose, sparse) v3: - "static const char *" -> "static const char * const" (sparse) Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315170250.954380-1-matthew.d.roper@intel.com
-
- 15 Mar, 2022 1 commit
-
-
John Harrison authored
sseu_dev_info is already a pretty large structure which will likely continue to grow when future platforms increase potential DSS and EU counts. Let's replace the stack placement of this structure in debugfs with a dynamic allocation. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220315020805.844962-1-matthew.d.roper@intel.com
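The change boils down to the usual pattern in the debugfs handler (a sketch; the real code fills and prints the topology between the allocation and the free):

    struct sseu_dev_info *sseu;

    sseu = kzalloc(sizeof(*sseu), GFP_KERNEL);
    if (!sseu)
            return -ENOMEM;

    /* ... query the hardware and print the topology ... */

    kfree(sseu);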
-
- 14 Mar, 2022 2 commits
-
-
Matt Roper authored
When running on Xe_HP or beyond, let's use an updated format for describing topology in our error state dumps and debugfs to give a more accurate view of the hardware: - Just report DSS directly without the legacy "slice0" output that's no longer meaningful. - Indicate whether each DSS is accessible for geometry and/or compute. - Rename "rcs_topology" to "sseu_topology" since the information reported is common to both RCS and CCS engines now. v2: - Name static functions in a more consistent manner. (Lucas) Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220311225459.385515-2-matthew.d.roper@intel.com
-
Matt Roper authored
Xe_HP removed "slice" as a first-class unit in the hardware design. Instead we now have a single pool of subslices (which are now referred to as "DSS") that different hardware units have different ways of grouping ("compute slices," "geometry slices," etc.). For the purposes of topology representation, we treat Xe_HP-based platforms as having a single slice that contains all of the platform's DSS. There's no need to allocate storage space for (max legacy slices * max dss); let's update some of our macros to minimize the storage requirement for sseu topology. We'll also document some of the constants to make it a little bit more clear what they represent. v2: - s/LEGACY/HSW/ in macro names. (Lucas) - Rename MAX() to SSEU_MAX() to avoid any potential clashes with other definitions elsewhere. Unfortunately max()/max_t() from linux/minmax.h cannot be used in this context. (Lucas) Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220311225459.385515-1-matthew.d.roper@intel.com
-
- 11 Mar, 2022 2 commits
-
-
Matt Roper authored
We shouldn't really be keeping track of how many SFC_DONE registers our platforms can have, but rather how many SFC hardware units there can be (each SFC unit will have one corresponding SFC_DONE register). So drop the stray GEN12_SFC_DONE_MAX definition we had in the register definition file and replace it with an I915_MAX_SFC that follows the pattern we use for other hardware units. Note that our hardware has a 2:1:1 ratio of VD:VE:SFC, and as far as we know that pattern should carry forward to future platforms, so we'll define it as #VCS/2. Cc: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220311062835.163744-1-matthew.d.roper@intel.com
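Expressed as code, the definition described above is simply (sketch):

    /* one SFC for every two video decode engines */
    #define I915_MAX_SFC    (I915_MAX_VCS / 2)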
-
Mastan Katragadda authored
A missing bounds check in vm_access() can lead to an out-of-bounds read or write in the adjacent memory area, since the len attribute is not validated before the memcpy later in the function, potentially hitting:
[ 183.637831] BUG: unable to handle page fault for address: ffffc90000c86000
[ 183.637934] #PF: supervisor read access in kernel mode
[ 183.637997] #PF: error_code(0x0000) - not-present page
[ 183.638059] PGD 100000067 P4D 100000067 PUD 100258067 PMD 106341067 PTE 0
[ 183.638144] Oops: 0000 [#2] PREEMPT SMP NOPTI
[ 183.638201] CPU: 3 PID: 1790 Comm: poc Tainted: G D 5.17.0-rc6-ci-drm-11296+ #1
[ 183.638298] Hardware name: Intel Corporation CoffeeLake Client Platform/CoffeeLake H DDR4 RVP, BIOS CNLSFWR1.R00.X208.B00.1905301319 05/30/2019
[ 183.638430] RIP: 0010:memcpy_erms+0x6/0x10
[ 183.640213] RSP: 0018:ffffc90001763d48 EFLAGS: 00010246
[ 183.641117] RAX: ffff888109c14000 RBX: ffff888111bece40 RCX: 0000000000000ffc
[ 183.642029] RDX: 0000000000001000 RSI: ffffc90000c86000 RDI: ffff888109c14004
[ 183.642946] RBP: 0000000000000ffc R08: 800000000000016b R09: 0000000000000000
[ 183.643848] R10: ffffc90000c85000 R11: 0000000000000048 R12: 0000000000001000
[ 183.644742] R13: ffff888111bed190 R14: ffff888109c14000 R15: 0000000000001000
[ 183.645653] FS: 00007fe5ef807540(0000) GS:ffff88845b380000(0000) knlGS:0000000000000000
[ 183.646570] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 183.647481] CR2: ffffc90000c86000 CR3: 000000010ff02006 CR4: 00000000003706e0
[ 183.648384] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 183.649271] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 183.650142] Call Trace:
[ 183.650988] <TASK>
[ 183.651793] vm_access+0x1f0/0x2a0 [i915]
[ 183.652726] __access_remote_vm+0x224/0x380
[ 183.653561] mem_rw.isra.0+0xf9/0x190
[ 183.654402] vfs_read+0x9d/0x1b0
[ 183.655238] ksys_read+0x63/0xe0
[ 183.656065] do_syscall_64+0x38/0xc0
[ 183.656882] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 183.657663] RIP: 0033:0x7fe5ef725142
[ 183.659351] RSP: 002b:00007ffe1e81c7e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 183.660227] RAX: ffffffffffffffda RBX: 0000557055dfb780 RCX: 00007fe5ef725142
[ 183.661104] RDX: 0000000000001000 RSI: 00007ffe1e81d880 RDI: 0000000000000005
[ 183.661972] RBP: 00007ffe1e81e890 R08: 0000000000000030 R09: 0000000000000046
[ 183.662832] R10: 0000557055dfc2e0 R11: 0000000000000246 R12: 0000557055dfb1c0
[ 183.663691] R13: 00007ffe1e81e980 R14: 0000000000000000 R15: 0000000000000000
Changes since v1: - Updated if condition with range_overflows_t [Chris Wilson] Fixes: 9f909e21 ("drm/i915: Implement vm_ops->access for gdb access into mmaps") Signed-off-by: Mastan Katragadda <mastanx.katragadda@intel.com> Suggested-by: Adam Zabrocki <adamza@microsoft.com> Reported-by: Jackson Cody <cody.jackson@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: <stable@vger.kernel.org> # v5.8+ Reviewed-by: Matthew Auld <matthew.auld@intel.com> [mauld: tidy up the commit message and add Cc: stable] Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220303060428.1668844-1-mastanx.katragadda@intel.com
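The fix amounts to validating the requested range against the backing object size before the copy, e.g. (a sketch using i915's range_overflows_t() helper mentioned above; exact variable names may differ from the real patch):

    /* Reject accesses that would run past the end of the backing object. */
    if (range_overflows_t(u64, addr - area->vm_start, len, obj->base.size))
            return -EINVAL;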
-
- 08 Mar, 2022 3 commits
-
-
Matt Roper authored
Platforms with FlatCCS do not use auxiliary planes for compression control data and thus do not need traditional aux table invalidation (and the registers no longer even exist). Original-author: CQ Tang Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220301052952.1706597-1-matthew.d.roper@intel.com
-
Matthew Auld authored
It looks like this code was accidentally dropped at some point (in a slightly different form), so add it back. The gist is that if we know the allocation will be one single chunk, then we can just annotate the BO with I915_BO_ALLOC_CONTIGUOUS, even if the user doesn't bother. In the future this should allow us to avoid using vmap for such objects, in some upcoming patches. v2(Thomas): - Tweak the commit message to mention the future motivation Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220202173154.3758970-1-matthew.auld@intel.com
-
Matthew Auld authored
Currently this will enforce both 2M alignment and padding for any LMEM pages inserted into the GGTT. However, this was only meant to be applied to the compact-pt layout with the ppGTT. For the GGTT we can reduce the alignment and padding to 64K. Bspec: 45015 Fixes: 87bd701e ("drm/i915: enforce min GTT alignment for discrete cards") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Robert Beckett <bob.beckett@collabora.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220303100229.839282-1-matthew.auld@intel.com
-
- 07 Mar, 2022 6 commits
-
-
Matthew Auld authored
This is no longer possible since e6e1a304 ("drm/i915: vma is always backed by an object."). Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304174252.1000238-1-matthew.auld@intel.com
-
Matthew Auld authored
If the vm doesn't request async binding, like for example with the dpt, then we should be able to skip the async path and avoid calling i915_vm_lock_objects() altogether. Currently if we have a moving fence set for the BO (even though it might have signalled), we still take the async path regardless of the bind_async setting, and then later still end up just doing i915_gem_object_wait_moving_fence() anyway. Alternatively we would need to add a dummy scratch object which can be locked, just for the dpt. Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304095934.925036-2-matthew.auld@intel.com
-
Matthew Auld authored
Since we are actually mapping the object and not the vma, when dealing with LMEM, we should be careful and use the backing store size here, since the vma->node.size could have all kinds of funny padding constraints, which could result in us writing to an OOB address. v2(Chris): - Prefer vma->size here, which should be the backing store size. Some more rework is needed here to stop using node.size in some other places. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304095934.925036-1-matthew.auld@intel.com
-
Thomas Hellström authored
The test for vma should always return true, and when assigning -EBUSY to ret, the variable should already have that value. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304082641.308069-4-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Now that i915_vma_parked() is taking the object lock on vma destruction, and the only user of the vma refcount, i915_gem_object_unbind() also takes the object lock, remove the vma refcount. v3: Documentation update. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304082641.308069-3-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
vms are not getting properly closed. Rather than fixing that, remove the vm open count and instead rely on the vm refcount. The vm open count existed solely to break the strong references the vmas had on the vms. Now instead make those references weak and ensure vmas are destroyed when the vm is destroyed. Unfortunately if the vm destructor and the object destructor both want to destroy a vma, that may lead to a race in that the vm destructor just unbinds the vma and leaves the actual vma destruction to the object destructor. However in order for the object destructor to ensure the vma is unbound it needs to grab the vm mutex. In order to keep the vm mutex alive until the object destructor is done with it, somewhat hackishly grab a vm_resv refcount that is released late in the vma destruction process, when the vm mutex is no longer needed. v2: Address review-comments from Niranjana - Clarify that the struct i915_address_space::skip_pte_rewrite is a hack and should ideally be replaced in an upcoming patch. - Remove an unneeded continue in clear_vm_list and update comment. v3: - Documentation update - Commit message formatting Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220304082641.308069-2-thomas.hellstrom@linux.intel.com
-
- 06 Mar, 2022 2 commits
-
-
Gwan-gyeong Mun authored
The current implementation of i915 prime mmap only works when drm_i915_gem_object is initialized with shmem_region. When using LMEM, drm_i915_gem_object is initialized with ttm_system_region. In order to make prime mmap work even in this case, i.e. when using LMEM (when using ttm in i915), the dma_buf_ops.mmap callback calls drm_gem_prime_mmap(). drm_gem_prime_mmap() from the drm core internally calls i915_gem_mmap() so that prime mmap can perform normally. The fake offset is processed inside drm_gem_prime_mmap(). Testcase: igt/prime_mmap Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220225131316.1433515-3-gwan-gyeong.mun@intel.com
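A sketch of the resulting callback, assuming the usual convention that dma_buf->priv holds the exporting drm_gem_object (simplified; the real patch may perform extra checks):

    static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf,
                                    struct vm_area_struct *vma)
    {
            struct drm_gem_object *obj = dma_buf->priv;

            /* drm_gem_prime_mmap() handles the fake offset and ends up in i915_gem_mmap(). */
            return drm_gem_prime_mmap(obj, vma);
    }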
-
Gwan-gyeong Mun authored
The dma_buf_ops.unmap_dma_buf callback used in i915, i915_gem_unmap_dma_buf(), has the same code as drm_gem_unmap_dma_buf(). In order to eliminate defining and using a duplicate function, update the dma_buf_ops.unmap_dma_buf callback to use drm_gem_unmap_dma_buf(). Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220225131316.1433515-2-gwan-gyeong.mun@intel.com
-
- 04 Mar, 2022 2 commits
-
-
Stuart Summers authored
If RCS is not enumerated, GuC will return invalid parameters. Make sure we do not report RCS as supported when we have not enumerated it. Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Signed-off-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220303223435.2793124-2-matthew.d.roper@intel.com
-
Matt Roper authored
In the past we've always assumed that an RCS engine is present on every platform. However now that we have compute engines there may be platforms that have CCS engines but no RCS, or platforms that are designed to have both, but have the RCS engine fused off. Various engine-centric initialization that only needs to be done a single time for the group of RCS+CCS engines can't rely on being set up with the RCS now; instead we add an I915_ENGINE_FIRST_RENDER_COMPUTE flag that will be assigned to a single engine in the group; whichever engine has this flag will be responsible for some of the general setup (RCU_MODE programming, initialization of certain workarounds, etc.). Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220303223435.2793124-1-matthew.d.roper@intel.com
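Consumers of the new flag would then gate the one-time setup on it rather than on the engine being the RCS, roughly as follows (setup_rcs_ccs_common() is a hypothetical helper name for illustration):

    if (engine->flags & I915_ENGINE_FIRST_RENDER_COMPUTE)
            setup_rcs_ccs_common(engine);   /* hypothetical one-time RCS/CCS group setup */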
-
- 03 Mar, 2022 4 commits
-
-
John Harrison authored
Some G2H handlers were reading the context id field from the payload before checking the payload met the minimum length required. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220302003357.4188363-9-John.C.Harrison@Intel.com
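The pattern being fixed is simply to validate the message length before dereferencing the payload, roughly (a sketch; the exact error handling in the real handlers may differ):

    u32 ctx_id;

    if (unlikely(len < 1)) {
            drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u\n", len);
            return -EPROTO;
    }
    ctx_id = msg[0];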
-
John Harrison authored
The CTB registration process changed significantly a while back using a single KLV based H2G. So drop the original and now obsolete H2G definitions. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220302003357.4188363-8-John.C.Harrison@Intel.com
-
John Harrison authored
The LRC descriptor pool is going away. So, stop naming context ids as descriptor pool indices. While at it, add a bunch of missing line feeds to some error messages. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220302003357.4188363-7-John.C.Harrison@Intel.com
-
John Harrison authored
The LRC descriptor was being initialised early on in the context registration sequence. It could then be determined that the actual registration needs to be delayed and the descriptor would be wiped out. This is inefficient, so move the setup to later in the process after the point of no return. v2: Move some split changes into the split patch (and do them correctly). Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220302003357.4188363-6-John.C.Harrison@Intel.com
-