- 26 Apr, 2019 9 commits
-
-
Chris Wilson authored
Despite what I think the PRM recommends, commit f2253bd9 ("drm/i915/ringbuffer: EMIT_INVALIDATE after switch context") turned out to be a huge mistake when enabling Ironlake contexts, as the GPU would hang on either an MI_FLUSH or PIPE_CONTROL immediately following the MI_SET_CONTEXT of an active mesa context (more vanilla contexts, e.g. simple rendercopies with igt, do not suffer). Ville found the following clue: "[DevCTG+]: For the invalidate operation of the pipe control, the following pointers are affected. The invalidate operation affects the restore of these packets. If the pipe control invalidate operation is completed before the context save, the indirect pointers will not be restored from memory. 1. Pipeline State Pointer 2. Media State Pointer 3. Constant Buffer Packet" which suggests that by emitting the INVALIDATE prior to the MI_SET_CONTEXT, we prevent the context restore from chasing the dangling pointers within the image, and explains why this likely prevents the GPU hang. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419111749.3910-1-chris@chris-wilson.co.uk
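A minimal sketch of the corrected emission order, assuming the engine->emit_flush() and mi_set_context() helpers of the ringbuffer submission path (the surrounding details are illustrative, not the exact driver code):

    static int switch_context(struct i915_request *rq, u32 hw_flags)
    {
            int err;

            /* Complete the invalidate before the context save/restore so
             * the restore cannot chase stale indirect state pointers. */
            err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
            if (err)
                    return err;

            /* Only now emit MI_SET_CONTEXT to switch to the new context. */
            return mi_set_context(rq, hw_flags);
    }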
-
Chris Wilson authored
sandybridge_pcode is another sideband, so move it to its new home. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-8-chris@chris-wilson.co.uk
-
Chris Wilson authored
These routines are identical except in the nature of the value parameter. For writes it is a pure in-param, but for a read, we need an out-param. Since they differ in a single line, merge the two routines into one. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-7-chris@chris-wilson.co.uk
-
Chris Wilson authored
Since intel_sideband_read and intel_sideband_write differ by only a couple of lines (depending on whether we feed the value in or out), merge the two into a single common accessor. v2: Restore vlv_flisdsi_read() lost during rebasing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-6-chris@chris-wilson.co.uk
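The shape of the merged accessor, as a simplified sketch: aside from the in/out handling of @val, everything here is a hypothetical stand-in (opcode_is_read(), sideband_wait_for_idle() and the SB_* registers are not the driver's real API):

    static int sideband_rw(struct intel_sideband *sb, u32 opcode,
                           u32 addr, u32 *val)
    {
            const bool is_read = opcode_is_read(opcode);    /* hypothetical */
            int err;

            err = sideband_wait_for_idle(sb);               /* hypothetical */
            if (err)
                    return err;

            sideband_write_reg(sb, SB_ADDR, addr);
            if (!is_read)
                    sideband_write_reg(sb, SB_DATA, *val);  /* write: value in */
            sideband_write_reg(sb, SB_DOORBELL, opcode);    /* kick it off */

            err = sideband_wait_for_idle(sb);
            if (err)
                    return err;

            if (is_read)
                    *val = sideband_read_reg(sb, SB_DATA);  /* read: value out */
            return 0;
    }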
-
Chris Wilson authored
Split the sideband declarations out of the ginormous i915_drv.h. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
We now have two locks for sideband access: sb_lock, the general one covering sideband access across all generations, and a specific one covering sideband access via the punit on vlv/chv. After lifting the sb_lock around the punit into the callers, the pcu_lock is now redundant and can be separated from its other use to regulate RPS (essentially giving RPS a lock all of its own). v2: Extract a couple of minor bug fixes. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Lift the sideband acquisition for vlv_punit_read and vlv_punit_write into their callers, so that we can lock the sideband once for a sequence of operations, rather than perform the heavyweight acquisition on each request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-3-chris@chris-wilson.co.uk
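In terms of the calling convention, the change looks roughly like this (the punit registers are real vlv defines; the illustrated sequence itself is made up):

    /* Before: each accessor took and dropped the sideband lock itself. */
    val = vlv_punit_read(i915, PUNIT_REG_GPU_FREQ_STS);

    /* After: one heavyweight acquisition brackets the whole sequence. */
    vlv_punit_get(i915);
    val = vlv_punit_read(i915, PUNIT_REG_GPU_FREQ_STS);   /* current state */
    vlv_punit_write(i915, PUNIT_REG_GPU_FREQ_REQ, freq);  /* request change */
    vlv_punit_put(i915);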
-
Chris Wilson authored
As we now employ a very heavy pm_qos around punit access, we want to minimise the number of synchronous requests by performing a single acquisition for the whole punit sequence rather than one around each individual access. The sideband lock is used for this, so push the pm_qos into the sideband lock acquisition and release, moving it from the low-level punit read/write routine to the callers. In the first step, we move the punit magic into the common sideband lock so that we can acquire a bunch of ports simultaneously, and if need be extend the workaround protection later. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
While we talk to the punit over its sideband, we need to prevent the cpu from sleeping in order to prevent a potential machine hang. Note that by itself, it appears that pm_qos_update_request (via intel_idle) doesn't provide a sufficient barrier to ensure that all cores are indeed awake (out of C-state) and that the package is awake. To do so, we need to supplement the pm_qos with a manual ping via on_each_cpu(). v2: Restrict the heavy-weight wakeup to just the ISOF_PORT_PUNIT; there is insufficient evidence to implicate a wider problem at the moment. Similarly, restrict the w/a to Valleyview, as Cherryview doesn't have an angry cadre of users. The working theory, courtesy of Ville and Hans, is that the issue lies within the power delivery and so is likely to be unit and board specific and occurs when both the unit/fw require extra power at the same time as the cpu package is changing its own power state. References: https://bugzilla.kernel.org/show_bug.cgi?id=109051 References: https://bugs.freedesktop.org/show_bug.cgi?id=102657 References: https://bugzilla.kernel.org/show_bug.cgi?id=195255 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-1-chris@chris-wilson.co.uk
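A sketch of the workaround's shape, using the kernel's pm_qos and smp APIs; where exactly the begin/end hooks live (in this series, inside the sideband lock for the punit port on vlv) is simplified away:

    #include <linux/pm_qos.h>
    #include <linux/smp.h>

    /* Registered at init with pm_qos_add_request(&sb_qos,
     * PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE). */
    static struct pm_qos_request sb_qos;

    static void ping(void *info)
    {
            /* The cross-call itself is the point: it drags every core out
             * of its sleep state before we talk to the punit. */
    }

    static void punit_access_begin(void)
    {
            /* Forbid deep C-states while the transaction is in flight... */
            pm_qos_update_request(&sb_qos, 0);

            /* ...and manually ping each cpu, as the qos request alone is
             * not a sufficient barrier to get all cores out of C-states. */
            on_each_cpu(ping, NULL, 1);
    }

    static void punit_access_end(void)
    {
            pm_qos_update_request(&sb_qos, PM_QOS_DEFAULT_VALUE);
    }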
-
- 25 Apr, 2019 2 commits
-
-
Chris Wilson authored
It was noted that we made the same mistake for VM_ID as for object handles, whereby we ensured that we only allocated a single handle for one ppgtt. This has the unfortunate consequence for userspace that they need to reference count the handles to avoid destroying an active ID. If we allow multiple handles to the same ppgtt, userspace can freely unreference any handle they own without fear of destroying the same handle in use elsewhere. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425054333.27299-1-chris@chris-wilson.co.uk
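The essence of the change, sketched with names following the driver's conventions (the details are illustrative; idr_alloc() is the real kernel API): every create call mints a fresh handle carrying its own reference, so closing one handle can never destroy an ID still in use elsewhere.

    static int vm_create_handle(struct drm_i915_file_private *fpriv,
                                struct i915_hw_ppgtt *ppgtt, u32 *handle)
    {
            int id;

            /* One reference per handle; userspace need not refcount. */
            id = idr_alloc(&fpriv->vm_idr, i915_ppgtt_get(ppgtt),
                           0, 0, GFP_KERNEL);
            if (id < 0) {
                    i915_ppgtt_put(ppgtt);
                    return id;
            }

            *handle = id;
            return 0;
    }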
-
Chris Wilson authored
In order to separate the reservation phase of building a request from its emission phase, we need to pull some of the request alloc activities from deep inside i915_request to the surface (GEM_EXECBUFFER). v2: Be frivolous, use a local drm_i915_private. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425050143.811-1-chris@chris-wilson.co.uk
-
- 24 Apr, 2019 8 commits
-
-
Chris Wilson authored
In the current scheme, on submitting a request we take a single global GEM wakeref, which trickles down to wake up all GT power domains. This is undesirable as we would like to be able to localise our power management to the available power domains and to remove the global GEM operations from the heart of the driver. (The intent there is to push global GEM decisions to the boundary as used by the GEM user interface.) Now during request construction, each request is responsible via its logical context to acquire a wakeref on each power domain it intends to utilize. Currently, each request takes a wakeref on the engine(s) and the engines themselves take a chipset wakeref. This gives us a transition on each engine which we can extend if we want to insert more power management control (such as soft rc6). The global GEM operations that currently require a struct_mutex are reduced to listening to pm events from the chipset GT wakeref. As we reduce the struct_mutex requirement, these listeners should evaporate. Perhaps the biggest immediate change is that this removes the struct_mutex requirement around GT power management, allowing us greater flexibility in request construction. Another important knock-on effect is that by tracking engine usage, we can insert a switch back to the kernel context on that engine immediately, avoiding any extra delay or inserting global synchronisation barriers. This makes tracking when an engine and its associated contexts are idle much easier -- important for when we forgo our assumed execution ordering and need idle barriers to unpin used contexts. In the process, it means we remove a large chunk of code whose only purpose was to switch back to the kernel context. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Imre Deak <imre.deak@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
Start acquiring the logical intel_context and using that as our primary means for request allocation. This is the initial step to allow us to avoid requiring struct_mutex for request allocation alongside the perma-pinned kernel context, but it also provides a foundation for breaking up the complex request allocation to handle different scenarios inside execbuf. For the purpose of emitting a request from inside retirement (see the next patch for engine power management), we also need to lift control over the timeline mutex to the caller. v2: Note that the request carries the active reference upon construction. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
We wish to start segregating the power management into different control domains, both with respect to the hardware and the user interface. The first step is that at the lowest level flow of requests, we want to process a context event (and not a global GEM operation). In this patch, we introduce the context callbacks that in future patches will be redirected to per-engine interfaces, leading to global operations as required. The intent is that this will be guarded by the timeline->mutex, except that retiring has not quite finished transitioning over from being guarded by struct_mutex. So at the moment it is protected by struct_mutex with a reminder to switch. v2: Rename default handlers to intel_context_enter_engine. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-3-chris@chris-wilson.co.uk
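Sketched, the callbacks look like this; the enter/exit naming and intel_context_enter_engine come from the patch, while the pm calls in the bodies anticipate the per-engine power management of the later patches in this series:

    struct intel_context_ops {
            int (*pin)(struct intel_context *ce);
            void (*unpin)(struct intel_context *ce);
            void (*enter)(struct intel_context *ce); /* first request in */
            void (*exit)(struct intel_context *ce);  /* last request out */
    };

    /* Default handlers: forward the context event to its engine. */
    void intel_context_enter_engine(struct intel_context *ce)
    {
            intel_engine_pm_get(ce->engine);
    }

    void intel_context_exit_engine(struct intel_context *ce)
    {
            intel_engine_pm_put(ce->engine);
    }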
-
Chris Wilson authored
Split out the power management portion (GT wakeref, suspend/resume) of GEM from i915_gem.c into its own file. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
For controlling runtime pm of the GT and engines, we would like to have a callback to do extra work the first time we wake up and the last time we drop the wakeref. This first/last access needs serialisation and so we encompass a mutex with the regular intel_wakeref_t tracker. v2: Drop the _once naming and report the errors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-1-chris@chris-wilson.co.uk
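A generic sketch of the pattern, simplified from what the patch adds: the fast path is a lock-free counter, and only the 0 <-> 1 transitions take the mutex, so the first-wake and last-sleep callbacks are serialised and run exactly once.

    #include <linux/atomic.h>
    #include <linux/mutex.h>

    struct intel_wakeref {
            atomic_t count;
            struct mutex mutex;
    };

    int intel_wakeref_get(struct intel_wakeref *wf,
                          int (*wake)(struct intel_wakeref *))
    {
            if (atomic_inc_not_zero(&wf->count))
                    return 0;               /* already awake: lock-free */

            mutex_lock(&wf->mutex);
            if (!atomic_read(&wf->count)) {
                    int err = wake(wf);     /* first user: heavy wakeup */
                    if (err) {
                            mutex_unlock(&wf->mutex);
                            return err;     /* v2: report the error */
                    }
            }
            atomic_inc(&wf->count);
            mutex_unlock(&wf->mutex);
            return 0;
    }

    int intel_wakeref_put(struct intel_wakeref *wf,
                          int (*sleep)(struct intel_wakeref *))
    {
            if (!atomic_dec_and_mutex_lock(&wf->count, &wf->mutex))
                    return 0;               /* not the last user */

            sleep(wf);                      /* last user: heavy sleep work */
            mutex_unlock(&wf->mutex);
            return 0;
    }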
-
Chris Wilson authored
Start partitioning off the code that talks to the hardware (GT) from the uapi layers and move the device facing code under gt/ One casualty is s/intel_ringbuffer.h/intel_engine.h/ with the plan to subdivide that header and body further (and split out the submission code from the ringbuffer and logical context handling). This patch aims to be simple motion so git can fixup inflight patches with little mess. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424174839.7141-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The RING_NONPRIV allows us to add registers to a whitelist that allows userspace to modify them. Ideally such registers should be safe and saved within the context such that they do not impact system behaviour for other users. This selftest verifies that the registers we do add are (a) indeed writable by userspace and (b) only affect a single client. Opens: - Is GEN9_SLICE_COMMON_ECO_CHICKEN1 really write-only? v2: Remove the blatant copy-paste. v3: Emulate userspace register writes via the batch again. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424110941.9869-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
As we push for better compartmentalisation, it is more convenient to copy the default sseu configuration from the engine into the derived logical context than it is to dig it out from i915->runtime_info. v2: Use intel_sseu_from_device_info() to describe the converter Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424095134.30249-1-chris@chris-wilson.co.uk
-
- 23 Apr, 2019 3 commits
-
-
Radhakrishna Sripada authored
Fixes the clock-gating issue when pipe scaling is enabled. (Lineage #2006604312) V2: Fix typo in headline (Chris) Handle the non double-buffered nature of the register (Ville) V3: Fix checkpatch warning. BAT failure for V2 on gen3 looks unrelated. V4: Split the icl and skl WAs (Ville) V5: Split the checks for icl and skl (Ville) V6: Correct the flipped checks in intel_pre_plane_update (Ville) V7: Use enum for pipe and extend the WA for plane scalers (Ville) V8: Eliminate the redundant use of pch_pfit (Ville) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Clint Taylor <clinton.a.taylor@intel.com> Cc: Aditya Swarup <aditya.swarup@intel.com> Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417185901.14833-1-radhakrishna.sripada@intel.com
-
Ville Syrjälä authored
The spec has changed since skl_max_plane_width() was written. Now the SKL limits are lower than what they were initially, and GLK and ICL have different limits. Update the code to match the spec. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190418195907.23912-1-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
Imre Deak authored
Fix the order of lane, port parameters passed to the register macro. Note that this was already partly fixed by commit 37fc7845 ("drm/i915: Call MG_DP_MODE() macro with the right parameters order") While at it simplify things by using the macro directly instead of an unnecessary redirection via an array. v2: - Add a note the commit message about simplifying things. (José) Fixes: 58106b7d ("drm/i915: Make MG PHY macros semantically consistent") Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Aditya Swarup <aditya.swarup@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419071026.32370-1-imre.deak@intel.com
-
- 20 Apr, 2019 6 commits
-
-
Chris Wilson authored
When we are called to relieve mempressure via the shrinker, the only way we can make progress is either by discarding unwanted pages (those objects that userspace has marked MADV_DONTNEED) or by reclaiming the dirty objects via swap. As we know that is the only way to make further progress, we can initiate the writeback as we invalidate the objects. This means the objects we put onto the inactive anon lru list are already marked for reclaim+writeback and so will trigger a wait upon the writeback inside direct reclaim, greatly improving the success rate of direct reclaim on i915 objects. The corollary is that we may start a slow swap on opportunistic mempressure from the likes of the compaction + migration kthreads. This is limited by those threads only being allowed to shrink idle pages, and by the fact that if we reactivate the page (by gpu activity) before it is swapped out, we only pay the cost of repinning the page. The cost is most felt when an object is reused after mempressure, which hopefully excludes the latency sensitive tasks (as we are just extending the impact of swap thrashing to them). Apparently this is not the first time we've had this idea. Back in commit 5537252b ("drm/i915: Invalidate our pages under memory pressure") we wanted to start writeback but settled on invalidate after Hugh Dickins warned us about a possibility of a deadlock within shmemfs if we started writeback from shrink_slab. Looking at the callchain, using writeback from i915_gem_shrink should be equivalent to the pageout also employed by shrink_slab, i.e. it should not be any riskier afaict. v2: Leave mmappings intact. At this point, the only mmappings of our objects will be via CPU mmaps on the shmemfs filp, which are out-of-scope for our LRU tracking. Instead leave those pages to the inactive anon LRU page list for aging and pageout as normal. v3: Be selective on which paths trigger writeback, in particular excluding paths shrinking just to reclaim vm space (e.g. mmap, vmap reapers) and avoid starting writeback on the entire process space from within the pm freezer. References: https://bugs.freedesktop.org/show_bug.cgi?id=108686 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Michal Hocko <mhocko@suse.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20190420115539.29081-1-chris@chris-wilson.co.uk
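A sketch close in spirit to the writeback helper this patch introduces (locking corner cases glossed over): walk the shmemfs mapping and, for each unmapped dirty page, kick ->writepage() so that direct reclaim later finds the page already under writeback.

    #include <linux/pagemap.h>
    #include <linux/swap.h>
    #include <linux/writeback.h>

    static void shmem_writeback(struct address_space *mapping, pgoff_t nr)
    {
            struct writeback_control wbc = {
                    .sync_mode = WB_SYNC_NONE,      /* opportunistic */
                    .nr_to_write = SWAP_CLUSTER_MAX,
                    .range_start = 0,
                    .range_end = LLONG_MAX,
            };
            pgoff_t i;

            for (i = 0; i < nr; i++) {
                    struct page *page = find_lock_page(mapping, i);
                    if (!page)
                            continue;

                    if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
                            int ret;

                            SetPageReclaim(page);
                            ret = mapping->a_ops->writepage(page, &wbc);
                            if (!PageWriteback(page))
                                    ClearPageReclaim(page);
                            if (!ret)
                                    goto put; /* writepage unlocked the page */
                    }
                    unlock_page(page);
    put:
                    put_page(page);
            }
    }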
-
Fernando Pacheco authored
GPU reset is now available with GuC enabled, so re-enable our check that this reset is usable from atomic context. Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-6-fernando.pacheco@intel.com
-
Fernando Pacheco authored
We have now prepared the guc reset paths to avoid taking struct_mutex, or any other lock, and so it is now safe to re-enable. References: fe62365f ("drm/i915/guc: Disable global reset") Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-5-fernando.pacheco@intel.com
-
Fernando Pacheco authored
Currently we pin the GuC or HuC firmware image just before uploading. Perma-pin during uC initialization instead, using the range reserved at the top of the address space. Moving the firmware meant we also needed an additional pinning for the rsa signature, which is used during HuC auth, as addresses above GUC_GGTT_TOP do not map through GTT. v2: Remove call to set to gtt domain Do not restore fw gtt mapping unconditionally Separate out pin/unpin functions and drop usage of pin/unpin Use uc_fw init/fini functions to bind/unbind fw object v3: Bind is only needed during xfer (Chris) Remove attempts to bind outside of xfer (Chris) Mark fw bind/unbind static Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-4-fernando.pacheco@intel.com
-
Fernando Pacheco authored
GuC and HuC depend on struct_mutex for device reinitialization. Moving away from this dependency requires perma-pinning the firmware images in GGTT. The upper portion of the GuC address space has a sizeable hole (several MB) that is inaccessible by GuC. Reserve this range within GGTT as it can comfortably hold GuC/HuC firmware images. v2: Reserve node rather than insert (Chris) Simpler determination of node start/size (Daniele) Move reserve/release out to intel_guc.* files v3: Reserve starting at GUC_GGTT_TOP only and bail if this fails (Chris) Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-3-fernando.pacheco@intel.com
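A sketch of the v3 approach (names follow i915/GuC conventions; the error handling is illustrative): reserve exactly the GuC-inaccessible range above GUC_GGTT_TOP via drm_mm_reserve_node(), and bail if that precise placement is unavailable.

    static int ggtt_reserve_guc_top(struct i915_ggtt *ggtt)
    {
            if (GEM_WARN_ON(ggtt->vm.total <= GUC_GGTT_TOP))
                    return -ENOSPC;

            ggtt->uc_fw.start = GUC_GGTT_TOP;
            ggtt->uc_fw.size = ggtt->vm.total - GUC_GGTT_TOP;

            /* Reserve, not insert: we want this exact range or nothing,
             * since only addresses above GUC_GGTT_TOP are safe here. */
            return drm_mm_reserve_node(&ggtt->vm.mm, &ggtt->uc_fw);
    }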
-
Fernando Pacheco authored
The uC firmware init function is called during GuC/HuC init early phases. Rename it to include "_early", properly reflecting which phase we are at. The uC firmware fini function cleans up the state set/created on firmware fetch. Replace "_fini" with "_cleanup_fetch". v2: also rename uC fw fini function Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-2-fernando.pacheco@intel.com
-
- 19 Apr, 2019 6 commits
-
-
Chris Wilson authored
If we know that the user cannot access the GGTT, by virtue of having a segregated memory area, we can skip clearing the unused entries as they cannot be accessed. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419201207.5477-1-chris@chris-wilson.co.uk
-
Christian König authored
This is to work around problems with libva and vainfo. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190417112525.16848-1-christian.koenig@amd.com
-
Chris Wilson authored
An interesting discussion regarding "hybrid interrupt polling" for NVMe came to the conclusion that the ideal busyspin before sleeping was half of the expected request latency (and better if it was already halfway through that request). This suggested that we too should look again at our tradeoff between spinning and waiting. Currently, our spin simply tries to hide the cost of enabling the interrupt, which is good to avoid penalising nop requests (i.e. test throughput) and not much else. Studying real world workloads suggests that a spin of up to 500us can dramatically boost performance, but the suggestion is that this is not from avoiding interrupt latency per se, but from secondary effects of sleeping such as allowing the CPU to reduce its C-state and to context switch away. In a truly hybrid interrupt polling scheme, we would aim to sleep until just before the request completed and then wake up in advance of the interrupt and do a quick poll to handle completion. This is tricky for us at the moment as we are not recording request times, and since we allow preemption, our requests are not on as nicely ordered a timeline as IO. However, the idea is interesting, for it will certainly help us decide when busyspinning is worthwhile. v2: Expose the spin setting via Kconfig options for easier adjustment and testing. v3: Don't get caught sneaking in a change to the busyspin parameters. v4: Explain more about the "hybrid interrupt polling" scheme that we want to migrate towards. Suggested-by: Sagar Kamble <sagar.a.kamble@intel.com> References: http://events.linuxfoundation.org/sites/events/files/slides/lemoal-nvme-polling-vault-2017-final_0.pdf Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Sagar Kamble <sagar.a.kamble@intel.com> Cc: Eero Tamminen <eero.t.tamminen@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Ben Widawsky <ben@bwidawsk.net> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Sagar Kamble <sagar.a.kamble@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419182625.11186-1-chris@chris-wilson.co.uk
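An illustrative sketch of the spin half of the scheme: poll for a bounded budget (ideally about half the expected request latency, per the referenced discussion), then let the caller fall back to an interrupt-driven sleep. In the patch the budget comes from the new Kconfig options rather than a parameter.

    static bool spin_for_completion(const struct i915_request *rq, u64 spin_ns)
    {
            const u64 timeout = local_clock() + spin_ns;

            do {
                    if (i915_request_completed(rq))
                            return true;    /* done, no interrupt taken */

                    cpu_relax();            /* be kind to our SMT sibling */
            } while (local_clock() < timeout);

            return false;                   /* caller arms the irq and sleeps */
    }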
-
Chris Wilson authored
For consistency (and elegance!), add intel_device_info.has_rps. The immediate boon is that RPS support is now emitted along the other capabilities in the debug log and after errors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419134836.5626-1-chris@chris-wilson.co.uk
-
Janusz Krzysztofik authored
The driver does not currently support unbinding from a device which is in use. Since open file descriptors may still be pointing into kernel memory where the device structures used to be, entirely correct kernel panics protect the driver from being unbound, as we should not be unbinding it before those dangling pointers have been made safe. According to the documentation found inside drivers/gpu/drm/drm_drv.c, drm_dev_unplug() should be used instead of drm_dev_unregister() in order to make a device inaccessible to users as soon as it is unplugged. Follow that advice to make those possibly dangling pointers safe, protected by the DRM layer from a user who is otherwise left pointing into possibly reused kernel memory after the driver has been unbound from the device. Once done, also cancel inflight operations immediately by calling i915_gem_set_wedged(). Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190405130235.7707-2-janusz.krzysztofik@linux.intel.com
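A sketch of the unbind path being described; drm_dev_unplug() and i915_gem_set_wedged() are the real entry points, while the wrapper itself is illustrative:

    static void i915_driver_unbind(struct drm_i915_private *i915)
    {
            /* Mark the device gone first: ioctls racing against unbind now
             * fail with -ENODEV instead of chasing freed device memory. */
            drm_dev_unplug(&i915->drm);

            /* Then cancel inflight operations immediately. */
            i915_gem_set_wedged(i915);
    }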
-
Chris Wilson authored
We always start off at an "efficient frequency" and can let the system autotune from there, eliminating the need to clamp the available range. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190418205358.11450-1-chris@chris-wilson.co.uk
-
- 18 Apr, 2019 2 commits
-
-
Chris Wilson authored
We need to set the various ring registers prior to restarting the engine, or else we may restart it after reset/resume in an ill-defined state. Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190418132720.3716-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
We store the engine->imr mask and set up the RING_IMR register on restarting the engine. We do not then want to overwrite it with an incomplete mask later as we may then lose interrupts! Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190418132720.3716-1-chris@chris-wilson.co.uk
-
- 17 Apr, 2019 4 commits
-
-
Chris Wilson authored
We have to avoid chasing after a userspace race! <3>[ 473.114328] BUG: KASAN: use-after-free in i915_gem_create+0x1d2/0x1f0 [i915] <3>[ 473.114389] Read of size 8 at addr ffff88815bf1d840 by task gem_flink_race/1541 <4>[ 473.114464] CPU: 1 PID: 1541 Comm: gem_flink_race Tainted: G U 5.1.0-rc4-g7d07e025e786-kasan_88+ #1 <4>[ 473.114469] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J4205-ITX, BIOS P1.10 09/29/2016 <4>[ 473.114474] Call Trace: <4>[ 473.114488] dump_stack+0x7c/0xbb <4>[ 473.114612] ? i915_gem_create+0x1d2/0x1f0 [i915] <4>[ 473.114621] print_address_description+0x65/0x270 <4>[ 473.114728] ? i915_gem_create+0x1d2/0x1f0 [i915] <4>[ 473.114839] ? i915_gem_create+0x1d2/0x1f0 [i915] <4>[ 473.114848] kasan_report+0x149/0x18d <4>[ 473.114962] ? i915_gem_create+0x1d2/0x1f0 [i915] <4>[ 473.115069] i915_gem_create+0x1d2/0x1f0 [i915] <4>[ 473.115176] ? i915_gem_object_create.part.28+0x4b0/0x4b0 [i915] <4>[ 473.115289] ? i915_gem_dumb_create+0x1a0/0x1a0 [i915] <4>[ 473.115297] drm_ioctl_kernel+0x192/0x260 <4>[ 473.115306] ? drm_ioctl_permit+0x280/0x280 <4>[ 473.115326] drm_ioctl+0x67c/0x960 <4>[ 473.115438] ? i915_gem_dumb_create+0x1a0/0x1a0 [i915] <4>[ 473.115448] ? drm_getstats+0x20/0x20 <4>[ 473.115459] ? __lock_acquire+0xa66/0x3fe0 <4>[ 473.115474] ? _raw_spin_unlock_irqrestore+0x39/0x60 <4>[ 473.115485] ? debug_object_active_state+0x2ea/0x4e0 <4>[ 473.115496] ? debug_show_all_locks+0x2d0/0x2d0 <4>[ 473.115513] do_vfs_ioctl+0x18d/0xfa0 <4>[ 473.115522] ? check_flags.part.27+0x440/0x440 <4>[ 473.115532] ? ioctl_preallocate+0x1a0/0x1a0 <4>[ 473.115547] ? __fget+0x2ac/0x410 <4>[ 473.115561] ? __ia32_sys_dup3+0xb0/0xb0 <4>[ 473.115569] ? rwlock_bug.part.0+0x90/0x90 <4>[ 473.115590] ksys_ioctl+0x35/0x70 <4>[ 473.115597] ? lockdep_hardirqs_off+0x1cb/0x2b0 <4>[ 473.115608] __x64_sys_ioctl+0x6a/0xb0 <4>[ 473.115614] ? lockdep_hardirqs_on+0x342/0x590 <4>[ 473.115623] do_syscall_64+0x97/0x400 <4>[ 473.115633] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 473.115641] RIP: 0033:0x7fce590d55d7 <4>[ 473.115649] Code: b3 66 90 48 8b 05 b1 48 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48 <4>[ 473.115655] RSP: 002b:00007fce4d525ba8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 <4>[ 473.115662] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fce590d55d7 <4>[ 473.115667] RDX: 00007fce4d525c10 RSI: 00000000c010645b RDI: 0000000000000007 <4>[ 473.115672] RBP: 00007fce4d525c10 R08: 00007fce4d526700 R09: 00007fce4d526700 <4>[ 473.115677] R10: 0000000000000054 R11: 0000000000000246 R12: 00000000c010645b <4>[ 473.115682] R13: 0000000000000007 R14: 0000000000000000 R15: 00007ffe0e4a7450 <3>[ 473.115731] Allocated by task 1541: <4>[ 473.115766] kmem_cache_alloc+0xce/0x290 <4>[ 473.115895] i915_gem_object_create.part.28+0x1c/0x4b0 [i915] <4>[ 473.116000] i915_gem_create+0xe3/0x1f0 [i915] <4>[ 473.116008] drm_ioctl_kernel+0x192/0x260 <4>[ 473.116013] drm_ioctl+0x67c/0x960 <4>[ 473.116020] do_vfs_ioctl+0x18d/0xfa0 <4>[ 473.116026] ksys_ioctl+0x35/0x70 <4>[ 473.116032] __x64_sys_ioctl+0x6a/0xb0 <4>[ 473.116038] do_syscall_64+0x97/0x400 <4>[ 473.116044] entry_SYSCALL_64_after_hwframe+0x49/0xbe <3>[ 473.116071] Freed by task 1542: <4>[ 473.116101] kmem_cache_free+0xb7/0x2f0 <4>[ 473.116205] __i915_gem_free_objects+0x7d4/0xe10 [i915] <4>[ 473.116311] i915_gem_create_ioctl+0xaa/0xd0 [i915] <4>[ 473.116318] drm_ioctl_kernel+0x192/0x260 <4>[ 473.116323] drm_ioctl+0x67c/0x960 <4>[ 473.116330] do_vfs_ioctl+0x18d/0xfa0 <4>[ 473.116335] ksys_ioctl+0x35/0x70 <4>[ 473.116341] __x64_sys_ioctl+0x6a/0xb0 <4>[ 473.116347] do_syscall_64+0x97/0x400 <4>[ 473.116354] entry_SYSCALL_64_after_hwframe+0x49/0xbe Testcase: igt/gem_flink_race/flink_close Fixes: e163484a ("drm/i915: Update size upon return from GEM_CREATE") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417132507.27133-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Sometimes the HW doesn't even play fair, and completely forgets about register writes. Skip verifying known troublemakers. References: https://bugs.freedesktop.org/show_bug.cgi?id=108954 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417075657.19456-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Read the engine workarounds back using the GPU after loading the initial context state to verify that we are setting them correctly, and bail if it fails. v2: Break out the verification into its own loop Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417075657.19456-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Immediately after writing the workaround, verify that it stuck in the register. References: https://bugs.freedesktop.org/show_bug.cgi?id=108954 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417075657.19456-2-chris@chris-wilson.co.uk
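A sketch of the immediate read-back check, modelled on the driver's wa-list entries (register, mask, expected value); the exact reporting is illustrative:

    static bool wa_verify(struct intel_uncore *uncore, const struct i915_wa *wa)
    {
            const u32 cur = intel_uncore_read(uncore, wa->reg);

            if ((cur & wa->mask) == (wa->val & wa->mask))
                    return true;            /* the write stuck */

            DRM_ERROR("workaround 0x%x lost: expected %08x, found %08x\n",
                      i915_mmio_reg_offset(wa->reg),
                      wa->val & wa->mask, cur & wa->mask);
            return false;
    }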
-