- 17 Mar, 2017 9 commits
-
-
Chris Wilson authored
To improve our historical record and to simplify userspace that wants to include i915_pciids.h as its canonical breakdown of PCI IDs and their respective generations, include the gen1 ids for i810 and i815. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170313112810.4202-1-chris@chris-wilson.co.uk Acked-by: Jani Nikula <jani.nikula@intel.com>
-
Chris Wilson authored
Do an early read of the execlists' queue before we take the spinlock and start checking. This is safe as the first writer to the execlists queue will cause the tasklet to be run again after a memory barrier. v2: Keep guc in sync with execlists queue changes v3: Explain the mb between the tasklet running on one cpu and the execlist_first update and schedule from a second cpu. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170317120716.17191-1-chris@chris-wilson.co.uk
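As an illustration of the dequeue short-circuit (a minimal sketch based on the commit text; field and lock names are approximate, this is not the patch itself):

    static void execlists_dequeue_sketch(struct intel_engine_cs *engine)
    {
            /* Lockless peek: safe because the first writer to the queue
             * issues a memory barrier and kicks the tasklet again, so a
             * missed update simply reruns us.
             */
            if (!READ_ONCE(engine->execlist_first))
                    return; /* nothing queued, skip the spinlock entirely */

            spin_lock_irq(&engine->timeline->lock);
            /* ... re-check engine->execlist_first and submit under the lock ... */
            spin_unlock_irq(&engine->timeline->lock);
    }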
-
Chris Wilson authored
i915_gem_stolen_list_info() sneakily takes advantage of the obj->obj_exec_link to save itself from having to allocate. Enough of the subterfuge, just allocate an array of pointers and sort them instead of the list. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316132006.7976-7-chris@chris-wilson.co.uk
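A self-contained userspace illustration of the approach (the driver itself would use kmalloc_array() and the kernel's sort(); the object layout here is invented for the example):

    #include <stdio.h>
    #include <stdlib.h>

    struct obj { unsigned long size; };

    /* Compare the objects the pointers refer to, not the pointers themselves. */
    static int cmp_obj(const void *a, const void *b)
    {
            const struct obj *oa = *(struct obj * const *)a;
            const struct obj *ob = *(struct obj * const *)b;

            if (oa->size < ob->size) return -1;
            if (oa->size > ob->size) return 1;
            return 0;
    }

    int main(void)
    {
            struct obj objs[] = { { 65536 }, { 4096 }, { 8192 } };
            struct obj *sorted[3];

            /* Build an array of pointers and sort that instead of a list. */
            for (int i = 0; i < 3; i++)
                    sorted[i] = &objs[i];
            qsort(sorted, 3, sizeof(sorted[0]), cmp_obj);

            for (int i = 0; i < 3; i++)
                    printf("%lu\n", sorted[i]->size);
            return 0;
    }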
-
Chris Wilson authored
Before rc6 is initialised (after driver load or resume), the value inside VLV_COUNTER_CONTROL is undefined, so we cannot assert that it is in HIGH_RANGE mode. Fixes: 6b7f6aa7 ("drm/i915: Use coarse grained residency counter with byt") Testcase: igt/drv_suspend/debugfs-reader Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170317125918.11351-1-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
-
Ander Conselvan de Oliveira authored
Geminilake also supports pooled EUs. Enable it. It is unclear if the recommendation to disable it for 2x6 configurations from commit e015dd69 ("drm/i915/bxt: Add WaEnablePooledEuFor2x6") should also apply to GLK, but it is applied anyway to be on the safe side. That restriction can be lifted later if determined not to impact performance. The extra restriction should not impact user space either. The only user space that uses this feature is Beignet, and it only does so for 3x6 devices. See Beignet's commit 6901899ec90a ("Runtime: set the sub slice according to kernel pooled EU configure."). v2: Improve commit message. (Mika, Roy) Cc: Arun Siluvery <arun.siluvery@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Yang Rong <rong.r.yang@intel.com> Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170317140436.24645-1-ander.conselvan.de.oliveira@intel.com
-
Chris Wilson authored
The only time we need to emit a flush inside request emission is after an execbuffer, for which we can use the full __i915_add_request(). All other instances want the simpler i915_add_request() without flushing, so remove the useless helper. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170317114709.8388-1-chris@chris-wilson.co.uk
-
Tvrtko Ursulin authored
If we avoid initializing forcewake domains when running as a guest, and also use gen2 mmio accessors in that case, we can avoid the timer traffic and any looping through the forcewake code, which currently only ends up in the no-op forcewake implementation. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Weinan Li <weinan.z.li@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Terrence Xu <terrence.xu@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170310095747.12258-1-tvrtko.ursulin@linux.intel.com [tursulin: commit spelling fix]
-
Zhenyu Wang authored
From commit a6508ded ("drm/i915: Use page coloring to provide the guard page at the end of the GTT"), we no longer explicitly subtract the guard page at the end of the GGTT address space during init, so we shouldn't subtract it for the vGPU balloon either, as that would leave the end page available to the vGPU. Change the balloon to cover the full range too. This fixes running a recent drm-intel tip kernel as the guest OS. Found by the GVT-g cmd parser when the guest kernel uses the end page as scratch and then tries to run MI_STORE_REG_MEM onto it. v2: remove old comments Cc: Terrence Xu <terrence.xu@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170310022238.3191-1-zhenyuw@linux.intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
trace_i915_gem_request_out may be used after the request is completed, and so the request may have been retired on another thread, invalidating the rq->ctx. Avoid dereferencing rq->ctx in the tracepoint by switching to the fence context id instead, updating all tracepoints to match. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316204235.27786-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
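A sketch of the kind of tracepoint assignment the message describes (field names are approximate, not the exact i915 tracepoints):

    TP_fast_assign(
            __entry->dev = rq->i915->drm.primary->index;
            /* Record the stable fence context id rather than dereferencing
             * rq->ctx, which may already be gone if the request was retired
             * on another thread.
             */
            __entry->ctx = rq->fence.context;
            __entry->seqno = rq->fence.seqno;
    ),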
-
- 16 Mar, 2017 18 commits
-
-
Chris Wilson authored
This should be impossible, but let's assert that we do not pin a context 4 billion times before retiring! v2: Fix the assertion -- the patch had just one job to do! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316171628.3228-1-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
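The guard being described amounts to something like the following (a sketch; the struct and field names are approximate):

    static inline void context_pin_sketch(struct intel_context *ce)
    {
            /* Catch the pathological case before the counter wraps to zero. */
            GEM_BUG_ON(ce->pin_count == ~0u);
            ce->pin_count++;
    }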
-
Chris Wilson authored
Provide some serialisation between user operations by waiting for the reset initiated by setting i915_wedged to complete. The automatic wait here makes echo 1 > i915_wedged; cat i915_error_state do the right thing, and not risk reporting "No error collected". Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
When we wedge the device, we override engine->submit_request with a nop to ensure that all in-flight requests are marked in error. However, igt would like to unwedge the device to test -EIO handling. This requires us to flush those in-flight requests and restore the original engine->submit_request. v2: Use a vfunc to unify enabling request submission to engines v3: Split new vfunc to a separate patch. v4: Make the wait interruptible -- the third party fences we wait upon may be indefinitely broken, so allow the reset to be aborted. Fixes: 821ed7df ("drm/i915: Update reset path to fix incomplete requests") Testcase: igt/gem_eio Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v3 Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
It turns out that we may want to restore the original engine->submit_request (and engine->schedule) callbacks from more than just the guc <-> execlists transition. Move this to a vfunc so we can have a common interface. v2: Move initial selection to intel_engines_init_common(), repaint vfunc with engine->set_default_submission (and a similar colour for the helper). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
I915_RESET_IN_PROGRESS is being used both for signaling the requirement to i915_mutex_lock_interruptible() to avoid taking the struct_mutex and for instructing a waiter (already holding the struct_mutex) to perform the reset. To allow for a little more coordination, split these two meanings into a couple of distinct flags. I915_RESET_BACKOFF tells i915_mutex_lock_interruptible() not to acquire the mutex and I915_RESET_HANDOFF tells the waiter to call i915_reset(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-1-chris@chris-wilson.co.uk
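Illustrative use of the two flags (bit numbers and surrounding code are a sketch, not the patch):

    #define I915_RESET_BACKOFF	0	/* new lockers must back off struct_mutex */
    #define I915_RESET_HANDOFF	1	/* the mutex holder should run i915_reset() */

    /* i915_mutex_lock_interruptible() only needs to check BACKOFF ... */
    if (test_bit(I915_RESET_BACKOFF, &error->flags))
            return -EAGAIN;

    /* ... while the waiter already holding struct_mutex acts on HANDOFF. */
    if (test_and_clear_bit(I915_RESET_HANDOFF, &error->flags))
            i915_reset(dev_priv);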
-
Changbin Du authored
GVTg has introduced the context status notifier to schedule the GVTg workload. At that time, the notifier was bound to the GVTg context only, so GVTg is not aware of host workloads. Now we are going to improve GVTg's guest workload scheduler policy, and add GuC emulation support for new Gen graphics. Both of these features require awareness of all contexts running on the hardware. (But will not alter host workloads.) So here we make some changes. The change is simple: 1. Move the context status notifier head from i915_gem_context to intel_engine_cs, which means there is a notifier head per engine instead of per context. The execlists driver still calls the notifier for each context sched-in/out event on the current engine. 2. At the GVTg side, bind a notifier_block for each physical engine at GVTg initialization time. Then GVTg can hear all context status events. In this patch, GVTg does nothing for host context events, but a function will be added there later. In any case, the notifier callback is a noop if there is no active vGPU. Since intel_gvt_init() is called at an early initialization stage and requires the status notifier head to have been initialized, initialize it in intel_engine_setup(). v2: remove a redundant newline. (chris) Fixes: 3c7ba635 ("drm/i915: Introduce execlist context status change notification") Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232 Signed-off-by: Changbin Du <changbin.du@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Zhi Wang <zhi.a.wang@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
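In terms of the standard kernel notifier API, the scheme looks roughly like this (i915/GVT-g field names are approximate):

    /* 1. One notifier head per engine, initialized in intel_engine_setup(). */
    ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);

    /* 2. Execlists raises sched-in/out events on that engine's chain. */
    atomic_notifier_call_chain(&engine->context_status_notifier,
                               INTEL_CONTEXT_SCHEDULE_IN, rq);

    /* 3. GVT-g registers one notifier_block per physical engine at init,
     *    so it now hears events for every context, not just its own.
     */
    for_each_engine(engine, dev_priv, id)
            atomic_notifier_chain_register(&engine->context_status_notifier,
                                           &gvt->engine_nb[id]);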
-
Chris Wilson authored
This emulates execlists on top of the GuC in order to defer submission of requests to the hardware. This deferral allows time for high priority requests to gazump their way to the head of the queue, however it nerfs the GuC by converting it back into a simple execlist (where the CPU has to wake up after every request to feed new commands into the GuC). v2: Drop hack status - though iirc there is still a lockdep inversion between fence and engine->timeline->lock (which is impossible as the nesting only occurs on different fences - hopefully just requires some judicious lockdep annotation) v3: Apply lockdep nesting to enabling signaling on the request, using the pattern we already have in __i915_gem_request_submit(); v4: Replaying requests after a hang also now needs the timeline spinlock, to disable the interrupts at least v5: Hold wq lock for completeness, and emit a tracepoint for enabling signal v6: Reorder interrupt checking for a happier gcc. v7: Only signal the tasklet after a user-interrupt if using guc scheduling v8: Restore lost update of rq through the i915_guc_irq_handler (Tvrtko) v9: Avoid re-initialising the engine->irq_tasklet from inside a reset v10: Hook up the execlists-style tracepoints v11: Clear the execlists irq_posted bit after taking over the interrupt/tasklet Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170316125619.6856-1-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
-
Chris Wilson authored
When manually overwriting the HWS, rather than assume irq_seqno_barrier does the right thing, we can explicitly flush the cacheline instead. This avoids us calling the engine->irq_seqno_barrier() from an illegal context: [ 1472.651797] BUG: scheduling while atomic: migration/0/11/0x00000002 [ 1472.651807] Modules linked in: ctr ccm arc4 snd_hda_codec_hdmi bnep rfcomm iwldvm snd_hda_codec_conexant snd_hda_codec_generic snd_hda_intel mac80211 snd_hda_codec snd_hda_core snd_pcm dm_multipath snd_hwdep intel_powerclamp coretemp snd_seq_midi crct10dif_pclmul snd_seq_midi_event crc32_pclmul iwlwifi ghash_clmulni_intel btusb snd_rawmidi btrtl aesni_intel btbcm aes_x86_64 crypto_simd btintel cryptd glue_helper bluetooth snd_seq cfg80211 snd_timer snd_seq_device intel_ips binfmt_misc snd mei_me soundcore mei dm_mirror dm_region_hash dm_log i915 intel_gtt i2c_algo_bit drm_kms_helper cfbfillrect syscopyarea cfbimgblt sysfillrect sysimgblt fb_sys_fops cfbcopyarea prime_numbers e1000e drm ahci libahci [ 1472.651897] CPU: 0 PID: 11 Comm: migration/0 Tainted: G U 4.11.0-rc1+ #203 [ 1472.651899] Hardware name: LENOVO 514328U/514328U, BIOS 6QET44WW (1.14 ) 04/20/2010 [ 1472.651900] Call Trace: [ 1472.651913] dump_stack+0x63/0x90 [ 1472.651922] __schedule_bug+0x5d/0x6b [ 1472.651930] __schedule+0x46a/0x5f0 [ 1472.651934] schedule+0x38/0x90 [ 1472.651938] schedule_hrtimeout_range_clock+0x85/0x110 [ 1472.651945] ? hrtimer_init+0x10/0x10 [ 1472.651949] schedule_hrtimeout_range+0xe/0x10 [ 1472.651952] usleep_range+0x4d/0x60 [ 1472.652037] gen5_seqno_barrier+0x13/0x20 [i915] [ 1472.652101] intel_engine_init_global_seqno+0xd7/0x160 [i915] [ 1472.652160] __i915_gem_set_wedged_BKL+0xa0/0x180 [i915] [ 1472.652166] multi_cpu_stop+0xbb/0xe0 [ 1472.652170] ? cpu_stop_queue_work+0x90/0x90 [ 1472.652174] cpu_stopper_thread+0x82/0x110 [ 1472.652179] smpboot_thread_fn+0x137/0x190 [ 1472.652184] kthread+0xf7/0x130 [ 1472.652187] ? sort_range+0x20/0x20 [ 1472.652191] ? kthread_park+0x90/0x90 [ 1472.652195] ret_from_fork+0x2c/0x40 Testcase: igt/gem_eio #ilk Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170314111452.9375-1-chris@chris-wilson.co.ukReviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
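The alternative the message describes looks roughly like this (a sketch; the helper names are believed to be the driver's, but the exact call site is illustrative):

    /* Write the seqno into the HWS and flush that cacheline ourselves,
     * instead of calling engine->irq_seqno_barrier(), which on gen5
     * sleeps and therefore must not be called from atomic context.
     */
    intel_write_status_page(engine, I915_GEM_HWS_INDEX, seqno);
    drm_clflush_virt_range(&engine->status_page.page_addr[I915_GEM_HWS_INDEX],
                           sizeof(u32));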
-
Mika Kuoppala authored
Set the byt rc6 residency counters to high range, as chv does by default. We lose some accuracy on byt, but we can do the calculation without an extra hw read on both platforms, as they now behave identically in this respect. v2: use ktime v3: keep comparison u32 (Chris) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1489592584-10422-1-git-send-email-mika.kuoppala@intel.com
-
Mika Kuoppala authored
We have used the cz timestamp register to obtain a reference time for residency calculations. The residency counts are in cz clk ticks (333Mhz clock), but for some reason the cz timestamp register gives 100us units. Perhaps for some other usage the base-ten values are easier, but for residency calculations raw units would have been the easiest. As there is not much advantage in using the base-ten clock through a more costly punit access, take our reference times directly from the kernel clock. v2: use ktime (Chris, Ville) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Mika Kuoppala authored
Use intel_rc6_residency to benefit from the increased resolution on byt/chv. v2: output raw and time (Chris) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Mika Kuoppala authored
Vlv and chv residency counters are 40 bits in width. With a control bit, we can choose between an upper or lower 32-bit window into this counter. Let's toggle this bit on and off and read both parts. As a result we can push the wrap from 13 seconds to 54 minutes. v2: commit msg, loop readability, goto elimination (Chris) v3: bug ref, divide outside runtime pm lock (Chris) References: https://bugs.freedesktop.org/show_bug.cgi?id=94852 Reported-by: Len Brown <len.brown@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
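The wrap numbers follow directly from the clock: 2^32 ticks at 333 MHz is about 12.9 seconds, while 2^40 ticks is about 3300 seconds, i.e. roughly 55 minutes. A sketch of the read (register and bit names approximate; it assumes the upper window exposes bits 39:8, and a real implementation also has to guard against the counter wrapping between the two reads):

    /* Select the upper 32-bit window, read it, switch back and read the
     * lower window, then stitch the 40-bit value back together.
     */
    I915_WRITE_FW(VLV_COUNTER_CONTROL,
                  _MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH));
    upper = I915_READ_FW(reg);

    I915_WRITE_FW(VLV_COUNTER_CONTROL,
                  _MASKED_BIT_DISABLE(VLV_COUNT_RANGE_HIGH));
    lower = I915_READ_FW(reg);

    time_hw = lower | ((u64)upper << 8);	/* raw 40-bit tick count */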
-
Mika Kuoppala authored
Change the granularity from milliseconds to microseconds when returning rc6 residencies. This is in preparation for increased resolution on some platforms. v2: use 64bit div macro (Chris) Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
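The conversion itself is simple; a sketch of the kind of helper involved (names and parameters invented for illustration), using a 64-bit division macro as noted in v2:

    /* ticks / kHz would give milliseconds; scale by 1000 first to keep
     * microsecond resolution, and use the 64-bit helper so this is safe
     * on 32-bit kernels too.
     */
    static u64 rc6_residency_us_sketch(u64 raw_ticks, u32 clk_khz)
    {
            return DIV_ROUND_UP_ULL(raw_ticks * 1000, clk_khz);
    }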
-
Mika Kuoppala authored
The plan is to make a generic residency calculation utility function for use outside of sysfs. As a first step, move the residency calculation into intel_pm.c Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
lockdep doesn't like us taking the mm->mmap_sem inside the get_pages callback for a couple of reasons. The straightforward deadlock: [13755.434059] ============================================= [13755.434061] [ INFO: possible recursive locking detected ] [13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U [13755.434066] --------------------------------------------- [13755.434068] gem_userptr_bli/8398 is trying to acquire lock: [13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434096] but task is already holding lock: [13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434105] other info that might help us debug this: [13755.434108] Possible unsafe locking scenario: [13755.434110] CPU0 [13755.434111] ---- [13755.434112] lock(&mm->mmap_sem); [13755.434115] lock(&mm->mmap_sem); [13755.434117] *** DEADLOCK *** [13755.434121] May be due to missing lock nesting notation [13755.434126] 2 locks held by gem_userptr_bli/8398: [13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915] [13755.434156] stack backtrace: [13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1 [13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017 [13755.434169] Call Trace: [13755.434174] dump_stack+0x67/0x92 [13755.434178] __lock_acquire+0x133a/0x1b50 [13755.434182] lock_acquire+0xc9/0x220 [13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434204] down_read+0x42/0x70 [13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915] [13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915] [13755.434288] i915_gem_fault+0x397/0x6a0 [i915] [13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915] [13755.434308] ? __lock_acquire+0x449/0x1b50 [13755.434311] ? __lock_acquire+0x449/0x1b50 [13755.434315] ? vm_mmap_pgoff+0xa9/0xd0 [13755.434318] __do_fault+0x19/0x70 [13755.434321] __handle_mm_fault+0x863/0xe50 [13755.434325] handle_mm_fault+0x17f/0x370 [13755.434329] ? 
handle_mm_fault+0x40/0x370 [13755.434332] __do_page_fault+0x279/0x560 [13755.434336] do_page_fault+0xc/0x10 [13755.434339] page_fault+0x22/0x30 [13755.434342] RIP: 0033:0x7f5ab91b5880 [13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216 [13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000 [13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000 [13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000 [13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001 [13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000 and cyclic deadlocks: [ 2566.458979] ====================================================== [ 2566.459054] [ INFO: possible circular locking dependency detected ] [ 2566.459127] 4.11.0-rc1+ #26 Not tainted [ 2566.459194] ------------------------------------------------------- [ 2566.459266] gem_streaming_w/759 is trying to acquire lock: [ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915] [ 2566.459605] [ 2566.459605] but task is already holding lock: [ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.459814] [ 2566.459814] which lock already depends on the new lock. [ 2566.459814] [ 2566.459934] [ 2566.459934] the existing dependency chain (in reverse order) is: [ 2566.460030] [ 2566.460030] -> #1 (&mm->mmap_sem){++++++}: [ 2566.460139] lock_acquire+0xfe/0x220 [ 2566.460214] down_read+0x4e/0x90 [ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915] [ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915] [ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915] [ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915] [ 2566.461352] eb_add_vma+0x67b/0x830 [i915] [ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915] [ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915] [ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915] [ 2566.462152] drm_ioctl+0x36c/0x670 [drm] [ 2566.462236] do_vfs_ioctl+0x12c/0xa60 [ 2566.462317] SyS_ioctl+0x41/0x70 [ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1 [ 2566.462477] [ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}: [ 2566.462587] __lock_acquire+0x1602/0x1790 [ 2566.462661] lock_acquire+0xfe/0x220 [ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.463197] __do_fault+0x42/0x130 [ 2566.463276] __handle_mm_fault+0x92c/0x1280 [ 2566.463356] handle_mm_fault+0x1e2/0x440 [ 2566.463443] __do_page_fault+0x1c4/0x500 [ 2566.463529] do_page_fault+0xc/0x10 [ 2566.463613] page_fault+0x1f/0x30 [ 2566.463693] [ 2566.463693] other info that might help us debug this: [ 2566.463693] [ 2566.463820] Possible unsafe locking scenario: [ 2566.463820] [ 2566.463918] CPU0 CPU1 [ 2566.463988] ---- ---- [ 2566.464068] lock(&mm->mmap_sem); [ 2566.464143] lock(&obj->mm.lock); [ 2566.464226] lock(&mm->mmap_sem); [ 2566.464304] lock(&obj->mm.lock); [ 2566.464378] [ 2566.464378] *** DEADLOCK *** [ 2566.464378] [ 2566.464504] 1 lock held by gem_streaming_w/759: [ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.464699] [ 2566.464699] stack backtrace: [ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26 [ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016 [ 2566.464983] Call Trace: [ 2566.465061] dump_stack+0x68/0x9f [ 2566.465144] print_circular_bug+0x20b/0x260 [ 2566.465234] 
__lock_acquire+0x1602/0x1790 [ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0 [ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915] [ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30 [ 2566.465749] lock_acquire+0xfe/0x220 [ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915] [ 2566.467030] ? __lock_acquire+0x642/0x1790 [ 2566.467122] ? __lock_acquire+0x642/0x1790 [ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0 [ 2566.467387] __do_fault+0x42/0x130 [ 2566.467474] __handle_mm_fault+0x92c/0x1280 [ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0 [ 2566.467651] ? vm_mmap_pgoff+0x160/0x190 [ 2566.467740] ? handle_mm_fault+0x111/0x440 [ 2566.467827] handle_mm_fault+0x1e2/0x440 [ 2566.467914] ? handle_mm_fault+0x5d/0x440 [ 2566.468002] __do_page_fault+0x1c4/0x500 [ 2566.468090] do_page_fault+0xc/0x10 [ 2566.468180] page_fault+0x1f/0x30 [ 2566.468263] RIP: 0033:0x557895ced32a [ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202 [ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da [ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000 [ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000 [ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60 [ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000 By checking the status of the gup worker (serialized by the obj->mm.lock) we can determine whether it is still active, has failed or has succeeded. If the worker is still active (or failed), we know that it cannot be bound and so we can skip taking struct_mutex (risking potential recursion). As we check the worker status, we mark it to discard any partial results, forcing us to restart on the next get_pages. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Fixes: 1c8782dd ("drm/i915/userptr: Disallow wrapping GTT into a userptr") Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Testcase: igt/gem_userptr_blits/dmabuf-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315140150.19432-1-chris@chris-wilson.co.ukReviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Michal Wajdeczko authored
After a negative GuC firmware selection we could leave the GuC submission flag turned on. Reorder some checks to cover this case. While here, fix the info message and return early if there is no GuC. Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> [tursulin: fixup bad alignment] Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315133741.150420-1-michal.wajdeczko@intel.com
-
Arkadiusz Hiler authored
This field is used to determine which kind of firmware the struct describes (GuC/HuC), which the name does not reflect. The enum used here has "type" in the name, so let's go with that. Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Oscar Mateo <oscar.mateo@intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315133415.15343-1-arkadiusz.hiler@intel.com
-
Chris Wilson authored
Tvrtko spotted a stale reference to b->lock (now b->rb_lock) so review the comments and try to improve them in passing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315222259.1469-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 15 Mar, 2017 13 commits
-
-
Chris Wilson authored
Check that request has not been signaled before acquiring a reference to the request for signaling later in the interrupt handler. The loading of the cacheline (for request->fence.flags) should be "free" when followed by the locked increment of the request->fence.refcount (which then sets the cacheline to exclusive mode), i.e. the cost of test_bit prior to an atomic_inc should be negligible. This should benefit us when we have a pile of bare breadcrumbs (interrupted execbuf) where we may get interrupts faster than we can get rid of the intel_wait, or if the device is too slow to run the bottom-half between interrupts. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-5-chris@chris-wilson.co.uk
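The ordering being described, in sketch form (the dma-fence flag and helper names are the standard kernel ones; the surrounding logic is illustrative):

    /* Cheap shared read first: if the request is already signaled there is
     * nothing to hand to the bottom half, and we never pay for the locked
     * increment that would pull the cacheline exclusive.
     */
    if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &rq->fence.flags))
            return;

    dma_fence_get(&rq->fence);	/* reference for the signaler */
    /* ... queue rq for signaling from the bottom half ... */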
-
Chris Wilson authored
We need to ensure that we always serialize updates to the bottom-half using the breadcrumbs.irq_lock so that we don't race with a concurrent interrupt handler. This is most important just prior to leaving the waiter (when the intel_wait will be overwritten), so make sure we are not the current bottom-half when skipping the irq locks. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Before walking the rbtree of waiters (marking them as complete and waking them), decouple the interrupt handler. This prevents a race between the missed waiter waking up and removing its intel_wait (which skips checking the lock) and the interrupt handler dereferencing the intel_wait. (Though we do not expect to encounter waiters during idle!) Fixes: e1c0c91b ("drm/i915: Wake up all waiters before idling") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
When adding a new request to the breadcrumb rbtree, we mark all those requests inside the rbtree that are already completed as complete. This wakes those waiters up and allows them to skip the spinlock before returning to userspace. If one of those is the current bottom-half and allocated its intel_wait on the stack, it may then overwrite the b->irq_wait upon exiting i915_wait_request() just as the interrupt handler dereferences it. Fixes: 56299fb7 ("drm/i915: Signal first fence from irq handler if complete") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Since commit 9b6586ae ("drm/i915: Keep a global seqno per-engine") converted intel_breadcrumbs_busy() to reporting a single boolean, we need only compute a boolean internally (and not needlessly compute the flag). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
We take the runtime pm wakelock during i915_handle_error() to ensure that all paths that reach the error handler keep the device awake during the hw reads. However, we need to extend that from the reset handler to include the earlier capture routines. Reported-by: Antonio Argenziano <antonio.argenziano@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michel Thierry <michel.thierry@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170314171840.25706-1-chris@chris-wilson.co.uk Reviewed-by: Michel Thierry <michel.thierry@intel.com>
-
Michal Wajdeczko authored
Manual pointer manipulation is error prone. Let the compiler calculate the right offsets for us in case we need to change the ads layout. v2: don't call it object (Chris) v3: restyle offset assignments (Chris) v4: stylistic reductions Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Oscar Mateo <oscar.mateo@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170314133309.126432-1-michal.wajdeczko@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
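A self-contained illustration of the technique (the struct layout is invented; the real ads blob has different members):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Describe the whole allocation as one struct and let the compiler
     * compute the offsets, instead of advancing a pointer by hand.
     */
    struct guc_ads_blob_example {
            uint32_t ads[32];
            uint32_t policies[64];
            uint32_t reg_state[128];
    };

    int main(void)
    {
            printf("policies at %zu, reg_state at %zu, total %zu bytes\n",
                   offsetof(struct guc_ads_blob_example, policies),
                   offsetof(struct guc_ads_blob_example, reg_state),
                   sizeof(struct guc_ads_blob_example));
            return 0;
    }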
-
Arkadiusz Hiler authored
`guc_firmware_path` and `huc_firmware_path` module parameters are added. Using a parameter disables version checks and loads the desired firmware instead of the default one. v2: make params unsafe && notice about disabled fw check (J. Lahtinen) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Michal Winiarski <michal.winiarski@intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
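Such an override is typically wired up along these lines (a sketch; the actual i915 parameter plumbing differs):

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static char *guc_firmware_path;	/* illustrative variable, not the i915 one */

    /* "unsafe" taints the kernel when used, since the version check is skipped. */
    module_param_named_unsafe(guc_firmware_path, guc_firmware_path, charp, 0400);
    MODULE_PARM_DESC(guc_firmware_path,
                     "GuC firmware path to use instead of the default one");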
-
Arkadiusz Hiler authored
intel_{h,g}uc_init_fw selects the correct firmware and then triggers its preparation (fetch + initial parsing). This change separates out the select steps, so those can be called by sanitize_options(). Then, during init_fw(), we prepare the firmware if one was selected. Cc: Michal Winiarski <michal.winiarski@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Arkadiusz Hiler authored
Currently fw->path values can represent one of three possible states: 1) NULL - device without the uC 2) '\0' - device with the uC but we have no firmware 3) else - device with the uC and we have firmware The second case is used only to WARN at a later stage. We can WARN right away and merge cases 1 and 2. The code can be simplified even further, with the common HuC/GuC logic happening right before the fetch offloaded to a common function. v2: fewer temporary variables, more straightforward flow (M. Wajdeczko) v3: DRM_ERROR instead of WARN (M. Wajdeczko) v4: coding standard (J. Lahtinen) v5: non-trivial rebase v6: remove path check, we are checking fetch status (M. Wajdeczko) Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michal Winiarski <michal.winiarski@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Arkadiusz Hiler authored
The current version of intel_guc_init_hw() does a lot: - cares about submission - loads the HuC - implements WAs This change offloads some of the logic to intel_uc_init_hw(), which now cares about the above. v2: rename guc_hw_reset and fix typo in define name (M. Wajdeczko) v3: rename once again v4: remove spurious comments and add some style (J. Lahtinen) v5: flow changes, got rid of dead checks (M. Wajdeczko) v6: rebase v7: rebase & onion teardown (J. Lahtinen) Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Michal Winiarski <michal.winiarski@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Arkadiusz Hiler authored
Let intel_guc_init_fw() focus on determining and fetching the correct firmware. This patch introduces intel_uc_sanitize_options() that is called from intel_sanitize_options(). Then, if we have GuC, we can call intel_guc_init_fw() conditionally and we do not have to do the internal checks. v2: fix comment, notify when nuking GuC explicitly enabled (M. Wajdeczko) v3: fix comment again, change the nuke message (M. Wajdeczko) v4: update title to reflect new function name + rebase v5: text && remove 2 uneccessary checks (M. Wajdeczko) Cc: Michal Winiarski <michal.winiarski@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Arkadiusz Hiler authored
Instead of calling intel_guc_init() and intel_huc_init() one by one, this patch introduces an intel_uc_init_fw() function that calls them both. The called functions are renamed accordingly. Aiming for subject_verb_object ordering and more descriptive names, the intel_huc_init() and intel_guc_init() functions are renamed. For guc_init(): * `intel_guc` is the subject, so those functions now take the intel_guc structure, instead of dev_priv * init is the verb * fw is the object, which better describes the function's role The huc_init() change follows the same reasoning. v2: settle on intel_uc_fetch_fw name (M. Wajdeczko) v3: yet another rename - intel_uc_init_fw (J. Lahtinen) v4: non-trivial rebase Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Michal Winiarski <michal.winiarski@intel.com> Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-