1. 08 Feb, 2017 1 commit
  2. 07 Feb, 2017 3 commits
    • drm/i915: Restore context and pd for ringbuffer submission after reset · c0dcb203
      Chris Wilson authored
      Following a reset, the context and page directory registers are lost.
      However, the queue of requests that we resubmit after the reset may
      depend upon them - the registers are restored from a context image, but
      that restore may be inhibited and may simply be absent from the request
      if it was in the middle of a sequence using the same context. If we
      prime the CCID/PD registers with the first request in the queue (even
      for the hung request), we prevent invalid memory access for the
      following requests (and continually hung engines).
      
      v2: Magic BIT(8), reserved for future use but still appears unused.
      v3: Some commentary on handling innocent vs guilty requests
      v4: Add a wait for PD_BASE fetch. The reload appears to be instant on my
      Ivybridge, but this bit probably exists for a reason.
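      
      As a rough illustration of the approach described above, here is a
      hedged sketch of priming the CCID and page-directory registers from
      the first request in the queue during the legacy ringbuffer reset
      path. The register names (CCID, RING_PP_DIR_DCLV, RING_PP_DIR_BASE)
      and helpers such as I915_WRITE() and intel_wait_for_register() exist
      in the i915 driver of this era, but the function shape, field
      accesses, status bit and timeout below are illustrative assumptions,
      not the verbatim patch:
      
          /* Sketch only: restore CCID/PD for the engine after a reset,
           * using the first queued request (even the hung one) so that
           * subsequent requests see valid context and page-directory
           * state.  Field layout is assumed, not quoted from the patch.
           */
          static void reset_prime_context_and_pd(struct intel_engine_cs *engine,
                                                 struct drm_i915_gem_request *request)
          {
                  struct drm_i915_private *dev_priv = request->i915;
                  struct intel_context *ce = &request->ctx->engine[engine->id];
      
                  /* Reload the context image; BIT(8) is the reserved bit
                   * noted in v2 that must be set alongside CCID_EN.
                   */
                  if (ce->state)
                          I915_WRITE(CCID,
                                     i915_ggtt_offset(ce->state) |
                                     BIT(8) | CCID_EN);
      
                  /* Reload the page directory and wait for the fetch to
                   * complete (the PD_BASE wait added in v4).
                   */
                  if (request->ctx->ppgtt) {
                          I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
                          I915_WRITE(RING_PP_DIR_BASE(engine),
                                     request->ctx->ppgtt->pd.base.ggtt_offset << 10);
      
                          /* Poll until the hardware signals the reload is
                           * done; the status bit and 10ms timeout here are
                           * assumptions.
                           */
                          intel_wait_for_register(dev_priv,
                                                  RING_PP_DIR_BASE(engine),
                                                  BIT(0), 0, 10);
                  }
          }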
      
      Fixes: 821ed7df ("drm/i915: Update reset path to fix incomplete requests")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170207152437.4252-1-chris@chris-wilson.co.uk
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
    • Arthur Heymans · 6248017a
    • drm/i915: Remove overzealous fence warn on runtime suspend · e0ec3ec6
      Chris Wilson authored
      The goal of the WARN was to catch when we are still actively using the
      fence as we go into runtime suspend. However, reg->pin_count is too
      coarse, as it does not distinguish exclusive ownership of the fence
      register from activity.
      
      I have not improved on the WARN, nor have we captured this case in an
      exact igt, but it is showing up regularly in the wild:
      
      [ 1915.935332] WARNING: CPU: 1 PID: 10861 at drivers/gpu/drm/i915/i915_gem.c:2022 i915_gem_runtime_suspend+0x116/0x130 [i915]
      [ 1915.935383] WARN_ON(reg->pin_count)
      [ 1915.935399] Modules linked in:
       snd_hda_intel i915 drm_kms_helper vgem netconsole scsi_transport_iscsi fuse vfat fat x86_pkg_temp_thermal coretemp intel_cstate intel_uncore snd_hda_codec_hdmi snd_hda_codec_generic snd_hda_codec snd_hwdep snd_hda_core snd_pcm snd_timer snd mei_me mei serio_raw intel_rapl_perf intel_pch_thermal soundcore wmi acpi_pad i2c_algo_bit syscopyarea sysfillrect sysimgblt fb_sys_fops drm r8169 mii video [last unloaded: drm_kms_helper]
      [ 1915.935785] CPU: 1 PID: 10861 Comm: kworker/1:0 Tainted: G     U  W       4.9.0-rc5+ #170
      [ 1915.935799] Hardware name: LENOVO 80MX/Lenovo E31-80, BIOS DCCN34WW(V2.03) 12/01/2015
      [ 1915.935822] Workqueue: pm pm_runtime_work
      [ 1915.935845]  ffffc900044fbbf0 ffffffffac3220bc ffffc900044fbc40 0000000000000000
      [ 1915.935890]  ffffc900044fbc30 ffffffffac059bcb 000007e6044fbc60 ffff8801626e3198
      [ 1915.935937]  ffff8801626e0000 0000000000000002 ffffffffc05e5d4e 0000000000000000
      [ 1915.935985] Call Trace:
      [ 1915.936013]  [<ffffffffac3220bc>] dump_stack+0x4f/0x73
      [ 1915.936038]  [<ffffffffac059bcb>] __warn+0xcb/0xf0
      [ 1915.936060]  [<ffffffffac059c4f>] warn_slowpath_fmt+0x5f/0x80
      [ 1915.936158]  [<ffffffffc052d916>] i915_gem_runtime_suspend+0x116/0x130 [i915]
      [ 1915.936251]  [<ffffffffc04f1c74>] intel_runtime_suspend+0x64/0x280 [i915]
      [ 1915.936277]  [<ffffffffac0926f1>] ? dequeue_entity+0x241/0xbc0
      [ 1915.936298]  [<ffffffffac36bb85>] pci_pm_runtime_suspend+0x55/0x180
      [ 1915.936317]  [<ffffffffac36bb30>] ? pci_pm_runtime_resume+0xa0/0xa0
      [ 1915.936339]  [<ffffffffac4514e2>] __rpm_callback+0x32/0x70
      [ 1915.936356]  [<ffffffffac451544>] rpm_callback+0x24/0x80
      [ 1915.936375]  [<ffffffffac36bb30>] ? pci_pm_runtime_resume+0xa0/0xa0
      [ 1915.936392]  [<ffffffffac45222d>] rpm_suspend+0x12d/0x680
      [ 1915.936415]  [<ffffffffac69f6d7>] ? _raw_spin_unlock_irq+0x17/0x30
      [ 1915.936435]  [<ffffffffac0810b8>] ? finish_task_switch+0x88/0x220
      [ 1915.936455]  [<ffffffffac4534bf>] pm_runtime_work+0x6f/0xb0
      [ 1915.936477]  [<ffffffffac074353>] process_one_work+0x1f3/0x4d0
      [ 1915.936501]  [<ffffffffac074678>] worker_thread+0x48/0x4e0
      [ 1915.936523]  [<ffffffffac074630>] ? process_one_work+0x4d0/0x4d0
      [ 1915.936542]  [<ffffffffac074630>] ? process_one_work+0x4d0/0x4d0
      [ 1915.936559]  [<ffffffffac07a2c9>] kthread+0xd9/0xf0
      [ 1915.936580]  [<ffffffffac07a1f0>] ? kthread_park+0x60/0x60
      [ 1915.936600]  [<ffffffffac69fe62>] ret_from_fork+0x22/0x30
      
      In the case where the register is pinned, it should remain present, and
      we will need to invalidate it so that it is restored upon resume, as we
      cannot expect the owner of the pin to call get_fence prior to use after
      resume.
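      
      A hedged sketch of what that invalidation might look like in the
      runtime-suspend path follows; the struct and field names (fence_regs,
      num_fence_regs, pin_count, dirty) match the i915 driver of this era,
      but the loop shape is an illustrative assumption, not the verbatim
      patch:
      
          /* Sketch only: instead of WARN_ON(reg->pin_count), mark every
           * fence register dirty on runtime suspend so a pinned fence is
           * rewritten on resume before its owner touches it again.
           */
          static void runtime_suspend_invalidate_fences(struct drm_i915_private *dev_priv)
          {
                  int i;
      
                  for (i = 0; i < dev_priv->num_fence_regs; i++) {
                          struct drm_i915_fence_reg *reg =
                                  &dev_priv->fence_regs[i];
      
                          /* pin_count only tracks exclusive ownership, not
                           * activity, so it is the wrong thing to WARN on;
                           * dirtying the register forces a rewrite on the
                           * next use after resume.
                           */
                          reg->dirty = true;
                  }
          }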
      
      Fixes: 7c108fd8 ("drm/i915: Move fence cancellation to runtime suspend")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98804
      Reported-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Imre Deak <imre.deak@linux.intel.com>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: <drm-intel-fixes@lists.freedesktop.org> # v4.10-rc1+
      Link: http://patchwork.freedesktop.org/patch/msgid/20170203125717.8431-1-chris@chris-wilson.co.uk
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  3. 06 Feb, 2017 18 commits
  4. 04 Feb, 2017 1 commit
  5. 03 Feb, 2017 6 commits
  6. 02 Feb, 2017 3 commits
  7. 01 Feb, 2017 7 commits
  8. 31 Jan, 2017 1 commit