1. 07 Dec, 2019 3 commits
    • drm/i915/gtt: Account for preallocation in asserts · ca5930b1
      Chris Wilson authored
      Our asserts allow for the PDEs to be allocated concurrently, but we did
      not account for the aliasing-ppgtt being preallocated on top.
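
      A minimal sketch of the adjusted assert (structure and field names
      modeled on the i915 of this era; treat them as illustrative): a
      non-empty PDE slot is fine so long as it holds the entry we expect,
      whether filled by a concurrent allocator or preallocated wholesale
      for the aliasing-ppgtt.

        static void set_pd_entry(struct i915_page_directory *pd,
                                 unsigned short idx,
                                 struct i915_page_table *pt)
        {
                /*
                 * The slot may already be populated: another thread may
                 * have won the allocation race, or the aliasing-ppgtt
                 * preallocated every PDE up front. Only a different,
                 * unexpected entry is a bug.
                 */
                GEM_BUG_ON(pd->entry[idx] && pd->entry[idx] != pt);
                pd->entry[idx] = pt;
        }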
      
      Testcase: igt/gem_ppgtt #bsw
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191207221453.2802627-1-chris@chris-wilson.co.uk
    • drm/i915: Avoid calling i915_gem_object_unbind holding object lock · 8b1c78e0
      Chris Wilson authored
      In the extreme case, we may wish to wait on an rcu-barrier to reap
      stale vm and so purge the last of the object bindings. However, we are
      not allowed to use rcu_barrier() beneath the dma_resv (i.e. object)
      lock, and do not take lightly the prospect of unlocking a mutex deep
      in the bowels of the routine. i915_gem_object_unbind() itself does not
      need the object lock, and it turns out the callers do not need the
      unbind as part of a locked sequence around set-cache-level, so
      rearrange the code to avoid taking the object lock in the callers.
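
      A sketch of the rearrangement with simplified call sites (the real
      callers are the set-cache-level/pin-to-display paths; unbind flags
      are elided):

        /* Before: unbinding beneath the object (dma_resv) lock means any
         * rcu_barrier() inside i915_gem_object_unbind() completes the
         * lock cycle reported below. */
        i915_gem_object_lock(obj);
        ret = i915_gem_object_unbind(obj, flags);
        if (!ret)
                ret = i915_gem_object_set_cache_level(obj, level);
        i915_gem_object_unlock(obj);

        /* After: the unbind needs no object lock, so hoist it out of the
         * locked section and take the lock only for the cache-level
         * update itself. */
        ret = i915_gem_object_unbind(obj, flags);
        if (!ret) {
                i915_gem_object_lock(obj);
                ret = i915_gem_object_set_cache_level(obj, level);
                i915_gem_object_unlock(obj);
        }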
      
      <4> [186.816311] ======================================================
      <4> [186.816313] WARNING: possible circular locking dependency detected
      <4> [186.816316] 5.4.0-rc8-CI-CI_DRM_7486+ #1 Tainted: G     U
      <4> [186.816318] ------------------------------------------------------
      <4> [186.816320] perf_pmu/1321 is trying to acquire lock:
      <4> [186.816322] ffff88849487c4d8 (&mm->mmap_sem#2){++++}, at: __might_fault+0x39/0x90
      <4> [186.816331]
      but task is already holding lock:
      <4> [186.816333] ffffe8ffffa05008 (&cpuctx_mutex){+.+.}, at: perf_event_ctx_lock_nested+0xa9/0x1b0
      <4> [186.816339]
      which lock already depends on the new lock.
      
      <4> [186.816341]
      the existing dependency chain (in reverse order) is:
      <4> [186.816343]
      -> #6 (&cpuctx_mutex){+.+.}:
      <4> [186.816349]        __mutex_lock+0x9a/0x9d0
      <4> [186.816352]        perf_event_init_cpu+0xa4/0x140
      <4> [186.816357]        perf_event_init+0x19d/0x1cd
      <4> [186.816362]        start_kernel+0x372/0x4f4
      <4> [186.816365]        secondary_startup_64+0xa4/0xb0
      <4> [186.816381]
      -> #5 (pmus_lock){+.+.}:
      <4> [186.816385]        __mutex_lock+0x9a/0x9d0
      <4> [186.816387]        perf_event_init_cpu+0x6b/0x140
      <4> [186.816404]        cpuhp_invoke_callback+0x9b/0x9d0
      <4> [186.816406]        _cpu_up+0xa2/0x140
      <4> [186.816409]        do_cpu_up+0x61/0xa0
      <4> [186.816411]        smp_init+0x57/0x96
      <4> [186.816413]        kernel_init_freeable+0xac/0x1c7
      <4> [186.816416]        kernel_init+0x5/0x100
      <4> [186.816419]        ret_from_fork+0x24/0x50
      <4> [186.816421]
      -> #4 (cpu_hotplug_lock.rw_sem){++++}:
      <4> [186.816424]        cpus_read_lock+0x34/0xd0
      <4> [186.816427]        rcu_barrier+0xaa/0x190
      <4> [186.816429]        kernel_init+0x21/0x100
      <4> [186.816431]        ret_from_fork+0x24/0x50
      <4> [186.816433]
      -> #3 (rcu_state.barrier_mutex){+.+.}:
      <4> [186.816436]        __mutex_lock+0x9a/0x9d0
      <4> [186.816438]        rcu_barrier+0x23/0x190
      <4> [186.816502]        i915_gem_object_unbind+0x3a6/0x400 [i915]
      <4> [186.816537]        i915_gem_object_set_cache_level+0x32/0x90 [i915]
      <4> [186.816571]        i915_gem_object_pin_to_display_plane+0x5d/0x160 [i915]
      <4> [186.816612]        intel_pin_and_fence_fb_obj+0x9e/0x200 [i915]
      <4> [186.816679]        intel_plane_pin_fb+0x3f/0xd0 [i915]
      <4> [186.816717]        intel_prepare_plane_fb+0x130/0x520 [i915]
      <4> [186.816722]        drm_atomic_helper_prepare_planes+0x85/0x110
      <4> [186.816761]        intel_atomic_commit+0xc6/0x350 [i915]
      <4> [186.816764]        drm_atomic_helper_update_plane+0xed/0x110
      <4> [186.816768]        setplane_internal+0x97/0x190
      <4> [186.816770]        drm_mode_setplane+0xcd/0x190
      <4> [186.816773]        drm_ioctl_kernel+0xa7/0xf0
      <4> [186.816775]        drm_ioctl+0x2e1/0x390
      <4> [186.816778]        do_vfs_ioctl+0xa0/0x6f0
      <4> [186.816780]        ksys_ioctl+0x35/0x60
      <4> [186.816782]        __x64_sys_ioctl+0x11/0x20
      <4> [186.816785]        do_syscall_64+0x4f/0x210
      <4> [186.816787]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <4> [186.816789]
      -> #2 (reservation_ww_class_mutex){+.+.}:
      <4> [186.816793]        __ww_mutex_lock.constprop.15+0xc3/0x1090
      <4> [186.816795]        ww_mutex_lock+0x39/0x70
      <4> [186.816798]        dma_resv_lockdep+0x10e/0x1f7
      <4> [186.816800]        do_one_initcall+0x58/0x2ff
      <4> [186.816802]        kernel_init_freeable+0x137/0x1c7
      <4> [186.816804]        kernel_init+0x5/0x100
      <4> [186.816806]        ret_from_fork+0x24/0x50
      <4> [186.816808]
      -> #1 (reservation_ww_class_acquire){+.+.}:
      <4> [186.816811]        dma_resv_lockdep+0xec/0x1f7
      <4> [186.816813]        do_one_initcall+0x58/0x2ff
      <4> [186.816815]        kernel_init_freeable+0x137/0x1c7
      <4> [186.816817]        kernel_init+0x5/0x100
      <4> [186.816819]        ret_from_fork+0x24/0x50
      <4> [186.816820]
      -> #0 (&mm->mmap_sem#2){++++}:
      <4> [186.816824]        __lock_acquire+0x1328/0x15d0
      <4> [186.816826]        lock_acquire+0xa7/0x1c0
      <4> [186.816828]        __might_fault+0x63/0x90
      <4> [186.816831]        _copy_to_user+0x1e/0x80
      <4> [186.816834]        perf_read+0x200/0x2b0
      <4> [186.816836]        vfs_read+0x96/0x160
      <4> [186.816838]        ksys_read+0x9f/0xe0
      <4> [186.816839]        do_syscall_64+0x4f/0x210
      <4> [186.816841]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <4> [186.816843]
      other info that might help us debug this:
      
      <4> [186.816846] Chain exists of:
        &mm->mmap_sem#2 --> pmus_lock --> &cpuctx_mutex
      
      <4> [186.816849]  Possible unsafe locking scenario:
      
      <4> [186.816851]        CPU0                    CPU1
      <4> [186.816853]        ----                    ----
      <4> [186.816854]   lock(&cpuctx_mutex);
      <4> [186.816856]                                lock(pmus_lock);
      <4> [186.816858]                                lock(&cpuctx_mutex);
      <4> [186.816860]   lock(&mm->mmap_sem#2);
      <4> [186.816861]
       *** DEADLOCK ***
      
      Closes: https://gitlab.freedesktop.org/drm/intel/issues/728
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Andi Shyti <andi.shyti@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191206105527.1130413-5-chris@chris-wilson.co.uk
    • drm/i915/guc: Update uncore access path in flush_ggtt_writes · a22198a9
      Matthew Brost authored
      The preferred way to access the uncore is through the GT structure.
      Update the GuC function, flush_ggtt_writes, to use this path.
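
      A sketch consistent with that description (GUC_STATUS serves as an
      illustrative register to post the read against): take the uncore
      from the GT that owns the vma's address space instead of reaching
      through the global device pointer.

        static void flush_ggtt_writes(struct i915_vma *vma)
        {
                struct intel_gt *gt = vma->vm->gt;

                if (i915_vma_is_map_and_fenceable(vma))
                        intel_uncore_posting_read_fw(gt->uncore, GUC_STATUS);
        }
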
      Signed-off-by: Matthew Brost <matthew.brost@intel.com>
      Signed-off-by: John Harrison <john.c.harrison@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191207010033.24667-1-John.C.Harrison@Intel.com
  2. 06 Dec, 2019 14 commits
  3. 05 Dec, 2019 8 commits
  4. 04 Dec, 2019 12 commits
  5. 03 Dec, 2019 3 commits
    • drm/i915/gt: Set the PD again for Haswell · 13bb5b99
      Chris Wilson authored
      And Haswell still occasionally forgets it is meant to be using a new
      page directory, so repeat ourselves a little louder.
      
      <7> [509.919864] heartbeat rcs0 heartbeat {prio:-2147483645} not ticking
      <7> [509.919895] heartbeat 	Awake? 8
      <7> [509.919903] heartbeat 	Barriers?: no
      <7> [509.919912] heartbeat 	Heartbeat: 3008 ms ago
      <7> [509.919930] heartbeat 	Reset count: 0 (global 0)
      <7> [509.919937] heartbeat 	Requests:
      <7> [509.921008] heartbeat 		active  a7eb:56e1*  @ 5847ms:
      <7> [509.921157] heartbeat 		ring->start:  0x00001000
      <7> [509.921164] heartbeat 		ring->head:   0x00001610
      <7> [509.921170] heartbeat 		ring->tail:   0x000023d8
      <7> [509.921176] heartbeat 		ring->emit:   0x000023d8
      <7> [509.921182] heartbeat 		ring->space:  0x00002570
      <7> [509.921189] heartbeat 		ring->hwsp:   0x7fffe100
      <7> [509.921197] heartbeat [head 1628, postfix 1738, tail 1750, batch 0xffffffff_ffffffff]:
      <7> [509.921289] heartbeat [0000] 7a000002 00100002 00000000 00000000 7a000002 01154c1e 7ffff080 00000000
      <7> [509.921299] heartbeat [0020] 11000001 00002220 ffffffff 12400001 00002220 7ffff000 00000000 11000001
      <7> [509.921308] heartbeat [0040] 00002228 6e900000 7a000002 00100002 00000000 00000000 7a000002 01154c1e
      <7> [509.921317] heartbeat [0060] 7ffff080 00000000 12400001 00002228 7ffff000 00000000 7a000002 00100002
      <7> [509.921326] heartbeat [0080] 00000000 00000000 7a000002 01154c1e 7ffff080 00000000 7a000002 001010a1
      <7> [509.921335] heartbeat [00a0] 7ffff080 00000000 04000000 11000005 00022050 00010001 00012050 00010001
      <7> [509.921345] heartbeat [00c0] 0001a050 00010001 00000000 0c000000 459a110c 00000000 11000005 00022050
      <7> [509.921354] heartbeat [00e0] 00010000 00012050 00010000 0001a050 00010000 12400001 0001a050 7ffff000
      <7> [509.921363] heartbeat [0100] 00000000 04000001 18802100 00000000 7a000002 011050a1 7fffe100 000056e1
      <7> [509.921370] heartbeat [0120] 01000000 00000000
      <7> [509.921538] heartbeat 	MMIO base:  0x00002000
      <7> [509.921682] heartbeat 	CCID: 0x3fa0110d
      <7> [509.922342] heartbeat 	RING_START: 0x00001000
      <7> [509.922353] heartbeat 	RING_HEAD:  0x00001628
      <7> [509.922366] heartbeat 	RING_TAIL:  0x000023d8
      <7> [509.922381] heartbeat 	RING_CTL:   0x00003001
      <7> [509.922396] heartbeat 	RING_MODE:  0x00004000
      <7> [509.922408] heartbeat 	RING_IMR: ffffffde
      <7> [509.922421] heartbeat 	ACTHD:  0x00000000_30e01628
      <7> [509.922434] heartbeat 	BBADDR: 0x00000000_00004004
      <7> [509.922446] heartbeat 	DMA_FADDR: 0x00000000_00002800
      <7> [509.922458] heartbeat 	IPEIR: 0x00000000
      <7> [509.922470] heartbeat 	IPEHR: 0x780c0000
      <7> [509.922642] heartbeat 	PP_DIR_BASE: 0x6e700000
      <7> [509.922652] heartbeat 	PP_DIR_BASE_READ: 0x00000000
      <7> [509.922662] heartbeat 	PP_DIR_DCLV: 0xffffffff
      <7> [509.922678] heartbeat 		E  a7eb:56e1*  @ 5849ms:
      <7> [509.922689] heartbeat 		E  a7eb:56e2-  @ 5849ms:
      <7> [509.922698] heartbeat 		E  a7eb:56e3  @ 5848ms:
      <7> [509.922707] heartbeat 		E  a7eb:56e4  @ 5848ms:
      <7> [509.922715] heartbeat 		E  a7eb:56e5  @ 5847ms:
      <7> [509.922724] heartbeat 		E  a7eb:56e6  @ 5846ms:
      <7> [509.922735] heartbeat 		E  a7eb:56e7  @ 5846ms:
      <7> [509.922744] heartbeat 		...skipping 4 executing requests...
      <7> [509.922754] heartbeat 		E  a7eb:56ec  @ 3010ms:
      <7> [509.922796] heartbeat HWSP:
      <7> [509.922807] heartbeat [0000] 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922817] heartbeat [0020] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922826] heartbeat *
      <7> [509.922836] heartbeat [0100] 000056e0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922845] heartbeat [0120] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      <7> [509.922851] heartbeat *
      <7> [509.922870] heartbeat Idle? no
      <7> [509.922878] heartbeat Signals:
      <7> [509.923000] heartbeat 	[a7eb:56e2] @ 5850ms
      
      Here, we have a failed context restore after the PD switch, but note
      that the PP_DIR_BASE register does not match the LRI in the ring.
      
      Bump it to 8^W 4 loops, and with that Baytrail starts passing the sanity
      checks.
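
      A sketch of the repeated load, with the ring-emission context elided
      (register macros follow the i915 of this era; base is the engine's
      mmio base and pd_offset stands in for the page directory's GGTT
      address):

        /* Repeat the PP_DIR load: a single LRI is occasionally lost
         * across the context restore on these machines. */
        for (i = 0; i < 4; i++) {
                *cs++ = MI_LOAD_REGISTER_IMM(2);
                *cs++ = i915_mmio_reg_offset(RING_PP_DIR_DCLV(base));
                *cs++ = PP_DIR_DCLV_2G;
                *cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(base));
                *cs++ = pd_offset;
        }
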
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191203211631.3167430-1-chris@chris-wilson.co.uk
    • drm/i915/gem: Avoid parking the vma as we unbind · cb6c3d45
      Chris Wilson authored
      In order to avoid keeping a reference on the i915_vma (which is long
      overdue!) we have to coordinate all the possible lifetimes and only use
      the vma while we know it is alive. In this episode, we are reminded that
      while idle, the closed vma are destroyed. So if the GT idles while we are
      working with the vma, the vma itself becomes invalid.
      
      First class i915_vma here we come, but in the meantime keep piling on
      the straw.
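
      A sketch of the kind of guard this implies, assuming the intel_gt_pm
      helpers of this era (the real patch is more involved, and
      unbind_all_vma is a hypothetical stand-in for the vma walk): hold a
      GT wakeref so the GT cannot park, since parking is what frees the
      closed vma.

        intel_gt_pm_get(gt);            /* GT cannot idle under us */
        ret = unbind_all_vma(obj);      /* work with the vma while alive */
        intel_gt_pm_put(gt);            /* closed vma may now be reaped */
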
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191203155032.3137263-1-chris@chris-wilson.co.uk
    • drm/i915/display/mst: Move DPMS_OFF call to post_disable · 78eaaba3
      José Roberto de Souza authored
      Move the call purely to simplify the handling; there is no change in
      behavior.
      
      Cc: Lucas De Marchi <lucas.demarchi@intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191202222513.337777-3-jose.souza@intel.com