  1. 13 Feb, 2020 2 commits
  2. 04 Feb, 2020 1 commit
  3. 30 Jan, 2020 1 commit
  4. 22 Jan, 2020 1 commit
  5. 28 Dec, 2019 1 commit
  6. 26 Dec, 2019 1 commit
  7. 22 Dec, 2019 1 commit
  8. 09 Dec, 2019 1 commit
  9. 04 Dec, 2019 1 commit
  10. 02 Dec, 2019 1 commit
  11. 20 Nov, 2019 1 commit
  12. 14 Nov, 2019 2 commits
  13. 12 Nov, 2019 1 commit
  14. 08 Nov, 2019 1 commit
  15. 05 Nov, 2019 3 commits
  16. 01 Nov, 2019 2 commits
  17. 31 Oct, 2019 1 commit
    • drm/i915/lmem: add the fake lmem region · 16292243
      Matthew Auld authored
      Intended for upstream testing so that we can still exercise the LMEM
      plumbing and !i915_ggtt_has_aperture paths. Smoke tested on a Skull
      Canyon device. This works by allocating an intel_memory_region for a
      reserved portion of system memory, which we treat like LMEM. For the
      LMEMBAR we steal the aperture and map it 1:1 to the stolen region.
      
      To enable, simply set the i915 modparam fake_lmem_start= on the kernel
      cmdline to the start of the reserved region (see memmap=). The size of
      the region we can use is determined by the size of the mappable
      aperture, so the size of the reserved region should be >= mappable_end.
      For now we only enable this for the selftests. Depends on
      CONFIG_DRM_I915_UNSTABLE being enabled.
      
      eg. memmap=2G$16G i915.fake_lmem_start=0x400000000
      
      v2: make fake_lmem_start an i915 modparam
      Signed-off-by: Matthew Auld <matthew.auld@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
      Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191030173320.8850-1-matthew.auld@intel.com
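      The example above ties together the two knobs the commit describes: memmap= carves the reserved region out of system memory, and i915.fake_lmem_start= points the driver at it. A sketch of the resulting kernel command line (the 2 GiB-at-16 GiB layout is illustrative):

      ```shell
      # memmap=<size>$<start> reserves RAM the kernel will not otherwise use;
      # here: 2 GiB starting at 16 GiB (0x400000000), matching the fake_lmem_start.
      # Requires CONFIG_DRM_I915_UNSTABLE=y; the reserved size should be
      # >= the mappable aperture size (mappable_end), per the commit message.
      # Note: if this line passes through a GRUB config file, the '$' needs escaping.
      memmap=2G$16G i915.fake_lmem_start=0x400000000
      ```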
  18. 26 Oct, 2019 2 commits
  19. 23 Oct, 2019 1 commit
  20. 22 Oct, 2019 2 commits
  21. 18 Oct, 2019 2 commits
  22. 16 Oct, 2019 1 commit
  23. 07 Oct, 2019 1 commit
  24. 06 Oct, 2019 2 commits
  25. 04 Oct, 2019 2 commits
    • drm/i915: Move context management under GEM · a4e7ccda
      Chris Wilson authored
      Keep track of the GEM contexts underneath i915->gem.contexts and assign
      them their own lock for the purposes of list management.
      
      v2: Focus on lock tracking; ctx->vm is protected by ctx->mutex
      v3: Correct split with removal of logical HW ID
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-15-chris@chris-wilson.co.uk
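      The pattern this commit applies — giving the context list its own dedicated lock instead of serialising it under a broad driver lock — can be sketched in user space. This is an illustrative analogue only, not the i915 code; the names (gem_context, ctx_list, ctx_lock) are made up for the example:

      ```c
      /* Sketch: a context list guarded by its own mutex, so list management
       * no longer needs any global ("struct_mutex"-style) lock. */
      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct gem_context {
          int id;
          struct gem_context *next;
      };

      /* Analogue of i915->gem.contexts with its dedicated lock. */
      static struct gem_context *ctx_list;
      static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

      static void ctx_add(int id)
      {
          struct gem_context *ctx = malloc(sizeof(*ctx));
          ctx->id = id;
          pthread_mutex_lock(&ctx_lock);   /* only the list walk/insert is serialised */
          ctx->next = ctx_list;
          ctx_list = ctx;
          pthread_mutex_unlock(&ctx_lock);
      }

      static int ctx_count(void)
      {
          int n = 0;
          pthread_mutex_lock(&ctx_lock);
          for (struct gem_context *c = ctx_list; c; c = c->next)
              n++;
          pthread_mutex_unlock(&ctx_lock);
          return n;
      }

      int main(void)
      {
          ctx_add(1);
          ctx_add(2);
          printf("%d contexts\n", ctx_count());
          return 0;
      }
      ```

      Per-object state (like ctx->vm in the commit) is protected by a per-context mutex instead, so contention on the list lock stays limited to add/remove/iterate.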
    • Chris Wilson's avatar
      drm/i915: Pull i915_vma_pin under the vm->mutex · 2850748e
      Chris Wilson authored
      Replace the struct_mutex requirement for pinning the i915_vma with the
      local vm->mutex instead. Note that the vm->mutex is tainted by the
      shrinker (we require unbinding from inside fs-reclaim) and so we cannot
      allocate while holding that mutex. Instead we have to preallocate
      workers to allocate and apply the PTE updates after we have reserved
      their slot in the drm_mm (using fences to order the PTE writes with the
      GPU work and with later unbind).
      
      In adding the asynchronous vma binding, one subtle requirement is to
      avoid coupling the binding fence into the backing object->resv. That is,
      the asynchronous binding applies only to the vma timeline itself and not
      to the pages, as that is a more global timeline (the binding of one vma
      does not need to be ordered with another vma, nor does the implicit GEM
      fencing depend on a vma, only on writes to the backing store). Keeping
      the vma binding distinct from the backing store timelines is verified by
      a number of async gem_exec_fence and gem_exec_schedule tests. The way we
      do this is quite simple: we keep the fence for the vma binding separate,
      only wait on it as required, and never add it to the obj->resv itself.
      
      Another consequence of reducing the locking around the vma is that
      destruction of the vma is no longer globally serialised by struct_mutex.
      A natural solution would be to add a kref to i915_vma, but that requires
      decoupling the reference cycles, possibly by introducing a new
      i915_mm_pages object that is owned by both obj->mm and vma->pages.
      However, we have not taken that route due to the overshadowing lmem/ttm
      discussions, and instead play a series of complicated games with
      trylocks to (hopefully) ensure that only one destruction path is called!
      
      v2: Add some commentary, and some helpers to reduce patch churn.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
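      The "games with trylocks" idea — several release paths may race, but only the one that wins the trylock runs the destructor — can be sketched in user space. This is an illustrative analogue under made-up names (try_destroy, destroy_lock), not the i915 implementation:

      ```c
      /* Sketch: two racing release paths; whichever takes the trylock first
       * performs the destruction, the loser backs off (or finds it done). */
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t destroy_lock = PTHREAD_MUTEX_INITIALIZER;
      static int destroyed;   /* set once by the single winning path */

      static void try_destroy(const char *path)
      {
          if (pthread_mutex_trylock(&destroy_lock) != 0)
              return;         /* someone else is already destroying: back off */
          if (!destroyed) {
              destroyed = 1;  /* exactly one path reaches this */
              printf("destroyed via %s\n", path);
          }
          pthread_mutex_unlock(&destroy_lock);
      }

      int main(void)
      {
          try_destroy("close");   /* wins: performs the destruction */
          try_destroy("retire");  /* finds the object already destroyed */
          printf("destroyed=%d\n", destroyed);
          return 0;
      }
      ```

      A kref on i915_vma would make this unnecessary, which is exactly the alternative the commit message weighs and defers.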
  26. 02 Oct, 2019 1 commit
  27. 27 Sep, 2019 1 commit
  28. 23 Sep, 2019 3 commits