06 Jul, 2018 (5 commits)
  05 Jul, 2018 (8 commits)
  04 Jul, 2018 (4 commits)
  03 Jul, 2018 (2 commits)
  02 Jul, 2018 (6 commits)
  29 Jun, 2018 (5 commits)
  28 Jun, 2018 (10 commits)
      drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd) · 9512f985
      Chris Wilson authored
      Back in commit 27af5eea ("drm/i915: Move execlists irq handler to a
      bottom half"), we came to the conclusion that running our CSB processing
      and ELSP submission from inside the irq handler was a bad idea. A really
      bad idea as we could impose nearly 1s latency on other users of the
      system, on average! Deferring our work to a tasklet allowed us to do the
      processing with irqs enabled, reducing the impact to an average of about
      50us.
      
      We have since eradicated the use of forcewaked mmio from inside the CSB
      processing and ELSP submission, bringing the impact down to around 5us
      (on Kabylake); an order of magnitude better than our measurements 2
      years ago on Broadwell and only about 2x worse on average than the
      gem_syslatency on an unladen system.
      
      In this iteration of the tasklet-vs-direct submission debate, we seek a
      compromise whereby we submit new requests immediately to the HW but
      defer processing the CS interrupt onto a tasklet. We gain the advantage
      of low latency and ksoftirqd avoidance when waking up the HW, while
      avoiding the system-wide starvation caused by our CS irq storms.
      
      Comparing the impact on the maximum latency observed (that is the time
      stolen from an RT process) over a 120s interval, repeated several times
      (using gem_syslatency, similar to RT's cyclictest) while the system is
      fully laden with i915 nops, we see that direct submission can actually
      improve the worst case.
      
      Maximum latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 2)
        x Always using tasklets (a couple of >1000us outliers removed)
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |          +                                                             |
      |          +                                                             |
      |          +                                                             |
      |          +       +                                                     |
      |          + +     +                                                     |
      |       +  + +     +  x     x     x                                      |
      |      +++ + +     +  x  x  x  x  x  x                                   |
      |      +++ + ++  + +  *x x  x  x  x  x                                   |
      |      +++ + ++  + *  *x x  *  x  x  x                                   |
      |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
      |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
      |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
      |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
      |             |__________MA___________|                                  |
      |      |______M__A________|                                              |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 118            91           186           124     125.28814     16.279137
      + 120            92           187           109     112.00833     13.458617
      Difference at 95.0% confidence
      	-13.2798 +/- 3.79219
      	-10.5994% +/- 3.02677%
      	(Student's t, pooled s = 14.9237)
      
      However the mean latency is adversely affected:
      
      Mean latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 1)
        x Always using tasklets
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                      + +++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ +++        |
      |           xxxxxxx                                   + ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |          xxxxxxxxxx                                +++++++++++++++     |
      |         xxxxxxxxxxx    x                           +++++++++++++++     |
      |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
      |           |__A__|                                                      |
      |                                                      |____A___|        |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 120         3.506         3.727         3.631     3.6321417    0.02773109
      + 120         3.834         4.149         4.039     4.0375167   0.041221676
      Difference at 95.0% confidence
      	0.405375 +/- 0.00888913
      	11.1608% +/- 0.244735%
      	(Student's t, pooled s = 0.03513)
      
      However, since the mean latency corresponds to the amount of irqsoff
      processing we have to do for a CS interrupt, we only need to speed that
      up to benefit not just system latency but our own throughput.
      
      v2: Remember to defer submissions when under reset.
      v4: Only use direct submission for new requests
      v5: Be aware that with mixing direct tasklet evaluation and deferred
      tasklets, we may end up idling before running the deferred tasklet.
      v6: Remove the redundant likely() from tasklet_is_enabled(), restrict
      the annotation to reset_in_progress().
      v7: Take the full timeline.lock when enabling perf_pmu stats as the
      tasklet is no longer a valid guard. A consequence is that the stats are
      now only valid for engines also using the timeline.lock to process
      state.
      
      Testcase: igt/gem_exec_latency/*rthog*
      References: 27af5eea ("drm/i915: Move execlists irq handler to a bottom half")
      Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
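The compromise this message describes (submit new requests directly from the caller's context, keep CSB event processing in a tasklet) can be sketched in plain C. All names below are hypothetical stand-ins, not the actual i915 functions:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: new requests are written to the hardware
 * immediately from the submitter's context, while CS interrupt events
 * are still processed from a deferred tasklet. */
struct engine {
	bool tasklet_scheduled;	/* stands in for tasklet_schedule() */
	int  submitted;		/* requests written to ELSP */
	int  processed_events;	/* CSB events consumed */
};

/* Direct path: low latency, no ksoftirqd round-trip. */
static void submit_request(struct engine *e)
{
	e->submitted++;
}

/* Interrupt path: only kick the tasklet; the CSB walk happens later,
 * with irqs enabled, so an irq storm cannot starve the system. */
static void cs_irq_handler(struct engine *e)
{
	e->tasklet_scheduled = true;
}

static void process_csb_tasklet(struct engine *e)
{
	if (!e->tasklet_scheduled)
		return;
	e->tasklet_scheduled = false;
	e->processed_events++;
}
```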
      drm/i915/execlists: Trust the CSB · fd8526e5
      Chris Wilson authored
      Now that we use the CSB stored in the CPU friendly HWSP, we do not need
      to track interrupts for when the mmio CSB registers are valid and can
      just check where we read up to last from the cached HWSP. This means we
      can forgo the atomic bit tracking from interrupt, and in the next patch
      it means we can check the CSB at any time.
      
      v2: Change the splitting inside reset_prepare; we only want to lose
      testing the interrupt in this patch, as the next patch requires the
      change in locking.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
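Trusting the cached HWSP copy of the CSB amounts to consuming a small ring up to wherever the writer has reached. A minimal sketch, with the 6-entry size and head/tail convention assumed for illustration:

```c
#include <assert.h>

/* Sketch of consuming CSB events from the cached HWSP copy: walk from
 * the last entry we read (head) up to the last entry the HW wrote
 * (tail), wrapping modulo the buffer size. */
#define CSB_ENTRIES 6

struct csb {
	unsigned int head;	/* last entry we consumed */
	unsigned int tail;	/* last entry the HW wrote */
	unsigned int buf[CSB_ENTRIES];
};

/* Returns the number of events consumed; safe to call at any time,
 * which is the property the commit message is after. */
static int process_csb(struct csb *c)
{
	int n = 0;

	while (c->head != c->tail) {
		c->head = (c->head + 1) % CSB_ENTRIES;
		/* decode c->buf[c->head] here */
		n++;
	}
	return n;
}
```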
      drm/i915/execlists: Stop storing the CSB read pointer in the mmio register · 3800cd19
      Chris Wilson authored
      As we now never read back our current head position from the CSB
      pointers register, and the HW itself doesn't use it to prevent
      overwriting unread CSB entries, we do not need to keep updating the
      register. As it turns out this register is not listed as being shadowed,
      and so requires forcewake -- but we haven't been taking forcewake around
      it, so the writes have probably been regularly dropped. Fortuitously, we
      only read the value after a reset where it did not matter, and zero was
      the right answer (well, close enough).
      
      Mika pointed out that this was how we used to do it (accidentally!)
      before he fixed it in commit cc53699b ("drm/i915: Use masked write
      for Context Status Buffer Pointer").
      
      References: cc53699b ("drm/i915: Use masked write for Context Status Buffer Pointer")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
      drm/i915/execlists: Reset CSB write pointer after reset · f4b58f04
      Chris Wilson authored
      On HW reset, the HW clears the write pointer (to 0). But since it also
      writes its first CSB entry to slot 0, we need to reset the write pointer
      back to the element before (so the first entry we read is 0).
      
      This is required for the next patch, where we trust the CSB completely!
      
      v2: Use _MASKED_FIELD
      v3: Store the reset value, so that we differentiate between mmio/hwsp
      transparently and without pretense.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
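The _MASKED_FIELD mentioned in v2 refers to the masked-register convention, where the upper 16 bits of a write select which of the low 16 bits take effect, so a field can be updated without a read-modify-write. A simplified model (layout follows the common i915 idiom; helper names here are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Build a masked write: the mask goes in the top half, the value in
 * the bottom half. */
#define MASKED_FIELD(mask, value) ((((uint32_t)(mask)) << 16) | (value))

/* Model of what the HW does with a write to a masked register: only
 * the bits selected by the mask are updated. */
static uint32_t apply_masked_write(uint32_t reg, uint32_t wr)
{
	uint32_t mask = wr >> 16;

	return (reg & ~mask) | (wr & mask);
}
```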
      drm/i915/execlists: Unify CSB access pointers · bc4237ec
      Chris Wilson authored
      Following the removal of the last workarounds, the only CSB mmio access
      is for the old vGPU interface. The mmio registers presented by vGPU do
      not require forcewake and can be treated as ordinary volatile memory,
      i.e. they behave just like HWSP access, only at a different location.
      We can reduce the CSB access to a set of read/write/buffer pointers and
      treat the various paths identically and not worry about forcewake.
      (Forcewake is a nightmare for worst-case latency, and we want to
      process all of this with irqs off -- no latency allowed!)
      
      v2: Comments, comments, comments. Well, 2 bonus comments.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
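Reducing CSB access to a common set of read/write/buffer pointers, as described, might look roughly like the following (field names are hypothetical, not the actual execlists structure):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: whether the CSB lives in the HWSP or in the
 * vGPU-provided mmio range, both behave as plain memory, so one set of
 * pointers can serve every path with no forcewake. */
struct csb_access {
	const uint32_t *csb_status;	/* base of the CSB entries */
	const uint8_t  *csb_write;	/* where the HW publishes its tail */
	uint8_t		csb_head;	/* our cached read index */
};

/* Any path can poll for new events the same way. */
static int csb_pending(const struct csb_access *a)
{
	return a->csb_head != *a->csb_write;
}
```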
      drm/i915/execlists: Process one CSB update at a time · 8ea397fa
      Chris Wilson authored
      In the next patch, we will process the CSB events directly from the
      submission path, rather than only after a CS interrupt. Hence, we will
      no longer have the need for a loop until the has-interrupt bit is clear,
      and in the meantime can remove that small optimisation.
      
      v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
      after each irq, when assuming that the tasklet is called for each irq.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
      drm/i915/execlists: Pull CSB reset under the timeline.lock · d8857d54
      Chris Wilson authored
      In the following patch, we will process the CSB events under the
      timeline.lock and not serialised by the tasklet. This also means that we
      will need to protect access to common variables such as
      execlists->csb_head with the timeline.lock during reset.
      
      v2: Move sync_irq to avoid deadlocks when taking timeline.lock from
      our interrupt handler.
      v3: Kill off the synchronize_hardirq as it raises more questions than
      it answers; now that we use the timeline.lock entirely for CSB
      serialisation between the irq and elsewhere, we don't need to be so
      heavy handed with flushing.
      v4: Treat request cancellation (wedging after failed reset) similarly
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
      drm/i915/execlists: Pull submit after dequeue under timeline lock · 0b02befa
      Chris Wilson authored
      In the next patch, we will begin processing the CSB from inside the
      submission path (underneath an irqsoff section, and even from inside
      interrupt handlers). This means that updating the execlists->port[] will
      no longer be serialised by the tasklet but needs to be locked by the
      engine->timeline.lock instead. Pull dequeue and submit under the same
      lock for protection. (An alternate future plan is to keep the in/out
      arrays separate for concurrent processing and reduced lock coverage.)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
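The locking change can be sketched with a generic mutex standing in for engine->timeline.lock (illustrative only, not the kernel code):

```c
#include <assert.h>
#include <pthread.h>

/* Sketch: dequeue and submit happen under one lock instead of relying
 * on tasklet serialisation, so the port slot stays consistent even
 * when callers race. Names are illustrative. */
struct engine_sched {
	pthread_mutex_t lock;
	int queued;	/* requests waiting in the priority queue */
	int port;	/* occupied "in flight" slot, like port[] */
};

static void dequeue_and_submit(struct engine_sched *s)
{
	pthread_mutex_lock(&s->lock);
	if (s->queued > 0 && s->port == 0) {
		s->queued--;
		s->port = 1;	/* write ELSP while still holding the lock */
	}
	pthread_mutex_unlock(&s->lock);
}
```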
      drm/i915: Drop posting reads to flush master interrupts · 74093f3e
      Chris Wilson authored
      We do not need to do a posting read of our uncached mmio write to
      re-enable the master interrupt lines after handling an interrupt, so
      don't. This saves us a slow UC read before we can process the interrupt,
      most noticeable in execlists, where any stall imposes extra latency on
      GPU command execution.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Ville Syrjala <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-1-chris@chris-wilson.co.uk
      drm/i915/uc: Fetch GuC/HuC firmwares from guc/huc specific init · f7dc0157
      Michal Wajdeczko authored
      We're fetching the GuC/HuC firmwares directly from the uc level during
      the init_early stage, but this breaks guc/huc struct isolation and also
      the strict SW-only initialization rule for init_early. Move fw fetching
      to the init phase and do it separately per guc/huc struct.
      
      v2: don't forget to move wopcm_init - Michele
      v3: fetch in init_misc phase - Michal
      Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Reviewed-by: Michel Thierry <michel.thierry@intel.com> #2
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-2-michal.wajdeczko@intel.com