Commit 8eb008c8 authored by Dave Airlie

Merge tag 'drm-intel-next-2018-04-13' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

First drm/i915 feature batch heading for v4.18:

- drm-next backmerge to fix build (Rodrigo)
- GPU documentation improvements (Kevin)
- GuC and HuC refactoring, host/GuC communication, logging, fixes, and more
  (mostly Michal and Michał, also Jackie, Michel and Piotr)
- PSR and PSR2 enabling and fixes (DK, José, Rodrigo and Chris)
- Selftest updates (Chris, Daniele)
- DPLL management refactoring (Lucas)
- DP MST fixes (Lyude and DK)
- Watermark refactoring and changes to support NV12 (Mahesh)
- NV12 prep work (Chandra)
- Icelake Combo PHY enablers (Manasi)
- Perf OA refactoring and ICL enabling (Lionel)
- ICL enabling (Oscar, Paulo, Nabendu, Mika, Kelvin, Michel)
- Workarounds refactoring (Oscar)
- HDCP fixes and improvements (Ramalingam, Radhakrishna)
- Power management fixes (Imre)
- Various display fixes (Maarten, Ville, Vidya, Jani, Gaurav)
- debugfs for FIFO underrun clearing (Maarten)
- Execlist improvements (Chris)
- Reset improvements (Chris)
- Plenty of things here and there I overlooked and/or didn't understand... (Everyone)
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87lgd2cze8.fsf@intel.com
parents 0ab39026 fadec6ee
@@ -58,6 +58,12 @@ Intel GVT-g Host Support(vGPU device model)

.. kernel-doc:: drivers/gpu/drm/i915/intel_gvt.c
   :internal:
Workarounds
-----------
.. kernel-doc:: drivers/gpu/drm/i915/intel_workarounds.c
:doc: Hardware workarounds
Display Hardware Handling
=========================
@@ -249,6 +255,103 @@ Memory Management and Command Submission

This section covers all things related to the GEM implementation in the
i915 driver.
Intel GPU Basics
----------------
An Intel GPU has multiple engines. There are several engine types; how user
space selects one is sketched just after this list.

- RCS is the rendering engine, used for 3D rendering and compute; it is
  named `I915_EXEC_RENDER` in user space.
- BCS is the blitting (copy) engine; it is named `I915_EXEC_BLT` in user
  space.
- VCS is the video encode and decode engine; it is named `I915_EXEC_BSD`
  in user space.
- VECS is the video enhancement engine; it is named `I915_EXEC_VEBOX` in
  user space.
- The enumeration `I915_EXEC_DEFAULT` does not refer to a specific engine;
  instead it is used by user space to specify a default rendering engine
  (for 3D) that may or may not be the same as RCS.
The Intel GPU family is a family of integrated GPUs using Unified
Memory Access. To have the GPU "do work", user space feeds the
GPU batchbuffers via one of the ioctls `DRM_IOCTL_I915_GEM_EXECBUFFER2`
or `DRM_IOCTL_I915_GEM_EXECBUFFER2_WR`. Most such batchbuffers will
instruct the GPU to perform work (for example rendering) and that work
needs memory from which to read and memory to which to write. All memory
is encapsulated within GEM buffer objects (usually created with the ioctl
`DRM_IOCTL_I915_GEM_CREATE`). An ioctl providing a batchbuffer for the GPU
to execute will also list all GEM buffer objects that the batchbuffer reads
and/or writes. For implementation details of memory management see
`GEM BO Management Implementation Details`_.
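A minimal sketch of that flow, assuming libdrm's `drmIoctl()` and an open
DRM file descriptor (handles and sizes are illustrative; error handling is
elided)::

    #include <stdint.h>
    #include <string.h>
    #include <drm/i915_drm.h>
    #include <xf86drm.h>

    static void submit(int fd, uint32_t batch_handle, uint32_t batch_len)
    {
            struct drm_i915_gem_create create;
            struct drm_i915_gem_exec_object2 objs[2];
            struct drm_i915_gem_execbuffer2 eb;

            /* Create a 4 KiB GEM buffer object for the batch to use. */
            memset(&create, 0, sizeof(create));
            create.size = 4096;
            drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);

            /* List every GEM BO the batch reads or writes; by default the
             * batchbuffer itself is the last element of the array. */
            memset(objs, 0, sizeof(objs));
            objs[0].handle = create.handle;
            objs[1].handle = batch_handle;

            memset(&eb, 0, sizeof(eb));
            eb.buffers_ptr = (uintptr_t)objs;
            eb.buffer_count = 2;
            eb.batch_len = batch_len;
            eb.flags = I915_EXEC_RENDER; /* target the RCS engine */

            drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &eb);
    }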
The i915 driver allows user space to create a context via the ioctl
`DRM_IOCTL_I915_GEM_CONTEXT_CREATE`, which is identified by a 32-bit
integer. Such a context should be viewed by user space as loosely
analogous to the idea of a CPU process of an operating system. The i915
driver guarantees that commands issued to a fixed context are executed
so that writes of a previously issued command are seen by reads of
following commands. Actions issued between different contexts (even if
from the same file descriptor) are NOT given that guarantee, and the only
way to synchronize across contexts (even from the same file descriptor)
is through the use of fences. At least as far back as Gen4, a context
also carries with it a GPU HW context; the HW context is essentially
(most of, at least) the state of a GPU. In addition to the ordering
guarantees, the kernel will restore GPU state via the HW context when
commands are issued to a context; this saves user space from having to
restore (most of, at least) the GPU state at the start of each
batchbuffer. The non-deprecated ioctls to submit batchbuffer work can
pass that ID (in the lower bits of drm_i915_gem_execbuffer2::rsvd1) to
identify what context to use with the command.
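A minimal sketch, again assuming libdrm and an open DRM file descriptor
(error handling elided): create a context, then tag submissions with its
ID via the `i915_execbuffer2_set_context_id()` helper macro, which stores
the ID in the lower bits of `rsvd1`::

    #include <stdint.h>
    #include <string.h>
    #include <drm/i915_drm.h>
    #include <xf86drm.h>

    static uint32_t create_context(int fd)
    {
            struct drm_i915_gem_context_create create;

            memset(&create, 0, sizeof(create));
            drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE, &create);
            return create.ctx_id; /* 32-bit handle naming the context */
    }

    static void submit_in_context(int fd,
                                  struct drm_i915_gem_execbuffer2 *eb,
                                  uint32_t ctx_id)
    {
            i915_execbuffer2_set_context_id(*eb, ctx_id);
            drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, eb);
    }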
The GPU has its own memory management and address space. The kernel
driver maintains the memory translation table for the GPU. For older
GPUs (i.e. those before Gen8), there is a single global such translation
table, a global Graphics Translation Table (GTT). For newer generation
GPUs each context has its own translation table, called a Per-Process
Graphics Translation Table (PPGTT). Importantly, although the PPGTT is
named per-process it is actually per context. When user space submits a
batchbuffer, the kernel walks the list of GEM buffer objects used by the
batchbuffer and guarantees that not only is the memory of each such GEM
buffer object resident but that it is also present in the (PP)GTT. If the
GEM buffer object is not yet placed in the (PP)GTT, then it is given an
address. Two consequences of this are: the kernel needs to edit the
submitted batchbuffer to write the correct value of the GPU address when
a GEM BO is assigned one, and the kernel might evict a different GEM BO
from the (PP)GTT to make address room for another GEM BO. Consequently,
the ioctls submitting a batchbuffer for execution also include a list of
all locations within buffers that refer to GPU addresses, so that the
kernel can edit the buffer correctly. This process is dubbed relocation.
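A minimal sketch of describing one such location, assuming the uAPI
headers (handles and offsets are illustrative): each relocation entry
tells the kernel where, inside a submitted buffer, the GPU address of
another BO is stored, so the kernel can rewrite it when that BO is
(re)placed in the (PP)GTT::

    #include <stdint.h>
    #include <string.h>
    #include <drm/i915_drm.h>

    static void add_reloc(struct drm_i915_gem_exec_object2 *obj,
                          struct drm_i915_gem_relocation_entry *reloc,
                          uint32_t target_bo, uint64_t offset_in_obj)
    {
            memset(reloc, 0, sizeof(*reloc));
            reloc->target_handle = target_bo;  /* BO whose address is used */
            reloc->offset = offset_in_obj;     /* where that address lives */
            reloc->read_domains = I915_GEM_DOMAIN_RENDER;

            obj->relocs_ptr = (uintptr_t)reloc;
            obj->relocation_count = 1;
    }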
GEM BO Management Implementation Details
----------------------------------------
.. kernel-doc:: drivers/gpu/drm/i915/i915_vma.h
:doc: Virtual Memory Address
Buffer Object Eviction
----------------------
This section documents the interface functions for evicting buffer
objects to make space available in the virtual gpu address spaces. Note
that this is mostly orthogonal to shrinking buffer object caches, which
has the goal of making main memory (shared with the gpu through the
unified memory architecture) available.
.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_evict.c
:internal:
Buffer Object Memory Shrinking
------------------------------
This section documents the interface functions for shrinking memory
usage of buffer object caches. Shrinking is used to make main memory
available. Note that this is mostly orthogonal to evicting buffer
objects, which has the goal of making space in gpu virtual address
spaces.
.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_shrinker.c
:internal:
Batchbuffer Parsing
-------------------

@@ -267,6 +370,12 @@ Batchbuffer Pools

.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_batch_pool.c
   :internal:
User Batchbuffer Execution
--------------------------
.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_execbuffer.c
:doc: User command execution
Logical Rings, Logical Ring Contexts and Execlists
--------------------------------------------------

@@ -312,28 +421,14 @@ Object Tiling IOCTLs

.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_tiling.c
   :doc: buffer object tiling
Buffer Object Eviction
----------------------

This section documents the interface functions for evicting buffer
objects to make space available in the virtual gpu address spaces. Note
that this is mostly orthogonal to shrinking buffer objects caches, which
has the goal to make main memory (shared with the gpu through the
unified memory architecture) available.

.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_evict.c
   :internal:

Buffer Object Memory Shrinking
------------------------------

This section documents the interface function for shrinking memory usage
of buffer object caches. Shrinking is used to make main memory
available. Note that this is mostly orthogonal to evicting buffer
objects, which has the goal to make space in gpu virtual address spaces.

.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_shrinker.c
   :internal:

WOPCM
=====

WOPCM Layout
------------

.. kernel-doc:: drivers/gpu/drm/i915/intel_wopcm.c
   :doc: WOPCM Layout
GuC
===

@@ -359,6 +454,12 @@ GuC Firmware Layout

.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_fwif.h
   :doc: GuC Firmware Layout
GuC Address Space
-----------------
.. kernel-doc:: drivers/gpu/drm/i915/intel_guc.c
:doc: GuC Address Space
Tracing
=======
......
@@ -25,6 +25,7 @@ config DRM_I915_DEBUG

select X86_MSR # used by igt/pm_rpm
select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
select DRM_DEBUG_MM if DRM=y
select STACKDEPOT if DRM=y # for DRM_DEBUG_MM
select DRM_DEBUG_MM_SELFTEST
select SW_SYNC # signaling validation framework (igt/syncobj*)
select DRM_I915_SW_FENCE_DEBUG_OBJECTS

@@ -89,6 +90,18 @@ config DRM_I915_SW_FENCE_CHECK_DAG

If in doubt, say "N".
config DRM_I915_DEBUG_GUC
bool "Enable additional driver debugging for GuC"
depends on DRM_I915
default n
help
Choose this option to turn on extra driver debugging that may affect
performance but will help resolve GuC related issues.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_I915_SELFTEST
bool "Enable selftests upon driver load"
depends on DRM_I915
......
@@ -12,7 +12,7 @@

# Note the danger in using -Wall -Wextra is that when CI updates gcc we
# will most likely get a sudden build breakage... Hopefully we will fix
# new warnings before CI updates!
subdir-ccflags-y := -Wall -Wextra
subdir-ccflags-y := -Wall -Wextra -Wvla
subdir-ccflags-y += $(call cc-disable-warning, unused-parameter)
subdir-ccflags-y += $(call cc-disable-warning, type-limits)
subdir-ccflags-y += $(call cc-disable-warning, missing-field-initializers)

@@ -43,7 +43,8 @@ i915-y := i915_drv.o \
intel_csr.o \
intel_device_info.o \
intel_pm.o \
intel_runtime_pm.o
intel_runtime_pm.o \
intel_workarounds.o
i915-$(CONFIG_COMPAT) += i915_ioc32.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o

@@ -79,7 +80,8 @@ i915-y += i915_cmd_parser.o \
intel_lrc.o \
intel_mocs.o \
intel_ringbuffer.o \
intel_uncore.o
intel_uncore.o \
intel_wopcm.o

# general-purpose microcontroller (GuC) support
i915-y += intel_uc.o \

@@ -171,7 +173,8 @@ i915-y += i915_perf.o \
i915_oa_glk.o \
i915_oa_cflgt2.o \
i915_oa_cflgt3.o \
i915_oa_cnl.o
i915_oa_cnl.o \
i915_oa_icl.o

ifeq ($(CONFIG_DRM_I915_GVT),y)
i915-y += intel_gvt.o
......
@@ -122,18 +122,7 @@ static int vgpu_mmio_diff_show(struct seq_file *s, void *unused)
seq_printf(s, "Total: %d, Diff: %d\n", param.total, param.diff);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(vgpu_mmio_diff);
static int vgpu_mmio_diff_open(struct inode *inode, struct file *file)
{
return single_open(file, vgpu_mmio_diff_show, inode->i_private);
}
static const struct file_operations vgpu_mmio_diff_fops = {
.open = vgpu_mmio_diff_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
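/*
 * For reference, a sketch of what DEFINE_SHOW_ATTRIBUTE(vgpu_mmio_diff)
 * from <linux/seq_file.h> expands to; it generates essentially the
 * boilerplate removed above, plus .owner = THIS_MODULE:
 *
 * static int vgpu_mmio_diff_open(struct inode *inode, struct file *file)
 * {
 *         return single_open(file, vgpu_mmio_diff_show, inode->i_private);
 * }
 *
 * static const struct file_operations vgpu_mmio_diff_fops = {
 *         .owner   = THIS_MODULE,
 *         .open    = vgpu_mmio_diff_open,
 *         .read    = seq_read,
 *         .llseek  = seq_lseek,
 *         .release = single_release,
 * };
 */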
/**
* intel_gvt_debugfs_add_vgpu - register debugfs entries for a vGPU
......
@@ -377,9 +377,9 @@ static int i915_getparam_ioctl(struct drm_device *dev, void *data,
value = INTEL_INFO(dev_priv)->sseu.min_eu_in_pool;
break;
case I915_PARAM_HUC_STATUS:
intel_runtime_pm_get(dev_priv);
value = I915_READ(HUC_STATUS2) & HUC_FW_VERIFIED;
intel_runtime_pm_put(dev_priv);
value = intel_huc_check_status(&dev_priv->huc);
if (value < 0)
return value;
break;
case I915_PARAM_MMAP_GTT_VERSION:
/* Though we've started our numbering from 1, and so class all
@@ -695,11 +695,9 @@ static int i915_load_modeset_init(struct drm_device *dev)
if (ret)
goto cleanup_irq;

intel_uc_init_fw(dev_priv);

ret = i915_gem_init(dev_priv);
if (ret)
goto cleanup_uc;
goto cleanup_irq;

intel_setup_overlay(dev_priv);

@@ -719,8 +717,6 @@ static int i915_load_modeset_init(struct drm_device *dev)
if (i915_gem_suspend(dev_priv))
DRM_ERROR("failed to idle hardware; continuing to unload!\n");
i915_gem_fini(dev_priv);
cleanup_uc:
intel_uc_fini_fw(dev_priv);
cleanup_irq:
drm_irq_uninstall(dev);
intel_teardown_gmbus(dev_priv);
@@ -922,16 +918,21 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
mutex_init(&dev_priv->wm.wm_mutex);
mutex_init(&dev_priv->pps_mutex);

intel_uc_init_early(dev_priv);
i915_memcpy_init_early(dev_priv);

ret = i915_workqueues_init(dev_priv);
if (ret < 0)
goto err_engines;

ret = i915_gem_init_early(dev_priv);
if (ret < 0)
goto err_workqueues;

/* This must be called before any calls to HAS_PCH_* */
intel_detect_pch(dev_priv);

intel_wopcm_init_early(&dev_priv->wopcm);
intel_uc_init_early(dev_priv);
intel_pm_setup(dev_priv);
intel_init_dpio(dev_priv);
intel_power_domains_init(dev_priv);
@@ -940,18 +941,13 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
intel_init_display_hooks(dev_priv);
intel_init_clock_gating_hooks(dev_priv);
intel_init_audio_hooks(dev_priv);

ret = i915_gem_load_init(dev_priv);
if (ret < 0)
goto err_irq;

intel_display_crc_init(dev_priv);
intel_detect_preproduction_hw(dev_priv);

return 0;

err_irq:
err_workqueues:
intel_irq_fini(dev_priv);
i915_workqueues_cleanup(dev_priv);
err_engines:
i915_engines_cleanup(dev_priv);
@@ -964,8 +960,9 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
 */
static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
{
i915_gem_load_cleanup(dev_priv);
intel_irq_fini(dev_priv);
intel_uc_cleanup_early(dev_priv);
i915_gem_cleanup_early(dev_priv);
i915_workqueues_cleanup(dev_priv);
i915_engines_cleanup(dev_priv);
}
@@ -1035,6 +1032,10 @@ static int i915_driver_init_mmio(struct drm_i915_private *dev_priv)
intel_uncore_init(dev_priv);

intel_device_info_init_mmio(dev_priv);
intel_uncore_prune(dev_priv);

intel_uc_init_mmio(dev_priv);

ret = intel_engines_init_mmio(dev_priv);
@@ -1077,8 +1078,6 @@ static void intel_sanitize_options(struct drm_i915_private *dev_priv)
i915_modparams.enable_ppgtt);
DRM_DEBUG_DRIVER("ppgtt mode: %i\n", i915_modparams.enable_ppgtt);

intel_uc_sanitize_options(dev_priv);

intel_gvt_sanitize_options(dev_priv);
}
@@ -1244,7 +1243,6 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
/* Reveal our presence to userspace */
if (drm_dev_register(dev, 0) == 0) {
i915_debugfs_register(dev_priv);
i915_guc_log_register(dev_priv);
i915_setup_sysfs(dev_priv);

/* Depends on sysfs having been initialized */
@@ -1304,7 +1302,6 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
i915_pmu_unregister(dev_priv);

i915_teardown_sysfs(dev_priv);
i915_guc_log_unregister(dev_priv);
drm_dev_unregister(&dev_priv->drm);

i915_gem_shrinker_unregister(dev_priv);
@@ -1463,7 +1460,6 @@ void i915_driver_unload(struct drm_device *dev)
i915_reset_error_state(dev_priv);

i915_gem_fini(dev_priv);
intel_uc_fini_fw(dev_priv);
intel_fbc_cleanup_cfb(dev_priv);

intel_power_domains_fini(dev_priv);
@@ -1876,7 +1872,8 @@ static int i915_resume_switcheroo(struct drm_device *dev)
/**
* i915_reset - reset chip after a hang
* @i915: #drm_i915_private to reset
* @flags: Instructions
* @stalled_mask: mask of the stalled engines with the guilty requests
* @reason: user error message for why we are resetting
*
* Reset the chip. Useful if a hang is detected. Marks the device as wedged
* on failure.

@@ -1891,12 +1888,16 @@ static int i915_resume_switcheroo(struct drm_device *dev)
* - re-init interrupt state
* - re-init display
*/
void i915_reset(struct drm_i915_private *i915, unsigned int flags)
void i915_reset(struct drm_i915_private *i915,
unsigned int stalled_mask,
const char *reason)
{
struct i915_gpu_error *error = &i915->gpu_error;
int ret;
int i;

GEM_TRACE("flags=%lx\n", error->flags);

might_sleep();
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &error->flags));
@@ -1908,8 +1909,8 @@ void i915_reset(struct drm_i915_private *i915, unsigned int flags)
if (!i915_gem_unset_wedged(i915))
goto wakeup;

if (!(flags & I915_RESET_QUIET))
dev_notice(i915->drm.dev, "Resetting chip after gpu hang\n");
if (reason)
dev_notice(i915->drm.dev, "Resetting chip for %s\n", reason);

error->reset_count++;

disable_irq(i915->drm.irq);
@@ -1952,7 +1953,7 @@ void i915_reset(struct drm_i915_private *i915, unsigned int flags)
goto error;
}

i915_gem_reset(i915);
i915_gem_reset(i915, stalled_mask);
intel_overlay_reset(i915);

/*
@@ -1998,7 +1999,6 @@ void i915_reset(struct drm_i915_private *i915, unsigned int flags)
error:
i915_gem_set_wedged(i915);
i915_retire_requests(i915);
intel_gpu_reset(i915, ALL_ENGINES);
goto finish;
}
@@ -2011,7 +2011,7 @@ static inline int intel_gt_reset_engine(struct drm_i915_private *dev_priv,
/**
* i915_reset_engine - reset GPU engine to recover from a hang
* @engine: engine to reset
* @flags: options
* @msg: reason for GPU reset; or NULL for no dev_notice()
*
* Reset a specific GPU engine. Useful if a hang is detected.
* Returns zero on successful reset or otherwise an error code.
@@ -2021,12 +2021,13 @@ static inline int intel_gt_reset_engine(struct drm_i915_private *dev_priv,
* - reset engine (which will force the engine to idle)
* - re-init/configure engine
*/
int i915_reset_engine(struct intel_engine_cs *engine, unsigned int flags)
int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
{
struct i915_gpu_error *error = &engine->i915->gpu_error;
struct i915_request *active_request;
int ret;

GEM_TRACE("%s flags=%lx\n", engine->name, error->flags);
GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &error->flags));

active_request = i915_gem_reset_prepare_engine(engine);
@@ -2036,10 +2037,9 @@ int i915_reset_engine(struct intel_engine_cs *engine, unsigned int flags)
goto out;
}

if (!(flags & I915_RESET_QUIET)) {
dev_notice(engine->i915->drm.dev,
"Resetting %s after gpu hang\n", engine->name);
}
if (msg)
dev_notice(engine->i915->drm.dev,
"Resetting %s for %s\n", engine->name, msg);

error->reset_engine_count[engine->id]++;

if (!engine->i915->guc.execbuf_client)
@@ -2059,7 +2059,7 @@ int i915_reset_engine(struct intel_engine_cs *engine, unsigned int flags)
* active request and can drop it, adjust head to skip the offending
* request to resume executing remaining requests in the queue.
*/
i915_gem_reset_engine(engine, active_request);
i915_gem_reset_engine(engine, active_request, true);

/*
* The engine and its registers (and workarounds in case of render)
......
@@ -27,6 +27,8 @@

#include <linux/bug.h>

struct drm_i915_private;

#ifdef CONFIG_DRM_I915_DEBUG_GEM
#define GEM_BUG_ON(condition) do { if (unlikely((condition))) { \
pr_err("%s:%d GEM_BUG_ON(%s)\n", \

@@ -53,10 +55,15 @@

#if IS_ENABLED(CONFIG_DRM_I915_TRACE_GEM)
#define GEM_TRACE(...) trace_printk(__VA_ARGS__)
#define GEM_TRACE_DUMP() ftrace_dump(DUMP_ALL)
#else
#define GEM_TRACE(...) do { } while (0)
#define GEM_TRACE_DUMP() do { } while (0)
#endif

#define I915_NUM_ENGINES 8

void i915_gem_park(struct drm_i915_private *i915);
void i915_gem_unpark(struct drm_i915_private *i915);

#endif /* __I915_GEM_H__ */
/* /*
* Copyright © 2014 Intel Corporation
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* Copyright © 2014-2018 Intel Corporation
*/
#include "i915_drv.h"
#include "i915_gem_batch_pool.h" #include "i915_gem_batch_pool.h"
#include "i915_drv.h"
/**
* DOC: batch pool

@@ -41,11 +23,11 @@

/**
* i915_gem_batch_pool_init() - initialize a batch buffer pool
* @engine: the associated request submission engine
* @pool: the batch buffer pool
* @engine: the associated request submission engine
*/
void i915_gem_batch_pool_init(struct intel_engine_cs *engine,
struct i915_gem_batch_pool *pool)
void i915_gem_batch_pool_init(struct i915_gem_batch_pool *pool,
struct intel_engine_cs *engine)
{
int n;
......
/*
* Copyright © 2014 Intel Corporation
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* Copyright © 2014-2018 Intel Corporation
*/

#ifndef I915_GEM_BATCH_POOL_H
#define I915_GEM_BATCH_POOL_H

#include "i915_drv.h"
#include <linux/types.h>
struct intel_engine_cs;

@@ -34,9 +16,8 @@ struct i915_gem_batch_pool {
struct list_head cache_list[4];
};

/* i915_gem_batch_pool.c */
void i915_gem_batch_pool_init(struct intel_engine_cs *engine,
struct i915_gem_batch_pool *pool);
void i915_gem_batch_pool_init(struct i915_gem_batch_pool *pool,
struct intel_engine_cs *engine);
void i915_gem_batch_pool_fini(struct i915_gem_batch_pool *pool);
struct drm_i915_gem_object*
i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool, size_t size);
......
@@ -90,6 +90,7 @@
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_trace.h"
#include "intel_workarounds.h"

#define ALL_L3_SLICES(dev) (1 << NUM_L3_SLICES(dev)) - 1

@@ -318,12 +319,13 @@ __create_hw_context(struct drm_i915_private *dev_priv,
ctx->desc_template =
default_desc_template(dev_priv, dev_priv->mm.aliasing_ppgtt);

/* GuC requires the ring to be placed above GUC_WOPCM_TOP. If GuC is not
/*
* GuC requires the ring to be placed in Non-WOPCM memory. If GuC is not
* present or not in use we still need a small bias as ring wraparound
* at offset 0 sometimes hangs. No idea why.
*/
if (USES_GUC(dev_priv))
ctx->ggtt_offset_bias = GUC_WOPCM_TOP;
ctx->ggtt_offset_bias = dev_priv->guc.ggtt_pin_bias;
else
ctx->ggtt_offset_bias = I915_GTT_PAGE_SIZE;

@@ -458,11 +460,16 @@ static bool needs_preempt_context(struct drm_i915_private *i915)
int i915_gem_contexts_init(struct drm_i915_private *dev_priv)
{
struct i915_gem_context *ctx;
int ret;

/* Reassure ourselves we are only called once */
GEM_BUG_ON(dev_priv->kernel_context);
GEM_BUG_ON(dev_priv->preempt_context);

ret = intel_ctx_workarounds_init(dev_priv);
if (ret)
return ret;

INIT_LIST_HEAD(&dev_priv->contexts.list);
INIT_WORK(&dev_priv->contexts.free_work, contexts_free_worker);
init_llist_head(&dev_priv->contexts.free_list);
......
@@ -81,6 +81,35 @@ enum {
* but this remains just a hint as the kernel may choose a new location for
* any object in the future.
*
* At the level of talking to the hardware, submitting a batchbuffer for the
* GPU to execute is to add content to a buffer from which the HW
* command streamer is reading.
*
* 1. Add a command to load the HW context. For Logical Ring Contexts, i.e.
* Execlists, this command is not placed on the same buffer as the
* remaining items.
*
* 2. Add a command to invalidate caches to the buffer.
*
* 3. Add a batchbuffer start command to the buffer; the start command is
* essentially a token together with the GPU address of the batchbuffer
* to be executed.
*
* 4. Add a pipeline flush to the buffer.
*
* 5. Add a memory write command to the buffer to record when the GPU
* is done executing the batchbuffer. The memory write writes the
* global sequence number of the request, ``i915_request::global_seqno``;
* the i915 driver uses the current value in the register to determine
* if the GPU has completed the batchbuffer.
*
* 6. Add a user interrupt command to the buffer. This command instructs
* the GPU to issue an interrupt when the command, pipeline flush and
* memory write are completed.
*
* 7. Inform the hardware of the additional commands added to the buffer
* (by updating the tail pointer).
*
* Processing an execbuf ioctl is conceptually split up into a few phases.
*
* 1. Validation - Ensure all the pointers, handles and flags are valid.
......
@@ -121,8 +121,8 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
if (stolen[0].start != stolen[1].start ||
stolen[0].end != stolen[1].end) {
DRM_DEBUG_KMS("GTT within stolen memory at %pR\n", &ggtt_res);
DRM_DEBUG_KMS("Stolen memory adjusted to %pR\n", dsm);
DRM_DEBUG_DRIVER("GTT within stolen memory at %pR\n", &ggtt_res);
DRM_DEBUG_DRIVER("Stolen memory adjusted to %pR\n", dsm);
}
}
@@ -174,18 +174,19 @@ void i915_gem_cleanup_stolen(struct drm_device *dev)
}

static void g4x_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base, resource_size_t *size)
resource_size_t *base,
resource_size_t *size)
{
uint32_t reg_val = I915_READ(IS_GM45(dev_priv) ?
u32 reg_val = I915_READ(IS_GM45(dev_priv) ?
CTG_STOLEN_RESERVED :
ELK_STOLEN_RESERVED);
resource_size_t stolen_top = dev_priv->dsm.end + 1;

if ((reg_val & G4X_STOLEN_RESERVED_ENABLE) == 0) {
*base = 0;
*size = 0;
return;
}
DRM_DEBUG_DRIVER("%s_STOLEN_RESERVED = %08x\n",
IS_GM45(dev_priv) ? "CTG" : "ELK", reg_val);

if ((reg_val & G4X_STOLEN_RESERVED_ENABLE) == 0)
return;

/*
@@ -193,30 +194,25 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *dev_priv,
*/
WARN(IS_GEN5(dev_priv), "ILK stolen reserved found? 0x%08x\n", reg_val);

*base = (reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK) << 16;
if (!(reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK))
return;

*base = (reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK) << 16;
WARN_ON((reg_val & G4X_STOLEN_RESERVED_ADDR1_MASK) < *base);

/* On these platforms, the register doesn't have a size field, so the
* size is the distance between the base and the top of the stolen
* memory. We also have the genuine case where base is zero and there's
* nothing reserved. */
if (*base == 0)
*size = 0;
else
*size = stolen_top - *base;
*size = stolen_top - *base;
}
static void gen6_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base, resource_size_t *size)
resource_size_t *base,
resource_size_t *size)
{
uint32_t reg_val = I915_READ(GEN6_STOLEN_RESERVED);
u32 reg_val = I915_READ(GEN6_STOLEN_RESERVED);

DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);

if ((reg_val & GEN6_STOLEN_RESERVED_ENABLE) == 0) {
*base = 0;
*size = 0;
return;
}
if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
return;

*base = reg_val & GEN6_STOLEN_RESERVED_ADDR_MASK;

@@ -239,17 +235,44 @@ static void gen6_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
}
static void gen7_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base, resource_size_t *size)
{
uint32_t reg_val = I915_READ(GEN6_STOLEN_RESERVED);

if ((reg_val & GEN6_STOLEN_RESERVED_ENABLE) == 0) {
*base = 0;
*size = 0;
return;
}

static void vlv_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = I915_READ(GEN6_STOLEN_RESERVED);
resource_size_t stolen_top = dev_priv->dsm.end + 1;

DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);

if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
return;
switch (reg_val & GEN7_STOLEN_RESERVED_SIZE_MASK) {
default:
MISSING_CASE(reg_val & GEN7_STOLEN_RESERVED_SIZE_MASK);
case GEN7_STOLEN_RESERVED_1M:
*size = 1024 * 1024;
break;
} }
/*
* On vlv, the ADDR_MASK portion is left as 0 and HW deduces the
* reserved location as (top - size).
*/
*base = stolen_top - *size;
}
static void gen7_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = I915_READ(GEN6_STOLEN_RESERVED);
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
return;
*base = reg_val & GEN7_STOLEN_RESERVED_ADDR_MASK; *base = reg_val & GEN7_STOLEN_RESERVED_ADDR_MASK;
switch (reg_val & GEN7_STOLEN_RESERVED_SIZE_MASK) {
@@ -266,15 +289,15 @@ static void gen7_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
static void chv_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base, resource_size_t *size)
resource_size_t *base,
resource_size_t *size)
{
uint32_t reg_val = I915_READ(GEN6_STOLEN_RESERVED);
u32 reg_val = I915_READ(GEN6_STOLEN_RESERVED);

if ((reg_val & GEN6_STOLEN_RESERVED_ENABLE) == 0) {
*base = 0;
*size = 0;
return;
}
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);

if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
return;

*base = reg_val & GEN6_STOLEN_RESERVED_ADDR_MASK;

@@ -298,29 +321,22 @@ static void chv_get_stolen_reserved(struct drm_i915_private *dev_priv,
}

static void bdw_get_stolen_reserved(struct drm_i915_private *dev_priv,
resource_size_t *base, resource_size_t *size)
resource_size_t *base,
resource_size_t *size)
{
uint32_t reg_val = I915_READ(GEN6_STOLEN_RESERVED);
u32 reg_val = I915_READ(GEN6_STOLEN_RESERVED);
resource_size_t stolen_top;
resource_size_t stolen_top = dev_priv->dsm.end + 1;

if ((reg_val & GEN6_STOLEN_RESERVED_ENABLE) == 0) {
*base = 0;
*size = 0;
return;
}
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);

if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
return;

stolen_top = dev_priv->dsm.end + 1;
if (!(reg_val & GEN6_STOLEN_RESERVED_ADDR_MASK))
return;

*base = reg_val & GEN6_STOLEN_RESERVED_ADDR_MASK;
*size = stolen_top - *base;

/* On these platforms, the register doesn't have a size field, so the
* size is the distance between the base and the top of the stolen
* memory. We also have the genuine case where base is zero and there's
* nothing reserved. */
if (*base == 0)
*size = 0;
else
*size = stolen_top - *base;
}
int i915_gem_init_stolen(struct drm_i915_private *dev_priv)

@@ -353,7 +369,7 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
GEM_BUG_ON(dev_priv->dsm.end <= dev_priv->dsm.start);

stolen_top = dev_priv->dsm.end + 1;
reserved_base = 0;
reserved_base = stolen_top;
reserved_size = 0;

switch (INTEL_GEN(dev_priv)) {
@@ -373,8 +389,12 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
&reserved_base, &reserved_size);
break;
case 7:
gen7_get_stolen_reserved(dev_priv,
&reserved_base, &reserved_size);
if (IS_VALLEYVIEW(dev_priv))
vlv_get_stolen_reserved(dev_priv,
&reserved_base, &reserved_size);
else
gen7_get_stolen_reserved(dev_priv,
&reserved_base, &reserved_size);
break;
default:
if (IS_LP(dev_priv))
@@ -386,11 +406,16 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
break;
}

/* It is possible for the reserved base to be zero, but the register
* field for size doesn't have a zero option. */
if (reserved_base == 0) {
reserved_size = 0;
/*
* Our expectation is that the reserved space is at the top of the
* stolen region and *never* at the bottom. If we see !reserved_base,
* it likely means we failed to read the registers correctly.
*/
if (!reserved_base) {
DRM_ERROR("inconsistent reservation %pa + %pa; ignoring\n",
&reserved_base, &reserved_size);
reserved_base = stolen_top;
reserved_size = 0;
}
dev_priv->dsm_reserved =

@@ -406,9 +431,9 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
* memory, so just consider the start. */
reserved_total = stolen_top - reserved_base;

DRM_DEBUG_KMS("Memory reserved for graphics device: %lluK, usable: %lluK\n",
DRM_DEBUG_DRIVER("Memory reserved for graphics device: %lluK, usable: %lluK\n",
(u64)resource_size(&dev_priv->dsm) >> 10,
((u64)resource_size(&dev_priv->dsm) - reserved_total) >> 10);

stolen_usable_start = 0;
/* WaSkipStolenMemoryFirstPage:bdw+ */
@@ -580,8 +605,8 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
lockdep_assert_held(&dev_priv->drm.struct_mutex);

DRM_DEBUG_KMS("creating preallocated stolen object: stolen_offset=%pa, gtt_offset=%pa, size=%pa\n",
DRM_DEBUG_DRIVER("creating preallocated stolen object: stolen_offset=%pa, gtt_offset=%pa, size=%pa\n",
&stolen_offset, &gtt_offset, &size);
/* KISS and expect everything to be page-aligned */
if (WARN_ON(size == 0) ||

@@ -599,14 +624,14 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
ret = drm_mm_reserve_node(&dev_priv->mm.stolen, stolen);
mutex_unlock(&dev_priv->mm.stolen_lock);
if (ret) {
DRM_DEBUG_KMS("failed to allocate stolen space\n");
DRM_DEBUG_DRIVER("failed to allocate stolen space\n");
kfree(stolen);
return NULL;
}
obj = _i915_gem_object_create_stolen(dev_priv, stolen);
if (obj == NULL) {
DRM_DEBUG_KMS("failed to allocate stolen object\n");
DRM_DEBUG_DRIVER("failed to allocate stolen object\n");
i915_gem_stolen_remove_node(dev_priv, stolen);
kfree(stolen);
return NULL;
@@ -635,7 +660,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
size, gtt_offset, obj->cache_level,
0);
if (ret) {
DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
DRM_DEBUG_DRIVER("failed to allocate stolen GTT space\n");
goto err_pages;
}
......
@@ -32,6 +32,7 @@

#include <linux/zlib.h>

#include <drm/drm_print.h>

#include "i915_gpu_error.h"
#include "i915_drv.h"

static inline const struct intel_engine_cs *
......
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2008-2018 Intel Corporation
*/
#ifndef _I915_GPU_ERROR_H_
#define _I915_GPU_ERROR_H_
#include <linux/kref.h>
#include <linux/ktime.h>
#include <linux/sched.h>
#include <drm/drm_mm.h>
#include "intel_device_info.h"
#include "intel_ringbuffer.h"
#include "intel_uc_fw.h"
#include "i915_gem.h"
#include "i915_gem_gtt.h"
#include "i915_params.h"
struct drm_i915_private;
struct intel_overlay_error_state;
struct intel_display_error_state;
struct i915_gpu_state {
struct kref ref;
ktime_t time;
ktime_t boottime;
ktime_t uptime;
struct drm_i915_private *i915;
char error_msg[128];
bool simulated;
bool awake;
bool wakelock;
bool suspended;
int iommu;
u32 reset_count;
u32 suspend_count;
struct intel_device_info device_info;
struct intel_driver_caps driver_caps;
struct i915_params params;
struct i915_error_uc {
struct intel_uc_fw guc_fw;
struct intel_uc_fw huc_fw;
struct drm_i915_error_object *guc_log;
} uc;
/* Generic register state */
u32 eir;
u32 pgtbl_er;
u32 ier;
u32 gtier[4], ngtier;
u32 ccid;
u32 derrmr;
u32 forcewake;
u32 error; /* gen6+ */
u32 err_int; /* gen7 */
u32 fault_data0; /* gen8, gen9 */
u32 fault_data1; /* gen8, gen9 */
u32 done_reg;
u32 gac_eco;
u32 gam_ecochk;
u32 gab_ctl;
u32 gfx_mode;
u32 nfence;
u64 fence[I915_MAX_NUM_FENCES];
struct intel_overlay_error_state *overlay;
struct intel_display_error_state *display;
struct drm_i915_error_engine {
int engine_id;
/* Software tracked state */
bool idle;
bool waiting;
int num_waiters;
unsigned long hangcheck_timestamp;
bool hangcheck_stalled;
enum intel_engine_hangcheck_action hangcheck_action;
struct i915_address_space *vm;
int num_requests;
u32 reset_count;
/* position of active request inside the ring */
u32 rq_head, rq_post, rq_tail;
/* our own tracking of ring head and tail */
u32 cpu_ring_head;
u32 cpu_ring_tail;
u32 last_seqno;
/* Register state */
u32 start;
u32 tail;
u32 head;
u32 ctl;
u32 mode;
u32 hws;
u32 ipeir;
u32 ipehr;
u32 bbstate;
u32 instpm;
u32 instps;
u32 seqno;
u64 bbaddr;
u64 acthd;
u32 fault_reg;
u64 faddr;
u32 rc_psmi; /* sleep state */
u32 semaphore_mboxes[I915_NUM_ENGINES - 1];
struct intel_instdone instdone;
struct drm_i915_error_context {
char comm[TASK_COMM_LEN];
pid_t pid;
u32 handle;
u32 hw_id;
int priority;
int ban_score;
int active;
int guilty;
bool bannable;
} context;
struct drm_i915_error_object {
u64 gtt_offset;
u64 gtt_size;
int page_count;
int unused;
u32 *pages[0];
} *ringbuffer, *batchbuffer, *wa_batchbuffer, *ctx, *hws_page;
struct drm_i915_error_object **user_bo;
long user_bo_count;
struct drm_i915_error_object *wa_ctx;
struct drm_i915_error_object *default_state;
struct drm_i915_error_request {
long jiffies;
pid_t pid;
u32 context;
int priority;
int ban_score;
u32 seqno;
u32 head;
u32 tail;
} *requests, execlist[EXECLIST_MAX_PORTS];
unsigned int num_ports;
struct drm_i915_error_waiter {
char comm[TASK_COMM_LEN];
pid_t pid;
u32 seqno;
} *waiters;
struct {
u32 gfx_mode;
union {
u64 pdp[4];
u32 pp_dir_base;
};
} vm_info;
} engine[I915_NUM_ENGINES];
struct drm_i915_error_buffer {
u32 size;
u32 name;
u32 rseqno[I915_NUM_ENGINES], wseqno;
u64 gtt_offset;
u32 read_domains;
u32 write_domain;
s32 fence_reg:I915_MAX_NUM_FENCE_BITS;
u32 tiling:2;
u32 dirty:1;
u32 purgeable:1;
u32 userptr:1;
s32 engine:4;
u32 cache_level:3;
} *active_bo[I915_NUM_ENGINES], *pinned_bo;
u32 active_bo_count[I915_NUM_ENGINES], pinned_bo_count;
struct i915_address_space *active_vm[I915_NUM_ENGINES];
};
struct i915_gpu_error {
/* For hangcheck timer */
#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
#define DRM_I915_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_I915_HANGCHECK_PERIOD)
struct delayed_work hangcheck_work;
/* For reset and error_state handling. */
spinlock_t lock;
/* Protected by the above dev->gpu_error.lock. */
struct i915_gpu_state *first_error;
atomic_t pending_fb_pin;
unsigned long missed_irq_rings;
/**
* State variable controlling the reset flow and count
*
* This is a counter which gets incremented when reset is triggered,
*
* Before the reset commences, the I915_RESET_BACKOFF bit is set
* meaning that any waiters holding onto the struct_mutex should
* relinquish the lock immediately in order for the reset to start.
*
* If reset is not completed successfully, the I915_WEDGE bit is
* set meaning that hardware is terminally sour and there is no
* recovery. All waiters on the reset_queue will be woken when
* that happens.
*
* This counter is used by the wait_seqno code to notice that reset
* event happened and it needs to restart the entire ioctl (since most
* likely the seqno it waited for won't ever signal anytime soon).
*
* This is important for lock-free wait paths, where no contended lock
* naturally enforces the correct ordering between the bail-out of the
* waiter and the gpu reset work code.
*/
unsigned long reset_count;
/**
* flags: Control various stages of the GPU reset
*
* #I915_RESET_BACKOFF - When we start a reset, we want to stop any
* other users acquiring the struct_mutex. To do this we set the
* #I915_RESET_BACKOFF bit in the error flags when we detect a reset
* and then check for that bit before acquiring the struct_mutex (in
* i915_mutex_lock_interruptible()?). I915_RESET_BACKOFF serves a
* secondary role in preventing two concurrent global reset attempts.
*
* #I915_RESET_HANDOFF - To perform the actual GPU reset, we need the
* struct_mutex. We try to acquire the struct_mutex in the reset worker,
* but it may be held by some long running waiter (that we cannot
* interrupt without causing trouble). Once we are ready to do the GPU
* reset, we set the I915_RESET_HANDOFF bit and wakeup any waiters. If
* they already hold the struct_mutex and want to participate they can
* inspect the bit and do the reset directly, otherwise the worker
* waits for the struct_mutex.
*
* #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
* acquire the struct_mutex to reset an engine, we need an explicit
* flag to prevent two concurrent reset attempts in the same engine.
* As the number of engines continues to grow, allocate the flags from
* the most significant bits.
*
* #I915_WEDGED - If reset fails and we can no longer use the GPU,
* we set the #I915_WEDGED bit. Prior to command submission, e.g.
* i915_request_alloc(), this bit is checked and the sequence
* aborted (with -EIO reported to userspace) if set.
*/
unsigned long flags;
#define I915_RESET_BACKOFF 0
#define I915_RESET_HANDOFF 1
#define I915_RESET_MODESET 2
#define I915_WEDGED (BITS_PER_LONG - 1)
#define I915_RESET_ENGINE (I915_WEDGED - I915_NUM_ENGINES)
/** Number of times an engine has been reset */
u32 reset_engine_count[I915_NUM_ENGINES];
/** Set of stalled engines with guilty requests, in the current reset */
u32 stalled_mask;
/** Reason for the current *global* reset */
const char *reason;
/**
* Waitqueue to signal when a hang is detected. Used for waiters
* to release the struct_mutex for the reset to proceed.
*/
wait_queue_head_t wait_queue;
/**
* Waitqueue to signal when the reset has completed. Used by clients
* that wait for dev_priv->mm.wedged to settle.
*/
wait_queue_head_t reset_queue;
/* For missed irq/seqno simulation. */
unsigned long test_irq_rings;
};
struct drm_i915_error_state_buf {
struct drm_i915_private *i915;
unsigned int bytes;
unsigned int size;
int err;
u8 *buf;
loff_t start;
loff_t pos;
};
#if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR)
__printf(2, 3)
void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
const struct i915_gpu_state *gpu);
int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
struct drm_i915_private *i915,
size_t count, loff_t pos);
static inline void
i915_error_state_buf_release(struct drm_i915_error_state_buf *eb)
{
kfree(eb->buf);
}
struct i915_gpu_state *i915_capture_gpu_state(struct drm_i915_private *i915);
void i915_capture_error_state(struct drm_i915_private *dev_priv,
u32 engine_mask,
const char *error_msg);
static inline struct i915_gpu_state *
i915_gpu_state_get(struct i915_gpu_state *gpu)
{
kref_get(&gpu->ref);
return gpu;
}
void __i915_gpu_state_free(struct kref *kref);
static inline void i915_gpu_state_put(struct i915_gpu_state *gpu)
{
if (gpu)
kref_put(&gpu->ref, __i915_gpu_state_free);
}
struct i915_gpu_state *i915_first_error_state(struct drm_i915_private *i915);
void i915_reset_error_state(struct drm_i915_private *i915);
#else
static inline void i915_capture_error_state(struct drm_i915_private *dev_priv,
u32 engine_mask,
const char *error_msg)
{
}
static inline struct i915_gpu_state *
i915_first_error_state(struct drm_i915_private *i915)
{
return NULL;
}
static inline void i915_reset_error_state(struct drm_i915_private *i915)
{
}
#endif /* IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) */
#endif /* _I915_GPU_ERROR_H_ */
/*
* Autogenerated file by GPU Top : https://github.com/rib/gputop
* DO NOT EDIT manually!
*
*
* Copyright (c) 2015 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
*/
#include <linux/sysfs.h>
#include "i915_drv.h"
#include "i915_oa_icl.h"
static const struct i915_oa_reg b_counter_config_test_oa[] = {
{ _MMIO(0x2740), 0x00000000 },
{ _MMIO(0x2710), 0x00000000 },
{ _MMIO(0x2714), 0xf0800000 },
{ _MMIO(0x2720), 0x00000000 },
{ _MMIO(0x2724), 0xf0800000 },
{ _MMIO(0x2770), 0x00000004 },
{ _MMIO(0x2774), 0x0000ffff },
{ _MMIO(0x2778), 0x00000003 },
{ _MMIO(0x277c), 0x0000ffff },
{ _MMIO(0x2780), 0x00000007 },
{ _MMIO(0x2784), 0x0000ffff },
{ _MMIO(0x2788), 0x00100002 },
{ _MMIO(0x278c), 0x0000fff7 },
{ _MMIO(0x2790), 0x00100002 },
{ _MMIO(0x2794), 0x0000ffcf },
{ _MMIO(0x2798), 0x00100082 },
{ _MMIO(0x279c), 0x0000ffef },
{ _MMIO(0x27a0), 0x001000c2 },
{ _MMIO(0x27a4), 0x0000ffe7 },
{ _MMIO(0x27a8), 0x00100001 },
{ _MMIO(0x27ac), 0x0000ffe7 },
};
static const struct i915_oa_reg flex_eu_config_test_oa[] = {
};
static const struct i915_oa_reg mux_config_test_oa[] = {
{ _MMIO(0xd04), 0x00000200 },
{ _MMIO(0x9840), 0x00000000 },
{ _MMIO(0x9884), 0x00000000 },
{ _MMIO(0x9888), 0x10060000 },
{ _MMIO(0x9888), 0x22060000 },
{ _MMIO(0x9888), 0x16060000 },
{ _MMIO(0x9888), 0x24060000 },
{ _MMIO(0x9888), 0x18060000 },
{ _MMIO(0x9888), 0x1a060000 },
{ _MMIO(0x9888), 0x12060000 },
{ _MMIO(0x9888), 0x14060000 },
{ _MMIO(0x9888), 0x10060000 },
{ _MMIO(0x9888), 0x22060000 },
{ _MMIO(0x9884), 0x00000003 },
{ _MMIO(0x9888), 0x16130000 },
{ _MMIO(0x9888), 0x24000001 },
{ _MMIO(0x9888), 0x0e130056 },
{ _MMIO(0x9888), 0x10130000 },
{ _MMIO(0x9888), 0x1a130000 },
{ _MMIO(0x9888), 0x541f0001 },
{ _MMIO(0x9888), 0x181f0000 },
{ _MMIO(0x9888), 0x4c1f0000 },
{ _MMIO(0x9888), 0x301f0000 },
};
static ssize_t
show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
{
return sprintf(buf, "1\n");
}
void
i915_perf_load_test_config_icl(struct drm_i915_private *dev_priv)
{
strlcpy(dev_priv->perf.oa.test_config.uuid,
"a291665e-244b-4b76-9b9a-01de9d3c8068",
sizeof(dev_priv->perf.oa.test_config.uuid));
dev_priv->perf.oa.test_config.id = 1;
dev_priv->perf.oa.test_config.mux_regs = mux_config_test_oa;
dev_priv->perf.oa.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
dev_priv->perf.oa.test_config.b_counter_regs = b_counter_config_test_oa;
dev_priv->perf.oa.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
dev_priv->perf.oa.test_config.flex_regs = flex_eu_config_test_oa;
dev_priv->perf.oa.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
dev_priv->perf.oa.test_config.sysfs_metric.name = "a291665e-244b-4b76-9b9a-01de9d3c8068";
dev_priv->perf.oa.test_config.sysfs_metric.attrs = dev_priv->perf.oa.test_config.attrs;
dev_priv->perf.oa.test_config.attrs[0] = &dev_priv->perf.oa.test_config.sysfs_metric_id.attr;
dev_priv->perf.oa.test_config.sysfs_metric_id.attr.name = "id";
dev_priv->perf.oa.test_config.sysfs_metric_id.attr.mode = 0444;
dev_priv->perf.oa.test_config.sysfs_metric_id.show = show_test_oa_id;
}
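The loader above fills in a sysfs metric group that the perf registration code exports as metrics/<uuid>/id. As a hedged userspace sketch (the sysfs path and card index are assumptions, not part of this file), the id can be read back like so:

/* Hedged userspace sketch: read the metric set id registered for the
 * ICL test configuration via sysfs; path and card index are assumed. */
#include <stdio.h>

static int read_icl_test_metrics_id(void)
{
	const char *path = "/sys/class/drm/card0/metrics/"
			   "a291665e-244b-4b76-9b9a-01de9d3c8068/id";
	FILE *f = fopen(path, "r");
	int id = -1;

	if (f) {
		if (fscanf(f, "%d", &id) != 1)
			id = -1;
		fclose(f);
	}
	return id; /* expected to match test_config.id (1) */
}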
/*
* Autogenerated file by GPU Top : https://github.com/rib/gputop
* DO NOT EDIT manually!
*
*
* Copyright (c) 2015 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
*/
#ifndef __I915_OA_ICL_H__
#define __I915_OA_ICL_H__
extern void i915_perf_load_test_config_icl(struct drm_i915_private *dev_priv);
#endif
@@ -48,7 +48,7 @@ struct drm_printer;
 	param(int, enable_ips, 1) \
 	param(int, invert_brightness, 0) \
 	param(int, enable_guc, 0) \
-	param(int, guc_log_level, 0) \
+	param(int, guc_log_level, -1) \
 	param(char *, guc_firmware_path, NULL) \
 	param(char *, huc_firmware_path, NULL) \
 	param(int, mmio_debug, 0) \
...
@@ -602,6 +602,7 @@ static const struct intel_device_info intel_icelake_11_info = {
 	PLATFORM(INTEL_ICELAKE),
 	.is_alpha_support = 1,
 	.has_resource_streamer = 0,
+	.ring_mask = RENDER_RING | BLT_RING | VEBOX_RING | BSD_RING | BSD3_RING,
 };

 #undef GEN
...
@@ -209,6 +209,7 @@
 #include "i915_oa_cflgt2.h"
 #include "i915_oa_cflgt3.h"
 #include "i915_oa_cnl.h"
+#include "i915_oa_icl.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -1042,7 +1043,7 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		I915_WRITE(GEN7_OASTATUS2,
 			   ((head & GEN7_OASTATUS2_HEAD_MASK) |
-			    OA_MEM_SELECT_GGTT));
+			    GEN7_OASTATUS2_MEM_SELECT_GGTT));
 		dev_priv->perf.oa.oa_buffer.head = head;

 		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
@@ -1332,7 +1333,8 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	/* Pre-DevBDW: OABUFFER must be set with counters off,
 	 * before OASTATUS1, but after OASTATUS2
 	 */
-	I915_WRITE(GEN7_OASTATUS2, gtt_offset | OA_MEM_SELECT_GGTT); /* head */
+	I915_WRITE(GEN7_OASTATUS2,
+		   gtt_offset | GEN7_OASTATUS2_MEM_SELECT_GGTT); /* head */
 	dev_priv->perf.oa.oa_buffer.head = gtt_offset;

 	I915_WRITE(GEN7_OABUFFER, gtt_offset);
@@ -1392,7 +1394,7 @@ static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
 	 * bit."
 	 */
 	I915_WRITE(GEN8_OABUFFER, gtt_offset |
-		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+		   OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT);
 	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);

 	/* Mark that we need updated tail pointers to read from... */
@@ -1840,7 +1842,7 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv,
 	 * be read back from automatically triggered reports, as part of the
 	 * RPT_ID field.
 	 */
-	if (IS_GEN9(dev_priv) || IS_GEN10(dev_priv)) {
+	if (IS_GEN(dev_priv, 9, 11)) {
 		I915_WRITE(GEN8_OA_DEBUG,
 			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
 					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
@@ -1870,7 +1872,6 @@ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
 	I915_WRITE(GDT_CHICKEN_BITS, (I915_READ(GDT_CHICKEN_BITS) &
 				      ~GT_NOA_ENABLE));
-
 }

 static void gen10_disable_metric_set(struct drm_i915_private *dev_priv)
@@ -1885,6 +1886,13 @@ static void gen10_disable_metric_set(struct drm_i915_private *dev_priv)

 static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 {
+	struct i915_gem_context *ctx =
+		dev_priv->perf.oa.exclusive_stream->ctx;
+	u32 ctx_id = dev_priv->perf.oa.specific_ctx_id;
+	bool periodic = dev_priv->perf.oa.periodic;
+	u32 period_exponent = dev_priv->perf.oa.period_exponent;
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
 	/*
 	 * Reset buf pointers so we don't forward reports from before now.
 	 *
@@ -1896,25 +1904,14 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	 */
 	gen7_init_oa_buffer(dev_priv);

-	if (dev_priv->perf.oa.exclusive_stream->enabled) {
-		struct i915_gem_context *ctx =
-			dev_priv->perf.oa.exclusive_stream->ctx;
-		u32 ctx_id = dev_priv->perf.oa.specific_ctx_id;
-
-		bool periodic = dev_priv->perf.oa.periodic;
-		u32 period_exponent = dev_priv->perf.oa.period_exponent;
-		u32 report_format = dev_priv->perf.oa.oa_buffer.format;
-
-		I915_WRITE(GEN7_OACONTROL,
-			   (ctx_id & GEN7_OACONTROL_CTX_MASK) |
-			   (period_exponent <<
-			    GEN7_OACONTROL_TIMER_PERIOD_SHIFT) |
-			   (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) |
-			   (report_format << GEN7_OACONTROL_FORMAT_SHIFT) |
-			   (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) |
-			   GEN7_OACONTROL_ENABLE);
-	} else
-		I915_WRITE(GEN7_OACONTROL, 0);
+	I915_WRITE(GEN7_OACONTROL,
+		   (ctx_id & GEN7_OACONTROL_CTX_MASK) |
+		   (period_exponent <<
+		    GEN7_OACONTROL_TIMER_PERIOD_SHIFT) |
+		   (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) |
+		   (report_format << GEN7_OACONTROL_FORMAT_SHIFT) |
+		   (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) |
+		   GEN7_OACONTROL_ENABLE);
 }

 static void gen8_oa_enable(struct drm_i915_private *dev_priv)
@@ -2099,13 +2096,17 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	if (stream->ctx) {
 		ret = oa_get_render_ctx_id(stream);
-		if (ret)
+		if (ret) {
+			DRM_DEBUG("Invalid context id to filter with\n");
 			return ret;
+		}
 	}

 	ret = get_oa_config(dev_priv, props->metrics_set, &stream->oa_config);
-	if (ret)
+	if (ret) {
+		DRM_DEBUG("Invalid OA config id=%i\n", props->metrics_set);
 		goto err_config;
+	}

 	/* PRM - observability performance counters:
 	 *
@@ -2132,8 +2133,10 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	ret = dev_priv->perf.oa.ops.enable_metric_set(dev_priv,
 						      stream->oa_config);
-	if (ret)
+	if (ret) {
+		DRM_DEBUG("Unable to enable metric set\n");
 		goto err_enable;
+	}

 	stream->ops = &i915_oa_stream_ops;
@@ -2745,7 +2748,8 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			props->ctx_handle = value;
 			break;
 		case DRM_I915_PERF_PROP_SAMPLE_OA:
-			props->sample_flags |= SAMPLE_OA_REPORT;
+			if (value)
+				props->sample_flags |= SAMPLE_OA_REPORT;
 			break;
 		case DRM_I915_PERF_PROP_OA_METRICS_SET:
 			if (value == 0) {
@@ -2935,6 +2939,8 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 			i915_perf_load_test_config_cflgt3(dev_priv);
 	} else if (IS_CANNONLAKE(dev_priv)) {
 		i915_perf_load_test_config_cnl(dev_priv);
+	} else if (IS_ICELAKE(dev_priv)) {
+		i915_perf_load_test_config_icl(dev_priv);
 	}

 	if (dev_priv->perf.oa.test_config.id == 0)
@@ -3292,6 +3298,8 @@ int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
 	mutex_unlock(&dev_priv->perf.metrics_lock);

+	DRM_DEBUG("Added config %s id=%i\n", oa_config->uuid, oa_config->id);
+
 	return oa_config->id;

 sysfs_err:
@@ -3348,6 +3356,9 @@ int i915_perf_remove_config_ioctl(struct drm_device *dev, void *data,
 			   &oa_config->sysfs_metric);

 	idr_remove(&dev_priv->perf.metrics_idr, *arg);
+
+	DRM_DEBUG("Removed config %s id=%i\n", oa_config->uuid, oa_config->id);
+
 	put_oa_config(dev_priv, oa_config);

 config_err:
@@ -3467,7 +3478,7 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
 		}
-	} else if (IS_GEN10(dev_priv)) {
+	} else if (IS_GEN(dev_priv, 10, 11)) {
 		dev_priv->perf.oa.ops.is_valid_b_counter_reg =
 			gen7_is_valid_b_counter_addr;
 		dev_priv->perf.oa.ops.is_valid_mux_reg =
...
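For context on the property parsing touched above: userspace selects a metric set (by its sysfs id) and opens an OA stream through DRM_IOCTL_I915_PERF_OPEN. A hedged sketch follows; the OA format and exponent are arbitrary example values, not values this patch mandates.

/* Hedged userspace sketch of an i915 perf open; metrics_set comes from
 * the sysfs id file, the OA format and timer exponent are examples. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int open_oa_stream(int drm_fd, uint64_t metrics_set)
{
	uint64_t properties[] = {
		/* include raw OA reports in read() data */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 13,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}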
 /*
- * Copyright © 2017 Intel Corporation
+ * SPDX-License-Identifier: MIT
  *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
+ * Copyright © 2017-2018 Intel Corporation
  */

-#include <linux/perf_event.h>
-#include <linux/pm_runtime.h>
-
-#include "i915_drv.h"
 #include "i915_pmu.h"
 #include "intel_ringbuffer.h"
+#include "i915_drv.h"

 /* Frequency for the sampling timer for events which need it. */
 #define FREQUENCY 200
...
 /*
- * Copyright © 2017 Intel Corporation
+ * SPDX-License-Identifier: MIT
  *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
+ * Copyright © 2017-2018 Intel Corporation
  */

 #ifndef __I915_PMU_H__
 #define __I915_PMU_H__

+#include <linux/hrtimer.h>
+#include <linux/perf_event.h>
+#include <linux/spinlock_types.h>
+#include <drm/i915_drm.h>
+
+struct drm_i915_private;
+
 enum {
 	__I915_SAMPLE_FREQ_ACT = 0,
 	__I915_SAMPLE_FREQ_REQ,
...
This diff is collapsed.
@@ -59,11 +59,7 @@ static bool i915_fence_signaled(struct dma_fence *fence)

 static bool i915_fence_enable_signaling(struct dma_fence *fence)
 {
-	if (i915_fence_signaled(fence))
-		return false;
-
-	intel_engine_enable_signaling(to_request(fence), true);
-	return !i915_fence_signaled(fence);
+	return intel_engine_enable_signaling(to_request(fence), true);
 }

 static signed long i915_fence_wait(struct dma_fence *fence,
@@ -211,11 +207,19 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
 	if (ret)
 		return ret;

+	GEM_BUG_ON(i915->gt.active_requests);
+
 	/* If the seqno wraps around, we need to clear the breadcrumb rbtree */
 	for_each_engine(engine, i915, id) {
 		struct i915_gem_timeline *timeline;
 		struct intel_timeline *tl = engine->timeline;

+		GEM_TRACE("%s seqno %d (current %d) -> %d\n",
+			  engine->name,
+			  tl->seqno,
+			  intel_engine_get_seqno(engine),
+			  seqno);
+
 		if (!i915_seqno_passed(seqno, tl->seqno)) {
 			/* Flush any waiters before we reuse the seqno */
 			intel_engine_disarm_breadcrumbs(engine);
@@ -251,47 +255,6 @@ int i915_gem_set_global_seqno(struct drm_device *dev, u32 seqno)
 	return reset_all_global_seqno(i915, seqno - 1);
 }

-static void mark_busy(struct drm_i915_private *i915)
-{
-	if (i915->gt.awake)
-		return;
-
-	GEM_BUG_ON(!i915->gt.active_requests);
-
-	intel_runtime_pm_get_noresume(i915);
-
-	/*
-	 * It seems that the DMC likes to transition between the DC states a lot
-	 * when there are no connected displays (no active power domains) during
-	 * command submission.
-	 *
-	 * This activity has negative impact on the performance of the chip with
-	 * huge latencies observed in the interrupt handler and elsewhere.
-	 *
-	 * Work around it by grabbing a GT IRQ power domain whilst there is any
-	 * GT activity, preventing any DC state transitions.
-	 */
-	intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
-
-	i915->gt.awake = true;
-	if (unlikely(++i915->gt.epoch == 0)) /* keep 0 as invalid */
-		i915->gt.epoch = 1;
-
-	intel_enable_gt_powersave(i915);
-	i915_update_gfx_val(i915);
-	if (INTEL_GEN(i915) >= 6)
-		gen6_rps_busy(i915);
-	i915_pmu_gt_unparked(i915);
-
-	intel_engines_unpark(i915);
-
-	i915_queue_hangcheck(i915);
-
-	queue_delayed_work(i915->wq,
-			   &i915->gt.retire_work,
-			   round_jiffies_up_relative(HZ));
-}
-
 static int reserve_engine(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;
@@ -309,7 +272,7 @@ static int reserve_engine(struct intel_engine_cs *engine)
 	}

 	if (!i915->gt.active_requests++)
-		mark_busy(i915);
+		i915_gem_unpark(i915);

 	return 0;
 }
@@ -318,13 +281,8 @@ static void unreserve_engine(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;

-	if (!--i915->gt.active_requests) {
-		/* Cancel the mark_busy() from our reserve_engine() */
-		GEM_BUG_ON(!i915->gt.awake);
-		mod_delayed_work(i915->wq,
-				 &i915->gt.idle_work,
-				 msecs_to_jiffies(100));
-	}
+	if (!--i915->gt.active_requests)
+		i915_gem_park(i915);

 	GEM_BUG_ON(!engine->timeline->inflight_seqnos);
 	engine->timeline->inflight_seqnos--;
@@ -358,7 +316,7 @@ static void advance_ring(struct i915_request *request)
 		 * is just about to be. Either works, if we miss the last two
 		 * noops - they are safe to be replayed on a reset.
 		 */
-		tail = READ_ONCE(request->ring->tail);
+		tail = READ_ONCE(request->tail);
 	} else {
 		tail = request->postfix;
 	}
@@ -385,6 +343,12 @@ static void i915_request_retire(struct i915_request *request)
 	struct intel_engine_cs *engine = request->engine;
 	struct i915_gem_active *active, *next;

+	GEM_TRACE("%s fence %llx:%d, global=%d, current %d\n",
+		  engine->name,
+		  request->fence.context, request->fence.seqno,
+		  request->global_seqno,
+		  intel_engine_get_seqno(engine));
+
 	lockdep_assert_held(&request->i915->drm.struct_mutex);
 	GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
 	GEM_BUG_ON(!i915_request_completed(request));
@@ -486,21 +450,34 @@ static u32 timeline_get_seqno(struct intel_timeline *tl)
 	return ++tl->seqno;
 }

+static void move_to_timeline(struct i915_request *request,
+			     struct intel_timeline *timeline)
+{
+	GEM_BUG_ON(request->timeline == request->engine->timeline);
+	lockdep_assert_held(&request->engine->timeline->lock);
+
+	spin_lock(&request->timeline->lock);
+	list_move_tail(&request->link, &timeline->requests);
+	spin_unlock(&request->timeline->lock);
+}
+
 void __i915_request_submit(struct i915_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
-	struct intel_timeline *timeline;
 	u32 seqno;

+	GEM_TRACE("%s fence %llx:%d -> global=%d, current %d\n",
+		  engine->name,
+		  request->fence.context, request->fence.seqno,
+		  engine->timeline->seqno + 1,
+		  intel_engine_get_seqno(engine));
+
 	GEM_BUG_ON(!irqs_disabled());
 	lockdep_assert_held(&engine->timeline->lock);

-	/* Transfer from per-context onto the global per-engine timeline */
-	timeline = engine->timeline;
-	GEM_BUG_ON(timeline == request->timeline);
 	GEM_BUG_ON(request->global_seqno);

-	seqno = timeline_get_seqno(timeline);
+	seqno = timeline_get_seqno(engine->timeline);
 	GEM_BUG_ON(!seqno);
 	GEM_BUG_ON(i915_seqno_passed(intel_engine_get_seqno(engine), seqno));
@@ -514,9 +491,8 @@ void __i915_request_submit(struct i915_request *request)
 	engine->emit_breadcrumb(request,
 				request->ring->vaddr + request->postfix);

-	spin_lock(&request->timeline->lock);
-	list_move_tail(&request->link, &timeline->requests);
-	spin_unlock(&request->timeline->lock);
+	/* Transfer from per-context onto the global per-engine timeline */
+	move_to_timeline(request, engine->timeline);

 	trace_i915_request_execute(request);
@@ -539,7 +515,12 @@ void i915_request_submit(struct i915_request *request)
 void __i915_request_unsubmit(struct i915_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
-	struct intel_timeline *timeline;

+	GEM_TRACE("%s fence %llx:%d <- global=%d, current %d\n",
+		  engine->name,
+		  request->fence.context, request->fence.seqno,
+		  request->global_seqno,
+		  intel_engine_get_seqno(engine));
+
 	GEM_BUG_ON(!irqs_disabled());
 	lockdep_assert_held(&engine->timeline->lock);
@@ -562,12 +543,7 @@ void __i915_request_unsubmit(struct i915_request *request)
 	spin_unlock(&request->lock);

 	/* Transfer back from the global per-engine timeline to per-context */
-	timeline = request->timeline;
-	GEM_BUG_ON(timeline == engine->timeline);
-
-	spin_lock(&timeline->lock);
-	list_move(&request->link, &timeline->requests);
-	spin_unlock(&timeline->lock);
+	move_to_timeline(request, request->timeline);

 	/*
 	 * We don't need to wake_up any waiters on request->execute, they
@@ -1000,6 +976,9 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
 	u32 *cs;
 	int err;

+	GEM_TRACE("%s fence %llx:%d\n",
+		  engine->name, request->fence.context, request->fence.seqno);
+
 	lockdep_assert_held(&request->i915->drm.struct_mutex);
 	trace_i915_request_add(request);
@@ -1206,11 +1185,13 @@ static bool __i915_spin_request(const struct i915_request *rq,

 static bool __i915_wait_request_check_and_reset(struct i915_request *request)
 {
-	if (likely(!i915_reset_handoff(&request->i915->gpu_error)))
+	struct i915_gpu_error *error = &request->i915->gpu_error;
+
+	if (likely(!i915_reset_handoff(error)))
 		return false;

 	__set_current_state(TASK_RUNNING);
-	i915_reset(request->i915, 0);
+	i915_reset(request->i915, error->stalled_mask, error->reason);
 	return true;
 }
...
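For orientation, a minimal sketch of the parking contract implied by the reserve/unreserve hunks above. The real i915_gem_park()/i915_gem_unpark() bodies land in a collapsed i915_gem.c diff; the point is that GT wake-up and idling now key off the active_requests count rather than the open-coded mark_busy()/mod_delayed_work() pair.

/* Illustrative sketch only, assuming the park/unpark helpers added
 * elsewhere in this series: the first request in wakes the GT, the
 * last request out schedules idling. */
static void sketch_reserve(struct drm_i915_private *i915)
{
	if (!i915->gt.active_requests++)	/* 0 -> 1: first request in */
		i915_gem_unpark(i915);		/* wake the GT */
}

static void sketch_unreserve(struct drm_i915_private *i915)
{
	if (!--i915->gt.active_requests)	/* 1 -> 0: last request out */
		i915_gem_park(i915);		/* schedule idle work */
}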
@@ -40,8 +40,8 @@
 #undef WARN_ON_ONCE
 #define WARN_ON_ONCE(x) WARN_ONCE((x), "%s", "WARN_ON_ONCE(" __stringify(x) ")")

-#define MISSING_CASE(x) WARN(1, "Missing switch case (%lu) in %s\n", \
-			     (long)(x), __func__)
+#define MISSING_CASE(x) WARN(1, "Missing case (%s == %ld)\n", \
+			     __stringify(x), (long)(x))

 #if GCC_VERSION >= 70000
 #define add_overflows(A, B) \
...
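The reworked message embeds the stringified expression alongside its value instead of naming only the enclosing function. A small userspace analog of the same __stringify trick, for illustration only:

/* Userspace analog of the new MISSING_CASE message format, illustrative
 * only: __stringify() captures the expression text, so the warning
 * names both the expression and its value. */
#include <stdio.h>

#define __stringify_1(x)	#x
#define __stringify(x)		__stringify_1(x)
#define MISSING_CASE(x) \
	fprintf(stderr, "Missing case (%s == %ld)\n", __stringify(x), (long)(x))

int main(void)
{
	int port = 3;

	MISSING_CASE(port);	/* prints: Missing case (port == 3) */
	return 0;
}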
@@ -227,6 +227,7 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
 	struct intel_crtc_scaler_state *scaler_state =
 		&crtc_state->scaler_state;
 	struct drm_atomic_state *drm_state = crtc_state->base.state;
+	struct intel_atomic_state *intel_state = to_intel_atomic_state(drm_state);
 	int num_scalers_need;
 	int i, j;
@@ -304,8 +305,8 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
 				continue;
 			}

-			plane_state = intel_atomic_get_existing_plane_state(drm_state,
-									    intel_plane);
+			plane_state = intel_atomic_get_new_plane_state(intel_state,
+								       intel_plane);
 			scaler_id = &plane_state->scaler_id;
 		}
@@ -328,8 +329,18 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
 		}

 		/* set scaler mode */
-		if (IS_GEMINILAKE(dev_priv) || IS_CANNONLAKE(dev_priv)) {
-			scaler_state->scalers[*scaler_id].mode = 0;
+		if ((INTEL_GEN(dev_priv) >= 9) &&
+		    plane_state && plane_state->base.fb &&
+		    plane_state->base.fb->format->format ==
+		    DRM_FORMAT_NV12) {
+			if (INTEL_GEN(dev_priv) == 9 &&
+			    !IS_GEMINILAKE(dev_priv) &&
+			    !IS_SKYLAKE(dev_priv))
+				scaler_state->scalers[*scaler_id].mode =
+					SKL_PS_SCALER_MODE_NV12;
+			else
+				scaler_state->scalers[*scaler_id].mode =
+					PS_SCALER_MODE_PLANAR;
 		} else if (num_scalers_need == 1 && intel_crtc->pipe != PIPE_C) {
 			/*
 			 * when only 1 scaler is in use on either pipe A or B,
...
@@ -1215,10 +1215,8 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
 {
 	struct child_device_config *it, *child = NULL;
 	struct ddi_vbt_port_info *info = &dev_priv->vbt.ddi_port_info[port];
-	uint8_t hdmi_level_shift;
 	int i, j;
 	bool is_dvi, is_hdmi, is_dp, is_edp, is_crt;
-	uint8_t aux_channel, ddc_pin;

 	/* Each DDI port can have more than one value on the "DVO Port" field,
 	 * so look for all the possible values for each port.
 	 */
@@ -1255,8 +1253,6 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
 	if (!child)
 		return;

-	aux_channel = child->aux_channel;
-
 	is_dvi = child->device_type & DEVICE_TYPE_TMDS_DVI_SIGNALING;
 	is_dp = child->device_type & DEVICE_TYPE_DISPLAYPORT_OUTPUT;
 	is_crt = child->device_type & DEVICE_TYPE_ANALOG_OUTPUT;
@@ -1270,13 +1266,6 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
 		is_hdmi = false;
 	}

-	if (port == PORT_A && is_dvi) {
-		DRM_DEBUG_KMS("VBT claims port A supports DVI%s, ignoring\n",
-			      is_hdmi ? "/HDMI" : "");
-		is_dvi = false;
-		is_hdmi = false;
-	}
-
 	info->supports_dvi = is_dvi;
 	info->supports_hdmi = is_hdmi;
 	info->supports_dp = is_dp;
@@ -1302,6 +1291,8 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
 		DRM_DEBUG_KMS("Port %c is internal DP\n", port_name(port));

 	if (is_dvi) {
+		u8 ddc_pin;
+
 		ddc_pin = map_ddc_pin(dev_priv, child->ddc_pin);
 		if (intel_gmbus_is_valid_pin(dev_priv, ddc_pin)) {
 			info->alternate_ddc_pin = ddc_pin;
@@ -1314,14 +1305,14 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
 	}

 	if (is_dp) {
-		info->alternate_aux_channel = aux_channel;
+		info->alternate_aux_channel = child->aux_channel;

 		sanitize_aux_ch(dev_priv, port);
 	}

 	if (bdb_version >= 158) {
 		/* The VBT HDMI level shift values match the table we have. */
-		hdmi_level_shift = child->hdmi_level_shifter_value;
+		u8 hdmi_level_shift = child->hdmi_level_shifter_value;
 		DRM_DEBUG_KMS("VBT HDMI level shift for port %c: %d\n",
 			      port_name(port),
 			      hdmi_level_shift);
...
@@ -730,10 +730,11 @@ static void insert_signal(struct intel_breadcrumbs *b,
 	list_add(&request->signaling.link, &iter->signaling.link);
 }

-void intel_engine_enable_signaling(struct i915_request *request, bool wakeup)
+bool intel_engine_enable_signaling(struct i915_request *request, bool wakeup)
 {
 	struct intel_engine_cs *engine = request->engine;
 	struct intel_breadcrumbs *b = &engine->breadcrumbs;
+	struct intel_wait *wait = &request->signaling.wait;
 	u32 seqno;

 	/*
@@ -750,12 +751,12 @@ void intel_engine_enable_signaling(struct i915_request *request, bool wakeup)

 	seqno = i915_request_global_seqno(request);
 	if (!seqno) /* will be enabled later upon execution */
-		return;
+		return true;

-	GEM_BUG_ON(request->signaling.wait.seqno);
-	request->signaling.wait.tsk = b->signaler;
-	request->signaling.wait.request = request;
-	request->signaling.wait.seqno = seqno;
+	GEM_BUG_ON(wait->seqno);
+	wait->tsk = b->signaler;
+	wait->request = request;
+	wait->seqno = seqno;

 	/*
 	 * Add ourselves into the list of waiters, but registering our
@@ -768,11 +769,15 @@ void intel_engine_enable_signaling(struct i915_request *request, bool wakeup)
 	 */
 	spin_lock(&b->rb_lock);
 	insert_signal(b, request, seqno);
-	wakeup &= __intel_engine_add_wait(engine, &request->signaling.wait);
+	wakeup &= __intel_engine_add_wait(engine, wait);
 	spin_unlock(&b->rb_lock);

-	if (wakeup)
+	if (wakeup) {
 		wake_up_process(b->signaler);
+		return !intel_wait_complete(wait);
+	}
+
+	return true;
 }

 void intel_engine_cancel_signaling(struct i915_request *request)
...
This diff is collapsed.
@@ -114,7 +114,7 @@ enum intel_platform {
 	func(has_ipc);

 #define GEN_MAX_SLICES		(6) /* CNL upper bound */
-#define GEN_MAX_SUBSLICES	(7)
+#define GEN_MAX_SUBSLICES	(8) /* ICL upper bound */

 struct sseu_dev_info {
 	u8 slice_mask;
@@ -247,6 +247,8 @@ void intel_device_info_dump_runtime(const struct intel_device_info *info,
 void intel_device_info_dump_topology(const struct sseu_dev_info *sseu,
 				     struct drm_printer *p);

+void intel_device_info_init_mmio(struct drm_i915_private *dev_priv);
+
 void intel_driver_caps_print(const struct intel_driver_caps *caps,
 			     struct drm_printer *p);
...
This diff is collapsed.
@@ -180,9 +180,11 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
 	intel_dp->active_mst_links--;

 	intel_mst->connector = NULL;
-	if (intel_dp->active_mst_links == 0)
+	if (intel_dp->active_mst_links == 0) {
+		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
 		intel_dig_port->base.post_disable(&intel_dig_port->base,
 						  old_crtc_state, NULL);
+	}

 	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
 }
@@ -223,7 +225,11 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,

 	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);

+	if (intel_dp->active_mst_links == 0)
+		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+
 	drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);
+
 	if (intel_dp->active_mst_links == 0)
 		intel_dig_port->base.pre_enable(&intel_dig_port->base,
 						pipe_config, NULL);
...
This diff is collapsed.
@@ -80,7 +80,7 @@ void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
 	}

 	might_sleep();
-	intel_psr_invalidate(dev_priv, frontbuffer_bits);
+	intel_psr_invalidate(dev_priv, frontbuffer_bits, origin);
 	intel_edp_drrs_invalidate(dev_priv, frontbuffer_bits);
 	intel_fbc_invalidate(dev_priv, frontbuffer_bits, origin);
 }
...
This diff is collapsed.