Commit 19b232b9 authored by Daniel Vetter

Merge tag 'drm-xe-next-2024-02-25' of ssh://gitlab.freedesktop.org/drm/xe/kernel into drm-next

drm/xe feature pull for v6.9:

UAPI Changes:

- New query to retrieve the GuC firmware submission version (José Roberto de Souza)
- Remove unused persistent exec_queues (Thomas Hellström)
- Add vram frequency sysfs attributes (Sujaritha Sundaresan, Rodrigo Vivi)
- Add the flag XE_VM_BIND_FLAG_DUMPABLE to notify devcoredump that the mapping
  should be dumped (Maarten Lankhorst)

Cross-driver Changes:

- Make sure intel_wakeref_t is treated as an opaque type in i915-display
  and fix its type in xe

Driver Changes:

- Drop pre-production workarounds (Matt Roper)
- Drop kunit tests for unsupported platforms: PVC and pre-production DG2 (Lucas De Marchi)
- Start plumbing SR-IOV support with memory-based interrupts
  for VF (Michal Wajdeczko)
- Allow mapping a BO in GGTT with the PAT index corresponding to
  XE_CACHE_UC to work with memory-based interrupts (Michal Wajdeczko)
- Improve logging with GT-oriented drm_printers (Michal Wajdeczko)
- Add GuC Doorbells Manager as prep work for SR-IOV
  VF provisioning (Michal Wajdeczko)
- Refactor fake device handling in kunit integration (Michal Wajdeczko)
- Implement additional workarounds for xe2 and MTL (Tejas Upadhyay,
  Lucas De Marchi, Shekhar Chauhan, Karthik Poosa)
- Program a few registers according to the performance guide spec for Xe2 (Shekhar Chauhan)
- Add error handling for non-blocking communication with GuC (Daniele Ceraolo Spurio)
- Fix remaining 32-bit build issues and re-enable it (Lucas De Marchi)
- Fix build with CONFIG_DEBUG_FS=n (Jani Nikula)
- Fix warnings from GuC ABI headers (Matthew Brost)
- Introduce Relay Communication for SR-IOV for VF <-> GuC <-> PF (Michal Wajdeczko)
- Add MOCS reset kunit test (Ruthuvikas Ravikumar)
- Fix spelling mistakes (Colin Ian King)
- Disable mid-thread preemption when not properly supported by hardware (Nirmoy Das)
- Release mmap mappings on runtime suspend (Badal Nilawar)
- Fix BUG_ON on xe_exec by moving fence reservation to the validate stage (Matthew Auld)
- Fix xe_exec by reserving extra fence slot for CPU bind (Matthew Brost)
- Fix xe_exec with a full long-running exec queue, now returning
  -EWOULDBLOCK to userspace (Matthew Brost)
- Fix CT irq handler when CT is disabled (Matthew Brost)
- Fix VM_BIND_OP_UNMAP_ALL without any bound vmas (Thomas Hellström)
- Fix missing __iomem annotations (Thomas Hellström)
- Fix exec queue priority handling with GuC (Brian Welty)
- Fix setting SLPC flag to GuC when it's not supported (Vinay Belgaumkar)
- Fix C6 disabling without SLPC (Matt Roper)
- Drop -Wstringop-overflow to fix build with GCC11 (Paul E. McKenney)
- Circumvent bogus -Wstringop-overflow in one case (Arnd Bergmann)
- Refactor exec_queue user extensions handling and fix USM attributes
  being applied too late (Brian Welty)
- Use circ_buf head/tail convention (Matthew Brost)
- Fail build if circ_buf-related defines are modified with incompatible values
  (Matthew Brost)
- Fix several error paths (Dan Carpenter)
- Fix CCS copy for small VRAM copy chunks (Thomas Hellström)
- Rework driver initialization order and paths to account for driver running
  in VF mode (Michal Wajdeczko)
- Initialize GuC earlier during probe to handle driver in VF mode (Michał Winiarski)
- Fix migration use of MI_STORE_DATA_IMM to write PTEs (Matt Roper)
- Fix bounds checking in __xe_bo_placement_for_flags (Brian Welty)
- Drop display dependency on CONFIG_EXPERT (Jani Nikula)
- Do not hand-roll kstrdup when creating snapshot (Michal Wajdeczko)
- Stop creating one kunit module per kunit suite (Lucas De Marchi)
- Reduce scope and constify variables (Thomas Hellström, Jani Nikula, Michal Wajdeczko)
- Improve and document xe_guc_ct_send_recv() (Michal Wajdeczko)
- Add proxy communication between CSME and GSC uC (Daniele Ceraolo Spurio)
- Fix size calculation when writing pgtable (Fei Yang)
- Make sure cfb is page size aligned in stolen memory (Vinod Govindapillai)
- Stop printing guc log to dmesg when waiting for GuC fails (Rodrigo Vivi)
- Use XE_CACHE_WB instead of XE_CACHE_NONE for CPU coherency on migration
  (Himal Prasad Ghimiray)
- Fix error path in xe_vm_create (Moti Haimovski)
- Fix warnings in doc generation (Thomas Hellström, Badal Nilawar)
- Improve devcoredump content for mesa debugging (José Roberto de Souza)
- Fix crash in trace_dma_fence_init() (José Roberto de Souza)
- Improve CT state change handling (Matthew Brost)
- Toggle USM support for Xe2 (Lucas De Marchi)
- Reduce code duplication when emitting PIPE_CONTROL (José Roberto de Souza)
- Canonicalize addresses where needed for Xe2 and add to devcoredump
  (José Roberto de Souza)
- Only allow 1 ufence per exec / bind IOCTL (Matthew Brost)
- Move all display code to display/ (Jani Nikula)
- Fix sparse warnings by correctly using annotations (Thomas Hellström)
- Warn on job timeouts instead of using asserts (Matt Roper)
- Prefix macros to avoid clashes with sparc (Matthew Brost)
- Fix -Walloc-size by subclassing instead of allocating size smaller than struct (Thomas Hellström)
- Add status check during gsc header readout (Suraj Kandpal)
- Fix infinite loop in vm_bind_ioctl_ops_unwind() (Matthew Brost)
- Fix fence refcounting (Matthew Brost)
- Fix picking incorrect userptr VMA (Matthew Brost)
- Fix USM on integrated by mapping both mem.kernel_bb_pool and usm.bb_pool (Matthew Brost)
- Fix double initialization of display power domains (Xiaoming Wang)
- Check expected uC versions by major.minor.patch instead of just major.minor (John Harrison)
- Bump minimum GuC version to 70.19.2 for all platforms under force-probe
  (John Harrison)
- Add GuC firmware loading for Lunar Lake (John Harrison)
- Use kzalloc() instead of hand-rolled alloc + memset (Nirmoy Das)
- Fix max page size of VMA during a REMAP (Matthew Brost)
- Don't ignore error when pinning pages in kthread (Matthew Auld)
- Refactor xe hwmon (Karthik Poosa)
- Add debug logs for D3cold (Riana Tauro)
- Remove broken TEST_VM_ASYNC_OPS_ERROR (Matthew Brost)
- Always allow overriding the firmware blob with a module param and improve
  the log when no firmware is found (Lucas De Marchi)
- Fix shift-out-of-bounds due to xe_vm_prepare_vma() accepting zero fences (Thomas Hellström)
- Fix shift-out-of-bounds by distinguishing xe_pt/xe_pt_dir subclass (Thomas Hellström)
- Fail driver bind if platform supports MSIX, but fails to allocate all of them (Dani Liberman)
- Fix intel_fbdev thinking memory is backed by shmem (Matthew Auld)
- Prefer drm_dbg() over dev_dbg() (Jani Nikula)
- Avoid function cast warnings with clang-16 (Arnd Bergmann)
- Enhance xe_bo_move trace (Priyanka Dandamudi)
- Fix xe_vma_set_pte_size() not setting the right gpuva.flags for 4K size (Matthew Brost)
- Add XE_VMA_PTE_64K VMA flag (Matthew Brost)
- Return 2MB page size for compact 64k PTEs (Matthew Brost)
- Remove usage of the deprecated ida_simple_xx() API (Christophe JAILLET)
- Fix modpost warning on xe_mocs live kunit module (Ashutosh Dixit)
- Drop extra newline from sysfs files (Ashutosh Dixit)
- Implement VM snapshot support for BOs and userptr (Maarten Lankhorst)
- Add debug logs when skipping rebinds (Matthew Brost)
- Fix code generation when mixing build directories (Dafna Hirschfeld)
- Prefer struct_size over open-coded arithmetic (Erick Archer)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/dbdkrwmcoqqlwftuc3olbauazc3pbamj26wa34puztowsnauoh@i3zms7ut4yuw
parents 71ab34f7 a7a3d736
@@ -11,3 +11,8 @@ CONFIG_DRM_XE_DISPLAY=n
CONFIG_EXPERT=y
CONFIG_FB=y
CONFIG_DRM_XE_KUNIT_TEST=y
+CONFIG_LOCK_DEBUGGING_SUPPORT=y
+CONFIG_PROVE_LOCKING=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_LOCKDEP=y
+CONFIG_DEBUG_LOCKDEP=y

# SPDX-License-Identifier: GPL-2.0-only
config DRM_XE
	tristate "Intel Xe Graphics"
-	depends on DRM && PCI && MMU && (m || (y && KUNIT=y)) && 64BIT
+	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
	select INTERVAL_TREE
	# we need shmfs for the swappable backing store, and in particular
	# the shmem_readpage() which depends upon tmpfs
......
...@@ -76,6 +76,7 @@ xe-y += xe_bb.o \ ...@@ -76,6 +76,7 @@ xe-y += xe_bb.o \
xe_ggtt.o \ xe_ggtt.o \
xe_gpu_scheduler.o \ xe_gpu_scheduler.o \
xe_gsc.o \ xe_gsc.o \
xe_gsc_proxy.o \
xe_gsc_submit.o \ xe_gsc_submit.o \
xe_gt.o \ xe_gt.o \
xe_gt_ccs_mode.o \ xe_gt_ccs_mode.o \
...@@ -92,6 +93,7 @@ xe-y += xe_bb.o \ ...@@ -92,6 +93,7 @@ xe-y += xe_bb.o \
xe_guc.o \ xe_guc.o \
xe_guc_ads.o \ xe_guc_ads.o \
xe_guc_ct.o \ xe_guc_ct.o \
xe_guc_db_mgr.o \
xe_guc_debugfs.o \ xe_guc_debugfs.o \
xe_guc_hwconfig.o \ xe_guc_hwconfig.o \
xe_guc_log.o \ xe_guc_log.o \
...@@ -137,6 +139,7 @@ xe-y += xe_bb.o \ ...@@ -137,6 +139,7 @@ xe-y += xe_bb.o \
xe_uc_debugfs.o \ xe_uc_debugfs.o \
xe_uc_fw.o \ xe_uc_fw.o \
xe_vm.o \ xe_vm.o \
xe_vram_freq.o \
xe_wait_user_fence.o \ xe_wait_user_fence.o \
xe_wa.o \ xe_wa.o \
xe_wopcm.o xe_wopcm.o
...@@ -145,18 +148,23 @@ xe-y += xe_bb.o \ ...@@ -145,18 +148,23 @@ xe-y += xe_bb.o \
xe-$(CONFIG_HWMON) += xe_hwmon.o xe-$(CONFIG_HWMON) += xe_hwmon.o
# graphics virtualization (SR-IOV) support # graphics virtualization (SR-IOV) support
xe-y += xe_sriov.o xe-y += \
xe_guc_relay.o \
xe_memirq.o \
xe_sriov.o
xe-$(CONFIG_PCI_IOV) += \ xe-$(CONFIG_PCI_IOV) += \
xe_lmtt.o \ xe_lmtt.o \
xe_lmtt_2l.o \ xe_lmtt_2l.o \
xe_lmtt_ml.o xe_lmtt_ml.o
xe-$(CONFIG_DRM_XE_KUNIT_TEST) += \
tests/xe_kunit_helpers.o
# i915 Display compat #defines and #includes # i915 Display compat #defines and #includes
subdir-ccflags-$(CONFIG_DRM_XE_DISPLAY) += \ subdir-ccflags-$(CONFIG_DRM_XE_DISPLAY) += \
-I$(srctree)/$(src)/display/ext \ -I$(srctree)/$(src)/display/ext \
-I$(srctree)/$(src)/compat-i915-headers \ -I$(srctree)/$(src)/compat-i915-headers \
-I$(srctree)/drivers/gpu/drm/xe/display/ \
-I$(srctree)/drivers/gpu/drm/i915/display/ \ -I$(srctree)/drivers/gpu/drm/i915/display/ \
-Ddrm_i915_gem_object=xe_bo \ -Ddrm_i915_gem_object=xe_bo \
-Ddrm_i915_private=xe_device -Ddrm_i915_private=xe_device
...@@ -176,17 +184,17 @@ $(obj)/i915-display/%.o: $(srctree)/drivers/gpu/drm/i915/display/%.c FORCE ...@@ -176,17 +184,17 @@ $(obj)/i915-display/%.o: $(srctree)/drivers/gpu/drm/i915/display/%.c FORCE
# Display code specific to xe # Display code specific to xe
xe-$(CONFIG_DRM_XE_DISPLAY) += \ xe-$(CONFIG_DRM_XE_DISPLAY) += \
xe_display.o \ display/ext/i915_irq.o \
display/xe_fb_pin.o \ display/ext/i915_utils.o \
display/xe_hdcp_gsc.o \ display/intel_fb_bo.o \
display/xe_plane_initial.o \ display/intel_fbdev_fb.o \
display/xe_display_rps.o \ display/xe_display.o \
display/xe_display_misc.o \ display/xe_display_misc.o \
display/xe_display_rps.o \
display/xe_dsb_buffer.o \ display/xe_dsb_buffer.o \
display/intel_fbdev_fb.o \ display/xe_fb_pin.o \
display/intel_fb_bo.o \ display/xe_hdcp_gsc.o \
display/ext/i915_irq.o \ display/xe_plane_initial.o
display/ext/i915_utils.o
# SOC code shared with i915 # SOC code shared with i915
xe-$(CONFIG_DRM_XE_DISPLAY) += \ xe-$(CONFIG_DRM_XE_DISPLAY) += \
...@@ -213,8 +221,6 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \ ...@@ -213,8 +221,6 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/intel_ddi.o \ i915-display/intel_ddi.o \
i915-display/intel_ddi_buf_trans.o \ i915-display/intel_ddi_buf_trans.o \
i915-display/intel_display.o \ i915-display/intel_display.o \
i915-display/intel_display_debugfs.o \
i915-display/intel_display_debugfs_params.o \
i915-display/intel_display_device.o \ i915-display/intel_display_device.o \
i915-display/intel_display_driver.o \ i915-display/intel_display_driver.o \
i915-display/intel_display_irq.o \ i915-display/intel_display_irq.o \
...@@ -258,7 +264,6 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \ ...@@ -258,7 +264,6 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/intel_modeset_setup.o \ i915-display/intel_modeset_setup.o \
i915-display/intel_modeset_verify.o \ i915-display/intel_modeset_verify.o \
i915-display/intel_panel.o \ i915-display/intel_panel.o \
i915-display/intel_pipe_crc.o \
i915-display/intel_pmdemand.o \ i915-display/intel_pmdemand.o \
i915-display/intel_pps.o \ i915-display/intel_pps.o \
i915-display/intel_psr.o \ i915-display/intel_psr.o \
...@@ -285,6 +290,13 @@ ifeq ($(CONFIG_DRM_FBDEV_EMULATION),y) ...@@ -285,6 +290,13 @@ ifeq ($(CONFIG_DRM_FBDEV_EMULATION),y)
xe-$(CONFIG_DRM_XE_DISPLAY) += i915-display/intel_fbdev.o xe-$(CONFIG_DRM_XE_DISPLAY) += i915-display/intel_fbdev.o
endif endif
ifeq ($(CONFIG_DEBUG_FS),y)
xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/intel_display_debugfs.o \
i915-display/intel_display_debugfs_params.o \
i915-display/intel_pipe_crc.o
endif
obj-$(CONFIG_DRM_XE) += xe.o obj-$(CONFIG_DRM_XE) += xe.o
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += tests/ obj-$(CONFIG_DRM_XE_KUNIT_TEST) += tests/
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GSC_PROXY_COMMANDS_ABI_H
#define _ABI_GSC_PROXY_COMMANDS_ABI_H
#include <linux/types.h>
/* Heci client ID for proxy commands */
#define HECI_MEADDRESS_PROXY 10
/* FW-defined proxy header */
struct xe_gsc_proxy_header {
/*
* hdr:
* Bits 0-7: type of the proxy message (see enum xe_gsc_proxy_type)
* Bits 8-15: rsvd
* Bits 16-31: length in bytes of the payload following the proxy header
*/
u32 hdr;
#define GSC_PROXY_TYPE GENMASK(7, 0)
#define GSC_PROXY_PAYLOAD_LENGTH GENMASK(31, 16)
u32 source; /* Source of the Proxy message */
u32 destination; /* Destination of the Proxy message */
#define GSC_PROXY_ADDRESSING_KMD 0x10000
#define GSC_PROXY_ADDRESSING_GSC 0x20000
#define GSC_PROXY_ADDRESSING_CSME 0x30000
u32 status; /* Command status */
} __packed;
/* FW-defined proxy types */
enum xe_gsc_proxy_type {
GSC_PROXY_MSG_TYPE_PROXY_INVALID = 0,
GSC_PROXY_MSG_TYPE_PROXY_QUERY = 1,
GSC_PROXY_MSG_TYPE_PROXY_PAYLOAD = 2,
GSC_PROXY_MSG_TYPE_PROXY_END = 3,
GSC_PROXY_MSG_TYPE_PROXY_NOTIFICATION = 4,
};
#endif
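To make the bit layout above concrete, here is a small illustrative helper; it is not part of this merge, and the helper name and parameters are made up, but it only uses the GSC_PROXY_* masks, addressing values and message types defined in this header:

#include <linux/bitfield.h>

/* Hypothetical sketch: pack a proxy header for a KMD -> GSC query. */
static void gsc_proxy_header_fill(struct xe_gsc_proxy_header *hdr, u32 payload_len)
{
	hdr->hdr = FIELD_PREP(GSC_PROXY_TYPE, GSC_PROXY_MSG_TYPE_PROXY_QUERY) |
		   FIELD_PREP(GSC_PROXY_PAYLOAD_LENGTH, payload_len);
	hdr->source = GSC_PROXY_ADDRESSING_KMD;
	hdr->destination = GSC_PROXY_ADDRESSING_GSC;
	hdr->status = 0;
}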
@@ -81,12 +81,13 @@ static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
#define GUC_CTB_HDR_LEN 1u
#define GUC_CTB_MSG_MIN_LEN GUC_CTB_HDR_LEN
-#define GUC_CTB_MSG_MAX_LEN 256u
+#define GUC_CTB_MSG_MAX_LEN (GUC_CTB_MSG_MIN_LEN + GUC_CTB_MAX_DWORDS)
#define GUC_CTB_MSG_0_FENCE (0xffffu << 16)
#define GUC_CTB_MSG_0_FORMAT (0xfu << 12)
#define GUC_CTB_FORMAT_HXG 0u
#define GUC_CTB_MSG_0_RESERVED (0xfu << 8)
#define GUC_CTB_MSG_0_NUM_DWORDS (0xffu << 0)
+#define GUC_CTB_MAX_DWORDS 255
/**
 * DOC: CTB HXG Message
......
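Worth noting: the rewritten GUC_CTB_MSG_MAX_LEN above still evaluates to the previous 256u, since GUC_CTB_MSG_MIN_LEN is 1 and GUC_CTB_MAX_DWORDS is 255. A sanity check along these lines (not part of the patch) would be:

/* Not in the merge: 1 header dword + 255 payload dwords == old 256u limit. */
static_assert(GUC_CTB_MSG_MIN_LEN + GUC_CTB_MAX_DWORDS == 256);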
@@ -24,6 +24,7 @@
 * | | 30:28 | **TYPE** - message type |
 * | | | - _`GUC_HXG_TYPE_REQUEST` = 0 |
 * | | | - _`GUC_HXG_TYPE_EVENT` = 1 |
+ * | | | - _`GUC_HXG_TYPE_FAST_REQUEST` = 2 |
 * | | | - _`GUC_HXG_TYPE_NO_RESPONSE_BUSY` = 3 |
 * | | | - _`GUC_HXG_TYPE_NO_RESPONSE_RETRY` = 5 |
 * | | | - _`GUC_HXG_TYPE_RESPONSE_FAILURE` = 6 |
@@ -46,6 +47,7 @@
#define GUC_HXG_MSG_0_TYPE (0x7u << 28)
#define GUC_HXG_TYPE_REQUEST 0u
#define GUC_HXG_TYPE_EVENT 1u
+#define GUC_HXG_TYPE_FAST_REQUEST 2u
#define GUC_HXG_TYPE_NO_RESPONSE_BUSY 3u
#define GUC_HXG_TYPE_NO_RESPONSE_RETRY 5u
#define GUC_HXG_TYPE_RESPONSE_FAILURE 6u
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GUC_RELAY_ACTIONS_ABI_H_
#define _ABI_GUC_RELAY_ACTIONS_ABI_H_
/**
* DOC: GuC Relay Debug Actions
*
* This range of action codes is reserved for debugging purposes only and should
* be used only on debug builds. These actions may not be supported by the
* production drivers. Their definitions could be changed in the future.
*
* _`GUC_RELAY_ACTION_DEBUG_ONLY_START` = 0xDEB0
* _`GUC_RELAY_ACTION_DEBUG_ONLY_END` = 0xDEFF
*/
#define GUC_RELAY_ACTION_DEBUG_ONLY_START 0xDEB0
#define GUC_RELAY_ACTION_DEBUG_ONLY_END 0xDEFF
/**
* DOC: VFXPF_TESTLOOP
*
* This `Relay Message`_ is used to selftest the `GuC Relay Communication`_.
*
* The following opcodes are defined:
* VFXPF_TESTLOOP_OPCODE_NOP_ will return no data.
* VFXPF_TESTLOOP_OPCODE_BUSY_ will reply with BUSY response first.
* VFXPF_TESTLOOP_OPCODE_RETRY_ will reply with RETRY response instead.
* VFXPF_TESTLOOP_OPCODE_ECHO_ will return same data as received.
* VFXPF_TESTLOOP_OPCODE_FAIL_ will always fail with error.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ or GUC_HXG_TYPE_FAST_REQUEST_ |
* | | | or GUC_HXG_TYPE_EVENT_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | **OPCODE** |
* | | | - _`VFXPF_TESTLOOP_OPCODE_NOP` = 0x0 |
* | | | - _`VFXPF_TESTLOOP_OPCODE_BUSY` = 0xB |
* | | | - _`VFXPF_TESTLOOP_OPCODE_RETRY` = 0xD |
* | | | - _`VFXPF_TESTLOOP_OPCODE_ECHO` = 0xE |
* | | | - _`VFXPF_TESTLOOP_OPCODE_FAIL` = 0xF |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`IOV_ACTION_SELFTEST_RELAY` |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | **DATA1** = optional, depends on **OPCODE**: |
* | | | for VFXPF_TESTLOOP_OPCODE_BUSY_: time in ms for reply |
* | | | for VFXPF_TESTLOOP_OPCODE_FAIL_: expected error |
* | | | for VFXPF_TESTLOOP_OPCODE_ECHO_: payload |
* +---+-------+--------------------------------------------------------------+
* |...| 31:0 | **DATAn** = only for **OPCODE** VFXPF_TESTLOOP_OPCODE_ECHO_ |
* +---+-------+--------------------------------------------------------------+
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | DATA0 = MBZ |
* +---+-------+--------------------------------------------------------------+
* |...| 31:0 | DATAn = only for **OPCODE** VFXPF_TESTLOOP_OPCODE_ECHO_ |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_RELAY_ACTION_VFXPF_TESTLOOP (GUC_RELAY_ACTION_DEBUG_ONLY_START + 1)
#define VFXPF_TESTLOOP_OPCODE_NOP 0x0
#define VFXPF_TESTLOOP_OPCODE_BUSY 0xB
#define VFXPF_TESTLOOP_OPCODE_RETRY 0xD
#define VFXPF_TESTLOOP_OPCODE_ECHO 0xE
#define VFXPF_TESTLOOP_OPCODE_FAIL 0xF
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GUC_RELAY_COMMUNICATION_ABI_H
#define _ABI_GUC_RELAY_COMMUNICATION_ABI_H
#include <linux/build_bug.h>
#include "guc_actions_sriov_abi.h"
#include "guc_communication_ctb_abi.h"
#include "guc_messages_abi.h"
/**
* DOC: GuC Relay Communication
*
* The communication between Virtual Function (VF) drivers and Physical Function
* (PF) drivers is based on the GuC firmware acting as a proxy (relay) agent.
*
* To communicate with the PF driver, VF's drivers use `VF2GUC_RELAY_TO_PF`_
* action that takes the `Relay Message`_ as opaque payload and requires the
* relay message identifier (RID) as additional parameter.
*
* This identifier is used by the drivers to match related messages.
*
* The GuC forwards this `Relay Message`_ and its identifier to the PF driver
* in `GUC2PF_RELAY_FROM_VF`_ action. This event message additionally contains
* the identifier of the origin VF (VFID).
*
* Likewise, to communicate with the VF drivers, PF driver use
* `VF2GUC_RELAY_TO_PF`_ action that in addition to the `Relay Message`_
* and the relay message identifier (RID) also takes the target VF identifier.
*
* The GuC uses this target VFID from the message to select where to send the
* `GUC2VF_RELAY_FROM_PF`_ with the embedded `Relay Message`_ with response::
*
* VF GuC PF
* | | |
* [ ] VF2GUC_RELAY_TO_PF | |
* [ ]---------------------------> [ ] |
* [ ] { rid, msg } [ ] |
* [ ] [ ] GUC2PF_RELAY_FROM_VF |
* [ ] [ ]---------------------------> [ ]
* [ ] | { VFID, rid, msg } [ ]
* [ ] | [ ]
* [ ] | PF2GUC_RELAY_TO_VF [ ]
* [ ] [ ] <---------------------------[ ]
* [ ] [ ] { VFID, rid, reply } |
* [ ] GUC2VF_RELAY_FROM_PF [ ] |
* [ ] <---------------------------[ ] |
* | { rid, reply } | |
* | | |
*
* It is also possible that PF driver will initiate communication with the
* selected VF driver. The same GuC action messages will be used::
*
* VF GuC PF
* | | |
* | | PF2GUC_RELAY_TO_VF [ ]
* | [ ] <---------------------------[ ]
* | [ ] { VFID, rid, msg } [ ]
* | GUC2VF_RELAY_FROM_PF [ ] [ ]
* [ ] <---------------------------[ ] [ ]
* [ ] { rid, msg } | [ ]
* [ ] | [ ]
* [ ] VF2GUC_RELAY_TO_PF | [ ]
* [ ]---------------------------> [ ] [ ]
* | { rid, reply } [ ] [ ]
* | [ ] GUC2PF_RELAY_FROM_VF [ ]
* | [ ]---------------------------> [ ]
* | | { VFID, rid, reply } |
* | | |
*/
/**
* DOC: Relay Message
*
* The `Relay Message`_ is used by Physical Function (PF) driver and Virtual
* Function (VF) drivers to communicate using `GuC Relay Communication`_.
*
* Format of the `Relay Message`_ follows format of the generic `HXG Message`_.
*
* +--------------------------------------------------------------------------+
* | `Relay Message`_ |
* +==========================================================================+
* | `HXG Message`_ |
* +--------------------------------------------------------------------------+
*
* Maximum length of the `Relay Message`_ is limited by the maximum length of
* the `CTB HXG Message`_ and format of the `GUC2PF_RELAY_FROM_VF`_ message.
*/
#define GUC_RELAY_MSG_MIN_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_RELAY_MSG_MAX_LEN \
(GUC_CTB_MAX_DWORDS - GUC2PF_RELAY_FROM_VF_EVENT_MSG_MIN_LEN)
static_assert(PF2GUC_RELAY_TO_VF_REQUEST_MSG_MIN_LEN >
VF2GUC_RELAY_TO_PF_REQUEST_MSG_MIN_LEN);
/**
* DOC: Relay Error Codes
*
* The `GuC Relay Communication`_ can be used to pass `Relay Message`_ between
* drivers that run on different Operating Systems. To help in troubleshooting,
* `GuC Relay Communication`_ uses error codes that mostly match errno values.
*/
#define GUC_RELAY_ERROR_UNDISCLOSED 0
#define GUC_RELAY_ERROR_OPERATION_NOT_PERMITTED 1 /* EPERM */
#define GUC_RELAY_ERROR_PERMISSION_DENIED 13 /* EACCES */
#define GUC_RELAY_ERROR_INVALID_ARGUMENT 22 /* EINVAL */
#define GUC_RELAY_ERROR_INVALID_REQUEST_CODE 56 /* EBADRQC */
#define GUC_RELAY_ERROR_NO_DATA_AVAILABLE 61 /* ENODATA */
#define GUC_RELAY_ERROR_PROTOCOL_ERROR 71 /* EPROTO */
#define GUC_RELAY_ERROR_MESSAGE_SIZE 90 /* EMSGSIZE */
#endif
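Since the error codes above mostly mirror errno values, a receiver can translate them back into negative errno codes. The helper below is only an illustration of that mapping (hypothetical function name, not part of this merge):

#include <linux/errno.h>
#include <linux/types.h>

static int guc_relay_error_to_errno(u32 error)
{
	switch (error) {
	case GUC_RELAY_ERROR_OPERATION_NOT_PERMITTED:
		return -EPERM;
	case GUC_RELAY_ERROR_PERMISSION_DENIED:
		return -EACCES;
	case GUC_RELAY_ERROR_INVALID_ARGUMENT:
		return -EINVAL;
	case GUC_RELAY_ERROR_INVALID_REQUEST_CODE:
		return -EBADRQC;
	case GUC_RELAY_ERROR_NO_DATA_AVAILABLE:
		return -ENODATA;
	case GUC_RELAY_ERROR_PROTOCOL_ERROR:
		return -EPROTO;
	case GUC_RELAY_ERROR_MESSAGE_SIZE:
		return -EMSGSIZE;
	case GUC_RELAY_ERROR_UNDISCLOSED:
	default:
		return -EIO;	/* unspecified or unknown failure */
	}
}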
@@ -10,7 +10,7 @@
#include "xe_bo.h"
-#define i915_gem_object_is_shmem(obj) ((obj)->flags & XE_BO_CREATE_SYSTEM_BIT)
+#define i915_gem_object_is_shmem(obj) (0) /* We don't use shmem */
static inline dma_addr_t i915_gem_object_get_dma_address(const struct xe_bo *bo, pgoff_t n)
{
......
@@ -162,18 +162,18 @@ static inline struct drm_i915_private *kdev_to_i915(struct device *kdev)
#include "intel_wakeref.h"
-static inline bool intel_runtime_pm_get(struct xe_runtime_pm *pm)
+static inline intel_wakeref_t intel_runtime_pm_get(struct xe_runtime_pm *pm)
{
	struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
	if (xe_pm_runtime_get(xe) < 0) {
		xe_pm_runtime_put(xe);
-		return false;
+		return 0;
	}
-	return true;
+	return 1;
}
-static inline bool intel_runtime_pm_get_if_in_use(struct xe_runtime_pm *pm)
+static inline intel_wakeref_t intel_runtime_pm_get_if_in_use(struct xe_runtime_pm *pm)
{
	struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
@@ -187,7 +187,7 @@ static inline void intel_runtime_pm_put_unchecked(struct xe_runtime_pm *pm)
	xe_pm_runtime_put(xe);
}
-static inline void intel_runtime_pm_put(struct xe_runtime_pm *pm, bool wakeref)
+static inline void intel_runtime_pm_put(struct xe_runtime_pm *pm, intel_wakeref_t wakeref)
{
	if (wakeref)
		intel_runtime_pm_put_unchecked(pm);
......
@@ -19,6 +19,9 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
	int err;
	u32 flags = XE_BO_CREATE_PINNED_BIT | XE_BO_CREATE_STOLEN_BIT;
+	if (align)
+		size = ALIGN(size, align);
	bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe),
				       NULL, size, start, end,
				       ttm_bo_type_kernel, flags);
......
@@ -134,8 +134,6 @@ static void xe_display_fini_nommio(struct drm_device *dev, void *dummy)
int xe_display_init_nommio(struct xe_device *xe)
{
-	int err;
	if (!xe->info.enable_display)
		return 0;
@@ -145,10 +143,6 @@ int xe_display_init_nommio(struct xe_device *xe)
	/* This must be called before any calls to HAS_PCH_* */
	intel_detect_pch(xe);
-	err = intel_power_domains_init(xe);
-	if (err)
-		return err;
	return drmm_add_action_or_reset(&xe->drm, xe_display_fini_nommio, xe);
}
......
@@ -56,6 +56,9 @@
#define MI_FLUSH_IMM_QW REG_FIELD_PREP(MI_FLUSH_DW_LEN_DW, 5 - 2)
#define MI_FLUSH_DW_USE_GTT REG_BIT(2)
+#define MI_LOAD_REGISTER_MEM (__MI_INSTR(0x29) | XE_INSTR_NUM_DW(4))
+#define MI_LRM_USE_GGTT REG_BIT(22)
#define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
#endif
@@ -75,12 +75,17 @@
#define FF_THREAD_MODE(base) XE_REG((base) + 0xa0)
#define FF_TESSELATION_DOP_GATE_DISABLE BIT(19)
+#define RING_INT_SRC_RPT_PTR(base) XE_REG((base) + 0xa4)
#define RING_IMR(base) XE_REG((base) + 0xa8)
+#define RING_INT_STATUS_RPT_PTR(base) XE_REG((base) + 0xac)
#define RING_EIR(base) XE_REG((base) + 0xb0)
#define RING_EMR(base) XE_REG((base) + 0xb4)
#define RING_ESR(base) XE_REG((base) + 0xb8)
+#define INSTPM(base) XE_REG((base) + 0xc0, XE_REG_OPTION_MASKED)
+#define ENABLE_SEMAPHORE_POLL_BIT REG_BIT(13)
#define RING_CMD_CCTL(base) XE_REG((base) + 0xc4, XE_REG_OPTION_MASKED)
/*
 * CMD_CCTL read/write fields take a MOCS value and _not_ a table index.
@@ -136,6 +141,7 @@
#define TAIL_ADDR 0x001FFFF8
#define RING_CTX_TIMESTAMP(base) XE_REG((base) + 0x3a8)
+#define CSBE_DEBUG_STATUS(base) XE_REG((base) + 0x3fc)
#define RING_FORCE_TO_NONPRIV(base, i) XE_REG(((base) + 0x4d0) + (i) * 4)
#define RING_FORCE_TO_NONPRIV_DENY REG_BIT(30)
......
...@@ -144,8 +144,12 @@ ...@@ -144,8 +144,12 @@
#define GSCPSMI_BASE XE_REG(0x880c) #define GSCPSMI_BASE XE_REG(0x880c)
#define CCCHKNREG1 XE_REG_MCR(0x8828)
#define ENCOMPPERFFIX REG_BIT(18)
/* Fuse readout registers for GT */ /* Fuse readout registers for GT */
#define XEHP_FUSE4 XE_REG(0x9114) #define XEHP_FUSE4 XE_REG(0x9114)
#define CFEG_WMTP_DISABLE REG_BIT(20)
#define CCS_EN_MASK REG_GENMASK(19, 16) #define CCS_EN_MASK REG_GENMASK(19, 16)
#define GT_L3_EXC_MASK REG_GENMASK(6, 4) #define GT_L3_EXC_MASK REG_GENMASK(6, 4)
...@@ -288,6 +292,9 @@ ...@@ -288,6 +292,9 @@
#define XEHP_L3NODEARBCFG XE_REG_MCR(0xb0b4) #define XEHP_L3NODEARBCFG XE_REG_MCR(0xb0b4)
#define XEHP_LNESPARE REG_BIT(19) #define XEHP_LNESPARE REG_BIT(19)
#define L3SQCREG3 XE_REG_MCR(0xb108)
#define COMPPWOVERFETCHEN REG_BIT(28)
#define XEHP_L3SQCREG5 XE_REG_MCR(0xb158) #define XEHP_L3SQCREG5 XE_REG_MCR(0xb158)
#define L3_PWM_TIMER_INIT_VAL_MASK REG_GENMASK(9, 0) #define L3_PWM_TIMER_INIT_VAL_MASK REG_GENMASK(9, 0)
...@@ -344,6 +351,9 @@ ...@@ -344,6 +351,9 @@
#define ROW_CHICKEN3 XE_REG_MCR(0xe49c, XE_REG_OPTION_MASKED) #define ROW_CHICKEN3 XE_REG_MCR(0xe49c, XE_REG_OPTION_MASKED)
#define DIS_FIX_EOT1_FLUSH REG_BIT(9) #define DIS_FIX_EOT1_FLUSH REG_BIT(9)
#define TDL_TSL_CHICKEN XE_REG_MCR(0xe4c4, XE_REG_OPTION_MASKED)
#define SLM_WMTP_RESTORE REG_BIT(11)
#define ROW_CHICKEN XE_REG_MCR(0xe4f0, XE_REG_OPTION_MASKED) #define ROW_CHICKEN XE_REG_MCR(0xe4f0, XE_REG_OPTION_MASKED)
#define UGM_BACKUP_MODE REG_BIT(13) #define UGM_BACKUP_MODE REG_BIT(13)
#define MDQ_ARBITRATION_MODE REG_BIT(12) #define MDQ_ARBITRATION_MODE REG_BIT(12)
...@@ -430,6 +440,15 @@ ...@@ -430,6 +440,15 @@
#define VOLTAGE_MASK REG_GENMASK(10, 0) #define VOLTAGE_MASK REG_GENMASK(10, 0)
#define GT_INTR_DW(x) XE_REG(0x190018 + ((x) * 4)) #define GT_INTR_DW(x) XE_REG(0x190018 + ((x) * 4))
#define INTR_GSC REG_BIT(31)
#define INTR_GUC REG_BIT(25)
#define INTR_MGUC REG_BIT(24)
#define INTR_BCS8 REG_BIT(23)
#define INTR_BCS(x) REG_BIT(15 - (x))
#define INTR_CCS(x) REG_BIT(4 + (x))
#define INTR_RCS0 REG_BIT(0)
#define INTR_VECS(x) REG_BIT(31 - (x))
#define INTR_VCS(x) REG_BIT(x)
#define RENDER_COPY_INTR_ENABLE XE_REG(0x190030) #define RENDER_COPY_INTR_ENABLE XE_REG(0x190030)
#define VCS_VECS_INTR_ENABLE XE_REG(0x190034) #define VCS_VECS_INTR_ENABLE XE_REG(0x190034)
...@@ -446,6 +465,7 @@ ...@@ -446,6 +465,7 @@
#define INTR_ENGINE_CLASS(x) REG_FIELD_GET(GENMASK(18, 16), x) #define INTR_ENGINE_CLASS(x) REG_FIELD_GET(GENMASK(18, 16), x)
#define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x) #define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
#define OTHER_GUC_INSTANCE 0 #define OTHER_GUC_INSTANCE 0
#define OTHER_GSC_HECI2_INSTANCE 3
#define OTHER_GSC_INSTANCE 6 #define OTHER_GSC_INSTANCE 6
#define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4)) #define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4))
...@@ -454,6 +474,7 @@ ...@@ -454,6 +474,7 @@
#define VCS0_VCS1_INTR_MASK XE_REG(0x1900a8) #define VCS0_VCS1_INTR_MASK XE_REG(0x1900a8)
#define VCS2_VCS3_INTR_MASK XE_REG(0x1900ac) #define VCS2_VCS3_INTR_MASK XE_REG(0x1900ac)
#define VECS0_VECS1_INTR_MASK XE_REG(0x1900d0) #define VECS0_VECS1_INTR_MASK XE_REG(0x1900d0)
#define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
#define GUC_SG_INTR_MASK XE_REG(0x1900e8) #define GUC_SG_INTR_MASK XE_REG(0x1900e8)
#define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec) #define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec)
#define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4) #define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4)
...@@ -469,10 +490,4 @@ ...@@ -469,10 +490,4 @@
#define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3) #define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
#define GT_RENDER_USER_INTERRUPT REG_BIT(0) #define GT_RENDER_USER_INTERRUPT REG_BIT(0)
#define PVC_GT0_PACKAGE_ENERGY_STATUS XE_REG(0x281004)
#define PVC_GT0_PACKAGE_RAPL_LIMIT XE_REG(0x281008)
#define PVC_GT0_PACKAGE_POWER_SKU_UNIT XE_REG(0x281068)
#define PVC_GT0_PLATFORM_ENERGY_STATUS XE_REG(0x28106c)
#define PVC_GT0_PACKAGE_POWER_SKU XE_REG(0x281080)
#endif #endif
@@ -14,4 +14,13 @@
#define CTX_PDP0_UDW (0x30 + 1)
#define CTX_PDP0_LDW (0x32 + 1)
+#define CTX_LRM_INT_MASK_ENABLE 0x50
+#define CTX_INT_MASK_ENABLE_REG (CTX_LRM_INT_MASK_ENABLE + 1)
+#define CTX_INT_MASK_ENABLE_PTR (CTX_LRM_INT_MASK_ENABLE + 2)
+#define CTX_LRI_INT_REPORT_PTR 0x55
+#define CTX_INT_STATUS_REPORT_REG (CTX_LRI_INT_REPORT_PTR + 1)
+#define CTX_INT_STATUS_REPORT_PTR (CTX_LRI_INT_REPORT_PTR + 2)
+#define CTX_INT_SRC_REPORT_REG (CTX_LRI_INT_REPORT_PTR + 3)
+#define CTX_INT_SRC_REPORT_PTR (CTX_LRI_INT_REPORT_PTR + 4)
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef _XE_PCODE_REGS_H_
#define _XE_PCODE_REGS_H_
#include "regs/xe_reg_defs.h"
/*
* This file contains addresses of PCODE registers visible through GT MMIO space.
*/
#define PVC_GT0_PACKAGE_ENERGY_STATUS XE_REG(0x281004)
#define PVC_GT0_PACKAGE_RAPL_LIMIT XE_REG(0x281008)
#define PVC_GT0_PACKAGE_POWER_SKU_UNIT XE_REG(0x281068)
#define PVC_GT0_PLATFORM_ENERGY_STATUS XE_REG(0x28106c)
#define PVC_GT0_PACKAGE_POWER_SKU XE_REG(0x281080)
#endif /* _XE_PCODE_REGS_H_ */
# SPDX-License-Identifier: GPL-2.0
+# "live" kunit tests
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += \
	xe_bo_test.o \
	xe_dma_buf_test.o \
	xe_migrate_test.o \
-	xe_mocs_test.o \
+	xe_mocs_test.o
+
+# Normal kunit tests
+obj-$(CONFIG_DRM_XE_KUNIT_TEST) += xe_test.o
+xe_test-y = xe_test_mod.o \
	xe_pci_test.o \
	xe_rtp_test.o \
	xe_wa_test.o
// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include <kunit/test.h>
#include "xe_device.h"
#include "xe_kunit_helpers.h"
static int guc_dbm_test_init(struct kunit *test)
{
struct xe_guc_db_mgr *dbm;
xe_kunit_helper_xe_device_test_init(test);
dbm = &xe_device_get_gt(test->priv, 0)->uc.guc.dbm;
mutex_init(dbm_mutex(dbm));
test->priv = dbm;
return 0;
}
static void test_empty(struct kunit *test)
{
struct xe_guc_db_mgr *dbm = test->priv;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, 0), 0);
KUNIT_ASSERT_EQ(test, dbm->count, 0);
mutex_lock(dbm_mutex(dbm));
KUNIT_EXPECT_LT(test, xe_guc_db_mgr_reserve_id_locked(dbm), 0);
mutex_unlock(dbm_mutex(dbm));
KUNIT_EXPECT_LT(test, xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
}
static void test_default(struct kunit *test)
{
struct xe_guc_db_mgr *dbm = test->priv;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, ~0), 0);
KUNIT_ASSERT_EQ(test, dbm->count, GUC_NUM_DOORBELLS);
}
static const unsigned int guc_dbm_params[] = {
GUC_NUM_DOORBELLS / 64,
GUC_NUM_DOORBELLS / 32,
GUC_NUM_DOORBELLS / 8,
GUC_NUM_DOORBELLS,
};
static void uint_param_get_desc(const unsigned int *p, char *desc)
{
snprintf(desc, KUNIT_PARAM_DESC_SIZE, "%u", *p);
}
KUNIT_ARRAY_PARAM(guc_dbm, guc_dbm_params, uint_param_get_desc);
static void test_size(struct kunit *test)
{
const unsigned int *p = test->param_value;
struct xe_guc_db_mgr *dbm = test->priv;
unsigned int n;
int id;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, *p), 0);
KUNIT_ASSERT_EQ(test, dbm->count, *p);
mutex_lock(dbm_mutex(dbm));
for (n = 0; n < *p; n++) {
KUNIT_EXPECT_GE(test, id = xe_guc_db_mgr_reserve_id_locked(dbm), 0);
KUNIT_EXPECT_LT(test, id, dbm->count);
}
KUNIT_EXPECT_LT(test, xe_guc_db_mgr_reserve_id_locked(dbm), 0);
mutex_unlock(dbm_mutex(dbm));
mutex_lock(dbm_mutex(dbm));
for (n = 0; n < *p; n++)
xe_guc_db_mgr_release_id_locked(dbm, n);
mutex_unlock(dbm_mutex(dbm));
}
static void test_reuse(struct kunit *test)
{
const unsigned int *p = test->param_value;
struct xe_guc_db_mgr *dbm = test->priv;
unsigned int n;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, *p), 0);
mutex_lock(dbm_mutex(dbm));
for (n = 0; n < *p; n++)
KUNIT_EXPECT_GE(test, xe_guc_db_mgr_reserve_id_locked(dbm), 0);
KUNIT_EXPECT_LT(test, xe_guc_db_mgr_reserve_id_locked(dbm), 0);
mutex_unlock(dbm_mutex(dbm));
mutex_lock(dbm_mutex(dbm));
for (n = 0; n < *p; n++) {
xe_guc_db_mgr_release_id_locked(dbm, n);
KUNIT_EXPECT_EQ(test, xe_guc_db_mgr_reserve_id_locked(dbm), n);
}
KUNIT_EXPECT_LT(test, xe_guc_db_mgr_reserve_id_locked(dbm), 0);
mutex_unlock(dbm_mutex(dbm));
mutex_lock(dbm_mutex(dbm));
for (n = 0; n < *p; n++)
xe_guc_db_mgr_release_id_locked(dbm, n);
mutex_unlock(dbm_mutex(dbm));
}
static void test_range_overlap(struct kunit *test)
{
const unsigned int *p = test->param_value;
struct xe_guc_db_mgr *dbm = test->priv;
int id1, id2, id3;
unsigned int n;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, ~0), 0);
KUNIT_ASSERT_LE(test, *p, dbm->count);
KUNIT_ASSERT_GE(test, id1 = xe_guc_db_mgr_reserve_range(dbm, *p, 0), 0);
for (n = 0; n < dbm->count - *p; n++) {
KUNIT_ASSERT_GE(test, id2 = xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
KUNIT_ASSERT_NE(test, id2, id1);
KUNIT_ASSERT_NE_MSG(test, id2 < id1, id2 > id1 + *p - 1,
"id1=%d id2=%d", id1, id2);
}
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
xe_guc_db_mgr_release_range(dbm, 0, dbm->count);
if (*p >= 1) {
KUNIT_ASSERT_GE(test, id1 = xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
KUNIT_ASSERT_GE(test, id2 = xe_guc_db_mgr_reserve_range(dbm, *p - 1, 0), 0);
KUNIT_ASSERT_NE(test, id2, id1);
KUNIT_ASSERT_NE_MSG(test, id1 < id2, id1 > id2 + *p - 2,
"id1=%d id2=%d", id1, id2);
for (n = 0; n < dbm->count - *p; n++) {
KUNIT_ASSERT_GE(test, id3 = xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
KUNIT_ASSERT_NE(test, id3, id1);
KUNIT_ASSERT_NE(test, id3, id2);
KUNIT_ASSERT_NE_MSG(test, id3 < id2, id3 > id2 + *p - 2,
"id3=%d id2=%d", id3, id2);
}
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
xe_guc_db_mgr_release_range(dbm, 0, dbm->count);
}
}
static void test_range_compact(struct kunit *test)
{
const unsigned int *p = test->param_value;
struct xe_guc_db_mgr *dbm = test->priv;
unsigned int n;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, ~0), 0);
KUNIT_ASSERT_NE(test, *p, 0);
KUNIT_ASSERT_LE(test, *p, dbm->count);
if (dbm->count % *p)
kunit_skip(test, "must be divisible");
KUNIT_ASSERT_GE(test, xe_guc_db_mgr_reserve_range(dbm, *p, 0), 0);
for (n = 1; n < dbm->count / *p; n++)
KUNIT_ASSERT_GE(test, xe_guc_db_mgr_reserve_range(dbm, *p, 0), 0);
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, 1, 0), 0);
xe_guc_db_mgr_release_range(dbm, 0, dbm->count);
}
static void test_range_spare(struct kunit *test)
{
const unsigned int *p = test->param_value;
struct xe_guc_db_mgr *dbm = test->priv;
int id;
KUNIT_ASSERT_EQ(test, xe_guc_db_mgr_init(dbm, ~0), 0);
KUNIT_ASSERT_LE(test, *p, dbm->count);
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, *p, dbm->count), 0);
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, *p, dbm->count - *p + 1), 0);
KUNIT_ASSERT_EQ(test, id = xe_guc_db_mgr_reserve_range(dbm, *p, dbm->count - *p), 0);
KUNIT_ASSERT_LT(test, xe_guc_db_mgr_reserve_range(dbm, 1, dbm->count - *p), 0);
xe_guc_db_mgr_release_range(dbm, id, *p);
}
static struct kunit_case guc_dbm_test_cases[] = {
KUNIT_CASE(test_empty),
KUNIT_CASE(test_default),
KUNIT_CASE_PARAM(test_size, guc_dbm_gen_params),
KUNIT_CASE_PARAM(test_reuse, guc_dbm_gen_params),
KUNIT_CASE_PARAM(test_range_overlap, guc_dbm_gen_params),
KUNIT_CASE_PARAM(test_range_compact, guc_dbm_gen_params),
KUNIT_CASE_PARAM(test_range_spare, guc_dbm_gen_params),
{}
};
static struct kunit_suite guc_dbm_suite = {
.name = "guc_dbm",
.test_cases = guc_dbm_test_cases,
.init = guc_dbm_test_init,
};
kunit_test_suites(&guc_dbm_suite);
// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include <kunit/test.h>
#include <kunit/static_stub.h>
#include <kunit/visibility.h>
#include <drm/drm_drv.h>
#include <drm/drm_kunit_helpers.h>
#include "tests/xe_kunit_helpers.h"
#include "tests/xe_pci_test.h"
#include "xe_device_types.h"
/**
* xe_kunit_helper_alloc_xe_device - Allocate a &xe_device for a KUnit test.
* @test: the &kunit where this &xe_device will be used
* @dev: The parent device object
*
* This function allocates xe_device using drm_kunit_helper_alloc_device().
* The xe_device allocation is managed by the test.
*
* @dev should be allocated using drm_kunit_helper_alloc_device().
*
* This function uses KUNIT_ASSERT to detect any allocation failures.
*
* Return: A pointer to the new &xe_device.
*/
struct xe_device *xe_kunit_helper_alloc_xe_device(struct kunit *test,
struct device *dev)
{
struct xe_device *xe;
xe = drm_kunit_helper_alloc_drm_device(test, dev,
struct xe_device,
drm, DRIVER_GEM);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xe);
return xe;
}
EXPORT_SYMBOL_IF_KUNIT(xe_kunit_helper_alloc_xe_device);
static void kunit_action_restore_priv(void *priv)
{
struct kunit *test = kunit_get_current_test();
test->priv = priv;
}
/**
* xe_kunit_helper_xe_device_test_init - Prepare a &xe_device for a KUnit test.
* @test: the &kunit where this fake &xe_device will be used
*
* This function allocates and initializes a fake &xe_device and stores its
* pointer as &kunit.priv to allow the test code to access it.
*
* This function can be directly used as custom implementation of
* &kunit_suite.init.
*
* It is possible to prepare specific variant of the fake &xe_device by passing
* in &kunit.priv pointer to the struct xe_pci_fake_data supplemented with
* desired parameters prior to calling this function.
*
* This function uses KUNIT_ASSERT to detect any failures.
*
* Return: Always 0.
*/
int xe_kunit_helper_xe_device_test_init(struct kunit *test)
{
struct xe_device *xe;
struct device *dev;
int err;
dev = drm_kunit_helper_alloc_device(test);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
xe = xe_kunit_helper_alloc_xe_device(test, dev);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xe);
err = xe_pci_fake_device_init(xe);
KUNIT_ASSERT_EQ(test, err, 0);
err = kunit_add_action_or_reset(test, kunit_action_restore_priv, test->priv);
KUNIT_ASSERT_EQ(test, err, 0);
test->priv = xe;
return 0;
}
EXPORT_SYMBOL_IF_KUNIT(xe_kunit_helper_xe_device_test_init);
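The kerneldoc above says this helper can be plugged straight into &kunit_suite.init, and that a specific fake device variant can be requested by seeding test->priv with a struct xe_pci_fake_data first. Below is a minimal sketch of that pattern; the suite, test and data values (including the XE_SRIOV_MODE_VF enumerator) are assumptions for illustration and are not part of this merge:

#include <kunit/test.h>

#include "xe_kunit_helpers.h"
#include "xe_pci_test.h"

static int example_vf_test_init(struct kunit *test)
{
	/* Assumed fake-data setup: request a VF-mode fake device. */
	static struct xe_pci_fake_data data = {
		.sriov_mode = XE_SRIOV_MODE_VF,
	};

	test->priv = &data;
	return xe_kunit_helper_xe_device_test_init(test);
}

static void example_smoke_test(struct kunit *test)
{
	struct xe_device *xe = test->priv;	/* stored there by the helper */

	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, xe);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_smoke_test),
	{}
};

static struct kunit_suite example_fake_device_suite = {
	.name = "xe_fake_device_example",
	.test_cases = example_test_cases,
	.init = example_vf_test_init,
};

kunit_test_suite(example_fake_device_suite);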
/* SPDX-License-Identifier: GPL-2.0 AND MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_KUNIT_HELPERS_H_
#define _XE_KUNIT_HELPERS_H_
struct device;
struct kunit;
struct xe_device;
struct xe_device *xe_kunit_helper_alloc_xe_device(struct kunit *test,
struct device *dev);
int xe_kunit_helper_xe_device_test_init(struct kunit *test);
#endif
@@ -128,3 +128,39 @@ void xe_live_mocs_kernel_kunit(struct kunit *test)
	xe_call_for_each_device(mocs_kernel_test_run_device);
}
EXPORT_SYMBOL_IF_KUNIT(xe_live_mocs_kernel_kunit);
static int mocs_reset_test_run_device(struct xe_device *xe)
{
/* Check the mocs setup is retained over GT reset */
struct live_mocs mocs;
struct xe_gt *gt;
unsigned int flags;
int id;
struct kunit *test = xe_cur_kunit();
for_each_gt(gt, xe, id) {
flags = live_mocs_init(&mocs, gt);
kunit_info(test, "mocs_reset_test before reset\n");
if (flags & HAS_GLOBAL_MOCS)
read_mocs_table(gt, &mocs.table);
if (flags & HAS_LNCF_MOCS)
read_l3cc_table(gt, &mocs.table);
xe_gt_reset_async(gt);
flush_work(&gt->reset.worker);
kunit_info(test, "mocs_reset_test after reset\n");
if (flags & HAS_GLOBAL_MOCS)
read_mocs_table(gt, &mocs.table);
if (flags & HAS_LNCF_MOCS)
read_l3cc_table(gt, &mocs.table);
}
return 0;
}
void xe_live_mocs_reset_kunit(struct kunit *test)
{
xe_call_for_each_device(mocs_reset_test_run_device);
}
EXPORT_SYMBOL_IF_KUNIT(xe_live_mocs_reset_kunit);
@@ -9,6 +9,7 @@
static struct kunit_case xe_mocs_tests[] = {
	KUNIT_CASE(xe_live_mocs_kernel_kunit),
+	KUNIT_CASE(xe_live_mocs_reset_kunit),
	{}
};
@@ -21,4 +22,5 @@ kunit_test_suite(xe_mocs_test_suite);
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("xe_mocs kunit test");
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
@@ -9,5 +9,6 @@
struct kunit;
void xe_live_mocs_kernel_kunit(struct kunit *test);
+void xe_live_mocs_reset_kunit(struct kunit *test);
#endif
@@ -156,6 +156,9 @@ int xe_pci_fake_device_init(struct xe_device *xe)
	return -ENODEV;
done:
+	xe->sriov.__mode = data && data->sriov_mode ?
+		data->sriov_mode : XE_SRIOV_MODE_NONE;
	kunit_activate_static_stub(test, read_gmdid, fake_read_gmdid);
	xe_info_init_early(xe, desc, subplatform_desc);
......
@@ -64,8 +64,3 @@ static struct kunit_suite xe_pci_test_suite = {
};
kunit_test_suite(xe_pci_test_suite);
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_pci kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
@@ -9,6 +9,7 @@
#include <linux/types.h>
#include "xe_platform_types.h"
+#include "xe_sriov_types.h"
struct xe_device;
struct xe_graphics_desc;
@@ -23,6 +24,7 @@ void xe_call_for_each_graphics_ip(xe_graphics_fn xe_fn);
void xe_call_for_each_media_ip(xe_media_fn xe_fn);
struct xe_pci_fake_data {
+	enum xe_sriov_mode sriov_mode;
	enum xe_platform platform;
	enum xe_subplatform subplatform;
	u32 graphics_verx100;
......
@@ -15,6 +15,7 @@
#include "regs/xe_reg_defs.h"
#include "xe_device.h"
#include "xe_device_types.h"
+#include "xe_kunit_helpers.h"
#include "xe_pci_test.h"
#include "xe_reg_sr.h"
#include "xe_rtp.h"
@@ -276,9 +277,7 @@ static int xe_rtp_test_init(struct kunit *test)
	dev = drm_kunit_helper_alloc_device(test);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
-	xe = drm_kunit_helper_alloc_drm_device(test, dev,
-					       struct xe_device,
-					       drm, DRIVER_GEM);
+	xe = xe_kunit_helper_alloc_xe_device(test, dev);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xe);
	/* Initialize an empty device */
@@ -312,8 +311,3 @@ static struct kunit_suite xe_rtp_test_suite = {
};
kunit_test_suite(xe_rtp_test_suite);
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_rtp kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright © 2023 Intel Corporation
*/
#include <linux/module.h>
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("xe kunit tests");
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
@@ -9,6 +9,7 @@
#include <kunit/test.h>
#include "xe_device.h"
+#include "xe_kunit_helpers.h"
#include "xe_pci_test.h"
#include "xe_reg_sr.h"
#include "xe_tuning.h"
@@ -65,14 +66,8 @@ static const struct platform_test_case cases[] = {
	PLATFORM_CASE(ALDERLAKE_P, C0),
	SUBPLATFORM_CASE(ALDERLAKE_S, RPLS, D0),
	SUBPLATFORM_CASE(ALDERLAKE_P, RPLU, E0),
-	SUBPLATFORM_CASE(DG2, G10, A0),
-	SUBPLATFORM_CASE(DG2, G10, A1),
-	SUBPLATFORM_CASE(DG2, G10, B0),
	SUBPLATFORM_CASE(DG2, G10, C0),
-	SUBPLATFORM_CASE(DG2, G11, A0),
-	SUBPLATFORM_CASE(DG2, G11, B0),
	SUBPLATFORM_CASE(DG2, G11, B1),
-	SUBPLATFORM_CASE(DG2, G12, A0),
	SUBPLATFORM_CASE(DG2, G12, A1),
	GMDID_CASE(METEORLAKE, 1270, A0, 1300, A0),
	GMDID_CASE(METEORLAKE, 1271, A0, 1300, A0),
@@ -105,9 +100,7 @@ static int xe_wa_test_init(struct kunit *test)
	dev = drm_kunit_helper_alloc_device(test);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
-	xe = drm_kunit_helper_alloc_drm_device(test, dev,
-					       struct xe_device,
-					       drm, DRIVER_GEM);
+	xe = xe_kunit_helper_alloc_xe_device(test, dev);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xe);
	test->priv = &data;
@@ -160,8 +153,3 @@ static struct kunit_suite xe_rtp_test_suite = {
};
kunit_test_suite(xe_rtp_test_suite);
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_wa kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
...@@ -28,6 +28,14 @@ ...@@ -28,6 +28,14 @@
#include "xe_ttm_stolen_mgr.h" #include "xe_ttm_stolen_mgr.h"
#include "xe_vm.h" #include "xe_vm.h"
const char *const xe_mem_type_to_name[TTM_NUM_MEM_TYPES] = {
[XE_PL_SYSTEM] = "system",
[XE_PL_TT] = "gtt",
[XE_PL_VRAM0] = "vram0",
[XE_PL_VRAM1] = "vram1",
[XE_PL_STOLEN] = "stolen"
};
static const struct ttm_place sys_placement_flags = { static const struct ttm_place sys_placement_flags = {
.fpfn = 0, .fpfn = 0,
.lpfn = 0, .lpfn = 0,
...@@ -587,6 +595,8 @@ static int xe_bo_move_notify(struct xe_bo *bo, ...@@ -587,6 +595,8 @@ static int xe_bo_move_notify(struct xe_bo *bo,
{ {
struct ttm_buffer_object *ttm_bo = &bo->ttm; struct ttm_buffer_object *ttm_bo = &bo->ttm;
struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev); struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
struct ttm_resource *old_mem = ttm_bo->resource;
u32 old_mem_type = old_mem ? old_mem->mem_type : XE_PL_SYSTEM;
int ret; int ret;
/* /*
...@@ -606,6 +616,18 @@ static int xe_bo_move_notify(struct xe_bo *bo, ...@@ -606,6 +616,18 @@ static int xe_bo_move_notify(struct xe_bo *bo,
if (ttm_bo->base.dma_buf && !ttm_bo->base.import_attach) if (ttm_bo->base.dma_buf && !ttm_bo->base.import_attach)
dma_buf_move_notify(ttm_bo->base.dma_buf); dma_buf_move_notify(ttm_bo->base.dma_buf);
/*
* TTM has already nuked the mmap for us (see ttm_bo_unmap_virtual),
* so if we moved from VRAM make sure to unlink this from the userfault
* tracking.
*/
if (mem_type_is_vram(old_mem_type)) {
mutex_lock(&xe->mem_access.vram_userfault.lock);
if (!list_empty(&bo->vram_userfault_link))
list_del_init(&bo->vram_userfault_link);
mutex_unlock(&xe->mem_access.vram_userfault.lock);
}
return 0; return 0;
} }
...@@ -714,8 +736,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict, ...@@ -714,8 +736,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
migrate = xe->tiles[0].migrate; migrate = xe->tiles[0].migrate;
xe_assert(xe, migrate); xe_assert(xe, migrate);
trace_xe_bo_move(bo, new_mem->mem_type, old_mem_type);
trace_xe_bo_move(bo);
xe_device_mem_access_get(xe); xe_device_mem_access_get(xe);
if (xe_bo_is_pinned(bo) && !xe_bo_is_user(bo)) { if (xe_bo_is_pinned(bo) && !xe_bo_is_user(bo)) {
...@@ -1028,7 +1049,7 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo) ...@@ -1028,7 +1049,7 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
} }
} }
struct ttm_device_funcs xe_ttm_funcs = { const struct ttm_device_funcs xe_ttm_funcs = {
.ttm_tt_create = xe_ttm_tt_create, .ttm_tt_create = xe_ttm_tt_create,
.ttm_tt_populate = xe_ttm_tt_populate, .ttm_tt_populate = xe_ttm_tt_populate,
.ttm_tt_unpopulate = xe_ttm_tt_unpopulate, .ttm_tt_unpopulate = xe_ttm_tt_unpopulate,
...@@ -1064,6 +1085,11 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo) ...@@ -1064,6 +1085,11 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo)
if (bo->vm && xe_bo_is_user(bo)) if (bo->vm && xe_bo_is_user(bo))
xe_vm_put(bo->vm); xe_vm_put(bo->vm);
mutex_lock(&xe->mem_access.vram_userfault.lock);
if (!list_empty(&bo->vram_userfault_link))
list_del(&bo->vram_userfault_link);
mutex_unlock(&xe->mem_access.vram_userfault.lock);
kfree(bo); kfree(bo);
} }
...@@ -1111,16 +1137,20 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) ...@@ -1111,16 +1137,20 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
{ {
struct ttm_buffer_object *tbo = vmf->vma->vm_private_data; struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
struct drm_device *ddev = tbo->base.dev; struct drm_device *ddev = tbo->base.dev;
struct xe_device *xe = to_xe_device(ddev);
struct xe_bo *bo = ttm_to_xe_bo(tbo);
bool needs_rpm = bo->flags & XE_BO_CREATE_VRAM_MASK;
vm_fault_t ret; vm_fault_t ret;
int idx, r = 0; int idx, r = 0;
if (needs_rpm)
xe_device_mem_access_get(xe);
ret = ttm_bo_vm_reserve(tbo, vmf); ret = ttm_bo_vm_reserve(tbo, vmf);
if (ret) if (ret)
return ret; goto out;
if (drm_dev_enter(ddev, &idx)) { if (drm_dev_enter(ddev, &idx)) {
struct xe_bo *bo = ttm_to_xe_bo(tbo);
trace_xe_bo_cpu_fault(bo); trace_xe_bo_cpu_fault(bo);
if (should_migrate_to_system(bo)) { if (should_migrate_to_system(bo)) {
...@@ -1138,10 +1168,24 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf) ...@@ -1138,10 +1168,24 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
} else { } else {
ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
} }
if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
return ret; goto out;
/*
* ttm_bo_vm_reserve() already has dma_resv_lock.
*/
if (ret == VM_FAULT_NOPAGE && mem_type_is_vram(tbo->resource->mem_type)) {
mutex_lock(&xe->mem_access.vram_userfault.lock);
if (list_empty(&bo->vram_userfault_link))
list_add(&bo->vram_userfault_link, &xe->mem_access.vram_userfault.list);
mutex_unlock(&xe->mem_access.vram_userfault.lock);
}
dma_resv_unlock(tbo->base.resv); dma_resv_unlock(tbo->base.resv);
out:
if (needs_rpm)
xe_device_mem_access_put(xe);
return ret; return ret;
} }
...@@ -1255,6 +1299,7 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo, ...@@ -1255,6 +1299,7 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
INIT_LIST_HEAD(&bo->client_link); INIT_LIST_HEAD(&bo->client_link);
#endif #endif
INIT_LIST_HEAD(&bo->vram_userfault_link);
drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
@@ -1567,6 +1612,38 @@ struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_til
return bo;
}
/**
* xe_managed_bo_reinit_in_vram
* @xe: xe device
* @tile: Tile where the new buffer will be created
* @src: Managed buffer object allocated in system memory
*
* Replace a managed src buffer object allocated in system memory with a new
* one allocated in vram, copying the data between them.
* Buffer object in VRAM is not going to have the same GGTT address, the caller
* is responsible for making sure that any old references to it are updated.
*
* Returns 0 for success, negative error code otherwise.
*/
int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src)
{
struct xe_bo *bo;
xe_assert(xe, IS_DGFX(xe));
xe_assert(xe, !(*src)->vmap.is_iomem);
bo = xe_managed_bo_create_from_data(xe, tile, (*src)->vmap.vaddr, (*src)->size,
XE_BO_CREATE_VRAM_IF_DGFX(tile) |
XE_BO_CREATE_GGTT_BIT);
if (IS_ERR(bo))
return PTR_ERR(bo);
drmm_release_action(&xe->drm, __xe_bo_unpin_map_no_vm, *src);
*src = bo;
return 0;
}
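For illustration only (not part of this series): a hypothetical caller could keep a managed BO in system memory early in probe and switch it to VRAM later. The function name and the data/len arguments below are assumptions made for this sketch, not driver API.

static int example_move_managed_bo_to_vram(struct xe_device *xe, struct xe_tile *tile,
					   struct xe_bo **bo, const void *data, size_t len)
{
	/* Early in probe VRAM may not be usable yet, so start in system memory. */
	*bo = xe_managed_bo_create_from_data(xe, tile, data, len,
					     XE_BO_CREATE_SYSTEM_BIT |
					     XE_BO_CREATE_GGTT_BIT);
	if (IS_ERR(*bo))
		return PTR_ERR(*bo);

	/*
	 * Once VRAM is ready (discrete parts only, per the IS_DGFX assert),
	 * replace the BO with a VRAM copy. The GGTT address changes, so any
	 * stale references must be refreshed by the caller.
	 */
	return xe_managed_bo_reinit_in_vram(xe, tile, bo);
}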
/*
* XXX: This is in the VM bind data path, likely should calculate this once and
* store, with a recalculation if the BO is moved.
@@ -2261,6 +2338,16 @@ int xe_bo_dumb_create(struct drm_file *file_priv,
return err;
}
void xe_bo_runtime_pm_release_mmap_offset(struct xe_bo *bo)
{
struct ttm_buffer_object *tbo = &bo->ttm;
struct ttm_device *bdev = tbo->bdev;
drm_vma_node_unmap(&tbo->base.vma_node, bdev->dev_mapping);
list_del_init(&bo->vram_userfault_link);
}
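A minimal sketch of how a runtime-suspend path might consume the new vram_userfault list is shown below; it is for illustration only, the helper name is an assumption, and the locking is simplified compared to what the driver would actually need.

static void example_release_vram_mappings(struct xe_device *xe)
{
	struct xe_bo *bo, *next;

	/* Safe iteration: the helper removes each BO from the list. */
	mutex_lock(&xe->mem_access.vram_userfault.lock);
	list_for_each_entry_safe(bo, next, &xe->mem_access.vram_userfault.list,
				 vram_userfault_link)
		xe_bo_runtime_pm_release_mmap_offset(bo);
	mutex_unlock(&xe->mem_access.vram_userfault.lock);
}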
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
#include "tests/xe_bo.c"
#endif
@@ -44,6 +44,7 @@
#define XE_BO_FIXED_PLACEMENT_BIT BIT(11)
#define XE_BO_PAGETABLE BIT(12)
#define XE_BO_NEEDS_CPU_ACCESS BIT(13)
#define XE_BO_NEEDS_UC BIT(14)
/* this one is trigger internally only */
#define XE_BO_INTERNAL_TEST BIT(30)
#define XE_BO_INTERNAL_64K BIT(31)
@@ -128,6 +129,7 @@ struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile
size_t size, u32 flags);
struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
const void *data, size_t size, u32 flags);
int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src);
int xe_bo_placement_for_flags(struct xe_device *xe, struct xe_bo *bo,
u32 bo_flags);
@@ -242,12 +244,15 @@ int xe_bo_evict(struct xe_bo *bo, bool force_alloc);
int xe_bo_evict_pinned(struct xe_bo *bo);
int xe_bo_restore_pinned(struct xe_bo *bo);
extern struct ttm_device_funcs xe_ttm_funcs;
extern const struct ttm_device_funcs xe_ttm_funcs;
extern const char *const xe_mem_type_to_name[];
int xe_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
void xe_bo_runtime_pm_release_mmap_offset(struct xe_bo *bo);
int xe_bo_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
@@ -88,6 +88,9 @@ struct xe_bo {
* objects.
*/
u16 cpu_caching;
/** @vram_userfault_link: Link into @mem_access.vram_userfault.list */
struct list_head vram_userfault_link;
};
#define intel_bo_to_drm_bo(bo) (&(bo)->ttm.base)
@@ -55,6 +55,7 @@ static int info(struct seq_file *m, void *data)
drm_printf(&p, "force_execlist %s\n", str_yes_no(xe->info.force_execlist));
drm_printf(&p, "has_flat_ccs %s\n", str_yes_no(xe->info.has_flat_ccs));
drm_printf(&p, "has_usm %s\n", str_yes_no(xe->info.has_usm));
drm_printf(&p, "skip_guc_pc %s\n", str_yes_no(xe->info.skip_guc_pc));
for_each_gt(gt, xe, id) {
drm_printf(&p, "gt%d force wake %d\n", id,
xe_force_wake_ref(gt_to_fw(gt), XE_FW_GT));
@@ -16,6 +16,8 @@
#include "xe_guc_ct.h"
#include "xe_guc_submit.h"
#include "xe_hw_engine.h"
#include "xe_sched_job.h"
#include "xe_vm.h"
/**
* DOC: Xe device coredump
@@ -58,11 +60,22 @@ static struct xe_guc *exec_queue_to_guc(struct xe_exec_queue *q)
return &q->gt->uc.guc;
}
static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
{
struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
if (ss->vm)
xe_vm_snapshot_capture_delayed(ss->vm);
xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
}
static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
size_t count, void *data, size_t datalen)
{
struct xe_devcoredump *coredump = data;
struct xe_devcoredump_snapshot *ss;
struct xe_device *xe = coredump_to_xe(coredump);
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct drm_printer p;
struct drm_print_iterator iter;
struct timespec64 ts;
@@ -72,12 +85,14 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
if (!data || !coredump_to_xe(coredump))
return -ENODEV;
/* Ensure delayed work is captured before continuing */
flush_work(&ss->work);
iter.data = buffer;
iter.offset = 0;
iter.start = offset;
iter.remain = count;
ss = &coredump->snapshot;
p = drm_coredump_printer(&iter);
drm_printf(&p, "**** Xe Device Coredump ****\n");
@@ -88,16 +103,24 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
drm_printf(&p, "Snapshot time: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
ts = ktime_to_timespec64(ss->boot_time);
drm_printf(&p, "Uptime: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
xe_device_snapshot_print(xe, &p);
drm_printf(&p, "\n**** GuC CT ****\n");
xe_guc_ct_snapshot_print(coredump->snapshot.ct, &p);
xe_guc_exec_queue_snapshot_print(coredump->snapshot.ge, &p);
drm_printf(&p, "\n**** Job ****\n");
xe_sched_job_snapshot_print(coredump->snapshot.job, &p);
drm_printf(&p, "\n**** HW Engines ****\n");
for (i = 0; i < XE_NUM_HW_ENGINES; i++)
if (coredump->snapshot.hwe[i])
xe_hw_engine_snapshot_print(coredump->snapshot.hwe[i],
&p);
if (coredump->snapshot.vm) {
drm_printf(&p, "\n**** VM state ****\n");
xe_vm_snapshot_print(coredump->snapshot.vm, &p);
}
return count - iter.remain;
}
@@ -111,21 +134,28 @@ static void xe_devcoredump_free(void *data)
if (!data || !coredump_to_xe(coredump))
return;
cancel_work_sync(&coredump->snapshot.work);
xe_guc_ct_snapshot_free(coredump->snapshot.ct);
xe_guc_exec_queue_snapshot_free(coredump->snapshot.ge);
xe_sched_job_snapshot_free(coredump->snapshot.job);
for (i = 0; i < XE_NUM_HW_ENGINES; i++)
if (coredump->snapshot.hwe[i])
xe_hw_engine_snapshot_free(coredump->snapshot.hwe[i]);
xe_vm_snapshot_free(coredump->snapshot.vm);
/* To prevent stale data on next snapshot, clear everything */
memset(&coredump->snapshot, 0, sizeof(coredump->snapshot));
coredump->captured = false;
drm_info(&coredump_to_xe(coredump)->drm,
"Xe device coredump has been deleted.\n");
}
static void devcoredump_snapshot(struct xe_devcoredump *coredump,
struct xe_exec_queue *q)
struct xe_sched_job *job)
{
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct xe_exec_queue *q = job->q;
struct xe_guc *guc = exec_queue_to_guc(q);
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
@@ -137,6 +167,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
ss->snapshot_time = ktime_get_real();
ss->boot_time = ktime_get_boottime();
ss->gt = q->gt;
INIT_WORK(&ss->work, xe_devcoredump_deferred_snap_work);
cookie = dma_fence_begin_signalling();
for (i = 0; q->width > 1 && i < XE_HW_ENGINE_MAX_INSTANCE;) {
if (adj_logical_mask & BIT(i)) {
@@ -150,7 +183,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(q);
coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(job);
coredump->snapshot.job = xe_sched_job_snapshot_capture(job);
coredump->snapshot.vm = xe_vm_snapshot_capture(q->vm);
for_each_hw_engine(hwe, q->gt, id) {
if (hwe->class != q->hwe->class ||
@@ -161,21 +196,24 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
coredump->snapshot.hwe[id] = xe_hw_engine_snapshot_capture(hwe);
}
if (ss->vm)
queue_work(system_unbound_wq, &ss->work);
xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
dma_fence_end_signalling(cookie);
}
/**
* xe_devcoredump - Take the required snapshots and initialize coredump device.
* @q: The faulty xe_exec_queue, where the issue was detected.
* @job: The faulty xe_sched_job, where the issue was detected.
*
* This function should be called at the crash time within the serialized
* gt_reset. It is skipped if we still have the core dump device available
* with the information of the 'first' snapshot.
*/
void xe_devcoredump(struct xe_exec_queue *q)
void xe_devcoredump(struct xe_sched_job *job)
{
struct xe_device *xe = gt_to_xe(q->gt);
struct xe_device *xe = gt_to_xe(job->q->gt);
struct xe_devcoredump *coredump = &xe->devcoredump;
if (coredump->captured) {
@@ -184,7 +222,7 @@ void xe_devcoredump(struct xe_exec_queue *q)
}
coredump->captured = true;
devcoredump_snapshot(coredump, q);
devcoredump_snapshot(coredump, job);
drm_info(&xe->drm, "Xe device coredump has been created\n");
drm_info(&xe->drm, "Check your /sys/class/drm/card%d/device/devcoredump/data\n",
@@ -194,3 +232,4 @@ void xe_devcoredump(struct xe_exec_queue *q)
xe_devcoredump_read, xe_devcoredump_free);
}
#endif
@@ -7,12 +7,12 @@
#define _XE_DEVCOREDUMP_H_
struct xe_device;
struct xe_exec_queue;
struct xe_sched_job;
#ifdef CONFIG_DEV_COREDUMP
void xe_devcoredump(struct xe_exec_queue *q);
void xe_devcoredump(struct xe_sched_job *job);
#else
static inline void xe_devcoredump(struct xe_exec_queue *q)
static inline void xe_devcoredump(struct xe_sched_job *job)
{
}
#endif
@@ -12,6 +12,7 @@
#include "xe_hw_engine_types.h"
struct xe_device;
struct xe_gt;
/**
* struct xe_devcoredump_snapshot - Crash snapshot
@@ -26,13 +27,23 @@ struct xe_devcoredump_snapshot {
/** @boot_time: Relative boot time so the uptime can be calculated. */
ktime_t boot_time;
/** @gt: Affected GT, used by forcewake for delayed capture */
struct xe_gt *gt;
/** @work: Workqueue for deferred capture outside of signaling context */
struct work_struct work;
/* GuC snapshots */
/** @ct: GuC CT snapshot */
struct xe_guc_ct_snapshot *ct;
/** @ge: Guc Engine snapshot */
struct xe_guc_submit_exec_queue_snapshot *ge;
/** @hwe: HW Engine snapshot array */
struct xe_hw_engine_snapshot *hwe[XE_NUM_HW_ENGINES];
/** @job: Snapshot of job state */
struct xe_sched_job_snapshot *job;
/** @vm: Snapshot of VM state */
struct xe_vm_snapshot *vm;
};
/**
@@ -44,8 +55,6 @@ struct xe_devcoredump_snapshot {
* for reading the information.
*/
struct xe_devcoredump {
/** @xe: Xe device. */
struct xe_device *xe;
/** @captured: The snapshot of the first hang has already been taken. */
bool captured;
/** @snapshot: Snapshot is captured at time of the first crash */
...@@ -15,32 +15,35 @@ ...@@ -15,32 +15,35 @@
#include <drm/drm_print.h> #include <drm/drm_print.h>
#include <drm/xe_drm.h> #include <drm/xe_drm.h>
#include "display/xe_display.h"
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "regs/xe_regs.h" #include "regs/xe_regs.h"
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_debugfs.h" #include "xe_debugfs.h"
#include "xe_display.h"
#include "xe_dma_buf.h" #include "xe_dma_buf.h"
#include "xe_drm_client.h" #include "xe_drm_client.h"
#include "xe_drv.h" #include "xe_drv.h"
#include "xe_exec_queue.h"
#include "xe_exec.h" #include "xe_exec.h"
#include "xe_exec_queue.h"
#include "xe_ggtt.h" #include "xe_ggtt.h"
#include "xe_gsc_proxy.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_mcr.h" #include "xe_gt_mcr.h"
#include "xe_hwmon.h"
#include "xe_irq.h" #include "xe_irq.h"
#include "xe_memirq.h"
#include "xe_mmio.h" #include "xe_mmio.h"
#include "xe_module.h" #include "xe_module.h"
#include "xe_pat.h" #include "xe_pat.h"
#include "xe_pcode.h" #include "xe_pcode.h"
#include "xe_pm.h" #include "xe_pm.h"
#include "xe_query.h" #include "xe_query.h"
#include "xe_sriov.h"
#include "xe_tile.h" #include "xe_tile.h"
#include "xe_ttm_stolen_mgr.h" #include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h" #include "xe_ttm_sys_mgr.h"
#include "xe_vm.h" #include "xe_vm.h"
#include "xe_wait_user_fence.h" #include "xe_wait_user_fence.h"
#include "xe_hwmon.h"
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
struct lockdep_map xe_device_mem_access_lockdep_map = { struct lockdep_map xe_device_mem_access_lockdep_map = {
...@@ -83,9 +86,6 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file) ...@@ -83,9 +86,6 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
return 0; return 0;
} }
static void device_kill_persistent_exec_queues(struct xe_device *xe,
struct xe_file *xef);
static void xe_file_close(struct drm_device *dev, struct drm_file *file) static void xe_file_close(struct drm_device *dev, struct drm_file *file)
{ {
struct xe_device *xe = to_xe_device(dev); struct xe_device *xe = to_xe_device(dev);
...@@ -102,8 +102,6 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file) ...@@ -102,8 +102,6 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
mutex_unlock(&xef->exec_queue.lock); mutex_unlock(&xef->exec_queue.lock);
xa_destroy(&xef->exec_queue.xa); xa_destroy(&xef->exec_queue.xa);
mutex_destroy(&xef->exec_queue.lock); mutex_destroy(&xef->exec_queue.lock);
device_kill_persistent_exec_queues(xe, xef);
mutex_lock(&xef->vm.lock); mutex_lock(&xef->vm.lock);
xa_for_each(&xef->vm.xa, idx, vm) xa_for_each(&xef->vm.xa, idx, vm)
xe_vm_close_and_put(vm); xe_vm_close_and_put(vm);
...@@ -255,9 +253,6 @@ struct xe_device *xe_device_create(struct pci_dev *pdev, ...@@ -255,9 +253,6 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
xa_erase(&xe->usm.asid_to_vm, asid); xa_erase(&xe->usm.asid_to_vm, asid);
} }
drmm_mutex_init(&xe->drm, &xe->persistent_engines.lock);
INIT_LIST_HEAD(&xe->persistent_engines.list);
spin_lock_init(&xe->pinned.lock); spin_lock_init(&xe->pinned.lock);
INIT_LIST_HEAD(&xe->pinned.kernel_bo_present); INIT_LIST_HEAD(&xe->pinned.kernel_bo_present);
INIT_LIST_HEAD(&xe->pinned.external_vram); INIT_LIST_HEAD(&xe->pinned.external_vram);
...@@ -432,10 +427,15 @@ int xe_device_probe(struct xe_device *xe) ...@@ -432,10 +427,15 @@ int xe_device_probe(struct xe_device *xe)
struct xe_tile *tile; struct xe_tile *tile;
struct xe_gt *gt; struct xe_gt *gt;
int err; int err;
u8 last_gt;
u8 id; u8 id;
xe_pat_init_early(xe); xe_pat_init_early(xe);
err = xe_sriov_init(xe);
if (err)
return err;
xe->info.mem_region_mask = 1; xe->info.mem_region_mask = 1;
err = xe_display_init_nommio(xe); err = xe_display_init_nommio(xe);
if (err) if (err)
...@@ -456,6 +456,17 @@ int xe_device_probe(struct xe_device *xe) ...@@ -456,6 +456,17 @@ int xe_device_probe(struct xe_device *xe)
err = xe_ggtt_init_early(tile->mem.ggtt); err = xe_ggtt_init_early(tile->mem.ggtt);
if (err) if (err)
return err; return err;
if (IS_SRIOV_VF(xe)) {
err = xe_memirq_init(&tile->sriov.vf.memirq);
if (err)
return err;
}
}
for_each_gt(gt, xe, id) {
err = xe_gt_init_hwconfig(gt);
if (err)
return err;
} }
err = drmm_add_action_or_reset(&xe->drm, xe_driver_flr_fini, xe); err = drmm_add_action_or_reset(&xe->drm, xe_driver_flr_fini, xe);
...@@ -510,16 +521,18 @@ int xe_device_probe(struct xe_device *xe) ...@@ -510,16 +521,18 @@ int xe_device_probe(struct xe_device *xe)
goto err_irq_shutdown; goto err_irq_shutdown;
for_each_gt(gt, xe, id) { for_each_gt(gt, xe, id) {
last_gt = id;
err = xe_gt_init(gt);
if (err)
goto err_irq_shutdown;
goto err_fini_gt;
}
xe_heci_gsc_init(xe);
err = xe_display_init(xe);
if (err)
goto err_irq_shutdown;
goto err_fini_gt;
err = drm_dev_register(&xe->drm, 0); err = drm_dev_register(&xe->drm, 0);
if (err) if (err)
...@@ -540,6 +553,14 @@ int xe_device_probe(struct xe_device *xe) ...@@ -540,6 +553,14 @@ int xe_device_probe(struct xe_device *xe)
err_fini_display: err_fini_display:
xe_display_driver_remove(xe); xe_display_driver_remove(xe);
err_fini_gt:
for_each_gt(gt, xe, id) {
if (id < last_gt)
xe_gt_remove(gt);
else
break;
}
err_irq_shutdown: err_irq_shutdown:
xe_irq_shutdown(xe); xe_irq_shutdown(xe);
err: err:
...@@ -557,12 +578,18 @@ static void xe_device_remove_display(struct xe_device *xe) ...@@ -557,12 +578,18 @@ static void xe_device_remove_display(struct xe_device *xe)
void xe_device_remove(struct xe_device *xe) void xe_device_remove(struct xe_device *xe)
{ {
struct xe_gt *gt;
u8 id;
xe_device_remove_display(xe); xe_device_remove_display(xe);
xe_display_fini(xe); xe_display_fini(xe);
xe_heci_gsc_fini(xe); xe_heci_gsc_fini(xe);
for_each_gt(gt, xe, id)
xe_gt_remove(gt);
xe_irq_shutdown(xe); xe_irq_shutdown(xe);
} }
...@@ -570,37 +597,6 @@ void xe_device_shutdown(struct xe_device *xe) ...@@ -570,37 +597,6 @@ void xe_device_shutdown(struct xe_device *xe)
{ {
} }
void xe_device_add_persistent_exec_queues(struct xe_device *xe, struct xe_exec_queue *q)
{
mutex_lock(&xe->persistent_engines.lock);
list_add_tail(&q->persistent.link, &xe->persistent_engines.list);
mutex_unlock(&xe->persistent_engines.lock);
}
void xe_device_remove_persistent_exec_queues(struct xe_device *xe,
struct xe_exec_queue *q)
{
mutex_lock(&xe->persistent_engines.lock);
if (!list_empty(&q->persistent.link))
list_del(&q->persistent.link);
mutex_unlock(&xe->persistent_engines.lock);
}
static void device_kill_persistent_exec_queues(struct xe_device *xe,
struct xe_file *xef)
{
struct xe_exec_queue *q, *next;
mutex_lock(&xe->persistent_engines.lock);
list_for_each_entry_safe(q, next, &xe->persistent_engines.list,
persistent.link)
if (q->persistent.xef == xef) {
xe_exec_queue_kill(q);
list_del_init(&q->persistent.link);
}
mutex_unlock(&xe->persistent_engines.lock);
}
void xe_device_wmb(struct xe_device *xe) void xe_device_wmb(struct xe_device *xe)
{ {
struct xe_gt *gt = xe_root_mmio_gt(xe); struct xe_gt *gt = xe_root_mmio_gt(xe);
...@@ -698,3 +694,33 @@ void xe_device_mem_access_put(struct xe_device *xe) ...@@ -698,3 +694,33 @@ void xe_device_mem_access_put(struct xe_device *xe)
xe_assert(xe, ref >= 0); xe_assert(xe, ref >= 0);
} }
void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p)
{
struct xe_gt *gt;
u8 id;
drm_printf(p, "PCI ID: 0x%04x\n", xe->info.devid);
drm_printf(p, "PCI revision: 0x%02x\n", xe->info.revid);
for_each_gt(gt, xe, id) {
drm_printf(p, "GT id: %u\n", id);
drm_printf(p, "\tType: %s\n",
gt->info.type == XE_GT_TYPE_MAIN ? "main" : "media");
drm_printf(p, "\tIP ver: %u.%u.%u\n",
REG_FIELD_GET(GMD_ID_ARCH_MASK, gt->info.gmdid),
REG_FIELD_GET(GMD_ID_RELEASE_MASK, gt->info.gmdid),
REG_FIELD_GET(GMD_ID_REVID, gt->info.gmdid));
drm_printf(p, "\tCS reference clock: %u\n", gt->info.reference_clock);
}
}
u64 xe_device_canonicalize_addr(struct xe_device *xe, u64 address)
{
return sign_extend64(address, xe->info.va_bits - 1);
}
u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address)
{
return address & GENMASK_ULL(xe->info.va_bits - 1, 0);
}
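A worked example of the canonicalization helpers above (assuming, for illustration only, a part with xe->info.va_bits == 48):

/*
 * With va_bits == 48, bit 47 is replicated into bits 63:48, so
 *
 *   xe_device_canonicalize_addr(xe, 0x0000800000000000ull) == 0xffff800000000000ull
 *   xe_device_uncanonicalize_addr(xe, 0xffff800000000000ull) == 0x0000800000000000ull
 *
 * and addresses with bit 47 clear pass through both helpers unchanged.
 */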
...@@ -42,10 +42,6 @@ int xe_device_probe(struct xe_device *xe); ...@@ -42,10 +42,6 @@ int xe_device_probe(struct xe_device *xe);
void xe_device_remove(struct xe_device *xe); void xe_device_remove(struct xe_device *xe);
void xe_device_shutdown(struct xe_device *xe); void xe_device_shutdown(struct xe_device *xe);
void xe_device_add_persistent_exec_queues(struct xe_device *xe, struct xe_exec_queue *q);
void xe_device_remove_persistent_exec_queues(struct xe_device *xe,
struct xe_exec_queue *q);
void xe_device_wmb(struct xe_device *xe); void xe_device_wmb(struct xe_device *xe);
static inline struct xe_file *to_xe_file(const struct drm_file *file) static inline struct xe_file *to_xe_file(const struct drm_file *file)
...@@ -168,6 +164,16 @@ static inline bool xe_device_has_sriov(struct xe_device *xe) ...@@ -168,6 +164,16 @@ static inline bool xe_device_has_sriov(struct xe_device *xe)
return xe->info.has_sriov; return xe->info.has_sriov;
} }
static inline bool xe_device_has_memirq(struct xe_device *xe)
{
return GRAPHICS_VERx100(xe) >= 1250;
}
u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size); u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size);
void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p);
u64 xe_device_canonicalize_addr(struct xe_device *xe, u64 address);
u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address);
#endif #endif
...@@ -131,14 +131,6 @@ static void bo_meminfo(struct xe_bo *bo, ...@@ -131,14 +131,6 @@ static void bo_meminfo(struct xe_bo *bo,
static void show_meminfo(struct drm_printer *p, struct drm_file *file) static void show_meminfo(struct drm_printer *p, struct drm_file *file)
{ {
static const char *const mem_type_to_name[TTM_NUM_MEM_TYPES] = {
[XE_PL_SYSTEM] = "system",
[XE_PL_TT] = "gtt",
[XE_PL_VRAM0] = "vram0",
[XE_PL_VRAM1] = "vram1",
[4 ... 6] = NULL,
[XE_PL_STOLEN] = "stolen"
};
struct drm_memory_stats stats[TTM_NUM_MEM_TYPES] = {};
struct xe_file *xef = file->driver_priv;
struct ttm_device *bdev = &xef->xe->ttm;
@@ -171,7 +163,7 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
spin_unlock(&client->bos_lock);
for (mem_type = XE_PL_SYSTEM; mem_type < TTM_NUM_MEM_TYPES; ++mem_type) {
if (!mem_type_to_name[mem_type])
if (!xe_mem_type_to_name[mem_type])
continue;
man = ttm_manager_type(bdev, mem_type);
@@ -182,7 +174,7 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
DRM_GEM_OBJECT_RESIDENT |
(mem_type != XE_PL_SYSTEM ? 0 :
DRM_GEM_OBJECT_PURGEABLE),
mem_type_to_name[mem_type]);
xe_mem_type_to_name[mem_type]);
} }
} }
} }
@@ -96,7 +96,46 @@
static int xe_exec_fn(struct drm_gpuvm_exec *vm_exec)
{
return drm_gpuvm_validate(vm_exec->vm, &vm_exec->exec);
struct xe_vm *vm = container_of(vm_exec->vm, struct xe_vm, gpuvm);
struct drm_gem_object *obj;
unsigned long index;
int num_fences;
int ret;
ret = drm_gpuvm_validate(vm_exec->vm, &vm_exec->exec);
if (ret)
return ret;
/*
* 1 fence slot for the final submit, and 1 more for every per-tile for
* GPU bind and 1 extra for CPU bind. Note that there are potentially
* many vma per object/dma-resv, however the fence slot will just be
* re-used, since they are largely the same timeline and the seqno
* should be in order. In the case of CPU bind there is dummy fence used
* for all CPU binds, so no need to have a per-tile slot for that.
*/
num_fences = 1 + 1 + vm->xe->info.tile_count;
/*
* We don't know upfront exactly how many fence slots we will need at
* the start of the exec, since the TTM bo_validate above can consume
* numerous fence slots. Also due to how the dma_resv_reserve_fences()
* works it only ensures that at least that many fence slots are
* available i.e if there are already 10 slots available and we reserve
* two more, it can just noop without reserving anything. With this it
* is quite possible that TTM steals some of the fence slots and then
* when it comes time to do the vma binding and final exec stage we are
* lacking enough fence slots, leading to some nasty BUG_ON() when
* adding the fences. Hence just add our own fences here, after the
* validate stage.
*/
drm_exec_for_each_locked_object(&vm_exec->exec, index, obj) {
ret = dma_resv_reserve_fences(obj->resv, num_fences);
if (ret)
return ret;
}
return 0;
}
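For illustration only, the reservation count above works out as follows on a hypothetical two-tile device:

/*
 * Assuming xe->info.tile_count == 2 (an assumption for this example):
 *
 *   num_fences = 1 (final submit) + 1 (shared CPU-bind fence) + 2 (GPU binds)
 *              = 4 slots reserved on each locked dma-resv object
 *
 * Reserving after drm_gpuvm_validate() matters because dma_resv_add_fence()
 * does not allocate; if TTM consumed the earlier reservation during
 * validation, adding the bind and submit fences later would hit the BUG_ON()
 * this change is fixing.
 */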
int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file) int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
...@@ -197,7 +236,6 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file) ...@@ -197,7 +236,6 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
} }
vm_exec.vm = &vm->gpuvm; vm_exec.vm = &vm->gpuvm;
vm_exec.num_fences = 1 + vm->xe->info.tile_count;
vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT; vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
if (xe_vm_in_lr_mode(vm)) { if (xe_vm_in_lr_mode(vm)) {
drm_exec_init(exec, vm_exec.flags, 0); drm_exec_init(exec, vm_exec.flags, 0);
@@ -16,7 +16,8 @@ struct xe_file;
struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *vm,
u32 logical_mask, u16 width,
struct xe_hw_engine *hw_engine, u32 flags);
struct xe_hw_engine *hw_engine, u32 flags,
u64 extensions);
struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe_gt *gt, struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe_gt *gt,
struct xe_vm *vm, struct xe_vm *vm,
enum xe_engine_class class, u32 flags); enum xe_engine_class class, u32 flags);
@@ -106,67 +106,48 @@ struct xe_exec_queue {
};
/**
* @persistent: persistent exec queue state
* @parallel: parallel submission state
*/
struct {
/** @xef: file which this exec queue belongs to */
/** @parallel.composite_fence_ctx: context composite fence */
struct xe_file *xef;
u64 composite_fence_ctx;
/** @link: link in list of persistent exec queues */
/** @parallel.composite_fence_seqno: seqno for composite fence */
struct list_head link;
u32 composite_fence_seqno;
} persistent;
} parallel;
union {
/**
* @parallel: parallel submission state
*/
struct {
/** @composite_fence_ctx: context composite fence */
u64 composite_fence_ctx;
/** @composite_fence_seqno: seqno for composite fence */
u32 composite_fence_seqno;
} parallel;
/**
* @bind: bind submission state
*/
struct {
/** @fence_ctx: context bind fence */
u64 fence_ctx;
/** @fence_seqno: seqno for bind fence */
u32 fence_seqno;
} bind;
};
/** @sched_props: scheduling properties */
struct {
/** @timeslice_us: timeslice period in micro-seconds */
/** @sched_props.timeslice_us: timeslice period in micro-seconds */
u32 timeslice_us;
/** @preempt_timeout_us: preemption timeout in micro-seconds */
/** @sched_props.preempt_timeout_us: preemption timeout in micro-seconds */
u32 preempt_timeout_us;
/** @priority: priority of this exec queue */
/** @sched_props.job_timeout_ms: job timeout in milliseconds */
u32 job_timeout_ms;
/** @sched_props.priority: priority of this exec queue */
enum xe_exec_queue_priority priority;
} sched_props;
/** @compute: compute exec queue state */
struct {
/** @pfence: preemption fence */
/** @compute.pfence: preemption fence */
struct dma_fence *pfence;
/** @context: preemption fence context */
/** @compute.context: preemption fence context */
u64 context;
/** @seqno: preemption fence seqno */
/** @compute.seqno: preemption fence seqno */
u32 seqno;
/** @link: link into VM's list of exec queues */
/** @compute.link: link into VM's list of exec queues */
struct list_head link;
/** @lock: preemption fences lock */
/** @compute.lock: preemption fences lock */
spinlock_t lock;
} compute;
/** @usm: unified shared memory state */
struct {
/** @acc_trigger: access counter trigger */
/** @usm.acc_trigger: access counter trigger */
u32 acc_trigger;
/** @acc_notify: access counter notify */
/** @usm.acc_notify: access counter notify */
u32 acc_notify;
/** @acc_granularity: access counter granularity */
/** @usm.acc_granularity: access counter granularity */
u32 acc_granularity;
} usm;
...@@ -198,8 +179,6 @@ struct xe_exec_queue_ops { ...@@ -198,8 +179,6 @@ struct xe_exec_queue_ops {
int (*set_timeslice)(struct xe_exec_queue *q, u32 timeslice_us); int (*set_timeslice)(struct xe_exec_queue *q, u32 timeslice_us);
/** @set_preempt_timeout: Set preemption timeout for exec queue */ /** @set_preempt_timeout: Set preemption timeout for exec queue */
int (*set_preempt_timeout)(struct xe_exec_queue *q, u32 preempt_timeout_us); int (*set_preempt_timeout)(struct xe_exec_queue *q, u32 preempt_timeout_us);
/** @set_job_timeout: Set job timeout for exec queue */
int (*set_job_timeout)(struct xe_exec_queue *q, u32 job_timeout_ms);
/** /**
* @suspend: Suspend exec queue from executing, allowed to be called * @suspend: Suspend exec queue from executing, allowed to be called
* multiple times in a row before resume with the caveat that * multiple times in a row before resume with the caveat that
...@@ -378,8 +378,6 @@ static void execlist_exec_queue_fini_async(struct work_struct *w) ...@@ -378,8 +378,6 @@ static void execlist_exec_queue_fini_async(struct work_struct *w)
list_del(&exl->active_link); list_del(&exl->active_link);
spin_unlock_irqrestore(&exl->port->lock, flags); spin_unlock_irqrestore(&exl->port->lock, flags);
if (q->flags & EXEC_QUEUE_FLAG_PERSISTENT)
xe_device_remove_persistent_exec_queues(xe, q);
drm_sched_entity_fini(&exl->entity); drm_sched_entity_fini(&exl->entity);
drm_sched_fini(&exl->sched); drm_sched_fini(&exl->sched);
kfree(exl); kfree(exl);
...@@ -418,13 +416,6 @@ static int execlist_exec_queue_set_preempt_timeout(struct xe_exec_queue *q, ...@@ -418,13 +416,6 @@ static int execlist_exec_queue_set_preempt_timeout(struct xe_exec_queue *q,
return 0; return 0;
} }
static int execlist_exec_queue_set_job_timeout(struct xe_exec_queue *q,
u32 job_timeout_ms)
{
/* NIY */
return 0;
}
static int execlist_exec_queue_suspend(struct xe_exec_queue *q) static int execlist_exec_queue_suspend(struct xe_exec_queue *q)
{ {
/* NIY */ /* NIY */
...@@ -455,7 +446,6 @@ static const struct xe_exec_queue_ops execlist_exec_queue_ops = { ...@@ -455,7 +446,6 @@ static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
.set_priority = execlist_exec_queue_set_priority, .set_priority = execlist_exec_queue_set_priority,
.set_timeslice = execlist_exec_queue_set_timeslice, .set_timeslice = execlist_exec_queue_set_timeslice,
.set_preempt_timeout = execlist_exec_queue_set_preempt_timeout, .set_preempt_timeout = execlist_exec_queue_set_preempt_timeout,
.set_job_timeout = execlist_exec_queue_set_job_timeout,
.suspend = execlist_exec_queue_suspend, .suspend = execlist_exec_queue_suspend,
.suspend_wait = execlist_exec_queue_suspend_wait, .suspend_wait = execlist_exec_queue_suspend_wait,
.resume = execlist_exec_queue_resume, .resume = execlist_exec_queue_resume,
...@@ -11,12 +11,16 @@ ...@@ -11,12 +11,16 @@
#include <drm/i915_drm.h> #include <drm/i915_drm.h>
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "regs/xe_regs.h"
#include "xe_assert.h"
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_printk.h"
#include "xe_gt_tlb_invalidation.h" #include "xe_gt_tlb_invalidation.h"
#include "xe_map.h" #include "xe_map.h"
#include "xe_mmio.h" #include "xe_mmio.h"
#include "xe_sriov.h"
#include "xe_wopcm.h" #include "xe_wopcm.h"
#define XELPG_GGTT_PTE_PAT0 BIT_ULL(52) #define XELPG_GGTT_PTE_PAT0 BIT_ULL(52)
...@@ -141,7 +145,11 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt) ...@@ -141,7 +145,11 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
struct pci_dev *pdev = to_pci_dev(xe->drm.dev); struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
unsigned int gsm_size; unsigned int gsm_size;
gsm_size = probe_gsm_size(pdev);
if (IS_SRIOV_VF(xe))
gsm_size = SZ_8M; /* GGTT is expected to be 4GiB */
else
gsm_size = probe_gsm_size(pdev);
if (gsm_size == 0) { if (gsm_size == 0) {
drm_err(&xe->drm, "Hardware reported no preallocated GSM\n"); drm_err(&xe->drm, "Hardware reported no preallocated GSM\n");
return -ENOMEM; return -ENOMEM;
...@@ -312,6 +320,74 @@ void xe_ggtt_printk(struct xe_ggtt *ggtt, const char *prefix) ...@@ -312,6 +320,74 @@ void xe_ggtt_printk(struct xe_ggtt *ggtt, const char *prefix)
} }
} }
static void xe_ggtt_dump_node(struct xe_ggtt *ggtt,
const struct drm_mm_node *node, const char *description)
{
char buf[10];
if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
string_get_size(node->size, 1, STRING_UNITS_2, buf, sizeof(buf));
xe_gt_dbg(ggtt->tile->primary_gt, "GGTT %#llx-%#llx (%s) %s\n",
node->start, node->start + node->size, buf, description);
}
}
/**
* xe_ggtt_balloon - prevent allocation of specified GGTT addresses
* @ggtt: the &xe_ggtt where we want to make reservation
* @start: the starting GGTT address of the reserved region
* @end: the end GGTT address of the reserved region
* @node: the &drm_mm_node to hold reserved GGTT node
*
* Use xe_ggtt_deballoon() to release a reserved GGTT node.
*
* Return: 0 on success or a negative error code on failure.
*/
int xe_ggtt_balloon(struct xe_ggtt *ggtt, u64 start, u64 end, struct drm_mm_node *node)
{
int err;
xe_tile_assert(ggtt->tile, start < end);
xe_tile_assert(ggtt->tile, IS_ALIGNED(start, XE_PAGE_SIZE));
xe_tile_assert(ggtt->tile, IS_ALIGNED(end, XE_PAGE_SIZE));
xe_tile_assert(ggtt->tile, !drm_mm_node_allocated(node));
node->color = 0;
node->start = start;
node->size = end - start;
mutex_lock(&ggtt->lock);
err = drm_mm_reserve_node(&ggtt->mm, node);
mutex_unlock(&ggtt->lock);
if (xe_gt_WARN(ggtt->tile->primary_gt, err,
"Failed to balloon GGTT %#llx-%#llx (%pe)\n",
node->start, node->start + node->size, ERR_PTR(err)))
return err;
xe_ggtt_dump_node(ggtt, node, "balloon");
return 0;
}
/**
* xe_ggtt_deballoon - release a reserved GGTT region
* @ggtt: the &xe_ggtt where reserved node belongs
* @node: the &drm_mm_node with reserved GGTT region
*
* See xe_ggtt_balloon() for details.
*/
void xe_ggtt_deballoon(struct xe_ggtt *ggtt, struct drm_mm_node *node)
{
if (!drm_mm_node_allocated(node))
return;
xe_ggtt_dump_node(ggtt, node, "deballoon");
mutex_lock(&ggtt->lock);
drm_mm_remove_node(node);
mutex_unlock(&ggtt->lock);
}
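For illustration only (not how this series wires it up): a VF provisioned with the GGTT range [vf_base, vf_base + vf_size) could use these helpers to fence off everything it does not own. The function name and the vf_base/vf_size/ggtt_top parameters are assumptions made for this sketch.

static int example_vf_balloon_ggtt(struct xe_ggtt *ggtt, u64 vf_base,
				   u64 vf_size, u64 ggtt_top,
				   struct drm_mm_node node[2])
{
	int err;

	/* Reserve [0, vf_base); assumes vf_base is non-zero and page aligned. */
	err = xe_ggtt_balloon(ggtt, 0, vf_base, &node[0]);
	if (err)
		return err;

	/* Reserve [vf_base + vf_size, ggtt_top). */
	err = xe_ggtt_balloon(ggtt, vf_base + vf_size, ggtt_top, &node[1]);
	if (err)
		xe_ggtt_deballoon(ggtt, &node[0]);

	return err;
}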
int xe_ggtt_insert_special_node_locked(struct xe_ggtt *ggtt, struct drm_mm_node *node, int xe_ggtt_insert_special_node_locked(struct xe_ggtt *ggtt, struct drm_mm_node *node,
u32 size, u32 align, u32 mm_flags) u32 size, u32 align, u32 mm_flags)
{ {
...@@ -334,7 +410,8 @@ int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node, ...@@ -334,7 +410,8 @@ int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
{ {
u16 pat_index = tile_to_xe(ggtt->tile)->pat.idx[XE_CACHE_WB];
u16 cache_mode = bo->flags & XE_BO_NEEDS_UC ? XE_CACHE_NONE : XE_CACHE_WB;
u16 pat_index = tile_to_xe(ggtt->tile)->pat.idx[cache_mode];
u64 start = bo->ggtt_node.start; u64 start = bo->ggtt_node.start;
u64 offset, pte; u64 offset, pte;
...@@ -16,6 +16,9 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt); ...@@ -16,6 +16,9 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt);
int xe_ggtt_init(struct xe_ggtt *ggtt); int xe_ggtt_init(struct xe_ggtt *ggtt);
void xe_ggtt_printk(struct xe_ggtt *ggtt, const char *prefix); void xe_ggtt_printk(struct xe_ggtt *ggtt, const char *prefix);
int xe_ggtt_balloon(struct xe_ggtt *ggtt, u64 start, u64 size, struct drm_mm_node *node);
void xe_ggtt_deballoon(struct xe_ggtt *ggtt, struct drm_mm_node *node);
int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node, int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
u32 size, u32 align); u32 size, u32 align);
int xe_ggtt_insert_special_node_locked(struct xe_ggtt *ggtt, int xe_ggtt_insert_special_node_locked(struct xe_ggtt *ggtt,
...@@ -7,12 +7,14 @@ ...@@ -7,12 +7,14 @@
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include <generated/xe_wa_oob.h>
#include "abi/gsc_mkhi_commands_abi.h" #include "abi/gsc_mkhi_commands_abi.h"
#include "generated/xe_wa_oob.h"
#include "xe_bb.h" #include "xe_bb.h"
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_exec_queue.h" #include "xe_exec_queue.h"
#include "xe_gsc_proxy.h"
#include "xe_gsc_submit.h" #include "xe_gsc_submit.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_printk.h" #include "xe_gt_printk.h"
...@@ -242,8 +244,31 @@ static int gsc_upload(struct xe_gsc *gsc) ...@@ -242,8 +244,31 @@ static int gsc_upload(struct xe_gsc *gsc)
if (err) if (err)
return err; return err;
return 0;
}
static int gsc_upload_and_init(struct xe_gsc *gsc)
{
struct xe_gt *gt = gsc_to_gt(gsc);
int ret;
ret = gsc_upload(gsc);
if (ret)
return ret;
xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_TRANSFERRED);
xe_gt_dbg(gt, "GSC FW async load completed\n"); xe_gt_dbg(gt, "GSC FW async load completed\n");
/* HuC auth failure is not fatal */
if (xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GUC))
xe_huc_auth(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC);
ret = xe_gsc_proxy_start(gsc);
if (ret)
return ret;
xe_gt_dbg(gt, "GSC proxy init completed\n");
return 0; return 0;
} }
...@@ -252,24 +277,28 @@ static void gsc_work(struct work_struct *work) ...@@ -252,24 +277,28 @@ static void gsc_work(struct work_struct *work)
struct xe_gsc *gsc = container_of(work, typeof(*gsc), work); struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
struct xe_gt *gt = gsc_to_gt(gsc); struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt); struct xe_device *xe = gt_to_xe(gt);
u32 actions;
int ret; int ret;
spin_lock_irq(&gsc->lock);
actions = gsc->work_actions;
gsc->work_actions = 0;
spin_unlock_irq(&gsc->lock);
xe_device_mem_access_get(xe); xe_device_mem_access_get(xe);
xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC); xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
ret = gsc_upload(gsc);
if (ret && ret != -EEXIST) {
xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_LOAD_FAIL);
goto out;
if (actions & GSC_ACTION_FW_LOAD) {
ret = gsc_upload_and_init(gsc);
if (ret && ret != -EEXIST)
xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_LOAD_FAIL);
else
xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_RUNNING);
}
xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_TRANSFERRED);
if (actions & GSC_ACTION_SW_PROXY)
xe_gsc_proxy_request_handler(gsc);
/* HuC auth failure is not fatal */
if (xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GUC))
xe_huc_auth(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC);
out:
xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC); xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC);
xe_device_mem_access_put(xe); xe_device_mem_access_put(xe);
} }
...@@ -282,6 +311,7 @@ int xe_gsc_init(struct xe_gsc *gsc) ...@@ -282,6 +311,7 @@ int xe_gsc_init(struct xe_gsc *gsc)
gsc->fw.type = XE_UC_FW_TYPE_GSC; gsc->fw.type = XE_UC_FW_TYPE_GSC;
INIT_WORK(&gsc->work, gsc_work); INIT_WORK(&gsc->work, gsc_work);
spin_lock_init(&gsc->lock);
/* The GSC uC is only available on the media GT */ /* The GSC uC is only available on the media GT */
if (tile->media_gt && (gt != tile->media_gt)) { if (tile->media_gt && (gt != tile->media_gt)) {
...@@ -302,6 +332,10 @@ int xe_gsc_init(struct xe_gsc *gsc) ...@@ -302,6 +332,10 @@ int xe_gsc_init(struct xe_gsc *gsc)
else if (ret) else if (ret)
goto out; goto out;
ret = xe_gsc_proxy_init(gsc);
if (ret && ret != -ENODEV)
goto out;
return 0; return 0;
out: out:
...@@ -356,7 +390,7 @@ int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc) ...@@ -356,7 +390,7 @@ int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc)
q = xe_exec_queue_create(xe, NULL, q = xe_exec_queue_create(xe, NULL,
BIT(hwe->logical_instance), 1, hwe, BIT(hwe->logical_instance), 1, hwe,
EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_KERNEL |
EXEC_QUEUE_FLAG_PERMANENT);
EXEC_QUEUE_FLAG_PERMANENT, 0);
if (IS_ERR(q)) { if (IS_ERR(q)) {
xe_gt_err(gt, "Failed to create queue for GSC submission\n"); xe_gt_err(gt, "Failed to create queue for GSC submission\n");
err = PTR_ERR(q); err = PTR_ERR(q);
...@@ -401,6 +435,10 @@ void xe_gsc_load_start(struct xe_gsc *gsc) ...@@ -401,6 +435,10 @@ void xe_gsc_load_start(struct xe_gsc *gsc)
return; return;
} }
spin_lock_irq(&gsc->lock);
gsc->work_actions |= GSC_ACTION_FW_LOAD;
spin_unlock_irq(&gsc->lock);
queue_work(gsc->wq, &gsc->work); queue_work(gsc->wq, &gsc->work);
} }
...@@ -410,6 +448,15 @@ void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc) ...@@ -410,6 +448,15 @@ void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc)
flush_work(&gsc->work); flush_work(&gsc->work);
} }
/**
* xe_gsc_remove() - Clean up the GSC structures before driver removal
* @gsc: the GSC uC
*/
void xe_gsc_remove(struct xe_gsc *gsc)
{
xe_gsc_proxy_remove(gsc);
}
/* /*
* wa_14015076503: if the GSC FW is loaded, we need to alert it before doing a * wa_14015076503: if the GSC FW is loaded, we need to alert it before doing a
* GSC engine reset by writing a notification bit in the GS1 register and then * GSC engine reset by writing a notification bit in the GS1 register and then
...@@ -14,6 +14,7 @@ int xe_gsc_init(struct xe_gsc *gsc); ...@@ -14,6 +14,7 @@ int xe_gsc_init(struct xe_gsc *gsc);
int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc); int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc);
void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc); void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc);
void xe_gsc_load_start(struct xe_gsc *gsc); void xe_gsc_load_start(struct xe_gsc *gsc);
void xe_gsc_remove(struct xe_gsc *gsc);
void xe_gsc_wa_14015076503(struct xe_gt *gt, bool prep); void xe_gsc_wa_14015076503(struct xe_gt *gt, bool prep);
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GSC_PROXY_H_
#define _XE_GSC_PROXY_H_
#include <linux/types.h>
struct xe_gsc;
int xe_gsc_proxy_init(struct xe_gsc *gsc);
void xe_gsc_proxy_remove(struct xe_gsc *gsc);
int xe_gsc_proxy_start(struct xe_gsc *gsc);
int xe_gsc_proxy_request_handler(struct xe_gsc *gsc);
void xe_gsc_proxy_irq_handler(struct xe_gsc *gsc, u32 iir);
#endif
...@@ -5,6 +5,8 @@ ...@@ -5,6 +5,8 @@
#include "xe_gsc_submit.h" #include "xe_gsc_submit.h"
#include <linux/poison.h>
#include "abi/gsc_command_header_abi.h" #include "abi/gsc_command_header_abi.h"
#include "xe_bb.h" #include "xe_bb.h"
#include "xe_exec_queue.h" #include "xe_exec_queue.h"
...@@ -68,6 +70,17 @@ u32 xe_gsc_emit_header(struct xe_device *xe, struct iosys_map *map, u32 offset, ...@@ -68,6 +70,17 @@ u32 xe_gsc_emit_header(struct xe_device *xe, struct iosys_map *map, u32 offset,
return offset + GSC_HDR_SIZE; return offset + GSC_HDR_SIZE;
}; };
/**
* xe_gsc_poison_header - poison the MTL GSC header in memory
* @xe: the Xe device
* @map: the iosys map to write to
* @offset: offset from the start of the map at which the header resides
*/
void xe_gsc_poison_header(struct xe_device *xe, struct iosys_map *map, u32 offset)
{
xe_map_memset(xe, map, offset, POISON_FREE, GSC_HDR_SIZE);
};
/** /**
* xe_gsc_check_and_update_pending - check the pending bit and update the input * xe_gsc_check_and_update_pending - check the pending bit and update the input
* header with the retry handle from the output header * header with the retry handle from the output header
...@@ -112,11 +125,18 @@ int xe_gsc_read_out_header(struct xe_device *xe, ...@@ -112,11 +125,18 @@ int xe_gsc_read_out_header(struct xe_device *xe,
{ {
u32 marker = mtl_gsc_header_rd(xe, map, offset, validity_marker); u32 marker = mtl_gsc_header_rd(xe, map, offset, validity_marker);
u32 size = mtl_gsc_header_rd(xe, map, offset, message_size); u32 size = mtl_gsc_header_rd(xe, map, offset, message_size);
u32 status = mtl_gsc_header_rd(xe, map, offset, status);
u32 payload_size = size - GSC_HDR_SIZE; u32 payload_size = size - GSC_HDR_SIZE;
if (marker != GSC_HECI_VALIDITY_MARKER) if (marker != GSC_HECI_VALIDITY_MARKER)
return -EPROTO; return -EPROTO;
if (status != 0) {
drm_err(&xe->drm, "GSC header readout indicates error: %d\n",
status);
return -EINVAL;
}
if (size < GSC_HDR_SIZE || payload_size < min_payload_size) if (size < GSC_HDR_SIZE || payload_size < min_payload_size)
return -ENODATA; return -ENODATA;
...@@ -14,6 +14,7 @@ struct xe_gsc; ...@@ -14,6 +14,7 @@ struct xe_gsc;
u32 xe_gsc_emit_header(struct xe_device *xe, struct iosys_map *map, u32 offset, u32 xe_gsc_emit_header(struct xe_device *xe, struct iosys_map *map, u32 offset,
u8 heci_client_id, u64 host_session_id, u32 payload_size); u8 heci_client_id, u64 host_session_id, u32 payload_size);
void xe_gsc_poison_header(struct xe_device *xe, struct iosys_map *map, u32 offset);
bool xe_gsc_check_and_update_pending(struct xe_device *xe, bool xe_gsc_check_and_update_pending(struct xe_device *xe,
struct iosys_map *in, u32 offset_in, struct iosys_map *in, u32 offset_in,
...@@ -6,12 +6,17 @@ ...@@ -6,12 +6,17 @@
#ifndef _XE_GSC_TYPES_H_ #ifndef _XE_GSC_TYPES_H_
#define _XE_GSC_TYPES_H_ #define _XE_GSC_TYPES_H_
#include <linux/iosys-map.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include "xe_uc_fw_types.h" #include "xe_uc_fw_types.h"
struct xe_bo; struct xe_bo;
struct xe_exec_queue; struct xe_exec_queue;
struct i915_gsc_proxy_component;
/** /**
* struct xe_gsc - GSC * struct xe_gsc - GSC
...@@ -34,6 +39,34 @@ struct xe_gsc { ...@@ -34,6 +39,34 @@ struct xe_gsc {
/** @work: delayed load and proxy handling work */ /** @work: delayed load and proxy handling work */
struct work_struct work; struct work_struct work;
/** @lock: protects access to the work_actions mask */
spinlock_t lock;
/** @work_actions: mask of actions to be performed in the work */
u32 work_actions;
#define GSC_ACTION_FW_LOAD BIT(0)
#define GSC_ACTION_SW_PROXY BIT(1)
/** @proxy: sub-structure containing the SW proxy-related variables */
struct {
/** @proxy.component: struct for communication with mei component */
struct i915_gsc_proxy_component *component;
/** @proxy.mutex: protects the component binding and usage */
struct mutex mutex;
/** @proxy.component_added: whether the component has been added */
bool component_added;
/** @proxy.bo: object to store message to and from the GSC */
struct xe_bo *bo;
/** @proxy.to_gsc: map of the memory used to send messages to the GSC */
struct iosys_map to_gsc;
/** @proxy.from_gsc: map of the memory used to recv messages from the GSC */
struct iosys_map from_gsc;
/** @proxy.to_csme: pointer to the memory used to send messages to CSME */
void *to_csme;
/** @proxy.from_csme: pointer to the memory used to recv messages from CSME */
void *from_csme;
} proxy;
};
#endif
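A minimal sketch of how the new work_actions mask is meant to be driven from an interrupt path; the function name below is an assumption for illustration, not code from this series.

static void example_gsc_proxy_irq(struct xe_gsc *gsc)
{
	unsigned long flags;

	/* Record the request under gsc->lock; the worker clears the mask. */
	spin_lock_irqsave(&gsc->lock, flags);
	gsc->work_actions |= GSC_ACTION_SW_PROXY;
	spin_unlock_irqrestore(&gsc->lock, flags);

	queue_work(gsc->wq, &gsc->work);
}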
...@@ -78,6 +78,19 @@ void xe_gt_sanitize(struct xe_gt *gt) ...@@ -78,6 +78,19 @@ void xe_gt_sanitize(struct xe_gt *gt)
gt->uc.guc.submission_state.enabled = false; gt->uc.guc.submission_state.enabled = false;
} }
/**
* xe_gt_remove() - Clean up the GT structures before driver removal
* @gt: the GT object
*
* This function should only act on objects/structures that must be cleaned
* before the driver removal callback is complete and therefore can't be
* deferred to a drmm action.
*/
void xe_gt_remove(struct xe_gt *gt)
{
xe_uc_remove(&gt->uc);
}
static void gt_fini(struct drm_device *drm, void *arg) static void gt_fini(struct drm_device *drm, void *arg)
{ {
struct xe_gt *gt = arg; struct xe_gt *gt = arg;
...@@ -235,7 +248,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt) ...@@ -235,7 +248,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
return -ENOMEM; return -ENOMEM;
q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance), 1, q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance), 1,
hwe, EXEC_QUEUE_FLAG_KERNEL);
hwe, EXEC_QUEUE_FLAG_KERNEL, 0);
if (IS_ERR(q)) { if (IS_ERR(q)) {
err = PTR_ERR(q); err = PTR_ERR(q);
xe_gt_err(gt, "hwe %s: xe_exec_queue_create failed (%pe)\n", xe_gt_err(gt, "hwe %s: xe_exec_queue_create failed (%pe)\n",
...@@ -252,7 +265,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt) ...@@ -252,7 +265,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
} }
nop_q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance), nop_q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance),
1, hwe, EXEC_QUEUE_FLAG_KERNEL);
1, hwe, EXEC_QUEUE_FLAG_KERNEL, 0);
if (IS_ERR(nop_q)) { if (IS_ERR(nop_q)) {
err = PTR_ERR(nop_q); err = PTR_ERR(nop_q);
xe_gt_err(gt, "hwe %s: nop xe_exec_queue_create failed (%pe)\n", xe_gt_err(gt, "hwe %s: nop xe_exec_queue_create failed (%pe)\n",
...@@ -302,7 +315,6 @@ int xe_gt_init_early(struct xe_gt *gt) ...@@ -302,7 +315,6 @@ int xe_gt_init_early(struct xe_gt *gt)
return err; return err;
xe_gt_topology_init(gt); xe_gt_topology_init(gt);
xe_gt_mcr_init(gt);
err = xe_force_wake_put(gt_to_fw(gt), XE_FW_GT); err = xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
if (err) if (err)
@@ -341,8 +353,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
	if (err)
		goto err_hw_fence_irq;
-	xe_pat_init(gt);
	if (!xe_gt_is_media_type(gt)) {
		err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
		if (err)
@@ -351,22 +361,8 @@
		xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
	}
-	err = xe_uc_init(&gt->uc);
-	if (err)
-		goto err_force_wake;
-	/* Raise GT freq to speed up HuC/GuC load */
-	xe_guc_pc_init_early(&gt->uc.guc.pc);
-	err = xe_uc_init_hwconfig(&gt->uc);
-	if (err)
-		goto err_force_wake;
	xe_gt_idle_sysfs_init(&gt->gtidle);
-	/* XXX: Fake that we pull the engine mask from hwconfig blob */
-	gt->info.engine_mask = gt->info.__engine_mask;
	/* Enable per hw engine IRQs */
	xe_irq_enable_hwe(gt);
@@ -386,6 +382,12 @@
	/* Initialize CCS mode sysfs after early initialization of HW engines */
	xe_gt_ccs_mode_sysfs_init(gt);
/*
* Stash hardware-reported version. Since this register does not exist
* on pre-MTL platforms, reading it there will (correctly) return 0.
*/
gt->info.gmdid = xe_mmio_read32(gt, GMD_ID);
	err = xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
	XE_WARN_ON(err);
	xe_device_mem_access_put(gt_to_xe(gt));
@@ -428,16 +430,15 @@ static int all_fw_domain_init(struct xe_gt *gt)
	if (err)
		goto err_force_wake;
-	err = xe_uc_init_post_hwconfig(&gt->uc);
-	if (err)
-		goto err_force_wake;
	if (!xe_gt_is_media_type(gt)) {
		/*
		 * USM has its only SA pool to non-block behind user operations
		 */
		if (gt_to_xe(gt)->info.has_usm) {
-			gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt), SZ_1M, 16);
+			struct xe_device *xe = gt_to_xe(gt);
+			gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
+								IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
			if (IS_ERR(gt->usm.bb_pool)) {
				err = PTR_ERR(gt->usm.bb_pool);
				goto err_force_wake;
@@ -455,6 +456,10 @@
		}
	}
err = xe_uc_init_post_hwconfig(&gt->uc);
if (err)
goto err_force_wake;
	err = xe_uc_init_hw(&gt->uc);
	if (err)
		goto err_force_wake;
@@ -484,6 +489,41 @@
	return err;
}
/*
* Initialize enough GT to be able to load GuC in order to obtain hwconfig and
* enable CTB communication.
*/
int xe_gt_init_hwconfig(struct xe_gt *gt)
{
int err;
xe_device_mem_access_get(gt_to_xe(gt));
err = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
if (err)
goto out;
xe_gt_mcr_init(gt);
xe_pat_init(gt);
err = xe_uc_init(&gt->uc);
if (err)
goto out_fw;
err = xe_uc_init_hwconfig(&gt->uc);
if (err)
goto out_fw;
/* XXX: Fake that we pull the engine mask from hwconfig blob */
gt->info.engine_mask = gt->info.__engine_mask;
out_fw:
xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
out:
xe_device_mem_access_put(gt_to_xe(gt));
return err;
}
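For orientation, a hedged sketch of how this new entry point is intended to be used: the wrapper below and its exact placement in the real probe sequence are assumptions, not taken from this series; what the diff does establish is that it must run per GT before xe_gt_init(), since the MCR/PAT setup and the minimal GuC load now live here.

/*
 * Illustrative probe-path sketch only; placement is an assumption.
 */
static int example_init_all_hwconfig(struct xe_device *xe)
{
	struct xe_gt *gt;
	u8 id;
	int err;

	for_each_gt(gt, xe, id) {
		err = xe_gt_init_hwconfig(gt);
		if (err)
			return err;
	}
	return 0;
}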
int xe_gt_init(struct xe_gt *gt)
{
	int err;
@@ -619,12 +659,12 @@ static int gt_reset(struct xe_gt *gt)
	if (err)
		goto err_out;
-	xe_gt_tlb_invalidation_reset(gt);
	err = do_gt_reset(gt);
	if (err)
		goto err_out;
+	xe_gt_tlb_invalidation_reset(gt);
	err = do_gt_restart(gt);
	if (err)
		goto err_out;
......
@@ -33,6 +33,7 @@ static inline bool xe_fault_inject_gt_reset(void)
#endif
struct xe_gt *xe_gt_alloc(struct xe_tile *tile);
int xe_gt_init_hwconfig(struct xe_gt *gt);
int xe_gt_init_early(struct xe_gt *gt);
int xe_gt_init(struct xe_gt *gt);
int xe_gt_record_default_lrcs(struct xe_gt *gt);
@@ -41,6 +42,7 @@ int xe_gt_suspend(struct xe_gt *gt);
int xe_gt_resume(struct xe_gt *gt);
void xe_gt_reset_async(struct xe_gt *gt);
void xe_gt_sanitize(struct xe_gt *gt);
void xe_gt_remove(struct xe_gt *gt);
/**
 * xe_gt_any_hw_engine_by_reset_domain - scan the list of engines and return the
......
@@ -145,10 +145,10 @@ void xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
	}
	if (xe_gt_is_media_type(gt)) {
-		sprintf(gtidle->name, "gt%d-mc\n", gt->info.id);
+		sprintf(gtidle->name, "gt%d-mc", gt->info.id);
		gtidle->idle_residency = xe_guc_pc_mc6_residency;
	} else {
-		sprintf(gtidle->name, "gt%d-rc\n", gt->info.id);
+		sprintf(gtidle->name, "gt%d-rc", gt->info.id);
		gtidle->idle_residency = xe_guc_pc_rc6_residency;
	}
......
@@ -10,6 +10,7 @@
#include "xe_gt_topology.h"
#include "xe_gt_types.h"
#include "xe_mmio.h"
#include "xe_sriov.h"
/**
 * DOC: GT Multicast/Replicated (MCR) Register Support
@@ -38,6 +39,8 @@
 * ``init_steering_*()`` functions is to apply the platform-specific rules for
 * each MCR register type to identify a steering target that will select a
 * non-terminated instance.
+ *
+ * MCR registers are not available on Virtual Function (VF).
 */
#define STEER_SEMAPHORE XE_REG(0xFD0)
@@ -352,6 +355,9 @@ void xe_gt_mcr_init(struct xe_gt *gt)
	BUILD_BUG_ON(IMPLICIT_STEERING + 1 != NUM_STEERING_TYPES);
	BUILD_BUG_ON(ARRAY_SIZE(xe_steering_types) != NUM_STEERING_TYPES);
if (IS_SRIOV_VF(xe))
return;
	spin_lock_init(&gt->mcr_lock);
	if (gt->info.type == XE_GT_TYPE_MEDIA) {
@@ -405,6 +411,9 @@ void xe_gt_mcr_set_implicit_defaults(struct xe_gt *gt)
{
	struct xe_device *xe = gt_to_xe(gt);
if (IS_SRIOV_VF(xe))
return;
	if (xe->info.platform == XE_DG2) {
		u32 steer_val = REG_FIELD_PREP(MCR_SLICE_MASK, 0) |
			REG_FIELD_PREP(MCR_SUBSLICE_MASK, 2);
@@ -588,6 +597,8 @@ u32 xe_gt_mcr_unicast_read_any(struct xe_gt *gt, struct xe_reg_mcr reg_mcr)
	u32 val;
	bool steer;
xe_gt_assert(gt, !IS_SRIOV_VF(gt_to_xe(gt)));
	steer = xe_gt_mcr_get_nonterminated_steering(gt, reg_mcr,
						     &group, &instance);
@@ -619,6 +630,8 @@ u32 xe_gt_mcr_unicast_read(struct xe_gt *gt,
{
	u32 val;
xe_gt_assert(gt, !IS_SRIOV_VF(gt_to_xe(gt)));
	mcr_lock(gt);
	val = rw_with_mcr_steering(gt, reg_mcr, MCR_OP_READ, group, instance, 0);
	mcr_unlock(gt);
@@ -640,6 +653,8 @@ u32 xe_gt_mcr_unicast_read(struct xe_gt *gt,
void xe_gt_mcr_unicast_write(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
			     u32 value, int group, int instance)
{
xe_gt_assert(gt, !IS_SRIOV_VF(gt_to_xe(gt)));
	mcr_lock(gt);
	rw_with_mcr_steering(gt, reg_mcr, MCR_OP_WRITE, group, instance, value);
	mcr_unlock(gt);
@@ -658,6 +673,8 @@ void xe_gt_mcr_multicast_write(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
{
	struct xe_reg reg = to_xe_reg(reg_mcr);
xe_gt_assert(gt, !IS_SRIOV_VF(gt_to_xe(gt)));
	/*
	 * Synchronize with any unicast operations. Once we have exclusive
	 * access, the MULTICAST bit should already be set, so there's no need
......
@@ -285,9 +285,9 @@ static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
	bool ret = false;
	spin_lock_irq(&pf_queue->lock);
-	if (pf_queue->head != pf_queue->tail) {
+	if (pf_queue->tail != pf_queue->head) {
		desc = (const struct xe_guc_pagefault_desc *)
-			(pf_queue->data + pf_queue->head);
+			(pf_queue->data + pf_queue->tail);
		pf->fault_level = FIELD_GET(PFD_FAULT_LEVEL, desc->dw0);
		pf->trva_fault = FIELD_GET(XE2_PFD_TRVA_FAULT, desc->dw0);
@@ -305,7 +305,7 @@ static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
		pf->page_addr |= FIELD_GET(PFD_VIRTUAL_ADDR_LO, desc->dw2) <<
			PFD_VIRTUAL_ADDR_LO_SHIFT;
-		pf_queue->head = (pf_queue->head + PF_MSG_LEN_DW) %
+		pf_queue->tail = (pf_queue->tail + PF_MSG_LEN_DW) %
			PF_QUEUE_NUM_DW;
		ret = true;
	}
@@ -318,7 +318,7 @@ static bool pf_queue_full(struct pf_queue *pf_queue)
{
	lockdep_assert_held(&pf_queue->lock);
-	return CIRC_SPACE(pf_queue->tail, pf_queue->head, PF_QUEUE_NUM_DW) <=
+	return CIRC_SPACE(pf_queue->head, pf_queue->tail, PF_QUEUE_NUM_DW) <=
		PF_MSG_LEN_DW;
}
@@ -331,17 +331,22 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
	u32 asid;
	bool full;
/*
* The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
*/
BUILD_BUG_ON(PF_QUEUE_NUM_DW % PF_MSG_LEN_DW);
	if (unlikely(len != PF_MSG_LEN_DW))
		return -EPROTO;
	asid = FIELD_GET(PFD_ASID, msg[1]);
-	pf_queue = &gt->usm.pf_queue[asid % NUM_PF_QUEUE];
+	pf_queue = gt->usm.pf_queue + (asid % NUM_PF_QUEUE);
	spin_lock_irqsave(&pf_queue->lock, flags);
	full = pf_queue_full(pf_queue);
	if (!full) {
-		memcpy(pf_queue->data + pf_queue->tail, msg, len * sizeof(u32));
-		pf_queue->tail = (pf_queue->tail + len) % PF_QUEUE_NUM_DW;
+		memcpy(pf_queue->data + pf_queue->head, msg, len * sizeof(u32));
+		pf_queue->head = (pf_queue->head + len) % PF_QUEUE_NUM_DW;
		queue_work(gt->usm.pf_wq, &pf_queue->worker);
	} else {
		drm_warn(&xe->drm, "PF Queue full, shouldn't be possible");
@@ -387,7 +392,7 @@ static void pf_queue_work_func(struct work_struct *w)
		send_pagefault_reply(&gt->uc.guc, &reply);
		if (time_after(jiffies, threshold) &&
-		    pf_queue->head != pf_queue->tail) {
+		    pf_queue->tail != pf_queue->head) {
			queue_work(gt->usm.pf_wq, w);
			break;
		}
@@ -562,9 +567,9 @@ static bool get_acc(struct acc_queue *acc_queue, struct acc *acc)
	bool ret = false;
	spin_lock(&acc_queue->lock);
-	if (acc_queue->head != acc_queue->tail) {
+	if (acc_queue->tail != acc_queue->head) {
		desc = (const struct xe_guc_acc_desc *)
-			(acc_queue->data + acc_queue->head);
+			(acc_queue->data + acc_queue->tail);
		acc->granularity = FIELD_GET(ACC_GRANULARITY, desc->dw2);
		acc->sub_granularity = FIELD_GET(ACC_SUBG_HI, desc->dw1) << 31 |
@@ -577,7 +582,7 @@ static bool get_acc(struct acc_queue *acc_queue, struct acc *acc)
		acc->va_range_base = make_u64(desc->dw3 & ACC_VIRTUAL_ADDR_RANGE_HI,
					      desc->dw2 & ACC_VIRTUAL_ADDR_RANGE_LO);
-		acc_queue->head = (acc_queue->head + ACC_MSG_LEN_DW) %
+		acc_queue->tail = (acc_queue->tail + ACC_MSG_LEN_DW) %
			ACC_QUEUE_NUM_DW;
		ret = true;
	}
@@ -605,7 +610,7 @@ static void acc_queue_work_func(struct work_struct *w)
	}
	if (time_after(jiffies, threshold) &&
-	    acc_queue->head != acc_queue->tail) {
+	    acc_queue->tail != acc_queue->head) {
		queue_work(gt->usm.acc_wq, w);
		break;
	}
@@ -616,7 +621,7 @@ static bool acc_queue_full(struct acc_queue *acc_queue)
{
	lockdep_assert_held(&acc_queue->lock);
-	return CIRC_SPACE(acc_queue->tail, acc_queue->head, ACC_QUEUE_NUM_DW) <=
+	return CIRC_SPACE(acc_queue->head, acc_queue->tail, ACC_QUEUE_NUM_DW) <=
		ACC_MSG_LEN_DW;
}
@@ -627,6 +632,11 @@ int xe_guc_access_counter_notify_handler(struct xe_guc *guc, u32 *msg, u32 len)
	u32 asid;
	bool full;
/*
* The below logic doesn't work unless ACC_QUEUE_NUM_DW % ACC_MSG_LEN_DW == 0
*/
BUILD_BUG_ON(ACC_QUEUE_NUM_DW % ACC_MSG_LEN_DW);
	if (unlikely(len != ACC_MSG_LEN_DW))
		return -EPROTO;
@@ -636,9 +646,9 @@ int xe_guc_access_counter_notify_handler(struct xe_guc *guc, u32 *msg, u32 len)
	spin_lock(&acc_queue->lock);
	full = acc_queue_full(acc_queue);
	if (!full) {
-		memcpy(acc_queue->data + acc_queue->tail, msg,
+		memcpy(acc_queue->data + acc_queue->head, msg,
		       len * sizeof(u32));
-		acc_queue->tail = (acc_queue->tail + len) % ACC_QUEUE_NUM_DW;
+		acc_queue->head = (acc_queue->head + len) % ACC_QUEUE_NUM_DW;
		queue_work(gt->usm.acc_wq, &acc_queue->worker);
	} else {
		drm_warn(&gt_to_xe(gt)->drm, "ACC Queue full, dropping ACC");
......
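The head/tail swaps above adopt the kernel's circ_buf convention: the producer writes at head, the consumer reads at tail, which is what makes the CIRC_SPACE() argument order meaningful and what the new BUILD_BUG_ON checks rely on (the queue size must be a whole number of messages). A minimal self-contained sketch of that convention follows; the structure and names are illustrative, not the driver's pf_queue/acc_queue.

#include <linux/circ_buf.h>
#include <linux/string.h>
#include <linux/types.h>

#define DEMO_QUEUE_NUM_DW	128	/* power of two, and a multiple of DEMO_MSG_LEN_DW */
#define DEMO_MSG_LEN_DW		4

struct demo_queue {
	u32 data[DEMO_QUEUE_NUM_DW];
	u32 head;	/* producer index: advanced by the message handler */
	u32 tail;	/* consumer index: advanced by the worker */
};

static bool demo_queue_full(struct demo_queue *q)
{
	/* Free space as seen by the producer: CIRC_SPACE(head, tail, size). */
	return CIRC_SPACE(q->head, q->tail, DEMO_QUEUE_NUM_DW) <= DEMO_MSG_LEN_DW;
}

static bool demo_produce(struct demo_queue *q, const u32 *msg)
{
	if (demo_queue_full(q))
		return false;
	memcpy(q->data + q->head, msg, DEMO_MSG_LEN_DW * sizeof(u32));
	q->head = (q->head + DEMO_MSG_LEN_DW) % DEMO_QUEUE_NUM_DW;
	return true;
}

static bool demo_consume(struct demo_queue *q, u32 *msg)
{
	if (q->tail == q->head)
		return false;	/* empty */
	memcpy(msg, q->data + q->tail, DEMO_MSG_LEN_DW * sizeof(u32));
	q->tail = (q->tail + DEMO_MSG_LEN_DW) % DEMO_QUEUE_NUM_DW;
	return true;
}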
@@ -43,4 +43,48 @@
#define xe_gt_WARN_ON_ONCE(_gt, _condition) \
	xe_gt_WARN_ONCE((_gt), _condition, "%s(%s)", "gt_WARN_ON_ONCE", __stringify(_condition))
static inline void __xe_gt_printfn_err(struct drm_printer *p, struct va_format *vaf)
{
struct xe_gt *gt = p->arg;
xe_gt_err(gt, "%pV", vaf);
}
static inline void __xe_gt_printfn_info(struct drm_printer *p, struct va_format *vaf)
{
struct xe_gt *gt = p->arg;
xe_gt_info(gt, "%pV", vaf);
}
/**
* xe_gt_err_printer - Construct a &drm_printer that outputs to xe_gt_err()
* @gt: the &xe_gt pointer to use in xe_gt_err()
*
* Return: The &drm_printer object.
*/
static inline struct drm_printer xe_gt_err_printer(struct xe_gt *gt)
{
struct drm_printer p = {
.printfn = __xe_gt_printfn_err,
.arg = gt,
};
return p;
}
/**
* xe_gt_info_printer - Construct a &drm_printer that outputs to xe_gt_info()
* @gt: the &xe_gt pointer to use in xe_gt_info()
*
* Return: The &drm_printer object.
*/
static inline struct drm_printer xe_gt_info_printer(struct xe_gt *gt)
{
struct drm_printer p = {
.printfn = __xe_gt_printfn_info,
.arg = gt,
};
return p;
}
#endif
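These printers are used like any other &drm_printer; the TLB invalidation change later in this pull does exactly this. A minimal illustrative call site (the wrapper function and its parameters are made up for the example):

static void example_dump_ct(struct xe_gt *gt, struct xe_guc *guc)
{
	struct drm_printer p = xe_gt_err_printer(gt);

	/* Every line printed by the snapshot ends up in the GT-prefixed error log. */
	xe_guc_ct_print(&guc->ct, &p, true);
}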
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PRINTK_H_
#define _XE_GT_SRIOV_PRINTK_H_
#include "xe_gt_printk.h"
#include "xe_sriov_printk.h"
#define __xe_gt_sriov_printk(gt, _level, fmt, ...) \
xe_gt_printk((gt), _level, "%s" fmt, xe_sriov_printk_prefix(gt_to_xe(gt)), ##__VA_ARGS__)
#define xe_gt_sriov_err(_gt, _fmt, ...) \
__xe_gt_sriov_printk(_gt, err, _fmt, ##__VA_ARGS__)
#define xe_gt_sriov_notice(_gt, _fmt, ...) \
__xe_gt_sriov_printk(_gt, notice, _fmt, ##__VA_ARGS__)
#define xe_gt_sriov_info(_gt, _fmt, ...) \
__xe_gt_sriov_printk(_gt, info, _fmt, ##__VA_ARGS__)
#define xe_gt_sriov_dbg(_gt, _fmt, ...) \
__xe_gt_sriov_printk(_gt, dbg, _fmt, ##__VA_ARGS__)
/* for low level noisy debug messages */
#ifdef CONFIG_DRM_XE_DEBUG_SRIOV
#define xe_gt_sriov_dbg_verbose(_gt, _fmt, ...) xe_gt_sriov_dbg(_gt, _fmt, ##__VA_ARGS__)
#else
#define xe_gt_sriov_dbg_verbose(_gt, _fmt, ...) typecheck(struct xe_gt *, (_gt))
#endif
#endif
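A hedged example of how these wrappers compose: the message goes through xe_gt_printk() with the SR-IOV mode prefix from xe_sriov_printk_prefix() prepended. The wrapper function, message text, and variable below are made up for illustration.

static void example_log_reservation(struct xe_gt *gt, unsigned int dbno)
{
	/* Rendered with the SR-IOV mode prefix, e.g. "VF: ", ahead of the text. */
	xe_gt_sriov_dbg(gt, "doorbell %u reserved\n", dbno);
}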
@@ -8,6 +8,7 @@
#include "abi/guc_actions_abi.h"
#include "xe_device.h"
#include "xe_gt.h"
+#include "xe_gt_printk.h"
#include "xe_guc.h"
#include "xe_guc_ct.h"
#include "xe_trace.h"
@@ -30,8 +31,8 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
			break;
		trace_xe_gt_tlb_invalidation_fence_timeout(fence);
-		drm_err(&gt_to_xe(gt)->drm, "gt%d: TLB invalidation fence timeout, seqno=%d recv=%d",
-			gt->info.id, fence->seqno, gt->tlb_invalidation.seqno_recv);
+		xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d",
+			  fence->seqno, gt->tlb_invalidation.seqno_recv);
		list_del(&fence->link);
		fence->base.error = -ETIME;
@@ -312,9 +313,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
 */
int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno)
{
-	struct xe_device *xe = gt_to_xe(gt);
	struct xe_guc *guc = &gt->uc.guc;
-	struct drm_printer p = drm_err_printer(&xe->drm, __func__);
	int ret;
	/*
@@ -325,8 +324,10 @@ int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno)
				 tlb_invalidation_seqno_past(gt, seqno),
				 TLB_TIMEOUT);
	if (!ret) {
-		drm_err(&xe->drm, "gt%d: TLB invalidation time'd out, seqno=%d, recv=%d\n",
-			gt->info.id, seqno, gt->tlb_invalidation.seqno_recv);
+		struct drm_printer p = xe_gt_err_printer(gt);
+		xe_gt_err(gt, "TLB invalidation time'd out, seqno=%d, recv=%d\n",
+			  seqno, gt->tlb_invalidation.seqno_recv);
		xe_guc_ct_print(&guc->ct, &p, true);
		return -ETIME;
	}
......
@@ -13,6 +13,7 @@
struct drm_printer;
void xe_guc_comm_init_early(struct xe_guc *guc);
int xe_guc_init(struct xe_guc *guc);
int xe_guc_init_post_hwconfig(struct xe_guc *guc);
int xe_guc_post_load_init(struct xe_guc *guc);
......
@@ -273,7 +273,7 @@ int xe_guc_ads_init(struct xe_guc_ads *ads)
	ads->regset_size = calculate_regset_size(gt);
	bo = xe_managed_bo_create_pin_map(xe, tile, guc_ads_size(ads) + MAX_GOLDEN_LRC_SIZE,
-					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
+					  XE_BO_CREATE_SYSTEM_BIT |
					  XE_BO_CREATE_GGTT_BIT);
	if (IS_ERR(bo))
		return PTR_ERR(bo);
......
@@ -13,6 +13,7 @@ struct drm_printer;
int xe_guc_ct_init(struct xe_guc_ct *ct);
int xe_guc_ct_enable(struct xe_guc_ct *ct);
void xe_guc_ct_disable(struct xe_guc_ct *ct);
void xe_guc_ct_stop(struct xe_guc_ct *ct);
void xe_guc_ct_fast_path(struct xe_guc_ct *ct);
struct xe_guc_ct_snapshot *
@@ -22,11 +23,18 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot,
void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot);
void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool atomic);
static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct)
{
return ct->state == XE_GUC_CT_STATE_ENABLED;
}
static inline void xe_guc_ct_irq_handler(struct xe_guc_ct *ct)
{
+	if (!xe_guc_ct_enabled(ct))
+		return;
	wake_up_all(&ct->wq);
-	if (ct->enabled)
-		queue_work(system_unbound_wq, &ct->g2h_worker);
+	queue_work(system_unbound_wq, &ct->g2h_worker);
	xe_guc_ct_fast_path(ct);
}
......
@@ -72,6 +72,20 @@ struct xe_guc_ct_snapshot {
	struct guc_ctb_snapshot h2g;
};
/**
* enum xe_guc_ct_state - CT state
* @XE_GUC_CT_STATE_NOT_INITIALIZED: CT not initialized, messages not expected in this state
* @XE_GUC_CT_STATE_DISABLED: CT disabled, messages not expected in this state
* @XE_GUC_CT_STATE_STOPPED: CT stopped, drop messages without errors
* @XE_GUC_CT_STATE_ENABLED: CT enabled, messages sent / received in this state
*/
enum xe_guc_ct_state {
XE_GUC_CT_STATE_NOT_INITIALIZED = 0,
XE_GUC_CT_STATE_DISABLED,
XE_GUC_CT_STATE_STOPPED,
XE_GUC_CT_STATE_ENABLED,
};
/**
 * struct xe_guc_ct - GuC command transport (CT) layer
 *
@@ -87,17 +101,17 @@ struct xe_guc_ct {
	spinlock_t fast_lock;
	/** @ctbs: buffers for sending and receiving commands */
	struct {
-		/** @send: Host to GuC (H2G, send) channel */
+		/** @ctbs.send: Host to GuC (H2G, send) channel */
		struct guc_ctb h2g;
-		/** @recv: GuC to Host (G2H, receive) channel */
+		/** @ctbs.recv: GuC to Host (G2H, receive) channel */
		struct guc_ctb g2h;
	} ctbs;
	/** @g2h_outstanding: number of outstanding G2H */
	u32 g2h_outstanding;
	/** @g2h_worker: worker to process G2H messages */
	struct work_struct g2h_worker;
-	/** @enabled: CT enabled */
-	bool enabled;
+	/** @state: CT state */
+	enum xe_guc_ct_state state;
	/** @fence_seqno: G2H fence seqno - 16 bits used by CT */
	u32 fence_seqno;
	/** @fence_lookup: G2H fence lookup */
......
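A rough sketch of the intent behind the new CT states, as described by the enum kernel-doc above: during a reset the CT is "stopped" rather than "disabled", so incoming messages are dropped silently instead of being treated as protocol errors. The function below is illustrative only, not the driver's receive path.

/* Illustrative only: how a receive path could interpret the CT states. */
static int demo_handle_g2h(struct xe_guc_ct *ct)
{
	switch (ct->state) {
	case XE_GUC_CT_STATE_ENABLED:
		return 0;		/* process the message normally */
	case XE_GUC_CT_STATE_STOPPED:
		return 0;		/* reset in progress: drop without error */
	case XE_GUC_CT_STATE_DISABLED:
	case XE_GUC_CT_STATE_NOT_INITIALIZED:
	default:
		return -ENODEV;		/* messages are not expected here */
	}
}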
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GUC_DB_MGR_H_
#define _XE_GUC_DB_MGR_H_
struct drm_printer;
struct xe_guc_db_mgr;
int xe_guc_db_mgr_init(struct xe_guc_db_mgr *dbm, unsigned int count);
int xe_guc_db_mgr_reserve_id_locked(struct xe_guc_db_mgr *dbm);
void xe_guc_db_mgr_release_id_locked(struct xe_guc_db_mgr *dbm, unsigned int id);
int xe_guc_db_mgr_reserve_range(struct xe_guc_db_mgr *dbm, unsigned int count, unsigned int spare);
void xe_guc_db_mgr_release_range(struct xe_guc_db_mgr *dbm, unsigned int start, unsigned int count);
void xe_guc_db_mgr_print(struct xe_guc_db_mgr *dbm, struct drm_printer *p, int indent);
#endif
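A hedged usage sketch for the new GuC Doorbells Manager during VF provisioning. It assumes xe_guc_db_mgr_reserve_range() returns the first doorbell ID of the reserved range or a negative errno; the wrapper, count, and spare values are illustrative, not taken from this series.

/* Illustrative only; return-value semantics are an assumption. */
static int example_provision_doorbells(struct xe_guc_db_mgr *dbm)
{
	int first;

	first = xe_guc_db_mgr_reserve_range(dbm, 8 /* count */, 0 /* spare */);
	if (first < 0)
		return first;

	/* ... hand doorbells [first, first + 8) over to a VF ... */

	xe_guc_db_mgr_release_range(dbm, first, 8);
	return 0;
}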
@@ -97,6 +97,7 @@ struct guc_update_exec_queue_policy {
#define GUC_WA_POLLCS BIT(18)
#define GUC_WA_RENDER_RST_RC6_EXIT BIT(19)
#define GUC_WA_RCS_REGS_IN_CCS_REGS_LIST BIT(21)
#define GUC_WA_ENABLE_TSC_CHECK_ON_RC6 BIT(22)
#define GUC_CTL_FEATURE 2
#define GUC_CTL_ENABLE_SLPC BIT(2)
......
@@ -78,7 +78,7 @@ int xe_guc_hwconfig_init(struct xe_guc *guc)
		return -EINVAL;
	bo = xe_managed_bo_create_pin_map(xe, tile, PAGE_ALIGN(size),
-					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
+					  XE_BO_CREATE_SYSTEM_BIT |
					  XE_BO_CREATE_GGTT_BIT);
	if (IS_ERR(bo))
		return PTR_ERR(bo);
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GUC_HXG_HELPERS_H_
#define _XE_GUC_HXG_HELPERS_H_
#include <linux/bitfield.h>
#include <linux/types.h>
#include "abi/guc_messages_abi.h"
/**
* hxg_sizeof - Queries size of the object or type (in HXG units).
* @T: the object or type
*
* Force a compilation error if actual size is not aligned to HXG unit (u32).
*
* Return: size in dwords (u32).
*/
#define hxg_sizeof(T) (sizeof(T) / sizeof(u32) + BUILD_BUG_ON_ZERO(sizeof(T) % sizeof(u32)))
static inline const char *guc_hxg_type_to_string(unsigned int type)
{
switch (type) {
case GUC_HXG_TYPE_REQUEST:
return "request";
case GUC_HXG_TYPE_FAST_REQUEST:
return "fast-request";
case GUC_HXG_TYPE_EVENT:
return "event";
case GUC_HXG_TYPE_NO_RESPONSE_BUSY:
return "busy";
case GUC_HXG_TYPE_NO_RESPONSE_RETRY:
return "retry";
case GUC_HXG_TYPE_RESPONSE_FAILURE:
return "failure";
case GUC_HXG_TYPE_RESPONSE_SUCCESS:
return "response";
default:
return "<invalid>";
}
}
static inline bool guc_hxg_type_is_action(unsigned int type)
{
switch (type) {
case GUC_HXG_TYPE_REQUEST:
case GUC_HXG_TYPE_FAST_REQUEST:
case GUC_HXG_TYPE_EVENT:
return true;
default:
return false;
}
}
static inline bool guc_hxg_type_is_reply(unsigned int type)
{
switch (type) {
case GUC_HXG_TYPE_NO_RESPONSE_BUSY:
case GUC_HXG_TYPE_NO_RESPONSE_RETRY:
case GUC_HXG_TYPE_RESPONSE_FAILURE:
case GUC_HXG_TYPE_RESPONSE_SUCCESS:
return true;
default:
return false;
}
}
static inline u32 guc_hxg_msg_encode_success(u32 *msg, u32 data0)
{
msg[0] = FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_RESPONSE_SUCCESS) |
FIELD_PREP(GUC_HXG_RESPONSE_MSG_0_DATA0, data0);
return GUC_HXG_RESPONSE_MSG_MIN_LEN;
}
static inline u32 guc_hxg_msg_encode_failure(u32 *msg, u32 error, u32 hint)
{
msg[0] = FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_RESPONSE_FAILURE) |
FIELD_PREP(GUC_HXG_FAILURE_MSG_0_HINT, hint) |
FIELD_PREP(GUC_HXG_FAILURE_MSG_0_ERROR, error);
return GUC_HXG_FAILURE_MSG_LEN;
}
static inline u32 guc_hxg_msg_encode_busy(u32 *msg, u32 counter)
{
msg[0] = FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_NO_RESPONSE_BUSY) |
FIELD_PREP(GUC_HXG_BUSY_MSG_0_COUNTER, counter);
return GUC_HXG_BUSY_MSG_LEN;
}
static inline u32 guc_hxg_msg_encode_retry(u32 *msg, u32 reason)
{
msg[0] = FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_NO_RESPONSE_RETRY) |
FIELD_PREP(GUC_HXG_RETRY_MSG_0_REASON, reason);
return GUC_HXG_RETRY_MSG_LEN;
}
#endif
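Two hedged usage snippets for the helpers above. The struct, the reply buffer, and the wrapper function are hypothetical and only illustrate the pattern; the counter value would come from the caller.

/* hxg_sizeof() yields a length in dwords and fails the build if the type is
 * not u32-aligned. The struct here is made up for the example.
 */
struct demo_relay_msg {
	u32 header;
	u32 payload;
};
#define DEMO_RELAY_MSG_LEN	hxg_sizeof(struct demo_relay_msg)	/* == 2 */

/* Encode a one-dword BUSY reply while a relayed request is still pending. */
static u32 demo_encode_busy_reply(u32 *reply, u32 counter)
{
	return guc_hxg_msg_encode_busy(reply, counter);
}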