Commit 81dd4d4d authored by Tom Zanussi, committed by Vinod Koul

dmaengine: idxd: Add IDXD performance monitor support

Implement the IDXD performance monitor capability (named 'perfmon' in
the DSA (Data Streaming Accelerator) spec [1]), which supports the
collection of information about key events occurring during DSA and
IAX (Intel Analytics Accelerator) device execution, to assist in
performance tuning and debugging.

The idxd perfmon support is implemented as part of the IDXD driver and
interfaces with the Linux perf framework.  It has several features in
common with the existing uncore pmu support:

  - it does not support sampling
  - it does not support per-thread counting

However it also has some unique features not present in the core and
uncore support:

  - all general-purpose counters are identical, thus no event constraints
  - operation is always system-wide

While the core perf subsystem assumes that all counters are by default
per-cpu, the uncore pmus are socket-scoped and use a cpu mask to
restrict counting to one cpu from each socket.  IDXD counters use a
similar strategy but expand the scope even further; since IDXD
counters are system-wide and can be read from any cpu, the IDXD perf
driver picks a single cpu to do the work (with cpu hotplug notifiers
to choose a different cpu if the chosen one is taken off-line).

More specifically, the perf userspace tool by default opens a counter
for each cpu for an event.  However, if it finds a cpumask file
associated with the pmu under sysfs, as is the case with the uncore
pmus, it will open counters only on the cpus specified by the cpumask.
Since perfmon only needs to open a single counter per event for a
given IDXD device, the perfmon driver will create a sysfs cpumask file
for the device and insert the first cpu of the system into it.  When a
user uses perf to open an event, perf will open a single counter on
the cpu specified by the cpu mask.  This effectively provides the
system-wide rather than per-cpu counting mentioned previously for
perfmon pmu events.  To keep the cpu mask up-to-date, the driver
implements cpu hotplug support for multiple devices, since a system
usually enumerates and registers more than one idxd device.
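
As a rough illustration of that hotplug handling, the sketch below is
not the literal patch code (the helper names, hotplug state name string,
and exact bookkeeping are illustrative); it shows a multi-instance
offline callback built on the CPUHP_AP_PERF_X86_IDXD_ONLINE state added
by this patch:

/* move counting off a dying cpu; illustrative sketch, not the driver code */
static int perfmon_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
        struct idxd_pmu *idxd_pmu;
        unsigned int target;

        idxd_pmu = hlist_entry_safe(node, typeof(*idxd_pmu), cpuhp_node);

        if (cpu != idxd_pmu->cpu)
                return 0;

        target = cpumask_any_but(cpu_online_mask, cpu);
        if (target >= nr_cpu_ids)
                return 0;       /* no other online cpu to take over */

        /* migrate active events to the new cpu and remember it */
        perf_pmu_migrate_context(&idxd_pmu->pmu, cpu, target);
        idxd_pmu->cpu = target;

        return 0;
}

/* registered once; each idxd device later adds its cpuhp_node instance */
static int perfmon_cpuhp_init(void)
{
        return cpuhp_setup_state_multi(CPUHP_AP_PERF_X86_IDXD_ONLINE,
                                       "driver/dma/idxd/perf:online",
                                       NULL, perfmon_cpu_offline);
}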

The perfmon driver implements basic perfmon hardware capability
discovery and configuration, and is initialized by the IDXD driver's
probe function.  During initialization, the driver retrieves the total
number of supported performance counters, the pmu ID, and the device
type from the idxd device, and registers itself under the Linux perf
framework.
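
For reference, the discovery step might look roughly like the sketch
below; the helper name is made up here, but the register layout and the
idxd_pmu fields are the ones added by this patch:

/* illustrative sketch of PERFCAP-based capability discovery at probe time */
static int perfmon_init_caps(struct idxd_device *idxd)
{
        union idxd_perfcap perfcap;

        perfcap.bits = ioread64(PERFCAP_REG(idxd));

        /* no counters means no perfmon support on this device */
        if (perfcap.num_perf_counter == 0)
                return -ENODEV;

        idxd->idxd_pmu->n_counters = perfcap.num_perf_counter;
        idxd->idxd_pmu->counter_width = perfcap.counter_width;
        idxd->idxd_pmu->n_event_categories = perfcap.num_event_category;
        idxd->idxd_pmu->supported_event_categories = perfcap.global_event_category;
        idxd->idxd_pmu->per_counter_caps_supported = perfcap.cap_per_counter;
        idxd->idxd_pmu->supported_filters = perfcap.filter;
        idxd->idxd_pmu->n_filters = hweight8(perfcap.filter);

        return 0;
}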

The perf userspace tool can be used to monitor single or multiple
events depending on the given configuration, as well as event groups,
which are also supported by the perfmon driver.  The user configures
events using the perf tool command-line interface by specifying the
event and corresponding event category, along with an optional set of
filters that can be used to restrict counting to specific work queues,
traffic classes, page and transfer sizes, and engines (See [1] for
specifics).

With the configuration specified by the user, the perf tool issues a
system call passing that information to the kernel, which uses it to
initialize the specified event(s).  The event(s) are opened and
started, and following termination of the perf command, they're
stopped.  At that point, the perfmon driver will read the latest count
for the event(s), calculate the difference between the latest counter
values and previously tracked counter values, and display the final
incremental count as the event count for the cycle.  An overflow
handler registered on the IDXD irq path is used to account for counter
overflows, which are signaled by an overflow interrupt.
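
A rough sketch of the delta computation and overflow accounting just
described (the helper names and the exact clearing of the overflow
status are illustrative, not necessarily what the driver does):

/* fold the new hardware count into the perf event, handling wraparound */
static void perfmon_counter_update(struct idxd_device *idxd,
                                   struct perf_event *event)
{
        int shift = 64 - idxd->idxd_pmu->counter_width;
        int cntr = event->hw.idx;
        u64 prev, now, delta;

        now = ioread64(CNTRDATA_REG(idxd, cntr));
        prev = local64_xchg(&event->hw.prev_count, now);

        /* mask the delta to the implemented counter width */
        delta = (now << shift) - (prev << shift);
        delta >>= shift;

        local64_add(delta, &event->count);
}

/* called from the IDXD irq path when the overflow interrupt fires */
void perfmon_counter_overflow(struct idxd_device *idxd)
{
        unsigned long ovfstatus;
        unsigned int i;

        ovfstatus = ioread32(OVFSTATUS_REG(idxd));

        /* update each overflowed counter, then clear its status bit */
        for_each_set_bit(i, &ovfstatus, idxd->idxd_pmu->n_counters) {
                perfmon_counter_update(idxd, idxd->idxd_pmu->event_list[i]);
                iowrite32(BIT(i), OVFSTATUS_REG(idxd));
        }
}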

Below are a couple of examples of perf usage for monitoring DSA events.

The following monitors all events in the 'engine' category.  Because
no filters are specified, this captures all engine events for the
workload, which in this case is 19 iterations of the work generated by
the kernel dmatest module.

Details describing the events can be found in Appendix D of [1],
Performance Monitoring Events, but briefly they are:

  event 0x1:  total input data processed, in 32-byte units
  event 0x2:  total data written, in 32-byte units
  event 0x4:  number of work descriptors that read the source
  event 0x8:  number of work descriptors that write the destination
  event 0x10: number of work descriptors dispatched from batch descriptors
  event 0x20: number of work descriptors dispatched from work queues

 # perf stat -e dsa0/event=0x1,event_category=0x1/,
                dsa0/event=0x2,event_category=0x1/,
                dsa0/event=0x4,event_category=0x1/,
                dsa0/event=0x8,event_category=0x1/,
                dsa0/event=0x10,event_category=0x1/,
                dsa0/event=0x20,event_category=0x1/
                  modprobe dmatest channel=dma0chan0 timeout=2000
                  iterations=19 run=1 wait=1

     Performance counter stats for 'system wide':

                 5,332      dsa0/event=0x1,event_category=0x1/
                 5,327      dsa0/event=0x2,event_category=0x1/
                    19      dsa0/event=0x4,event_category=0x1/
                    19      dsa0/event=0x8,event_category=0x1/
                     0      dsa0/event=0x10,event_category=0x1/
                    19      dsa0/event=0x20,event_category=0x1/

          21.977436186 seconds time elapsed

The command below illustrates filter usage with a simple example.  It
specifies that MEM_MOVE operations should be counted for the DSA
device dsa0 (event 0x8 corresponds to the EV_MEM_MOVE event - Number
of Memory Move Descriptors, which is part of event category 0x3 -
Operations. The detailed category and event IDs are available in
Appendix D, Performance Monitoring Events, of [1]).  In addition to
the event and event category, a number of filters are also specified
(the detailed filter values are available in Chapter 6.4 (Filter
Support) of [1]), which will restrict counting to only those events
that meet all of the filter criteria.  In this case, the filters
specify that only MEM_MOVE operations serviced by work queue wq0,
engine engine0, and traffic class tc0, with transfer sizes between 0
and 4k and page sizes between 0 and 1G, result in a counter hit;
anything else is filtered out and does not appear in the final count.
Note that filters are optional - any filter not
specified is assumed to be all ones and will pass anything.

 # perf stat -e dsa0/filter_wq=0x1,filter_tc=0x1,filter_sz=0x7,
                filter_eng=0x1,event=0x8,event_category=0x3/
                  modprobe dmatest channel=dma0chan0 timeout=2000
                  iterations=19 run=1 wait=1

     Performance counter stats for 'system wide':

       19      dsa0/filter_wq=0x1,filter_tc=0x1,filter_sz=0x7,
               filter_eng=0x1,event=0x8,event_category=0x3/

          21.865914091 seconds time elapsed

The output above shows that the workload resulted in 19 MEM_MOVE
operation events that met the filter criteria.

[1]: https://software.intel.com/content/www/us/en/develop/download/intel-data-streaming-accelerator-preliminary-architecture-specification.html

[ Based on work originally by Jing Lin. ]
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Link: https://lore.kernel.org/r/0c5080a7d541904c4ad42b848c76a1ce056ddac7.1619276133.git.zanussi@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
What:		/sys/bus/event_source/devices/dsa*/format
Date:		April 2021
KernelVersion:	5.13
Contact:	Tom Zanussi <tom.zanussi@linux.intel.com>
Description:	Read-only. Attribute group to describe the magic bits
		that go into perf_event_attr.config or
		perf_event_attr.config1 for the IDXD DSA pmu. (See also
		ABI/testing/sysfs-bus-event_source-devices-format).

		Each attribute in this group defines a bit range in
		perf_event_attr.config or perf_event_attr.config1.
		All supported attributes are listed below (See the
		IDXD DSA Spec for possible attribute values)::

		event_category = "config:0-3"    - event category
		event          = "config:4-31"   - event ID

		filter_wq      = "config1:0-31"  - workqueue filter
		filter_tc      = "config1:32-39" - traffic class filter
		filter_pgsz    = "config1:40-43" - page size filter
		filter_sz      = "config1:44-51" - transfer size filter
		filter_eng     = "config1:52-59" - engine filter

What:		/sys/bus/event_source/devices/dsa*/cpumask
Date:		April 2021
KernelVersion:	5.13
Contact:	Tom Zanussi <tom.zanussi@linux.intel.com>
Description:	Read-only. This file always returns the cpu to which the
		IDXD DSA pmu is bound for access to all dsa pmu
		performance monitoring events.
@@ -300,6 +300,18 @@ config INTEL_IDXD_SVM
	depends on PCI_PASID
	depends on PCI_IOV

config INTEL_IDXD_PERFMON
	bool "Intel Data Accelerators performance monitor support"
	depends on INTEL_IDXD
	help
	  Enable performance monitor (pmu) support for the Intel(R)
	  data accelerators present in Intel Xeon CPU. With this
	  enabled, perf can be used to monitor the DSA (Intel Data
	  Streaming Accelerator) events described in the Intel DSA
	  spec.

	  If unsure, say N.

config INTEL_IOATDMA
	tristate "Intel I/OAT DMA support"
	depends on PCI && X86_64
......
obj-$(CONFIG_INTEL_IDXD) += idxd.o
idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o
idxd-$(CONFIG_INTEL_IDXD_PERFMON) += perfmon.o
@@ -9,6 +9,8 @@
#include <linux/wait.h>
#include <linux/cdev.h>
#include <linux/idr.h>
#include <linux/pci.h>
#include <linux/perf_event.h>
#include "registers.h"
#define IDXD_DRIVER_VERSION "1.00"
@@ -29,6 +31,7 @@ enum idxd_type {
};

#define IDXD_NAME_SIZE 128
#define IDXD_PMU_EVENT_MAX 64

struct idxd_device_driver {
	struct device_driver drv;
@@ -61,6 +64,31 @@ struct idxd_group {
	int tc_b;
};

struct idxd_pmu {
	struct idxd_device *idxd;

	struct perf_event *event_list[IDXD_PMU_EVENT_MAX];
	int n_events;

	DECLARE_BITMAP(used_mask, IDXD_PMU_EVENT_MAX);

	struct pmu pmu;
	char name[IDXD_NAME_SIZE];
	int cpu;

	int n_counters;
	int counter_width;
	int n_event_categories;

	bool per_counter_caps_supported;
	unsigned long supported_event_categories;

	unsigned long supported_filters;
	int n_filters;

	struct hlist_node cpuhp_node;
};
#define IDXD_MAX_PRIORITY 0xf
enum idxd_wq_state {
@@ -241,6 +269,8 @@ struct idxd_device {
	struct work_struct work;

	int *int_handles;

	struct idxd_pmu *idxd_pmu;
};
/* IDXD software descriptor */
@@ -437,4 +467,19 @@ int idxd_cdev_get_major(struct idxd_device *idxd);
int idxd_wq_add_cdev(struct idxd_wq *wq);
void idxd_wq_del_cdev(struct idxd_wq *wq);
/* perfmon */
#if IS_ENABLED(CONFIG_INTEL_IDXD_PERFMON)
int perfmon_pmu_init(struct idxd_device *idxd);
void perfmon_pmu_remove(struct idxd_device *idxd);
void perfmon_counter_overflow(struct idxd_device *idxd);
void perfmon_init(void);
void perfmon_exit(void);
#else
static inline int perfmon_pmu_init(struct idxd_device *idxd) { return 0; }
static inline void perfmon_pmu_remove(struct idxd_device *idxd) {}
static inline void perfmon_counter_overflow(struct idxd_device *idxd) {}
static inline void perfmon_init(void) {}
static inline void perfmon_exit(void) {}
#endif
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2020 Intel Corporation. All rights rsvd. */
#ifndef _PERFMON_H_
#define _PERFMON_H_
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/sbitmap.h>
#include <linux/dmaengine.h>
#include <linux/percpu-rwsem.h>
#include <linux/wait.h>
#include <linux/cdev.h>
#include <linux/uuid.h>
#include <linux/idxd.h>
#include <linux/perf_event.h>
#include "registers.h"
static inline struct idxd_pmu *event_to_pmu(struct perf_event *event)
{
	struct idxd_pmu *idxd_pmu;
	struct pmu *pmu;

	pmu = event->pmu;
	idxd_pmu = container_of(pmu, struct idxd_pmu, pmu);

	return idxd_pmu;
}

static inline struct idxd_device *event_to_idxd(struct perf_event *event)
{
	struct idxd_pmu *idxd_pmu;
	struct pmu *pmu;

	pmu = event->pmu;
	idxd_pmu = container_of(pmu, struct idxd_pmu, pmu);

	return idxd_pmu->idxd;
}

static inline struct idxd_device *pmu_to_idxd(struct pmu *pmu)
{
	struct idxd_pmu *idxd_pmu;

	idxd_pmu = container_of(pmu, struct idxd_pmu, pmu);

	return idxd_pmu->idxd;
}
enum dsa_perf_events {
	DSA_PERF_EVENT_WQ = 0,
	DSA_PERF_EVENT_ENGINE,
	DSA_PERF_EVENT_ADDR_TRANS,
	DSA_PERF_EVENT_OP,
	DSA_PERF_EVENT_COMPL,

	DSA_PERF_EVENT_MAX,
};

enum filter_enc {
	FLT_WQ = 0,
	FLT_TC,
	FLT_PG_SZ,
	FLT_XFER_SZ,
	FLT_ENG,

	FLT_MAX,
};
#define CONFIG_RESET 0x0000000000000001
#define CNTR_RESET 0x0000000000000002
#define CNTR_ENABLE 0x0000000000000001
#define INTR_OVFL 0x0000000000000002
#define COUNTER_FREEZE 0x00000000FFFFFFFF
#define COUNTER_UNFREEZE 0x0000000000000000
#define OVERFLOW_SIZE 32
#define CNTRCFG_ENABLE BIT(0)
#define CNTRCFG_IRQ_OVERFLOW BIT(1)
#define CNTRCFG_CATEGORY_SHIFT 8
#define CNTRCFG_EVENT_SHIFT 32
#define PERFMON_TABLE_OFFSET(_idxd)				\
({								\
	typeof(_idxd) __idxd = (_idxd);				\
	((__idxd)->reg_base + (__idxd)->perfmon_offset);	\
})
#define PERFMON_REG_OFFSET(idxd, offset)			\
	(PERFMON_TABLE_OFFSET(idxd) + (offset))

#define PERFCAP_REG(idxd)	(PERFMON_REG_OFFSET(idxd, IDXD_PERFCAP_OFFSET))
#define PERFRST_REG(idxd)	(PERFMON_REG_OFFSET(idxd, IDXD_PERFRST_OFFSET))
#define OVFSTATUS_REG(idxd)	(PERFMON_REG_OFFSET(idxd, IDXD_OVFSTATUS_OFFSET))
#define PERFFRZ_REG(idxd)	(PERFMON_REG_OFFSET(idxd, IDXD_PERFFRZ_OFFSET))

#define FLTCFG_REG(idxd, cntr, flt)				\
	(PERFMON_REG_OFFSET(idxd, IDXD_FLTCFG_OFFSET) + ((cntr) * 32) + ((flt) * 4))

#define CNTRCFG_REG(idxd, cntr)					\
	(PERFMON_REG_OFFSET(idxd, IDXD_CNTRCFG_OFFSET) + ((cntr) * 8))
#define CNTRDATA_REG(idxd, cntr)				\
	(PERFMON_REG_OFFSET(idxd, IDXD_CNTRDATA_OFFSET) + ((cntr) * 8))
#define CNTRCAP_REG(idxd, cntr)					\
	(PERFMON_REG_OFFSET(idxd, IDXD_CNTRCAP_OFFSET) + ((cntr) * 8))

#define EVNTCAP_REG(idxd, category)				\
	(PERFMON_REG_OFFSET(idxd, IDXD_EVNTCAP_OFFSET) + ((category) * 8))

#define DEFINE_PERFMON_FORMAT_ATTR(_name, _format)			\
static ssize_t __perfmon_idxd_##_name##_show(struct kobject *kobj,	\
				struct kobj_attribute *attr,		\
				char *page)				\
{									\
	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
	return sprintf(page, _format "\n");				\
}									\
static struct kobj_attribute format_attr_idxd_##_name =		\
	__ATTR(_name, 0444, __perfmon_idxd_##_name##_show, NULL)
#endif
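
As a hypothetical illustration (not part of the patch), the offset
helpers above could be used to program a counter as sketched below,
using the CNTRCFG_* flags and shifts defined earlier in this header;
the helper name is made up:

/* illustrative only: enable counting of 'event' in 'category' on counter 'cntr' */
static void perfmon_program_counter(struct idxd_device *idxd, int cntr,
                                    u64 category, u64 event)
{
        u64 cntr_cfg;

        cntr_cfg = category << CNTRCFG_CATEGORY_SHIFT;
        cntr_cfg |= event << CNTRCFG_EVENT_SHIFT;
        cntr_cfg |= CNTRCFG_ENABLE | CNTRCFG_IRQ_OVERFLOW;

        iowrite64(cntr_cfg, CNTRCFG_REG(idxd, cntr));
}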
@@ -386,4 +386,112 @@ union wqcfg {
#define GRPENGCFG_OFFSET(idxd_dev, n) ((idxd_dev)->grpcfg_offset + (n) * GRPCFG_SIZE + 32)
#define GRPFLGCFG_OFFSET(idxd_dev, n) ((idxd_dev)->grpcfg_offset + (n) * GRPCFG_SIZE + 40)
/* Following are the performance monitor registers */

#define IDXD_PERFCAP_OFFSET		0x0
union idxd_perfcap {
	struct {
		u64 num_perf_counter:6;
		u64 rsvd1:2;
		u64 counter_width:8;
		u64 num_event_category:4;
		u64 global_event_category:16;
		u64 filter:8;
		u64 rsvd2:8;
		u64 cap_per_counter:1;
		u64 writeable_counter:1;
		u64 counter_freeze:1;
		u64 overflow_interrupt:1;
		u64 rsvd3:8;
	};
	u64 bits;
} __packed;
#define IDXD_EVNTCAP_OFFSET		0x80
union idxd_evntcap {
	struct {
		u64 events:28;
		u64 rsvd:36;
	};
	u64 bits;
} __packed;

struct idxd_event {
	union {
		struct {
			u32 event_category:4;
			u32 events:28;
		};
		u32 val;
	};
} __packed;

#define IDXD_CNTRCAP_OFFSET		0x800
struct idxd_cntrcap {
	union {
		struct {
			u32 counter_width:8;
			u32 rsvd:20;
			u32 num_events:4;
		};
		u32 val;
	};
	struct idxd_event events[];
} __packed;
#define IDXD_PERFRST_OFFSET		0x10
union idxd_perfrst {
	struct {
		u32 perfrst_config:1;
		u32 perfrst_counter:1;
		u32 rsvd:30;
	};
	u32 val;
} __packed;

#define IDXD_OVFSTATUS_OFFSET		0x30
#define IDXD_PERFFRZ_OFFSET		0x20
#define IDXD_CNTRCFG_OFFSET		0x100
union idxd_cntrcfg {
	struct {
		u64 enable:1;
		u64 interrupt_ovf:1;
		u64 global_freeze_ovf:1;
		u64 rsvd1:5;
		u64 event_category:4;
		u64 rsvd2:20;
		u64 events:28;
		u64 rsvd3:4;
	};
	u64 val;
} __packed;
#define IDXD_FLTCFG_OFFSET		0x300

#define IDXD_CNTRDATA_OFFSET		0x200
union idxd_cntrdata {
	struct {
		u64 event_count_value;
	};
	u64 val;
} __packed;

union event_cfg {
	struct {
		u64 event_cat:4;
		u64 event_enc:28;
	};
	u64 val;
} __packed;

union filter_cfg {
	struct {
		u64 wq:32;
		u64 tc:8;
		u64 pg_sz:4;
		u64 xfer_sz:8;
		u64 eng:8;
	};
	u64 val;
} __packed;
#endif
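
A hedged sketch (again, not part of the patch; the helper name is made
up) of how the filter layout above maps onto perf_event_attr.config1,
using the FLT_* encodings and the FLTCFG_REG() helper from perfmon.h:

/* write each config1 filter field to the counter's per-filter register */
static void perfmon_write_filters(struct idxd_device *idxd, int cntr, u64 config1)
{
        union filter_cfg flt_cfg;
        u32 flt = 0;
        int i;

        flt_cfg.val = config1;

        for (i = 0; i < FLT_MAX; i++) {
                switch (i) {
                case FLT_WQ:
                        flt = flt_cfg.wq;
                        break;
                case FLT_TC:
                        flt = flt_cfg.tc;
                        break;
                case FLT_PG_SZ:
                        flt = flt_cfg.pg_sz;
                        break;
                case FLT_XFER_SZ:
                        flt = flt_cfg.xfer_sz;
                        break;
                case FLT_ENG:
                        flt = flt_cfg.eng;
                        break;
                }

                iowrite32(flt, FLTCFG_REG(idxd, cntr, i));
        }
}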
@@ -167,6 +167,7 @@ enum cpuhp_state {
	CPUHP_AP_PERF_X86_RAPL_ONLINE,
	CPUHP_AP_PERF_X86_CQM_ONLINE,
	CPUHP_AP_PERF_X86_CSTATE_ONLINE,
	CPUHP_AP_PERF_X86_IDXD_ONLINE,
	CPUHP_AP_PERF_S390_CF_ONLINE,
	CPUHP_AP_PERF_S390_CFD_ONLINE,
	CPUHP_AP_PERF_S390_SF_ONLINE,
......