Commit e55ed342 authored by Leo Yan, committed by Arnaldo Carvalho de Melo

perf arm-spe: Synthesize memory event

The memory event can deliver two benefits:

- The first benefit is that memory events give a global view of memory
  accesses. Rather than organizing events in a scattered fashion (e.g.
  a separate event for the L1 cache, the last level cache, etc.), where
  each event can only show a single memory type, memory events cover
  all memory accesses, so data accesses across memory levels can be
  shown in the same view;

- The second benefit is that generating every kind of sample introduces
  a big overhead and makes the user wait a long time for perf
  reporting. By specifying the itrace option '--itrace=M', the other
  events are filtered out and only memory events are synthesized, which
  significantly reduces the overhead of sample generation (see the
  usage example below).

This patch enables memory events for Arm SPE.
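For illustration, SPE data can be recorded and then reported with only
memory events synthesized. The commands below are only an example: the
PMU event name 'arm_spe_0//' and the '<workload>' placeholder are
assumptions and may differ on a given system.

  perf record -e arm_spe_0// -- <workload>
  perf report --itrace=M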
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <al.grant@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wei Li <liwei391@huawei.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: James Clark <james.clark@arm.com>
Link: https://lore.kernel.org/r/20210211133856.2137-5-james.clark@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
parent 54f7815e
@@ -53,6 +53,7 @@ struct arm_spe {
	u8 sample_tlb;
	u8 sample_branch;
	u8 sample_remote_access;
	u8 sample_memory;

	u64 l1d_miss_id;
	u64 l1d_access_id;
@@ -62,6 +63,7 @@ struct arm_spe {
	u64 tlb_access_id;
	u64 branch_miss_id;
	u64 remote_access_id;
	u64 memory_id;

	u64 kernel_start;
@@ -293,6 +295,18 @@ static int arm_spe__synth_branch_sample(struct arm_spe_queue *speq,
	return arm_spe_deliver_synth_event(spe, speq, event, &sample);
}

#define SPE_MEM_TYPE	(ARM_SPE_L1D_ACCESS | ARM_SPE_L1D_MISS | \
			 ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS | \
			 ARM_SPE_REMOTE_ACCESS)

static bool arm_spe__is_memory_event(enum arm_spe_sample_type type)
{
	if (type & SPE_MEM_TYPE)
		return true;

	return false;
}

static int arm_spe_sample(struct arm_spe_queue *speq)
{
	const struct arm_spe_record *record = &speq->decoder->record;
@@ -354,6 +368,12 @@ static int arm_spe_sample(struct arm_spe_queue *speq)
			return err;
	}

	if (spe->sample_memory && arm_spe__is_memory_event(record->type)) {
		err = arm_spe__synth_mem_sample(speq, spe->memory_id);
		if (err)
			return err;
	}

	return 0;
}
@@ -917,6 +937,16 @@ arm_spe_synth_events(struct arm_spe *spe, struct perf_session *session)
		id += 1;
	}

	if (spe->synth_opts.mem) {
		spe->sample_memory = true;

		err = arm_spe_synth_event(session, &attr, id);
		if (err)
			return err;
		spe->memory_id = id;
		arm_spe_set_event_name(evlist, id, "memory");
	}

	return 0;
}
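The body of arm_spe__synth_mem_sample(), called in the arm_spe_sample() hunk
above, falls outside the displayed hunks. As context only, here is a minimal
sketch of what such a helper plausibly looks like, modeled on the
arm_spe__synth_branch_sample() pattern whose tail is visible earlier in the
diff; arm_spe_prep_sample() and the virt_addr/phys_addr record fields are
assumptions, not taken from this patch.

/*
 * Sketch only: modeled on the branch-sample path above, not part of
 * this patch. arm_spe_prep_sample() and the virt_addr/phys_addr
 * fields are assumed helpers/fields of the surrounding file.
 */
static int arm_spe__synth_mem_sample(struct arm_spe_queue *speq,
				     u64 spe_events_id)
{
	struct arm_spe *spe = speq->spe;
	const struct arm_spe_record *record = &speq->decoder->record;
	union perf_event *event = speq->event_buf;
	struct perf_sample sample = { .ip = 0, };

	/* Fill the common sample fields (pid/tid, cpu, time, ip) */
	arm_spe_prep_sample(spe, speq, event, &sample);

	sample.id = spe_events_id;
	sample.stream_id = spe_events_id;
	/* Carry the data addresses from the SPE record */
	sample.addr = record->virt_addr;
	sample.phys_addr = record->phys_addr;

	/* Deliver the synthesized sample to the perf session */
	return arm_spe_deliver_synth_event(spe, speq, event, &sample);
}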