Commit 2d656b0f authored by Namhyung Kim, committed by Arnaldo Carvalho de Melo

perf stat: Fix handling of unsupported cgroup events when using BPF counters

When the --for-each-cgroup option is used, perf stat fails and exits
immediately when any of the events is not supported.  This is not how
'perf stat' normally handles unsupported events.

Let's ignore the failure and proceed with the others so that the output
is similar to when BPF counters are not used:

Before:

  $ sudo ./perf stat -a --bpf-counters -e L1-icache-loads,L1-dcache-loads --for-each-cgroup system.slice,user.slice sleep 1
  Failed to open first cgroup events
  $

After, it shows output similar to when --bpf-counters isn't specified:

  $ sudo ./perf stat -a --bpf-counters -e L1-icache-loads,L1-dcache-loads --for-each-cgroup system.slice,user.slice sleep 1

   Performance counter stats for 'system wide':

     <not supported>      L1-icache-loads                  system.slice
          29,892,418      L1-dcache-loads                  system.slice
     <not supported>      L1-icache-loads                  user.slice
          52,497,220      L1-dcache-loads                  user.slice
  $
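
The change boils down to not treating a failed event open as fatal.
Below is a standalone sketch of that pattern (hypothetical types and
helper names, not the perf source itself): each event is opened in
turn, success is recorded in a per-event flag, and failures are
reported as <not supported> at print time instead of aborting the
whole run.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>

  struct event {
          const char *name;
          bool supported;
  };

  /* Stand-in for evsel__open_per_cpu(): pretend the icache event
   * is not supported on this machine and fails to open. */
  static int open_event(struct event *ev)
  {
          return strstr(ev->name, "icache") ? -1 : 0;
  }

  int main(void)
  {
          struct event events[] = {
                  { "L1-icache-loads", false },
                  { "L1-dcache-loads", false },
          };
          size_t i, n = sizeof(events) / sizeof(events[0]);

          for (i = 0; i < n; i++) {
                  /* Old behavior: any open error aborted the setup.
                   * New behavior: remember the result and move on. */
                  if (open_event(&events[i]) == 0)
                          events[i].supported = true;
          }

          for (i = 0; i < n; i++)
                  printf("%20s      %s\n",
                         events[i].supported ? "<count>" : "<not supported>",
                         events[i].name);
          return 0;
  }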

Fixes: 944138f0 ("perf stat: Enable BPF counter with --for-each-cgroup")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/r/20230104064402.1551516-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
parent fb710dde
@@ -116,27 +116,19 @@ static int bperf_load_program(struct evlist *evlist)
 
 			/* open single copy of the events w/o cgroup */
 			err = evsel__open_per_cpu(evsel, evsel->core.cpus, -1);
-			if (err) {
-				pr_err("Failed to open first cgroup events\n");
-				goto out;
-			}
+			if (err == 0)
+				evsel->supported = true;
 
 			map_fd = bpf_map__fd(skel->maps.events);
 			perf_cpu_map__for_each_cpu(cpu, j, evsel->core.cpus) {
 				int fd = FD(evsel, j);
 				__u32 idx = evsel->core.idx * total_cpus + cpu.cpu;
 
-				err = bpf_map_update_elem(map_fd, &idx, &fd,
-							  BPF_ANY);
-				if (err < 0) {
-					pr_err("Failed to update perf_event fd\n");
-					goto out;
-				}
+				bpf_map_update_elem(map_fd, &idx, &fd, BPF_ANY);
 			}
 
 			evsel->cgrp = leader_cgrp;
 		}
-		evsel->supported = true;
 
 		if (evsel->cgrp == cgrp)
			continue;
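
The patch also drops the error check around bpf_map_update_elem():
once an event may legitimately fail to open, its perf_event fd can be
invalid, so the map update for that slot is allowed to fail silently.
Likewise, evsel->supported is now set per event at open time rather
than unconditionally after the loop, which is what lets the printout
above show <not supported> for the failing events.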