Commit 92ca7da4 authored by Alexander Shishkin, committed by Peter Zijlstra

perf/x86/intel: Fix PT PMI handling

Commit:

  ccbebba4 ("perf/x86/intel/pt: Bypass PT vs. LBR exclusivity if the core supports it")

skips the PT/LBR exclusivity check on CPUs where PT and LBRs coexist, but
also inadvertently skips the active_events bump for PT in that case, which
is a bug. If there aren't any hardware events at the same time as PT, the
PMI handler will ignore PT PMIs, as active_events reads zero in that case,
resulting in the "Uhhuh" spurious NMI warning and PT data loss.

Fix this by always increasing active_events for PT events.

Fixes: ccbebba4 ("perf/x86/intel/pt: Bypass PT vs. LBR exclusivity if the core supports it")
Reported-by: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lkml.kernel.org/r/20191210105101.77210-1-alexander.shishkin@linux.intel.com
parent ff61541c
@@ -376,7 +376,7 @@ int x86_add_exclusive(unsigned int what)
	 * LBR and BTS are still mutually exclusive.
	 */
	if (x86_pmu.lbr_pt_coexist && what == x86_lbr_exclusive_pt)
-		return 0;
+		goto out;

	if (!atomic_inc_not_zero(&x86_pmu.lbr_exclusive[what])) {
		mutex_lock(&pmc_reserve_mutex);
@@ -388,6 +388,7 @@ int x86_add_exclusive(unsigned int what)
		mutex_unlock(&pmc_reserve_mutex);
	}

+out:
	atomic_inc(&active_events);
	return 0;

@@ -398,11 +399,15 @@ int x86_add_exclusive(unsigned int what)
 void x86_del_exclusive(unsigned int what)
 {
+	atomic_dec(&active_events);
+
+	/*
+	 * See the comment in x86_add_exclusive().
+	 */
	if (x86_pmu.lbr_pt_coexist && what == x86_lbr_exclusive_pt)
		return;

	atomic_dec(&x86_pmu.lbr_exclusive[what]);
-	atomic_dec(&active_events);
 }

 int x86_setup_perfctr(struct perf_event *event)
...