Commit 8c9ed8e1 authored by Xiao Guangrong, committed by Ingo Molnar

perf_event: Fix event group handling in __perf_event_sched_*()

Paul Mackerras says:

 "Actually, looking at this more closely, it has to be a group
 leader anyway since it's at the top level of ctx->group_list.  In
 fact I see four places where we do:

  list_for_each_entry(event, &ctx->group_list, group_entry) {
	if (event == event->group_leader)
		...

 or the equivalent, three of which appear to have been introduced
 by afedadf2 ("perf_counter: Optimize sched in/out of counters")
 back in May by Peter Z.

 As far as I can see the if () is superfluous in each case (a
 singleton event will be a group of 1 and will have its
 group_leader pointing to itself)."

 [ See: http://marc.info/?l=linux-kernel&m=125361238901442&w=2 ]

And Peter Zijlstra points out this is a bugfix:

 "The intent was to call event_sched_{in,out}() for single event
  groups because that's cheaper than group_sched_{in,out}(),
  however..

  - as you noticed, I got the condition wrong, it should have read:

      list_empty(&event->sibling_list)

  - it failed to call group_can_go_on() which deals with ->exclusive.

  - it also doesn't call hw_perf_group_sched_in() which might break
    power."

 [ See: http://marc.info/?l=linux-kernel&m=125369523318583&w=2 ]

Changelog v1->v2:

 - Fix the patch title, as suggested by Peter Zijlstra

 - Remove the comments and WARN_ON_ONCE(), as suggested by
   Peter Zijlstra

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <4ABC5A55.7000208@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 39a90a8e
@@ -1030,14 +1030,10 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
 	update_context_time(ctx);
 
 	perf_disable();
-	if (ctx->nr_active) {
-		list_for_each_entry(event, &ctx->group_list, group_entry) {
-			if (event != event->group_leader)
-				event_sched_out(event, cpuctx, ctx);
-			else
-				group_sched_out(event, cpuctx, ctx);
-		}
-	}
+	if (ctx->nr_active)
+		list_for_each_entry(event, &ctx->group_list, group_entry)
+			group_sched_out(event, cpuctx, ctx);
+
 	perf_enable();
  out:
 	spin_unlock(&ctx->lock);
@@ -1258,12 +1254,8 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -1291,15 +1283,9 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader) {
-			if (event_sched_in(event, cpuctx, ctx, cpu))
-				can_add_hw = 0;
-		} else {
-			if (group_can_go_on(event, cpuctx, can_add_hw)) {
-				if (group_sched_in(event, cpuctx, ctx, cpu))
-					can_add_hw = 0;
-			}
-		}
+		if (group_can_go_on(event, cpuctx, can_add_hw))
+			if (group_sched_in(event, cpuctx, ctx, cpu))
+				can_add_hw = 0;
 	}
 	perf_enable();
  out: