  1. 10 Feb, 2017 1 commit
    • perf: Tighten (and fix) the grouping condition · ac74acf2
      Peter Zijlstra authored
      commit c3c87e77 upstream.
      
      The fix from 9fc81d87 ("perf: Fix events installation during
      moving group") was incomplete in that it failed to recognise that
      creating a group with events for different CPUs is semantically
      broken -- they cannot be co-scheduled.
      
      Furthermore, it leads to real breakage: when we create an event
      for CPU Y and then migrate it to form a group on CPU X, the code gets
      confused about where the counter is programmed -- triggered in practice
      by me via the perf fuzzer as well.
      
      Fix this by tightening the rules for creating groups. Only allow
      grouping of counters that can be co-scheduled in the same context.
      This means for the same task and/or the same cpu.
      
      Fixes: 9fc81d87 ("perf: Fix events installation during moving group")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20150123125834.090683288@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      ac74acf2
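
      A minimal standalone sketch of the tightened rule, using a hypothetical
      struct and helper (the actual check sits in the perf_event_open() syscall
      path): grouping is refused unless both events target the same task and
      the same cpu.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical, simplified view of an event's scheduling target. */
        struct ev {
            int pid;    /* task the event counts for */
            int cpu;    /* cpu the event counts on   */
        };

        /* Events may only be grouped when they can be co-scheduled in the
         * same context, i.e. same task and same cpu. */
        static bool can_group(const struct ev *leader, const struct ev *member)
        {
            return leader->pid == member->pid && leader->cpu == member->cpu;
        }

        int main(void)
        {
            struct ev leader    = { .pid = 1234, .cpu = 0 };
            struct ev same_cpu  = { .pid = 1234, .cpu = 0 };
            struct ev other_cpu = { .pid = 1234, .cpu = 1 };

            printf("same cpu:  %s\n", can_group(&leader, &same_cpu)  ? "ok" : "rejected");
            printf("other cpu: %s\n", can_group(&leader, &other_cpu) ? "ok" : "rejected");
            return 0;
        }
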
  2. 25 Feb, 2016 2 commits
    • ptrace: use fsuid, fsgid, effective creds for fs access checks · 414f6fbc
      Jann Horn authored
      commit caaee623 upstream.
      
      By checking the effective credentials instead of the real UID / permitted
      capabilities, ensure that the calling process actually intended to use its
      credentials.
      
      To ensure that all ptrace checks use the correct caller credentials (e.g.
      in case out-of-tree code or newly added code omits the PTRACE_MODE_*CREDS
      flag), use two new flags and require one of them to be set.
      
      The problem was that when a privileged task had temporarily dropped its
      privileges, e.g.  by calling setreuid(0, user_uid), with the intent to
      perform following syscalls with the credentials of a user, it still passed
      ptrace access checks that the user would not be able to pass.
      
      While an attacker should not be able to convince the privileged task to
      perform a ptrace() syscall, this is a problem because the ptrace access
      check is reused for things in procfs.
      
      In particular, the following somewhat interesting procfs entries only rely
      on ptrace access checks:
      
       /proc/$pid/stat - uses the check for determining whether pointers
           should be visible, useful for bypassing ASLR
       /proc/$pid/maps - also useful for bypassing ASLR
       /proc/$pid/cwd - useful for gaining access to restricted
           directories that contain files with lax permissions, e.g. in
           this scenario:
           lrwxrwxrwx root root /proc/13020/cwd -> /root/foobar
           drwx------ root root /root
           drwxr-xr-x root root /root/foobar
           -rw-r--r-- root root /root/foobar/secret
      
      Therefore, on a system where a root-owned mode 6755 binary changes its
      effective credentials as described and then dumps a user-specified file,
      this could be used by an attacker to reveal the memory layout of root's
      processes or reveal the contents of files he is not allowed to access
      (through /proc/$pid/cwd).
      
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: Jann Horn <jann@thejh.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Casey Schaufler <casey@schaufler-ca.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morris <james.l.morris@oracle.com>
      Cc: "Serge E. Hallyn" <serge.hallyn@ubuntu.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      414f6fbc
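
      A small userspace illustration of the scenario above, under stated
      assumptions (run it as root, pass the PID of a process owned by another
      user plus an unprivileged UID to switch to; the program is illustrative
      and not part of the patch). With effective/fs credentials enforced, the
      procfs access below is denied after the privilege drop, whereas a
      pre-fix kernel checking the real UID would let it through.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <errno.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            char path[64], target[4096];
            ssize_t n;

            if (argc != 3) {
                fprintf(stderr, "usage: %s <pid> <unprivileged-uid>\n", argv[0]);
                return 1;
            }
            snprintf(path, sizeof(path), "/proc/%s/cwd", argv[1]);

            /* Temporarily drop effective privileges, as a setuid helper might. */
            if (seteuid((uid_t)atoi(argv[2])) != 0) {
                perror("seteuid");
                return 1;
            }

            n = readlink(path, target, sizeof(target) - 1);
            if (n < 0) {
                printf("readlink(%s) denied: %s (fs/effective creds enforced)\n",
                       path, strerror(errno));
            } else {
                target[n] = '\0';
                printf("readlink(%s) -> %s\n", path, target);
            }
            return 0;
        }
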
    • perf: Fix inherited events vs. tracepoint filters · 05c55825
      Peter Zijlstra authored
      commit b71b437e upstream.
      
      Arnaldo reported that tracepoint filters seem to misbehave (ie. not
      apply) on inherited events.
      
      The fix is obvious; filters are only set on the actual (parent)
      event, use the normal pattern of using this parent event for filters.
      This is safe because each child event has a reference to it.
      Reported-by: Arnaldo Carvalho de Melo <acme@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/r/20151102095051.GN17308@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      05c55825
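
      The pattern the fix relies on can be shown with a tiny standalone model
      (hypothetical types, not perf's real structures): a child event always
      resolves its filter through the parent event, which is safe because the
      child holds a reference to it.

        #include <stdio.h>

        /* Hypothetical, stripped-down stand-ins for perf's event/filter objects. */
        struct filter { const char *expr; };

        struct event {
            struct event *parent;      /* NULL for the original (parent) event */
            struct filter *filter;     /* only ever set on the parent */
        };

        /* Filters are only attached to the parent event, so always look there. */
        static struct filter *effective_filter(struct event *e)
        {
            struct event *real = e->parent ? e->parent : e;
            return real->filter;
        }

        int main(void)
        {
            struct filter f = { .expr = "prev_pid == 0" };
            struct event parent = { .parent = NULL, .filter = &f };
            struct event child  = { .parent = &parent, .filter = NULL };

            printf("child sees filter: %s\n", effective_filter(&child)->expr);
            return 0;
        }
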
  3. 13 Sep, 2015 1 commit
  4. 13 Apr, 2015 1 commit
    • perf: Fix irq_work 'tail' recursion · a49a0c95
      Peter Zijlstra authored
      commit d525211f upstream.
      
      Vince reported a watchdog lockup like:
      
      	[<ffffffff8115e114>] perf_tp_event+0xc4/0x210
      	[<ffffffff810b4f8a>] perf_trace_lock+0x12a/0x160
      	[<ffffffff810b7f10>] lock_release+0x130/0x260
      	[<ffffffff816c7474>] _raw_spin_unlock_irqrestore+0x24/0x40
      	[<ffffffff8107bb4d>] do_send_sig_info+0x5d/0x80
      	[<ffffffff811f69df>] send_sigio_to_task+0x12f/0x1a0
      	[<ffffffff811f71ce>] send_sigio+0xae/0x100
      	[<ffffffff811f72b7>] kill_fasync+0x97/0xf0
      	[<ffffffff8115d0b4>] perf_event_wakeup+0xd4/0xf0
      	[<ffffffff8115d103>] perf_pending_event+0x33/0x60
      	[<ffffffff8114e3fc>] irq_work_run_list+0x4c/0x80
      	[<ffffffff8114e448>] irq_work_run+0x18/0x40
      	[<ffffffff810196af>] smp_trace_irq_work_interrupt+0x3f/0xc0
      	[<ffffffff816c99bd>] trace_irq_work_interrupt+0x6d/0x80
      
      Which is caused by an irq_work generating new irq_work and therefore
      not allowing forward progress.
      
      This happens because processing the perf irq_work triggers another
      perf event (tracepoint stuff) which in turn generates an irq_work ad
      infinitum.
      
      Avoid this by raising the recursion counter in the irq_work -- which
      effectively disables all software events (including tracepoints) from
      actually triggering again.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/20150219170311.GH21418@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a49a0c95
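
      A self-contained model of the guard described above (all names are
      hypothetical): a recursion counter is raised for the duration of the
      irq_work handler, so any software event raised from inside it is dropped
      instead of queueing yet another irq_work.

        #include <stdio.h>

        /* Models this CPU's software-event recursion counter. */
        static int swevent_recursion;
        static int dropped;

        static void raise_swevent(const char *what)
        {
            if (swevent_recursion) {
                /* A software event fired from inside the irq_work handler:
                 * drop it instead of generating yet another irq_work. */
                dropped++;
                return;
            }
            printf("swevent delivered: %s\n", what);
        }

        static void perf_pending_irq_work(void)
        {
            swevent_recursion++;          /* disable swevents for the handler */
            raise_swevent("tracepoint fired from the kill_fasync path");
            swevent_recursion--;
        }

        int main(void)
        {
            raise_swevent("normal context");   /* delivered */
            perf_pending_irq_work();           /* inner event is dropped */
            printf("dropped while in irq_work: %d\n", dropped);
            return 0;
        }
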
  5. 16 Jan, 2015 1 commit
    • perf: Fix events installation during moving group · d525563b
      Jiri Olsa authored
      commit 9fc81d87 upstream.
      
      We allow the PMU driver to change the cpu on which an event
      should be installed. This was introduced in:
      
        e2d37cd2 ("perf: Allow the PMU driver to choose the CPU on which to install events")
      
      That patch also forces all the group members to follow
      the currently opened event's cpu if the group happens
      to be moved.
      
      This and the change of event->cpu in perf_install_in_context()
      function introduced in:
      
        0cda4c02 ("perf: Introduce perf_pmu_migrate_context()")
      
      forces group members to change their event->cpu,
      if the currently-opened-event's PMU changed the cpu
      and there is a group move.
      
      The above behaviour causes a problem for breakpoint events,
      which use event->cpu to touch cpu-specific data for
      breakpoint accounting. By changing event->cpu, some
      breakpoint slots were wrongly accounted for a given
      cpu.
      
      Vince's perf fuzzer hit this issue and caused the following
      WARN on my setup:
      
         WARNING: CPU: 0 PID: 20214 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0x142/0x150()
         Can't find any breakpoint slot
         [...]
      
      This patch changes the group moving code to keep the event's
      original cpu.
      Reported-by: Vince Weaver <vince@deater.net>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vince@deater.net>
      Cc: Yan, Zheng <zheng.z.yan@intel.com>
      Link: http://lkml.kernel.org/r/1418243031-20367-3-git-send-email-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d525563b
  6. 21 Nov, 2014 1 commit
  7. 09 Oct, 2014 1 commit
  8. 05 Oct, 2014 1 commit
  9. 11 Jun, 2014 6 commits
    • perf: Enforce 1 as lower limit for perf_event_max_sample_rate · 02f98e3e
      Knut Petersen authored
      commit 723478c8 upstream.
      
      /proc/sys/kernel/perf_event_max_sample_rate will accept
      negative values as well as 0.
      
      Negative values are unreasonable, and 0 causes a
      divide by zero exception in perf_proc_update_handler.
      
      This patch enforces a lower limit of 1.
      Signed-off-by: Knut Petersen <Knut_Petersen@t-online.de>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/5242DB0C.4070005@t-online.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Weng Meiling <wengmeiling.weng@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      02f98e3e
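
      A standalone model of the bound this patch introduces (in the kernel it
      is the sysctl's minimum bound; the helper below is purely illustrative):
      writes of 0 or negative values are refused with -EINVAL.

        #include <stdio.h>
        #include <errno.h>

        static const int min_sample_rate = 1;   /* new lower bound from this patch */

        /* Userspace model of the bounds check applied to writes of
         * /proc/sys/kernel/perf_event_max_sample_rate after this change. */
        static int set_max_sample_rate(int *sysctl_val, int new_val)
        {
            if (new_val < min_sample_rate)
                return -EINVAL;                 /* 0 and negatives are refused */
            *sysctl_val = new_val;
            return 0;
        }

        int main(void)
        {
            int rate = 100000;

            printf("write 0:     %d\n", set_max_sample_rate(&rate, 0));    /* -EINVAL */
            printf("write -5:    %d\n", set_max_sample_rate(&rate, -5));   /* -EINVAL */
            printf("write 50000: %d (rate=%d)\n",
                   set_max_sample_rate(&rate, 50000), rate);
            return 0;
        }
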
    • perf: Fix interrupt handler timing harness · a4a108e8
      Stephane Eranian authored
      commit e5302920 upstream.
      
      This patch fixes a serious bug in:
      
        14c63f17 perf: Drop sample rate when sampling is too slow
      
      There was a misunderstanding of the API of the do_div()
      macro. It returns the remainder of the division, which was
      not what the function expected, leading to the interrupt
      latency watchdog being disabled.
      
      This patch also removes a duplicate assignment in
      perf_sample_event_took().
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Cc: dave.hansen@linux.intel.com
      Cc: ak@linux.intel.com
      Cc: jolsa@redhat.com
      Link: http://lkml.kernel.org/r/20130704223010.GA30625@quad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Weng Meiling <wengmeiling.weng@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a4a108e8
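
      For reference, a userspace model of the do_div() semantics that were
      misread: the macro divides its first argument in place and returns the
      remainder, not the quotient.

        #include <stdio.h>
        #include <stdint.h>

        /* Userspace model of the kernel's do_div(): divides 'n' in place and
         * returns the remainder -- which is exactly what the original code
         * got wrong by treating the return value as the quotient. */
        static uint32_t do_div_model(uint64_t *n, uint32_t base)
        {
            uint32_t rem = (uint32_t)(*n % base);
            *n /= base;          /* quotient lands in *n */
            return rem;          /* return value is the remainder */
        }

        int main(void)
        {
            uint64_t ns = 12345678;      /* e.g. time spent in the NMI handler */
            uint32_t rem = do_div_model(&ns, 1000);

            printf("quotient=%llu remainder=%u\n", (unsigned long long)ns, rem);
            /* Using 'rem' where the quotient was intended silently disabled
             * the interrupt-latency watchdog described above. */
            return 0;
        }
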
    • perf: Drop sample rate when sampling is too slow · 3cd49fd7
      Dave Hansen authored
      commit 14c63f17 upstream.
      
      This patch keeps track of how long perf's NMI handler is taking,
      and also calculates how many samples perf can take a second.  If
      the sample length times the expected max number of samples
      exceeds a configurable threshold, it drops the sample rate.
      
      This way, we don't have a runaway sampling process eating up the
      CPU.
      
      This patch can tend to drop the sample rate down to a level where
      perf doesn't work very well.  *BUT* the alternative is that my
      system hangs because it spends all of its time handling NMIs.
      
      I'll take a busted performance tool over an entire system that's
      busted and undebuggable any day.
      
      BTW, my suspicion is that there's still an underlying bug here.
      Using the HPET instead of the TSC is definitely a contributing
      factor, but I suspect there are some other things going on.
      But, I can't go dig down on a bug like that with my machine
      hanging all the time.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus@samba.org
      Cc: acme@ghostprotocols.net
      Cc: Dave Hansen <dave@sr71.net>
      [ Prettified it a bit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Weng Meiling <wengmeiling.weng@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3cd49fd7
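
      A rough standalone model of the throttle idea (constants and names are
      invented for illustration and do not match the kernel's exact variables):
      if the average handler cost times the samples expected per tick exceeds
      the allowed CPU budget, the sampling rate is lowered.

        #include <stdio.h>

        #define TICK_NS          1000000ULL  /* 1 ms tick, for illustration */
        #define ALLOWED_CPU_PCT  25ULL       /* configurable threshold      */

        static unsigned long long max_sample_rate = 100000; /* samples per second */
        static unsigned long long avg_sample_ns;             /* running average    */

        /* Called after each NMI with how long the handler took. */
        static void sample_event_took(unsigned long long sample_ns)
        {
            unsigned long long samples_per_tick, budget_ns;

            /* crude running average of the handler cost */
            avg_sample_ns = avg_sample_ns ? (avg_sample_ns + sample_ns) / 2 : sample_ns;

            samples_per_tick = max_sample_rate * TICK_NS / 1000000000ULL;
            budget_ns = TICK_NS * ALLOWED_CPU_PCT / 100;

            if (avg_sample_ns * samples_per_tick > budget_ns) {
                max_sample_rate /= 2;        /* drop the sample rate */
                printf("NMI handler too slow (%llu ns avg), rate now %llu Hz\n",
                       avg_sample_ns, max_sample_rate);
            }
        }

        int main(void)
        {
            /* Simulate a handler that keeps taking ~10 us per sample. */
            for (int i = 0; i < 5; i++)
                sample_event_took(10000);
            return 0;
        }
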
    • perf: Fix race in removing an event · 54b3f8df
      Peter Zijlstra authored
      commit 46ce0fe9 upstream.
      
      When removing a (sibling) event we do:
      
      	raw_spin_lock_irq(&ctx->lock);
      	perf_group_detach(event);
      	raw_spin_unlock_irq(&ctx->lock);
      
      	<hole>
      
      	perf_remove_from_context(event);
      		raw_spin_lock_irq(&ctx->lock);
      		...
      		raw_spin_unlock_irq(&ctx->lock);
      
      Now, assuming the event is a sibling, it will be 'unreachable' for
      things like ctx_sched_out() because that iterates the
      groups->siblings, and we just unhooked the sibling.
      
      So, if during <hole> we get ctx_sched_out(), it will miss the event
      and not call event_sched_out() on it, leaving it programmed on the
      PMU.
      
      The subsequent perf_remove_from_context() call will find the ctx is
      inactive and only call list_del_event() to remove the event from all
      other lists.
      
      Hereafter we can proceed to free the event; while still programmed!
      
      Close this hole by moving perf_group_detach() inside the same
      ctx->lock region(s) perf_remove_from_context() has.
      
      The condition on inherited events only in __perf_event_exit_task() is
      likely complete crap because non-inherited events are part of groups
      too and we're tearing down just the same. But leave that for another
      patch.
      
      Most-likely-Fixes: e03a9a55 ("perf: Change close() semantics for group events")
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Vince Weaver <vincent.weaver@maine.edu>
      Much-staring-at-traces-by: Vince Weaver <vincent.weaver@maine.edu>
      Much-staring-at-traces-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140505093124.GN17778@laptop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      54b3f8df
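
      A compressed sketch of the before/after locking described above, with a
      pthread mutex standing in for ctx->lock and stub functions for the two
      steps (compile with -pthread; everything here is a model, not the kernel
      code).

        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

        static void perf_group_detach(const char *ev) { printf("detach %s\n", ev); }
        static void list_del_event(const char *ev)    { printf("del %s\n", ev); }

        /* Before the fix: detach and removal ran in two separate ctx->lock
         * sections, leaving a window (<hole>) in which ctx_sched_out() could
         * run, miss the unhooked sibling and leave it programmed on the PMU. */
        static void remove_event_buggy(const char *ev)
        {
            pthread_mutex_lock(&ctx_lock);
            perf_group_detach(ev);
            pthread_mutex_unlock(&ctx_lock);

            /* <hole>: a concurrent ctx_sched_out() no longer sees 'ev' */

            pthread_mutex_lock(&ctx_lock);
            list_del_event(ev);
            pthread_mutex_unlock(&ctx_lock);
        }

        /* After the fix: both steps happen under one critical section, so
         * there is no window where the event is unhooked yet still live. */
        static void remove_event_fixed(const char *ev)
        {
            pthread_mutex_lock(&ctx_lock);
            perf_group_detach(ev);
            list_del_event(ev);
            pthread_mutex_unlock(&ctx_lock);
        }

        int main(void)
        {
            remove_event_buggy("sibling-A");
            remove_event_fixed("sibling-B");
            return 0;
        }
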
    • perf: Limit perf_event_attr::sample_period to 63 bits · 95090e8a
      Peter Zijlstra authored
      commit 0819b2e3 upstream.
      
      Vince reported that using a large sample_period (one with bit 63 set)
      results in wreckage since, while the sample_period is fundamentally
      unsigned (negative periods don't make sense), the way we implement
      things very much relies on signed logic.
      
      So limit sample_period to 63 bits to avoid tripping over this.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-p25fhunibl4y3qi0zuqmyf4b@git.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      95090e8a
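
      A small userspace probe of the new limit (assumptions: a Linux box with
      perf available; on a locked-down system the call may fail for permission
      reasons instead): a sample_period with bit 63 set should be rejected
      with EINVAL by a fixed kernel.

        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <string.h>
        #include <stdio.h>
        #include <errno.h>

        int main(void)
        {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_SOFTWARE;
            attr.config = PERF_COUNT_SW_CPU_CLOCK;
            attr.exclude_kernel = 1;            /* avoid needing extra privileges */
            attr.sample_period = 1ULL << 63;    /* bit 63 set: negative if signed */

            int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0)
                printf("perf_event_open rejected the period: %s\n", strerror(errno));
            else
                printf("accepted (kernel without the 63-bit limit)\n");
            return 0;
        }
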
    • perf: Prevent false warning in perf_swevent_add · 26616604
      Jiri Olsa authored
      commit 39af6b16 upstream.
      
      The perf cpu offline callback takes down all cpu context
      events and releases swhash->swevent_hlist.
      
      This could race with a task-context software event being
      scheduled on this cpu via perf_swevent_add while the cpu hotplug
      code has already cleaned up the event's data.
      
      The race happens in the gap between the cpu notifier code
      and the cpu being actually taken down. Note that only cpu
      ctx events are terminated in the perf cpu hotplug code.
      
      It's easily reproduced with:
        $ perf record -e faults perf bench sched pipe
      
      while putting one of the cpus offline:
        # echo 0 > /sys/devices/system/cpu/cpu1/online
      
      The console emits the following warning:
        WARNING: CPU: 1 PID: 2845 at kernel/events/core.c:5672 perf_swevent_add+0x18d/0x1a0()
        Modules linked in:
        CPU: 1 PID: 2845 Comm: sched-pipe Tainted: G        W    3.14.0+ #256
        Hardware name: Intel Corporation Montevina platform/To be filled by O.E.M., BIOS AMVACRB1.86C.0066.B00.0805070703 05/07/2008
         0000000000000009 ffff880077233ab8 ffffffff81665a23 0000000000200005
         0000000000000000 ffff880077233af8 ffffffff8104732c 0000000000000046
         ffff88007467c800 0000000000000002 ffff88007a9cf2a0 0000000000000001
        Call Trace:
         [<ffffffff81665a23>] dump_stack+0x4f/0x7c
         [<ffffffff8104732c>] warn_slowpath_common+0x8c/0xc0
         [<ffffffff8104737a>] warn_slowpath_null+0x1a/0x20
         [<ffffffff8110fb3d>] perf_swevent_add+0x18d/0x1a0
         [<ffffffff811162ae>] event_sched_in.isra.75+0x9e/0x1f0
         [<ffffffff8111646a>] group_sched_in+0x6a/0x1f0
         [<ffffffff81083dd5>] ? sched_clock_local+0x25/0xa0
         [<ffffffff811167e6>] ctx_sched_in+0x1f6/0x450
         [<ffffffff8111757b>] perf_event_sched_in+0x6b/0xa0
         [<ffffffff81117a4b>] perf_event_context_sched_in+0x7b/0xc0
         [<ffffffff81117ece>] __perf_event_task_sched_in+0x43e/0x460
         [<ffffffff81096f1e>] ? put_lock_stats.isra.18+0xe/0x30
         [<ffffffff8107b3c8>] finish_task_switch+0xb8/0x100
         [<ffffffff8166a7de>] __schedule+0x30e/0xad0
         [<ffffffff81172dd2>] ? pipe_read+0x3e2/0x560
         [<ffffffff8166b45e>] ? preempt_schedule_irq+0x3e/0x70
         [<ffffffff8166b45e>] ? preempt_schedule_irq+0x3e/0x70
         [<ffffffff8166b464>] preempt_schedule_irq+0x44/0x70
         [<ffffffff816707f0>] retint_kernel+0x20/0x30
         [<ffffffff8109e60a>] ? lockdep_sys_exit+0x1a/0x90
         [<ffffffff812a4234>] lockdep_sys_exit_thunk+0x35/0x67
         [<ffffffff81679321>] ? sysret_check+0x5/0x56
      
      Fix this by tracking the cpu hotplug state and displaying
      the WARN only if the current cpu is initialized properly.
      
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1396861448-10097-1-git-send-email-jolsa@redhat.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      26616604
  10. 31 May, 2014 1 commit
  11. 07 Mar, 2014 1 commit
  12. 25 Jul, 2013 3 commits
    • perf: Fix perf_lock_task_context() vs RCU · 65e303d7
      Peter Zijlstra authored
      commit 058ebd0e upstream.
      
      Jiri managed to trigger this warning:
      
       [] ======================================================
       [] [ INFO: possible circular locking dependency detected ]
       [] 3.10.0+ #228 Tainted: G        W
       [] -------------------------------------------------------
       [] p/6613 is trying to acquire lock:
       []  (rcu_node_0){..-...}, at: [<ffffffff810ca797>] rcu_read_unlock_special+0xa7/0x250
       []
       [] but task is already holding lock:
       []  (&ctx->lock){-.-...}, at: [<ffffffff810f2879>] perf_lock_task_context+0xd9/0x2c0
       []
       [] which lock already depends on the new lock.
       []
       [] the existing dependency chain (in reverse order) is:
       []
       [] -> #4 (&ctx->lock){-.-...}:
       [] -> #3 (&rq->lock){-.-.-.}:
       [] -> #2 (&p->pi_lock){-.-.-.}:
       [] -> #1 (&rnp->nocb_gp_wq[1]){......}:
       [] -> #0 (rcu_node_0){..-...}:
      
      Paul was quick to explain that due to preemptible RCU we cannot call
      rcu_read_unlock() while holding scheduler (or nested) locks when part
      of the read side critical section was preemptible.
      
      Therefore solve it by making the entire RCU read side non-preemptible.
      
      Also pull out the retry from under the non-preempt to play nice with RT.
      Reported-by: Jiri Olsa <jolsa@redhat.com>
      Helped-out-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      65e303d7
    • perf: Remove WARN_ON_ONCE() check in __perf_event_enable() for valid scenario · b2412679
      Jiri Olsa authored
      commit 06f41796 upstream.
      
      The '!ctx->is_active' check has a valid scenario, so
      there's no need for the warning.
      
      The reason is that there's a time window between the
      'ctx->is_active' check in the perf_event_enable() function
      and the __perf_event_enable() function having:
      
        - IRQs on
        - ctx->lock unlocked
      
      where the task could be killed and 'ctx' deactivated by
      perf_event_exit_task(), ending up with the warning below.
      
      So remove the WARN_ON_ONCE() check and add comments to
      explain it all.
      
      This addresses the following warning reported by Vince Weaver:
      
      [  324.983534] ------------[ cut here ]------------
      [  324.984420] WARNING: at kernel/events/core.c:1953 __perf_event_enable+0x187/0x190()
      [  324.984420] Modules linked in:
      [  324.984420] CPU: 19 PID: 2715 Comm: nmi_bug_snb Not tainted 3.10.0+ #246
      [  324.984420] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
      [  324.984420]  0000000000000009 ffff88043fce3ec8 ffffffff8160ea0b ffff88043fce3f00
      [  324.984420]  ffffffff81080ff0 ffff8802314fdc00 ffff880231a8f800 ffff88043fcf7860
      [  324.984420]  0000000000000286 ffff880231a8f800 ffff88043fce3f10 ffffffff8108103a
      [  324.984420] Call Trace:
      [  324.984420]  <IRQ>  [<ffffffff8160ea0b>] dump_stack+0x19/0x1b
      [  324.984420]  [<ffffffff81080ff0>] warn_slowpath_common+0x70/0xa0
      [  324.984420]  [<ffffffff8108103a>] warn_slowpath_null+0x1a/0x20
      [  324.984420]  [<ffffffff81134437>] __perf_event_enable+0x187/0x190
      [  324.984420]  [<ffffffff81130030>] remote_function+0x40/0x50
      [  324.984420]  [<ffffffff810e51de>] generic_smp_call_function_single_interrupt+0xbe/0x130
      [  324.984420]  [<ffffffff81066a47>] smp_call_function_single_interrupt+0x27/0x40
      [  324.984420]  [<ffffffff8161fd2f>] call_function_single_interrupt+0x6f/0x80
      [  324.984420]  <EOI>  [<ffffffff816161a1>] ? _raw_spin_unlock_irqrestore+0x41/0x70
      [  324.984420]  [<ffffffff8113799d>] perf_event_exit_task+0x14d/0x210
      [  324.984420]  [<ffffffff810acd04>] ? switch_task_namespaces+0x24/0x60
      [  324.984420]  [<ffffffff81086946>] do_exit+0x2b6/0xa40
      [  324.984420]  [<ffffffff8161615c>] ? _raw_spin_unlock_irq+0x2c/0x30
      [  324.984420]  [<ffffffff81087279>] do_group_exit+0x49/0xc0
      [  324.984420]  [<ffffffff81096854>] get_signal_to_deliver+0x254/0x620
      [  324.984420]  [<ffffffff81043057>] do_signal+0x57/0x5a0
      [  324.984420]  [<ffffffff8161a164>] ? __do_page_fault+0x2a4/0x4e0
      [  324.984420]  [<ffffffff8161665c>] ? retint_restore_args+0xe/0xe
      [  324.984420]  [<ffffffff816166cd>] ? retint_signal+0x11/0x84
      [  324.984420]  [<ffffffff81043605>] do_notify_resume+0x65/0x80
      [  324.984420]  [<ffffffff81616702>] retint_signal+0x46/0x84
      [  324.984420] ---[ end trace 442ec2f04db3771a ]---
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1373384651-6109-2-git-send-email-jolsa@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2412679
    • perf: Clone child context from parent context pmu · f38bac3d
      Jiri Olsa authored
      commit 734df5ab upstream.
      
      Currently when the child context for inherited events is
      created, it's based on the pmu object of the first event
      of the parent context.
      
      This is wrong for the following scenario:
      
        - HW context having HW and SW event
        - HW event got removed (closed)
        - SW event stays in HW context as the only event
          and its pmu is used to clone the child context
      
      The issue starts when the cpu context object is touched
      based on the pmu context object (__get_cpu_context). In
      this case the HW context will work with the SW cpu context,
      ending up with the WARN below.
      
      Fix this by using the parent context's pmu object when
      cloning the child context.
      
      Addresses the following warning reported by Vince Weaver:
      
      [ 2716.472065] ------------[ cut here ]------------
      [ 2716.476035] WARNING: at kernel/events/core.c:2122 task_ctx_sched_out+0x3c/0x)
      [ 2716.476035] Modules linked in: nfsd auth_rpcgss oid_registry nfs_acl nfs locn
      [ 2716.476035] CPU: 0 PID: 3164 Comm: perf_fuzzer Not tainted 3.10.0-rc4 #2
      [ 2716.476035] Hardware name: AOpen   DE7000/nMCP7ALPx-DE R1.06 Oct.19.2012, BI2
      [ 2716.476035]  0000000000000000 ffffffff8102e215 0000000000000000 ffff88011fc18
      [ 2716.476035]  ffff8801175557f0 0000000000000000 ffff880119fda88c ffffffff810ad
      [ 2716.476035]  ffff880119fda880 ffffffff810af02a 0000000000000009 ffff880117550
      [ 2716.476035] Call Trace:
      [ 2716.476035]  [<ffffffff8102e215>] ? warn_slowpath_common+0x5b/0x70
      [ 2716.476035]  [<ffffffff810ab2bd>] ? task_ctx_sched_out+0x3c/0x5f
      [ 2716.476035]  [<ffffffff810af02a>] ? perf_event_exit_task+0xbf/0x194
      [ 2716.476035]  [<ffffffff81032a37>] ? do_exit+0x3e7/0x90c
      [ 2716.476035]  [<ffffffff810cd5ab>] ? __do_fault+0x359/0x394
      [ 2716.476035]  [<ffffffff81032fe6>] ? do_group_exit+0x66/0x98
      [ 2716.476035]  [<ffffffff8103dbcd>] ? get_signal_to_deliver+0x479/0x4ad
      [ 2716.476035]  [<ffffffff810ac05c>] ? __perf_event_task_sched_out+0x230/0x2d1
      [ 2716.476035]  [<ffffffff8100205d>] ? do_signal+0x3c/0x432
      [ 2716.476035]  [<ffffffff810abbf9>] ? ctx_sched_in+0x43/0x141
      [ 2716.476035]  [<ffffffff810ac2ca>] ? perf_event_context_sched_in+0x7a/0x90
      [ 2716.476035]  [<ffffffff810ac311>] ? __perf_event_task_sched_in+0x31/0x118
      [ 2716.476035]  [<ffffffff81050dd9>] ? mmdrop+0xd/0x1c
      [ 2716.476035]  [<ffffffff81051a39>] ? finish_task_switch+0x7d/0xa6
      [ 2716.476035]  [<ffffffff81002473>] ? do_notify_resume+0x20/0x5d
      [ 2716.476035]  [<ffffffff813654f5>] ? retint_signal+0x3d/0x78
      [ 2716.476035] ---[ end trace 827178d8a5966c3d ]---
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1373384651-6109-1-git-send-email-jolsa@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f38bac3d
  13. 19 Jun, 2013 1 commit
  14. 28 May, 2013 1 commit
    • perf: Fix perf mmap bugs · 26cb63ad
      Peter Zijlstra authored
      Vince reported a problem found by his perf specific trinity
      fuzzer.
      
      Al noticed 2 problems with perf's mmap():
      
       - it has issues against fork() since we use vma->vm_mm for accounting.
       - it has an rb refcount leak on double mmap().
      
      We fix the issues against fork() by using VM_DONTCOPY; I don't
      think there's code out there that uses this; we didn't hear
      about weird accounting problems/crashes. If we do need this to
      work, the previously proposed VM_PINNED could make this work.
      
      Aside from the rb reference leak spotted by Al, Vince's example
      prog was indeed doing a double mmap() through the use of
      perf_event_set_output().
      
      This exposes another problem, since we now have 2 events with
      one buffer, the accounting gets screwy because we account per
      event. Fix this by making the buffer responsible for its own
      accounting.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Link: http://lkml.kernel.org/r/20130528085548.GA12193@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      26cb63ad
  15. 07 May, 2013 2 commits
  16. 22 Apr, 2013 2 commits
    • perf: New helper to prevent full dynticks CPUs from stopping tick · 026249ef
      Frederic Weisbecker authored
      Provide a new helper that helps full dynticks CPUs avoid
      stopping their tick in case there are events in the local
      rotation list.
      
      This way we make sure that perf_event_task_tick() is serviced
      on demand.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      026249ef
    • perf: Kick full dynticks CPU if events rotation is needed · 12351ef8
      Frederic Weisbecker authored
      Kick the current CPU's tick by sending it a self IPI when
      an event is queued on the rotation list and it is the first
      element inserted. This makes sure that perf_event_task_tick()
      works on full dynticks CPUs.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      12351ef8
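
      The idea behind these two commits, kicking the tick only when the
      rotation list goes from empty to non-empty, can be modelled with a few
      lines of standalone C (all names hypothetical).

        #include <stdio.h>
        #include <stdbool.h>

        struct node { struct node *next; const char *name; };

        static struct node *rotation_list;   /* this CPU's rotation list */
        static int self_ipis;

        static void kick_full_dynticks_cpu(void)
        {
            /* Stands in for the self-IPI that restarts the stopped tick so
             * perf_event_task_tick() gets serviced again. */
            self_ipis++;
        }

        static void rotation_list_add(struct node *n)
        {
            bool was_empty = (rotation_list == NULL);

            n->next = rotation_list;
            rotation_list = n;

            /* Only the empty -> non-empty transition needs a kick; later
             * insertions find the tick already running. */
            if (was_empty)
                kick_full_dynticks_cpu();
        }

        int main(void)
        {
            struct node a = { .name = "ctx-A" }, b = { .name = "ctx-B" };

            rotation_list_add(&a);
            rotation_list_add(&b);
            printf("self IPIs sent: %d\n", self_ipis);   /* 1 */
            return 0;
        }
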
  17. 21 Apr, 2013 1 commit
    • events: Protect access via task_subsys_state_check() · c79aa0d9
      Paul E. McKenney authored
      The following RCU splat indicates lack of RCU protection:
      
      [  953.267649] ===============================
      [  953.267652] [ INFO: suspicious RCU usage. ]
      [  953.267657] 3.9.0-0.rc6.git2.4.fc19.ppc64p7 #1 Not tainted
      [  953.267661] -------------------------------
      [  953.267664] include/linux/cgroup.h:534 suspicious rcu_dereference_check() usage!
      [  953.267669]
      [  953.267669] other info that might help us debug this:
      [  953.267669]
      [  953.267675]
      [  953.267675] rcu_scheduler_active = 1, debug_locks = 0
      [  953.267680] 1 lock held by glxgears/1289:
      [  953.267683]  #0:  (&sig->cred_guard_mutex){+.+.+.}, at: [<c00000000027f884>] .prepare_bprm_creds+0x34/0xa0
      [  953.267700]
      [  953.267700] stack backtrace:
      [  953.267704] Call Trace:
      [  953.267709] [c0000001f0d1b6e0] [c000000000016e30] .show_stack+0x130/0x200 (unreliable)
      [  953.267717] [c0000001f0d1b7b0] [c0000000001267f8] .lockdep_rcu_suspicious+0x138/0x180
      [  953.267724] [c0000001f0d1b840] [c0000000001d43a4] .perf_event_comm+0x4c4/0x690
      [  953.267731] [c0000001f0d1b950] [c00000000027f6e4] .set_task_comm+0x84/0x1f0
      [  953.267737] [c0000001f0d1b9f0] [c000000000280414] .setup_new_exec+0x94/0x220
      [  953.267744] [c0000001f0d1ba70] [c0000000002f665c] .load_elf_binary+0x58c/0x19b0
      ...
      
      This commit therefore adds the required RCU read-side critical
      section to perf_event_comm().
      Reported-by: Adam Jackson <ajax@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: a.p.zijlstra@chello.nl
      Cc: paulus@samba.org
      Cc: acme@ghostprotocols.net
      Link: http://lkml.kernel.org/r/20130419190124.GA8638@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Tested-by: Gustavo Luiz Duarte <gusld@br.ibm.com>
      c79aa0d9
  18. 15 Apr, 2013 1 commit
  19. 12 Apr, 2013 1 commit
  20. 10 Apr, 2013 1 commit
    • perf: make perf_event cgroup hierarchical · ef824fa1
      Tejun Heo authored
      perf_event is one of a couple remaining cgroup controllers with broken
      hierarchy support.  Converting it to support hierarchy is almost
      trivial.  The only thing necessary is to consider a task belonging to
      a descendant cgroup as a match.  IOW, if the cgroup of the currently
      executing task (@cpuctx->cgrp) equals or is a descendant of the
      event's cgroup (@event->cgrp), then the event should be enabled.
      
      Implement hierarchy support and remove .broken_hierarchy tag along
      with the incorrect comment on what needs to be done for hierarchy
      support.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      ef824fa1
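
      A compact model of the new matching rule (hypothetical structures, not
      the kernel's cgroup types): a task's cgroup matches an event's cgroup if
      it is that cgroup or nested anywhere below it.

        #include <stdio.h>
        #include <stdbool.h>

        /* Hypothetical stand-in for a cgroup with a parent pointer. */
        struct cgroup { const char *name; struct cgroup *parent; };

        /* The event is enabled if the running task's cgroup equals the
         * event's cgroup or is a descendant of it. */
        static bool perf_cgroup_match(struct cgroup *task_cgrp, struct cgroup *event_cgrp)
        {
            for (struct cgroup *c = task_cgrp; c; c = c->parent)
                if (c == event_cgrp)
                    return true;
            return false;
        }

        int main(void)
        {
            struct cgroup root  = { "root",  NULL };
            struct cgroup web   = { "web",   &root };
            struct cgroup child = { "web/a", &web };

            printf("task in web/a vs event on web: %d\n", perf_cgroup_match(&child, &web)); /* 1 */
            printf("task in root  vs event on web: %d\n", perf_cgroup_match(&root, &web));  /* 0 */
            return 0;
        }
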
  21. 08 Apr, 2013 1 commit
  22. 01 Apr, 2013 3 commits
  23. 18 Mar, 2013 3 commits
    • perf/cgroup: Add __percpu annotation to perf_cgroup->info · 86e213e1
      Namhyung Kim authored
      It's a per-cpu data structure but missed the __percpu annotation.
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Link: http://lkml.kernel.org/r/1363600594-11453-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      86e213e1
    • perf: Generate EXIT event only once per task context · d610d98b
      Namhyung Kim authored
      perf_event_task_event() iterates the pmu list and generates events
      for each eligible pmu context.  But if the task_event has a task_ctx,
      like in EXIT, it'll generate events even though the pmu doesn't
      have an eligible one. Fix it by moving the code to the proper
      places.
      
      Before this patch:
      
        $ perf record -n true
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.006 MB perf.data (~248 samples) ]
      
        $ perf report -D | tail
        Aggregated stats:
                   TOTAL events:         73
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          4
        cycles stats:
                   TOTAL events:         73
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          4
      
      After this patch:
      
        $ perf report -D | tail
        Aggregated stats:
                   TOTAL events:         70
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          1
        cycles stats:
                   TOTAL events:         70
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          1
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1363332433-7637-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d610d98b
    • perf: Reset hwc->last_period on sw clock events · 778141e3
      Namhyung Kim authored
      When cpu/task clock events are initialized, their sampling
      frequencies are converted to have a fixed value.  However this
      missed updating hwc->last_period, which was set to 1 for
      initial sampling frequency calibration.
      
      Because this hwc->last_period value is used as the period in
      perf_swevent_hrtime(), every recorded sample will have an
      incorrect period of 1.
      
        $ perf record -e task-clock noploop 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.158 MB perf.data (~6919 samples) ]
      
        $ perf report -n --show-total-period  --stdio
        # Samples: 4K of event 'task-clock'
        # Event count (approx.): 4000
        #
        # Overhead       Samples        Period  Command  Shared Object              Symbol
        # ........  ............  ............  .......  .............  ..................
        #
            99.95%          3998          3998  noploop  noploop        [.] main
             0.03%             1             1  noploop  libc-2.15.so   [.] init_cacheinfo
             0.03%             1             1  noploop  ld-2.15.so     [.] open_verify
      
      Note that it doesn't affect non-sampling events, so perf stat
      still gets the correct value with or without this patch.
      
        $ perf stat -e task-clock noploop 1
      
         Performance counter stats for 'noploop 1':
      
               1000.272525 task-clock                #    1.000 CPUs utilized
      
               1.000560605 seconds time elapsed
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1363574507-18808-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      778141e3
  24. 06 Mar, 2013 1 commit
  25. 28 Feb, 2013 2 commits
    • hlist: drop the node parameter from iterators · b67bfe0d
      Sasha Levin authored
      I'm not sure why, but the hlist for each entry iterators were conceived
      differently from the list ones. While the list ones are formed as follows:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small amount of places were using the 'node' parameter, this
       was modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
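
      To make the calling-convention change concrete, here is a self-contained
      userspace re-creation of a minimal hlist plus the post-change
      three-parameter iterator (the real macros live in <linux/list.h>; this
      uses the GCC/Clang typeof extension and is only a simplified model).

        #include <stdio.h>
        #include <stddef.h>

        struct hlist_head { struct hlist_node *first; };
        struct hlist_node { struct hlist_node *next, **pprev; };

        #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

        #define hlist_entry_safe(ptr, type, member) \
            ((ptr) ? container_of(ptr, type, member) : NULL)

        /* New form: no separate 'struct hlist_node *pos' cursor parameter,
         * so it now reads just like list_for_each_entry(). */
        #define hlist_for_each_entry(pos, head, member)                            \
            for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member);    \
                 pos;                                                              \
                 pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))

        struct item { int val; struct hlist_node node; };

        static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
        {
            n->next = h->first;
            if (h->first)
                h->first->pprev = &n->next;
            h->first = n;
            n->pprev = &h->first;
        }

        int main(void)
        {
            struct hlist_head head = { NULL };
            struct item a = { .val = 1 }, b = { .val = 2 }, *pos;

            hlist_add_head(&a.node, &head);
            hlist_add_head(&b.node, &head);

            /* Old (pre-change) call sites needed an extra node cursor, e.g.
             *   struct hlist_node *n;
             *   hlist_for_each_entry(pos, n, &head, node) { ... }           */
            hlist_for_each_entry(pos, &head, node)
                printf("%d\n", pos->val);
            return 0;
        }
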
    • events: convert to idr_alloc() · 0e9c3be2
      Tejun Heo authored
      Convert to the much saner new idr interface.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e9c3be2