1. 07 Apr, 2009 17 commits
      x86, perfcounters: add atomic64_xchg() · 98c2aaf8
      Ingo Molnar authored
Complete atomic64_t support on the 32-bit side by adding atomic64_xchg().
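A minimal sketch of how such a 64-bit xchg can be built on 32-bit x86 from
the atomic64_cmpxchg() primitive that already exists at this point (the
exact body here is illustrative, not the committed code):

```
static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new_val)
{
	u64 old_val;

	/* retry until we atomically swap in new_val and observe old_val */
	do {
		old_val = atomic64_read(ptr);
	} while (atomic64_cmpxchg(ptr, old_val, new_val) != old_val);

	return old_val;
}
```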
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20090406094518.445450972@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98c2aaf8
      perf_counter tools: kerneltop: display per function percentage along with event count · 6278af66
      Mike Galbraith authored
---------------------------------------------------------------------------
 KernelTop:   90551 irqs/sec  kernel:15.0% [NMI, 100000 CPU cycles],  (all, 4 CPUs)
---------------------------------------------------------------------------
      
                   events    pcnt         RIP          kernel function
        ______     ______   _____   ________________   _______________
      
                 16871.00 - 19.1% - ffffffff80328e20 : clear_page_c
                  8810.00 -  9.9% - ffffffff8048ce80 : page_fault
                  4746.00 -  5.4% - ffffffff8048cae2 : _spin_lock
                  4428.00 -  5.0% - ffffffff80328e70 : copy_page_c
                  3340.00 -  3.8% - ffffffff80329090 : copy_user_generic_string!
                  2679.00 -  3.0% - ffffffff8028a16b : get_page_from_freelist
                  2254.00 -  2.5% - ffffffff80296f19 : unmap_vmas
                  2082.00 -  2.4% - ffffffff80297e19 : handle_mm_fault
                  1754.00 -  2.0% - ffffffff80288dc8 : __rmqueue_smallest
                  1553.00 -  1.8% - ffffffff8048ca58 : _spin_lock_irqsave
                  1400.00 -  1.6% - ffffffff8028cdc8 : release_pages
                  1337.00 -  1.5% - ffffffff80285400 : find_get_page
                  1335.00 -  1.5% - ffffffff80225a23 : do_page_fault
                  1299.00 -  1.5% - ffffffff802ba8e7 : __d_lookup
                  1174.00 -  1.3% - ffffffff802b38f3 : __link_path_walk
                  1155.00 -  1.3% - ffffffff802843e1 : perf_swcounter_ctx_event!
                  1137.00 -  1.3% - ffffffff8028d118 : ____pagevec_lru_add
                   963.00 -  1.1% - ffffffff802a670b : kmem_cache_alloc
                   885.00 -  1.0% - ffffffff8024bc61 : __wake_up_bit
      
      Display per function percentage along with event count.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6278af66
      perf_counter: minimize context time updates · bce379bf
      Peter Zijlstra authored
Push the update_context_time() calls up the stack so that we get fewer
invocations and thereby less noisy output:
      
      before:
      
       # ./perfstat -e 1:0 -e 1:1 -e 1:1 -e 1:1 -l ls > /dev/null
      
       Performance counter stats for 'ls':
      
            10.163691  cpu clock ticks      (msecs)  (scaled from 98.94%)
            10.215360  task clock ticks     (msecs)  (scaled from 98.18%)
            10.185549  task clock ticks     (msecs)  (scaled from 98.53%)
            10.183581  task clock ticks     (msecs)  (scaled from 98.71%)
      
       Wall-clock time elapsed:    11.912858 msecs
      
      after:
      
       # ./perfstat -e 1:0 -e 1:1 -e 1:1 -e 1:1 -l ls > /dev/null
      
       Performance counter stats for 'ls':
      
             9.316630  cpu clock ticks      (msecs)
             9.280789  task clock ticks     (msecs)
             9.280789  task clock ticks     (msecs)
             9.280789  task clock ticks     (msecs)
      
       Wall-clock time elapsed:     9.574872 msecs
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.618876874@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bce379bf
      perf_counter: remove rq->lock usage · 849691a6
      Peter Zijlstra authored
      Now that all the task runtime clock users are gone, remove the ugly
      rq->lock usage from perf counters, which solves the nasty deadlock
      seen when a software task clock counter was read from an NMI overflow
      context.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.531137582@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      849691a6
      perf_counter: rework the task clock software counter · a39d6f25
      Peter Zijlstra authored
      Rework the task clock software counter to use the context time instead
      of the task runtime clock, this removes the last such user.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.445450972@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a39d6f25
      perf_counter: rework context time · 4af4998b
      Peter Zijlstra authored
      Since perf_counter_context is switched along with tasks, we can
      maintain the context time without using the task runtime clock.
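Conceptually that amounts to stamping the context at sched-in/out and
accumulating deltas, along these lines (a sketch; field and helper names
are illustrative):

```
/* illustrative: maintain ctx->time from a plain perf clock */
static void update_context_time(struct perf_counter_context *ctx)
{
	u64 now = perf_clock();

	ctx->time += now - ctx->timestamp;	/* time the context was active */
	ctx->timestamp = now;			/* re-arm for the next delta */
}
```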
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.353552838@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4af4998b
      perf_counter: change event definition · 4c9e2542
      Peter Zijlstra authored
      Currently the definition of an event is slightly ambiguous. We have
      wakeup events, for poll() and SIGIO, which are either generated
when a record crosses a page boundary (hw_event.wakeup_events == 0),
      or every wakeup_events new records.
      
      Now a record can be either a counter overflow record, or a number of
      different things, like the mmap PROT_EXEC region notifications.
      
      Then there is the PERF_COUNTER_IOC_REFRESH event limit, which only
      considers counter overflows.
      
This patch changes the wakeup_events and SIGIO notification to only
      consider overflow events. Furthermore it changes the SIGIO notification
      to report SIGHUP when the event limit is reached and the counter will
      be disabled.
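From userspace the new semantics can be consumed roughly like this
(a sketch; the handler bodies are illustrative):

```
#include <signal.h>
#include <unistd.h>

/* SIGIO: wakeup_events new overflow records are ready in the mmap buffer;
 * SIGHUP: the event limit was reached and the counter got disabled */
static void on_sigio(int sig)
{
	/* drain the counter's mmap data pages here */
}

static void on_sighup(int sig)
{
	write(2, "counter limit hit\n", 18);
}

static void setup_signals(void)
{
	signal(SIGIO, on_sigio);
	signal(SIGHUP, on_sighup);
}
```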
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.266679874@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4c9e2542
      perf_counter: comment the perf_event_type stuff · 0c593b34
      Peter Zijlstra authored
      Describe the event format.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.211174347@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0c593b34
      perf_counter: counter overflow limit · 79f14641
      Peter Zijlstra authored
      Provide means to auto-disable the counter after 'n' overflow events.
      
      Create the counter with hw_event.disabled = 1, and then issue an
ioctl(fd, PERF_COUNTER_IOC_REFRESH, n); to set the limit and enable
      the counter.
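A sketch of that usage pattern (the ioctl number comes from the
perf_counter header; error handling elided):

```
#include <sys/ioctl.h>

/* Arm a counter created with hw_event.disabled = 1: enable it now
 * and auto-disable it again after n overflow events. */
static int arm_counter(int fd, int n)
{
	return ioctl(fd, PERF_COUNTER_IOC_REFRESH, n);
}
```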
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.083139737@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      79f14641
      perf_counter: PERF_RECORD_TIME · 339f7c90
      Peter Zijlstra authored
      By popular request, provide means to log a timestamp along with the
      counter overflow event.
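Presumably requested per counter through the record_type bitmask, along
the lines of (a sketch):

```
/* sketch: ask for a timestamp in each overflow record */
static void want_timestamps(struct perf_counter_hw_event *hw_event)
{
	hw_event->record_type |= PERF_RECORD_TIME;
}
```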
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094518.024173282@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      339f7c90
      perf_counter: fix the mlock accounting · ebb3c4c4
      Peter Zijlstra authored
Reading through the code I saw I forgot to finish the mlock accounting.
      Do so now.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.899767331@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ebb3c4c4
perf_counter: there's more to overflow than writing events · f6c7d5fe
      Peter Zijlstra authored
      Prepare for more generic overflow handling. The new perf_counter_overflow()
      method will handle the generic bits of the counter overflow, and can return
      a !0 return value, in which case the counter should be (soft) disabled, so
      that it won't count until it's properly disabled.
      
      XXX: do powerpc and swcounter
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.812109629@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f6c7d5fe
      perf_counter: x86: self-IPI for pending work · b6276f35
      Peter Zijlstra authored
      Implement set_perf_counter_pending() with a self-IPI so that it will
      run ASAP in a usable context.
      
      For now use a second IRQ vector, because the primary vector pokes
      the apic in funny ways that seem to confuse things.
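Conceptually (a sketch; the vector name is illustrative):

```
/* sketch: kick the pending-work interrupt on the local CPU */
static inline void set_perf_counter_pending(void)
{
	apic->send_IPI_self(LOCAL_PENDING_VECTOR);
}
```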
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.724626696@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b6276f35
      perf_counter: generalize pending infrastructure · 671dec5d
      Peter Zijlstra authored
      Prepare the pending infrastructure to do more than wakeups.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.634732847@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      671dec5d
      perf_counter: SIGIO support · 3c446b3d
      Peter Zijlstra authored
      Provide support for fcntl() I/O availability signals.
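That is the standard fcntl() ownership dance, applied to a counter fd
(a sketch):

```
#include <fcntl.h>
#include <unistd.h>

/* route SIGIO for this counter fd to the calling process */
static void setup_sigio(int fd)
{
	fcntl(fd, F_SETOWN, getpid());
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);
}
```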
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.579788800@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c446b3d
      perf_counter: add more context information · 9c03d88e
      Peter Zijlstra authored
      Change the callchain context entries to u16, so as to gain some space.
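Roughly this shape (a sketch; the stack depth constant is illustrative):

```
#define PERF_MAX_STACK_DEPTH	254

struct perf_callchain_entry {
	/* u16 counts keep the header small: nr total entries, of which
	 * hv/kernel/user say how many IPs came from each context */
	u16	nr, hv, kernel, user;
	u64	ip[PERF_MAX_STACK_DEPTH];
};
```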
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.457320003@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9c03d88e
      perf_counter: update mmap() counter read, take 2 · a2e87d06
      Peter Zijlstra authored
      Update the userspace read method.
      
      Paul noted that:
       - userspace cannot observe ->lock & 1 on the same cpu.
       - we need a barrier() between reading ->lock and ->index
   to ensure we read them in that particular order.
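With both points folded in, the documented userspace read sequence looks
roughly like this (a sketch following the in-tree comment; barrier() is a
compiler barrier, and pmc_read()/read_counter_slow() stand for the arch's
user-mode counter read and the regular read() fallback):

```
/* sketch: lock-free counter read against the mmap()ed control page */
static s64 read_counter(struct perf_counter_mmap_page *pc, int fd)
{
	u32 seq;
	s64 count;

	do {
		seq = pc->lock;
		barrier();			/* read ->lock before ->index */
		if (!pc->index)			/* not on this cpu: fall back */
			return read_counter_slow(fd);
		count = pmc_read(pc->index - 1);	/* raw hw counter */
		count += pc->offset;			/* kernel base value */
		barrier();
	} while (pc->lock != seq);		/* retry if it changed under us */

	return count;
}
```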
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <20090406094517.368446033@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a2e87d06
  2. 06 Apr, 2009 23 commits
      perf_counter: update mmap() counter read · 92f22a38
      Peter Zijlstra authored
      Paul noted that we don't need SMP barriers for the mmap() counter read
because it's always on the same cpu (otherwise you can't access the hw
      counter anyway).
      
      So remove the SMP barriers and replace them with regular compiler
      barriers.
      
      Further, update the comment to include a race free method of reading
      said hardware counter. The primary change is putting the pmc_read
      inside the seq-loop, otherwise we can still race and read rubbish.
Noticed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.577951445@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      92f22a38
      perf_counter: add more context information · 5872bdb8
      Peter Zijlstra authored
      Put in counts to tell which ips belong to what context.
      
        -----
         | |  hv
         | --
      nr | |  kernel
         | --
         | |  user
        -----
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.493101305@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5872bdb8
      perf_counter: kerneltop: update to new ABI · 3df70fd6
      Peter Zijlstra authored
      Update to reflect the new record_type ABI changes.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.407283141@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3df70fd6
      perf_counter: per event wakeups · c457810a
      Peter Zijlstra authored
      By request, provide a way to request a wakeup every 'n' events instead
      of every page of output.
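Configured per counter (a sketch):

```
/* sketch: wake poll()/SIGIO waiters every n records; 0 keeps the
 * old behaviour of one wakeup per page of output */
static void wakeup_every(struct perf_counter_hw_event *hw_event, u32 n)
{
	hw_event->wakeup_events = n;
}
```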
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.323309784@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c457810a
      perf_counter: move the event overflow output bits to record_type · 8a057d84
      Peter Zijlstra authored
      Per suggestion from Paul, move the event overflow bits to record_type
      and sanitize the enums a bit.
      
      Breaks the ABI -- again ;-)
Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.151921176@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8a057d84
      perf_counter tools: kerneltop: add real-time data acquisition thread · 9dd49988
      Mike Galbraith authored
      Decouple kerneltop display from event acquisition by introducing
a separate data acquisition thread. This fixes annoying kerneltop
      display refresh jitter and missed events.
      
      Also add a -r <prio> option, to switch the data acquisition thread
      to real-time priority.
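The thread itself is plain pthreads, roughly (a sketch; mapped_read()
stands for the tool's existing mmap-buffer reader):

```
#include <pthread.h>
#include <sched.h>

static void *acquisition_thread(void *arg)
{
	for (;;)
		mapped_read();		/* drain the counter mmap buffers */
	return NULL;
}

/* sketch: what -r <prio> maps to */
static void start_acquisition(int rt_prio)
{
	pthread_t tid;

	pthread_create(&tid, NULL, acquisition_thread, NULL);
	if (rt_prio) {
		struct sched_param p = { .sched_priority = rt_prio };

		pthread_setschedparam(tid, SCHED_FIFO, &p);
	}
}
```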
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9dd49988
      perf_counter: pmc arbitration · 4e935e47
      Peter Zijlstra authored
      Follow the example set by powerpc and try to play nice with oprofile
      and the nmi watchdog.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.459968444@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4e935e47
      perf_counter: x86: callchain support · d7d59fb3
      Peter Zijlstra authored
      Provide the x86 perf_callchain() implementation.
      
      Code based on the ftrace/sysprof code from Soeren Sandmann Pedersen.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Orig-LKML-Reference: <20090330171024.341993293@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d7d59fb3
      perf_counter: provide generic callchain bits · 394ee076
      Peter Zijlstra authored
      Provide the generic callchain support bits. If hw_event->callchain is
      set the arch specific perf_callchain() function is called upon to
      provide a perf_callchain_entry structure filled with the current
      callchain.
      
      If it does so, it is added to the overflow output event.
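Schematically, in the overflow output path (a sketch; callchain_size() is
a hypothetical helper for the record length):

```
/* sketch: append the callchain to the overflow record if requested */
static void output_callchain(struct perf_counter *counter,
			     struct pt_regs *regs,
			     struct perf_output_handle *handle)
{
	struct perf_callchain_entry *chain = NULL;

	if (counter->hw_event.callchain)
		chain = perf_callchain(regs);	/* arch hook, may be NULL */

	if (chain)
		perf_output_copy(handle, chain, callchain_size(chain));
}
```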
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.254266860@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      394ee076
      perf_counter tools: kerneltop: update event_types · 023c54c4
      Peter Zijlstra authored
      Go along with the new perf_event_type ABI.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.133985461@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      023c54c4
      perf_counter: re-arrange the perf_event_type · 5ed00415
      Peter Zijlstra authored
      Breaks ABI yet again :-)
      
      Change the event type so that [0, 2^31-1] are regular event types, but
      [2^31, 2^32-1] forms a bitmask for overflow events.
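A consumer can then split the space on the high bit (a sketch):

```
/* sketch: [0, 2^31-1] are plain event types, the upper half is a
 * bitmask describing what an overflow record contains */
static int is_overflow_event(const struct perf_event_header *header)
{
	return (header->type & (1U << 31)) != 0;
}
```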
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.047961770@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5ed00415
      perf_counter: small cleanup of the output routines · 78d613eb
      Peter Zijlstra authored
      Move the nmi argument to the _begin() function, so that _end() only needs the
      handle. This allows the _begin() function to generate a wakeup on event loss.
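I.e. the signatures end up roughly as (a sketch):

```
/* sketch: nmi moves into _begin(), _end() only needs the handle */
int  perf_output_begin(struct perf_output_handle *handle,
		       struct perf_counter *counter,
		       unsigned int size, int nmi);
void perf_output_end(struct perf_output_handle *handle);
```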
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.959404268@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78d613eb
      perf_counter tools: optionally scale counter values in perfstat mode · 31f004df
      Paul Mackerras authored
      Impact: new functionality
      
This adds an option to the perfstat mode of kerneltop to scale the
      reported counter values according to the fraction of time that each
      counter gets to count.  This is invoked with the -l option (I used 'l'
      because s, c, a and e were all taken already.)  This uses the new
      PERF_RECORD_TOTAL_TIME_{ENABLED,RUNNING} read format options.
      
      With this, we get output like this:
      
      $ ./perfstat -l -e 0:0,0:1,0:2,0:3,0:4,0:5 ./spin
      
       Performance counter stats for './spin':
      
           4016072055  CPU cycles           (events)  (scaled from 66.53%)
           2005887318  instructions         (events)  (scaled from 66.53%)
              1762849  cache references     (events)  (scaled from 66.69%)
               165229  cache misses         (events)  (scaled from 66.85%)
           1001298009  branches             (events)  (scaled from 66.78%)
                41566  branch misses        (events)  (scaled from 66.61%)
      
       Wall-clock time elapsed:  2438.227446 msecs
      
      This also lets us detect when a counter is zero because the counter
      never got to go on the CPU at all.  In that case we print <not counted>
      rather than 0.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.871484899@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      31f004df
      perf_counter: x86: proper error propagation for the x86 hw_perf_counter_init() · 9ea98e19
      Peter Zijlstra authored
      Now that Paul cleaned up the error propagation paths, pass down the
      x86 error as well.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.792822360@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ea98e19
      perf_counter: make it possible for hw_perf_counter_init to return error codes · d5d2bc0d
      Paul Mackerras authored
      Impact: better error reporting
      
      At present, if hw_perf_counter_init encounters an error, all it can do
      is return NULL, which causes sys_perf_counter_open to return an EINVAL
      error to userspace.  This isn't very informative for userspace; it means
      that userspace can't tell the difference between "sorry, oprofile is
      already using the PMU" and "we don't support this CPU" and "this CPU
      doesn't support the requested generic hardware event".
      
      This commit uses the PTR_ERR/ERR_PTR/IS_ERR set of macros to let
      hw_perf_counter_init return an error code on error rather than just NULL
      if it wishes.  If it does so, that error code will be returned from
      sys_perf_counter_open to userspace.  If it returns NULL, an EINVAL
      error will be returned to userspace, as before.
      
      This also adapts the powerpc hw_perf_counter_init to make use of this
      to return ENXIO, EINVAL, EBUSY, or EOPNOTSUPP as appropriate.  It would
      be good to add extra error numbers in future to allow userspace to
      distinguish the various errors that are currently reported as EINVAL,
      i.e. irq_period < 0, too many events in a group, conflict between
      exclude_* settings in a group, and PMU resource conflict in a group.
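The caller side then follows the usual ERR_PTR pattern, roughly (a sketch;
get_hw_ops() is a hypothetical wrapper):

```
/* sketch: translate the arch return into an errno for userspace */
static const struct hw_perf_counter_ops *
get_hw_ops(struct perf_counter *counter, int *errp)
{
	const struct hw_perf_counter_ops *ops = hw_perf_counter_init(counter);

	if (IS_ERR(ops)) {
		*errp = PTR_ERR(ops);	/* e.g. -EBUSY, -ENXIO, -EOPNOTSUPP */
		return NULL;
	}
	if (!ops)
		*errp = -EINVAL;	/* bare NULL keeps the old behaviour */
	return ops;
}
```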
      
      [ v2: fix a bug pointed out by Corey Ashford where error returns from
            hw_perf_counter_init were not handled correctly in the case of
            raw hardware events.]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.682428180@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d5d2bc0d
      perf_counter: powerpc: only reserve PMU hardware when we need it · 7595d63b
      Paul Mackerras authored
      Impact: cooperate with oprofile
      
      At present, on PowerPC, if you have perf_counters compiled in, oprofile
      doesn't work.  There is code to allow the PMU to be shared between
      competing subsystems, such as perf_counters and oprofile, but currently
      the perf_counter subsystem reserves the PMU for itself at boot time,
      and never releases it.
      
      This makes perf_counter play nicely with oprofile.  Now we keep a count
      of how many perf_counter instances are counting hardware events, and
      reserve the PMU when that count becomes non-zero, and release the PMU
      when that count becomes zero.  This means that it is possible to have
      perf_counters compiled in and still use oprofile, as long as there are
      no hardware perf_counters active.  This also means that if oprofile is
      active, sys_perf_counter_open will fail if the hw_event specifies a
      hardware event.
      
      To avoid races with other tasks creating and destroying perf_counters,
      we use a mutex.  We use atomic_inc_not_zero and atomic_add_unless to
      avoid having to take the mutex unless there is a possibility of the
      count going between 0 and 1.
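The counting pattern, sketched (names illustrative; reserve_pmc_hardware()
and release_pmc_hardware() are the existing PMU-sharing hooks):

```
static atomic_t num_hw_counters;
static DEFINE_MUTEX(pmc_reserve_mutex);

/* grab the PMU only when the first hardware counter appears */
static int hw_counter_reserve(void)
{
	int err = 0;

	/* fast path: count already non-zero, the PMU is ours */
	if (atomic_inc_not_zero(&num_hw_counters))
		return 0;

	mutex_lock(&pmc_reserve_mutex);
	if (atomic_read(&num_hw_counters) == 0 && reserve_pmc_hardware())
		err = -EBUSY;		/* e.g. oprofile is active */
	else
		atomic_inc(&num_hw_counters);
	mutex_unlock(&pmc_reserve_mutex);

	return err;
}

/* drop the PMU again when the last hardware counter goes away */
static void hw_counter_release(void)
{
	/* fast path: not the 1 -> 0 transition */
	if (atomic_add_unless(&num_hw_counters, -1, 1))
		return;

	mutex_lock(&pmc_reserve_mutex);
	if (atomic_dec_return(&num_hw_counters) == 0)
		release_pmc_hardware();
	mutex_unlock(&pmc_reserve_mutex);
}
```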
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.627912475@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7595d63b
      perf_counter: kerneltop: parse the mmap data stream · 3c1ba6fa
      Peter Zijlstra authored
Frob the kerneltop code to print the mmap data in the stream.
      
A better use would be collecting the IPs per PID and mapping them onto
the provided userspace code. (TODO)
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.501902515@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c1ba6fa
      perf_counter: executable mmap() information · 0a4a9391
      Peter Zijlstra authored
      Currently the profiling information returns userspace IPs but no way
      to correlate them to userspace code. Userspace could look into
      /proc/$pid/maps but that might not be current or even present anymore
      at the time of analyzing the IPs.
      
      Therefore provide means to track the mmap information and provide it
      in the output stream.
      
      XXX: only covers mmap()/munmap(), mremap() and mprotect() are missing.
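The record gives userspace enough to map IPs back to file offsets, roughly
(a sketch; the exact layout is illustrative):

```
/* sketch: a PERF_EVENT_MMAP record in the output stream */
struct perf_mmap_event {
	struct perf_event_header header;
	u32	pid, tid;
	u64	start;			/* vma start address */
	u64	len;			/* vma length */
	u64	pgoff;			/* file page offset */
	char	filename[];		/* backing file, NUL-terminated */
};
```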
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Orig-LKML-Reference: <20090330171023.417259499@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0a4a9391
      perf_counter: kerneltop: simplify data_head read · 19556439
      Peter Zijlstra authored
      Now that the kernel side changed, match up again.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.327144324@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      19556439
      perf_counter: fix update_userpage() · 38ff667b
      Peter Zijlstra authored
It just occurred to me that it is possible to have multiple contending
      updates of the userpage (mmap information vs overflow vs counter).
      This would break the seqlock logic.
      
It appears the arch code uses this from NMI context, so we cannot
      possibly serialize its use, therefore separate the data_head update
      from it and let it return to its original use.
      
      The arch code needs to make sure there are no contending callers by
      disabling the counter before using it -- powerpc appears to do this
      nicely.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.241410660@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38ff667b
      perf_counter: unify and fix delayed counter wakeup · 925d519a
      Peter Zijlstra authored
      While going over the wakeup code I noticed delayed wakeups only work
      for hardware counters but basically all software counters rely on
      them.
      
      This patch unifies and generalizes the delayed wakeup to fix this
      issue.
      
      Since we're dealing with NMI context bits here, use a cmpxchg() based
      single link list implementation to track counters that have pending
      wakeups.
      
      [ This should really be generic code for delayed wakeups, but since we
        cannot use cmpxchg()/xchg() in generic code, I've let it live in the
        perf_counter code. -- Eric Dumazet could use it to aggregate the
        network wakeups. ]
      
Furthermore, the x86 method of using TIF flags was flawed in that it's
quite possible to end up setting the bit on the idle task, losing the
wakeup.
      
      The powerpc method uses per-cpu storage and does appear to be
      sufficient.
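The NMI-safe queueing then reduces to a cmpxchg() push onto a
single-linked list, roughly (a sketch; names illustrative):

```
struct perf_wakeup_entry {
	struct perf_wakeup_entry *next;
};

/* sketch: push an entry onto the pending list from any context,
 * including NMI, using nothing but cmpxchg() */
static void perf_pending_queue(struct perf_wakeup_entry **head,
			       struct perf_wakeup_entry *entry)
{
	struct perf_wakeup_entry *old;

	do {
		old = *head;
		entry->next = old;
	} while (cmpxchg(head, old, entry) != old);
}
```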
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      925d519a
      perf_counter: record time running and time enabled for each counter · 53cfbf59
      Paul Mackerras authored
      Impact: new functionality
      
      Currently, if there are more counters enabled than can fit on the CPU,
      the kernel will multiplex the counters on to the hardware using
      round-robin scheduling.  That isn't too bad for sampling counters, but
      for counting counters it means that the value read from a counter
      represents some unknown fraction of the true count of events that
      occurred while the counter was enabled.
      
      This remedies the situation by keeping track of how long each counter
      is enabled for, and how long it is actually on the cpu and counting
      events.  These times are recorded in nanoseconds using the task clock
      for per-task counters and the cpu clock for per-cpu counters.
      
      These values can be supplied to userspace on a read from the counter.
      Userspace requests that they be supplied after the counter value by
      setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or
      PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field
      when creating the counter.  (There is no way to change the read format
      after the counter is created, though it would be possible to add some
      way to do that.)
      
      Using this information it is possible for userspace to scale the count
      it reads from the counter to get an estimate of the true count:
      
      true_count_estimate = count * total_time_enabled / total_time_running
      
      This also lets userspace detect the situation where the counter never
      got to go on the cpu: total_time_running == 0.
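In userspace that reads roughly as follows (a sketch; it assumes both
PERF_FORMAT_TOTAL_TIME_* bits were set at creation time, so read() returns
three u64 values: count, time enabled, time running):

```
#include <stdint.h>
#include <unistd.h>

/* sketch: scale a counter value by its enabled/running times */
static int read_scaled(int fd, uint64_t *result)
{
	uint64_t v[3];		/* count, time_enabled, time_running */

	if (read(fd, v, sizeof(v)) != sizeof(v))
		return -1;
	if (v[2] == 0)		/* never got on the cpu: <not counted> */
		return -1;
	*result = (uint64_t)((double)v[0] * v[1] / v[2]);
	return 0;
}
```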
      
      This functionality has been requested by the PAPI developers, and will
      be generally needed for interpreting the count values from counting
      counters correctly.
      
      In the implementation, this keeps 5 time values (in nanoseconds) for
      each counter: total_time_enabled and total_time_running are used when
      the counter is in state OFF or ERROR and for reporting back to
      userspace.  When the counter is in state INACTIVE or ACTIVE, it is the
      tstamp_enabled, tstamp_running and tstamp_stopped values that are
      relevant, and total_time_enabled and total_time_running are determined
      from them.  (tstamp_stopped is only used in INACTIVE state.)  The
      reason for doing it like this is that it means that only counters
      being enabled or disabled at sched-in and sched-out time need to be
      updated.  There are no new loops that iterate over all counters to
      update total_time_enabled or total_time_running.
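In sketch form (field names per this commit; counter_clock() is a
hypothetical stand-in for the task/cpu clock read):

```
/* sketch: derive the totals from the timestamps at update time */
static void update_counter_times(struct perf_counter *counter)
{
	u64 now = counter_clock(counter);	/* task or cpu clock, in ns */
	u64 run_end = (counter->state == PERF_COUNTER_STATE_INACTIVE) ?
			counter->tstamp_stopped : now;

	counter->total_time_enabled = now - counter->tstamp_enabled;
	counter->total_time_running = run_end - counter->tstamp_running;
}
```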
      
      This also keeps separate child_total_time_running and
      child_total_time_enabled fields that get added in when reporting the
      totals to userspace.  They are separate fields so that they can be
      atomic.  We don't want to use atomics for total_time_running,
      total_time_enabled etc., because then we would have to use atomic
      sequences to update them, which are slower than regular arithmetic and
      memory accesses.
      
      It is possible to measure total_time_running by adding a task_clock
      counter to each group of counters, and total_time_enabled can be
      measured approximately with a top-level task_clock counter (though
      inaccuracies will creep in if you need to disable and enable groups
      since it is not possible in general to disable/enable the top-level
      task_clock counter simultaneously with another group).  However, that
      adds extra overhead - I measured around 15% increase in the context
      switch latency reported by lat_ctx (from lmbench) when a task_clock
      counter was added to each of 2 groups, and around 25% increase when a
      task_clock counter was added to each of 4 groups.  (In both cases a
      top-level task-clock counter was also added.)
      
      In contrast, the code added in this commit gives better information
      with no overhead that I could measure (in fact in some cases I
      measured lower times with this code, but the differences were all less
      than one standard deviation).
      
      [ v2: address review comments by Andrew Morton. ]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      53cfbf59
      perf_counter: allow and require one-page mmap on counting counters · 7730d865
      Peter Zijlstra authored
      A brainfart stopped single page mmap()s working. The rest of the code
      should be perfectly fine with not having any data pages.
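For a counting counter the mmap() is then just the control page (a sketch;
struct perf_counter_mmap_page comes from the perf_counter header):

```
#include <sys/mman.h>
#include <unistd.h>

/* sketch: map only the counter control page; no data pages are
 * needed for a counting (non-sampling) counter */
static struct perf_counter_mmap_page *map_control_page(int fd)
{
	return mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
}
```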
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <1237981712.7972.812.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7730d865