1. 23 May, 2008 25 commits
    • ftrace: don't use raw_local_irq_save/restore · 18cef379
      Steven Rostedt authored
      Using raw_local_irq_save/restore confuses lockdep.
      It's fine to use the normal ones.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: lockdep notrace annotations · 0764d23c
      Steven Rostedt authored
      Add notrace annotations to lockdep to keep ftrace from causing
      recursive problems with lock tracing and debugging.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: irqs off smp_processor_id() fix · 361943ad
      Steven Rostedt authored
      The irqsoff function tracer used __get_cpu_var to determine
      whether it should trace the function or not. The problem is that
      __get_cpu_var can be preempted between getting the CPU number and
      reading the per-cpu variable, so the variable that is read may not
      belong to the CPU the code ends up running on.
      
      At worst, this can give a false positive, where we trace the
      function when we should not.  It will never give a false negative
      since we only want to trace when interrupts are disabled
      and we never preempt when they are.
      
      This fix adds a check after reading the irq flags so that we only
      trace if interrupts are actually disabled. It also changes the
      reading of the cpu variable to use raw_smp_processor_id, since
      preemption no longer matters here; the flags check already
      catches that case.
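      
      A schematic kernel-style sketch of the resulting check (this is not
      the actual patch; everything except the kernel primitives is made up
      for illustration):
      
        #include <linux/irqflags.h>
        #include <linux/linkage.h>
        #include <linux/smp.h>
      
        static void notrace irqsoff_call_sketch(unsigned long ip)
        {
                unsigned long flags;
                int cpu;
      
                local_save_flags(flags);
                if (!irqs_disabled_flags(flags))
                        return;                 /* filters the false positive */
      
                cpu = raw_smp_processor_id();   /* raw: preemption is irrelevant now */
                /* ... record ip in the per-cpu data for 'cpu' ... */
        }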
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: debug smp_processor_id, use notrace preempt disable · 5568b139
      Steven Rostedt authored
      The debug smp_processor_id caused a recursive fault when debugging
      the irqsoff tracer. The tracer used smp_processor_id in the
      ftrace callback, and this function calls preempt_disable, which
      is itself traced. The result was a recursive fault (stack overflow).
      
      Since using smp_processor_id with debugging turned off does not
      fault with the tracer (even when the tracer itself is buggy), the
      debug version should not bring the system down either.
      
      This changes the debug_smp_processor_id to use the notrace versions
      of preempt_disable and enable.
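      
      A minimal sketch of the idea (not the exact lib/smp_processor_id.c
      change; the surrounding sanity checks are elided):
      
        #include <linux/preempt.h>
        #include <linux/smp.h>
      
        static int debug_smp_processor_id_sketch(void)
        {
                int cpu;
      
                preempt_disable_notrace();      /* not traced, so no recursion */
                cpu = raw_smp_processor_id();
                /* ... the usual preemption-safety checks would run here ... */
                preempt_enable_notrace();
                return cpu;
        }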
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: fix kexec · f43fdad8
      Ingo Molnar authored
      disable the tracer while kexec pulls the rug from under the old
      kernel.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: convert single large buffer into single pages. · 4c11d7ae
      Steven Rostedt authored
      Allocating large buffers for the tracer may easily fail.
      This patch converts the buffer from one large, high-order allocation
      to single pages. It uses the struct page LRU field to link the
      pages together.
      
      Later patches may also implement dynamic increasing and decreasing
      of the trace buffers.
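      
      A minimal kernel-style sketch of that allocation scheme (not the
      tracer's actual code; the function name is made up):
      
        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/list.h>
        #include <linux/mm.h>
      
        static int alloc_trace_pages_sketch(struct list_head *pages, int nr)
        {
                int i;
      
                for (i = 0; i < nr; i++) {
                        struct page *page = alloc_page(GFP_KERNEL);
      
                        if (!page)
                                return -ENOMEM;
                        /* chain single pages together via the unused lru field */
                        list_add_tail(&page->lru, pages);
                }
                return 0;
        }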
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add filter select functions to trace · 5072c59f
      Steven Rostedt authored
      This patch adds two files to the debugfs system:
      
       /debugfs/tracing/available_filter_functions
      
      and
      
       /debugfs/tracing/set_ftrace_filter
      
      The available_filter_functions file lists all functions that have been
      recorded by the ftraced daemon via the ftrace_record_ip function.
      This allows users to see which functions have been converted
      to nops and can be enabled for tracing.
      
      To enable functions, simply echo the names (whitespace delimited)
      into set_ftrace_filter. Simple wildcards are also allowed.
      
      echo 'scheduler' > /debugfs/tracing/set_ftrace_filter
      
      Will have only the scheduler be activated when tracing is enabled.
      
      echo 'sched_*' > /debugfs/tracing/set_ftrace_filter
      
      Will have only the functions starting with 'sched_' be activated.
      
      echo '*lock' > /debugfs/tracing/set_ftrace_filter
      
      Will have only functions ending with 'lock' be activated.
      
      echo '*lock*' > /debugfs/tracing/set_ftrace_filter
      
      Will have only functions with 'lock' in their name be activated.
      
      Note: 'sched*lock' will not work. The only wildcard allowed is an
      asterisk at the beginning and/or end of the string passed in.
      
      Multiple names can be passed in, whitespace delimited:
      
      echo 'scheduler *lock *acpi*' > /debugfs/tracing/set_ftrace_filter
      
      is also the same as:
      
      echo 'scheduler' > /debugfs/tracing/set_ftrace_filter
      echo '*lock' >> /debugfs/tracing/set_ftrace_filter
      echo '*acpi*' >> /debugfs/tracing/set_ftrace_filter
      
      Appending does just that. It appends to the list.
      
      To disable all filters, simply echo an empty line into it:
      
      echo > /debugfs/tracing/set_ftrace_filter
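      
      The matching rule above can be illustrated with a small userspace
      re-implementation (this is not the kernel's code, only a sketch of
      the semantics described here):
      
        #include <stdio.h>
        #include <string.h>
      
        /* asterisks are only honoured at the start and/or end of the pattern */
        static int filter_match(const char *pattern, const char *name)
        {
                size_t plen = strlen(pattern);
                int star_front = plen > 0 && pattern[0] == '*';
                int star_back  = plen > 0 && pattern[plen - 1] == '*';
                char core[128];
                size_t clen, nlen = strlen(name);
      
                if (plen == 1 && star_front)            /* bare "*" matches all */
                        return 1;
      
                clen = plen - star_front - star_back;   /* literal part length  */
                if (clen >= sizeof(core))
                        return 0;
                memcpy(core, pattern + star_front, clen);
                core[clen] = '\0';
      
                if (star_front && star_back)            /* "*lock*"  */
                        return strstr(name, core) != NULL;
                if (star_front)                         /* "*lock"   */
                        return nlen >= clen && strcmp(name + nlen - clen, core) == 0;
                if (star_back)                          /* "sched_*" */
                        return strncmp(name, core, clen) == 0;
                return strcmp(name, core) == 0;         /* exact     */
        }
      
        int main(void)
        {
                printf("%d %d %d %d\n",
                       filter_match("sched_*", "sched_switch"),
                       filter_match("*lock", "spin_lock"),
                       filter_match("*lock*", "unlock_page"),
                       filter_match("sched*lock", "sched_spin_lock"));
                return 0;
        }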
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: use dynamic patching for updating mcount calls · d61f82d0
      Steven Rostedt authored
      This patch replaces the indirect call to the mcount function
      pointer with a direct call that will be patched by the
      dynamic ftrace routines.
      
      On boot up, the mcount function calls the ftrace_stub function.
      When the dynamic ftrace code is initialized, the ftrace_stub
      is replaced with a call to the ftrace_record_ip, which records
      the instruction pointers of the locations that call it.
      
      Later, the ftraced daemon will call kstop_machine and patch all
      the locations to nops.
      
      When ftrace is enabled, the original calls to mcount are patched
      to call ftrace_caller, which does a direct call
      to the registered ftrace function. This direct call is also patched
      whenever the function that should be called is updated.
      
      All patching is performed by a kstop_machine routine to prevent the
      race conditions associated with modifying code on the fly.
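      
      For reference, the rel32 arithmetic behind such a patched call site can
      be sketched in plain C (an illustration of x86 "call rel32" encoding,
      not the kernel's patching code):
      
        #include <stdio.h>
      
        /* build the 5 bytes of "call rel32" for a call at 'site' to 'target' */
        static void make_call(unsigned char insn[5],
                              unsigned long site, unsigned long target)
        {
                long rel = (long)(target - (site + 5));   /* relative to next insn */
      
                insn[0] = 0xe8;                           /* CALL rel32 opcode */
                insn[1] = rel & 0xff;
                insn[2] = (rel >> 8) & 0xff;
                insn[3] = (rel >> 16) & 0xff;
                insn[4] = (rel >> 24) & 0xff;
        }
      
        int main(void)
        {
                unsigned char insn[5];
                int i;
      
                make_call(insn, 0x1000, 0x2000);
                for (i = 0; i < 5; i++)
                        printf("%02x ", insn[i]);
                printf("\n");
                return 0;
        }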
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: move memory management out of arch code · 3c1720f0
      Steven Rostedt authored
      This patch moves the memory management of the ftrace
      records out of the arch code and into the generic code
      making the arch code simpler.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: use nops instead of jmp · dfa60aba
      Steven Rostedt authored
      This patch patches the call to mcount with nops instead
      of a jmp over the mcount call.
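      
      For illustration, one valid single-instruction 5-byte NOP on x86 that
      a patched call site could contain (the kernel picks the best NOP
      sequence for the running CPU; this particular choice is only an
      example):
      
        /* nopl 0x0(%eax,%eax,1) -- a 5-byte x86 NOP, same length as "call rel32" */
        static const unsigned char nop5[5] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };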
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add ftrace_enabled sysctl to disable mcount function · b0fc494f
      Steven Rostedt authored
      This patch adds back the sysctl ftrace_enabled. This time it is
      defaulted to on, if DYNAMIC_FTRACE is configured. When ftrace_enabled
      is disabled, the ftrace function is set to the stub return.
      
      If DYNAMIC_FTRACE is also configured, then with ftrace_enabled = 0
      the registered ftrace call sites will all be set to jmps, and no
      new calls into the ftrace recording function (used to find the
      ftrace call sites) will be made.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: dynamic enabling/disabling of function calls · 3d083395
      Steven Rostedt authored
      This patch adds a feature to dynamically replace the ftrace code
      with the jmps to allow a kernel with ftrace configured to run
      as fast as it can without it configured.
      
      The way this works is: on bootup (if ftrace is enabled), an ftrace
      function is registered to record the instruction pointer of all
      places that call mcount.
      
      Later, if there's still any code to patch, a kthread is awoken
      (rate limited to at most once a second) that performs a stop_machine,
      and replaces all the code that was called with a jmp over the call
      to ftrace. It only replaces what was found the previous time. Typically
      the system reaches equilibrium quickly after bootup and there's no code
      patching needed at all.
      
      e.g.
      
        call ftrace  /* 5 bytes */
      
      is replaced with
      
        jmp 3f  /* jmp is 2 bytes and we jump 3 forward */
      3:
      
      When we want to enable ftrace for function tracing, the IP recording
      is removed, and stop_machine is called again to change all the locations
      that were recorded back into calls to ftrace.  When tracing is disabled,
      we replace the code back to the jmp.
      
      Allocation is done by the kthread. If the ftrace recording function is
      called, and we don't have any record slots available, then we simply
      skip that call. Once a second a new page (if needed) is allocated for
      recording new ftrace function calls.  A large batch is allocated at
      boot up to get most of the calls there.
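      
      A toy illustration of the "record, never block" rule described above
      (userspace C with made-up names; the real code records into page-backed
      slots that the kthread tops up at most once a second):
      
        #include <stddef.h>
      
        #define RECORDS_PER_PAGE 128
      
        struct ip_record { unsigned long ip; };
      
        static struct ip_record pool[RECORDS_PER_PAGE];
        static size_t used;
      
        static void record_ip_sketch(unsigned long ip)
        {
                if (used == RECORDS_PER_PAGE)
                        return;              /* no free slot: just skip this call */
                pool[used++].ip = ip;        /* claim a slot, remember the call site */
        }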
      
      Because we do this via stop_machine, we don't have to worry about another
      CPU executing an ftrace call as we modify it. But we do need to worry
      about NMIs, so all functions that might be called in NMI context must be
      annotated with notrace_nmi. When this code is configured in, the NMI code
      will not call into the tracer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: trace preempt off critical timings · 6cd8a4bb
      Steven Rostedt authored
      Add preempt off timings. A lot of kernel core code is taken from the RT patch
      latency trace that was written by Ingo Molnar.
      
      This adds "preemptoff" and "preemptirqsoff" to /debugfs/tracing/available_tracers
      
      Now instead of just tracing irqs off, preemption off can be selected
      to be recorded.
      
      When this is selected, it shares the same files as the irqs off timings.
      One can trace preemption off, irqs off, or the periods where either
      of the two is off.
      
      By echoing "preemptoff" into /debugfs/tracing/current_tracer, recording
      of preempt off only is performed. "irqsoff" will only record the time
      irqs are disabled, but "preemptirqsoff" will take the total time irqs
      or preemption are disabled. Runtime switching of these options is now
      supported by simpling echoing in the appropriate trace name into
      /debugfs/tracing/current_tracer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: trace irq disabled critical timings · 81d68a96
      Steven Rostedt authored
      This patch adds latency tracing for critical timings
      (how long interrupts are disabled for).
      
       "irqsoff" is added to /debugfs/tracing/available_tracers
      
      Note:
        tracing_max_latency
          also holds the max latency for irqsoff (in usecs).
          (defaults to a large number, so latency tracing must be
           explicitly started)
      
        tracing_thresh
          threshold (in usecs): a trace is always printed out if irqs
          are detected to be off longer than the value stated here.
          If tracing_thresh is non-zero, then tracing_max_latency
          is ignored.
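      
      A minimal userspace sketch of driving these files (it assumes debugfs
      is mounted at /debugfs as in the examples below, and that resetting
      tracing_max_latency to 0 is the way to arm a new measurement):
      
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
      
        static void write_str(const char *path, const char *val)
        {
                FILE *f = fopen(path, "w");
      
                if (!f) { perror(path); exit(1); }
                fputs(val, f);
                fclose(f);
        }
      
        int main(void)
        {
                char line[256];
                FILE *f;
      
                write_str("/debugfs/tracing/tracing_max_latency", "0");
                write_str("/debugfs/tracing/current_tracer", "irqsoff");
                write_str("/debugfs/tracing/tracing_enabled", "1");
      
                sleep(10);      /* let the tracer catch a long irqs-off section */
      
                f = fopen("/debugfs/tracing/latency_trace", "r");
                if (!f) { perror("latency_trace"); exit(1); }
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);
                fclose(f);
                return 0;
        }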
      
      Here's an example of a trace with ftrace_enabled = 0
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 100 us, #3/3, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1d.s3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1d.s3  100us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1d.s3  100us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      
      And this is a trace with ftrace_enabled == 1
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 102 us, #12/12, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1dNs3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_read_phy_reg+0x16/0x225 [e1000] (e1000_update_stats+0x5e2/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_swfw_sync_acquire+0x10/0x99 [e1000] (e1000_read_phy_reg+0x49/0x225 [e1000])
       swapper-0     1dNs3   46us : e1000_get_hw_eeprom_semaphore+0x12/0xa6 [e1000] (e1000_swfw_sync_acquire+0x36/0x99 [e1000])
       swapper-0     1dNs3   47us : __const_udelay+0x9/0x47 (e1000_read_phy_reg+0x116/0x225 [e1000])
       swapper-0     1dNs3   47us+: __delay+0x9/0x50 (__const_udelay+0x45/0x47)
       swapper-0     1dNs3   97us : preempt_schedule+0xc/0x84 (__delay+0x4e/0x50)
       swapper-0     1dNs3   98us : e1000_swfw_sync_release+0xc/0x55 [e1000] (e1000_read_phy_reg+0x211/0x225 [e1000])
       swapper-0     1dNs3   99us+: e1000_put_hw_eeprom_semaphore+0x9/0x35 [e1000] (e1000_swfw_sync_release+0x50/0x55 [e1000])
       swapper-0     1dNs3  101us : _spin_unlock_irqrestore+0xe/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: tracer for scheduler wakeup latency · 352ad25a
      Steven Rostedt authored
      This patch adds the tracer that tracks the wakeup latency of the
      highest priority waking task.
      
        "wakeup" is added to /debugfs/tracing/available_tracers
      
      Also added to /debugfs/tracing
      
        tracing_max_latency
           holds the current max latency for the wakeup
      
        wakeup_thresh
           if set to anything other than zero, a trace will be recorded
           for every wakeup that takes longer than the number
           entered here (in usecs, as for all counters)
           (this deletes the previous trace)
      
      Examples:
      
        (with ftrace_enabled = 0)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      --------------------------------------------------------------------
       latency: 26 us, #2/2, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/0-3 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
         quilt-8551  0d..3    0us+: wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
         quilt-8551  0d..4   26us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
        (with ftrace_enabled = 1)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      --------------------------------------------------------------------
       latency: 36 us, #45/45, CPU#0 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/1-5 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
          bash-10653 1d..3    0us : wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
          bash-10653 1d..3    1us : try_to_wake_up+0x271/0x2e7 <ffffffff80233dcf> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..2    2us : try_to_wake_up+0x296/0x2e7 <ffffffff80233df4> (update_rq_clock+0x9/0x20 <ffffffff802303f3>)
          bash-10653 1d..2    2us : update_rq_clock+0x1e/0x20 <ffffffff80230408> (__update_rq_clock+0xc/0x90 <ffffffff80230366>)
          bash-10653 1d..2    3us : __update_rq_clock+0x1b/0x90 <ffffffff80230375> (sched_clock+0x9/0x29 <ffffffff80214529>)
          bash-10653 1d..2    4us : try_to_wake_up+0x2a6/0x2e7 <ffffffff80233e04> (activate_task+0xc/0x3f <ffffffff8022ffca>)
          bash-10653 1d..2    4us : activate_task+0x2d/0x3f <ffffffff8022ffeb> (enqueue_task+0xe/0x66 <ffffffff8022ff66>)
          bash-10653 1d..2    5us : enqueue_task+0x5b/0x66 <ffffffff8022ffb3> (enqueue_task_rt+0x9/0x3c <ffffffff80233351>)
          bash-10653 1d..2    6us : try_to_wake_up+0x2ba/0x2e7 <ffffffff80233e18> (check_preempt_wakeup+0x12/0x99 <ffffffff80234f84>)
      [...]
          bash-10653 1d..5   33us : tracing_record_cmdline+0xcf/0xd4 <ffffffff80338aad> (_spin_unlock+0x9/0x33 <ffffffff8048d3ec>)
          bash-10653 1d..5   34us : _spin_unlock+0x19/0x33 <ffffffff8048d3fc> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..4   35us : wakeup_sched_switch+0x65/0x2ff <ffffffff80339f66> (_spin_lock_irqsave+0xc/0xa9 <ffffffff8048d08b>)
          bash-10653 1d..4   35us : _spin_lock_irqsave+0x19/0xa9 <ffffffff8048d098> (add_preempt_count+0xe/0x77 <ffffffff8023311a>)
          bash-10653 1d..4   36us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
      The [...] was added here to not waste your email box space.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add tracing of context switches · 35e8e302
      Steven Rostedt authored
      This patch adds context switch tracing, with the following format:
      
                        _------=> CPU#
                       / _-----=> irqs-off
                      | / _----=> need-resched
                      || / _---=> hardirq/softirq
                      ||| / _--=> preempt-depth
                      |||| /
                      |||||     delay
          cmd     pid ||||| time  |      pid:prio:state
             \   /    |||||   \   |      /
        swapper-0     1d..3    137us+:  0:140:R --> 2912:120
           sshd-2912  1d..3    216us+:  2912:120:S --> 0:140
        swapper-0     1d..3    261us+:  0:140:R --> 2912:120
           bash-2920  0d..3    267us+:  2920:120:S --> 0:140
           sshd-2912  1d..3    330us!:  2912:120:S --> 0:140
        swapper-0     1d..3   2389us+:  0:140:R --> 2847:120
       yum-upda-2847  1d..3   2411us!:  2847:120:S --> 0:140
        swapper-0     0d..3  11089us+:  0:140:R --> 3139:120
       gdm-bina-3139  0d..3  11113us!:  3139:120:S --> 0:140
        swapper-0     1d..3 102328us+:  0:140:R --> 2847:120
       yum-upda-2847  1d..3 102348us!:  2847:120:S --> 0:140
      
       "sched_switch" is added to /debugfs/tracing/available_tracers
      
      [ Eugene Teo <eugeneteo@kernel.sg>: remove unused tracing_sched_switch_enabled ]
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: function tracer · 1b29b018
      Steven Rostedt authored
      This is a simple trace that uses the ftrace infrastructure. It is
      designed to be fast and small, and easy to use. It is useful to
      record things that happen over a very short period of time, and
      not to analyze the system in general.
      
       Updates:
      
        available_tracers
           "function" is added to this file.
      
        current_tracer
          To enable the function tracer:
      
            echo function > /debugfs/tracing/current_tracer
      
           To disable the tracer (i.e. switch back to the "none" tracer):
      
             echo none > /debugfs/tracing/current_tracer
      
      The output of the function_trace file is as follows
      
        "echo noverbose > /debugfs/tracing/iter_ctrl"
      
      preemption latency trace v1.1.5 on 2.6.24-rc7-tst
      --------------------------------------------------------------------
       latency: 0 us, #419428/4361791, CPU#1 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
          -----------------
          | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     0d.h. 1595128us+: set_normalized_timespec+0x8/0x2d <c043841d> (ktime_get_ts+0x4a/0x4e <c04499d4>)
       swapper-0     0d.h. 1595131us+: _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
      
      Or with verbose turned on:
      
        "echo verbose > /debugfs/tracing/iter_ctrl"
      
      preemption latency trace v1.1.5 on 2.6.24-rc7-tst
      --------------------------------------------------------------------
       latency: 0 us, #419428/4361791, CPU#1 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
          -----------------
          | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
      
               swapper     0 0 9 00000000 00000000 [f3675f41] 1595.128ms (+0.003ms): set_normalized_timespec+0x8/0x2d <c043841d> (ktime_get_ts+0x4a/0x4e <c04499d4>)
               swapper     0 0 9 00000000 00000001 [f3675f45] 1595.131ms (+0.003ms): _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
               swapper     0 0 9 00000000 00000002 [f3675f48] 1595.135ms (+0.003ms): _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
      
      The "trace" file is not affected by the verbose mode, but is by the symonly.
      
       echo "nosymonly" > /debugfs/tracing/iter_ctrl
      
      tracer:
      [   81.479967] CPU 0: bash:3154 register_ftrace_function+0x5f/0x66 <ffffffff80337a4d> <-- _spin_unlock_irqrestore+0xe/0x5a <ffffffff8048cc8f>
      [   81.479967] CPU 0: bash:3154 _spin_unlock_irqrestore+0x3e/0x5a <ffffffff8048ccbf> <-- sub_preempt_count+0xc/0x7a <ffffffff80233d7b>
      [   81.479968] CPU 0: bash:3154 sub_preempt_count+0x30/0x7a <ffffffff80233d9f> <-- in_lock_functions+0x9/0x24 <ffffffff8025a75d>
      [   81.479968] CPU 0: bash:3154 vfs_write+0x11d/0x155 <ffffffff8029a043> <-- dnotify_parent+0x12/0x78 <ffffffff802d54fb>
      [   81.479968] CPU 0: bash:3154 dnotify_parent+0x2d/0x78 <ffffffff802d5516> <-- _spin_lock+0xe/0x70 <ffffffff8048c910>
      [   81.479969] CPU 0: bash:3154 _spin_lock+0x1b/0x70 <ffffffff8048c91d> <-- add_preempt_count+0xe/0x77 <ffffffff80233df7>
      [   81.479969] CPU 0: bash:3154 add_preempt_count+0x3e/0x77 <ffffffff80233e27> <-- in_lock_functions+0x9/0x24 <ffffffff8025a75d>
      
       echo "symonly" > /debugfs/tracing/iter_ctrl
      
      tracer:
      [   81.479913] CPU 0: bash:3154 register_ftrace_function+0x5f/0x66 <-- _spin_unlock_irqrestore+0xe/0x5a
      [   81.479913] CPU 0: bash:3154 _spin_unlock_irqrestore+0x3e/0x5a <-- sub_preempt_count+0xc/0x7a
      [   81.479913] CPU 0: bash:3154 sub_preempt_count+0x30/0x7a <-- in_lock_functions+0x9/0x24
      [   81.479914] CPU 0: bash:3154 vfs_write+0x11d/0x155 <-- dnotify_parent+0x12/0x78
      [   81.479914] CPU 0: bash:3154 dnotify_parent+0x2d/0x78 <-- _spin_lock+0xe/0x70
      [   81.479914] CPU 0: bash:3154 _spin_lock+0x1b/0x70 <-- add_preempt_count+0xe/0x77
      [   81.479914] CPU 0: bash:3154 add_preempt_count+0x3e/0x77 <-- in_lock_functions+0x9/0x24
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: latency tracer infrastructure · bc0c38d1
      Steven Rostedt authored
      This patch adds the latency tracer infrastructure. This patch
      does not add anything that will select and turn it on, but will
      be used by later patches.
      
      If it were to be compiled, it would add the following files
      to the debugfs:
      
       The root tracing directory:
      
        /debugfs/tracing/
      
      This patch also adds the following files:
      
        available_tracers
           list of available tracers. Currently no tracers are
           available. Looking into this file only shows
           "none" which is used to unregister all tracers.
      
        current_tracer
           The tracer that is currently active. Empty on start up.
           To switch to a tracer simply echo one of the tracers that
           are listed in available_tracers:
      
         example: (used with later patches)
      
            echo function > /debugfs/tracing/current_tracer
      
           To disable the tracer (i.e. switch back to the "none" tracer):
      
             echo none > /debugfs/tracing/current_tracer
      
        tracing_enabled
           echoing "1" into this file starts the ftrace function tracing
            (if sysctl kernel.ftrace_enabled=1)
           echoing "0" turns it off.
      
        latency_trace
            This file is readonly and holds the result of the trace.
      
        trace
            This file outputs an easier to read version of the trace.
      
        iter_ctrl
            Controls the way the output of traces looks.
            So far there are two controls:
              echoing in "symonly" will only show the kallsyms symbols
                  without the addresses (if kallsyms was configured)
              echoing in "verbose" will change the output to show
                  a lot more data, but it is not very easy for humans
                  to read.
              echoing in "nosymonly" turns off symonly.
              echoing in "noverbose" turns off verbose.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add basic support for gcc profiler instrumentation · 16444a8a
      Arnaldo Carvalho de Melo authored
      If CONFIG_FTRACE is selected and /proc/sys/kernel/ftrace_enabled is
      set to a non-zero value, the ftrace routine will be called every time
      we enter a kernel function that is not marked with the "notrace"
      attribute.
      
      The ftrace routine will then call a registered function if a function
      happens to be registered.
      
      [ This code has been highly hacked by Steven Rostedt and Ingo Molnar,
        so don't blame Arnaldo for all of this ;-) ]
      
      Update:
        It is now possible to register more than one ftrace function.
        If only one ftrace function is registered, that will be the
        function that ftrace calls directly. If more than one function
        is registered, then ftrace will call a function that will loop
        through the functions to call.
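      
      A toy userspace model of that registration scheme (this is not the
      kernel code; the names are made up for illustration):
      
        #include <stddef.h>
        #include <stdio.h>
      
        typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);
      
        static ftrace_func_t funcs[8];
        static size_t nr_funcs;
        static ftrace_func_t dispatch;          /* what the mcount hook calls */
      
        static void call_all(unsigned long ip, unsigned long parent_ip)
        {
                size_t i;
      
                for (i = 0; i < nr_funcs; i++)
                        funcs[i](ip, parent_ip);
        }
      
        static void register_func(ftrace_func_t fn)
        {
                funcs[nr_funcs++] = fn;
                /* one callback: call it directly; several: go through the loop */
                dispatch = (nr_funcs == 1) ? fn : call_all;
        }
      
        static void show(unsigned long ip, unsigned long parent_ip)
        {
                printf("traced ip=%#lx called from %#lx\n", ip, parent_ip);
        }
      
        int main(void)
        {
                register_func(show);
                dispatch(0x1234, 0x5678);       /* what an instrumented entry does */
                return 0;
        }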
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: annotate core code that should not be traced · 6e766410
      Arnaldo Carvalho de Melo authored
      Mark with "notrace" functions in core code that should not be
      traced.  The "notrace" attribute will prevent gcc from adding
      a call to ftrace on the annotated funtions.
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86: add notrace annotations to vsyscall. · 23adec55
      Steven Rostedt authored
      Add the notrace annotations to the vsyscall functions - there we are
      not in kernel context yet, so the tracer function cannot (and must not)
      be called.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • tracing: add notrace to linkage.h · ffdc1a09
      Ingo Molnar authored
      notrace signals that a function should not be traced. Most of the
      time this is used by tracers to annotate code that cannot be
      traced - it's in a volatile state (such as in user vdso context
      or NMI context) or it's in the tracer internals.
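      
      Schematically, the annotation boils down to a gcc function attribute
      along these lines (shown as a sketch; see include/linux/linkage.h for
      the real definition):
      
        #define notrace __attribute__((no_instrument_function))
      
        static notrace void never_traced(void)
        {
                /* the compiler emits no profiling call for this function */
        }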
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add preempt_enable/disable notrace macros · 50282528
      Steven Rostedt authored
      The tracer may need to call preempt_enable and disable functions
      for timekeeping and such. The trace gets ugly when these
      functions show up in every trace. To make the output cleaner,
      this patch adds preempt_enable_notrace and preempt_disable_notrace
      to be used by tracer (and debugging) functions.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: make the task state char-string visible to all · 7c731e0a
      Steven Rostedt authored
      The tracer wants to be able to convert the state number
      into a user visible character. This patch pulls that conversion
      string out of the scheduler into the header. This way, if it were to
      ever change, other parts of the kernel will know.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • sched: add latency tracer callbacks to the scheduler · bd3bff9e
      Ingo Molnar authored
      add 3 lightweight callbacks to the tracer backend.
      
      zero impact if tracing is turned off.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 22 May, 2008 14 commits
  3. 21 May, 2008 1 commit