1. 15 Jan, 2015 5 commits
    • tracing: Fix enabling of syscall events on the command line · ce1039bd
      Steven Rostedt (Red Hat) authored
      Commit 5f893b26 "tracing: Move enabling tracepoints to just after
      rcu_init()" broke the enabling of system call events from the command
      line. The reason was that the enabling of command line trace events
      was moved to before PID 1 started, and the syscall tracepoints require
      that all tasks have the TIF_SYSCALL_TRACEPOINT flag set. The swapper
      task (pid 0) does not get that flag. Since the swapper task is the
      only task running this early in boot, no task has the flag set and
      the tracepoint is never reached.
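
      For background: enabling a syscall trace event registers the syscall
      tracepoint, and the registration callback walks the task list setting
      the flag on every task, roughly as sketched below (paraphrased from
      memory of syscall_regfunc() in kernel/tracepoint.c, not quoted from
      the tree):

       /* Sketch: the task-list walk starts after the swapper, so before
        * PID 1 exists there is nothing to flag and the syscall
        * tracepoints are never entered. */
       static void syscall_regfunc_sketch(void)
       {
               struct task_struct *g, *t;

               read_lock(&tasklist_lock);
               for_each_process_thread(g, t)
                       set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT);
               read_unlock(&tasklist_lock);
       }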
      
      Instead of setting the swapper task's flag (there should be no reason
      to do that), re-enable the trace events after the init thread (PID 1)
      has been started. This requires disabling all command line events and
      re-enabling them: just enabling them again will not reset the logic
      that sets the TIF_SYSCALL_TRACEPOINT flag, because the syscall
      tracepoint code thinks the flag was already set and won't try setting
      it again. For this reason, the events must first be disabled and then
      re-enabled.
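
      The shape of the fix, as a sketch (the two helpers below are
      hypothetical stand-ins for the event_trace code, not the functions
      added by this commit):

       /* Hypothetical sketch: tear the command line events down and set
        * them up a second time once PID 1 exists, so the syscall
        * tracepoint registration runs again and flags the init task. */
       static int __init event_trace_enable_again_sketch(void)
       {
               disable_cmdline_trace_events();  /* hypothetical helper */
               enable_cmdline_trace_events();   /* hypothetical helper */
               return 0;
       }
       early_initcall(event_trace_enable_again_sketch);

      (Early initcalls run from the init task, so by that point PID 1
      exists and can have the flag set.)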
      
      Link: http://lkml.kernel.org/r/1421188517-18312-1-git-send-email-mpe@ellerman.id.au
      Link: http://lkml.kernel.org/r/20150115040506.216066449@goodmis.org
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove extra call to init_ftrace_syscalls() · 83829b74
      Steven Rostedt (Red Hat) authored
      trace_init() calls init_ftrace_syscalls() and then calls trace_event_init()
      which also calls init_ftrace_syscalls(). It makes more sense to only
      call it from trace_event_init().
      
      Calling it twice wastes memory, as it allocates the syscall events
      twice and the first allocation is simply leaked.
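
      In other words (a sketch of the call sites, paraphrased from the
      commit message rather than copied from the tree):

       void __init trace_init(void)
       {
               /* ...ring buffer setup elided... */

               /* init_ftrace_syscalls();  -- redundant first call removed;
                *                             it leaked the first allocation */
               trace_event_init();          /* calls init_ftrace_syscalls() */
       }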
      
      Link: http://lkml.kernel.org/r/54AF53BD.5070303@huawei.com
      Link: http://lkml.kernel.org/r/20150115040505.930398632@goodmis.org
      Reported-by: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing · 237d28db
      Steven Rostedt (Red Hat) authored
      If the function graph tracer traces a jprobe callback, the system will
      crash. This can easily be demonstrated by compiling the jprobe
      sample module that is in the kernel tree, loading it and running the
      function graph tracer.
      
       # modprobe jprobe_example.ko
       # echo function_graph > /sys/kernel/debug/tracing/current_tracer
       # ls
      
      The first two commands set up a nice crash on the first fork that
      follows (the sample attaches a jprobe to do_fork, so running "ls" is
      enough to trigger one).
      
      The problem is caused by the jprobe_return() that all jprobe callbacks
      must end with. The way jprobes work is that a breakpoint is placed at
      the start of the function the jprobe is attached to (or ftrace is used
      if fentry is supported). The breakpoint handler (or ftrace callback)
      will copy the stack frame and change the ip address to return to the
      jprobe handler instead of the function. The jprobe handler must end
      with jprobe_return() which swaps the stack and does an int3 (breakpoint).
      This breakpoint handler will then put back the saved stack frame,
      simulate the instruction at the beginning of the function it added
      a breakpoint to, and then continue on.
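
      For reference, a jprobe callback has this shape (modeled on the
      in-tree samples/kprobes/jprobe_example.c from memory; the do_fork
      prototype and field names are approximate):

       #include <linux/kernel.h>
       #include <linux/module.h>
       #include <linux/kprobes.h>

       /* The handler runs in place of do_fork with its arguments, and it
        * must end with jprobe_return(), which traps back into the kprobes
        * code so the original function can run.  That trap is where this
        * bug bites when the handler itself is graph-traced. */
       static long jdo_fork(unsigned long clone_flags, unsigned long stack_start,
                            unsigned long stack_size, int __user *parent_tidptr,
                            int __user *child_tidptr)
       {
               pr_info("jprobe: do_fork(flags=0x%lx)\n", clone_flags);
               jprobe_return();        /* swap the stack back, int3 into kprobes */
               return 0;               /* never reached */
       }

       static struct jprobe my_jprobe = {
               .entry          = jdo_fork,
               .kp.symbol_name = "do_fork",
       };

       static int __init jprobe_init(void)
       {
               return register_jprobe(&my_jprobe);
       }

       static void __exit jprobe_exit(void)
       {
               unregister_jprobe(&my_jprobe);
       }

       module_init(jprobe_init);
       module_exit(jprobe_exit);
       MODULE_LICENSE("GPL");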
      
      For function graph tracing to work, it hijacks the return address on
      the stack frame and replaces it with a hook function that will trace
      the end of the call. This hook function then restores the original
      return address of the function call.
      
      If the function graph tracer traces the jprobe handler, the return
      hook for that handler is never called (jprobe_return() never returns
      normally), and its saved return address is used for the next traced
      function instead. This results in a kernel crash.
      
      To solve this, pause function graph tracing before the jprobe handler
      is called and unpause it before it returns back to the function it
      probed.
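
      Abbreviated sketch of where that lands in the x86 kprobes code (the
      handler bodies are heavily elided; pause_graph_tracing() and
      unpause_graph_tracing() are the ftrace helpers relied on here):

       int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
       {
               struct jprobe *jp = container_of(p, struct jprobe, kp);

               /* ... save registers and copy the stack frame (elided) ... */

               /* the jprobe handler never returns normally, so a return
                * hook planted on it by the graph tracer would fire on the
                * wrong frame later: keep the tracer paused while it runs */
               pause_graph_tracing();
               regs->ip = (unsigned long)(jp->entry);
               return 1;
       }

       int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
       {
               /* ... restore the saved stack frame and registers (elided) ... */
               unpause_graph_tracing();
               return 1;
       }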
      
      Some other updates:
      
      A local variable "saved_sp" now holds kcb->jprobe_saved_sp. This makes
      the code a bit cleaner and easier to understand (the various attempts
      to fix this bug required the change).
      
      Note, if fentry is being used, jprobes will change the ip address before
      the function graph tracer runs and it will not be able to trace the
      function that the jprobe is probing.
      
      Link: http://lkml.kernel.org/r/20150114154329.552437962@goodmis.org
      
      Cc: stable@vger.kernel.org # 2.6.30+
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Check both notrace and filter for old hash · 7485058e
      Steven Rostedt (Red Hat) authored
      Using just the filter hash to check for trampolines or regs is not
      enough when updating the code against the records that represent all
      functions. Both the filter hash and the notrace hash need to be
      checked.
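
      The intent of the check, sketched (ops_traces_ip() is a hypothetical
      wrapper for illustration; hash_contains_ip() is the ftrace helper
      that consults both hashes):

       /* A record belongs to an ops only if its ip is in the filter hash
        * (or the filter hash is empty, meaning "all functions") AND not
        * in the notrace hash.  Looking at the filter hash alone
        * misclassifies records that the ops actually ignores. */
       static bool ops_traces_ip(struct ftrace_ops *ops, unsigned long ip)
       {
               struct ftrace_ops_hash hash;

               hash.filter_hash  = ops->func_hash->filter_hash;
               hash.notrace_hash = ops->func_hash->notrace_hash;

               return hash_contains_ip(ip, &hash);
       }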
      
      To trigger this bug (using trace-cmd and perf):
      
       # perf probe -a do_fork
       # trace-cmd start -B foo -e probe
       # trace-cmd record -p function_graph -n do_fork sleep 1
      
      The trace-cmd record at the end clears the filter before it disables
      function_graph tracing. That causes the accounting of the ftrace
      function records to become incorrect and triggers an ftrace_bug().
      
      Link: http://lkml.kernel.org/r/20150114154329.358378039@goodmis.org
      
      Cc: stable@vger.kernel.org
      [ still need to switch old_hash_ops to old_ops_hash ]
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Fix updating of filters for shared global_ops filters · 8f86f837
      Steven Rostedt (Red Hat) authored
      As the set_ftrace_filter file affects both the function tracer and the
      function graph tracer, the ops that represent each share a single
      ftrace_ops_hash structure. This allows both to be updated when the
      filter files are updated.
      
      But if function graph tracing is enabled and the global_ops (function
      tracing) ops is not, it is possible for the filter to be changed
      without the update happening for the function graph ops. The changes
      then do not take effect, and an ftrace_bug may even occur because the
      trampoline accounting gets confused.
      
      The solution is this: if an ops uses the shared global_ops filter but
      is not itself enabled, check whether another ops that shares the
      global_ops filter is enabled. If so, the modification still needs to
      be carried out.
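
      Roughly what the updated logic looks like (paraphrased from memory of
      ftrace_ops_update_code() in kernel/trace/ftrace.c; names and details
      are approximate, not the exact diff):

       static void ftrace_ops_update_code(struct ftrace_ops *ops,
                                          struct ftrace_ops_hash *old_hash)
       {
               struct ftrace_ops *op;

               if (!ftrace_enabled)
                       return;

               if (ops->flags & FTRACE_OPS_FL_ENABLED) {
                       ftrace_run_modify_code(ops, FTRACE_UPDATE_CALLS, old_hash);
                       return;
               }

               /* this ops is disabled, but if it shares the global_ops
                * filter with an ops that is enabled, the update still has
                * to be pushed out through that enabled ops */
               if (ops->func_hash != &global_ops.local_hash)
                       return;

               do_for_each_ftrace_op(op, ftrace_ops_list) {
                       if (op->func_hash == &global_ops.local_hash &&
                           op->flags & FTRACE_OPS_FL_ENABLED) {
                               ftrace_run_modify_code(op, FTRACE_UPDATE_CALLS,
                                                      old_hash);
                               /* one enabled sharer is enough */
                               return;
                       }
               } while_for_each_ftrace_op(op);
       }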
      
      Link: http://lkml.kernel.org/r/20150114154329.055980438@goodmis.org
      
      Cc: stable@vger.kernel.org # 3.17+
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 06 Jan, 2015 1 commit
  3. 05 Jan, 2015 3 commits
  4. 04 Jan, 2015 4 commits
  5. 02 Jan, 2015 3 commits
  6. 31 Dec, 2014 10 commits
  7. 30 Dec, 2014 14 commits