1. 20 Dec, 2018 1 commit
    • x86/ftrace: Do not call function graph from dynamic trampolines · d2a68c4e
      Steven Rostedt (VMware) authored
      Since commit 79922b80 ("ftrace: Optimize function graph to be
      called directly"), dynamic trampolines should not be calling the
      function graph tracer at the end. If they do, it could cause the function
      graph tracer to trace functions that it filtered out.
      
      Right now it does not cause a problem because there's a test to check if
      the function graph tracer is attached to the same function as the
      function tracer, which for now is true. But the function graph tracer is
      undergoing changes that can make this no longer true, which will cause
      the function graph tracer to trace other functions.
      
       For example:
      
       # cd /sys/kernel/tracing/
       # echo do_IRQ > set_ftrace_filter
       # mkdir instances/foo
       # echo ip_rcv > instances/foo/set_ftrace_filter
       # echo function_graph > current_tracer
       # echo function > instances/foo/current_tracer
      
      With the above setup, the function graph tracer would trace both do_IRQ
      and ip_rcv if the current tests were to change.
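
      As a rough user-space model of the filtering problem (the callbacks and
      the trampoline below are invented for illustration, not the kernel's
      code): a dynamic trampoline is built for exactly one ftrace_ops, so it
      can skip per-ops filter checks, and tacking an unconditional graph call
      onto its end bypasses the graph tracer's own filter.

       #include <stdio.h>

       static void instance_function_tracer(const char *func)
       {
               printf("instance function tracer: %s\n", func);  /* filter: ip_rcv */
       }

       static void function_graph_tracer(const char *func)
       {
               printf("function graph tracer:    %s\n", func);  /* filter: do_IRQ */
       }

       /* Trampoline built only for the instance's function tracer on ip_rcv. */
       static void dynamic_trampoline(const char *func)
       {
               instance_function_tracer(func);
               function_graph_tracer(func);    /* the call this patch removes */
       }

       int main(void)
       {
               /* ip_rcv was only filtered by the instance's function tracer,
                * yet the graph tracer ends up seeing it too. */
               dynamic_trampoline("ip_rcv");
               return 0;
       }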
      
      As the current tests prevent this from being a problem, this code does
      not need to be backported. But it does make the code cleaner.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  2. 28 Nov, 2018 4 commits
    • sched, trace: Fix prev_state output in sched_switch tracepoint · 3054426d
      Pavankumar Kondeti authored
      Commit 3f5fe9fe ("sched/debug: Fix task state recording/printout")
      tried to fix the problem introduced by the earlier commit efb40f58
      ("sched/tracing: Fix trace_sched_switch task-state printing"). However,
      the prev_state output of the sched_switch tracepoint is still broken.
      
      task_state_index() uses fls(), which counts the LSB as bit 1. Left
      shifting 1 by that value therefore maps to the wrong task state bit.
      Fix this by decrementing the value returned by __get_task_state()
      before shifting.
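
      A minimal user-space sketch of the off-by-one (toy_fls() stands in for
      the kernel's fls(); the state value is made up for the illustration):

       #include <stdio.h>

       /* Toy 1-based "find last set", like the kernel's fls(): toy_fls(0x4) == 3. */
       static int toy_fls(unsigned int x)
       {
               int pos = 0;

               while (x) {
                       pos++;
                       x >>= 1;
               }
               return pos;
       }

       int main(void)
       {
               unsigned int prev_state = 0x4;     /* some task-state bit */
               int idx = toy_fls(prev_state);     /* 3, because the LSB counts as 1 */

               printf("broken: 0x%x\n", 1U << idx);        /* 0x8 -- wrong bit */
               printf("fixed : 0x%x\n", 1U << (idx - 1));  /* 0x4 -- matches prev_state */
               return 0;
       }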
      
      Link: http://lkml.kernel.org/r/1540882473-1103-1-git-send-email-pkondeti@codeaurora.org
      
      Cc: stable@vger.kernel.org
      Fixes: 3f5fe9fe ("sched/debug: Fix task state recording/printout")
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • function_graph: Have profiler use curr_ret_stack and not depth · b1b35f2e
      Steven Rostedt (VMware) authored
      The profiler uses trace->depth to find its entry on the ret_stack, but the
      depth may not match where the entry actually is (if an interrupt preempts
      the profiler to process another function, the depth and curr_ret_stack
      will differ).
      
      Have it use curr_ret_stack as the index to find its ret_stack entry
      instead of the depth variable, as the two are no longer guaranteed to
      match.
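
      A toy model of the lookup change (the types and field names below are
      simplified stand-ins, not the kernel's structures):

       #include <stdio.h>

       #define RET_STACK_DEPTH 50

       struct toy_ret_stack {
               unsigned long long subtime;             /* profiler scratch data */
       };

       struct toy_task {
               int curr_ret_stack;                     /* top of the ret_stack */
               int depth;                              /* call depth, can lag behind */
               struct toy_ret_stack ret_stack[RET_STACK_DEPTH];
       };

       /* Index by curr_ret_stack (was: by depth), since an interrupt can push
        * extra entries and make the two diverge. */
       static struct toy_ret_stack *profile_entry(struct toy_task *t)
       {
               return &t->ret_stack[t->curr_ret_stack];
       }

       int main(void)
       {
               struct toy_task t = { .curr_ret_stack = 3, .depth = 2 };

               profile_entry(&t)->subtime = 123;
               printf("subtime stored at index %d: %llu\n",
                      t.curr_ret_stack, t.ret_stack[t.curr_ret_stack].subtime);
               return 0;
       }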
      
      Cc: stable@kernel.org
      Fixes: 03274a3f ("tracing/fgraph: Adjust fgraph depth before calling trace return callback")
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • function_graph: Reverse the order of pushing the ret_stack and the callback · 7c6ea35e
      Steven Rostedt (VMware) authored
      The function graph profiler uses the ret_stack to store the "subtime",
      which is reused by nested functions and again on return. But the current
      logic calls the profiler callback before the ret_stack is updated, so it
      modifies a ret_stack entry that is only allocated later (it is only by
      luck that the "subtime" is not touched when the entry is allocated).
      
      This could also cause a crash if we are at the end of the ret_stack when
      this happens.
      
      By reversing the order, allocating the ret_stack entry before calling the
      callbacks attached to the traced function, the ret_stack entry is no
      longer used before it is allocated.
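
      A toy user-space model of the corrected entry ordering (names and sizes
      are invented for the illustration):

       #include <stdbool.h>
       #include <stdio.h>

       #define STACK_SIZE 4

       static int curr_ret_stack = -1;
       static unsigned long long subtime[STACK_SIZE];

       /* Allocate the next ret_stack entry, failing cleanly when it is full. */
       static bool push_ret_stack(void)
       {
               if (curr_ret_stack + 1 >= STACK_SIZE)
                       return false;
               curr_ret_stack++;
               return true;
       }

       /* Entry callback: writes to the entry push_ret_stack() just allocated. */
       static void profile_graph_entry(void)
       {
               subtime[curr_ret_stack] = 0;
       }

       static bool trace_function_entry(void)
       {
               /* The old order called profile_graph_entry() first, touching an
                * entry that was not allocated yet (and possibly past the end). */
               if (!push_ret_stack())
                       return false;
               profile_graph_entry();
               return true;
       }

       int main(void)
       {
               for (int i = 0; i < 6; i++)
                       printf("call %d traced: %s\n", i,
                              trace_function_entry() ? "yes" : "no (ret_stack full)");
               return 0;
       }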
      
      Cc: stable@kernel.org
      Fixes: 03274a3f ("tracing/fgraph: Adjust fgraph depth before calling trace return callback")
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • function_graph: Move return callback before update of curr_ret_stack · 552701dd
      Steven Rostedt (VMware) authored
      In the past, curr_ret_stack served two purposes: denoting the depth of the
      call graph, and keeping track of where on the ret_stack the data in use is
      stored. Although the two are related, there are two cases where they need
      to be handled differently.
      
      One case is keeping the ret_stack data from being corrupted by an incoming
      interrupt overwriting data that is still in use. The other is simply
      knowing the current depth of the stack.
      
      The function profiler uses the ret_stack to save a "subtime" variable that
      is part of the data on the ret_stack. If curr_ret_stack is modified too
      early, then this variable can be corrupted.
      
      The "max_depth" option, when set to 1, will record the first functions going
      into the kernel. To see all top functions (when dealing with timings), the
      depth variable needs to be lowered before calling the return hook. But by
      lowering the curr_ret_stack, it makes the data on the ret_stack still being
      used by the return hook susceptible to being overwritten.
      
      Now that there are two variables to handle the two cases (curr_ret_stack
      and curr_ret_depth), their updates can be moved to the locations where
      each case is handled correctly.
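
      A toy sketch of the resulting return-path ordering (names simplified;
      not the kernel's actual code): the depth drops before the return hook
      runs, while the ret_stack entry is released only after the hook is done
      reading it.

       #include <stdio.h>

       struct toy_ret_stack {
               unsigned long long subtime;
       };

       static struct toy_ret_stack ret_stack[50];
       static int curr_ret_stack;              /* protects live ret_stack entries */
       static int curr_ret_depth;              /* pure call-graph depth */

       static void return_hook(struct toy_ret_stack *entry, int depth)
       {
               printf("return at depth %d, subtime %llu\n", depth, entry->subtime);
       }

       static void toy_return_to_handler(void)
       {
               struct toy_ret_stack *entry = &ret_stack[curr_ret_stack];

               curr_ret_depth--;               /* depth drops before the hook ... */
               return_hook(entry, curr_ret_depth);
               curr_ret_stack--;               /* ... the entry is released after */
       }

       int main(void)
       {
               curr_ret_stack = 0;
               curr_ret_depth = 1;
               ret_stack[0].subtime = 42;
               toy_return_to_handler();
               return 0;
       }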
      
      Cc: stable@kernel.org
      Fixes: 03274a3f ("tracing/fgraph: Adjust fgraph depth before calling trace return callback")
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>