  1. 21 Apr, 2017 1 commit
    • ftrace: Fix function pid filter on instances · a0a1e90f
      Namhyung Kim authored
      commit d879d0b8 upstream.
      
      When the function tracer has a pid filter, it adds a probe to sched_switch
      to track whether the current task can be ignored.  The probe checks the
      ftrace_ignore_pid from the current tr to filter tasks.  But it fails to
      delete the probe when an instance is removed, so it can cause a crash
      due to the stale tr pointer (use-after-free).
      
      This is easily reproducible with the following:
      
        # cd /sys/kernel/debug/tracing
        # mkdir instances/buggy
        # echo $$ > instances/buggy/set_ftrace_pid
        # rmdir instances/buggy
      
        ============================================================================
        BUG: KASAN: use-after-free in ftrace_filter_pid_sched_switch_probe+0x3d/0x90
        Read of size 8 by task kworker/0:1/17
        CPU: 0 PID: 17 Comm: kworker/0:1 Tainted: G    B           4.11.0-rc3  #198
        Call Trace:
         dump_stack+0x68/0x9f
         kasan_object_err+0x21/0x70
         kasan_report.part.1+0x22b/0x500
         ? ftrace_filter_pid_sched_switch_probe+0x3d/0x90
         kasan_report+0x25/0x30
         __asan_load8+0x5e/0x70
         ftrace_filter_pid_sched_switch_probe+0x3d/0x90
         ? fpid_start+0x130/0x130
         __schedule+0x571/0xce0
         ...
      
      To fix it, use ftrace_clear_pids() to unregister the probe.  As
      instance_rmdir() has already updated the ftrace code by that point, it
      can then free the filter safely.
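
      In rough form, the fix looks like the following hedged sketch (helper
      names such as trace_free_pid_list() and the tr->function_pids field
      follow my reading of the upstream patch and may differ in detail):

      	/* Sketch: tear down the pid filter before the trace_array
      	 * goes away, so the sched_switch probe can never see a
      	 * stale tr pointer.
      	 */
      	void ftrace_clear_pids(struct trace_array *tr)
      	{
      		struct trace_pid_list *pid_list;

      		mutex_lock(&ftrace_lock);

      		pid_list = rcu_dereference_protected(tr->function_pids,
      					lockdep_is_held(&ftrace_lock));
      		if (pid_list) {
      			unregister_trace_sched_switch(
      				ftrace_filter_pid_sched_switch_probe, tr);
      			rcu_assign_pointer(tr->function_pids, NULL);
      			/* wait for in-flight probe users to finish */
      			synchronize_sched();
      			trace_free_pid_list(pid_list);
      		}

      		mutex_unlock(&ftrace_lock);
      	}

      instance_rmdir() then calls ftrace_clear_pids(tr) before the
      trace_array is freed.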
      
      Link: http://lkml.kernel.org/r/20170417024430.21194-2-namhyung@kernel.org
      
      Fixes: 0c8916c3 ("tracing: Add rmdir to remove multibuffer instances")
      Cc: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 15 Mar, 2017 1 commit
    • fs: Better permission checking for submounts · ade784b0
      Eric W. Biederman authored
      commit 93faccbb upstream.
      
      To support unprivileged users mounting filesystems, two permission
      checks have to be performed: a test to see if the user is allowed to
      create a mount in the mount namespace, and a test to see if
      the user is allowed to access the specified filesystem.
      
      The automount case is special in that mounting the original filesystem
      grants permission to mount the sub-filesystems to any user who
      happens to stumble across their mountpoint and satisfies the
      ordinary filesystem permission checks.
      
      Attempting to handle the automount case by using override_creds
      almost works.  It preserves the idea that permission to mount
      the original filesystem is permission to mount the sub-filesystem.
      Unfortunately, using override_creds messes up the filesystem's
      ordinary permission checks.
      
      Solve this by being explicit that a mount is a submount by introducing
      vfs_submount, and using it where appropriate.
      
      vfs_submount uses a new internal mount flag, MS_SUBMOUNT, to let
      sget and friends know that a mount is a submount so they can take
      appropriate action.
      
      sget and sget_userns are modified to not perform any permission checks
      on submounts.
      
      follow_automount is modified to stop using override_creds, as that
      has proven problematic.
      
      do_mount is modified to always clear the new MS_SUBMOUNT flag so
      that we know userspace will never be able to specify it.
      
      autofs4 is modified to stop using current_real_cred, which was put in
      there to handle the previous version of submount permission checking.
      
      cifs is modified to pass the mountpoint all of the way down to vfs_submount.
      
      debugfs is modified to pass the mountpoint all of the way down to
      trace_automount by adding a new parameter.  To make this change easier,
      a new typedef, debugfs_automount_t, is introduced to capture the type of
      the debugfs automount function.
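
      As a hedged sketch (the user-namespace guard and the flag masking
      follow my reading of the upstream change and may differ in detail):

      	struct vfsmount *vfs_submount(const struct dentry *mountpoint,
      				      struct file_system_type *type,
      				      const char *name, void *data)
      	{
      		/* Until it is worked out how to pass the user
      		 * namespace through from the parent mount, don't
      		 * allow submounts for unprivileged mounts.
      		 */
      		if (mountpoint->d_sb->s_user_ns != &init_user_ns)
      			return ERR_PTR(-EPERM);

      		/* MS_SUBMOUNT lets sget() and friends skip the
      		 * mount permission checks for this mount.
      		 */
      		return vfs_kern_mount(type, MS_SUBMOUNT, name, data);
      	}

      	/* ... while do_mount() strips the internal flag from
      	 * anything userspace hands in:
      	 */
      	flags &= ~(MS_NOSEC | MS_BORN | MS_SUBMOUNT);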
      
      Fixes: 069d5ac9 ("autofs:  Fix automounts by using current_real_cred()->uid")
      Fixes: aeaa4a79 ("fs: Call d_automount with the filesystems creds")
      Reviewed-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Reviewed-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 25 Dec, 2016 1 commit
  4. 12 Dec, 2016 1 commit
  5. 09 Dec, 2016 1 commit
    • tracing: Replace kmap with copy_from_user() in trace_marker writing · 656c7f0d
      Steven Rostedt (Red Hat) authored
      Instead of using get_user_pages_fast() and kmap_atomic() when writing
      to the trace_marker file, just allocate enough space on the ring buffer
      directly, and write into it via copy_from_user().
      
      Writing into the trace_marker file used to allocate a temporary buffer
      to perform the copy_from_user(), as we didn't want to write into the
      ring buffer if the copy failed. But a trace_marker write is supposed
      to be extremely fast, and allocating memory causes other tracepoints to
      trigger, so Peter Zijlstra suggested using get_user_pages_fast() and
      kmap_atomic() to keep the user space pages in memory and read them
      directly. But Henrik Austad had issues with this because it required
      taking the mm->mmap_sem, causing long delays with the write.
      
      Instead, just allocate the space in the ring buffer and use
      copy_from_user() directly. If it faults, return -EFAULT and write
      "<faulted>" into the ring buffer.
      
      Link: http://lkml.kernel.org/r/20161208124018.72dd0f86@gandalf.local.home
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Henrik Austad <henrik@austad.us>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Updates: d696b58c "tracing: Do not allocate buffer for trace_marker"
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 01 Dec, 2016 1 commit
  7. 29 Nov, 2016 1 commit
  8. 24 Nov, 2016 1 commit
  9. 23 Nov, 2016 2 commits
  10. 22 Nov, 2016 1 commit
  11. 15 Nov, 2016 1 commit
  12. 14 Nov, 2016 1 commit
  13. 25 Sep, 2016 2 commits
  14. 12 Sep, 2016 1 commit
  15. 02 Sep, 2016 1 commit
    • tracing: Added hardware latency tracer · e7c15cd8
      Steven Rostedt (Red Hat) authored
      The hardware latency tracer has been in the PREEMPT_RT patch for some time.
      It is used to detect possible SMIs or any other hardware interruptions that
      the kernel is unaware of. Note that NMIs may also be detected; that is
      worth keeping in mind.
      
      The logic is pretty simple. It simply creates a thread that spins on a
      single CPU for a specified amount of time (width) within a periodic window
      (window). These numbers may be adjusted via their corresponding names in
      
         /sys/kernel/tracing/hwlat_detector/
      
      The defaults are window = 1000000 us (1 second)
                       width  =  500000 us (1/2 second)
      
      The loop consists of:
      
      	t1 = trace_clock_local();
      	t2 = trace_clock_local();
      
      Where trace_clock_local() is a variant of sched_clock().
      
      The difference t2 - t1 is recorded as the "inner" latency, and the
      difference t1 - prev_t2 is recorded as the "outer" latency. If either of
      these differences is greater than the time denoted in
      /sys/kernel/tracing/tracing_thresh, then it records the event.
      
      When this tracer is started and tracing_thresh is zero, the threshold
      is changed to the default of 10 us.
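
      In hedged pseudo-C, the sampling loop amounts to the following
      (record_sample() and the variable names are illustrative, not the
      upstream identifiers):

      	start = trace_clock_local();
      	do {
      		t1 = trace_clock_local();
      		t2 = trace_clock_local();

      		if (last_t2) {
      			/* Gap between the bottom of the previous
      			 * iteration and the top of this one: the
      			 * "outer" latency.
      			 */
      			outer_diff = t1 - last_t2;
      			if (outer_diff > thresh)
      				record_sample(outer_diff);
      		}
      		last_t2 = t2;

      		/* Gap between two back-to-back reads: the "inner"
      		 * latency. Nothing else ran in between, so any
      		 * delay was taken by hardware (an SMI, etc.).
      		 */
      		inner_diff = t2 - t1;
      		if (inner_diff > thresh)
      			record_sample(inner_diff);

      		total = t2 - start;
      	} while (total <= width);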
      
      The hwlat tracer in the PREEMPT_RT patch was originally written by
      Jon Masters. I have modified it quite a bit and turned it into a
      tracer.
      Based-on-code-by: Jon Masters <jcm@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  16. 23 Aug, 2016 1 commit
  17. 05 Jul, 2016 2 commits
  18. 23 Jun, 2016 1 commit
    • tracing: Skip more functions when doing stack tracing of events · be54f69c
      Steven Rostedt (Red Hat) authored
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [002] d..2  1982.525169: <stack trace>
       => save_stack_trace
       => __ftrace_trace_stack
       => trace_buffer_unlock_commit_regs
       => event_trigger_unlock_commit
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The above shows that we are seeing 6 functions before ever making it to the
      caller of the sched_switch event.
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  2146.335208: <stack trace>
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The stacktrace trigger isn't as bad, because it adds its own skip to the
      stack tracing, but it still shows two extra functions.
      
      One issue is that if the stacktrace passes its own "regs", then there
      should be no addition to the skip, as the regs will not include the
      functions being called. This was an issue that was fixed by commit
      7717c6be ("tracing: Fix stacktrace skip depth in
      trace_buffer_unlock_commit_regs()"), as adding the skip number for
      kprobes made the probes not have any stack at all.
      
      But since this is only an issue when regs is being used, a skip should be
      added when regs is NULL (a sketch of the check follows the traces below).
      Now we have:
      
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [000] d..2  1297.676333: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => rest_init
       => start_kernel
       => x86_64_start_reservations
       => x86_64_start_kernel
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  1370.759745: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      And kprobes are not touched.
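
      The resulting check, as a hedged sketch (the skip constant is my
      reading of the patch and may differ):

      	/*
      	 * If regs is not set, skip the internal callers between
      	 * here and the tracepoint: trace_buffer_unlock_commit_regs,
      	 * event_trigger_unlock_commit, trace_event_buffer_commit
      	 * and the trace_event_raw_event_* function. If regs is
      	 * supplied, as with kprobes, it already describes the
      	 * right frame, so nothing is skipped.
      	 */
      	ftrace_trace_stack(tr, buffer, flags, regs ? 0 : 4, pc, regs);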
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  19. 20 Jun, 2016 5 commits
  20. 03 May, 2016 1 commit
    • tracing: Use temp buffer when filtering events · 0fc1b09f
      Steven Rostedt (Red Hat) authored
      Filtering of events requires the data to be written to the ring buffer
      before it can be decided to filter or not. This is because the parameters of
      the filter are based on the result that is written to the ring buffer and
      not on the parameters that are passed into the trace functions.
      
      The ftrace ring buffer is optimized for writing into the ring buffer and
      committing. The discard procedure, used when filtering decides an event
      should be discarded, is much more heavyweight. Thus, using a temporary
      buffer when filtering events can speed things up drastically.
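
      As a hedged sketch of the reserve path (the flag and variable names
      approximate the patch; treat them as illustrative):

      	/* If this event file is filtered, hand out a per-cpu
      	 * scratch event first. Discarding it later is just a
      	 * counter decrement rather than a ring buffer discard.
      	 */
      	if ((trace_file->flags & EVENT_FILE_FL_FILTERED) &&
      	    (entry = this_cpu_read(trace_buffered_event))) {
      		int val = this_cpu_inc_return(trace_buffered_event_cnt);

      		if (val == 1) {	/* not nested: scratch event is free */
      			trace_event_setup(entry, type, flags, pc);
      			entry->array[0] = len;
      			return entry;
      		}
      		this_cpu_dec(trace_buffered_event_cnt);
      	}

      	/* Nested or unfiltered: reserve in the ring buffer itself */
      	entry = trace_buffer_lock_reserve(buffer, type, len, flags, pc);

      If the filter keeps the event, it is then copied from the scratch event
      into the ring buffer; that copy is the downside noted after the last
      run below.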
      
      Without a temp buffer we have:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.790706626 seconds time elapsed ( +-  0.71% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.566904059 seconds time elapsed ( +-  0.27% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.690598511 seconds time elapsed ( +-  0.19% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.707486364 seconds time elapsed ( +-  0.30% )
      
      The first run above is without any tracing, just to get a baseline
      figure. hackbench takes ~0.79 seconds to run on the system.
      
      The second run enables tracing all events where nothing is filtered. This
      increases the time by 100% and hackbench takes 1.57 seconds to run.
      
      The third run filters all events where the preempt count will equal "20"
      (which should never happen), thus all events are discarded. This takes
      1.69 seconds to run. This is 10% slower than just committing the events!
      
      The last run enables all events with a filter that commits every event,
      and this takes 1.70 seconds to run. The filtering overhead is
      approximately 10%. Thus, discarding and committing an event from the
      ring buffer may take about the same time.
      
      With this patch, the numbers change:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.778233033 seconds time elapsed ( +-  0.38% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.582102692 seconds time elapsed ( +-  0.28% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.309230710 seconds time elapsed ( +-  0.22% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.786001924 seconds time elapsed ( +-  0.20% )
      
      The first run is again the base with no tracing.
      
      The second run is all tracing with no filtering. It is a little slower, but
      that may be well within the noise.
      
      The third run shows that discarding all events only took 1.3 seconds. This
      is a speed up of 23%! The discard is much faster than even the commit.
      
      The one downside is shown in the last run. Events that are not discarded
      by the filter will take longer to add; this is due to the extra copy of
      the event.
      
      Cc: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  21. 29 Apr, 2016 5 commits
  22. 27 Apr, 2016 1 commit
  23. 26 Apr, 2016 2 commits
  24. 19 Apr, 2016 5 commits