- 21 Apr, 2017 1 commit
Namhyung Kim authored

commit d879d0b8 upstream.

When the function tracer has a pid filter, it adds a probe to sched_switch to track whether the current task can be ignored. The probe checks the ftrace_ignore_pid from the current tr to filter tasks. But it fails to delete the probe when an instance is removed, which can cause a crash due to the invalid tr pointer (use-after-free).

This is easily reproducible with the following:

  # cd /sys/kernel/debug/tracing
  # mkdir instances/buggy
  # echo $$ > instances/buggy/set_ftrace_pid
  # rmdir instances/buggy

  ============================================================================
  BUG: KASAN: use-after-free in ftrace_filter_pid_sched_switch_probe+0x3d/0x90
  Read of size 8 by task kworker/0:1/17
  CPU: 0 PID: 17 Comm: kworker/0:1 Tainted: G B 4.11.0-rc3 #198
  Call Trace:
   dump_stack+0x68/0x9f
   kasan_object_err+0x21/0x70
   kasan_report.part.1+0x22b/0x500
   ? ftrace_filter_pid_sched_switch_probe+0x3d/0x90
   kasan_report+0x25/0x30
   __asan_load8+0x5e/0x70
   ftrace_filter_pid_sched_switch_probe+0x3d/0x90
   ? fpid_start+0x130/0x130
   __schedule+0x571/0xce0
   ...

To fix it, use ftrace_clear_pids() to unregister the probe. As instance_rmdir() has already updated the ftrace code, it can just free the filter safely.

Link: http://lkml.kernel.org/r/20170417024430.21194-2-namhyung@kernel.org
Fixes: 0c8916c3 ("tracing: Add rmdir to remove multibuffer instances")
Cc: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

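Read as a sketch, the fix amounts to tearing the pid-filter probe down while the trace_array is still alive. ftrace_clear_pids() is named in the message above; the surrounding helper here is hypothetical:

  static void instance_teardown_example(struct trace_array *tr)
  {
      /*
       * Unregister the sched_switch probe and free the pid filter
       * while tr is still valid; leaving the probe registered past
       * this point is the use-after-free fixed above.
       */
      ftrace_clear_pids(tr);

      free_trace_array(tr);   /* hypothetical: the rest of rmdir teardown */
  }
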
- 15 Mar, 2017 1 commit
Eric W. Biederman authored

commit 93faccbb upstream.

To support unprivileged users mounting filesystems, two permission checks have to be performed: a test to see if the user is allowed to create a mount in the mount namespace, and a test to see if the user is allowed to access the specified filesystem.

The automount case is special in that mounting the original filesystem grants permission to mount the sub-filesystems, to any user who happens to stumble across their mountpoint and satisfies the ordinary filesystem permission checks.

Attempting to handle the automount case by using override_creds almost works. It preserves the idea that permission to mount the original filesystem is permission to mount the sub-filesystem. Unfortunately, using override_creds messes up the filesystem's ordinary permission checks.

Solve this by being explicit that a mount is a submount: introduce vfs_submount, and use it where appropriate. vfs_submount uses a new internal mount flag, MS_SUBMOUNT, to let sget and friends know that a mount is a submount so they can take appropriate action.

sget and sget_userns are modified to not perform any permission checks on submounts.

follow_automount is modified to stop using override_creds, as that has proven problematic.

do_mount is modified to always remove the new MS_SUBMOUNT flag, so that we know userspace will never be able to specify it.

autofs4 is modified to stop using current_real_cred, which was put in there to handle the previous version of submount permission checking.

cifs is modified to pass the mountpoint all of the way down to vfs_submount.

debugfs is modified to pass the mountpoint all of the way down to trace_automount by adding a new parameter. To make this change easier, a new typedef, debugfs_automount_t, is introduced to capture the type of the debugfs automount function.

Fixes: 069d5ac9 ("autofs: Fix automounts by using current_real_cred()->uid")
Fixes: aeaa4a79 ("fs: Call d_automount with the filesystems creds")
Reviewed-by: Trond Myklebust <trond.myklebust@primarydata.com>
Reviewed-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

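For orientation, the debugfs_automount_t typedef introduced here has the following shape in kernels that carry this commit (a reference sketch; the exact signature may vary across versions):

  /* From include/linux/debugfs.h (post-93faccbb kernels). */
  typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);

  /* Used when creating an automount point in debugfs: */
  struct dentry *debugfs_create_automount(const char *name,
                                          struct dentry *parent,
                                          debugfs_automount_t f,
                                          void *data);
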
- 25 Dec, 2016 1 commit
Thomas Gleixner authored

There is no point in having an extra type for extra confusion. u64 is unambiguous.

Conversion was done with the following coccinelle script:

  @rem@
  @@
  -typedef u64 cycle_t;

  @fix@
  typedef cycle_t;
  @@
  -cycle_t
  +u64

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: John Stultz <john.stultz@linaro.org>

- 12 Dec, 2016 1 commit
Pavankumar Kondeti authored

The 's' flag is supposed to indicate that a softirq is running. This can be detected by testing the preempt_count with SOFTIRQ_OFFSET. The current code tests the preempt_count with SOFTIRQ_MASK, which would be true even when softirqs are disabled but not serving a softirq.

Link: http://lkml.kernel.org/r/1481300417-3564-1-git-send-email-pkondeti@codeaurora.org
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

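A small standalone C sketch of the distinction, with the preempt_count bit layout copied from include/linux/preempt.h for illustration:

  #include <stdio.h>

  /* Mirrors include/linux/preempt.h: bits 8-15 are the softirq field. */
  #define SOFTIRQ_SHIFT  8
  #define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT)     /* 0x0100: serving a softirq */
  #define SOFTIRQ_MASK   (0xffUL << SOFTIRQ_SHIFT)  /* 0xff00: serving OR disabled */

  int main(void)
  {
      /* local_bh_disable() adds 2 * SOFTIRQ_OFFSET, so this models
       * "softirqs disabled but not serving one": */
      unsigned long pc = 2 * SOFTIRQ_OFFSET;

      printf("SOFTIRQ_MASK test:   %d (false positive)\n", !!(pc & SOFTIRQ_MASK));
      printf("SOFTIRQ_OFFSET test: %d (correct)\n", !!(pc & SOFTIRQ_OFFSET));
      return 0;
  }
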
- 09 Dec, 2016 1 commit
Steven Rostedt (Red Hat) authored

Instead of using get_user_pages_fast() and kmap_atomic() when writing to the trace_marker file, just allocate enough space on the ring buffer directly, and write into it via copy_from_user().

Writing into the trace_marker file used to allocate a temporary buffer to perform the copy_from_user(), as we didn't want to write into the ring buffer if the copy failed. But as a trace_marker write is supposed to be extremely fast, and allocating memory causes other tracepoints to trigger, Peter Zijlstra suggested using get_user_pages_fast() and kmap_atomic() to keep the user space pages in memory and read them directly. But Henrik Austad had issues with this because it required taking the mm->mmap_sem and caused long delays with the write.

Instead, just allocate the space in the ring buffer and use copy_from_user() directly. If it faults, return -EFAULT and write "<faulted>" into the ring buffer.

Link: http://lkml.kernel.org/r/20161208124018.72dd0f86@gandalf.local.home
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Henrik Austad <henrik@austad.us>
Cc: Peter Zijlstra <peterz@infradead.org>
Updates: d696b58c "tracing: Do not allocate buffer for trace_marker"
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

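A hedged sketch of the reserve-then-copy pattern described above. The ring-buffer calls are real kernel APIs of this era; the function name and glue are illustrative:

  #define FAULTED_STR "<faulted>"   /* written in place of data we couldn't copy */

  static ssize_t example_marker_write(struct ring_buffer *buffer,
                                      const char __user *ubuf, size_t cnt)
  {
      struct ring_buffer_event *event;
      size_t size = max(cnt, sizeof(FAULTED_STR));   /* room for the marker */
      char *entry;

      /* Reserve space in the ring buffer first; no temp allocation. */
      event = ring_buffer_lock_reserve(buffer, size);
      if (!event)
          return -EBADF;

      entry = ring_buffer_event_data(event);
      if (copy_from_user(entry, ubuf, cnt))
          /* Fault: keep the reserved slot, but mark it. */
          memcpy(entry, FAULTED_STR, sizeof(FAULTED_STR));

      ring_buffer_unlock_commit(buffer, event);
      return cnt;
  }
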
- 01 Dec, 2016 1 commit
Sebastian Andrzej Siewior authored

Install the callbacks via the state machine. The notifier in struct ring_buffer is replaced by the multi instance interface. Upon __ring_buffer_alloc() invocation, cpuhp_state_add_instance() will invoke trace_rb_cpu_prepare() on each CPU.

This callback may now fail. This means __ring_buffer_alloc() will fail and clean up (like previously), and during a CPU up event this failure will not allow the CPU to come up.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20161126231350.10321-7-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

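The cpuhp multi-instance pattern, sketched with the real hotplug API; the state constant and callback name follow the message, the error label is illustrative:

  /* One-time setup: register the state and its "CPU up" callback. */
  ret = cpuhp_setup_state_multi(CPUHP_TRACE_RB_PREPARE,
                                "trace/RB:prepare",
                                trace_rb_cpu_prepare,
                                NULL);                  /* no teardown */

  /* Per ring buffer, in __ring_buffer_alloc(): attach this instance.
   * This also runs trace_rb_cpu_prepare() for every online CPU, and
   * a failure there propagates back here. */
  ret = cpuhp_state_add_instance(CPUHP_TRACE_RB_PREPARE, &buffer->node);
  if (ret < 0)
      goto fail_free_buffer;   /* illustrative cleanup label */
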
- 29 Nov, 2016 1 commit
Joel Fernandes authored

Unlike the monotonic clock, the boot clock as a trace clock will account for time spent in suspend, which is useful for tracing suspend/resume. This uses the earlier introduced infrastructure for the fast boot clock.

Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Link: http://lkml.kernel.org/r/1480372524-15181-7-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 24 Nov, 2016 1 commit
Steven Rostedt (Red Hat) authored

The function __buffer_unlock_commit() is called in a few places outside of trace.c. But for the most part, it should really be inlined, as it is in the hot path of the trace_events. For the callers outside of trace.c, create a new function, trace_buffer_unlock_commit_nostack(), as the reason it was used was to avoid the stack tracing that trace_buffer_unlock_commit() could do.

Link: http://lkml.kernel.org/r/20161121183700.GW26852@two.firstfloor.org
Reported-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 23 Nov, 2016 2 commits
Steven Rostedt (Red Hat) authored

Currently, when tracepoint_printk is set (enabled by the "tp_printk" kernel command line), it causes trace events to print via printk(). This is a very dangerous operation, but is useful for debugging.

The issue is, it's seldom used, but it is always checked even if it's not enabled by the kernel command line. Instead of having this feature called by a branch against a variable, turn that variable into a static key, and this will remove the test and jump.

To simplify things, the functions output_printk() and trace_event_buffer_commit() were moved from trace_events.c to trace.c.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

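The static-key pattern, sketched with the jump-label API; the key name matches what the kernel uses for this feature, but treat the surrounding code as illustrative:

  #include <linux/jump_label.h>

  DEFINE_STATIC_KEY_FALSE(tracepoint_printk_key);

  /* Hot path: compiles to a no-op until the key is enabled, so the
   * disabled case costs no test-and-branch. */
  static void example_commit(struct trace_event_buffer *fbuffer)
  {
      if (static_branch_unlikely(&tracepoint_printk_key))
          output_printk(fbuffer);   /* function named in the message above */
  }

  /* Boot time: "tp_printk" on the command line flips the key once. */
  static int __init set_tp_printk(char *str)
  {
      static_branch_enable(&tracepoint_printk_key);
      return 1;
  }
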
Steven Rostedt (Red Hat) authored

As Andi Kleen pointed out in the Link below, the trace events have quite a bit of code execution. A lot of that happens to be calling functions, where some of them should simply be inlined. One of these functions happens to be trace_buffer_lock_reserve(), which is also global, but is used throughout the file it is defined in. Create a __trace_buffer_lock_reserve() that is always inlined so that the file can benefit from it.

Link: http://lkml.kernel.org/r/20161121183700.GW26852@two.firstfloor.org
Reported-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 22 Nov, 2016 1 commit
Chunyan Zhang authored

Currently, function traces can only be exported to the ring buffer. This adds a trace_export concept which can process traces and export them to a registered destination, in addition to the current one that outputs to ftrace - i.e. the ring buffer. This way, if we want function traces to be sent to other destinations rather than only the ring buffer, we just need to register a new trace_export and implement its own .write() function for writing traces to storage.

With this patch, only function tracing (trace type is TRACE_FN) is supported.

Link: http://lkml.kernel.org/r/1479715043-6534-2-git-send-email-zhang.chunyan@linaro.org
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

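A hedged sketch of registering a trace_export. The registration function is the one this patch introduces; the .write() signature shown matches the original interface but has changed in later kernels, and the backend call is hypothetical:

  #include <linux/trace.h>

  /* Illustrative destination: push raw trace data to some backing store. */
  static void my_export_write(const void *buf, unsigned int len)
  {
      my_storage_write(buf, len);   /* hypothetical backend */
  }

  static struct trace_export my_export = {
      .write = my_export_write,
  };

  static int __init my_export_init(void)
  {
      return register_ftrace_export(&my_export);
  }
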
- 15 Nov, 2016 1 commit
Steven Rostedt authored

A new file is created:

  /sys/kernel/debug/tracing/trace_marker_raw

This allows applications to create data structures and write the binary data directly into it, and then read the trace data out from trace_pipe_raw into the same type of data structure. This saves on converting numbers into ASCII, which would be required by trace_marker.

Suggested-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

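A minimal userspace sketch of writing a binary record; the struct layout is entirely up to the application, and matching it on the trace_pipe_raw side is the reader's job:

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  /* Application-defined record; the kernel stores the bytes verbatim. */
  struct my_mark {
      uint32_t id;
      uint64_t value;
  };

  int main(void)
  {
      struct my_mark m = { .id = 1, .value = 42 };
      int fd = open("/sys/kernel/debug/tracing/trace_marker_raw", O_WRONLY);

      if (fd < 0)
          return 1;
      write(fd, &m, sizeof(m));   /* one write = one trace entry */
      close(fd);
      return 0;
  }
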
- 14 Nov, 2016 1 commit
Masami Hiramatsu authored

Use glob_match() to support flexible glob wildcards (*, ?) and character classes ([) for ftrace. Since full glob matching is slower than the current partial matching routines (*pat, pat*, *pat*), this keeps those routines and just adds MATCH_GLOB for complex glob expressions.

e.g.
----
[root@localhost tracing]# echo 'sched*group' > set_ftrace_filter
[root@localhost tracing]# cat set_ftrace_filter
sched_free_group
sched_change_group
sched_create_group
sched_online_group
sched_destroy_group
sched_offline_group
[root@localhost tracing]# echo '[Ss]y[Ss]_*' > set_ftrace_filter
[root@localhost tracing]# head set_ftrace_filter
sys_arch_prctl
sys_rt_sigreturn
sys_ioperm
SyS_iopl
sys_modify_ldt
SyS_mmap
SyS_set_thread_area
SyS_get_thread_area
SyS_set_tid_address
sys_fork
----

Link: http://lkml.kernel.org/r/147566869501.29136.6462645009894738056.stgit@devbox
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

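glob_match() itself lives in lib/glob.c; a kernel-side sketch of how a caller checks a pattern (the wrapper function is illustrative):

  #include <linux/glob.h>

  /* glob_match(pattern, string) returns true on a full match and
   * supports *, ?, and [...] character classes. */
  static bool filter_example(const char *func_name)
  {
      return glob_match("sched*group", func_name);
  }
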
- 25 Sep, 2016 2 commits
Al Viro authored

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

Steven Rostedt (Red Hat) authored

The iter->seq can be reset outside the protection of the mutex. So can reading of user data. Move the mutex up to the beginning of the function.

Fixes: d7350c3f ("tracing/core: make the read callbacks reentrants")
Cc: stable@vger.kernel.org # 2.6.30+
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 12 Sep, 2016 1 commit
Steven Rostedt (Red Hat) authored

The hwlat tracer uses tr->max_latency, and if it's the only tracer enabled that uses it, the build will fail. Add max_latency and its file when the hwlat tracer is enabled.

Link: http://lkml.kernel.org/r/d6c3b7eb-ba95-1ffa-0453-464e1e24262a@infradead.org
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 02 Sep, 2016 1 commit
Steven Rostedt (Red Hat) authored

The hardware latency tracer has been in the PREEMPT_RT patch for some time. It is used to detect possible SMIs or any other hardware interruptions that the kernel is unaware of. Note, NMIs may also be detected, but that may be good to note as well.

The logic is pretty simple. It simply creates a thread that spins on a single CPU for a specified amount of time (width) within a periodic window (window). These numbers may be adjusted by their corresponding names in

  /sys/kernel/tracing/hwlat_detector/

The defaults are:

  window = 1000000 us (1 second)
  width  =  500000 us (1/2 second)

The loop consists of:

  t1 = trace_clock_local();
  t2 = trace_clock_local();

where trace_clock_local() is a variant of sched_clock(). The difference t2 - t1 is recorded as the "inner" timestamp, and t1 - prev_t2 is recorded as the "outer" timestamp. If either of these differences is greater than the time denoted in /sys/kernel/tracing/tracing_thresh, then it records the event.

When this tracer is started and tracing_thresh is zero, it changes to the default threshold of 10 us.

The hwlat tracer in the PREEMPT_RT patch was originally written by Jon Masters. I have modified it quite a bit and turned it into a tracer.

Based-on-code-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

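A hedged sketch of the sampling loop described above; trace_clock_local() is real, the record helper and loop bound are illustrative:

  /* Simplified hwlat-style sampling: any gap the CPU cannot account
   * for shows up as a jump between consecutive timestamps. */
  static void sample_window(u64 thresh, u64 width_end)
  {
      u64 t1, t2, prev_t2 = 0;

      do {
          t1 = trace_clock_local();
          t2 = trace_clock_local();

          if (prev_t2 && (t1 - prev_t2) > thresh)
              record_hwlat(t1 - prev_t2);   /* "outer": gap between iterations */
          if ((t2 - t1) > thresh)
              record_hwlat(t2 - t1);        /* "inner": gap within an iteration */

          prev_t2 = t2;
      } while (t2 < width_end);             /* spin until the width expires */
  }
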
- 23 Aug, 2016 1 commit
Masami Hiramatsu authored

Add README entries for kprobe-events and uprobe-events. This allows users to check what options are acceptable for the running kernel. E.g. perf tools can choose correct types for the kernel.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Naohiro Aota <naohiro.aota@hgst.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/147151069524.12957.12957179170304055028.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

- 05 Jul, 2016 2 commits
Wei Yongjun authored

Use for_each_set_bit() to simplify the code.

Link: http://lkml.kernel.org/r/1467645004-11169-1-git-send-email-weiyj_lk@163.com
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

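for_each_set_bit() is the generic bitops helper; a kernel-side sketch of the kind of simplification it enables (the wrapper function is illustrative):

  #include <linux/bitops.h>

  static void walk_bits(unsigned long *mask, unsigned int nbits)
  {
      unsigned int bit;

      /* Replaces an open-coded "for (i = 0; i < nbits; i++)
       * if (test_bit(i, mask)) ..." loop. */
      for_each_set_bit(bit, mask, nbits)
          pr_info("bit %u set\n", bit);
  }
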
Steven Rostedt (Red Hat) authored

Commit 345ddcc8 ("ftrace: Have set_ftrace_pid use the bitmap like events do") placed ftrace_init_tracefs() into the instance creation, and encapsulated the top level updating with an if conditional, as the top level only gets updated at boot up. Unfortunately, this triggers section mismatch errors, as the init functions are called from a function that can be called later, and the section mismatch logic is unaware of the if conditional that would prevent it from happening at run time.

To make everyone happy, create a separate ftrace_init_tracefs_toplevel() routine that only gets called by init functions, and this will be what calls other init functions for the toplevel directory.

Link: http://lkml.kernel.org/r/20160704102139.19cbc0d9@gandalf.local.home
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 345ddcc8 ("ftrace: Have set_ftrace_pid use the bitmap like events do")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 23 Jun, 2016 1 commit
Steven Rostedt (Red Hat) authored

  # echo 1 > options/stacktrace
  # echo 1 > events/sched/sched_switch/enable
  # cat trace
            <idle>-0     [002] d..2  1982.525169: <stack trace>
   => save_stack_trace
   => __ftrace_trace_stack
   => trace_buffer_unlock_commit_regs
   => event_trigger_unlock_commit
   => trace_event_buffer_commit
   => trace_event_raw_event_sched_switch
   => __schedule
   => schedule
   => schedule_preempt_disabled
   => cpu_startup_entry
   => start_secondary

The above shows that we are seeing 6 functions before ever making it to the caller of the sched_switch event.

  # echo stacktrace > events/sched/sched_switch/trigger
  # cat trace
            <idle>-0     [002] d..3  2146.335208: <stack trace>
   => trace_event_buffer_commit
   => trace_event_raw_event_sched_switch
   => __schedule
   => schedule
   => schedule_preempt_disabled
   => cpu_startup_entry
   => start_secondary

The stacktrace trigger isn't as bad, because it adds its own skip to the stacktracing, but it still has two extra events.

One issue is that if the stacktrace passes its own "regs", then there should be no addition to the skip, as the regs will not include the functions being called. This was an issue that was fixed by commit 7717c6be ("tracing: Fix stacktrace skip depth in trace_buffer_unlock_commit_regs()"), as adding the skip number for kprobes made the probes not have any stack at all.

But since this is only an issue when regs is being used, a skip should be added if regs is NULL. Now we have:

  # echo 1 > options/stacktrace
  # echo 1 > events/sched/sched_switch/enable
  # cat trace
            <idle>-0     [000] d..2  1297.676333: <stack trace>
   => __schedule
   => schedule
   => schedule_preempt_disabled
   => cpu_startup_entry
   => rest_init
   => start_kernel
   => x86_64_start_reservations
   => x86_64_start_kernel

  # echo stacktrace > events/sched/sched_switch/trigger
  # cat trace
            <idle>-0     [002] d..3  1370.759745: <stack trace>
   => __schedule
   => schedule
   => schedule_preempt_disabled
   => cpu_startup_entry
   => start_secondary

And kprobes are not touched.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 20 Jun, 2016 5 commits
Andy Lutomirski authored

Currently, the trace_printk code chooses which static buffer to use based on what type of atomic context (NMI, IRQ, etc) it's in. Simplify the code and make it more robust: simply count the nesting depth and choose a buffer based on the current nesting depth. The new code will only drop an event if we nest more than 4 deep, and the old code was guaranteed to malfunction if that happened.

Link: http://lkml.kernel.org/r/07ab03aecfba25fcce8f9a211b14c9c5e2865c58.1464289095.git.luto@kernel.org
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

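A hedged sketch of nesting-depth buffer selection; names and sizes here are illustrative, not the kernel's exact ones:

  #define BUF_NESTING 4
  #define BUF_SIZE    1024

  struct nested_bufs {
      char buf[BUF_NESTING][BUF_SIZE];
      int  nesting;   /* per-CPU in the real code */
  };

  /* Returns a buffer for the current nesting level, or NULL if we are
   * nested more than BUF_NESTING deep (the only case that drops data). */
  static char *get_buf(struct nested_bufs *b)
  {
      if (b->nesting >= BUF_NESTING)
          return NULL;
      return b->buf[b->nesting++];
  }

  static void put_buf(struct nested_bufs *b)
  {
      b->nesting--;
  }
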
Steven Rostedt (Red Hat) authored

Convert set_ftrace_pid to use the bitmap like set_event_pid does. This allows instances to use pid filtering as well, and will allow the function-fork option to set if the children of a traced function should be traced or not.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

The addition of PIDs into a pid_list via the write operation of set_event_pid is a bit complex. The same operation will be needed for function tracing pids. Move the code into its own generic function in trace.c, so that we can avoid duplication of this code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

To allow other aspects of ftrace to use the pid_list logic, we need to reuse the seq_file functions. Making the generic part into functions that can be called by other files will help in this regard.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt authored

As the filtered_pid functions are going to be used by the function tracer as well as trace_events, move the code into the generic trace.c file. The functions moved are:

  trace_find_filtered_pid()
  trace_ignore_this_task()
  trace_filter_add_remove_task()

Kernel doc text was also added.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 03 May, 2016 1 commit
Steven Rostedt (Red Hat) authored

Filtering of events requires the data to be written to the ring buffer before it can be decided to filter or not. This is because the parameters of the filter are based on the result that is written to the ring buffer and not on the parameters that are passed into the trace functions.

The ftrace ring buffer is optimized for writing into the ring buffer and committing. The discard procedure used when filtering decides the event should be discarded is much more heavy weight. Thus, using a temporary buffer when filtering events can speed things up drastically.

Without a temp buffer we have:

  # trace-cmd start -p nop
  # perf stat -r 10 hackbench 50
        0.790706626 seconds time elapsed ( +- 0.71% )

  # trace-cmd start -e all
  # perf stat -r 10 hackbench 50
        1.566904059 seconds time elapsed ( +- 0.27% )

  # trace-cmd start -e all -f 'common_preempt_count==20'
  # perf stat -r 10 hackbench 50
        1.690598511 seconds time elapsed ( +- 0.19% )

  # trace-cmd start -e all -f 'common_preempt_count!=20'
  # perf stat -r 10 hackbench 50
        1.707486364 seconds time elapsed ( +- 0.30% )

The first run above is without any tracing, just to get a base figure. hackbench takes ~0.79 seconds to run on the system.

The second run enables tracing all events where nothing is filtered. This increases the time by 100%, and hackbench takes 1.57 seconds to run.

The third run filters all events where the preempt count will equal "20" (this should never happen), thus all events are discarded. This takes 1.69 seconds to run. This is 10% slower than just committing the events!

The last run enables all events and filters where the filter will commit all events, and this takes 1.70 seconds to run. The filtering overhead is approximately 10%. Thus, the discard and commit of an event from the ring buffer may take about the same time.

With this patch, the numbers change:

  # trace-cmd start -p nop
  # perf stat -r 10 hackbench 50
        0.778233033 seconds time elapsed ( +- 0.38% )

  # trace-cmd start -e all
  # perf stat -r 10 hackbench 50
        1.582102692 seconds time elapsed ( +- 0.28% )

  # trace-cmd start -e all -f 'common_preempt_count==20'
  # perf stat -r 10 hackbench 50
        1.309230710 seconds time elapsed ( +- 0.22% )

  # trace-cmd start -e all -f 'common_preempt_count!=20'
  # perf stat -r 10 hackbench 50
        1.786001924 seconds time elapsed ( +- 0.20% )

The first run is again the base with no tracing. The second run is all tracing with no filtering. It is a little slower, but that may be well within the noise.

The third run shows that discarding all events only took 1.3 seconds. This is a speed up of 23%! The discard is much faster than even the commit.

The one downside is shown in the last run. Events that are not discarded by the filter will take longer to add; this is due to the extra copy of the event.

Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

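A hedged sketch of the temp-buffer flow this message describes; the decision structure is the point, and all names apart from the ring-buffer calls are illustrative:

  /* Sketch: write the candidate event to a cheap per-CPU scratch buffer,
   * run the filter on it, and only copy survivors into the ring buffer. */
  static void commit_filtered(struct ring_buffer *buffer,
                              void *scratch, unsigned int len,
                              bool (*filter_match)(const void *))
  {
      struct ring_buffer_event *event;

      if (!filter_match(scratch))
          return;   /* discard is now just "don't copy": very cheap */

      event = ring_buffer_lock_reserve(buffer, len);
      if (!event)
          return;
      memcpy(ring_buffer_event_data(event), scratch, len);   /* the extra copy */
      ring_buffer_unlock_commit(buffer, event);
  }
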
- 29 Apr, 2016 5 commits
Steven Rostedt (Red Hat) authored

trace_current_buffer_lock_reserve() has no more users. Remove it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

There's no real difference between trace_buffer_unlock_commit() and trace_buffer_unlock_commit_regs() except that the former passes NULL to ftrace_stack_trace() instead of regs. Have the former be a static inline of the latter, which passes NULL for regs.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

The function trace_current_buffer_discard_commit() has no callers; remove it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

The functions trace_buffer_unlock_commit() and the _regs() version are only used within the kernel/trace directory. Move them to the local header and remove the export as well.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

The function filter_check_discard() is small and only called by one user. Its code can be folded into that one caller, making the code a bit less complex.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 27 Apr, 2016 1 commit
Steven Rostedt (Red Hat) authored

Nothing outside of the tracing directory calls filter_check_discard() or check_filter_check_discard(). They should not be called by modules. Move their prototypes into the local tracing header and remove their EXPORT_SYMBOL() macros.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 26 Apr, 2016 2 commits
Wang Xiaoqiang authored

With the following code snippet:

  ...
  char buf[64];
  ...
  if (copy_from_user(&buf, ubuf, cnt))
  ...

even though the value of "&buf" equals "buf", there is no need to take the address of "buf" again. Use "buf" instead of "&buf".

Link: http://lkml.kernel.org/r/20160418152329.18b72bea@debian
Signed-off-by: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Steven Rostedt (Red Hat) authored

As the event-fork option requires doing work when enabled and disabled, it can not be passed down to created instances. The instance must clear this flag when it is created, and must clear it when it is removed. As more options may be created with this need, a macro, ZEROED_TRACE_FLAGS, is created that holds the flags that must not be inherited from the top level instance, and must be cleared on removal of instances.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

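A hedged sketch of the masking this implies; the flag bit value is assumed for illustration, and the real macro may cover more flags:

  /* Assumed bit for the event-fork option (illustrative value). */
  #define TRACE_ITER_EVENT_FORK  (1 << 20)

  /* Flags a new instance must not inherit, and that must be cleared
   * again when the instance is removed. */
  #define ZEROED_TRACE_FLAGS     (TRACE_ITER_EVENT_FORK)

  static void instance_init_flags(struct trace_array *tr)
  {
      /* Start from the top level's flags, minus the non-inheritable ones. */
      tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS;
  }
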
- 19 Apr, 2016 5 commits
Namhyung Kim authored

Allow users to have numeric fields displayed as log2 values, in case the value range is very wide, by appending '.log2' to field names. For example:

  # echo 'hist:key=bytes_req' > kmalloc/trigger
  # cat kmalloc/hist
  { bytes_req: 504 } hitcount: 1
  { bytes_req: 11 } hitcount: 1
  { bytes_req: 104 } hitcount: 1
  { bytes_req: 48 } hitcount: 1
  { bytes_req: 2048 } hitcount: 1
  { bytes_req: 4096 } hitcount: 1
  { bytes_req: 240 } hitcount: 1
  { bytes_req: 392 } hitcount: 1
  { bytes_req: 13 } hitcount: 1
  { bytes_req: 28 } hitcount: 1
  { bytes_req: 12 } hitcount: 1
  { bytes_req: 64 } hitcount: 2
  { bytes_req: 128 } hitcount: 2
  { bytes_req: 32 } hitcount: 2
  { bytes_req: 8 } hitcount: 11
  { bytes_req: 10 } hitcount: 13
  { bytes_req: 24 } hitcount: 25
  { bytes_req: 160 } hitcount: 29
  { bytes_req: 16 } hitcount: 33
  { bytes_req: 80 } hitcount: 36

When using the '.log2' modifier, the output looks like:

  # echo 'hist:key=bytes_req.log2' > kmalloc/trigger
  # cat kmalloc/hist
  { bytes_req: ~ 2^12 } hitcount: 1
  { bytes_req: ~ 2^11 } hitcount: 1
  { bytes_req: ~ 2^9 } hitcount: 2
  { bytes_req: ~ 2^6 } hitcount: 3
  { bytes_req: ~ 2^3 } hitcount: 13
  { bytes_req: ~ 2^5 } hitcount: 19
  { bytes_req: ~ 2^8 } hitcount: 49
  { bytes_req: ~ 2^7 } hitcount: 57
  { bytes_req: ~ 2^4 } hitcount: 74

Link: http://lkml.kernel.org/r/7ff396b246c6a881f46b979735fddf05a0d6c71a.1457029949.git.tom.zanussi@linux.intel.com
Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Tom Zanussi authored

Allow users to define 'named' hist triggers. All triggers created with the same 'name=xxx' option will update the same shared histogram data.

This expands the hist trigger syntax from this:

  # echo hist:keys=xxx ... [ if filter] > event/trigger

to this:

  # echo hist:name=xxx:keys=xxx ... [ if filter] > event/trigger

Named histograms must use a 'compatible' set of keys and values, which means each event added to a set of named triggers must have the same names and types. Reading the 'hist' file of any of the participating events will produce the same output as any other participating event, which is to be expected since they share the same data.

Link: http://lkml.kernel.org/r/1dbc84ee3322a75daaf5b3ef1d0cc0a2fb682fc7.1457029949.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Tom Zanussi authored

Allow users to define any number of hist triggers per trace event. Any number of hist triggers may be added for a given event, which may differ by key, value, or filter.

Reading the event's 'hist' file will display the output of all the hist triggers defined on an event, concatenated in the order they were defined.

Link: http://lkml.kernel.org/r/48a0c8dd34c344571de880fb35e211c6d9a28961.1457029949.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Tom Zanussi authored

Similar to the enable_event/disable_event triggers, these triggers enable and disable the aggregation of events into maps, rather than enabling and disabling their writing into the trace buffer. They can be used to automatically start and stop hist triggers based on a matching filter condition.

If there's a paused hist trigger on system:event, the following would start it when the filter condition was hit:

  # echo enable_hist:system:event [ if filter] > event/trigger

And the following would disable a running system:event hist trigger:

  # echo disable_hist:system:event [ if filter] > event/trigger

See Documentation/trace/events.txt for real examples.

Link: http://lkml.kernel.org/r/f812f086e52c8b7c8ad5443487375e03c96a601f.1457029949.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Tom Zanussi authored

It's often useful to be able to use a stacktrace as a hash key, for keeping a count of the number of times a particular call path resulted in a trace event, for instance. Add a special key named 'stacktrace' which can be used as a key in a 'keys=' param for this purpose:

  # echo hist:keys=stacktrace ... [ if filter] > event/trigger

Link: http://lkml.kernel.org/r/87515e90b3785232a874a12156174635a348edb1.1457029949.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
