Commit 483e3cd6 authored by Linus Torvalds


Merge branch 'tracing-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'tracing-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (105 commits)
  ring-buffer: only enable ring_buffer_swap_cpu when needed
  ring-buffer: check for swapped buffers in start of committing
  tracing: report error in trace if we fail to swap latency buffer
  tracing: add trace_array_printk for internal tracers to use
  tracing: pass around ring buffer instead of tracer
  tracing: make tracing_reset safe for external use
  tracing: use timestamp to determine start of latency traces
  tracing: Remove mentioning of legacy latency_trace file from documentation
  tracing/filters: Defer pred allocation, fix memory leak
  tracing: remove users of tracing_reset
  tracing: disable buffers and synchronize_sched before resetting
  tracing: disable update max tracer while reading trace
  tracing: print out start and stop in latency traces
  ring-buffer: disable all cpu buffers when one finds a problem
  ring-buffer: do not count discarded events
  ring-buffer: remove ring_buffer_event_discard
  ring-buffer: fix ring_buffer_read crossing pages
  ring-buffer: remove unnecessary cpu_relax
  ring-buffer: do not swap buffers during a commit
  ring-buffer: do not reset while in a commit
  ...
parents 774a694f d28daf92
@@ -2480,6 +2480,11 @@ and is between 256 and 4096 characters. It is defined in the file

 	trace_buf_size=nn[KMG]
 			[FTRACE] will set tracing buffer size.

+	trace_event=[event-list]
+			[FTRACE] Set and start specified trace events in order
+			to facilitate early boot debugging.
+			See also Documentation/trace/events.txt
+
 	trix=		[HW,OSS] MediaTrix AudioTrix Pro
 			Format:
 			<io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>
...
@@ -83,6 +83,15 @@ When reading one of these enable files, there are four results:

  X - there is a mixture of events enabled and disabled
  ? - this file does not affect any event

+2.3 Boot option
+---------------
+
+In order to facilitate early boot debugging, use the boot option:
+
+	trace_event=[event-list]
+
+The format of this boot option is the same as described in section 2.1.
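A sketch of how this is used in practice (the event names here are
illustrative; any list in the set_event format of section 2.1 works),
appended to the kernel command line:

	trace_event=sched:sched_wakeup,sched:sched_switch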
 3. Defining an event-enabled tracepoint
 =======================================
...
@@ -85,26 +85,19 @@ of ftrace. Here is a list of some of the key files:

 	This file holds the output of the trace in a human
 	readable format (described below).

-  latency_trace:
-
-	This file shows the same trace but the information
-	is organized more to display possible latencies
-	in the system (described below).
-
   trace_pipe:

 	The output is the same as the "trace" file but this
 	file is meant to be streamed with live tracing.
-	Reads from this file will block until new data
-	is retrieved. Unlike the "trace" and "latency_trace"
-	files, this file is a consumer. This means reading
-	from this file causes sequential reads to display
-	more current data. Once data is read from this
-	file, it is consumed, and will not be read
-	again with a sequential read. The "trace" and
-	"latency_trace" files are static, and if the
-	tracer is not adding more data, they will display
-	the same information every time they are read.
+	Reads from this file will block until new data is
+	retrieved. Unlike the "trace" file, this file is a
+	consumer. This means reading from this file causes
+	sequential reads to display more current data. Once
+	data is read from this file, it is consumed, and
+	will not be read again with a sequential read. The
+	"trace" file is static, and if the tracer is not
+	adding more data, it will display the same
+	information every time it is read.
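A quick illustration of the consuming behavior described above (run
from the tracing directory; the output is whatever the current tracer
produces):

	# echo 1 > tracing_enabled
	# cat trace_pipe
	(blocks until data arrives; each line is delivered once and then
	consumed, so a repeated read will not show it again)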
   trace_options:
@@ -117,10 +110,10 @@ of ftrace. Here is a list of some of the key files:

 	Some of the tracers record the max latency.
 	For example, the time interrupts are disabled.
 	This time is saved in this file. The max trace
-	will also be stored, and displayed by either
-	"trace" or "latency_trace". A new max trace will
-	only be recorded if the latency is greater than
-	the value in this file. (in microseconds)
+	will also be stored, and displayed by "trace".
+	A new max trace will only be recorded if the
+	latency is greater than the value in this
+	file. (in microseconds)

   buffer_size_kb:
@@ -210,7 +203,7 @@ Here is the list of current tracers that may be configured.

 	the trace with the longest max latency.
 	See tracing_max_latency. When a new max is recorded,
 	it replaces the old trace. It is best to view this
-	trace via the latency_trace file.
+	trace with the latency-format option enabled.

 "preemptoff"
@@ -307,8 +300,8 @@ the lowest priority thread (pid 0).

 Latency trace format
 --------------------

-For traces that display latency times, the latency_trace file
-gives somewhat more information to see why a latency happened.
+When the latency-format option is enabled, the trace file gives
+somewhat more information to see why a latency happened.
 Here is a typical trace.

 # tracer: irqsoff
@@ -380,9 +373,10 @@ explains which is which.

 The above is mostly meaningful for kernel developers.

-  time: This differs from the trace file output. The trace file output
-	includes an absolute timestamp. The timestamp used by the
-	latency_trace file is relative to the start of the trace.
+  time: When the latency-format option is enabled, the trace file
+	output includes a timestamp relative to the start of the
+	trace. This differs from the output when latency-format
+	is disabled, which includes an absolute timestamp.

   delay: This is just to help catch your eye a bit better. And
 	needs to be fixed to be only relative to the same CPU.
@@ -440,7 +434,8 @@ Here are the available options:

   sym-addr:
    bash-4000  [01]  1477.606694: simple_strtoul <c0339346>

-  verbose - This deals with the latency_trace file.
+  verbose - This deals with the trace file when the
+	    latency-format option is enabled.

     bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
     (+0.000ms): simple_strtoul (strict_strtoul)
@@ -472,7 +467,7 @@ Here are the available options:

 		the app is no longer running

 		The lookup is performed when you read
-		trace,trace_pipe,latency_trace. Example:
+		trace,trace_pipe. Example:

 		a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
@@ -481,6 +476,11 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]

 		every scheduling event. Will add overhead if
 		there are a lot of tasks running at once.

+  latency-format - This option changes the trace. When
+		   it is enabled, the trace displays
+		   additional information about the
+		   latencies, as described in "Latency
+		   trace format".
+
 sched_switch
 ------------
@@ -596,12 +596,13 @@ To reset the maximum, echo 0 into tracing_max_latency. Here is
 an example:

 # echo irqsoff > current_tracer
+# echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
-# cat latency_trace
+# cat trace
 # tracer: irqsoff
 #
 irqsoff latency trace v1.1.5 on 2.6.26
@@ -703,12 +704,13 @@ which preemption was disabled. The control of preemptoff tracer
 is much like the irqsoff tracer.

 # echo preemptoff > current_tracer
+# echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
-# cat latency_trace
+# cat trace
 # tracer: preemptoff
 #
 preemptoff latency trace v1.1.5 on 2.6.26-rc8
@@ -850,12 +852,13 @@ Again, using this trace is much like the irqsoff and preemptoff
 tracers.

 # echo preemptirqsoff > current_tracer
+# echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
-# cat latency_trace
+# cat trace
 # tracer: preemptirqsoff
 #
 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
@@ -1012,11 +1015,12 @@ Instead of performing an 'ls', we will run 'sleep 1' under
 'chrt' which changes the priority of the task.

 # echo wakeup > current_tracer
+# echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # chrt -f 5 sleep 1
 # echo 0 > tracing_enabled
-# cat latency_trace
+# cat trace
 # tracer: wakeup
 #
 wakeup latency trace v1.1.5 on 2.6.26-rc8
...
" Enable folding for ftrace function_graph traces.
"
" To use, :source this file while viewing a function_graph trace, or use vim's
" -S option to load from the command-line together with a trace. You can then
" use the usual vim fold commands, such as "za", to open and close nested
" functions. While closed, a fold will show the total time taken for a call,
" as would normally appear on the line with the closing brace. Folded
" functions will not include finish_task_switch(), so folding should remain
" relatively sane even through a context switch.
"
" Note that this will almost certainly only work well with a
" single-CPU trace (e.g. trace-cmd report --cpu 1).
function! FunctionGraphFoldExpr(lnum)
	let line = getline(a:lnum)
	if line[-1:] == '{'
		" Opening brace: a call starts here. finish_task_switch()
		" begins a brand-new top-level fold so folding stays sane
		" across a context switch.
		if line =~ 'finish_task_switch() {$'
			return '>1'
		endif
		return 'a1'
	elseif line[-1:] == '}'
		" Closing brace: the call ends, drop one fold level.
		return 's1'
	else
		" Leaf trace line: keep the current fold level.
		return '='
	endif
endfunction
function! FunctionGraphFoldText()
	" Show the fold's first line, but replace its duration column
	" with the closing line's duration (or a "task switch" marker).
	let s = split(getline(v:foldstart), '|', 1)
	if getline(v:foldend+1) =~ 'finish_task_switch() {$'
		let s[2] = ' task switch '
	else
		let e = split(getline(v:foldend), '|', 1)
		let s[2] = e[2]
	endif
	return join(s, '|')
endfunction
setlocal foldexpr=FunctionGraphFoldExpr(v:lnum)
setlocal foldtext=FunctionGraphFoldText()
setlocal foldcolumn=12
setlocal foldmethod=expr
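" A usage sketch (the file and trace names are illustrative, not part
" of this diff): save this script as fgfold.vim, capture a single-CPU
" function_graph trace, and load both together:
"
"   $ vim -S fgfold.vim trace.txt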
@@ -84,7 +84,7 @@ config S390
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FTRACE_MCOUNT_RECORD
-	select HAVE_FTRACE_SYSCALLS
+	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_DEFAULT_NO_SPIN_MUTEXES
...
@@ -900,7 +900,7 @@ CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
 CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
 CONFIG_HAVE_DYNAMIC_FTRACE=y
 CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
-CONFIG_HAVE_FTRACE_SYSCALLS=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
 CONFIG_TRACING_SUPPORT=y
 CONFIG_FTRACE=y
 # CONFIG_FUNCTION_TRACER is not set
...
@@ -92,7 +92,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing active */
 #define TIF_SECCOMP		10	/* secure computing */
-#define TIF_SYSCALL_FTRACE	11	/* ftrace syscall instrumentation */
+#define TIF_SYSCALL_TRACEPOINT	11	/* syscall tracepoint instrumentation */
 #define TIF_USEDFPU		16	/* FPU was used by this task this quantum (SMP) */
 #define TIF_POLLING_NRFLAG	17	/* true if poll_idle() is polling
 					   TIF_NEED_RESCHED */
@@ -111,7 +111,7 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
 #define _TIF_SYSCALL_AUDIT	(1<<TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1<<TIF_SECCOMP)
-#define _TIF_SYSCALL_FTRACE	(1<<TIF_SYSCALL_FTRACE)
+#define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
 #define _TIF_31BIT		(1<<TIF_31BIT)
...
@@ -54,7 +54,7 @@ _TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
 _TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
 		 _TIF_MCCK_PENDING)
 _TIF_SYSCALL = (_TIF_SYSCALL_TRACE>>8 | _TIF_SYSCALL_AUDIT>>8 | \
-		_TIF_SECCOMP>>8 | _TIF_SYSCALL_FTRACE>>8)
+		_TIF_SECCOMP>>8 | _TIF_SYSCALL_TRACEPOINT>>8)

 STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER
 STACK_SIZE  = 1 << STACK_SHIFT
...
@@ -57,7 +57,7 @@ _TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
 _TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
 		 _TIF_MCCK_PENDING)
 _TIF_SYSCALL = (_TIF_SYSCALL_TRACE>>8 | _TIF_SYSCALL_AUDIT>>8 | \
-		_TIF_SECCOMP>>8 | _TIF_SYSCALL_FTRACE>>8)
+		_TIF_SECCOMP>>8 | _TIF_SYSCALL_TRACEPOINT>>8)

 #define BASED(name) name-system_call(%r13)
...
@@ -220,6 +220,29 @@ struct syscall_metadata *syscall_nr_to_meta(int nr)
 	return syscalls_metadata[nr];
 }

+int syscall_name_to_nr(char *name)
+{
+	int i;
+
+	if (!syscalls_metadata)
+		return -1;
+
+	for (i = 0; i < NR_syscalls; i++)
+		if (syscalls_metadata[i])
+			if (!strcmp(syscalls_metadata[i]->name, name))
+				return i;
+	return -1;
+}
+
+void set_syscall_enter_id(int num, int id)
+{
+	syscalls_metadata[num]->enter_id = id;
+}
+
+void set_syscall_exit_id(int num, int id)
+{
+	syscalls_metadata[num]->exit_id = id;
+}
+
 static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
 {
 	struct syscall_metadata *start;
@@ -237,24 +260,19 @@ static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
 	return NULL;
 }

-void arch_init_ftrace_syscalls(void)
+static int __init arch_init_ftrace_syscalls(void)
 {
 	struct syscall_metadata *meta;
 	int i;
-	static atomic_t refs;
-
-	if (atomic_inc_return(&refs) != 1)
-		goto out;

 	syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) * NR_syscalls,
 				    GFP_KERNEL);
 	if (!syscalls_metadata)
-		goto out;
+		return -ENOMEM;

 	for (i = 0; i < NR_syscalls; i++) {
 		meta = find_syscall_meta((unsigned long)sys_call_table[i]);
 		syscalls_metadata[i] = meta;
 	}
-	return;
-out:
-	atomic_dec(&refs);
+	return 0;
 }
+arch_initcall(arch_init_ftrace_syscalls);
 #endif
@@ -51,6 +51,9 @@
 #include "compat_ptrace.h"
 #endif

+#define CREATE_TRACE_POINTS
+#include <trace/events/syscalls.h>
+
 enum s390_regset {
 	REGSET_GENERAL,
 	REGSET_FP,
@@ -661,8 +664,8 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
 		ret = -1;
 	}

-	if (unlikely(test_thread_flag(TIF_SYSCALL_FTRACE)))
-		ftrace_syscall_enter(regs);
+	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+		trace_sys_enter(regs, regs->gprs[2]);

 	if (unlikely(current->audit_context))
 		audit_syscall_entry(is_compat_task() ?
@@ -679,8 +682,8 @@ asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)
 		audit_syscall_exit(AUDITSC_RESULT(regs->gprs[2]),
 				   regs->gprs[2]);

-	if (unlikely(test_thread_flag(TIF_SYSCALL_FTRACE)))
-		ftrace_syscall_exit(regs);
+	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+		trace_sys_exit(regs, regs->gprs[2]);

 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall_exit(regs, 0);
...
@@ -38,7 +38,7 @@ config X86
 	select HAVE_FUNCTION_GRAPH_FP_TEST
 	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FTRACE_NMI_ENTER if DYNAMIC_FTRACE
-	select HAVE_FTRACE_SYSCALLS
+	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KVM
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_TRACEHOOK
...
@@ -2355,7 +2355,7 @@ CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
 CONFIG_HAVE_DYNAMIC_FTRACE=y
 CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
 CONFIG_HAVE_HW_BRANCH_TRACER=y
-CONFIG_HAVE_FTRACE_SYSCALLS=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
 CONFIG_RING_BUFFER=y
 CONFIG_TRACING=y
 CONFIG_TRACING_SUPPORT=y
...
@@ -2329,7 +2329,7 @@ CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
 CONFIG_HAVE_DYNAMIC_FTRACE=y
 CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
 CONFIG_HAVE_HW_BRANCH_TRACER=y
-CONFIG_HAVE_FTRACE_SYSCALLS=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
 CONFIG_RING_BUFFER=y
 CONFIG_TRACING=y
 CONFIG_TRACING_SUPPORT=y
...
@@ -28,13 +28,6 @@
 #endif

-/* FIXME: I don't want to stay hardcoded */
-#ifdef CONFIG_X86_64
-# define FTRACE_SYSCALL_MAX	296
-#else
-# define FTRACE_SYSCALL_MAX	333
-#endif
-
 #ifdef CONFIG_FUNCTION_TRACER
 #define MCOUNT_ADDR		((long)(mcount))
 #define MCOUNT_INSN_SIZE	5 /* sizeof mcount call */
...
@@ -95,7 +95,7 @@ struct thread_info {
 #define TIF_DEBUGCTLMSR		25	/* uses thread_struct.debugctlmsr */
 #define TIF_DS_AREA_MSR		26	/* uses thread_struct.ds_area_msr */
 #define TIF_LAZY_MMU_UPDATES	27	/* task is updating the mmu lazily */
-#define TIF_SYSCALL_FTRACE	28	/* for ftrace syscall instrumentation */
+#define TIF_SYSCALL_TRACEPOINT	28	/* syscall tracepoint instrumentation */

 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
@@ -118,17 +118,17 @@ struct thread_info {
 #define _TIF_DEBUGCTLMSR	(1 << TIF_DEBUGCTLMSR)
 #define _TIF_DS_AREA_MSR	(1 << TIF_DS_AREA_MSR)
 #define _TIF_LAZY_MMU_UPDATES	(1 << TIF_LAZY_MMU_UPDATES)
-#define _TIF_SYSCALL_FTRACE	(1 << TIF_SYSCALL_FTRACE)
+#define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)

 /* work to do in syscall_trace_enter() */
 #define _TIF_WORK_SYSCALL_ENTRY	\
-	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_EMU | _TIF_SYSCALL_FTRACE |	\
-	 _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | _TIF_SINGLESTEP)
+	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_EMU | _TIF_SYSCALL_AUDIT |	\
+	 _TIF_SECCOMP | _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT)

 /* work to do in syscall_trace_leave() */
 #define _TIF_WORK_SYSCALL_EXIT	\
 	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SINGLESTEP |	\
-	 _TIF_SYSCALL_FTRACE)
+	 _TIF_SYSCALL_TRACEPOINT)

 /* work to do on interrupt/exception return */
 #define _TIF_WORK_MASK	\
@@ -137,7 +137,8 @@ struct thread_info {
 	 _TIF_SINGLESTEP|_TIF_SECCOMP|_TIF_SYSCALL_EMU))

 /* work to do on any return to user space */
-#define _TIF_ALLWORK_MASK ((0x0000FFFF & ~_TIF_SECCOMP) | _TIF_SYSCALL_FTRACE)
+#define _TIF_ALLWORK_MASK						\
+	((0x0000FFFF & ~_TIF_SECCOMP) | _TIF_SYSCALL_TRACEPOINT)

 /* Only used for 64 bit */
 #define _TIF_DO_NOTIFY_MASK	\
...
@@ -345,6 +345,8 @@

 #ifdef __KERNEL__

+#define NR_syscalls 337
+
 #define __ARCH_WANT_IPC_PARSE_VERSION
 #define __ARCH_WANT_OLD_READDIR
 #define __ARCH_WANT_OLD_STAT
...
@@ -688,6 +688,12 @@ __SYSCALL(__NR_perf_counter_open, sys_perf_counter_open)
 #endif	/* __NO_STUBS */

 #ifdef __KERNEL__

+#ifndef COMPILE_OFFSETS
+#include <asm/asm-offsets.h>
+#define NR_syscalls (__NR_syscall_max + 1)
+#endif
+
 /*
  * "Conditional" syscalls
  *
...
@@ -3,6 +3,7 @@
  * This code generates raw asm output which is post-processed to extract
  * and format the required data.
  */
+#define COMPILE_OFFSETS

 #include <linux/crypto.h>
 #include <linux/sched.h>
...
@@ -417,10 +417,6 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;

-	/* Nmi's are currently unsupported */
-	if (unlikely(in_nmi()))
-		return;
-
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
@@ -498,37 +494,56 @@ static struct syscall_metadata *find_syscall_meta(unsigned long *syscall)

 struct syscall_metadata *syscall_nr_to_meta(int nr)
 {
-	if (!syscalls_metadata || nr >= FTRACE_SYSCALL_MAX || nr < 0)
+	if (!syscalls_metadata || nr >= NR_syscalls || nr < 0)
 		return NULL;

 	return syscalls_metadata[nr];
 }

-void arch_init_ftrace_syscalls(void)
+int syscall_name_to_nr(char *name)
+{
+	int i;
+
+	if (!syscalls_metadata)
+		return -1;
+
+	for (i = 0; i < NR_syscalls; i++) {
+		if (syscalls_metadata[i]) {
+			if (!strcmp(syscalls_metadata[i]->name, name))
+				return i;
+		}
+	}
+	return -1;
+}
+
+void set_syscall_enter_id(int num, int id)
+{
+	syscalls_metadata[num]->enter_id = id;
+}
+
+void set_syscall_exit_id(int num, int id)
+{
+	syscalls_metadata[num]->exit_id = id;
+}
+
+static int __init arch_init_ftrace_syscalls(void)
 {
 	int i;
 	struct syscall_metadata *meta;
 	unsigned long **psys_syscall_table = &sys_call_table;
-	static atomic_t refs;
-
-	if (atomic_inc_return(&refs) != 1)
-		goto end;

 	syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) *
-					FTRACE_SYSCALL_MAX, GFP_KERNEL);
+					NR_syscalls, GFP_KERNEL);
 	if (!syscalls_metadata) {
 		WARN_ON(1);
-		return;
+		return -ENOMEM;
 	}

-	for (i = 0; i < FTRACE_SYSCALL_MAX; i++) {
+	for (i = 0; i < NR_syscalls; i++) {
 		meta = find_syscall_meta(psys_syscall_table[i]);
 		syscalls_metadata[i] = meta;
 	}
-	return;
-
-	/* Paranoid: avoid overflow */
-end:
-	atomic_dec(&refs);
+	return 0;
 }
+arch_initcall(arch_init_ftrace_syscalls);
 #endif
@@ -35,10 +35,11 @@
 #include <asm/proto.h>
 #include <asm/ds.h>

-#include <trace/syscall.h>
-
 #include "tls.h"

+#define CREATE_TRACE_POINTS
+#include <trace/events/syscalls.h>
+
 enum x86_regset {
 	REGSET_GENERAL,
 	REGSET_FP,
@@ -1497,8 +1498,8 @@ asmregparm long syscall_trace_enter(struct pt_regs *regs)
 			tracehook_report_syscall_entry(regs))
 		ret = -1L;

-	if (unlikely(test_thread_flag(TIF_SYSCALL_FTRACE)))
-		ftrace_syscall_enter(regs);
+	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+		trace_sys_enter(regs, regs->orig_ax);

 	if (unlikely(current->audit_context)) {
 		if (IS_IA32)
@@ -1523,8 +1524,8 @@ asmregparm void syscall_trace_leave(struct pt_regs *regs)
 	if (unlikely(current->audit_context))
 		audit_syscall_exit(AUDITSC_RESULT(regs->ax), regs->ax);

-	if (unlikely(test_thread_flag(TIF_SYSCALL_FTRACE)))
-		ftrace_syscall_exit(regs);
+	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+		trace_sys_exit(regs, regs->ax);

 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall_exit(regs, 0);
...
@@ -18,9 +18,9 @@
 #include <asm/ia32.h>
 #include <asm/syscalls.h>

-asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
-		unsigned long prot, unsigned long flags,
-		unsigned long fd, unsigned long off)
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+		unsigned long, prot, unsigned long, flags,
+		unsigned long, fd, unsigned long, off)
 {
 	long error;
 	struct file *file;
@@ -226,7 +226,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 }

-asmlinkage long sys_uname(struct new_utsname __user *name)
+SYSCALL_DEFINE1(uname, struct new_utsname __user *, name)
 {
 	int err;
 	down_read(&uts_sem);
...
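This conversion matters beyond style: only syscalls declared through
SYSCALL_DEFINEx() pick up the SYSCALL_METADATA and sys_enter/sys_exit
event wiring added to <linux/syscalls.h> later in this series. A sketch
of the pattern for converting any remaining plain declaration (the
syscall name and body here are illustrative, not part of this diff):

	SYSCALL_DEFINE2(example, int, fd, unsigned long, arg)
	{
		/* body unchanged; the macro now also records the name,
		 * argument types and trace event ids for this syscall */
		return 0;
	}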
@@ -93,16 +93,22 @@ void tracing_generic_entry_update(struct trace_entry *entry,
 				  unsigned long flags,
 				  int pc);
 struct ring_buffer_event *
-trace_current_buffer_lock_reserve(int type, unsigned long len,
+trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
+				  int type, unsigned long len,
 				  unsigned long flags, int pc);
-void trace_current_buffer_unlock_commit(struct ring_buffer_event *event,
+void trace_current_buffer_unlock_commit(struct ring_buffer *buffer,
+					struct ring_buffer_event *event,
 					unsigned long flags, int pc);
-void trace_nowake_buffer_unlock_commit(struct ring_buffer_event *event,
+void trace_nowake_buffer_unlock_commit(struct ring_buffer *buffer,
+				       struct ring_buffer_event *event,
 				       unsigned long flags, int pc);
-void trace_current_buffer_discard_commit(struct ring_buffer_event *event);
+void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
+					 struct ring_buffer_event *event);

 void tracing_record_cmdline(struct task_struct *tsk);

+struct event_filter;
+
 struct ftrace_event_call {
 	struct list_head	list;
 	char			*name;
@@ -110,16 +116,18 @@ struct ftrace_event_call {
 	struct dentry		*dir;
 	struct trace_event	*event;
 	int			enabled;
-	int			(*regfunc)(void);
-	void			(*unregfunc)(void);
+	int			(*regfunc)(void *);
+	void			(*unregfunc)(void *);
 	int			id;
 	int			(*raw_init)(void);
-	int			(*show_format)(struct trace_seq *s);
-	int			(*define_fields)(void);
+	int			(*show_format)(struct ftrace_event_call *call,
+					       struct trace_seq *s);
+	int			(*define_fields)(struct ftrace_event_call *);
 	struct list_head	fields;
 	int			filter_active;
-	void			*filter;
+	struct event_filter	*filter;
 	void			*mod;
+	void			*data;

 	atomic_t		profile_count;
 	int			(*profile_enable)(struct ftrace_event_call *);
@@ -129,15 +137,25 @@ struct ftrace_event_call {
 #define MAX_FILTER_PRED		32
 #define MAX_FILTER_STR_VAL	128

+extern int init_preds(struct ftrace_event_call *call);
 extern void destroy_preds(struct ftrace_event_call *call);
 extern int filter_match_preds(struct ftrace_event_call *call, void *rec);
-extern int filter_current_check_discard(struct ftrace_event_call *call,
+extern int filter_current_check_discard(struct ring_buffer *buffer,
+					struct ftrace_event_call *call,
 					void *rec,
 					struct ring_buffer_event *event);

-extern int trace_define_field(struct ftrace_event_call *call, char *type,
-			      char *name, int offset, int size, int is_signed);
+enum {
+	FILTER_OTHER = 0,
+	FILTER_STATIC_STRING,
+	FILTER_DYN_STRING,
+	FILTER_PTR_STRING,
+};
+
+extern int trace_define_field(struct ftrace_event_call *call,
+			      const char *type, const char *name,
+			      int offset, int size, int is_signed,
+			      int filter_type);
+extern int trace_define_common_fields(struct ftrace_event_call *call);

 #define is_signed_type(type)	(((type)(-1)) < 0)
@@ -162,11 +180,4 @@ do {									\
 		__trace_printk(ip, fmt, ##args);			\
 } while (0)

-#define __common_field(type, item, is_signed)				\
-	ret = trace_define_field(event_call, #type, "common_" #item,	\
-				 offsetof(typeof(field.ent), item),	\
-				 sizeof(field.ent.item), is_signed);	\
-	if (ret)							\
-		return ret;
-
 #endif /* _LINUX_FTRACE_EVENT_H */
@@ -17,10 +17,12 @@
 #include <linux/moduleparam.h>
 #include <linux/marker.h>
 #include <linux/tracepoint.h>
-#include <asm/local.h>

+#include <asm/local.h>
 #include <asm/module.h>

+#include <trace/events/module.h>
+
 /* Not Yet Implemented */
 #define MODULE_SUPPORTED_DEVICE(name)
@@ -462,7 +464,10 @@ static inline local_t *__module_ref_addr(struct module *mod, int cpu)
 static inline void __module_get(struct module *module)
 {
 	if (module) {
-		local_inc(__module_ref_addr(module, get_cpu()));
+		unsigned int cpu = get_cpu();
+		local_inc(__module_ref_addr(module, cpu));
+		trace_module_get(module, _THIS_IP_,
+				 local_read(__module_ref_addr(module, cpu)));
 		put_cpu();
 	}
 }
@@ -473,8 +478,11 @@ static inline int try_module_get(struct module *module)
 	if (module) {
 		unsigned int cpu = get_cpu();

-		if (likely(module_is_live(module)))
+		if (likely(module_is_live(module))) {
 			local_inc(__module_ref_addr(module, cpu));
+			trace_module_get(module, _THIS_IP_,
+				local_read(__module_ref_addr(module, cpu)));
+		}
 		else
 			ret = 0;

 		put_cpu();
...
@@ -766,6 +766,8 @@ extern int sysctl_perf_counter_mlock;
 extern int sysctl_perf_counter_sample_rate;

 extern void perf_counter_init(void);
+extern void perf_tpcounter_event(int event_id, u64 addr, u64 count,
+				 void *record, int entry_size);

 #ifndef perf_misc_flags
 #define perf_misc_flags(regs)	(user_mode(regs) ? PERF_EVENT_MISC_USER : \
...
@@ -74,20 +74,6 @@ ring_buffer_event_time_delta(struct ring_buffer_event *event)
 	return event->time_delta;
 }

-/*
- * ring_buffer_event_discard can discard any event in the ring buffer.
- * it is up to the caller to protect against a reader from
- * consuming it or a writer from wrapping and replacing it.
- *
- * No external protection is needed if this is called before
- * the event is committed. But in that case it would be better to
- * use ring_buffer_discard_commit.
- *
- * Note, if an event that has not been committed is discarded
- * with ring_buffer_event_discard, it must still be committed.
- */
-void ring_buffer_event_discard(struct ring_buffer_event *event);
-
 /*
  * ring_buffer_discard_commit will remove an event that has not
  * been committed yet. If this is used, then ring_buffer_unlock_commit
@@ -154,8 +140,17 @@ unsigned long ring_buffer_size(struct ring_buffer *buffer);
 void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu);
 void ring_buffer_reset(struct ring_buffer *buffer);

+#ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
 int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
 			 struct ring_buffer *buffer_b, int cpu);
+#else
+static inline int
+ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
+		     struct ring_buffer *buffer_b, int cpu)
+{
+	return -ENODEV;
+}
+#endif
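Since ring_buffer_swap_cpu() now compiles away to a stub when
CONFIG_RING_BUFFER_ALLOW_SWAP is not set, callers share one code path
and are expected to check the return value; a minimal sketch of the
error handling (the buffer names are illustrative):

	int ret = ring_buffer_swap_cpu(max_buffer, live_buffer, cpu);
	if (ret == -ENODEV)
		/* swap support not compiled in; keep the live buffer */
		printk(KERN_WARNING "ring buffer swap unsupported\n");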
 int ring_buffer_empty(struct ring_buffer *buffer);
 int ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu);
@@ -170,7 +165,6 @@ unsigned long ring_buffer_overruns(struct ring_buffer *buffer);
 unsigned long ring_buffer_entries_cpu(struct ring_buffer *buffer, int cpu);
 unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu);
 unsigned long ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu);
-unsigned long ring_buffer_nmi_dropped_cpu(struct ring_buffer *buffer, int cpu);

 u64 ring_buffer_time_stamp(struct ring_buffer *buffer, int cpu);
 void ring_buffer_normalize_time_stamp(struct ring_buffer *buffer,
...
@@ -64,6 +64,7 @@ struct perf_counter_attr;
 #include <linux/sem.h>
 #include <asm/siginfo.h>
 #include <asm/signal.h>
+#include <linux/unistd.h>
 #include <linux/quota.h>
 #include <linux/key.h>
 #include <trace/syscall.h>
@@ -97,6 +98,53 @@ struct perf_counter_attr;
 #define __SC_TEST5(t5, a5, ...)	__SC_TEST(t5); __SC_TEST4(__VA_ARGS__)
 #define __SC_TEST6(t6, a6, ...)	__SC_TEST(t6); __SC_TEST5(__VA_ARGS__)
#ifdef CONFIG_EVENT_PROFILE
#define TRACE_SYS_ENTER_PROFILE(sname) \
static int prof_sysenter_enable_##sname(struct ftrace_event_call *event_call) \
{ \
int ret = 0; \
if (!atomic_inc_return(&event_enter_##sname.profile_count)) \
ret = reg_prof_syscall_enter("sys"#sname); \
return ret; \
} \
\
static void prof_sysenter_disable_##sname(struct ftrace_event_call *event_call)\
{ \
if (atomic_add_negative(-1, &event_enter_##sname.profile_count)) \
unreg_prof_syscall_enter("sys"#sname); \
}
#define TRACE_SYS_EXIT_PROFILE(sname) \
static int prof_sysexit_enable_##sname(struct ftrace_event_call *event_call) \
{ \
int ret = 0; \
if (!atomic_inc_return(&event_exit_##sname.profile_count)) \
ret = reg_prof_syscall_exit("sys"#sname); \
return ret; \
} \
\
static void prof_sysexit_disable_##sname(struct ftrace_event_call *event_call) \
{ \
if (atomic_add_negative(-1, &event_exit_##sname.profile_count)) \
unreg_prof_syscall_exit("sys"#sname); \
}
#define TRACE_SYS_ENTER_PROFILE_INIT(sname) \
.profile_count = ATOMIC_INIT(-1), \
.profile_enable = prof_sysenter_enable_##sname, \
.profile_disable = prof_sysenter_disable_##sname,
#define TRACE_SYS_EXIT_PROFILE_INIT(sname) \
.profile_count = ATOMIC_INIT(-1), \
.profile_enable = prof_sysexit_enable_##sname, \
.profile_disable = prof_sysexit_disable_##sname,
#else
#define TRACE_SYS_ENTER_PROFILE(sname)
#define TRACE_SYS_ENTER_PROFILE_INIT(sname)
#define TRACE_SYS_EXIT_PROFILE(sname)
#define TRACE_SYS_EXIT_PROFILE_INIT(sname)
#endif
 #ifdef CONFIG_FTRACE_SYSCALLS
 #define __SC_STR_ADECL1(t, a)		#a
 #define __SC_STR_ADECL2(t, a, ...)	#a, __SC_STR_ADECL1(__VA_ARGS__)
@@ -112,7 +160,81 @@ struct perf_counter_attr;
 #define __SC_STR_TDECL5(t, a, ...)	#t, __SC_STR_TDECL4(__VA_ARGS__)
 #define __SC_STR_TDECL6(t, a, ...)	#t, __SC_STR_TDECL5(__VA_ARGS__)
#define SYSCALL_TRACE_ENTER_EVENT(sname) \
static struct ftrace_event_call event_enter_##sname; \
struct trace_event enter_syscall_print_##sname = { \
.trace = print_syscall_enter, \
}; \
static int init_enter_##sname(void) \
{ \
int num, id; \
num = syscall_name_to_nr("sys"#sname); \
if (num < 0) \
return -ENOSYS; \
id = register_ftrace_event(&enter_syscall_print_##sname);\
if (!id) \
return -ENODEV; \
event_enter_##sname.id = id; \
set_syscall_enter_id(num, id); \
INIT_LIST_HEAD(&event_enter_##sname.fields); \
return 0; \
} \
TRACE_SYS_ENTER_PROFILE(sname); \
static struct ftrace_event_call __used \
__attribute__((__aligned__(4))) \
__attribute__((section("_ftrace_events"))) \
event_enter_##sname = { \
.name = "sys_enter"#sname, \
.system = "syscalls", \
.event = &event_syscall_enter, \
.raw_init = init_enter_##sname, \
.show_format = syscall_enter_format, \
.define_fields = syscall_enter_define_fields, \
.regfunc = reg_event_syscall_enter, \
.unregfunc = unreg_event_syscall_enter, \
.data = "sys"#sname, \
TRACE_SYS_ENTER_PROFILE_INIT(sname) \
}
#define SYSCALL_TRACE_EXIT_EVENT(sname) \
static struct ftrace_event_call event_exit_##sname; \
struct trace_event exit_syscall_print_##sname = { \
.trace = print_syscall_exit, \
}; \
static int init_exit_##sname(void) \
{ \
int num, id; \
num = syscall_name_to_nr("sys"#sname); \
if (num < 0) \
return -ENOSYS; \
id = register_ftrace_event(&exit_syscall_print_##sname);\
if (!id) \
return -ENODEV; \
event_exit_##sname.id = id; \
set_syscall_exit_id(num, id); \
INIT_LIST_HEAD(&event_exit_##sname.fields); \
return 0; \
} \
TRACE_SYS_EXIT_PROFILE(sname); \
static struct ftrace_event_call __used \
__attribute__((__aligned__(4))) \
__attribute__((section("_ftrace_events"))) \
event_exit_##sname = { \
.name = "sys_exit"#sname, \
.system = "syscalls", \
.event = &event_syscall_exit, \
.raw_init = init_exit_##sname, \
.show_format = syscall_exit_format, \
.define_fields = syscall_exit_define_fields, \
.regfunc = reg_event_syscall_exit, \
.unregfunc = unreg_event_syscall_exit, \
.data = "sys"#sname, \
TRACE_SYS_EXIT_PROFILE_INIT(sname) \
}
 #define SYSCALL_METADATA(sname, nb)				\
+	SYSCALL_TRACE_ENTER_EVENT(sname);			\
+	SYSCALL_TRACE_EXIT_EVENT(sname);			\
 	static const struct syscall_metadata __used		\
 	  __attribute__((__aligned__(4)))			\
 	  __attribute__((section("__syscalls_metadata")))	\
@@ -121,18 +243,23 @@ struct perf_counter_attr;
 		.nb_args	= nb,				\
 		.types		= types_##sname,		\
 		.args		= args_##sname,			\
-	}
+		.enter_event	= &event_enter_##sname,		\
+		.exit_event	= &event_exit_##sname,		\
+	};

 #define SYSCALL_DEFINE0(sname)					\
+	SYSCALL_TRACE_ENTER_EVENT(_##sname);			\
+	SYSCALL_TRACE_EXIT_EVENT(_##sname);			\
 	static const struct syscall_metadata __used		\
 	  __attribute__((__aligned__(4)))			\
 	  __attribute__((section("__syscalls_metadata")))	\
 	  __syscall_meta_##sname = {				\
 		.name		= "sys_"#sname,			\
 		.nb_args	= 0,				\
+		.enter_event	= &event_enter__##sname,	\
+		.exit_event	= &event_exit__##sname,		\
 	};							\
 	asmlinkage long sys_##sname(void)
 #else
 #define SYSCALL_DEFINE0(name)	   asmlinkage long sys_##name(void)
 #endif
...
@@ -23,6 +23,8 @@ struct tracepoint;
 struct tracepoint {
 	const char *name;		/* Tracepoint name */
 	int state;			/* State. */
+	void (*regfunc)(void);
+	void (*unregfunc)(void);
 	void **funcs;
 } __attribute__((aligned(32)));		/*
 					 * Aligned on 32 bytes because it is
@@ -78,12 +80,16 @@ struct tracepoint {
 		return tracepoint_probe_unregister(#name, (void *)probe);\
 	}

-#define DEFINE_TRACE(name)					\
+#define DEFINE_TRACE_FN(name, reg, unreg)			\
 	static const char __tpstrtab_##name[]			\
 	__attribute__((section("__tracepoints_strings"))) = #name; \
 	struct tracepoint __tracepoint_##name			\
 	__attribute__((section("__tracepoints"), aligned(32))) =	\
-		{ __tpstrtab_##name, 0, NULL }
+		{ __tpstrtab_##name, 0, reg, unreg, NULL }
+
+#define DEFINE_TRACE(name)					\
+	DEFINE_TRACE_FN(name, NULL, NULL);

 #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)			\
 	EXPORT_SYMBOL_GPL(__tracepoint_##name)
@@ -108,6 +114,7 @@ extern void tracepoint_update_probe_range(struct tracepoint *begin,
 		return -ENOSYS;					\
 	}

+#define DEFINE_TRACE_FN(name, reg, unreg)
 #define DEFINE_TRACE(name)
 #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)
 #define EXPORT_TRACEPOINT_SYMBOL(name)
...
@@ -158,6 +165,15 @@ static inline void tracepoint_synchronize_unregister(void)

 #define PARAMS(args...) args

+#endif /* _LINUX_TRACEPOINT_H */
+
+/*
+ * Note: we keep the TRACE_EVENT outside the include file ifdef protection.
+ *  This is due to the way trace events work. If a file includes two
+ *  trace event headers under one "CREATE_TRACE_POINTS" the first include
+ *  will override the TRACE_EVENT and break the second include.
+ */
+
 #ifndef TRACE_EVENT
 /*
  * For use with the TRACE_EVENT macro:
@@ -259,10 +275,15 @@ static inline void tracepoint_synchronize_unregister(void)
  * can also be used by generic instrumentation like SystemTap), and
  * it is also used to expose a structured trace record in
  * /sys/kernel/debug/tracing/events/.
+ *
+ * A set of (un)registration functions can be passed to the variant
+ * TRACE_EVENT_FN to perform any (un)registration work.
  */

 #define TRACE_EVENT(name, proto, args, struct, assign, print)	\
 	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
-#endif
+#define TRACE_EVENT_FN(name, proto, args, struct,		\
+		       assign, print, reg, unreg)		\
+	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))

-#endif
+#endif /* ifdef TRACE_EVENT (see note above) */
@@ -26,6 +26,11 @@
 #define TRACE_EVENT(name, proto, args, tstruct, assign, print)	\
 	DEFINE_TRACE(name)

+#undef TRACE_EVENT_FN
+#define TRACE_EVENT_FN(name, proto, args, tstruct,		\
+		assign, print, reg, unreg)			\
+	DEFINE_TRACE_FN(name, reg, unreg)
+
 #undef DECLARE_TRACE
 #define DECLARE_TRACE(name, proto, args)	\
 	DEFINE_TRACE(name)
@@ -56,6 +61,8 @@
 #include <trace/ftrace.h>
 #endif

+#undef TRACE_EVENT
+#undef TRACE_EVENT_FN
 #undef TRACE_HEADER_MULTI_READ

 /* Only undef what we defined in this file */
...
#undef TRACE_SYSTEM
#define TRACE_SYSTEM module
#if !defined(_TRACE_MODULE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_MODULE_H
#include <linux/tracepoint.h>
#ifdef CONFIG_MODULES
struct module;
#define show_module_flags(flags) __print_flags(flags, "", \
{ (1UL << TAINT_PROPRIETARY_MODULE), "P" }, \
{ (1UL << TAINT_FORCED_MODULE), "F" }, \
{ (1UL << TAINT_CRAP), "C" })
TRACE_EVENT(module_load,
TP_PROTO(struct module *mod),
TP_ARGS(mod),
TP_STRUCT__entry(
__field( unsigned int, taints )
__string( name, mod->name )
),
TP_fast_assign(
__entry->taints = mod->taints;
__assign_str(name, mod->name);
),
TP_printk("%s %s", __get_str(name), show_module_flags(__entry->taints))
);
TRACE_EVENT(module_free,
TP_PROTO(struct module *mod),
TP_ARGS(mod),
TP_STRUCT__entry(
__string( name, mod->name )
),
TP_fast_assign(
__assign_str(name, mod->name);
),
TP_printk("%s", __get_str(name))
);
TRACE_EVENT(module_get,
TP_PROTO(struct module *mod, unsigned long ip, int refcnt),
TP_ARGS(mod, ip, refcnt),
TP_STRUCT__entry(
__field( unsigned long, ip )
__field( int, refcnt )
__string( name, mod->name )
),
TP_fast_assign(
__entry->ip = ip;
__entry->refcnt = refcnt;
__assign_str(name, mod->name);
),
TP_printk("%s call_site=%pf refcnt=%d",
__get_str(name), (void *)__entry->ip, __entry->refcnt)
);
TRACE_EVENT(module_put,
TP_PROTO(struct module *mod, unsigned long ip, int refcnt),
TP_ARGS(mod, ip, refcnt),
TP_STRUCT__entry(
__field( unsigned long, ip )
__field( int, refcnt )
__string( name, mod->name )
),
TP_fast_assign(
__entry->ip = ip;
__entry->refcnt = refcnt;
__assign_str(name, mod->name);
),
TP_printk("%s call_site=%pf refcnt=%d",
__get_str(name), (void *)__entry->ip, __entry->refcnt)
);
TRACE_EVENT(module_request,
TP_PROTO(char *name, bool wait, unsigned long ip),
TP_ARGS(name, wait, ip),
TP_STRUCT__entry(
__field( bool, wait )
__field( unsigned long, ip )
__string( name, name )
),
TP_fast_assign(
__entry->wait = wait;
__entry->ip = ip;
__assign_str(name, name);
),
TP_printk("%s wait=%d call_site=%pf",
__get_str(name), (int)__entry->wait, (void *)__entry->ip)
);
#endif /* CONFIG_MODULES */
#endif /* _TRACE_MODULE_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
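Once a translation unit defines CREATE_TRACE_POINTS and includes this
header, the events appear under the "module" system named by
TRACE_SYSTEM above and can be enabled like any other event (assuming
the usual debugfs mount point):

	# echo 1 > /sys/kernel/debug/tracing/events/module/enable
	# cat /sys/kernel/debug/tracing/trace_pipe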
@@ -94,6 +94,7 @@ TRACE_EVENT(sched_wakeup,
 		__field(	pid_t,	pid	)
 		__field(	int,	prio	)
 		__field(	int,	success	)
+		__field(	int,	cpu	)
 	),

 	TP_fast_assign(
@@ -101,11 +102,12 @@ TRACE_EVENT(sched_wakeup,
 		__entry->pid		= p->pid;
 		__entry->prio		= p->prio;
 		__entry->success	= success;
+		__entry->cpu		= task_cpu(p);
 	),

-	TP_printk("task %s:%d [%d] success=%d",
+	TP_printk("task %s:%d [%d] success=%d [%03d]",
 		  __entry->comm, __entry->pid, __entry->prio,
-		  __entry->success)
+		  __entry->success, __entry->cpu)
 );

 /*
@@ -125,6 +127,7 @@ TRACE_EVENT(sched_wakeup_new,
 		__field(	pid_t,	pid	)
 		__field(	int,	prio	)
 		__field(	int,	success	)
+		__field(	int,	cpu	)
 	),

 	TP_fast_assign(
@@ -132,11 +135,12 @@ TRACE_EVENT(sched_wakeup_new,
 		__entry->pid		= p->pid;
 		__entry->prio		= p->prio;
 		__entry->success	= success;
+		__entry->cpu		= task_cpu(p);
 	),

-	TP_printk("task %s:%d [%d] success=%d",
+	TP_printk("task %s:%d [%d] success=%d [%03d]",
 		  __entry->comm, __entry->pid, __entry->prio,
-		  __entry->success)
+		  __entry->success, __entry->cpu)
 );

 /*
...
#undef TRACE_SYSTEM
#define TRACE_SYSTEM syscalls
#if !defined(_TRACE_EVENTS_SYSCALLS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_EVENTS_SYSCALLS_H
#include <linux/tracepoint.h>
#include <asm/ptrace.h>
#include <asm/syscall.h>
#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
extern void syscall_regfunc(void);
extern void syscall_unregfunc(void);
TRACE_EVENT_FN(sys_enter,
TP_PROTO(struct pt_regs *regs, long id),
TP_ARGS(regs, id),
TP_STRUCT__entry(
__field( long, id )
__array( unsigned long, args, 6 )
),
TP_fast_assign(
__entry->id = id;
syscall_get_arguments(current, regs, 0, 6, __entry->args);
),
TP_printk("NR %ld (%lx, %lx, %lx, %lx, %lx, %lx)",
__entry->id,
__entry->args[0], __entry->args[1], __entry->args[2],
__entry->args[3], __entry->args[4], __entry->args[5]),
syscall_regfunc, syscall_unregfunc
);
TRACE_EVENT_FN(sys_exit,
TP_PROTO(struct pt_regs *regs, long ret),
TP_ARGS(regs, ret),
TP_STRUCT__entry(
__field( long, id )
__field( long, ret )
),
TP_fast_assign(
__entry->id = syscall_get_nr(current, regs);
__entry->ret = ret;
),
TP_printk("NR %ld = %ld",
__entry->id, __entry->ret),
syscall_regfunc, syscall_unregfunc
);
#endif /* CONFIG_HAVE_SYSCALL_TRACEPOINTS */
#endif /* _TRACE_EVENTS_SYSCALLS_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
@@ -21,11 +21,14 @@
 #undef __field
 #define __field(type, item)		type	item;

+#undef __field_ext
+#define __field_ext(type, item, filter_type)	type	item;
+
 #undef __array
 #define __array(type, item, len)	type	item[len];

 #undef __dynamic_array
-#define __dynamic_array(type, item, len) unsigned short __data_loc_##item;
+#define __dynamic_array(type, item, len) u32 __data_loc_##item;

 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
@@ -42,6 +45,16 @@
 	};							\
 	static struct ftrace_event_call event_##name

+#undef __cpparg
+#define __cpparg(arg...) arg
+
+/* Callbacks are meaningless to ftrace. */
+#undef TRACE_EVENT_FN
+#define TRACE_EVENT_FN(name, proto, args, tstruct,	\
+		assign, print, reg, unreg)		\
+	TRACE_EVENT(name, __cpparg(proto), __cpparg(args),	\
+		__cpparg(tstruct), __cpparg(assign), __cpparg(print))
+
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
@@ -51,23 +64,27 @@
  * Include the following:
  *
  * struct ftrace_data_offsets_<call> {
- *	int				<item1>;
- *	int				<item2>;
+ *	u32				<item1>;
+ *	u32				<item2>;
 *	[...]
 * };
 *
- * The __dynamic_array() macro will create each int <item>, this is
+ * The __dynamic_array() macro will create each u32 <item>, this is
 * to keep the offset of each array from the beginning of the event.
+ * The size of an array is also encoded, in the higher 16 bits of <item>.
 */
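As an aside, a minimal user-space sketch of the packing scheme the comment above describes; the helper names are mine, not part of the patch. The low 16 bits of a u32 __data_loc carry the offset of the dynamic array from the start of the event record, the high 16 bits carry its length in bytes:

#include <stdint.h>
#include <assert.h>

/* Hypothetical helpers mirroring the "(len << 16) | offset" encoding. */
static inline uint32_t data_loc_pack(uint16_t offset, uint16_t len)
{
	return ((uint32_t)len << 16) | offset;
}

static inline uint16_t data_loc_offset(uint32_t loc)
{
	return loc & 0xffff;	/* same mask __get_dynamic_array() applies */
}

static inline uint16_t data_loc_len(uint32_t loc)
{
	return loc >> 16;
}

int main(void)
{
	uint32_t loc = data_loc_pack(24, 7);

	assert(data_loc_offset(loc) == 24);
	assert(data_loc_len(loc) == 7);
	return 0;
}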
 #undef __field
-#define __field(type, item);
+#define __field(type, item)

+#undef __field_ext
+#define __field_ext(type, item, filter_type)
+
 #undef __array
 #define __array(type, item, len)

 #undef __dynamic_array
-#define __dynamic_array(type, item, len)	int item;
+#define __dynamic_array(type, item, len)	u32 item;

 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
@@ -109,6 +126,9 @@
 	if (!ret)						\
 		return 0;

+#undef __field_ext
+#define __field_ext(type, item, filter_type)	__field(type, item)
+
 #undef __array
 #define __array(type, item, len)				\
 	ret = trace_seq_printf(s, "\tfield:" #type " " #item "[" #len "];\t" \
@@ -120,7 +140,7 @@
 #undef __dynamic_array
 #define __dynamic_array(type, item, len)			\
-	ret = trace_seq_printf(s, "\tfield:__data_loc " #item ";\t"	\
+	ret = trace_seq_printf(s, "\tfield:__data_loc " #type "[] " #item ";\t"\
 			       "offset:%u;\tsize:%u;\n",	\
 			       (unsigned int)offsetof(typeof(field),	\
 					__data_loc_##item),	\
@@ -150,7 +170,8 @@
 #undef TRACE_EVENT
 #define TRACE_EVENT(call, proto, args, tstruct, func, print)	\
 static int							\
-ftrace_format_##call(struct trace_seq *s)			\
+ftrace_format_##call(struct ftrace_event_call *unused,		\
+		     struct trace_seq *s)			\
 {								\
 	struct ftrace_raw_##call field __attribute__((unused));	\
 	int ret = 0;						\
@@ -210,7 +231,7 @@ ftrace_format_##call(struct trace_seq *s)	\
 #undef __get_dynamic_array
 #define __get_dynamic_array(field)	\
-		((void *)__entry + __entry->__data_loc_##field)
+		((void *)__entry + (__entry->__data_loc_##field & 0xffff))

 #undef __get_str
 #define __get_str(field) (char *)__get_dynamic_array(field)
@@ -263,28 +284,33 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags) \

 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)

-#undef __field
-#define __field(type, item)					\
+#undef __field_ext
+#define __field_ext(type, item, filter_type)			\
 	ret = trace_define_field(event_call, #type, #item,	\
 				 offsetof(typeof(field), item),	\
-				 sizeof(field.item), is_signed_type(type));	\
+				 sizeof(field.item),		\
+				 is_signed_type(type), filter_type);	\
 	if (ret)						\
 		return ret;

+#undef __field
+#define __field(type, item)	__field_ext(type, item, FILTER_OTHER)
+
 #undef __array
 #define __array(type, item, len)				\
 	BUILD_BUG_ON(len > MAX_FILTER_STR_VAL);			\
 	ret = trace_define_field(event_call, #type "[" #len "]", #item,	\
 				 offsetof(typeof(field), item),	\
-				 sizeof(field.item), 0);	\
+				 sizeof(field.item), 0, FILTER_OTHER);	\
 	if (ret)						\
 		return ret;

 #undef __dynamic_array
 #define __dynamic_array(type, item, len)			\
-	ret = trace_define_field(event_call, "__data_loc" "[" #type "]", #item,\
+	ret = trace_define_field(event_call, "__data_loc " #type "[]", #item,	\
 				 offsetof(typeof(field), __data_loc_##item),	\
-				 sizeof(field.__data_loc_##item), 0);
+				 sizeof(field.__data_loc_##item), 0,	\
+				 FILTER_OTHER);

 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
@@ -292,17 +318,14 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags) \
 #undef TRACE_EVENT
 #define TRACE_EVENT(call, proto, args, tstruct, func, print)	\
 int								\
-ftrace_define_fields_##call(void)				\
+ftrace_define_fields_##call(struct ftrace_event_call *event_call)	\
 {								\
 	struct ftrace_raw_##call field;				\
-	struct ftrace_event_call *event_call = &event_##call;	\
 	int ret;						\
 								\
-	__common_field(int, type, 1);				\
-	__common_field(unsigned char, flags, 0);		\
-	__common_field(unsigned char, preempt_count, 0);	\
-	__common_field(int, pid, 1);				\
-	__common_field(int, tgid, 1);				\
+	ret = trace_define_common_fields(event_call);		\
+	if (ret)						\
+		return ret;					\
 								\
 	tstruct;						\
 								\
@@ -321,6 +344,9 @@ ftrace_define_fields_##call(void)	\
 #undef __field
 #define __field(type, item)

+#undef __field_ext
+#define __field_ext(type, item, filter_type)
+
 #undef __array
 #define __array(type, item, len)
@@ -328,6 +354,7 @@ ftrace_define_fields_##call(void)	\
 #define __dynamic_array(type, item, len)			\
 	__data_offsets->item = __data_size +			\
 			       offsetof(typeof(*entry), __data);\
+	__data_offsets->item |= (len * sizeof(type)) << 16;	\
 	__data_size += (len) * sizeof(type);

 #undef __string
@@ -433,13 +460,15 @@ static void ftrace_profile_disable_##call(struct ftrace_event_call *event_call)\
 * {
 *	struct ring_buffer_event *event;
 *	struct ftrace_raw_<call> *entry; <-- defined in stage 1
+ *	struct ring_buffer *buffer;
 *	unsigned long irq_flags;
 *	int pc;
 *
 *	local_save_flags(irq_flags);
 *	pc = preempt_count();
 *
- *	event = trace_current_buffer_lock_reserve(event_<call>.id,
+ *	event = trace_current_buffer_lock_reserve(&buffer,
+ *				  event_<call>.id,
 *				  sizeof(struct ftrace_raw_<call>),
 *				  irq_flags, pc);
 *	if (!event)
@@ -449,7 +478,7 @@ static void ftrace_profile_disable_##call(struct ftrace_event_call *event_call)\
 *	<assign>;  <-- Here we assign the entries by the __field and
 *		       __array macros.
 *
- *	trace_current_buffer_unlock_commit(event, irq_flags, pc);
+ *	trace_current_buffer_unlock_commit(buffer, event, irq_flags, pc);
 * }
 *
 * static int ftrace_raw_reg_event_<call>(void)
@@ -541,6 +570,7 @@ static void ftrace_raw_event_##call(proto)	\
 	struct ftrace_event_call *event_call = &event_##call;	\
 	struct ring_buffer_event *event;			\
 	struct ftrace_raw_##call *entry;			\
+	struct ring_buffer *buffer;				\
 	unsigned long irq_flags;				\
 	int __data_size;					\
 	int pc;							\
@@ -550,7 +580,8 @@ static void ftrace_raw_event_##call(proto)	\
 								\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
 								\
-	event = trace_current_buffer_lock_reserve(event_##call.id,	\
+	event = trace_current_buffer_lock_reserve(&buffer,	\
+				 event_##call.id,		\
 				 sizeof(*entry) + __data_size,	\
 				 irq_flags, pc);		\
 	if (!event)						\
@@ -562,11 +593,12 @@ static void ftrace_raw_event_##call(proto)	\
 								\
 	{ assign; }						\
 								\
-	if (!filter_current_check_discard(event_call, entry, event))	\
-		trace_nowake_buffer_unlock_commit(event, irq_flags, pc); \
+	if (!filter_current_check_discard(buffer, event_call, entry, event)) \
+		trace_nowake_buffer_unlock_commit(buffer,	\
+						  event, irq_flags, pc); \
 }								\
 								\
-static int ftrace_raw_reg_event_##call(void)			\
+static int ftrace_raw_reg_event_##call(void *ptr)		\
 {								\
 	int ret;						\
 								\
@@ -577,7 +609,7 @@ static int ftrace_raw_reg_event_##call(void)	\
 		return ret;					\
 }								\
 								\
-static void ftrace_raw_unreg_event_##call(void)		\
+static void ftrace_raw_unreg_event_##call(void *ptr)		\
 {								\
 	unregister_trace_##call(ftrace_raw_event_##call);	\
 }								\
@@ -595,7 +627,6 @@ static int ftrace_raw_init_event_##call(void)	\
 		return -ENODEV;					\
 	event_##call.id = id;					\
 	INIT_LIST_HEAD(&event_##call.fields);			\
-	init_preds(&event_##call);				\
 	return 0;						\
 }								\
 								\
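Collapsing the staged comment above into one place, the generated fast path follows a reserve/assign/commit sequence in which the ring buffer pointer now comes back through the new out-parameter. A condensed sketch; the "example" event and struct names are placeholders for what the macros generate:

static void ftrace_raw_event_example(int arg)
{
	struct ring_buffer_event *event;
	struct ftrace_raw_example *entry;
	struct ring_buffer *buffer;	/* filled in by the reserve call */
	unsigned long irq_flags;
	int pc;

	local_save_flags(irq_flags);
	pc = preempt_count();

	event = trace_current_buffer_lock_reserve(&buffer, event_example.id,
						  sizeof(*entry),
						  irq_flags, pc);
	if (!event)
		return;

	entry = ring_buffer_event_data(event);
	entry->arg = arg;	/* the <assign> step */

	if (!filter_current_check_discard(buffer, &event_example, entry, event))
		trace_nowake_buffer_unlock_commit(buffer, event,
						  irq_flags, pc);
}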
......
 #ifndef _TRACE_SYSCALL_H
 #define _TRACE_SYSCALL_H

+#include <linux/tracepoint.h>
+#include <linux/unistd.h>
+#include <linux/ftrace_event.h>
+
 #include <asm/ptrace.h>

 /*
 * A syscall entry in the ftrace syscalls array.
 *
@@ -10,26 +15,49 @@
 * @nb_args: number of parameters it takes
 * @types: list of types as strings
 * @args: list of args as strings (args[i] matches types[i])
+ * @enter_id: associated ftrace enter event id
+ * @exit_id: associated ftrace exit event id
+ * @enter_event: associated syscall_enter trace event
+ * @exit_event: associated syscall_exit trace event
 */
 struct syscall_metadata {
 	const char	*name;
 	int		nb_args;
 	const char	**types;
 	const char	**args;
+	int		enter_id;
+	int		exit_id;
+
+	struct ftrace_event_call *enter_event;
+	struct ftrace_event_call *exit_event;
 };
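To make the shape of this table concrete, here is a hypothetical entry as the syscall definition machinery might emit it for close(2); the variable names are illustrative, not taken from the patch, and the id/event fields are filled in later at registration time:

static const char *types_close[] = { "unsigned int" };
static const char *args_close[]  = { "fd" };

static struct syscall_metadata __syscall_meta_close = {
	.name    = "sys_close",
	.nb_args = 1,
	.types   = types_close,
	.args    = args_close,
	/* enter_id/exit_id assigned when the events register */
};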
 #ifdef CONFIG_FTRACE_SYSCALLS
-extern void arch_init_ftrace_syscalls(void);
 extern struct syscall_metadata *syscall_nr_to_meta(int nr);
-extern void start_ftrace_syscalls(void);
-extern void stop_ftrace_syscalls(void);
-extern void ftrace_syscall_enter(struct pt_regs *regs);
-extern void ftrace_syscall_exit(struct pt_regs *regs);
-#else
-static inline void start_ftrace_syscalls(void)			{ }
-static inline void stop_ftrace_syscalls(void)			{ }
-static inline void ftrace_syscall_enter(struct pt_regs *regs)	{ }
-static inline void ftrace_syscall_exit(struct pt_regs *regs)	{ }
+extern int syscall_name_to_nr(char *name);
+void set_syscall_enter_id(int num, int id);
+void set_syscall_exit_id(int num, int id);
+extern struct trace_event event_syscall_enter;
+extern struct trace_event event_syscall_exit;
+extern int reg_event_syscall_enter(void *ptr);
+extern void unreg_event_syscall_enter(void *ptr);
+extern int reg_event_syscall_exit(void *ptr);
+extern void unreg_event_syscall_exit(void *ptr);
+extern int syscall_enter_format(struct ftrace_event_call *call,
+				struct trace_seq *s);
+extern int syscall_exit_format(struct ftrace_event_call *call,
+			       struct trace_seq *s);
+extern int syscall_enter_define_fields(struct ftrace_event_call *call);
+extern int syscall_exit_define_fields(struct ftrace_event_call *call);
+enum print_line_t print_syscall_enter(struct trace_iterator *iter, int flags);
+enum print_line_t print_syscall_exit(struct trace_iterator *iter, int flags);
+#endif
+
+#ifdef CONFIG_EVENT_PROFILE
+int reg_prof_syscall_enter(char *name);
+void unreg_prof_syscall_enter(char *name);
+int reg_prof_syscall_exit(char *name);
+void unreg_prof_syscall_exit(char *name);
 #endif

 #endif /* _TRACE_SYSCALL_H */
@@ -37,6 +37,8 @@
 #include <linux/suspend.h>
 #include <asm/uaccess.h>

+#include <trace/events/module.h>
+
 extern int max_threads;

 static struct workqueue_struct *khelper_wq;
@@ -112,6 +114,8 @@ int __request_module(bool wait, const char *fmt, ...)
 		return -ENOMEM;
 	}

+	trace_module_request(module_name, wait, _RET_IP_);
+
 	ret = call_usermodehelper(modprobe_path, argv, envp,
 				  wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
 	atomic_dec(&kmod_concurrent);
......
@@ -103,7 +103,7 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
 #define INSNS_PER_PAGE	(PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))

 struct kprobe_insn_page {
-	struct hlist_node hlist;
+	struct list_head list;
 	kprobe_opcode_t *insns;		/* Page of instruction slots */
 	char slot_used[INSNS_PER_PAGE];
 	int nused;
@@ -117,7 +117,7 @@ enum kprobe_slot_state {
 };

 static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_pages */
-static struct hlist_head kprobe_insn_pages;
+static LIST_HEAD(kprobe_insn_pages);
 static int kprobe_garbage_slots;
 static int collect_garbage_slots(void);

@@ -152,10 +152,9 @@ static int __kprobes check_safety(void)
 static kprobe_opcode_t __kprobes *__get_insn_slot(void)
 {
 	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;

  retry:
-	hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
+	list_for_each_entry(kip, &kprobe_insn_pages, list) {
 		if (kip->nused < INSNS_PER_PAGE) {
 			int i;
 			for (i = 0; i < INSNS_PER_PAGE; i++) {
@@ -189,8 +188,8 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
 		kfree(kip);
 		return NULL;
 	}
-	INIT_HLIST_NODE(&kip->hlist);
-	hlist_add_head(&kip->hlist, &kprobe_insn_pages);
+	INIT_LIST_HEAD(&kip->list);
+	list_add(&kip->list, &kprobe_insn_pages);
 	memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE);
 	kip->slot_used[0] = SLOT_USED;
 	kip->nused = 1;
@@ -219,12 +218,8 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
 		 * so as not to have to set it up again the
 		 * next time somebody inserts a probe.
 		 */
-		hlist_del(&kip->hlist);
-		if (hlist_empty(&kprobe_insn_pages)) {
-			INIT_HLIST_NODE(&kip->hlist);
-			hlist_add_head(&kip->hlist,
-				       &kprobe_insn_pages);
-		} else {
+		if (!list_is_singular(&kprobe_insn_pages)) {
+			list_del(&kip->list);
 			module_free(NULL, kip->insns);
 			kfree(kip);
 		}
@@ -235,14 +230,13 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)

 static int __kprobes collect_garbage_slots(void)
 {
-	struct kprobe_insn_page *kip;
-	struct hlist_node *pos, *next;
+	struct kprobe_insn_page *kip, *next;

 	/* Ensure no-one is preempted on the garbages */
 	if (check_safety())
 		return -EAGAIN;

-	hlist_for_each_entry_safe(kip, pos, next, &kprobe_insn_pages, hlist) {
+	list_for_each_entry_safe(kip, next, &kprobe_insn_pages, list) {
 		int i;
 		if (kip->ngarbage == 0)
 			continue;
@@ -260,19 +254,17 @@ static int __kprobes collect_garbage_slots(void)
 void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
 {
 	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;

 	mutex_lock(&kprobe_insn_mutex);
-	hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
+	list_for_each_entry(kip, &kprobe_insn_pages, list) {
 		if (kip->insns <= slot &&
 		    slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
 			int i = (slot - kip->insns) / MAX_INSN_SIZE;
 			if (dirty) {
 				kip->slot_used[i] = SLOT_DIRTY;
 				kip->ngarbage++;
-			} else {
+			} else
 				collect_one_slot(kip, i);
-			}
 			break;
 		}
 	}
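Since collect_garbage_slots() above frees entries while walking the list, it relies on the _safe iterator, which caches the next node before the body can delete the current one. A minimal fragment illustrating that pattern (the helper is mine, not from the patch):

/* Illustrative only: the _safe variant survives deletion of the
 * current entry because "next" was fetched before list_del(). */
static void drop_unused_pages(struct list_head *pages)
{
	struct kprobe_insn_page *kip, *next;

	list_for_each_entry_safe(kip, next, pages, list) {
		if (kip->nused == 0) {
			list_del(&kip->list);
			kfree(kip);	/* safe: iteration continues via "next" */
		}
	}
}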
......
@@ -55,6 +55,11 @@
 #include <linux/percpu.h>
 #include <linux/kmemleak.h>

+#define CREATE_TRACE_POINTS
+#include <trace/events/module.h>
+
+EXPORT_TRACEPOINT_SYMBOL(module_get);
+
 #if 0
 #define DEBUGP printk
 #else
@@ -942,6 +947,8 @@ void module_put(struct module *module)
 	if (module) {
 		unsigned int cpu = get_cpu();
 		local_dec(__module_ref_addr(module, cpu));
+		trace_module_put(module, _RET_IP_,
+				 local_read(__module_ref_addr(module, cpu)));
 		/* Maybe they're waiting for us to drop reference? */
 		if (unlikely(!module_is_live(module)))
 			wake_up_process(module->waiter);
@@ -1497,6 +1504,8 @@ static int __unlink_module(void *_mod)
 /* Free a module, remove from lists, etc (must hold module_mutex). */
 static void free_module(struct module *mod)
 {
+	trace_module_free(mod);
+
 	/* Delete from various lists */
 	stop_machine(__unlink_module, mod, NULL);
 	remove_notes_attrs(mod);
@@ -2364,6 +2373,8 @@ static noinline struct module *load_module(void __user *umod,
 	/* Get rid of temporary copy */
 	vfree(hdr);

+	trace_module_load(mod);
+
 	/* Done! */
 	return mod;
......
@@ -41,7 +41,7 @@ config HAVE_FTRACE_MCOUNT_RECORD
 config HAVE_HW_BRANCH_TRACER
 	bool

-config HAVE_FTRACE_SYSCALLS
+config HAVE_SYSCALL_TRACEPOINTS
 	bool

 config TRACER_MAX_TRACE
@@ -60,9 +60,14 @@ config EVENT_TRACING
 	bool

 config CONTEXT_SWITCH_TRACER
-	select MARKERS
 	bool

+config RING_BUFFER_ALLOW_SWAP
+	bool
+	help
+	 Allow the use of ring_buffer_swap_cpu.
+	 Adds a very slight overhead to tracing when enabled.
+
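The usual pattern behind such a helper symbol is to compile the expensive path only when some tracer selects the option, and to stub it out otherwise. A sketch of what that guard might look like; the stub body and its return value are my assumption, not quoted from this series:

#ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
			 struct ring_buffer *buffer_b, int cpu);
#else
static inline int
ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
		     struct ring_buffer *buffer_b, int cpu)
{
	return -ENODEV;	/* assumed error code for "swap not built in" */
}
#endif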
 # All tracer options should select GENERIC_TRACER. For those options that are
 # enabled by all tracers (context switch and event tracer) they select TRACING.
 # This allows those options to appear when no other tracer is selected. But the
@@ -147,6 +152,7 @@ config IRQSOFF_TRACER
 	select TRACE_IRQFLAGS
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
+	select RING_BUFFER_ALLOW_SWAP
 	help
 	  This option measures the time spent in irqs-off critical
 	  sections, with microsecond accuracy.
@@ -168,6 +174,7 @@ config PREEMPT_TRACER
 	depends on PREEMPT
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
+	select RING_BUFFER_ALLOW_SWAP
 	help
 	  This option measures the time spent in preemption off critical
 	  sections, with microsecond accuracy.
@@ -211,7 +218,7 @@ config ENABLE_DEFAULT_TRACERS
 config FTRACE_SYSCALLS
 	bool "Trace syscalls"
-	depends on HAVE_FTRACE_SYSCALLS
+	depends on HAVE_SYSCALL_TRACEPOINTS
 	select GENERIC_TRACER
 	select KALLSYMS
 	help
......
@@ -65,13 +65,15 @@ static void trace_note(struct blk_trace *bt, pid_t pid, int action,
 {
 	struct blk_io_trace *t;
 	struct ring_buffer_event *event = NULL;
+	struct ring_buffer *buffer = NULL;
 	int pc = 0;
 	int cpu = smp_processor_id();
 	bool blk_tracer = blk_tracer_enabled;

 	if (blk_tracer) {
+		buffer = blk_tr->buffer;
 		pc = preempt_count();
-		event = trace_buffer_lock_reserve(blk_tr, TRACE_BLK,
+		event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
 						  sizeof(*t) + len,
 						  0, pc);
 		if (!event)
@@ -96,7 +98,7 @@ static void trace_note(struct blk_trace *bt, pid_t pid, int action,
 		memcpy((void *) t + sizeof(*t), data, len);

 		if (blk_tracer)
-			trace_buffer_unlock_commit(blk_tr, event, 0, pc);
+			trace_buffer_unlock_commit(buffer, event, 0, pc);
 	}
 }
@@ -179,6 +181,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 {
 	struct task_struct *tsk = current;
 	struct ring_buffer_event *event = NULL;
+	struct ring_buffer *buffer = NULL;
 	struct blk_io_trace *t;
 	unsigned long flags = 0;
 	unsigned long *sequence;
@@ -204,8 +207,9 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 	if (blk_tracer) {
 		tracing_record_cmdline(current);

+		buffer = blk_tr->buffer;
 		pc = preempt_count();
-		event = trace_buffer_lock_reserve(blk_tr, TRACE_BLK,
+		event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
 						  sizeof(*t) + pdu_len,
 						  0, pc);
 		if (!event)
@@ -252,7 +256,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 		memcpy((void *) t + sizeof(*t), pdu_data, pdu_len);

 		if (blk_tracer) {
-			trace_buffer_unlock_commit(blk_tr, event, 0, pc);
+			trace_buffer_unlock_commit(buffer, event, 0, pc);
 			return;
 		}
 	}
......
@@ -1016,71 +1016,35 @@ static int
 __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 {
 	unsigned long ftrace_addr;
-	unsigned long ip, fl;
+	unsigned long flag = 0UL;

 	ftrace_addr = (unsigned long)FTRACE_ADDR;

-	ip = rec->ip;
-
 	/*
-	 * If this record is not to be traced and
-	 * it is not enabled then do nothing.
+	 * If this record is not to be traced or we want to disable it,
+	 * then disable it.
 	 *
-	 * If this record is not to be traced and
-	 * it is enabled then disable it.
+	 * If we want to enable it and filtering is off, then enable it.
 	 *
+	 * If we want to enable it and filtering is on, enable it only if
+	 * it's filtered
 	 */
-	if (rec->flags & FTRACE_FL_NOTRACE) {
-		if (rec->flags & FTRACE_FL_ENABLED)
-			rec->flags &= ~FTRACE_FL_ENABLED;
-		else
-			return 0;
-
-	} else if (ftrace_filtered && enable) {
-		/*
-		 * Filtering is on:
-		 */
-
-		fl = rec->flags & (FTRACE_FL_FILTER | FTRACE_FL_ENABLED);
-
-		/* Record is filtered and enabled, do nothing */
-		if (fl == (FTRACE_FL_FILTER | FTRACE_FL_ENABLED))
-			return 0;
-
-		/* Record is not filtered or enabled, do nothing */
-		if (!fl)
-			return 0;
-
-		/* Record is not filtered but enabled, disable it */
-		if (fl == FTRACE_FL_ENABLED)
-			rec->flags &= ~FTRACE_FL_ENABLED;
-		else
-			/* Otherwise record is filtered but not enabled, enable it */
-			rec->flags |= FTRACE_FL_ENABLED;
-	} else {
-		/* Disable or not filtered */
-		if (enable) {
-			/* if record is enabled, do nothing */
-			if (rec->flags & FTRACE_FL_ENABLED)
-				return 0;
-
-			rec->flags |= FTRACE_FL_ENABLED;
-		} else {
-			/* if record is not enabled, do nothing */
-			if (!(rec->flags & FTRACE_FL_ENABLED))
-				return 0;
-
-			rec->flags &= ~FTRACE_FL_ENABLED;
-		}
+	if (enable && !(rec->flags & FTRACE_FL_NOTRACE)) {
+		if (!ftrace_filtered || (rec->flags & FTRACE_FL_FILTER))
+			flag = FTRACE_FL_ENABLED;
 	}

-	if (rec->flags & FTRACE_FL_ENABLED)
-		return ftrace_make_call(rec, ftrace_addr);
-	else
-		return ftrace_make_nop(NULL, rec, ftrace_addr);
+	/* If the state of this record hasn't changed, then do nothing */
+	if ((rec->flags & FTRACE_FL_ENABLED) == flag)
+		return 0;
+
+	if (flag) {
+		rec->flags |= FTRACE_FL_ENABLED;
+		return ftrace_make_call(rec, ftrace_addr);
+	}
+
+	rec->flags &= ~FTRACE_FL_ENABLED;
+	return ftrace_make_nop(NULL, rec, ftrace_addr);
 }
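Restated outside the kernel tree, the rewritten function reduces to a two-step state machine: compute the desired FTRACE_FL_ENABLED bit, then patch the call site only on a transition. A small user-space sketch of that logic; the names and the test harness are mine:

#include <stdbool.h>
#include <assert.h>

#define FL_FILTER  0x1
#define FL_ENABLED 0x2
#define FL_NOTRACE 0x4

enum action { NOP_TO_CALL, CALL_TO_NOP, NOTHING };

static enum action replace_decision(unsigned *flags, bool enable,
				    bool filtered)
{
	unsigned flag = 0;

	if (enable && !(*flags & FL_NOTRACE))
		if (!filtered || (*flags & FL_FILTER))
			flag = FL_ENABLED;

	if ((*flags & FL_ENABLED) == flag)
		return NOTHING;		/* state unchanged, do nothing */

	*flags ^= FL_ENABLED;
	return flag ? NOP_TO_CALL : CALL_TO_NOP;
}

int main(void)
{
	unsigned rec = FL_NOTRACE | FL_ENABLED;

	/* an enabled NOTRACE record must be patched back to a nop */
	assert(replace_decision(&rec, true, false) == CALL_TO_NOP);
	return 0;
}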
 static void ftrace_replace_code(int enable)
@@ -1375,7 +1339,6 @@ struct ftrace_iterator {
 	unsigned		flags;
 	unsigned char		buffer[FTRACE_BUFF_MAX+1];
 	unsigned		buffer_idx;
-	unsigned		filtered;
 };

 static void *
@@ -1438,18 +1401,13 @@ static int t_hash_show(struct seq_file *m, void *v)
 {
 	struct ftrace_func_probe *rec;
 	struct hlist_node *hnd = v;
-	char str[KSYM_SYMBOL_LEN];

 	rec = hlist_entry(hnd, struct ftrace_func_probe, node);

 	if (rec->ops->print)
 		return rec->ops->print(m, rec->ip, rec->ops, rec->data);

-	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
-	seq_printf(m, "%s:", str);
-
-	kallsyms_lookup((unsigned long)rec->ops->func, NULL, NULL, NULL, str);
-	seq_printf(m, "%s", str);
+	seq_printf(m, "%pf:%pf", (void *)rec->ip, (void *)rec->ops->func);

 	if (rec->data)
 		seq_printf(m, ":%p", rec->data);
@@ -1547,7 +1505,6 @@ static int t_show(struct seq_file *m, void *v)
 {
 	struct ftrace_iterator *iter = m->private;
 	struct dyn_ftrace *rec = v;
-	char str[KSYM_SYMBOL_LEN];

 	if (iter->flags & FTRACE_ITER_HASH)
 		return t_hash_show(m, v);
@@ -1560,9 +1517,7 @@ static int t_show(struct seq_file *m, void *v)
 	if (!rec)
 		return 0;

-	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
-	seq_printf(m, "%s\n", str);
+	seq_printf(m, "%pf\n", (void *)rec->ip);

 	return 0;
 }
@@ -1601,17 +1556,6 @@ ftrace_avail_open(struct inode *inode, struct file *file)
 	return ret;
 }

-int ftrace_avail_release(struct inode *inode, struct file *file)
-{
-	struct seq_file *m = (struct seq_file *)file->private_data;
-	struct ftrace_iterator *iter = m->private;
-
-	seq_release(inode, file);
-	kfree(iter);
-
-	return 0;
-}
-
 static int
 ftrace_failures_open(struct inode *inode, struct file *file)
 {
@@ -2317,7 +2261,6 @@ ftrace_regex_write(struct file *file, const char __user *ubuf,
 	}

 	if (isspace(ch)) {
-		iter->filtered++;
 		iter->buffer[iter->buffer_idx] = 0;
 		ret = ftrace_process_regex(iter->buffer,
 					   iter->buffer_idx, enable);
@@ -2448,7 +2391,6 @@ ftrace_regex_release(struct inode *inode, struct file *file, int enable)
 	iter = file->private_data;

 	if (iter->buffer_idx) {
-		iter->filtered++;
 		iter->buffer[iter->buffer_idx] = 0;
 		ftrace_match_records(iter->buffer, iter->buffer_idx, enable);
 	}
@@ -2479,14 +2421,14 @@ static const struct file_operations ftrace_avail_fops = {
 	.open = ftrace_avail_open,
 	.read = seq_read,
 	.llseek = seq_lseek,
-	.release = ftrace_avail_release,
+	.release = seq_release_private,
 };

 static const struct file_operations ftrace_failures_fops = {
 	.open = ftrace_failures_open,
 	.read = seq_read,
 	.llseek = seq_lseek,
-	.release = ftrace_avail_release,
+	.release = seq_release_private,
 };

 static const struct file_operations ftrace_filter_fops = {
@@ -2548,7 +2490,6 @@ static void g_stop(struct seq_file *m, void *p)
 static int g_show(struct seq_file *m, void *v)
 {
 	unsigned long *ptr = v;
-	char str[KSYM_SYMBOL_LEN];

 	if (!ptr)
 		return 0;
@@ -2558,9 +2499,7 @@ static int g_show(struct seq_file *m, void *v)
 		return 0;
 	}

-	kallsyms_lookup(*ptr, NULL, NULL, NULL, str);
-	seq_printf(m, "%s\n", str);
+	seq_printf(m, "%pf\n", v);

 	return 0;
 }
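A recurring simplification in the hunks above replaces a manual kallsyms_lookup() plus "%s" with the kernel's %pf printk extension, which resolves a function address to its symbol name at print time. A before/after sketch of the idiom, with buffer handling elided:

/* before: explicit symbol resolution into a stack buffer */
char str[KSYM_SYMBOL_LEN];

kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
seq_printf(m, "%s\n", str);

/* after: let vsprintf resolve the address via the %pf extension */
seq_printf(m, "%pf\n", (void *)rec->ip);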
......
@@ -183,11 +183,9 @@ static void kmemtrace_stop_probes(void)

 static int kmem_trace_init(struct trace_array *tr)
 {
-	int cpu;
 	kmemtrace_array = tr;

-	for_each_cpu(cpu, cpu_possible_mask)
-		tracing_reset(tr, cpu);
+	tracing_reset_online_cpus(tr);

 	kmemtrace_start_probes();
@@ -239,12 +237,52 @@ struct kmemtrace_user_event_alloc {
 };

 static enum print_line_t
-kmemtrace_print_alloc_user(struct trace_iterator *iter,
-			   struct kmemtrace_alloc_entry *entry)
+kmemtrace_print_alloc(struct trace_iterator *iter, int flags)
 {
-	struct kmemtrace_user_event_alloc *ev_alloc;
 	struct trace_seq *s = &iter->seq;
+	struct kmemtrace_alloc_entry *entry;
+	int ret;
+
+	trace_assign_type(entry, iter->ent);
+
+	ret = trace_seq_printf(s, "type_id %d call_site %pF ptr %lu "
+	    "bytes_req %lu bytes_alloc %lu gfp_flags %lu node %d\n",
+	    entry->type_id, (void *)entry->call_site, (unsigned long)entry->ptr,
+	    (unsigned long)entry->bytes_req, (unsigned long)entry->bytes_alloc,
+	    (unsigned long)entry->gfp_flags, entry->node);
+
+	if (!ret)
+		return TRACE_TYPE_PARTIAL_LINE;
+	return TRACE_TYPE_HANDLED;
+}
+
+static enum print_line_t
+kmemtrace_print_free(struct trace_iterator *iter, int flags)
+{
+	struct trace_seq *s = &iter->seq;
+	struct kmemtrace_free_entry *entry;
+	int ret;
+
+	trace_assign_type(entry, iter->ent);
+
+	ret = trace_seq_printf(s, "type_id %d call_site %pF ptr %lu\n",
+			       entry->type_id, (void *)entry->call_site,
+			       (unsigned long)entry->ptr);
+
+	if (!ret)
+		return TRACE_TYPE_PARTIAL_LINE;
+	return TRACE_TYPE_HANDLED;
+}
+
+static enum print_line_t
+kmemtrace_print_alloc_user(struct trace_iterator *iter, int flags)
+{
+	struct trace_seq *s = &iter->seq;
+	struct kmemtrace_alloc_entry *entry;
 	struct kmemtrace_user_event *ev;
+	struct kmemtrace_user_event_alloc *ev_alloc;
+
+	trace_assign_type(entry, iter->ent);

 	ev = trace_seq_reserve(s, sizeof(*ev));
 	if (!ev)
@@ -271,12 +309,14 @@ kmemtrace_print_alloc_user(struct trace_iterator *iter,
 }

 static enum print_line_t
-kmemtrace_print_free_user(struct trace_iterator *iter,
-			  struct kmemtrace_free_entry *entry)
+kmemtrace_print_free_user(struct trace_iterator *iter, int flags)
 {
 	struct trace_seq *s = &iter->seq;
+	struct kmemtrace_free_entry *entry;
 	struct kmemtrace_user_event *ev;

+	trace_assign_type(entry, iter->ent);
+
 	ev = trace_seq_reserve(s, sizeof(*ev));
 	if (!ev)
 		return TRACE_TYPE_PARTIAL_LINE;
@@ -294,12 +334,14 @@ kmemtrace_print_free_user(struct trace_iterator *iter,
 /* The two other following provide a more minimalistic output */
 static enum print_line_t
-kmemtrace_print_alloc_compress(struct trace_iterator *iter,
-			       struct kmemtrace_alloc_entry *entry)
+kmemtrace_print_alloc_compress(struct trace_iterator *iter)
 {
+	struct kmemtrace_alloc_entry *entry;
 	struct trace_seq *s = &iter->seq;
 	int ret;

+	trace_assign_type(entry, iter->ent);
+
 	/* Alloc entry */
 	ret = trace_seq_printf(s, " + ");
 	if (!ret)
@@ -345,29 +387,24 @@ kmemtrace_print_alloc_compress(struct trace_iterator *iter,
 	if (!ret)
 		return TRACE_TYPE_PARTIAL_LINE;

-	/* Node */
-	ret = trace_seq_printf(s, "%4d ", entry->node);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Call site */
-	ret = seq_print_ip_sym(s, entry->call_site, 0);
+	/* Node and call site*/
+	ret = trace_seq_printf(s, "%4d %pf\n", entry->node,
+						 (void *)entry->call_site);
 	if (!ret)
 		return TRACE_TYPE_PARTIAL_LINE;

-	if (!trace_seq_printf(s, "\n"))
-		return TRACE_TYPE_PARTIAL_LINE;
-
 	return TRACE_TYPE_HANDLED;
 }

 static enum print_line_t
-kmemtrace_print_free_compress(struct trace_iterator *iter,
-			      struct kmemtrace_free_entry *entry)
+kmemtrace_print_free_compress(struct trace_iterator *iter)
 {
+	struct kmemtrace_free_entry *entry;
 	struct trace_seq *s = &iter->seq;
 	int ret;

+	trace_assign_type(entry, iter->ent);
+
 	/* Free entry */
 	ret = trace_seq_printf(s, " - ");
 	if (!ret)
@@ -401,19 +438,11 @@ kmemtrace_print_free_compress(struct trace_iterator *iter,
 	if (!ret)
 		return TRACE_TYPE_PARTIAL_LINE;

-	/* Skip node */
-	ret = trace_seq_printf(s, " ");
+	/* Skip node and print call site*/
+	ret = trace_seq_printf(s, " %pf\n", (void *)entry->call_site);
 	if (!ret)
 		return TRACE_TYPE_PARTIAL_LINE;

-	/* Call site */
-	ret = seq_print_ip_sym(s, entry->call_site, 0);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	if (!trace_seq_printf(s, "\n"))
-		return TRACE_TYPE_PARTIAL_LINE;
-
 	return TRACE_TYPE_HANDLED;
 }

@@ -421,32 +450,31 @@ static enum print_line_t kmemtrace_print_line(struct trace_iterator *iter)
 {
 	struct trace_entry *entry = iter->ent;

-	switch (entry->type) {
-	case TRACE_KMEM_ALLOC: {
-		struct kmemtrace_alloc_entry *field;
-
-		trace_assign_type(field, entry);
-		if (kmem_tracer_flags.val & TRACE_KMEM_OPT_MINIMAL)
-			return kmemtrace_print_alloc_compress(iter, field);
-		else
-			return kmemtrace_print_alloc_user(iter, field);
-	}
-
-	case TRACE_KMEM_FREE: {
-		struct kmemtrace_free_entry *field;
-
-		trace_assign_type(field, entry);
-		if (kmem_tracer_flags.val & TRACE_KMEM_OPT_MINIMAL)
-			return kmemtrace_print_free_compress(iter, field);
-		else
-			return kmemtrace_print_free_user(iter, field);
-	}
+	if (!(kmem_tracer_flags.val & TRACE_KMEM_OPT_MINIMAL))
+		return TRACE_TYPE_UNHANDLED;

+	switch (entry->type) {
+	case TRACE_KMEM_ALLOC:
+		return kmemtrace_print_alloc_compress(iter);
+	case TRACE_KMEM_FREE:
+		return kmemtrace_print_free_compress(iter);
 	default:
 		return TRACE_TYPE_UNHANDLED;
 	}
 }

+static struct trace_event kmem_trace_alloc = {
+	.type	= TRACE_KMEM_ALLOC,
+	.trace	= kmemtrace_print_alloc,
+	.binary	= kmemtrace_print_alloc_user,
+};
+
+static struct trace_event kmem_trace_free = {
+	.type	= TRACE_KMEM_FREE,
+	.trace	= kmemtrace_print_free,
+	.binary	= kmemtrace_print_free_user,
+};
+
 static struct tracer kmem_tracer __read_mostly = {
 	.name		= "kmemtrace",
 	.init		= kmem_trace_init,
@@ -463,6 +491,21 @@ void kmemtrace_init(void)

 static int __init init_kmem_tracer(void)
 {
-	return register_tracer(&kmem_tracer);
+	if (!register_ftrace_event(&kmem_trace_alloc)) {
+		pr_warning("Warning: could not register kmem events\n");
+		return 1;
+	}
+
+	if (!register_ftrace_event(&kmem_trace_free)) {
+		pr_warning("Warning: could not register kmem events\n");
+		return 1;
+	}
+
+	if (!register_tracer(&kmem_tracer)) {
+		pr_warning("Warning: could not register the kmem tracer\n");
+		return 1;
+	}
+
+	return 0;
 }

 device_initcall(init_kmem_tracer);
[2 file diffs collapsed in this view]
@@ -34,8 +34,6 @@ enum trace_type {
 	TRACE_GRAPH_ENT,
 	TRACE_USER_STACK,
 	TRACE_HW_BRANCHES,
-	TRACE_SYSCALL_ENTER,
-	TRACE_SYSCALL_EXIT,
 	TRACE_KMEM_ALLOC,
 	TRACE_KMEM_FREE,
 	TRACE_POWER,
@@ -236,9 +234,6 @@ struct trace_array_cpu {
 	atomic_t		disabled;
 	void			*buffer_page;	/* ring buffer spare */

-	/* these fields get copied into max-trace: */
-	unsigned long		trace_idx;
-	unsigned long		overrun;
 	unsigned long		saved_latency;
 	unsigned long		critical_start;
 	unsigned long		critical_end;
@@ -246,6 +241,7 @@ struct trace_array_cpu {
 	unsigned long		nice;
 	unsigned long		policy;
 	unsigned long		rt_priority;
+	unsigned long		skipped_entries;
 	cycle_t			preempt_timestamp;
 	pid_t			pid;
 	uid_t			uid;
@@ -319,10 +315,6 @@ extern void __ftrace_bad_type(void);
 			  TRACE_KMEM_ALLOC);	\
 		IF_ASSIGN(var, ent, struct kmemtrace_free_entry,	\
 			  TRACE_KMEM_FREE);	\
-		IF_ASSIGN(var, ent, struct syscall_trace_enter,		\
-			  TRACE_SYSCALL_ENTER);				\
-		IF_ASSIGN(var, ent, struct syscall_trace_exit,		\
-			  TRACE_SYSCALL_EXIT);				\
 		__ftrace_bad_type();					\
 	} while (0)
@@ -423,12 +415,13 @@ void init_tracer_sysprof_debugfs(struct dentry *d_tracer);

 struct ring_buffer_event;

-struct ring_buffer_event *trace_buffer_lock_reserve(struct trace_array *tr,
-						    int type,
-						    unsigned long len,
-						    unsigned long flags,
-						    int pc);
-void trace_buffer_unlock_commit(struct trace_array *tr,
+struct ring_buffer_event *
+trace_buffer_lock_reserve(struct ring_buffer *buffer,
+			  int type,
+			  unsigned long len,
+			  unsigned long flags,
+			  int pc);
+void trace_buffer_unlock_commit(struct ring_buffer *buffer,
 				struct ring_buffer_event *event,
 				unsigned long flags, int pc);
@@ -467,6 +460,7 @@ void trace_function(struct trace_array *tr,

 void trace_graph_return(struct ftrace_graph_ret *trace);
 int trace_graph_entry(struct ftrace_graph_ent *trace);
+void set_graph_array(struct trace_array *tr);

 void tracing_start_cmdline_record(void);
 void tracing_stop_cmdline_record(void);
@@ -478,16 +472,40 @@ void unregister_tracer(struct tracer *type);

 extern unsigned long nsecs_to_usecs(unsigned long nsecs);

+#ifdef CONFIG_TRACER_MAX_TRACE
 extern unsigned long tracing_max_latency;
 extern unsigned long tracing_thresh;

 void update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu);
 void update_max_tr_single(struct trace_array *tr,
 			  struct task_struct *tsk, int cpu);
+#endif /* CONFIG_TRACER_MAX_TRACE */

-void __trace_stack(struct trace_array *tr,
-		   unsigned long flags,
-		   int skip, int pc);
+#ifdef CONFIG_STACKTRACE
+void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
+			int skip, int pc);
+
+void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
+			    int pc);
+
+void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
+		   int pc);
+#else
+static inline void ftrace_trace_stack(struct trace_array *tr,
+				      unsigned long flags, int skip, int pc)
+{
+}
+
+static inline void ftrace_trace_userstack(struct trace_array *tr,
+					  unsigned long flags, int pc)
+{
+}
+
+static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
+				 int skip, int pc)
+{
+}
+#endif /* CONFIG_STACKTRACE */

 extern cycle_t ftrace_now(int cpu);
@@ -513,6 +531,10 @@ extern unsigned long ftrace_update_tot_cnt;
 extern int DYN_FTRACE_TEST_NAME(void);
 #endif

+extern int ring_buffer_expanded;
+extern bool tracing_selftest_disabled;
+DECLARE_PER_CPU(local_t, ftrace_cpu_disabled);
+
 #ifdef CONFIG_FTRACE_STARTUP_TEST
 extern int trace_selftest_startup_function(struct tracer *trace,
 					   struct trace_array *tr);
@@ -544,9 +566,16 @@ extern int
 trace_vbprintk(unsigned long ip, const char *fmt, va_list args);
 extern int
 trace_vprintk(unsigned long ip, const char *fmt, va_list args);
+extern int
+trace_array_vprintk(struct trace_array *tr,
+		    unsigned long ip, const char *fmt, va_list args);
+int trace_array_printk(struct trace_array *tr,
+		       unsigned long ip, const char *fmt, ...);

 extern unsigned long trace_flags;

+extern int trace_clock_id;
+
 /* Standard output formatting function used for function return traces */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 extern enum print_line_t print_graph_function(struct trace_iterator *iter);
@@ -635,9 +664,8 @@ enum trace_iterator_flags {
 	TRACE_ITER_PRINTK_MSGONLY	= 0x10000,
 	TRACE_ITER_CONTEXT_INFO		= 0x20000, /* Print pid/cpu/time */
 	TRACE_ITER_LATENCY_FMT		= 0x40000,
-	TRACE_ITER_GLOBAL_CLK		= 0x80000,
-	TRACE_ITER_SLEEP_TIME		= 0x100000,
-	TRACE_ITER_GRAPH_TIME		= 0x200000,
+	TRACE_ITER_SLEEP_TIME		= 0x80000,
+	TRACE_ITER_GRAPH_TIME		= 0x100000,
 };

 /*
@@ -734,6 +762,7 @@ struct ftrace_event_field {
 	struct list_head	link;
 	char			*name;
 	char			*type;
+	int			filter_type;
 	int			offset;
 	int			size;
 	int			is_signed;
@@ -743,13 +772,15 @@ struct event_filter {
 	int			n_preds;
 	struct filter_pred	**preds;
 	char			*filter_string;
+	bool			no_reset;
 };

 struct event_subsystem {
 	struct list_head	list;
 	const char		*name;
 	struct dentry		*entry;
-	void			*filter;
+	struct event_filter	*filter;
+	int			nr_events;
 };

 struct filter_pred;
@@ -777,6 +808,7 @@ extern int apply_subsystem_event_filter(struct event_subsystem *system,
 					char *filter_string);
 extern void print_subsystem_event_filter(struct event_subsystem *system,
 					 struct trace_seq *s);
+extern int filter_assign_type(const char *type);

 static inline int
 filter_check_discard(struct ftrace_event_call *call, void *rec,
......
[4 file diffs collapsed in this view]
@@ -288,11 +288,9 @@ static int
 ftrace_trace_onoff_print(struct seq_file *m, unsigned long ip,
 			 struct ftrace_probe_ops *ops, void *data)
 {
-	char str[KSYM_SYMBOL_LEN];
 	long count = (long)data;

-	kallsyms_lookup(ip, NULL, NULL, NULL, str);
-	seq_printf(m, "%s:", str);
+	seq_printf(m, "%pf:", (void *)ip);

 	if (ops == &traceon_probe_ops)
 		seq_printf(m, "traceon");
......
[15 file diffs collapsed in this view]