Commit 60eaa019 authored by Linus Torvalds

Merge tag 'trace-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "This pull request has a new feature to ftrace, namely the trace event
  triggers by Tom Zanussi.  A trigger is a way to enable an action when
  an event is hit.  The actions are:

   o  trace on/off - enable or disable tracing
   o  snapshot     - save the current trace buffer in the snapshot
   o  stacktrace   - dump the current stack trace to the ringbuffer
   o  enable/disable events - enable or disable another event

  Namhyung Kim added updates to the tracing uprobes code, extending the
  fetch methods that uprobes support.

  The rest are various bug fixes with the new code, and minor ones for
  the old code"

* tag 'trace-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (38 commits)
  tracing: Fix buggered tee(2) on tracing_pipe
  tracing: Have trace buffer point back to trace_array
  ftrace: Fix synchronization location disabling and freeing ftrace_ops
  ftrace: Have function graph only trace based on global_ops filters
  ftrace: Synchronize setting function_trace_op with ftrace_trace_function
  tracing: Show available event triggers when no trigger is set
  tracing: Consolidate event trigger code
  tracing: Fix counter for traceon/off event triggers
  tracing: Remove double-underscore naming in syscall trigger invocations
  tracing/kprobes: Add trace event trigger invocations
  tracing/probes: Fix build break on !CONFIG_KPROBE_EVENT
  tracing/uprobes: Add @+file_offset fetch method
  uprobes: Allocate ->utask before handler_chain() for tracing handlers
  tracing/uprobes: Add support for full argument access methods
  tracing/uprobes: Fetch args before reserving a ring buffer
  tracing/uprobes: Pass 'is_return' to traceprobe_parse_probe_arg()
  tracing/probes: Implement 'memory' fetch method for uprobes
  tracing/probes: Add fetch{,_size} member into deref fetch method
  tracing/probes: Move 'symbol' fetch method to kprobes
  tracing/probes: Implement 'stack' fetch method for uprobes
  ...
parents df32e43a 92fdd98c
......@@ -287,3 +287,210 @@ their old filters):
prev_pid == 0
# cat sched_wakeup/filter
common_pid == 0
6. Event triggers
=================
Trace events can be made to conditionally invoke trigger 'commands'
which can take various forms and are described in detail below;
examples would be enabling or disabling other trace events or invoking
a stack trace whenever the trace event is hit. Whenever a trace event
with attached triggers is invoked, the set of trigger commands
associated with that event is invoked. Any given trigger can
additionally have an event filter of the same form as described in
section 5 (Event filtering) associated with it - the command will only
be invoked if the event being invoked passes the associated filter.
If no filter is associated with the trigger, it always passes.
Triggers are added to and removed from a particular event by writing
trigger expressions to the 'trigger' file for the given event.
A given event can have any number of triggers associated with it,
subject to any restrictions that individual commands may have in that
regard.
Event triggers are implemented on top of "soft" mode, which means that
whenever a trace event has one or more triggers associated with it,
the event is activated even if it isn't actually enabled, but is
disabled in a "soft" mode. That is, the tracepoint will be called,
but just will not be traced, unless of course it's actually enabled.
This scheme allows triggers to be invoked even for events that aren't
enabled, and also allows the current event filter implementation to be
used for conditionally invoking triggers.
The syntax for event triggers is roughly based on the syntax for
set_ftrace_filter 'ftrace filter commands' (see the 'Filter commands'
section of Documentation/trace/ftrace.txt), but there are major
differences and the implementation isn't currently tied to it in any
way, so beware about making generalizations between the two.
6.1 Expression syntax
---------------------
Triggers are added by echoing the command to the 'trigger' file:
# echo 'command[:count] [if filter]' > trigger
Triggers are removed by echoing the same command but starting with '!'
to the 'trigger' file:
# echo '!command[:count] [if filter]' > trigger
The [if filter] part isn't used in matching commands when removing, so
leaving that off in a '!' command will accomplish the same thing as
having it in.
The filter syntax is the same as that described in the 'Event
filtering' section above.
For ease of use, writing to the trigger file using '>' currently just
adds or removes a single trigger and there's no explicit '>>' support
('>' actually behaves like '>>') or truncation support to remove all
triggers (you have to use '!' for each one added.)
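For example (using the kmem:kmalloc event with an arbitrary count and
filter value), two different triggers can be accumulated on the same event
with successive '>' writes, and one of them later removed with '!'; reading
the 'trigger' file in between would list both:
# echo 'stacktrace:5' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# echo 'traceoff if bytes_req >= 65536' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# echo '!traceoff' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger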
6.2 Supported trigger commands
------------------------------
The following commands are supported:
- enable_event/disable_event
These commands can enable or disable another trace event whenever
the triggering event is hit. When these commands are registered,
the other trace event is activated, but disabled in a "soft" mode.
That is, the tracepoint will be called, but just will not be traced.
The event tracepoint stays in this mode as long as there's a trigger
in effect that can trigger it.
For example, the following trigger causes kmalloc events to be
traced when a read system call is entered, and the :1 at the end
specifies that this enablement happens only once:
# echo 'enable_event:kmem:kmalloc:1' > \
/sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
The following trigger causes kmalloc events to stop being traced
when a read system call exits. This disablement happens on every
read system call exit:
# echo 'disable_event:kmem:kmalloc' > \
/sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
The format is:
enable_event:<system>:<event>[:count]
disable_event:<system>:<event>[:count]
To remove the above commands:
# echo '!enable_event:kmem:kmalloc:1' > \
/sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
# echo '!disable_event:kmem:kmalloc' > \
/sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
Note that there can be any number of enable/disable_event triggers
per triggering event, but there can only be one trigger per
triggered event. e.g. sys_enter_read can have triggers enabling both
kmem:kmalloc and sched:sched_switch, but can't have two kmem:kmalloc
versions such as kmem:kmalloc and kmem:kmalloc:1 or 'kmem:kmalloc if
bytes_req == 256' and 'kmem:kmalloc if bytes_alloc == 256' (they
could be combined into a single filter on kmem:kmalloc though).
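One way to get the combined effect mentioned in the parenthetical above is
to set an event filter (see section 5) on kmem:kmalloc itself and attach a
single unfiltered enable_event trigger (the field values are illustrative):
# echo 'bytes_req == 256 || bytes_alloc == 256' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/filter
# echo 'enable_event:kmem:kmalloc' > \
/sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger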
- stacktrace
This command dumps a stacktrace in the trace buffer whenever the
triggering event occurs.
For example, the following trigger dumps a stacktrace every time the
kmalloc tracepoint is hit:
# echo 'stacktrace' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
The following trigger dumps a stacktrace the first 5 times a kmalloc
request happens with a size >= 64K
# echo 'stacktrace:5 if bytes_req >= 65536' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
The format is:
stacktrace[:count]
To remove the above commands:
# echo '!stacktrace' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# echo '!stacktrace:5 if bytes_req >= 65536' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
The latter can also be removed more simply by the following (without
the filter):
# echo '!stacktrace:5' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
Note that there can be only one stacktrace trigger per triggering
event.
- snapshot
This command causes a snapshot to be triggered whenever the
triggering event occurs.
The following command creates a snapshot every time a block request
queue is unplugged with a depth > 1. If you were tracing a set of
events or functions at the time, the snapshot trace buffer would
capture those events when the trigger event occurred:
# echo 'snapshot if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
To only snapshot once:
# echo 'snapshot:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
To remove the above commands:
# echo '!snapshot if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
# echo '!snapshot:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
Note that there can be only one snapshot trigger per triggering
event.
- traceon/traceoff
These commands turn tracing on and off when the specified events are
hit. The parameter determines how many times the tracing system is
turned on and off. If unspecified, there is no limit.
The following command turns tracing off the first time a block
request queue is unplugged with a depth > 1. If you were tracing a
set of events or functions at the time, you could then examine the
trace buffer to see the sequence of events that led up to the
trigger event:
# echo 'traceoff:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
To always disable tracing when nr_rq > 1:
# echo 'traceoff if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
To remove the above commands:
# echo '!traceoff:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
# echo '!traceoff if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
Note that there can be only one traceon or traceoff trigger per
triggering event.
......@@ -19,18 +19,44 @@ user to calculate the offset of the probepoint in the object.
Synopsis of uprobe_tracer
-------------------------
p[:[GRP/]EVENT] PATH:SYMBOL[+offs] [FETCHARGS] : Set a uprobe
r[:[GRP/]EVENT] PATH:SYMBOL[+offs] [FETCHARGS] : Set a return uprobe (uretprobe)
p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a uprobe
r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a return uprobe (uretprobe)
-:[GRP/]EVENT : Clear uprobe or uretprobe event
GRP : Group name. If omitted, "uprobes" is the default value.
EVENT : Event name. If omitted, the event name is generated based
on SYMBOL+offs.
on PATH+OFFSET.
PATH : Path to an executable or a library.
SYMBOL[+offs] : Symbol+offset where the probe is inserted.
OFFSET : Offset where the probe is inserted.
FETCHARGS : Arguments. Each probe can have up to 128 args.
%REG : Fetch register REG
@ADDR : Fetch memory at ADDR (ADDR should be in userspace)
@+OFFSET : Fetch memory at OFFSET (OFFSET from same file as PATH)
$stackN : Fetch Nth entry of stack (N >= 0)
$stack : Fetch stack address.
$retval : Fetch return value.(*)
+|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**)
NAME=FETCHARG : Set NAME as the argument name of FETCHARG.
FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types
(u8/u16/u32/u64/s8/s16/s32/s64), "string" and bitfield
are supported.
(*) only for return probe.
(**) this is useful for fetching a field of data structures.
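As an illustration only (the binary, probe address, and file offset below
are made up), several of these fetch methods, including the new @+OFFSET
form, can be combined in a single probe definition:
# echo 'p:myprobe /bin/bash:0x4245c0 ax=%ax top=$stack0 data=@+0x6e2d0' > \
/sys/kernel/debug/tracing/uprobe_events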
Types
-----
Several types are supported for fetch-args. The uprobe tracer will access
memory using the given type. The 's' and 'u' prefixes mean the type is
signed or unsigned, respectively. Traced arguments are shown in decimal
(signed) or hex (unsigned).
String type is a special type, which fetches a "null-terminated" string from
user space.
Bitfield is another special type, which takes 3 parameters: bit-width, bit-
offset, and container-size (usually 32). The syntax is:
b<bit-width>@<bit-offset>/<container-size>
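For example (again with a made-up probe address and offset), a string
argument fetched via a file offset and a 3-bit field taken from a register
could be declared as:
# echo 'p:typed /bin/bash:0x4245c0 name=@+0x6e2d0:string flags=%ax:b3@0/32' > \
/sys/kernel/debug/tracing/uprobe_events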
Event Profiling
---------------
......
......@@ -570,8 +570,6 @@ static inline int
ftrace_regex_release(struct inode *inode, struct file *file) { return -ENODEV; }
#endif /* CONFIG_DYNAMIC_FTRACE */
loff_t ftrace_filter_lseek(struct file *file, loff_t offset, int whence);
/* totally disable ftrace - can not re-enable after this */
void ftrace_kill(void);
......
#ifndef _LINUX_FTRACE_EVENT_H
#define _LINUX_FTRACE_EVENT_H
......@@ -264,6 +265,8 @@ enum {
FTRACE_EVENT_FL_NO_SET_FILTER_BIT,
FTRACE_EVENT_FL_SOFT_MODE_BIT,
FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
FTRACE_EVENT_FL_TRIGGER_MODE_BIT,
FTRACE_EVENT_FL_TRIGGER_COND_BIT,
};
/*
......@@ -275,6 +278,8 @@ enum {
* SOFT_MODE - The event is enabled/disabled by SOFT_DISABLED
* SOFT_DISABLED - When set, do not trace the event (even though its
* tracepoint may be enabled)
* TRIGGER_MODE - When set, invoke the triggers associated with the event
* TRIGGER_COND - When set, one or more triggers have an associated filter
*/
enum {
FTRACE_EVENT_FL_ENABLED = (1 << FTRACE_EVENT_FL_ENABLED_BIT),
......@@ -283,6 +288,8 @@ enum {
FTRACE_EVENT_FL_NO_SET_FILTER = (1 << FTRACE_EVENT_FL_NO_SET_FILTER_BIT),
FTRACE_EVENT_FL_SOFT_MODE = (1 << FTRACE_EVENT_FL_SOFT_MODE_BIT),
FTRACE_EVENT_FL_SOFT_DISABLED = (1 << FTRACE_EVENT_FL_SOFT_DISABLED_BIT),
FTRACE_EVENT_FL_TRIGGER_MODE = (1 << FTRACE_EVENT_FL_TRIGGER_MODE_BIT),
FTRACE_EVENT_FL_TRIGGER_COND = (1 << FTRACE_EVENT_FL_TRIGGER_COND_BIT),
};
struct ftrace_event_file {
......@@ -292,6 +299,7 @@ struct ftrace_event_file {
struct dentry *dir;
struct trace_array *tr;
struct ftrace_subsystem_dir *system;
struct list_head triggers;
/*
* 32 bit flags:
......@@ -299,6 +307,7 @@ struct ftrace_event_file {
* bit 1: enabled cmd record
* bit 2: enable/disable with the soft disable bit
* bit 3: soft disabled
* bit 4: trigger enabled
*
* Note: The bits must be set atomically to prevent races
* from other writers. Reads of flags do not need to be in
......@@ -310,6 +319,7 @@ struct ftrace_event_file {
*/
unsigned long flags;
atomic_t sm_ref; /* soft-mode reference counter */
atomic_t tm_ref; /* trigger-mode reference counter */
};
#define __TRACE_EVENT_FLAGS(name, value) \
......@@ -337,6 +347,14 @@ struct ftrace_event_file {
#define MAX_FILTER_STR_VAL 256 /* Should handle KSYM_SYMBOL_LEN */
enum event_trigger_type {
ETT_NONE = (0),
ETT_TRACE_ONOFF = (1 << 0),
ETT_SNAPSHOT = (1 << 1),
ETT_STACKTRACE = (1 << 2),
ETT_EVENT_ENABLE = (1 << 3),
};
extern void destroy_preds(struct ftrace_event_file *file);
extern void destroy_call_preds(struct ftrace_event_call *call);
extern int filter_match_preds(struct event_filter *filter, void *rec);
......@@ -347,6 +365,127 @@ extern int filter_check_discard(struct ftrace_event_file *file, void *rec,
extern int call_filter_check_discard(struct ftrace_event_call *call, void *rec,
struct ring_buffer *buffer,
struct ring_buffer_event *event);
extern enum event_trigger_type event_triggers_call(struct ftrace_event_file *file,
void *rec);
extern void event_triggers_post_call(struct ftrace_event_file *file,
enum event_trigger_type tt);
/**
* ftrace_trigger_soft_disabled - do triggers and test if soft disabled
* @file: The file pointer of the event to test
*
* If any triggers without filters are attached to this event, they
* will be called here. If the event is soft disabled and has no
* triggers that require testing the fields, it will return true,
* otherwise false.
*/
static inline bool
ftrace_trigger_soft_disabled(struct ftrace_event_file *file)
{
unsigned long eflags = file->flags;
if (!(eflags & FTRACE_EVENT_FL_TRIGGER_COND)) {
if (eflags & FTRACE_EVENT_FL_TRIGGER_MODE)
event_triggers_call(file, NULL);
if (eflags & FTRACE_EVENT_FL_SOFT_DISABLED)
return true;
}
return false;
}
/*
* Helper function for event_trigger_unlock_commit{_regs}().
* If there are event triggers attached to this event that require
* filtering against its fields, then they will be called as the
* entry already holds the field information of the current event.
*
* It also checks if the event should be discarded or not.
* It is to be discarded if the event is soft disabled and the
* event was only recorded to process triggers, or if the event
* filter is active and this event did not match the filters.
*
* Returns true if the event is discarded, false otherwise.
*/
static inline bool
__event_trigger_test_discard(struct ftrace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry,
enum event_trigger_type *tt)
{
unsigned long eflags = file->flags;
if (eflags & FTRACE_EVENT_FL_TRIGGER_COND)
*tt = event_triggers_call(file, entry);
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
ring_buffer_discard_commit(buffer, event);
else if (!filter_check_discard(file, entry, buffer, event))
return false;
return true;
}
/**
* event_trigger_unlock_commit - handle triggers and finish event commit
* @file: The file pointer associated with the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* checks whether the event is soft disabled and should be discarded.
*/
static inline void
event_trigger_unlock_commit(struct ftrace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
if (tt)
event_triggers_post_call(file, tt);
}
/**
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
* @file: The file pointer associated with the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* checks whether the event is soft disabled and should be discarded.
*
* Same as event_trigger_unlock_commit() but calls
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
*/
static inline void
event_trigger_unlock_commit_regs(struct ftrace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc,
struct pt_regs *regs)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit_regs(buffer, event,
irq_flags, pc, regs);
if (tt)
event_triggers_post_call(file, tt);
}
enum {
FILTER_OTHER = 0,
......
......@@ -418,6 +418,8 @@ static inline notrace int ftrace_get_offsets_##call( \
* struct ftrace_event_file *ftrace_file = __data;
* struct ftrace_event_call *event_call = ftrace_file->event_call;
* struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
* unsigned long eflags = ftrace_file->flags;
* enum event_trigger_type __tt = ETT_NONE;
* struct ring_buffer_event *event;
* struct ftrace_raw_<call> *entry; <-- defined in stage 1
* struct ring_buffer *buffer;
......@@ -425,9 +427,12 @@ static inline notrace int ftrace_get_offsets_##call( \
* int __data_size;
* int pc;
*
* if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
* &ftrace_file->flags))
* if (!(eflags & FTRACE_EVENT_FL_TRIGGER_COND)) {
* if (eflags & FTRACE_EVENT_FL_TRIGGER_MODE)
* event_triggers_call(ftrace_file, NULL);
* if (eflags & FTRACE_EVENT_FL_SOFT_DISABLED)
* return;
* }
*
* local_save_flags(irq_flags);
* pc = preempt_count();
......@@ -445,8 +450,17 @@ static inline notrace int ftrace_get_offsets_##call( \
* { <assign>; } <-- Here we assign the entries by the __field and
* __array macros.
*
* if (!filter_check_discard(ftrace_file, entry, buffer, event))
* if (eflags & FTRACE_EVENT_FL_TRIGGER_COND)
* __tt = event_triggers_call(ftrace_file, entry);
*
* if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
* &ftrace_file->flags))
* ring_buffer_discard_commit(buffer, event);
* else if (!filter_check_discard(ftrace_file, entry, buffer, event))
* trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
*
* if (__tt)
* event_triggers_post_call(ftrace_file, __tt);
* }
*
* static struct trace_event ftrace_event_type_<call> = {
......@@ -539,8 +553,7 @@ ftrace_raw_event_##call(void *__data, proto) \
int __data_size; \
int pc; \
\
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
&ftrace_file->flags)) \
if (ftrace_trigger_soft_disabled(ftrace_file)) \
return; \
\
local_save_flags(irq_flags); \
......@@ -560,8 +573,8 @@ ftrace_raw_event_##call(void *__data, proto) \
\
{ assign; } \
\
if (!filter_check_discard(ftrace_file, entry, buffer, event)) \
trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
event_trigger_unlock_commit(ftrace_file, buffer, event, entry, \
irq_flags, pc); \
}
/*
* The ftrace_test_probe is compiled out, it is only here as a build time check
......
......@@ -1854,6 +1854,10 @@ static void handle_swbp(struct pt_regs *regs)
if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
goto out;
/* Tracing handlers use ->utask to communicate with fetch methods */
if (!get_utask())
goto out;
handler_chain(uprobe, regs);
if (can_skip_sstep(uprobe, regs))
goto out;
......
......@@ -50,6 +50,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
obj-$(CONFIG_EVENT_TRACING) += trace_event_perf.o
endif
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
ifeq ($(CONFIG_PM_RUNTIME),y)
......
......@@ -594,6 +594,28 @@ void free_snapshot(struct trace_array *tr)
tr->allocated_snapshot = false;
}
/**
* tracing_alloc_snapshot - allocate snapshot buffer.
*
* This only allocates the snapshot buffer if it isn't already
* allocated - it doesn't also take a snapshot.
*
* This is meant to be used in cases where the snapshot buffer needs
* to be set up for events that can't sleep but need to be able to
* trigger a snapshot.
*/
int tracing_alloc_snapshot(void)
{
struct trace_array *tr = &global_trace;
int ret;
ret = alloc_snapshot(tr);
WARN_ON(ret < 0);
return ret;
}
EXPORT_SYMBOL_GPL(tracing_alloc_snapshot);
/**
* tracing_snapshot_alloc - allocate and take a snapshot of the current buffer.
*
......@@ -607,11 +629,10 @@ void free_snapshot(struct trace_array *tr)
*/
void tracing_snapshot_alloc(void)
{
struct trace_array *tr = &global_trace;
int ret;
ret = alloc_snapshot(tr);
if (WARN_ON(ret < 0))
ret = tracing_alloc_snapshot();
if (ret < 0)
return;
tracing_snapshot();
......@@ -623,6 +644,12 @@ void tracing_snapshot(void)
WARN_ONCE(1, "Snapshot feature not enabled, but internal snapshot used");
}
EXPORT_SYMBOL_GPL(tracing_snapshot);
int tracing_alloc_snapshot(void)
{
WARN_ONCE(1, "Snapshot feature not enabled, but snapshot allocation used");
return -ENODEV;
}
EXPORT_SYMBOL_GPL(tracing_alloc_snapshot);
void tracing_snapshot_alloc(void)
{
/* Give warning */
......@@ -3156,19 +3183,23 @@ tracing_write_stub(struct file *filp, const char __user *ubuf,
return count;
}
static loff_t tracing_seek(struct file *file, loff_t offset, int origin)
loff_t tracing_lseek(struct file *file, loff_t offset, int whence)
{
int ret;
if (file->f_mode & FMODE_READ)
return seq_lseek(file, offset, origin);
ret = seq_lseek(file, offset, whence);
else
return 0;
file->f_pos = ret = 0;
return ret;
}
static const struct file_operations tracing_fops = {
.open = tracing_open,
.read = seq_read,
.write = tracing_write_stub,
.llseek = tracing_seek,
.llseek = tracing_lseek,
.release = tracing_release,
};
......@@ -4212,12 +4243,6 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
return sret;
}
static void tracing_pipe_buf_release(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)
{
__free_page(buf->page);
}
static void tracing_spd_release_pipe(struct splice_pipe_desc *spd,
unsigned int idx)
{
......@@ -4229,7 +4254,7 @@ static const struct pipe_buf_operations tracing_pipe_buf_ops = {
.map = generic_pipe_buf_map,
.unmap = generic_pipe_buf_unmap,
.confirm = generic_pipe_buf_confirm,
.release = tracing_pipe_buf_release,
.release = generic_pipe_buf_release,
.steal = generic_pipe_buf_steal,
.get = generic_pipe_buf_get,
};
......@@ -4913,7 +4938,7 @@ static const struct file_operations snapshot_fops = {
.open = tracing_snapshot_open,
.read = seq_read,
.write = tracing_snapshot_write,
.llseek = tracing_seek,
.llseek = tracing_lseek,
.release = tracing_snapshot_release,
};
......@@ -5883,6 +5908,8 @@ allocate_trace_buffer(struct trace_array *tr, struct trace_buffer *buf, int size
rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
buf->tr = tr;
buf->buffer = ring_buffer_alloc(size, rb_flags);
if (!buf->buffer)
return -ENOMEM;
......
#ifndef _LINUX_KERNEL_TRACE_H
#define _LINUX_KERNEL_TRACE_H
......@@ -587,6 +588,8 @@ void tracing_start_sched_switch_record(void);
int register_tracer(struct tracer *type);
int is_tracing_stopped(void);
loff_t tracing_lseek(struct file *file, loff_t offset, int whence);
extern cpumask_var_t __read_mostly tracing_buffer_mask;
#define for_each_tracing_cpu(cpu) \
......@@ -1020,6 +1023,10 @@ extern int apply_subsystem_event_filter(struct ftrace_subsystem_dir *dir,
extern void print_subsystem_event_filter(struct event_subsystem *system,
struct trace_seq *s);
extern int filter_assign_type(const char *type);
extern int create_event_filter(struct ftrace_event_call *call,
char *filter_str, bool set_str,
struct event_filter **filterp);
extern void free_event_filter(struct event_filter *filter);
struct ftrace_event_field *
trace_find_event_field(struct ftrace_event_call *call, char *name);
......@@ -1028,9 +1035,195 @@ extern void trace_event_enable_cmd_record(bool enable);
extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
extern int event_trace_del_tracer(struct trace_array *tr);
extern struct ftrace_event_file *find_event_file(struct trace_array *tr,
const char *system,
const char *event);
static inline void *event_file_data(struct file *filp)
{
return ACCESS_ONCE(file_inode(filp)->i_private);
}
extern struct mutex event_mutex;
extern struct list_head ftrace_events;
extern const struct file_operations event_trigger_fops;
extern int register_trigger_cmds(void);
extern void clear_event_triggers(struct trace_array *tr);
struct event_trigger_data {
unsigned long count;
int ref;
struct event_trigger_ops *ops;
struct event_command *cmd_ops;
struct event_filter __rcu *filter;
char *filter_str;
void *private_data;
struct list_head list;
};
/**
* struct event_trigger_ops - callbacks for trace event triggers
*
* The methods in this structure provide per-event trigger hooks for
* various trigger operations.
*
* All the methods below, except for @init() and @free(), must be
* implemented.
*
* @func: The trigger 'probe' function called when the triggering
* event occurs. The data passed into this callback is the data
* that was supplied to the event_command @reg() function that
* registered the trigger (see struct event_command).
*
* @init: An optional initialization function called for the trigger
* when the trigger is registered (via the event_command reg()
* function). This can be used to perform per-trigger
* initialization such as incrementing a per-trigger reference
* count, for instance. This is usually implemented by the
* generic utility function @event_trigger_init() (see
* trace_events_trigger.c).
*
* @free: An optional de-initialization function called for the
* trigger when the trigger is unregistered (via the
* event_command @reg() function). This can be used to perform
* per-trigger de-initialization such as decrementing a
* per-trigger reference count and freeing corresponding trigger
* data, for instance. This is usually implemented by the
* generic utility function @event_trigger_free() (see
* trace_events_trigger.c).
*
* @print: The callback function invoked to have the trigger print
* itself. This is usually implemented by a wrapper function
* that calls the generic utility function @event_trigger_print()
* (see trace_events_trigger.c).
*/
struct event_trigger_ops {
void (*func)(struct event_trigger_data *data);
int (*init)(struct event_trigger_ops *ops,
struct event_trigger_data *data);
void (*free)(struct event_trigger_ops *ops,
struct event_trigger_data *data);
int (*print)(struct seq_file *m,
struct event_trigger_ops *ops,
struct event_trigger_data *data);
};
/**
* struct event_command - callbacks and data members for event commands
*
* Event commands are invoked by users by writing the command name
* into the 'trigger' file associated with a trace event. The
* parameters associated with a specific invocation of an event
* command are used to create an event trigger instance, which is
* added to the list of trigger instances associated with that trace
* event. When the event is hit, the set of triggers associated with
* that event is invoked.
*
* The data members in this structure provide per-event command data
* for various event commands.
*
* All the data members below, except for @post_trigger, must be set
* for each event command.
*
* @name: The unique name that identifies the event command. This is
* the name used when setting triggers via trigger files.
*
* @trigger_type: A unique id that identifies the event command
* 'type'. This value has two purposes, the first to ensure that
* only one trigger of the same type can be set at a given time
* for a particular event e.g. it doesn't make sense to have both
* a traceon and traceoff trigger attached to a single event at
* the same time, so traceon and traceoff have the same type
* though they have different names. The @trigger_type value is
* also used as a bit value for deferring the actual trigger
* action until after the current event is finished. Some
* commands need to do this if they themselves log to the trace
* buffer (see the @post_trigger() member below). @trigger_type
* values are defined by adding new values to the trigger_type
* enum in include/linux/ftrace_event.h.
*
* @post_trigger: A flag that says whether or not this command needs
* to have its action delayed until after the current event has
* been closed. Some triggers need to avoid being invoked while
* an event is currently in the process of being logged, since
* the trigger may itself log data into the trace buffer. Thus
* we make sure the current event is committed before invoking
* those triggers. To do that, the trigger invocation is split
* in two - the first part checks the filter using the current
* trace record; if a command has the @post_trigger flag set, it
* sets a bit for itself in the return value, otherwise it
* directly invokes the trigger. Once all commands have been
* either invoked or set their return flag, the current record is
* either committed or discarded. At that point, if any commands
* have deferred their triggers, those commands are finally
* invoked following the close of the current event. In other
* words, if the event_trigger_ops @func() probe implementation
* itself logs to the trace buffer, this flag should be set,
* otherwise it can be left unspecified.
*
* All the methods below, except for @set_filter(), must be
* implemented.
*
* @func: The callback function responsible for parsing and
* registering the trigger written to the 'trigger' file by the
* user. It allocates the trigger instance and registers it with
* the appropriate trace event. It makes use of the other
* event_command callback functions to orchestrate this, and is
* usually implemented by the generic utility function
* @event_trigger_callback() (see trace_events_trigger.c).
*
* @reg: Adds the trigger to the list of triggers associated with the
* event, and enables the event trigger itself, after
* initializing it (via the event_trigger_ops @init() function).
* This is also where commands can use the @trigger_type value to
* make the decision as to whether or not multiple instances of
* the trigger should be allowed. This is usually implemented by
* the generic utility function @register_trigger() (see
* trace_events_trigger.c).
*
* @unreg: Removes the trigger from the list of triggers associated
* with the event, and disables the event trigger itself, after
* de-initializing it (via the event_trigger_ops @free() function).
* This is usually implemented by the generic utility function
* @unregister_trigger() (see trace_events_trigger.c).
*
* @set_filter: An optional function called to parse and set a filter
* for the trigger. If no @set_filter() method is set for the
* event command, filters set by the user for the command will be
* ignored. This is usually implemented by the generic utility
* function @set_trigger_filter() (see trace_events_trigger.c).
*
* @get_trigger_ops: The callback function invoked to retrieve the
* event_trigger_ops implementation associated with the command.
*/
struct event_command {
struct list_head list;
char *name;
enum event_trigger_type trigger_type;
bool post_trigger;
int (*func)(struct event_command *cmd_ops,
struct ftrace_event_file *file,
char *glob, char *cmd, char *params);
int (*reg)(char *glob,
struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct ftrace_event_file *file);
void (*unreg)(char *glob,
struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct ftrace_event_file *file);
int (*set_filter)(char *filter_str,
struct event_trigger_data *data,
struct ftrace_event_file *file);
struct event_trigger_ops *(*get_trigger_ops)(char *cmd, char *param);
};
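To make the callback wiring described above concrete, here is a minimal
sketch (not part of this commit) of how a hypothetical command could be
assembled from the generic helpers the comments name. It assumes the code
lives alongside those helpers in kernel/trace/trace_events_trigger.c; the
"mycmd" name, its trigger body, and the placeholder trigger_type are all
invented for illustration:
static void mycmd_trigger(struct event_trigger_data *data)
{
	/* the action performed each time the triggering event fires */
}

static int mycmd_trigger_print(struct seq_file *m,
			       struct event_trigger_ops *ops,
			       struct event_trigger_data *data)
{
	/* reuse the generic printer to show "mycmd[:count] [if filter]" */
	return event_trigger_print("mycmd", m, (void *)data->count,
				   data->filter_str);
}

static struct event_trigger_ops mycmd_trigger_ops = {
	.func	= mycmd_trigger,
	.print	= mycmd_trigger_print,
	.init	= event_trigger_init,	/* generic ref-counting init */
	.free	= event_trigger_free,	/* generic ref-counting free */
};

static struct event_trigger_ops *
mycmd_get_trigger_ops(char *cmd, char *param)
{
	return &mycmd_trigger_ops;
}

static struct event_command trigger_mycmd_cmd = {
	.name			= "mycmd",
	.trigger_type		= ETT_NONE,	/* a real command adds a new enum bit */
	.post_trigger		= false,	/* probe does not log to the trace buffer */
	.func			= event_trigger_callback,	/* generic parse/register */
	.reg			= register_trigger,
	.unreg			= unregister_trigger,
	.get_trigger_ops	= mycmd_get_trigger_ops,
	.set_filter		= set_trigger_filter,
};
A real command would additionally be hooked into register_trigger_cmds() so
that it is registered at boot.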
extern int trace_event_enable_disable(struct ftrace_event_file *file,
int enable, int soft_disable);
extern int tracing_alloc_snapshot(void);
extern const char *__start___trace_bprintk_fmt[];
extern const char *__stop___trace_bprintk_fmt[];
......
......@@ -342,6 +342,12 @@ static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
return ret;
}
int trace_event_enable_disable(struct ftrace_event_file *file,
int enable, int soft_disable)
{
return __ftrace_event_enable_disable(file, enable, soft_disable);
}
static int ftrace_event_enable_disable(struct ftrace_event_file *file,
int enable)
{
......@@ -421,11 +427,6 @@ static void remove_subsystem(struct ftrace_subsystem_dir *dir)
}
}
static void *event_file_data(struct file *filp)
{
return ACCESS_ONCE(file_inode(filp)->i_private);
}
static void remove_event_file_dir(struct ftrace_event_file *file)
{
struct dentry *dir = file->dir;
......@@ -1549,6 +1550,9 @@ event_create_dir(struct dentry *parent, struct ftrace_event_file *file)
trace_create_file("filter", 0644, file->dir, file,
&ftrace_event_filter_fops);
trace_create_file("trigger", 0644, file->dir, file,
&event_trigger_fops);
trace_create_file("format", 0444, file->dir, call,
&ftrace_event_format_fops);
......@@ -1645,6 +1649,8 @@ trace_create_new_event(struct ftrace_event_call *call,
file->event_call = call;
file->tr = tr;
atomic_set(&file->sm_ref, 0);
atomic_set(&file->tm_ref, 0);
INIT_LIST_HEAD(&file->triggers);
list_add(&file->list, &tr->events);
return file;
......@@ -1849,20 +1855,7 @@ __trace_add_event_dirs(struct trace_array *tr)
}
}
#ifdef CONFIG_DYNAMIC_FTRACE
/* Avoid typos */
#define ENABLE_EVENT_STR "enable_event"
#define DISABLE_EVENT_STR "disable_event"
struct event_probe_data {
struct ftrace_event_file *file;
unsigned long count;
int ref;
bool enable;
};
static struct ftrace_event_file *
struct ftrace_event_file *
find_event_file(struct trace_array *tr, const char *system, const char *event)
{
struct ftrace_event_file *file;
......@@ -1885,6 +1878,19 @@ find_event_file(struct trace_array *tr, const char *system, const char *event)
return NULL;
}
#ifdef CONFIG_DYNAMIC_FTRACE
/* Avoid typos */
#define ENABLE_EVENT_STR "enable_event"
#define DISABLE_EVENT_STR "disable_event"
struct event_probe_data {
struct ftrace_event_file *file;
unsigned long count;
int ref;
bool enable;
};
static void
event_enable_probe(unsigned long ip, unsigned long parent_ip, void **_data)
{
......@@ -2311,6 +2317,9 @@ int event_trace_del_tracer(struct trace_array *tr)
{
mutex_lock(&event_mutex);
/* Disable any event triggers and associated soft-disabled events */
clear_event_triggers(tr);
/* Disable any running events */
__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
......@@ -2377,6 +2386,8 @@ static __init int event_trace_enable(void)
register_event_cmds();
register_trigger_cmds();
return 0;
}
......
......@@ -799,6 +799,11 @@ static void __free_filter(struct event_filter *filter)
kfree(filter);
}
void free_event_filter(struct event_filter *filter)
{
__free_filter(filter);
}
void destroy_call_preds(struct ftrace_event_call *call)
{
__free_filter(call->filter);
......@@ -1938,6 +1943,13 @@ static int create_filter(struct ftrace_event_call *call,
return err;
}
int create_event_filter(struct ftrace_event_call *call,
char *filter_str, bool set_str,
struct event_filter **filterp)
{
return create_filter(call, filter_str, set_str, filterp);
}
/**
* create_system_filter - create a filter for an event_subsystem
* @system: event_subsystem to create a filter for
......
......@@ -382,7 +382,7 @@ static const struct file_operations stack_trace_filter_fops = {
.open = stack_trace_filter_open,
.read = seq_read,
.write = ftrace_filter_write,
.llseek = ftrace_filter_lseek,
.llseek = tracing_lseek,
.release = ftrace_regex_release,
};
......
......@@ -321,7 +321,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
if (!ftrace_file)
return;
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
if (ftrace_trigger_soft_disabled(ftrace_file))
return;
sys_data = syscall_nr_to_meta(syscall_nr);
......@@ -343,8 +343,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
entry->nr = syscall_nr;
syscall_get_arguments(current, regs, 0, sys_data->nb_args, entry->args);
if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_current_buffer_unlock_commit(buffer, event,
event_trigger_unlock_commit(ftrace_file, buffer, event, entry,
irq_flags, pc);
}
......@@ -369,7 +368,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (!ftrace_file)
return;
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
if (ftrace_trigger_soft_disabled(ftrace_file))
return;
sys_data = syscall_nr_to_meta(syscall_nr);
......@@ -390,8 +389,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
entry->nr = syscall_nr;
entry->ret = syscall_get_return_value(current, regs);
if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_current_buffer_unlock_commit(buffer, event,
event_trigger_unlock_commit(ftrace_file, buffer, event, entry,
irq_flags, pc);
}
......