Commit 01732755 authored by Linus Torvalds

Merge tag 'probes-v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull probes updates from Masami Hiramatsu:
 "x86 kprobes:

   - Use boolean return values for some functions instead of 0 and 1

   - Prohibit probing on INT/UD. This prevents users from putting a
     kprobe on INTn/INT1/INT3/INTO and UD0/UD1/UD2, because these are
     used for special purposes in the kernel

   - Boost Grp instructions. Because a few percent of kernel
     instructions are Grp 2/3/4/5 and those are safe to execute without
     an ip register fixup, allow them to be boosted (direct execution
     on the trampoline buffer with a JMP)

  tracing:

   - Add function argument access from return events (kretprobe and
     fprobe). This allows users to compare how a data structure field
     is changed after executing a function. With BTF, the return event
     also accepts function argument access by name.

   - Fix an incorrect comment (it said "Kretprobe" in the fprobe code)

   - Clean up the big probe argument parser function by splitting it
     into three parts: type parser, post-processing function, and main
     parser

   - Clean up by setting the nr_args field when initializing
     trace_probe instead of counting it up while parsing

   - Clean up a redundant #else block in the tracefs README source code

   - Update selftests to check entry argument access from return probes

   - Documentation update about entry argument access from return
     probes"
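
As a quick illustration of the headline change (a minimal sketch based on the
documentation added in this merge; vfs_open and the field name come from that
documentation, and the event name 'myopen' is arbitrary), a return probe can
now fetch function arguments:

  # cd /sys/kernel/tracing
  # echo 'f vfs_open%return mode=file->f_mode:x32' >> dynamic_events
  # echo 'r:myopen vfs_open arg=$arg1' >> kprobe_events

The first line creates an fprobe exit event that records a field of the first
argument; the second creates a kretprobe event that records the raw first
argument. In both cases the value is fetched at function entry and emitted
when the function returns.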

* tag 'probes-v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  Documentation: tracing: Add entry argument access at function exit
  selftests/ftrace: Add test cases for entry args at function exit
  tracing/probes: Support $argN in return probe (kprobe and fprobe)
  tracing: Remove redundant #else block for BTF args from README
  tracing/probes: cleanup: Set trace_probe::nr_args at trace_probe_init
  tracing/probes: Cleanup probe argument parser
  tracing/fprobe-event: cleanup: Fix a wrong comment in fprobe event
  x86/kprobes: Boost more instructions from grp2/3/4/5
  x86/kprobes: Prohibit kprobing on INT and UD
  x86/kprobes: Refactor can_{probe,boost} return type to bool
parents c0a614e8 e8c32f24
...@@ -70,6 +70,14 @@ Synopsis of fprobe-events
For the details of TYPE, see :ref:`kprobetrace documentation <kprobetrace_types>`.
Function arguments at exit
--------------------------
Function arguments can be accessed at the exit probe using the $arg<N> fetcharg.
This is useful to record the function parameters and the return value at once,
and to trace the changes of structure fields (for debugging whether a function
correctly updates the given data structure or not).
See the :ref:`sample<fprobetrace_exit_args_sample>` below for how it works.
BTF arguments
-------------
BTF (BPF Type Format) argument allows user to trace function and tracepoint
...@@ -218,3 +226,26 @@ traceprobe event, you can trace that field as below.
<idle>-0 [000] d..3. 5606.690317: sched_switch: (__probestub_sched_switch+0x4/0x10) comm="kworker/0:1" usage=1 start_time=137000000
kworker/0:1-14 [000] d..3. 5606.690339: sched_switch: (__probestub_sched_switch+0x4/0x10) comm="swapper/0" usage=2 start_time=0
<idle>-0 [000] d..3. 5606.692368: sched_switch: (__probestub_sched_switch+0x4/0x10) comm="kworker/0:1" usage=1 start_time=137000000
.. _fprobetrace_exit_args_sample:
The return probe allows us to access the results of some functions, which
return an error code while their actual results are passed back via a function
parameter, such as a structure-initialization function.
For example, vfs_open() will link the file structure to the inode and update
the mode. You can trace those changes with a return probe.
::
# echo 'f vfs_open mode=file->f_mode:x32 inode=file->f_inode:x64' >> dynamic_events
# echo 'f vfs_open%return mode=file->f_mode:x32 inode=file->f_inode:x64' >> dynamic_events
# echo 1 > events/fprobes/enable
# cat trace
sh-131 [006] ...1. 1945.714346: vfs_open__entry: (vfs_open+0x4/0x40) mode=0x2 inode=0x0
sh-131 [006] ...1. 1945.714358: vfs_open__exit: (do_open+0x274/0x3d0 <- vfs_open) mode=0x4d801e inode=0xffff888008470168
cat-143 [007] ...1. 1945.717949: vfs_open__entry: (vfs_open+0x4/0x40) mode=0x1 inode=0x0
cat-143 [007] ...1. 1945.717956: vfs_open__exit: (do_open+0x274/0x3d0 <- vfs_open) mode=0x4a801d inode=0xffff888005f78d28
cat-143 [007] ...1. 1945.720616: vfs_open__entry: (vfs_open+0x4/0x40) mode=0x1 inode=0x0
cat-143 [007] ...1. 1945.728263: vfs_open__exit: (do_open+0x274/0x3d0 <- vfs_open) mode=0xa800d inode=0xffff888004ada8d8
You can see the `file::f_mode` and `file::f_inode` are updated in `vfs_open()`.
...@@ -70,6 +70,15 @@ Synopsis of kprobe_events
(\*3) this is useful for fetching a field of data structures.
(\*4) "u" means user-space dereference. See :ref:`user_mem_access`.
Function arguments at kretprobe
-------------------------------
Function arguments can be accessed at the kretprobe using the $arg<N> fetcharg.
This is useful to record the function parameters and the return value at once,
and to trace the changes of structure fields (for debugging whether a function
correctly updates the given data structure or not).
See the :ref:`sample<fprobetrace_exit_args_sample>` in the fprobe event
documentation for how it works.
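
For example (a minimal sketch mirroring the selftest added in this series;
the event names are arbitrary)::

 # echo 'p:myevent1 vfs_open arg=$arg1' >> kprobe_events
 # echo 'r:myevent2 vfs_open arg=$arg1' >> kprobe_events
 # echo 1 > events/kprobes/enable
 # cat trace

For a given call, both events should show the same arg value, because the
return event fetches $arg1 at function entry and emits it when vfs_open()
returns.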
.. _kprobetrace_types:
Types
......
...@@ -78,7 +78,7 @@ ...@@ -78,7 +78,7 @@
#endif #endif
/* Ensure if the instruction can be boostable */ /* Ensure if the instruction can be boostable */
extern int can_boost(struct insn *insn, void *orig_addr); extern bool can_boost(struct insn *insn, void *orig_addr);
/* Recover instruction if given address is probed */ /* Recover instruction if given address is probed */
extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf, extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
unsigned long addr); unsigned long addr);
......
...@@ -137,14 +137,14 @@ NOKPROBE_SYMBOL(synthesize_relcall); ...@@ -137,14 +137,14 @@ NOKPROBE_SYMBOL(synthesize_relcall);
* Returns non-zero if INSN is boostable. * Returns non-zero if INSN is boostable.
* RIP relative instructions are adjusted at copying time in 64 bits mode * RIP relative instructions are adjusted at copying time in 64 bits mode
*/ */
int can_boost(struct insn *insn, void *addr) bool can_boost(struct insn *insn, void *addr)
{ {
kprobe_opcode_t opcode; kprobe_opcode_t opcode;
insn_byte_t prefix; insn_byte_t prefix;
int i; int i;
if (search_exception_tables((unsigned long)addr)) if (search_exception_tables((unsigned long)addr))
return 0; /* Page fault may occur on this address. */ return false; /* Page fault may occur on this address. */
/* 2nd-byte opcode */ /* 2nd-byte opcode */
if (insn->opcode.nbytes == 2) if (insn->opcode.nbytes == 2)
...@@ -152,7 +152,7 @@ int can_boost(struct insn *insn, void *addr) ...@@ -152,7 +152,7 @@ int can_boost(struct insn *insn, void *addr)
(unsigned long *)twobyte_is_boostable); (unsigned long *)twobyte_is_boostable);
if (insn->opcode.nbytes != 1) if (insn->opcode.nbytes != 1)
return 0; return false;
for_each_insn_prefix(insn, i, prefix) { for_each_insn_prefix(insn, i, prefix) {
insn_attr_t attr; insn_attr_t attr;
...@@ -160,7 +160,7 @@ int can_boost(struct insn *insn, void *addr) ...@@ -160,7 +160,7 @@ int can_boost(struct insn *insn, void *addr)
attr = inat_get_opcode_attribute(prefix); attr = inat_get_opcode_attribute(prefix);
/* Can't boost Address-size override prefix and CS override prefix */ /* Can't boost Address-size override prefix and CS override prefix */
if (prefix == 0x2e || inat_is_address_size_prefix(attr)) if (prefix == 0x2e || inat_is_address_size_prefix(attr))
return 0; return false;
} }
opcode = insn->opcode.bytes[0]; opcode = insn->opcode.bytes[0];
...@@ -169,24 +169,35 @@ int can_boost(struct insn *insn, void *addr) ...@@ -169,24 +169,35 @@ int can_boost(struct insn *insn, void *addr)
case 0x62: /* bound */ case 0x62: /* bound */
case 0x70 ... 0x7f: /* Conditional jumps */ case 0x70 ... 0x7f: /* Conditional jumps */
case 0x9a: /* Call far */ case 0x9a: /* Call far */
case 0xc0 ... 0xc1: /* Grp2 */
case 0xcc ... 0xce: /* software exceptions */ case 0xcc ... 0xce: /* software exceptions */
case 0xd0 ... 0xd3: /* Grp2 */
case 0xd6: /* (UD) */ case 0xd6: /* (UD) */
case 0xd8 ... 0xdf: /* ESC */ case 0xd8 ... 0xdf: /* ESC */
case 0xe0 ... 0xe3: /* LOOP*, JCXZ */ case 0xe0 ... 0xe3: /* LOOP*, JCXZ */
case 0xe8 ... 0xe9: /* near Call, JMP */ case 0xe8 ... 0xe9: /* near Call, JMP */
case 0xeb: /* Short JMP */ case 0xeb: /* Short JMP */
case 0xf0 ... 0xf4: /* LOCK/REP, HLT */ case 0xf0 ... 0xf4: /* LOCK/REP, HLT */
/* ... are not boostable */
return false;
case 0xc0 ... 0xc1: /* Grp2 */
case 0xd0 ... 0xd3: /* Grp2 */
/*
* AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
*/
return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
case 0xf6 ... 0xf7: /* Grp3 */ case 0xf6 ... 0xf7: /* Grp3 */
/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
case 0xfe: /* Grp4 */ case 0xfe: /* Grp4 */
/* ... are not boostable */ /* Only INC and DEC are boostable */
return 0; return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
case 0xff: /* Grp5 */ case 0xff: /* Grp5 */
/* Only indirect jmp is boostable */ /* Only INC, DEC, and indirect JMP are boostable */
return X86_MODRM_REG(insn->modrm.bytes[0]) == 4; return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
default: default:
return 1; return true;
} }
} }
...@@ -252,21 +263,40 @@ unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long add ...@@ -252,21 +263,40 @@ unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long add
return __recover_probed_insn(buf, addr); return __recover_probed_insn(buf, addr);
} }
/* Check if paddr is at an instruction boundary */ /* Check if insn is INT or UD */
static int can_probe(unsigned long paddr) static inline bool is_exception_insn(struct insn *insn)
{
/* UD uses 0f escape */
if (insn->opcode.bytes[0] == 0x0f) {
/* UD0 / UD1 / UD2 */
return insn->opcode.bytes[1] == 0xff ||
insn->opcode.bytes[1] == 0xb9 ||
insn->opcode.bytes[1] == 0x0b;
}
/* INT3 / INT n / INTO / INT1 */
return insn->opcode.bytes[0] == 0xcc ||
insn->opcode.bytes[0] == 0xcd ||
insn->opcode.bytes[0] == 0xce ||
insn->opcode.bytes[0] == 0xf1;
}
/*
* Check if paddr is at an instruction boundary and that instruction can
* be probed
*/
static bool can_probe(unsigned long paddr)
{ {
unsigned long addr, __addr, offset = 0; unsigned long addr, __addr, offset = 0;
struct insn insn; struct insn insn;
kprobe_opcode_t buf[MAX_INSN_SIZE]; kprobe_opcode_t buf[MAX_INSN_SIZE];
if (!kallsyms_lookup_size_offset(paddr, NULL, &offset)) if (!kallsyms_lookup_size_offset(paddr, NULL, &offset))
return 0; return false;
/* Decode instructions */ /* Decode instructions */
addr = paddr - offset; addr = paddr - offset;
while (addr < paddr) { while (addr < paddr) {
int ret;
/* /*
* Check if the instruction has been modified by another * Check if the instruction has been modified by another
* kprobe, in which case we replace the breakpoint by the * kprobe, in which case we replace the breakpoint by the
...@@ -277,11 +307,10 @@ static int can_probe(unsigned long paddr) ...@@ -277,11 +307,10 @@ static int can_probe(unsigned long paddr)
*/ */
__addr = recover_probed_instruction(buf, addr); __addr = recover_probed_instruction(buf, addr);
if (!__addr) if (!__addr)
return 0; return false;
ret = insn_decode_kernel(&insn, (void *)__addr); if (insn_decode_kernel(&insn, (void *)__addr) < 0)
if (ret < 0) return false;
return 0;
#ifdef CONFIG_KGDB #ifdef CONFIG_KGDB
/* /*
...@@ -290,10 +319,26 @@ static int can_probe(unsigned long paddr) ...@@ -290,10 +319,26 @@ static int can_probe(unsigned long paddr)
*/ */
if (insn.opcode.bytes[0] == INT3_INSN_OPCODE && if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
kgdb_has_hit_break(addr)) kgdb_has_hit_break(addr))
return 0; return false;
#endif #endif
addr += insn.length; addr += insn.length;
} }
/* Check if paddr is at an instruction boundary */
if (addr != paddr)
return false;
__addr = recover_probed_instruction(buf, addr);
if (!__addr)
return false;
if (insn_decode_kernel(&insn, (void *)__addr) < 0)
return false;
/* INT and UD are special and should not be kprobed */
if (is_exception_insn(&insn))
return false;
if (IS_ENABLED(CONFIG_CFI_CLANG)) { if (IS_ENABLED(CONFIG_CFI_CLANG)) {
/* /*
* The compiler generates the following instruction sequence * The compiler generates the following instruction sequence
...@@ -308,13 +353,6 @@ static int can_probe(unsigned long paddr) ...@@ -308,13 +353,6 @@ static int can_probe(unsigned long paddr)
* Also, these movl and addl are used for showing expected * Also, these movl and addl are used for showing expected
* type. So those must not be touched. * type. So those must not be touched.
*/ */
__addr = recover_probed_instruction(buf, addr);
if (!__addr)
return 0;
if (insn_decode_kernel(&insn, (void *)__addr) < 0)
return 0;
if (insn.opcode.value == 0xBA) if (insn.opcode.value == 0xBA)
offset = 12; offset = 12;
else if (insn.opcode.value == 0x3) else if (insn.opcode.value == 0x3)
...@@ -324,11 +362,11 @@ static int can_probe(unsigned long paddr) ...@@ -324,11 +362,11 @@ static int can_probe(unsigned long paddr)
/* This movl/addl is used for decoding CFI. */ /* This movl/addl is used for decoding CFI. */
if (is_cfi_trap(addr + offset)) if (is_cfi_trap(addr + offset))
return 0; return false;
} }
out: out:
return (addr == paddr); return true;
} }
/* If x86 supports IBT (ENDBR) it must be skipped. */ /* If x86 supports IBT (ENDBR) it must be skipped. */
......
...@@ -5747,16 +5747,15 @@ static const char readme_msg[] = ...@@ -5747,16 +5747,15 @@ static const char readme_msg[] =
"\t args: <name>=fetcharg[:type]\n" "\t args: <name>=fetcharg[:type]\n"
"\t fetcharg: (%<register>|$<efield>), @<address>, @<symbol>[+|-<offset>],\n" "\t fetcharg: (%<register>|$<efield>), @<address>, @<symbol>[+|-<offset>],\n"
#ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
#ifdef CONFIG_PROBE_EVENTS_BTF_ARGS
"\t $stack<index>, $stack, $retval, $comm, $arg<N>,\n" "\t $stack<index>, $stack, $retval, $comm, $arg<N>,\n"
#ifdef CONFIG_PROBE_EVENTS_BTF_ARGS
"\t <argname>[->field[->field|.field...]],\n" "\t <argname>[->field[->field|.field...]],\n"
#else
"\t $stack<index>, $stack, $retval, $comm, $arg<N>,\n"
#endif #endif
#else #else
"\t $stack<index>, $stack, $retval, $comm,\n" "\t $stack<index>, $stack, $retval, $comm,\n"
#endif #endif
"\t +|-[u]<offset>(<fetcharg>), \\imm-value, \\\"imm-string\"\n" "\t +|-[u]<offset>(<fetcharg>), \\imm-value, \\\"imm-string\"\n"
"\t kernel return probes support: $retval, $arg<N>, $comm\n"
"\t type: s8/16/32/64, u8/16/32/64, x8/16/32/64, char, string, symbol,\n" "\t type: s8/16/32/64, u8/16/32/64, x8/16/32/64, char, string, symbol,\n"
"\t b<bit-width>@<bit-offset>/<container-size>, ustring,\n" "\t b<bit-width>@<bit-offset>/<container-size>, ustring,\n"
"\t symstr, <type>\\[<array-size>\\]\n" "\t symstr, <type>\\[<array-size>\\]\n"
......
...@@ -220,7 +220,7 @@ static struct trace_eprobe *alloc_event_probe(const char *group, ...@@ -220,7 +220,7 @@ static struct trace_eprobe *alloc_event_probe(const char *group,
if (!ep->event_system) if (!ep->event_system)
goto error; goto error;
ret = trace_probe_init(&ep->tp, this_event, group, false); ret = trace_probe_init(&ep->tp, this_event, group, false, nargs);
if (ret < 0) if (ret < 0)
goto error; goto error;
...@@ -390,8 +390,8 @@ static int get_eprobe_size(struct trace_probe *tp, void *rec) ...@@ -390,8 +390,8 @@ static int get_eprobe_size(struct trace_probe *tp, void *rec)
/* Note that we don't verify it, since the code does not come from user space */ /* Note that we don't verify it, since the code does not come from user space */
static int static int
process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *base) void *dest, void *base)
{ {
unsigned long val; unsigned long val;
int ret; int ret;
...@@ -438,7 +438,7 @@ __eprobe_trace_func(struct eprobe_data *edata, void *rec) ...@@ -438,7 +438,7 @@ __eprobe_trace_func(struct eprobe_data *edata, void *rec)
return; return;
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
store_trace_args(&entry[1], &edata->ep->tp, rec, sizeof(*entry), dsize); store_trace_args(&entry[1], &edata->ep->tp, rec, NULL, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer); trace_event_buffer_commit(&fbuffer);
} }
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
* Copyright (C) 2022 Google LLC. * Copyright (C) 2022 Google LLC.
*/ */
#define pr_fmt(fmt) "trace_fprobe: " fmt #define pr_fmt(fmt) "trace_fprobe: " fmt
#include <asm/ptrace.h>
#include <linux/fprobe.h> #include <linux/fprobe.h>
#include <linux/module.h> #include <linux/module.h>
...@@ -129,8 +130,8 @@ static bool trace_fprobe_is_registered(struct trace_fprobe *tf) ...@@ -129,8 +130,8 @@ static bool trace_fprobe_is_registered(struct trace_fprobe *tf)
* from user space. * from user space.
*/ */
static int static int
process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *base) void *dest, void *base)
{ {
struct pt_regs *regs = rec; struct pt_regs *regs = rec;
unsigned long val; unsigned long val;
...@@ -152,6 +153,9 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, ...@@ -152,6 +153,9 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
case FETCH_OP_ARG: case FETCH_OP_ARG:
val = regs_get_kernel_argument(regs, code->param); val = regs_get_kernel_argument(regs, code->param);
break; break;
case FETCH_OP_EDATA:
val = *(unsigned long *)((unsigned long)edata + code->offset);
break;
#endif #endif
case FETCH_NOP_SYMBOL: /* Ignore a place holder */ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
code++; code++;
...@@ -184,7 +188,7 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -184,7 +188,7 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (trace_trigger_soft_disabled(trace_file)) if (trace_trigger_soft_disabled(trace_file))
return; return;
dsize = __get_data_size(&tf->tp, regs); dsize = __get_data_size(&tf->tp, regs, NULL);
entry = trace_event_buffer_reserve(&fbuffer, trace_file, entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tf->tp.size + dsize); sizeof(*entry) + tf->tp.size + dsize);
...@@ -194,7 +198,7 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -194,7 +198,7 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
fbuffer.regs = regs; fbuffer.regs = regs;
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->ip = entry_ip; entry->ip = entry_ip;
store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer); trace_event_buffer_commit(&fbuffer);
} }
...@@ -210,11 +214,24 @@ fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -210,11 +214,24 @@ fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
} }
NOKPROBE_SYMBOL(fentry_trace_func); NOKPROBE_SYMBOL(fentry_trace_func);
/* Kretprobe handler */ /* function exit handler */
static int trace_fprobe_entry_handler(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
void *entry_data)
{
struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
if (tf->tp.entry_arg)
store_trace_entry_data(entry_data, &tf->tp, regs);
return 0;
}
NOKPROBE_SYMBOL(trace_fprobe_entry_handler)
static nokprobe_inline void static nokprobe_inline void
__fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs, unsigned long ret_ip, struct pt_regs *regs,
struct trace_event_file *trace_file) void *entry_data, struct trace_event_file *trace_file)
{ {
struct fexit_trace_entry_head *entry; struct fexit_trace_entry_head *entry;
struct trace_event_buffer fbuffer; struct trace_event_buffer fbuffer;
...@@ -227,7 +244,7 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -227,7 +244,7 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (trace_trigger_soft_disabled(trace_file)) if (trace_trigger_soft_disabled(trace_file))
return; return;
dsize = __get_data_size(&tf->tp, regs); dsize = __get_data_size(&tf->tp, regs, entry_data);
entry = trace_event_buffer_reserve(&fbuffer, trace_file, entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tf->tp.size + dsize); sizeof(*entry) + tf->tp.size + dsize);
...@@ -238,19 +255,19 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -238,19 +255,19 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->func = entry_ip; entry->func = entry_ip;
entry->ret_ip = ret_ip; entry->ret_ip = ret_ip;
store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer); trace_event_buffer_commit(&fbuffer);
} }
static void static void
fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs) unsigned long ret_ip, struct pt_regs *regs, void *entry_data)
{ {
struct event_file_link *link; struct event_file_link *link;
trace_probe_for_each_link_rcu(link, &tf->tp) trace_probe_for_each_link_rcu(link, &tf->tp)
__fexit_trace_func(tf, entry_ip, ret_ip, regs, link->file); __fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data, link->file);
} }
NOKPROBE_SYMBOL(fexit_trace_func); NOKPROBE_SYMBOL(fexit_trace_func);
...@@ -269,7 +286,7 @@ static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -269,7 +286,7 @@ static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (hlist_empty(head)) if (hlist_empty(head))
return 0; return 0;
dsize = __get_data_size(&tf->tp, regs); dsize = __get_data_size(&tf->tp, regs, NULL);
__size = sizeof(*entry) + tf->tp.size + dsize; __size = sizeof(*entry) + tf->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64)); size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32); size -= sizeof(u32);
...@@ -280,7 +297,7 @@ static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -280,7 +297,7 @@ static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
entry->ip = entry_ip; entry->ip = entry_ip;
memset(&entry[1], 0, dsize); memset(&entry[1], 0, dsize);
store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL); head, NULL);
return 0; return 0;
...@@ -289,7 +306,8 @@ NOKPROBE_SYMBOL(fentry_perf_func); ...@@ -289,7 +306,8 @@ NOKPROBE_SYMBOL(fentry_perf_func);
static void static void
fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs) unsigned long ret_ip, struct pt_regs *regs,
void *entry_data)
{ {
struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct trace_event_call *call = trace_probe_event_call(&tf->tp);
struct fexit_trace_entry_head *entry; struct fexit_trace_entry_head *entry;
...@@ -301,7 +319,7 @@ fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -301,7 +319,7 @@ fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (hlist_empty(head)) if (hlist_empty(head))
return; return;
dsize = __get_data_size(&tf->tp, regs); dsize = __get_data_size(&tf->tp, regs, entry_data);
__size = sizeof(*entry) + tf->tp.size + dsize; __size = sizeof(*entry) + tf->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64)); size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32); size -= sizeof(u32);
...@@ -312,7 +330,7 @@ fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, ...@@ -312,7 +330,7 @@ fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
entry->func = entry_ip; entry->func = entry_ip;
entry->ret_ip = ret_ip; entry->ret_ip = ret_ip;
store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL); head, NULL);
} }
...@@ -343,10 +361,10 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, ...@@ -343,10 +361,10 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip,
struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
fexit_trace_func(tf, entry_ip, ret_ip, regs); fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data);
#ifdef CONFIG_PERF_EVENTS #ifdef CONFIG_PERF_EVENTS
if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE))
fexit_perf_func(tf, entry_ip, ret_ip, regs); fexit_perf_func(tf, entry_ip, ret_ip, regs, entry_data);
#endif #endif
} }
NOKPROBE_SYMBOL(fexit_dispatcher); NOKPROBE_SYMBOL(fexit_dispatcher);
...@@ -389,7 +407,7 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, ...@@ -389,7 +407,7 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group,
tf->tpoint = tpoint; tf->tpoint = tpoint;
tf->fp.nr_maxactive = maxactive; tf->fp.nr_maxactive = maxactive;
ret = trace_probe_init(&tf->tp, event, group, false); ret = trace_probe_init(&tf->tp, event, group, false, nargs);
if (ret < 0) if (ret < 0)
goto error; goto error;
...@@ -1109,6 +1127,11 @@ static int __trace_fprobe_create(int argc, const char *argv[]) ...@@ -1109,6 +1127,11 @@ static int __trace_fprobe_create(int argc, const char *argv[])
goto error; /* This can be -ENOMEM */ goto error; /* This can be -ENOMEM */
} }
if (is_return && tf->tp.entry_arg) {
tf->fp.entry_handler = trace_fprobe_entry_handler;
tf->fp.entry_data_size = traceprobe_get_entry_data_size(&tf->tp);
}
ret = traceprobe_set_print_fmt(&tf->tp, ret = traceprobe_set_print_fmt(&tf->tp,
is_return ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL); is_return ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL);
if (ret < 0) if (ret < 0)
......
...@@ -290,7 +290,7 @@ static struct trace_kprobe *alloc_trace_kprobe(const char *group, ...@@ -290,7 +290,7 @@ static struct trace_kprobe *alloc_trace_kprobe(const char *group,
INIT_HLIST_NODE(&tk->rp.kp.hlist); INIT_HLIST_NODE(&tk->rp.kp.hlist);
INIT_LIST_HEAD(&tk->rp.kp.list); INIT_LIST_HEAD(&tk->rp.kp.list);
ret = trace_probe_init(&tk->tp, event, group, false); ret = trace_probe_init(&tk->tp, event, group, false, nargs);
if (ret < 0) if (ret < 0)
goto error; goto error;
...@@ -740,6 +740,9 @@ static unsigned int number_of_same_symbols(char *func_name) ...@@ -740,6 +740,9 @@ static unsigned int number_of_same_symbols(char *func_name)
return ctx.count; return ctx.count;
} }
static int trace_kprobe_entry_handler(struct kretprobe_instance *ri,
struct pt_regs *regs);
static int __trace_kprobe_create(int argc, const char *argv[]) static int __trace_kprobe_create(int argc, const char *argv[])
{ {
/* /*
...@@ -948,6 +951,11 @@ static int __trace_kprobe_create(int argc, const char *argv[]) ...@@ -948,6 +951,11 @@ static int __trace_kprobe_create(int argc, const char *argv[])
if (ret) if (ret)
goto error; /* This can be -ENOMEM */ goto error; /* This can be -ENOMEM */
} }
/* entry handler for kretprobe */
if (is_return && tk->tp.entry_arg) {
tk->rp.entry_handler = trace_kprobe_entry_handler;
tk->rp.data_size = traceprobe_get_entry_data_size(&tk->tp);
}
ptype = is_return ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL; ptype = is_return ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL;
ret = traceprobe_set_print_fmt(&tk->tp, ptype); ret = traceprobe_set_print_fmt(&tk->tp, ptype);
...@@ -1303,8 +1311,8 @@ static const struct file_operations kprobe_profile_ops = { ...@@ -1303,8 +1311,8 @@ static const struct file_operations kprobe_profile_ops = {
/* Note that we don't verify it, since the code does not come from user space */ /* Note that we don't verify it, since the code does not come from user space */
static int static int
process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *base) void *dest, void *base)
{ {
struct pt_regs *regs = rec; struct pt_regs *regs = rec;
unsigned long val; unsigned long val;
...@@ -1329,6 +1337,9 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, ...@@ -1329,6 +1337,9 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
case FETCH_OP_ARG: case FETCH_OP_ARG:
val = regs_get_kernel_argument(regs, code->param); val = regs_get_kernel_argument(regs, code->param);
break; break;
case FETCH_OP_EDATA:
val = *(unsigned long *)((unsigned long)edata + code->offset);
break;
#endif #endif
case FETCH_NOP_SYMBOL: /* Ignore a place holder */ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
code++; code++;
...@@ -1359,7 +1370,7 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs, ...@@ -1359,7 +1370,7 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs,
if (trace_trigger_soft_disabled(trace_file)) if (trace_trigger_soft_disabled(trace_file))
return; return;
dsize = __get_data_size(&tk->tp, regs); dsize = __get_data_size(&tk->tp, regs, NULL);
entry = trace_event_buffer_reserve(&fbuffer, trace_file, entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tk->tp.size + dsize); sizeof(*entry) + tk->tp.size + dsize);
...@@ -1368,7 +1379,7 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs, ...@@ -1368,7 +1379,7 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs,
fbuffer.regs = regs; fbuffer.regs = regs;
entry->ip = (unsigned long)tk->rp.kp.addr; entry->ip = (unsigned long)tk->rp.kp.addr;
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tk->tp, regs, NULL, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer); trace_event_buffer_commit(&fbuffer);
} }
...@@ -1384,6 +1395,31 @@ kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs) ...@@ -1384,6 +1395,31 @@ kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs)
NOKPROBE_SYMBOL(kprobe_trace_func); NOKPROBE_SYMBOL(kprobe_trace_func);
/* Kretprobe handler */ /* Kretprobe handler */
static int trace_kprobe_entry_handler(struct kretprobe_instance *ri,
struct pt_regs *regs)
{
struct kretprobe *rp = get_kretprobe(ri);
struct trace_kprobe *tk;
/*
* There is a small chance that get_kretprobe(ri) returns NULL when
* the kretprobe is unregistered on another CPU between kretprobe's
* trampoline_handler and this function.
*/
if (unlikely(!rp))
return -ENOENT;
tk = container_of(rp, struct trace_kprobe, rp);
/* store argument values into ri->data as entry data */
if (tk->tp.entry_arg)
store_trace_entry_data(ri->data, &tk->tp, regs);
return 0;
}
static nokprobe_inline void static nokprobe_inline void
__kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri, __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
struct pt_regs *regs, struct pt_regs *regs,
...@@ -1399,7 +1435,7 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri, ...@@ -1399,7 +1435,7 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
if (trace_trigger_soft_disabled(trace_file)) if (trace_trigger_soft_disabled(trace_file))
return; return;
dsize = __get_data_size(&tk->tp, regs); dsize = __get_data_size(&tk->tp, regs, ri->data);
entry = trace_event_buffer_reserve(&fbuffer, trace_file, entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tk->tp.size + dsize); sizeof(*entry) + tk->tp.size + dsize);
...@@ -1409,7 +1445,7 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri, ...@@ -1409,7 +1445,7 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
fbuffer.regs = regs; fbuffer.regs = regs;
entry->func = (unsigned long)tk->rp.kp.addr; entry->func = (unsigned long)tk->rp.kp.addr;
entry->ret_ip = get_kretprobe_retaddr(ri); entry->ret_ip = get_kretprobe_retaddr(ri);
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tk->tp, regs, ri->data, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer); trace_event_buffer_commit(&fbuffer);
} }
...@@ -1557,7 +1593,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs) ...@@ -1557,7 +1593,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
if (hlist_empty(head)) if (hlist_empty(head))
return 0; return 0;
dsize = __get_data_size(&tk->tp, regs); dsize = __get_data_size(&tk->tp, regs, NULL);
__size = sizeof(*entry) + tk->tp.size + dsize; __size = sizeof(*entry) + tk->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64)); size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32); size -= sizeof(u32);
...@@ -1568,7 +1604,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs) ...@@ -1568,7 +1604,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
entry->ip = (unsigned long)tk->rp.kp.addr; entry->ip = (unsigned long)tk->rp.kp.addr;
memset(&entry[1], 0, dsize); memset(&entry[1], 0, dsize);
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tk->tp, regs, NULL, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL); head, NULL);
return 0; return 0;
...@@ -1593,7 +1629,7 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri, ...@@ -1593,7 +1629,7 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
if (hlist_empty(head)) if (hlist_empty(head))
return; return;
dsize = __get_data_size(&tk->tp, regs); dsize = __get_data_size(&tk->tp, regs, ri->data);
__size = sizeof(*entry) + tk->tp.size + dsize; __size = sizeof(*entry) + tk->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64)); size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32); size -= sizeof(u32);
...@@ -1604,7 +1640,7 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri, ...@@ -1604,7 +1640,7 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
entry->func = (unsigned long)tk->rp.kp.addr; entry->func = (unsigned long)tk->rp.kp.addr;
entry->ret_ip = get_kretprobe_retaddr(ri); entry->ret_ip = get_kretprobe_retaddr(ri);
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); store_trace_args(&entry[1], &tk->tp, regs, ri->data, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL); head, NULL);
} }
......
...@@ -92,6 +92,7 @@ enum fetch_op { ...@@ -92,6 +92,7 @@ enum fetch_op {
FETCH_OP_ARG, /* Function argument : .param */ FETCH_OP_ARG, /* Function argument : .param */
FETCH_OP_FOFFS, /* File offset: .immediate */ FETCH_OP_FOFFS, /* File offset: .immediate */
FETCH_OP_DATA, /* Allocated data: .data */ FETCH_OP_DATA, /* Allocated data: .data */
FETCH_OP_EDATA, /* Entry data: .offset */
// Stage 2 (dereference) op // Stage 2 (dereference) op
FETCH_OP_DEREF, /* Dereference: .offset */ FETCH_OP_DEREF, /* Dereference: .offset */
FETCH_OP_UDEREF, /* User-space Dereference: .offset */ FETCH_OP_UDEREF, /* User-space Dereference: .offset */
...@@ -102,6 +103,7 @@ enum fetch_op { ...@@ -102,6 +103,7 @@ enum fetch_op {
FETCH_OP_ST_STRING, /* String: .offset, .size */ FETCH_OP_ST_STRING, /* String: .offset, .size */
FETCH_OP_ST_USTRING, /* User String: .offset, .size */ FETCH_OP_ST_USTRING, /* User String: .offset, .size */
FETCH_OP_ST_SYMSTR, /* Kernel Symbol String: .offset, .size */ FETCH_OP_ST_SYMSTR, /* Kernel Symbol String: .offset, .size */
FETCH_OP_ST_EDATA, /* Store Entry Data: .offset */
// Stage 4 (modify) op // Stage 4 (modify) op
FETCH_OP_MOD_BF, /* Bitfield: .basesize, .lshift, .rshift */ FETCH_OP_MOD_BF, /* Bitfield: .basesize, .lshift, .rshift */
// Stage 5 (loop) op // Stage 5 (loop) op
...@@ -232,6 +234,11 @@ struct probe_arg { ...@@ -232,6 +234,11 @@ struct probe_arg {
const struct fetch_type *type; /* Type of this argument */ const struct fetch_type *type; /* Type of this argument */
}; };
struct probe_entry_arg {
struct fetch_insn *code;
unsigned int size; /* The entry data size */
};
struct trace_uprobe_filter { struct trace_uprobe_filter {
rwlock_t rwlock; rwlock_t rwlock;
int nr_systemwide; int nr_systemwide;
...@@ -253,6 +260,7 @@ struct trace_probe { ...@@ -253,6 +260,7 @@ struct trace_probe {
struct trace_probe_event *event; struct trace_probe_event *event;
ssize_t size; /* trace entry size */ ssize_t size; /* trace entry size */
unsigned int nr_args; unsigned int nr_args;
struct probe_entry_arg *entry_arg; /* This is only for return probe */
struct probe_arg args[]; struct probe_arg args[];
}; };
...@@ -338,7 +346,7 @@ static inline bool trace_probe_has_single_file(struct trace_probe *tp) ...@@ -338,7 +346,7 @@ static inline bool trace_probe_has_single_file(struct trace_probe *tp)
} }
int trace_probe_init(struct trace_probe *tp, const char *event, int trace_probe_init(struct trace_probe *tp, const char *event,
const char *group, bool alloc_filter); const char *group, bool alloc_filter, int nargs);
void trace_probe_cleanup(struct trace_probe *tp); void trace_probe_cleanup(struct trace_probe *tp);
int trace_probe_append(struct trace_probe *tp, struct trace_probe *to); int trace_probe_append(struct trace_probe *tp, struct trace_probe *to);
void trace_probe_unlink(struct trace_probe *tp); void trace_probe_unlink(struct trace_probe *tp);
...@@ -355,6 +363,18 @@ int trace_probe_create(const char *raw_command, int (*createfn)(int, const char ...@@ -355,6 +363,18 @@ int trace_probe_create(const char *raw_command, int (*createfn)(int, const char
int trace_probe_print_args(struct trace_seq *s, struct probe_arg *args, int nr_args, int trace_probe_print_args(struct trace_seq *s, struct probe_arg *args, int nr_args,
u8 *data, void *field); u8 *data, void *field);
#ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
int traceprobe_get_entry_data_size(struct trace_probe *tp);
/* This is a runtime function to store entry data */
void store_trace_entry_data(void *edata, struct trace_probe *tp, struct pt_regs *regs);
#else /* !CONFIG_HAVE_FUNCTION_ARG_ACCESS_API */
static inline int traceprobe_get_entry_data_size(struct trace_probe *tp)
{
return 0;
}
#define store_trace_entry_data(edata, tp, regs) do { } while (0)
#endif
#define trace_probe_for_each_link(pos, tp) \ #define trace_probe_for_each_link(pos, tp) \
list_for_each_entry(pos, &(tp)->event->files, list) list_for_each_entry(pos, &(tp)->event->files, list)
#define trace_probe_for_each_link_rcu(pos, tp) \ #define trace_probe_for_each_link_rcu(pos, tp) \
...@@ -381,6 +401,11 @@ static inline bool tparg_is_function_entry(unsigned int flags) ...@@ -381,6 +401,11 @@ static inline bool tparg_is_function_entry(unsigned int flags)
return (flags & TPARG_FL_LOC_MASK) == (TPARG_FL_KERNEL | TPARG_FL_FENTRY); return (flags & TPARG_FL_LOC_MASK) == (TPARG_FL_KERNEL | TPARG_FL_FENTRY);
} }
static inline bool tparg_is_function_return(unsigned int flags)
{
return (flags & TPARG_FL_LOC_MASK) == (TPARG_FL_KERNEL | TPARG_FL_RETURN);
}
struct traceprobe_parse_context { struct traceprobe_parse_context {
struct trace_event_call *event; struct trace_event_call *event;
/* BTF related parameters */ /* BTF related parameters */
...@@ -392,6 +417,7 @@ struct traceprobe_parse_context { ...@@ -392,6 +417,7 @@ struct traceprobe_parse_context {
const struct btf_type *last_type; /* Saved type */ const struct btf_type *last_type; /* Saved type */
u32 last_bitoffs; /* Saved bitoffs */ u32 last_bitoffs; /* Saved bitoffs */
u32 last_bitsize; /* Saved bitsize */ u32 last_bitsize; /* Saved bitsize */
struct trace_probe *tp;
unsigned int flags; unsigned int flags;
int offset; int offset;
}; };
...@@ -506,7 +532,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call, ...@@ -506,7 +532,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
C(NO_BTFARG, "This variable is not found at this probe point"),\ C(NO_BTFARG, "This variable is not found at this probe point"),\
C(NO_BTF_ENTRY, "No BTF entry for this probe point"), \ C(NO_BTF_ENTRY, "No BTF entry for this probe point"), \
C(BAD_VAR_ARGS, "$arg* must be an independent parameter without name etc."),\ C(BAD_VAR_ARGS, "$arg* must be an independent parameter without name etc."),\
C(NOFENTRY_ARGS, "$arg* can be used only on function entry"), \ C(NOFENTRY_ARGS, "$arg* can be used only on function entry or exit"), \
C(DOUBLE_ARGS, "$arg* can be used only once in the parameters"), \ C(DOUBLE_ARGS, "$arg* can be used only once in the parameters"), \
C(ARGS_2LONG, "$arg* failed because the argument list is too long"), \ C(ARGS_2LONG, "$arg* failed because the argument list is too long"), \
C(ARGIDX_2BIG, "$argN index is too big"), \ C(ARGIDX_2BIG, "$argN index is too big"), \
......
...@@ -54,7 +54,7 @@ fetch_apply_bitfield(struct fetch_insn *code, void *buf) ...@@ -54,7 +54,7 @@ fetch_apply_bitfield(struct fetch_insn *code, void *buf)
* If dest is NULL, don't store result and return required dynamic data size. * If dest is NULL, don't store result and return required dynamic data size.
*/ */
static int static int
process_fetch_insn(struct fetch_insn *code, void *rec, process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *dest, void *base); void *dest, void *base);
static nokprobe_inline int fetch_store_strlen(unsigned long addr); static nokprobe_inline int fetch_store_strlen(unsigned long addr);
static nokprobe_inline int static nokprobe_inline int
...@@ -232,7 +232,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val, ...@@ -232,7 +232,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val,
/* Sum up total data length for dynamic arrays (strings) */ /* Sum up total data length for dynamic arrays (strings) */
static nokprobe_inline int static nokprobe_inline int
__get_data_size(struct trace_probe *tp, struct pt_regs *regs) __get_data_size(struct trace_probe *tp, struct pt_regs *regs, void *edata)
{ {
struct probe_arg *arg; struct probe_arg *arg;
int i, len, ret = 0; int i, len, ret = 0;
...@@ -240,7 +240,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs) ...@@ -240,7 +240,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
for (i = 0; i < tp->nr_args; i++) { for (i = 0; i < tp->nr_args; i++) {
arg = tp->args + i; arg = tp->args + i;
if (unlikely(arg->dynamic)) { if (unlikely(arg->dynamic)) {
len = process_fetch_insn(arg->code, regs, NULL, NULL); len = process_fetch_insn(arg->code, regs, edata, NULL, NULL);
if (len > 0) if (len > 0)
ret += len; ret += len;
} }
...@@ -251,7 +251,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs) ...@@ -251,7 +251,7 @@ __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
/* Store the value of each argument */ /* Store the value of each argument */
static nokprobe_inline void static nokprobe_inline void
store_trace_args(void *data, struct trace_probe *tp, void *rec, store_trace_args(void *data, struct trace_probe *tp, void *rec, void *edata,
int header_size, int maxlen) int header_size, int maxlen)
{ {
struct probe_arg *arg; struct probe_arg *arg;
...@@ -266,7 +266,7 @@ store_trace_args(void *data, struct trace_probe *tp, void *rec, ...@@ -266,7 +266,7 @@ store_trace_args(void *data, struct trace_probe *tp, void *rec,
/* Point the dynamic data area if needed */ /* Point the dynamic data area if needed */
if (unlikely(arg->dynamic)) if (unlikely(arg->dynamic))
*dl = make_data_loc(maxlen, dyndata - base); *dl = make_data_loc(maxlen, dyndata - base);
ret = process_fetch_insn(arg->code, rec, dl, base); ret = process_fetch_insn(arg->code, rec, edata, dl, base);
if (arg->dynamic && likely(ret > 0)) { if (arg->dynamic && likely(ret > 0)) {
dyndata += ret; dyndata += ret;
maxlen -= ret; maxlen -= ret;
......
...@@ -211,8 +211,8 @@ static unsigned long translate_user_vaddr(unsigned long file_offset) ...@@ -211,8 +211,8 @@ static unsigned long translate_user_vaddr(unsigned long file_offset)
/* Note that we don't verify it, since the code does not come from user space */ /* Note that we don't verify it, since the code does not come from user space */
static int static int
process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *base) void *dest, void *base)
{ {
struct pt_regs *regs = rec; struct pt_regs *regs = rec;
unsigned long val; unsigned long val;
...@@ -337,7 +337,7 @@ alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret) ...@@ -337,7 +337,7 @@ alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret)
if (!tu) if (!tu)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
ret = trace_probe_init(&tu->tp, event, group, true); ret = trace_probe_init(&tu->tp, event, group, true, nargs);
if (ret < 0) if (ret < 0)
goto error; goto error;
...@@ -1490,11 +1490,11 @@ static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs) ...@@ -1490,11 +1490,11 @@ static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs)
if (WARN_ON_ONCE(!uprobe_cpu_buffer)) if (WARN_ON_ONCE(!uprobe_cpu_buffer))
return 0; return 0;
dsize = __get_data_size(&tu->tp, regs); dsize = __get_data_size(&tu->tp, regs, NULL);
esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu)); esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
ucb = uprobe_buffer_get(); ucb = uprobe_buffer_get();
store_trace_args(ucb->buf, &tu->tp, regs, esize, dsize); store_trace_args(ucb->buf, &tu->tp, regs, NULL, esize, dsize);
if (trace_probe_test_flag(&tu->tp, TP_FLAG_TRACE)) if (trace_probe_test_flag(&tu->tp, TP_FLAG_TRACE))
ret |= uprobe_trace_func(tu, regs, ucb, dsize); ret |= uprobe_trace_func(tu, regs, ucb, dsize);
...@@ -1525,11 +1525,11 @@ static int uretprobe_dispatcher(struct uprobe_consumer *con, ...@@ -1525,11 +1525,11 @@ static int uretprobe_dispatcher(struct uprobe_consumer *con,
if (WARN_ON_ONCE(!uprobe_cpu_buffer)) if (WARN_ON_ONCE(!uprobe_cpu_buffer))
return 0; return 0;
dsize = __get_data_size(&tu->tp, regs); dsize = __get_data_size(&tu->tp, regs, NULL);
esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu)); esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
ucb = uprobe_buffer_get(); ucb = uprobe_buffer_get();
store_trace_args(ucb->buf, &tu->tp, regs, esize, dsize); store_trace_args(ucb->buf, &tu->tp, regs, NULL, esize, dsize);
if (trace_probe_test_flag(&tu->tp, TP_FLAG_TRACE)) if (trace_probe_test_flag(&tu->tp, TP_FLAG_TRACE))
uretprobe_trace_func(tu, func, regs, ucb, dsize); uretprobe_trace_func(tu, func, regs, ucb, dsize);
......
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Function return probe entry argument access
# requires: dynamic_events 'f[:[<group>/][<event>]] <func-name>':README 'kernel return probes support:':README
echo 'f:tests/myevent1 vfs_open arg=$arg1' >> dynamic_events
echo 'f:tests/myevent2 vfs_open%return arg=$arg1' >> dynamic_events
echo 1 > events/tests/enable
echo > trace
cat trace > /dev/null
function streq() {
test $1 = $2
}
streq `grep -A 1 -m 1 myevent1 trace | sed -r 's/^.*(arg=.*)/\1/' `
...@@ -34,7 +34,9 @@ check_error 'f vfs_read ^$stack10000' # BAD_STACK_NUM
check_error 'f vfs_read ^$arg10000' # BAD_ARG_NUM
if ! grep -q 'kernel return probes support:' README; then
check_error 'f vfs_read $retval ^$arg1' # BAD_VAR
fi
check_error 'f vfs_read ^$none_var' # BAD_VAR
check_error 'f vfs_read ^'$REG # BAD_VAR
...@@ -99,7 +101,9 @@ if grep -q "<argname>" README; then
check_error 'f vfs_read args=^$arg*' # BAD_VAR_ARGS
check_error 'f vfs_read +0(^$arg*)' # BAD_VAR_ARGS
check_error 'f vfs_read $arg* ^$arg*' # DOUBLE_ARGS
if ! grep -q 'kernel return probes support:' README; then
check_error 'f vfs_read%return ^$arg*' # NOFENTRY_ARGS
fi
check_error 'f vfs_read ^hoge' # NO_BTFARG
check_error 'f kfree ^$arg10' # NO_BTFARG (exceed the number of parameters)
check_error 'f kfree%return ^$retval' # NO_RETVAL
......
...@@ -108,7 +108,9 @@ if grep -q "<argname>" README; then
check_error 'p vfs_read args=^$arg*' # BAD_VAR_ARGS
check_error 'p vfs_read +0(^$arg*)' # BAD_VAR_ARGS
check_error 'p vfs_read $arg* ^$arg*' # DOUBLE_ARGS
if ! grep -q 'kernel return probes support:' README; then
check_error 'r vfs_read ^$arg*' # NOFENTRY_ARGS
fi
check_error 'p vfs_read+8 ^$arg*' # NOFENTRY_ARGS
check_error 'p vfs_read ^hoge' # NO_BTFARG
check_error 'p kfree ^$arg10' # NO_BTFARG (exceed the number of parameters)
......
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Kretprobe entry argument access
# requires: kprobe_events 'kernel return probes support:':README
echo 'p:myevent1 vfs_open arg=$arg1' >> kprobe_events
echo 'r:myevent2 vfs_open arg=$arg1' >> kprobe_events
echo 1 > events/kprobes/enable
echo > trace
cat trace > /dev/null
function streq() {
test $1 = $2
}
streq `grep -A 1 -m 1 myevent1 trace | sed -r 's/^.*(arg=.*)/\1/' `