Commit 87e098e6 authored by Alexei Starovoitov

Merge branch 'bpf: Support ->fill_link_info for kprobe_multi and perf_event links'

Yafang Shao says:

====================
This patchset enhances the usability of kprobe_multi programs by introducing
support for ->fill_link_info. This allows users to easily determine the
probed functions associated with a kprobe_multi program. While
`bpftool perf show` already provides information about functions probed by
perf_event programs, supporting ->fill_link_info ensures consistent access
to this information across all BPF links.
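
For illustration (not part of the patchset), a minimal userspace sketch of
consuming the new kprobe_multi info via libbpf's bpf_link_get_info_by_fd(),
following the same two-call pattern bpftool adopts below; the helper name
dump_kprobe_multi_addrs() and its error handling are hypothetical:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <bpf/bpf.h>

static int dump_kprobe_multi_addrs(int link_fd)
{
	struct bpf_link_info info;
	__u32 i, count, len = sizeof(info);
	__u64 *addrs;
	int err;

	memset(&info, 0, sizeof(info));
	/* First call with addrs == NULL: the kernel only reports count. */
	err = bpf_link_get_info_by_fd(link_fd, &info, &len);
	if (err)
		return err;
	count = info.kprobe_multi.count;
	if (!count)
		return 0;

	addrs = calloc(count, sizeof(__u64));
	if (!addrs)
		return -ENOMEM;
	info.kprobe_multi.addrs = (__u64)(unsigned long)addrs;
	/* Second call: the kernel copies the addresses out; each reads
	 * as 0 when kptr_restrict forbids exposing them. */
	err = bpf_link_get_info_by_fd(link_fd, &info, &len);
	if (!err)
		for (i = 0; i < count; i++)
			printf("%llx\n", (unsigned long long)addrs[i]);
	free(addrs);
	return err;
}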

In addition, this patchset extends support to generic perf events, which
`bpftool perf show` currently does not cover. Only the perf type and config
are exposed to userspace; other attributes, such as sample_period and
sample_freq, are disregarded.
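
For example, a perf event link created from attr.type == PERF_TYPE_HARDWARE
and attr.config == PERF_COUNT_HW_CPU_CYCLES reports BPF_PERF_EVENT_EVENT
with those two raw values in perf_event.event, which bpftool below renders
as "event hardware:cpu-cycles".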

To ensure accurate identification of probed functions, it is preferable to
expose the address directly rather than relying solely on the symbol name.
However, this implementation respects the kptr_restrict setting and avoids
exposing the address if it is not permitted.
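
For example, with the sysctl kernel.kptr_restrict set to 2, the reported
kprobe addresses read as zero, while symbol names and offsets are still
returned.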

v6->v7:
- From Daniel
  - No need to explicitly cast in many places
  - Use ptr_to_u64() instead of the cast
  - return -ENOMEM when calloc fails
  - Simplify the code in bpf_get_kprobe_info() further
  - Squash #9 with #8
  - And other coding style improvements
- From Andrii
  - Comment improvement
  - Use ENOSPC instead of E2BIG
  - Use strlen only when buf is not NULL
- Clear probe_addr in bpf_get_uprobe_info()

v5->v6:
- From Andrii
  - if ucount is too small, copy ucount items and return -E2BIG
  - zero out kmulti_link->cnt elements if it is not permitted by kptr
  - avoid leaking information when ucount is greater than kmulti_link->cnt
  - drop the flags, and add BPF_PERF_EVENT_[UK]RETPROBE
- From Quentin
  - use jsonw_null instead when we have no module name
  - add explanation on perf_type_name in the commit log
  - avoid the unnecessary out label

v4->v5:
- Print "func [module]" in the kprobe_multi header (Andrii)
- Remove MAX_BPF_PERF_EVENT_TYPE (Alexei)
- Add padding field for future reuse (Yonghong)

v3->v4:
- From Quentin
  - Rename MODULE_NAME_LEN to MODULE_MAX_NAME
  - Convert retprobe to boolean for json output
  - Trim the square brackets around module names for json output
  - Move perf names into link.c
  - Use a generic helper to get perf names
  - Show address before func name, for consistency
  - Use switch-case instead of if-else
  - Increase the buffer length to PATH_MAX
  - Move macros to the top of the file
- From Andrii
  - kprobe_multi flags should always be returned
  - Keep it on a single line if it fits in under 100 characters
  - Change the output format when showing kprobe_multi
  - Improve the format of perf_event names
  - Rename struct perf_link to struct perf_event, and change the enum names
    accordingly
- From Yonghong
  - Avoid disallowing extensions for all structs in the big union
- From Jiri
  - Add flags to bpf_kprobe_multi_link
  - Report kprobe_multi selftests errors
  - Rename bpf_perf_link_fill_name and make it a separate patch
  - Avoid breaking compilation when CONFIG_KPROBE_EVENTS or
    CONFIG_UPROBE_EVENTS options are not defined

v2->v3:
- Expose flags instead of retprobe (Andrii)
- Simplify the check on kmulti_link->cnt (Andrii)
- Use kallsyms_show_value() instead (Andrii)
- Show also the module name for kprobe_multi (Andrii)
- Add new enum bpf_perf_link_type (Andrii)
- Move perf event names into bpftool (Andrii, Quentin, Jiri)
- Keep perf event names in sync with perf tools (Jiri)

v1->v2:
- Fix sparse warning (Stanislav, lkp@intel.com)
- Fix BPF CI build error
- Reuse kernel_syms_load() (Alexei)
- Print 'name' instead of 'func' (Alexei)
- Show whether the probe is retprobe or not (Andrii)
- Add comment for the meaning of perf_event name (Andrii)
- Add support for generic perf event
- Adhere to the kptr_restrict setting

RFC->v1:
- Use a single copy_to_user() instead (Jiri)
- Show also the symbol name in bpftool (Quentin, Alexei)
- Use calloc() instead of malloc() in bpftool (Quentin)
- Avoid having conditional entries in the JSON output (Quentin)
- Drop ->show_fdinfo (Alexei)
- Use __u64 instead of __aligned_u64 for the field addr (Alexei)
- Avoid the contradiction in perf_event name length (Alexei)
- Address a build warning reported by kernel test robot <lkp@intel.com>
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents 07018b57 88d61607
@@ -864,7 +864,8 @@ extern int perf_uprobe_init(struct perf_event *event,
extern void perf_uprobe_destroy(struct perf_event *event);
extern int bpf_get_uprobe_info(const struct perf_event *event,
u32 *fd_type, const char **filename,
u64 *probe_offset, bool perf_type_tracepoint);
u64 *probe_offset, u64 *probe_addr,
bool perf_type_tracepoint);
#endif
extern int ftrace_profile_set_filter(struct perf_event *event, int event_id,
char *filter_str);
@@ -1057,6 +1057,16 @@ enum bpf_link_type {
MAX_BPF_LINK_TYPE,
};
enum bpf_perf_event_type {
BPF_PERF_EVENT_UNSPEC = 0,
BPF_PERF_EVENT_UPROBE = 1,
BPF_PERF_EVENT_URETPROBE = 2,
BPF_PERF_EVENT_KPROBE = 3,
BPF_PERF_EVENT_KRETPROBE = 4,
BPF_PERF_EVENT_TRACEPOINT = 5,
BPF_PERF_EVENT_EVENT = 6,
};
/* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
*
* NONE(default): No further bpf programs allowed in the subtree.
@@ -6439,6 +6449,36 @@ struct bpf_link_info {
__s32 priority;
__u32 flags;
} netfilter;
struct {
__aligned_u64 addrs;
__u32 count; /* in/out: kprobe_multi function count */
__u32 flags;
} kprobe_multi;
struct {
__u32 type; /* enum bpf_perf_event_type */
__u32 :32;
union {
struct {
__aligned_u64 file_name; /* in/out */
__u32 name_len;
__u32 offset; /* offset from file_name */
} uprobe; /* BPF_PERF_EVENT_UPROBE, BPF_PERF_EVENT_URETPROBE */
struct {
__aligned_u64 func_name; /* in/out */
__u32 name_len;
__u32 offset; /* offset from func_name */
__u64 addr;
} kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */
struct {
__aligned_u64 tp_name; /* in/out */
__u32 name_len;
} tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
struct {
__u64 config;
__u32 type;
} event; /* BPF_PERF_EVENT_EVENT */
};
} perf_event;
};
} __attribute__((aligned(8)));
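
Note the convention in the string fields above: userspace supplies a buffer
pointer and its size (e.g. func_name plus name_len) and the kernel writes the
NUL-terminated name back. The anonymous `__u32 :32` bitfield is explicit
padding reserved for future reuse (see the v4->v5 note above).
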
@@ -3295,6 +3295,25 @@ static void bpf_raw_tp_link_show_fdinfo(const struct bpf_link *link,
raw_tp_link->btp->tp->name);
}
static int bpf_copy_to_user(char __user *ubuf, const char *buf, u32 ulen,
u32 len)
{
if (ulen >= len + 1) {
if (copy_to_user(ubuf, buf, len + 1))
return -EFAULT;
} else {
char zero = '\0';
if (copy_to_user(ubuf, buf, ulen - 1))
return -EFAULT;
if (put_user(zero, ubuf + ulen - 1))
return -EFAULT;
return -ENOSPC;
}
return 0;
}
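
This new helper factors out the string-copy semantics shared by the
fill_link_info implementations below: when the user buffer can hold the
string including its NUL terminator, it is copied whole; otherwise the copy
is truncated, NUL-terminated at ulen - 1, and -ENOSPC is returned so callers
can detect the short copy.
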
static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
@@ -3313,20 +3332,7 @@ static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link,
if (!ubuf)
return 0;
if (ulen >= tp_len + 1) {
if (copy_to_user(ubuf, tp_name, tp_len + 1))
return -EFAULT;
} else {
char zero = '\0';
if (copy_to_user(ubuf, tp_name, ulen - 1))
return -EFAULT;
if (put_user(zero, ubuf + ulen - 1))
return -EFAULT;
return -ENOSPC;
}
return 0;
return bpf_copy_to_user(ubuf, tp_name, ulen, tp_len);
}
static const struct bpf_link_ops bpf_raw_tp_link_lops = {
@@ -3358,9 +3364,155 @@ static void bpf_perf_link_dealloc(struct bpf_link *link)
kfree(perf_link);
}
static int bpf_perf_link_fill_common(const struct perf_event *event,
char __user *uname, u32 ulen,
u64 *probe_offset, u64 *probe_addr,
u32 *fd_type)
{
const char *buf;
u32 prog_id;
size_t len;
int err;
if (!ulen ^ !uname)
return -EINVAL;
if (!uname)
return 0;
err = bpf_get_perf_event_info(event, &prog_id, fd_type, &buf,
probe_offset, probe_addr);
if (err)
return err;
if (buf) {
len = strlen(buf);
err = bpf_copy_to_user(uname, buf, ulen, len);
if (err)
return err;
} else {
char zero = '\0';
if (put_user(zero, uname))
return -EFAULT;
}
return 0;
}
#ifdef CONFIG_KPROBE_EVENTS
static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
struct bpf_link_info *info)
{
char __user *uname;
u64 addr, offset;
u32 ulen, type;
int err;
uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
ulen = info->perf_event.kprobe.name_len;
err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
&type);
if (err)
return err;
if (type == BPF_FD_TYPE_KRETPROBE)
info->perf_event.type = BPF_PERF_EVENT_KRETPROBE;
else
info->perf_event.type = BPF_PERF_EVENT_KPROBE;
info->perf_event.kprobe.offset = offset;
if (!kallsyms_show_value(current_cred()))
addr = 0;
info->perf_event.kprobe.addr = addr;
return 0;
}
#endif
#ifdef CONFIG_UPROBE_EVENTS
static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
struct bpf_link_info *info)
{
char __user *uname;
u64 addr, offset;
u32 ulen, type;
int err;
uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
ulen = info->perf_event.uprobe.name_len;
err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
&type);
if (err)
return err;
if (type == BPF_FD_TYPE_URETPROBE)
info->perf_event.type = BPF_PERF_EVENT_URETPROBE;
else
info->perf_event.type = BPF_PERF_EVENT_UPROBE;
info->perf_event.uprobe.offset = offset;
return 0;
}
#endif
static int bpf_perf_link_fill_probe(const struct perf_event *event,
struct bpf_link_info *info)
{
#ifdef CONFIG_KPROBE_EVENTS
if (event->tp_event->flags & TRACE_EVENT_FL_KPROBE)
return bpf_perf_link_fill_kprobe(event, info);
#endif
#ifdef CONFIG_UPROBE_EVENTS
if (event->tp_event->flags & TRACE_EVENT_FL_UPROBE)
return bpf_perf_link_fill_uprobe(event, info);
#endif
return -EOPNOTSUPP;
}
static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
struct bpf_link_info *info)
{
char __user *uname;
u32 ulen;
uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
ulen = info->perf_event.tracepoint.name_len;
info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL);
}
static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
struct bpf_link_info *info)
{
info->perf_event.event.type = event->attr.type;
info->perf_event.event.config = event->attr.config;
info->perf_event.type = BPF_PERF_EVENT_EVENT;
return 0;
}
static int bpf_perf_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
struct bpf_perf_link *perf_link;
const struct perf_event *event;
perf_link = container_of(link, struct bpf_perf_link, link);
event = perf_get_event(perf_link->perf_file);
if (IS_ERR(event))
return PTR_ERR(event);
switch (event->prog->type) {
case BPF_PROG_TYPE_PERF_EVENT:
return bpf_perf_link_fill_perf_event(event, info);
case BPF_PROG_TYPE_TRACEPOINT:
return bpf_perf_link_fill_tracepoint(event, info);
case BPF_PROG_TYPE_KPROBE:
return bpf_perf_link_fill_probe(event, info);
default:
return -EOPNOTSUPP;
}
}
static const struct bpf_link_ops bpf_perf_link_lops = {
.release = bpf_perf_link_release,
.dealloc = bpf_perf_link_dealloc,
.fill_link_info = bpf_perf_link_fill_link_info,
};
static int bpf_perf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
@@ -2369,9 +2369,13 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
if (is_tracepoint || is_syscall_tp) {
*buf = is_tracepoint ? event->tp_event->tp->name
: event->tp_event->name;
*fd_type = BPF_FD_TYPE_TRACEPOINT;
*probe_offset = 0x0;
*probe_addr = 0x0;
/* We allow NULL pointer for tracepoint */
if (fd_type)
*fd_type = BPF_FD_TYPE_TRACEPOINT;
if (probe_offset)
*probe_offset = 0x0;
if (probe_addr)
*probe_addr = 0x0;
} else {
/* kprobe/uprobe */
err = -EOPNOTSUPP;
@@ -2384,7 +2388,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
#ifdef CONFIG_UPROBE_EVENTS
if (flags & TRACE_EVENT_FL_UPROBE)
err = bpf_get_uprobe_info(event, fd_type, buf,
probe_offset,
probe_offset, probe_addr,
event->attr.type == PERF_TYPE_TRACEPOINT);
#endif
}
@@ -2469,6 +2473,7 @@ struct bpf_kprobe_multi_link {
u32 cnt;
u32 mods_cnt;
struct module **mods;
u32 flags;
};
struct bpf_kprobe_multi_run_ctx {
@@ -2558,9 +2563,44 @@ static void bpf_kprobe_multi_link_dealloc(struct bpf_link *link)
kfree(kmulti_link);
}
static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
u64 __user *uaddrs = u64_to_user_ptr(info->kprobe_multi.addrs);
struct bpf_kprobe_multi_link *kmulti_link;
u32 ucount = info->kprobe_multi.count;
int err = 0, i;
if (!uaddrs ^ !ucount)
return -EINVAL;
kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
info->kprobe_multi.count = kmulti_link->cnt;
info->kprobe_multi.flags = kmulti_link->flags;
if (!uaddrs)
return 0;
if (ucount < kmulti_link->cnt)
err = -ENOSPC;
else
ucount = kmulti_link->cnt;
if (kallsyms_show_value(current_cred())) {
if (copy_to_user(uaddrs, kmulti_link->addrs, ucount * sizeof(u64)))
return -EFAULT;
} else {
for (i = 0; i < ucount; i++) {
if (put_user(0, uaddrs + i))
return -EFAULT;
}
}
return err;
}
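
Note the resulting semantics: count always reports the full number of probed
functions; if the user buffer holds fewer entries, only ucount addresses are
copied and -ENOSPC is returned; and every copied address is zeroed when
kallsyms_show_value() denies exposure.
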
static const struct bpf_link_ops bpf_kprobe_multi_link_lops = {
.release = bpf_kprobe_multi_link_release,
.dealloc = bpf_kprobe_multi_link_dealloc,
.fill_link_info = bpf_kprobe_multi_link_fill_link_info,
};
static void bpf_kprobe_multi_cookie_swap(void *a, void *b, int size, const void *priv)
@@ -2872,6 +2912,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
link->addrs = addrs;
link->cookies = cookies;
link->cnt = cnt;
link->flags = flags;
if (cookies) {
/*
@@ -1544,15 +1544,10 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
*fd_type = trace_kprobe_is_return(tk) ? BPF_FD_TYPE_KRETPROBE
: BPF_FD_TYPE_KPROBE;
if (tk->symbol) {
*symbol = tk->symbol;
*probe_offset = tk->rp.kp.offset;
*probe_addr = 0;
} else {
*symbol = NULL;
*probe_offset = 0;
*probe_addr = (unsigned long)tk->rp.kp.addr;
}
*probe_offset = tk->rp.kp.offset;
*probe_addr = kallsyms_show_value(current_cred()) ?
(unsigned long)tk->rp.kp.addr : 0;
*symbol = tk->symbol;
return 0;
}
#endif /* CONFIG_PERF_EVENTS */
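
With this change, bpf_get_kprobe_info() reports the symbol and offset
unconditionally and gates only the raw address behind kallsyms_show_value().
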
@@ -1415,7 +1415,7 @@ static void uretprobe_perf_func(struct trace_uprobe *tu, unsigned long func,
int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
const char **filename, u64 *probe_offset,
bool perf_type_tracepoint)
u64 *probe_addr, bool perf_type_tracepoint)
{
const char *pevent = trace_event_name(event->tp_event);
const char *group = event->tp_event->class->system;
@@ -1432,6 +1432,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
: BPF_FD_TYPE_UPROBE;
*filename = tu->filename;
*probe_offset = tu->offset;
*probe_addr = 0;
return 0;
}
#endif /* CONFIG_PERF_EVENTS */
@@ -5,6 +5,7 @@
#include <linux/err.h>
#include <linux/netfilter.h>
#include <linux/netfilter_arp.h>
#include <linux/perf_event.h>
#include <net/if.h>
#include <stdio.h>
#include <unistd.h>
@@ -14,8 +15,78 @@
#include "json_writer.h"
#include "main.h"
#include "xlated_dumper.h"
#define PERF_HW_CACHE_LEN 128
static struct hashmap *link_table;
static struct dump_data dd;
static const char *perf_type_name[PERF_TYPE_MAX] = {
[PERF_TYPE_HARDWARE] = "hardware",
[PERF_TYPE_SOFTWARE] = "software",
[PERF_TYPE_TRACEPOINT] = "tracepoint",
[PERF_TYPE_HW_CACHE] = "hw-cache",
[PERF_TYPE_RAW] = "raw",
[PERF_TYPE_BREAKPOINT] = "breakpoint",
};
const char *event_symbols_hw[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = "cpu-cycles",
[PERF_COUNT_HW_INSTRUCTIONS] = "instructions",
[PERF_COUNT_HW_CACHE_REFERENCES] = "cache-references",
[PERF_COUNT_HW_CACHE_MISSES] = "cache-misses",
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "branch-instructions",
[PERF_COUNT_HW_BRANCH_MISSES] = "branch-misses",
[PERF_COUNT_HW_BUS_CYCLES] = "bus-cycles",
[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = "stalled-cycles-frontend",
[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = "stalled-cycles-backend",
[PERF_COUNT_HW_REF_CPU_CYCLES] = "ref-cycles",
};
const char *event_symbols_sw[PERF_COUNT_SW_MAX] = {
[PERF_COUNT_SW_CPU_CLOCK] = "cpu-clock",
[PERF_COUNT_SW_TASK_CLOCK] = "task-clock",
[PERF_COUNT_SW_PAGE_FAULTS] = "page-faults",
[PERF_COUNT_SW_CONTEXT_SWITCHES] = "context-switches",
[PERF_COUNT_SW_CPU_MIGRATIONS] = "cpu-migrations",
[PERF_COUNT_SW_PAGE_FAULTS_MIN] = "minor-faults",
[PERF_COUNT_SW_PAGE_FAULTS_MAJ] = "major-faults",
[PERF_COUNT_SW_ALIGNMENT_FAULTS] = "alignment-faults",
[PERF_COUNT_SW_EMULATION_FAULTS] = "emulation-faults",
[PERF_COUNT_SW_DUMMY] = "dummy",
[PERF_COUNT_SW_BPF_OUTPUT] = "bpf-output",
[PERF_COUNT_SW_CGROUP_SWITCHES] = "cgroup-switches",
};
const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX] = {
[PERF_COUNT_HW_CACHE_L1D] = "L1-dcache",
[PERF_COUNT_HW_CACHE_L1I] = "L1-icache",
[PERF_COUNT_HW_CACHE_LL] = "LLC",
[PERF_COUNT_HW_CACHE_DTLB] = "dTLB",
[PERF_COUNT_HW_CACHE_ITLB] = "iTLB",
[PERF_COUNT_HW_CACHE_BPU] = "branch",
[PERF_COUNT_HW_CACHE_NODE] = "node",
};
const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX] = {
[PERF_COUNT_HW_CACHE_OP_READ] = "load",
[PERF_COUNT_HW_CACHE_OP_WRITE] = "store",
[PERF_COUNT_HW_CACHE_OP_PREFETCH] = "prefetch",
};
const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
[PERF_COUNT_HW_CACHE_RESULT_ACCESS] = "refs",
[PERF_COUNT_HW_CACHE_RESULT_MISS] = "misses",
};
#define perf_event_name(array, id) ({ \
const char *event_str = NULL; \
\
if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \
event_str = array[id]; \
event_str; \
})
static int link_parse_fd(int *argc, char ***argv)
{
@@ -166,6 +237,154 @@ static int get_prog_info(int prog_id, struct bpf_prog_info *info)
return err;
}
static int cmp_u64(const void *A, const void *B)
{
const __u64 *a = A, *b = B;
return *a - *b;
}
static void
show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
{
__u32 i, j = 0;
__u64 *addrs;
jsonw_bool_field(json_wtr, "retprobe",
info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count);
jsonw_name(json_wtr, "funcs");
jsonw_start_array(json_wtr);
addrs = u64_to_ptr(info->kprobe_multi.addrs);
qsort(addrs, info->kprobe_multi.count, sizeof(addrs[0]), cmp_u64);
/* Load it once for all. */
if (!dd.sym_count)
kernel_syms_load(&dd);
for (i = 0; i < dd.sym_count; i++) {
if (dd.sym_mapping[i].address != addrs[j])
continue;
jsonw_start_object(json_wtr);
jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
/* Print null if it is vmlinux */
if (dd.sym_mapping[i].module[0] == '\0') {
jsonw_name(json_wtr, "module");
jsonw_null(json_wtr);
} else {
jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module);
}
jsonw_end_object(json_wtr);
if (j++ == info->kprobe_multi.count)
break;
}
jsonw_end_array(json_wtr);
}
static void
show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
{
jsonw_bool_field(wtr, "retprobe", info->perf_event.type == BPF_PERF_EVENT_KRETPROBE);
jsonw_uint_field(wtr, "addr", info->perf_event.kprobe.addr);
jsonw_string_field(wtr, "func",
u64_to_ptr(info->perf_event.kprobe.func_name));
jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
}
static void
show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
{
jsonw_bool_field(wtr, "retprobe", info->perf_event.type == BPF_PERF_EVENT_URETPROBE);
jsonw_string_field(wtr, "file",
u64_to_ptr(info->perf_event.uprobe.file_name));
jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset);
}
static void
show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr)
{
jsonw_string_field(wtr, "tracepoint",
u64_to_ptr(info->perf_event.tracepoint.tp_name));
}
static char *perf_config_hw_cache_str(__u64 config)
{
const char *hw_cache, *result, *op;
char *str = malloc(PERF_HW_CACHE_LEN);
if (!str) {
p_err("mem alloc failed");
return NULL;
}
hw_cache = perf_event_name(evsel__hw_cache, config & 0xff);
if (hw_cache)
snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache);
else
snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff);
op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff);
if (op)
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
"%s-", op);
else
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
"%lld-", (config >> 8) & 0xff);
result = perf_event_name(evsel__hw_cache_result, config >> 16);
if (result)
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
"%s", result);
else
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
"%lld", config >> 16);
return str;
}
static const char *perf_config_str(__u32 type, __u64 config)
{
const char *perf_config;
switch (type) {
case PERF_TYPE_HARDWARE:
perf_config = perf_event_name(event_symbols_hw, config);
break;
case PERF_TYPE_SOFTWARE:
perf_config = perf_event_name(event_symbols_sw, config);
break;
case PERF_TYPE_HW_CACHE:
perf_config = perf_config_hw_cache_str(config);
break;
default:
perf_config = NULL;
break;
}
return perf_config;
}
static void
show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr)
{
__u64 config = info->perf_event.event.config;
__u32 type = info->perf_event.event.type;
const char *perf_type, *perf_config;
perf_type = perf_event_name(perf_type_name, type);
if (perf_type)
jsonw_string_field(wtr, "event_type", perf_type);
else
jsonw_uint_field(wtr, "event_type", type);
perf_config = perf_config_str(type, config);
if (perf_config)
jsonw_string_field(wtr, "event_config", perf_config);
else
jsonw_uint_field(wtr, "event_config", config);
if (type == PERF_TYPE_HW_CACHE && perf_config)
free((void *)perf_config);
}
static int show_link_close_json(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -218,6 +437,29 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
jsonw_uint_field(json_wtr, "map_id",
info->struct_ops.map_id);
break;
case BPF_LINK_TYPE_KPROBE_MULTI:
show_kprobe_multi_json(info, json_wtr);
break;
case BPF_LINK_TYPE_PERF_EVENT:
switch (info->perf_event.type) {
case BPF_PERF_EVENT_EVENT:
show_perf_event_event_json(info, json_wtr);
break;
case BPF_PERF_EVENT_TRACEPOINT:
show_perf_event_tracepoint_json(info, json_wtr);
break;
case BPF_PERF_EVENT_KPROBE:
case BPF_PERF_EVENT_KRETPROBE:
show_perf_event_kprobe_json(info, json_wtr);
break;
case BPF_PERF_EVENT_UPROBE:
case BPF_PERF_EVENT_URETPROBE:
show_perf_event_uprobe_json(info, json_wtr);
break;
default:
break;
}
break;
default:
break;
}
@@ -351,6 +593,113 @@ void netfilter_dump_plain(const struct bpf_link_info *info)
printf(" flags 0x%x", info->netfilter.flags);
}
static void show_kprobe_multi_plain(struct bpf_link_info *info)
{
__u32 i, j = 0;
__u64 *addrs;
if (!info->kprobe_multi.count)
return;
if (info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN)
printf("\n\tkretprobe.multi ");
else
printf("\n\tkprobe.multi ");
printf("func_cnt %u ", info->kprobe_multi.count);
addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
qsort(addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
/* Load it once for all. */
if (!dd.sym_count)
kernel_syms_load(&dd);
if (!dd.sym_count)
return;
printf("\n\t%-16s %s", "addr", "func [module]");
for (i = 0; i < dd.sym_count; i++) {
if (dd.sym_mapping[i].address != addrs[j])
continue;
printf("\n\t%016lx %s",
dd.sym_mapping[i].address, dd.sym_mapping[i].name);
if (dd.sym_mapping[i].module[0] != '\0')
printf(" [%s] ", dd.sym_mapping[i].module);
else
printf(" ");
if (j++ == info->kprobe_multi.count)
break;
}
}
static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
{
const char *buf;
buf = u64_to_ptr(info->perf_event.kprobe.func_name);
if (buf[0] == '\0' && !info->perf_event.kprobe.addr)
return;
if (info->perf_event.type == BPF_PERF_EVENT_KRETPROBE)
printf("\n\tkretprobe ");
else
printf("\n\tkprobe ");
if (info->perf_event.kprobe.addr)
printf("%llx ", info->perf_event.kprobe.addr);
printf("%s", buf);
if (info->perf_event.kprobe.offset)
printf("+%#x", info->perf_event.kprobe.offset);
printf(" ");
}
static void show_perf_event_uprobe_plain(struct bpf_link_info *info)
{
const char *buf;
buf = u64_to_ptr(info->perf_event.uprobe.file_name);
if (buf[0] == '\0')
return;
if (info->perf_event.type == BPF_PERF_EVENT_URETPROBE)
printf("\n\turetprobe ");
else
printf("\n\tuprobe ");
printf("%s+%#x ", buf, info->perf_event.uprobe.offset);
}
static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
{
const char *buf;
buf = u64_to_ptr(info->perf_event.tracepoint.tp_name);
if (buf[0] == '\0')
return;
printf("\n\ttracepoint %s ", buf);
}
static void show_perf_event_event_plain(struct bpf_link_info *info)
{
__u64 config = info->perf_event.event.config;
__u32 type = info->perf_event.event.type;
const char *perf_type, *perf_config;
printf("\n\tevent ");
perf_type = perf_event_name(perf_type_name, type);
if (perf_type)
printf("%s:", perf_type);
else
printf("%u :", type);
perf_config = perf_config_str(type, config);
if (perf_config)
printf("%s ", perf_config);
else
printf("%llu ", config);
if (type == PERF_TYPE_HW_CACHE && perf_config)
free((void *)perf_config);
}
static int show_link_close_plain(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -396,6 +745,29 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_NETFILTER:
netfilter_dump_plain(info);
break;
case BPF_LINK_TYPE_KPROBE_MULTI:
show_kprobe_multi_plain(info);
break;
case BPF_LINK_TYPE_PERF_EVENT:
switch (info->perf_event.type) {
case BPF_PERF_EVENT_EVENT:
show_perf_event_event_plain(info);
break;
case BPF_PERF_EVENT_TRACEPOINT:
show_perf_event_tracepoint_plain(info);
break;
case BPF_PERF_EVENT_KPROBE:
case BPF_PERF_EVENT_KRETPROBE:
show_perf_event_kprobe_plain(info);
break;
case BPF_PERF_EVENT_UPROBE:
case BPF_PERF_EVENT_URETPROBE:
show_perf_event_uprobe_plain(info);
break;
default:
break;
}
break;
default:
break;
}
@@ -417,10 +789,13 @@ static int do_show_link(int fd)
{
struct bpf_link_info info;
__u32 len = sizeof(info);
char buf[256];
__u64 *addrs = NULL;
char buf[PATH_MAX];
int count;
int err;
memset(&info, 0, sizeof(info));
buf[0] = '\0';
again:
err = bpf_link_get_info_by_fd(fd, &info, &len);
if (err) {
@@ -431,22 +806,67 @@ static int do_show_link(int fd)
}
if (info.type == BPF_LINK_TYPE_RAW_TRACEPOINT &&
!info.raw_tracepoint.tp_name) {
info.raw_tracepoint.tp_name = (unsigned long)&buf;
info.raw_tracepoint.tp_name = ptr_to_u64(&buf);
info.raw_tracepoint.tp_name_len = sizeof(buf);
goto again;
}
if (info.type == BPF_LINK_TYPE_ITER &&
!info.iter.target_name) {
info.iter.target_name = (unsigned long)&buf;
info.iter.target_name = ptr_to_u64(&buf);
info.iter.target_name_len = sizeof(buf);
goto again;
}
if (info.type == BPF_LINK_TYPE_KPROBE_MULTI &&
!info.kprobe_multi.addrs) {
count = info.kprobe_multi.count;
if (count) {
addrs = calloc(count, sizeof(__u64));
if (!addrs) {
p_err("mem alloc failed");
close(fd);
return -ENOMEM;
}
info.kprobe_multi.addrs = ptr_to_u64(addrs);
goto again;
}
}
if (info.type == BPF_LINK_TYPE_PERF_EVENT) {
switch (info.perf_event.type) {
case BPF_PERF_EVENT_TRACEPOINT:
if (!info.perf_event.tracepoint.tp_name) {
info.perf_event.tracepoint.tp_name = ptr_to_u64(&buf);
info.perf_event.tracepoint.name_len = sizeof(buf);
goto again;
}
break;
case BPF_PERF_EVENT_KPROBE:
case BPF_PERF_EVENT_KRETPROBE:
if (!info.perf_event.kprobe.func_name) {
info.perf_event.kprobe.func_name = ptr_to_u64(&buf);
info.perf_event.kprobe.name_len = sizeof(buf);
goto again;
}
break;
case BPF_PERF_EVENT_UPROBE:
case BPF_PERF_EVENT_URETPROBE:
if (!info.perf_event.uprobe.file_name) {
info.perf_event.uprobe.file_name = ptr_to_u64(&buf);
info.perf_event.uprobe.name_len = sizeof(buf);
goto again;
}
break;
default:
break;
}
}
if (json_output)
show_link_close_json(fd, &info);
else
show_link_close_plain(fd, &info);
if (addrs)
free(addrs);
close(fd);
return 0;
}
@@ -471,7 +891,8 @@ static int do_show(int argc, char **argv)
fd = link_parse_fd(&argc, &argv);
if (fd < 0)
return fd;
return do_show_link(fd);
do_show_link(fd);
goto out;
}
if (argc)
@@ -510,6 +931,9 @@ static int do_show(int argc, char **argv)
if (show_pinned)
delete_pinned_obj_table(link_table);
out:
if (dd.sym_count)
kernel_syms_destroy(&dd);
return errno == ENOENT ? 0 : -1;
}
@@ -46,7 +46,11 @@ void kernel_syms_load(struct dump_data *dd)
}
dd->sym_mapping = tmp;
sym = &dd->sym_mapping[dd->sym_count];
if (sscanf(buff, "%p %*c %s", &address, sym->name) != 2)
/* module is optional */
sym->module[0] = '\0';
/* trim the square brackets around the module name */
if (sscanf(buff, "%p %*c %s [%[^]]s", &address, sym->name, sym->module) < 2)
continue;
sym->address = (unsigned long)address;
if (!strcmp(sym->name, "__bpf_call_base")) {
@@ -5,12 +5,14 @@
#define __BPF_TOOL_XLATED_DUMPER_H
#define SYM_MAX_NAME 256
#define MODULE_MAX_NAME 64
struct bpf_prog_linfo;
struct kernel_sym {
unsigned long address;
char name[SYM_MAX_NAME];
char module[MODULE_MAX_NAME];
};
struct dump_data {
@@ -1057,6 +1057,16 @@ enum bpf_link_type {
MAX_BPF_LINK_TYPE,
};
enum bpf_perf_event_type {
BPF_PERF_EVENT_UNSPEC = 0,
BPF_PERF_EVENT_UPROBE = 1,
BPF_PERF_EVENT_URETPROBE = 2,
BPF_PERF_EVENT_KPROBE = 3,
BPF_PERF_EVENT_KRETPROBE = 4,
BPF_PERF_EVENT_TRACEPOINT = 5,
BPF_PERF_EVENT_EVENT = 6,
};
/* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
*
* NONE(default): No further bpf programs allowed in the subtree.
@@ -6439,6 +6449,36 @@ struct bpf_link_info {
__s32 priority;
__u32 flags;
} netfilter;
struct {
__aligned_u64 addrs;
__u32 count; /* in/out: kprobe_multi function count */
__u32 flags;
} kprobe_multi;
struct {
__u32 type; /* enum bpf_perf_event_type */
__u32 :32;
union {
struct {
__aligned_u64 file_name; /* in/out */
__u32 name_len;
__u32 offset; /* offset from file_name */
} uprobe; /* BPF_PERF_EVENT_UPROBE, BPF_PERF_EVENT_URETPROBE */
struct {
__aligned_u64 func_name; /* in/out */
__u32 name_len;
__u32 offset; /* offset from func_name */
__u64 addr;
} kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */
struct {
__aligned_u64 tp_name; /* in/out */
__u32 name_len;
} tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
struct {
__u64 config;
__u32 type;
} event; /* BPF_PERF_EVENT_EVENT */
};
} perf_event;
};
} __attribute__((aligned(8)));