- 13 Jun, 2018 4 commits
-
-
Brenden Blanco authored
Signed-off-by: Brenden Blanco <bblanco@gmail.com>
-
yonghong-song authored
Make the input string of get_kprobe_functions a bytes literal in tcpdrop and zfsslower so the tools are Python 3 compatible.

Signed-off-by: Yonghong Song <yhs@fb.com>
-
yonghong-song authored
Fix issue #1802. On x64, the following commit (in 4.17) changed the raw parameter passed to the syscall entry function from a list of parameters supplied in user space to a single `pt_regs *` parameter. Also in 4.17, the x64 syscall entry function was renamed from `sys_<name>` to `__x64_sys_<name>`.

```
commit fa697140f9a20119a9ec8fd7460cc4314fbdaff3
Author: Dominik Brodowski <linux@dominikbrodowski.net>
Date:   Thu Apr 5 11:53:02 2018 +0200

    syscalls/x86: Use 'struct pt_regs' based syscall calling convention for 64-bit syscalls

    Let's make use of ARCH_HAS_SYSCALL_WRAPPER=y on pure 64-bit x86-64 systems:
    Each syscall defines a stub which takes struct pt_regs as its only argument.
    It decodes just those parameters it needs, e.g:

        asmlinkage long sys_xyzzy(const struct pt_regs *regs)
        {
                return SyS_xyzzy(regs->di, regs->si, regs->dx);
        }

    This approach avoids leaking random user-provided register content down
    the call chain.
    ...
```

In bcc, we support kprobe function signatures in the BPF program, and the rewriter automatically generates the proper assignments to these parameters. With the above signature change, the original method no longer works. This patch enhances the rewriter to generate two versions of the code, guarded by CONFIG_ARCH_HAS_SYSCALL_WRAPPER.

We need to identify whether a function will be attached to a syscall entry function at program load time, before the program has attached to any event. The prefix `kprobe__` is used for kprobe autoload, so we can use `kprobe____x64_sys_` as the prefix to identify x64 syscall entry functions. To support other architectures and non-autoloading programs, the prefix `syscall__` is introduced to signal that a function is a syscall entry probe.

trace.py and other tools that attach kprobes to syscall entry functions are also updated to the new interface so that they work properly on 4.17.

Signed-off-by: Yonghong Song <yhs@fb.com>
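The load-time naming convention described above can be sketched as a simple prefix check (a hypothetical Python illustration of the rule, not bcc's actual loader code):

```python
# Hypothetical sketch: classify a BPF function name as a syscall entry
# probe based on the two prefixes introduced by this patch.
SYSCALL_ENTRY_PREFIXES = ("kprobe____x64_sys_", "syscall__")

def is_syscall_entry(fn_name: str) -> bool:
    """True if the function should get the pt_regs-based syscall-wrapper prologue."""
    return fn_name.startswith(SYSCALL_ENTRY_PREFIXES)

print(is_syscall_entry("syscall__execve"))          # True
print(is_syscall_entry("kprobe____x64_sys_clone"))  # True
print(is_syscall_entry("kprobe__tcp_drop"))         # False
```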
-
Paul Chaignon authored
20fb64cd stops the whole AST traversal when it meets a bpf_probe_read call. I think the original intent was simply not to rewrite the third argument, so this commit fixes it by remembering the third argument on bpf_probe_read call traversals and overriding TraverseStmt to skip the traversal of that argument when it is met later.
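The skip logic can be sketched with a toy traversal in Python (a hypothetical stand-in for the clang-based ProbeVisitor, not the actual rewriter code):

```python
# Toy AST: a node is either a leaf string or a (callee, args) tuple.
def collect_rewrites(node, out=None):
    """Gather the leaves a rewriter would visit, remembering
    bpf_probe_read's third argument and skipping it when met later."""
    if out is None:
        out = []
    if not isinstance(node, tuple):
        out.append(node)          # a leaf the rewriter would touch
        return out
    name, args = node
    skip = args[2] if name == "bpf_probe_read" else None
    for arg in args:
        if arg is skip:
            continue              # do not traverse the third argument
        collect_rewrites(arg, out)
    return out

call = ("bpf_probe_read", ("&val", "sizeof(val)", "&skb->cb[0]"))
print(collect_rewrites(call))  # ['&val', 'sizeof(val)']
```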
-
- 11 Jun, 2018 10 commits
-
-
yonghong-song authored
bcc uses some function prefixes for autoload purposes: "kprobe__", "tracepoint__" and "raw_tracepoint__". Currently we also pass the full function name as the program name to the kernel. The kernel accepts only 16 bytes, so longer program names are truncated. For example, with bps we see something like:

```
287- <raw_tracepoint> 0 2 Jun10/17:07 raw_tracepoint_
290- tracepoint       0 4 Jun10/17:08 tracepoint__soc
297- kprobe           0 2 Jun10/17:09 kprobe__tcp_cle
```

Such long prefixes needlessly take space away from the real function name. This patch removes these prefixes before passing the name to the kernel. The result looks like:

```
311- <raw_tracepoint> 0 2 Jun10/17:44 sched_switch
321- tracepoint       0 4 Jun10/17:45 sock__inet_sock
322- kprobe           0 2 Jun10/17:45 tcp_cleanup_rbu
```

Signed-off-by: Yonghong Song <yhs@fb.com>
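The renaming can be sketched as follows (a hypothetical illustration of the behaviour described above, not bcc's actual code):

```python
# Strip bcc's autoload prefixes before handing the name to the kernel,
# which keeps only 16 bytes (15 characters plus the NUL terminator).
AUTOLOAD_PREFIXES = ("kprobe__", "tracepoint__", "raw_tracepoint__")
KERNEL_NAME_LIMIT = 15

def kernel_prog_name(fn_name: str) -> str:
    for prefix in AUTOLOAD_PREFIXES:
        if fn_name.startswith(prefix):
            fn_name = fn_name[len(prefix):]
            break
    return fn_name[:KERNEL_NAME_LIMIT]

print(kernel_prog_name("kprobe__tcp_cleanup_rbuf"))      # tcp_cleanup_rbu
print(kernel_prog_name("raw_tracepoint__sched_switch"))  # sched_switch
```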
-
Andreas Gerstmayr authored
commit 95b3d8c8 fixed the dport filtering of the kprobes variant by moving the network-byte-order to host-byte-order conversion before the filtering. Before submitting the perf event, the byte order of dport was converted again; this commit removes that double conversion.
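The double conversion can be illustrated with Python's socket helpers (a standalone sketch, not the tool's code):

```python
import socket

dport_net = socket.htons(80)          # dport as seen in the skb (network order)
dport_host = socket.ntohs(dport_net)  # the fix: convert once, before filtering

# The bug this commit removes: a second conversion before the perf
# event submission hands userspace a network-order value again
# (20480 instead of 80 on little-endian hosts).
dport_double = socket.htons(dport_host)
print(dport_host)                     # 80
print(dport_double == dport_net)      # True: back to network order
```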
-
yonghong-song authored
[V2] Add two map types for bpf_redirect_map()
-
yonghong-song authored
Smoke test for tcpdrop
-
4ast authored
adjust tracepoint field type based on size
-
4ast authored
add missing types in bps
-
Paul Chaignon authored
-
Gary Lin authored
Also add a simple example for DEVMAP based on xdp_drop_count.py

v2: Add an example for CPUMAP

Signed-off-by: Gary Lin <glin@suse.com>
-
Gary Lin authored
Those two map types are necessary to support bpf_redirect_map() in XDP.

v2: Use ArrayBase as the base class of DevMap and CpuMap

Signed-off-by: Gary Lin <glin@suse.com>
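The class layout described in the v2 note might look roughly like this (a simplified hypothetical sketch, not bcc's actual table.py):

```python
class ArrayBase:
    """Array-style BPF table: integer keys 0..max_entries-1."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._values = [0] * max_entries

    def __setitem__(self, key, value):
        self._values[key] = value

    def __getitem__(self, key):
        return self._values[key]

class DevMap(ArrayBase):
    """BPF_MAP_TYPE_DEVMAP: values are interface indexes for XDP redirect."""

class CpuMap(ArrayBase):
    """BPF_MAP_TYPE_CPUMAP: values select the CPU packets are steered to."""

devmap = DevMap(4)
devmap[0] = 2     # redirect through ifindex 2
print(devmap[0])  # 2
```

Sharing ArrayBase reflects that both map types are array-indexed in the kernel, so the Python wrappers only differ in the map type they declare.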
-
Yonghong Song authored
Add missing program and map types in bps

Signed-off-by: Yonghong Song <yhs@fb.com>
-
- 10 Jun, 2018 1 commit
-
-
Yonghong Song authored
Fix issue #1807. A tracepoint may have a format like this (from syscalls/sys_enter_socket):

```
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;

field:int __syscall_nr; offset:8; size:4; signed:1;
field:int family; offset:16; size:8; signed:0;
field:int type; offset:24; size:8; signed:0;
field:int protocol; offset:32; size:8; signed:0;
```

The current rewriter generates:

```c
struct tracepoint__syscalls__sys_enter_socket {
    u64 __do_not_use__;
    int __syscall_nr;
    int family;
    int type;
    int protocol;
};
```

This is incorrect: in the above structure, the offsets of `family`/`type`/`protocol` become 12/16/20. This patch fixes the issue by adjusting each field type based on its size. The new structure:

```c
struct tracepoint__syscalls__sys_enter_socket {
    u64 __do_not_use__;
    int __syscall_nr;
    s64 family;
    s64 type;
    s64 protocol;
};
```

The offsets of all fields are now correct.

Signed-off-by: Yonghong Song <yhs@fb.com>
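The size-based adjustment can be sketched as follows (a hypothetical illustration of the fix, not the rewriter's actual code):

```python
# If the declared C type's native size disagrees with the size the
# tracepoint format reports, substitute a fixed-width type instead
# (the commit's example widens 8-byte `int` fields to s64).
NATIVE_SIZES = {"char": 1, "short": 2, "int": 4, "long": 8}

def adjust_field_type(ctype: str, size: int) -> str:
    base = ctype.split()[-1]     # "unsigned short" -> "short"
    if NATIVE_SIZES.get(base) == size:
        return ctype             # declared type already matches
    return "s%d" % (size * 8)    # fall back to a fixed-width type

print(adjust_field_type("int", 8))             # s64
print(adjust_field_type("int", 4))             # int
print(adjust_field_type("unsigned short", 2))  # unsigned short
```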
-
- 08 Jun, 2018 8 commits
-
-
yonghong-song authored
add tcpdrop tool
-
yonghong-song authored
Feat/add stack frames to funcslower
-
Brendan Gregg authored
-
yonghong-song authored
profile: remove remnant from old version
-
dpayne authored
-
dpayne authored
-
dpayne authored
-
Paul Chaignon authored
The comment on the instrumentation of perf_event_open should have been removed in commit 715f7e6e.
-
- 07 Jun, 2018 1 commit
-
-
Gary Lin authored
Signed-off-by: Gary Lin <glin@suse.com>
-
- 06 Jun, 2018 5 commits
-
-
dpayne authored
-
dpayne authored
2. Not using BPF_F_REUSE_STACKID in funcslower stack traces
3. Combining an ifdef check
-
4ast authored
fix a memory leak for getline()
-
Yonghong Song authored
The memory allocated by getline() is not freed. This may cause tools like clang's memory leak sanitizer to complain.

Signed-off-by: Yonghong Song <yhs@fb.com>
-
yonghong-song authored
Fix typo
-
- 05 Jun, 2018 4 commits
-
-
dpayne authored
-
Dmitry Dolgov authored
daelays -> delays
-
dpayne authored
-
dpayne authored
-
- 03 Jun, 2018 1 commit
-
-
yonghong-song authored
skip probe rewriter for bpf_probe_read()
-
- 02 Jun, 2018 5 commits
-
-
Yonghong Song authored
bpf_probe_read() is often used to access pointees in BPF programs. The recent rewriter has become smarter, so many bpf_probe_read() calls can now be replaced with simple pointer/member accesses. In certain cases, though, bpf_probe_read() is still preferred. For example, kernel net/tcp.h defines TCP_SKB_CB as:

```c
#define TCP_SKB_CB(__skb) ((struct tcp_skb_cb *)&((__skb)->cb[0]))
```

A user can then access tcp_gso_size of an skb data structure with:

```c
TCP_SKB_CB(skb)->tcp_gso_size
```

Here the rewriter fails, as it attempts to rewrite (__skb)->cb[0]. Instead of chasing down exactly this pattern, this patch detects calls to bpf_probe_read() in ProbeVisitor and skips them, so bpf_probe_read()'s third parameter remains an AddrOf. This also helps other cases where the rewriter is not capable and the user falls back to bpf_probe_read() as a workaround.

Also fixed tcptop.py to use direct assignment instead of bpf_probe_read(); otherwise the rewriter would rewrite the source address reference inside the bpf_probe_read().

Signed-off-by: Yonghong Song <yhs@fb.com>
-
4ast authored
Add "-D __BPF_TRACING__" to frontend compilation flags
-
Yonghong Song authored
In the 4.17 kernel, the x86 build requires compiler asm-goto support. clang does not support asm-goto, so BPF program compilation started to break. The following kernel commit:

```
commit b1ae32dbab50ed19cfc16d225b0fb0114fb13025
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Sun May 13 12:32:22 2018 -0700

    x86/cpufeature: Guard asm_volatile_goto usage for BPF compilation

    Workaround for the sake of BPF compilation which utilizes kernel
    headers, but clang does not support ASM GOTO and fails the build.
```

worked around the issue by permitting native clang compilation. A warning message, however, is issued:

```
./arch/x86/include/asm/cpufeature.h:150:2: warning: "Compiler lacks ASM_GOTO support. Add -D __BPF_TRACING__ to your compiler arguments" [-W#warnings]
#warning "Compiler lacks ASM_GOTO support. Add -D __BPF_TRACING__ to your compil...
^
1 warning generated.
```

This patch adds "-D __BPF_TRACING__" to the clang frontend compilation to suppress the warning.

Signed-off-by: Yonghong Song <yhs@fb.com>
-
yonghong-song authored
Refactor external pointer assignments
-
yonghong-song authored
sync BPF compat headers with latest bpf-next, update BPF features list
-
- 01 Jun, 2018 1 commit
-
-
Quentin Monnet authored
Update doc/kernel-versions.md with the latest eBPF features, map types, JIT-compiler support, and helpers. Synchronise headers with bpf-next (at commit bcece5dc40b9). Add prototypes for the following helpers:
- bpf_get_stack()
- bpf_skb_load_bytes_relative()
- bpf_fib_lookup()
- bpf_sock_hash_update()
- bpf_msg_redirect_hash()
- bpf_sk_redirect_hash()
- bpf_lwt_push_encap()
- bpf_lwt_seg6_store_bytes()
- bpf_lwt_seg6_adjust_srh()
- bpf_lwt_seg6_action()
- bpf_rc_repeat()
- bpf_rc_keydown()
-