- 09 Nov, 2016 2 commits
-
-
Brenden Blanco authored
Make bcc_symcache_new(tid) work with symbols from /tmp/perf-pid.map
-
Mark Drayton authored
-
- 08 Nov, 2016 2 commits
-
-
Brenden Blanco authored
Fix warnings covered by -Wdelete-non-virtual-dtor
-
Marco Leogrande authored
The warnings were:

  src/cc/bcc_syms.cc: In function ‘void bcc_free_symcache(void*, int)’:
  src/cc/bcc_syms.cc:217:40: warning: deleting object of polymorphic class type ‘KSyms’ which has non-virtual destructor might cause undefined behaviour [-Wdelete-non-virtual-dtor]
     delete static_cast<KSyms*>(symcache);
                                         ^
  src/cc/bcc_syms.cc:219:43: warning: deleting object of polymorphic class type ‘ProcSyms’ which has non-virtual destructor might cause undefined behaviour [-Wdelete-non-virtual-dtor]
     delete static_cast<ProcSyms*>(symcache);
                                            ^

Fix the warnings by defining a virtual destructor for the base class SymbolCache.

Signed-off-by: Marco Leogrande <marcol@plumgrid.com>
-
- 07 Nov, 2016 1 commit
-
-
Jan-Erik Rediger authored
-
- 05 Nov, 2016 2 commits
-
-
Paul Chaignon authored
-
Jan-Erik Rediger authored
-
- 03 Nov, 2016 1 commit
-
-
Mark Drayton authored
* support filtering by process ID (-p) or thread ID (-t); previously -p actually filtered on thread ID (aka "pid" in kernel-speak)
* include process and thread ID in output
* flip order of user and kernel stacks to flow more naturally
* resolve symbols using process ID instead of thread ID so only one symbol cache is instantiated per process (see the sketch below)
* misc aesthetic fixes here and there
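A minimal sketch of the symbol-resolution point, assuming bcc's Python `BPF.sym()` helper; the placeholder program and the `resolve` function are illustrative, not the tool's actual code. Resolving with the process ID (tgid) means bcc keeps one symbol cache per process rather than one per thread.
```
from bcc import BPF

b = BPF(text="int do_trace(void *ctx) { return 0; }")  # placeholder program

def resolve(addr, tgid):
    # BPF.sym() caches symbol tables keyed by the pid it is given; passing
    # the tgid (process ID) avoids building a separate cache per thread.
    return b.sym(addr, tgid)
```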
-
- 02 Nov, 2016 2 commits
-
-
Brenden Blanco authored
perf_reader: install perf_reader.h
-
Brenden Blanco authored
Expose destruction of SymbolCache in libbcc
-
- 31 Oct, 2016 1 commit
-
-
Teng Qin authored
-
- 30 Oct, 2016 2 commits
-
-
Marcin Ślusarz authored
Ref: iovisor/bcc#778
-
Teng Qin authored
-
- 28 Oct, 2016 1 commit
-
-
Sasha Goldshtein authored
funccount now bails early with an error if there are no functions matching the specified pattern (the same applies to tracepoints and USDT probes). For example:
```
No functions matched by pattern ^sched:sched_fork$
```
Fixes #789.
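A minimal sketch of such an early exit, assuming a kernel-function pattern and bcc's `BPF.get_kprobe_functions()` helper with a Python 2-era string regex; the pattern and message wiring are illustrative, and the real tool covers tracepoints and USDT probes as well.
```
import sys
from bcc import BPF

pattern = "^vfs_nosuchthing$"   # illustrative pattern that matches nothing
if not BPF.get_kprobe_functions(pattern):
    print("No functions matched by pattern %s" % pattern)
    sys.exit(1)

# ... only build and attach the BPF program once we know there is work to do
```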
-
- 27 Oct, 2016 1 commit
-
-
Brendan Gregg authored
-
- 26 Oct, 2016 1 commit
-
-
Brenden Blanco authored
funccount: Fix on-CPU hang when attaching to SyS_*
-
- 25 Oct, 2016 3 commits
-
-
Marco Leogrande authored
Signed-off-by: Marco Leogrande <marcol@plumgrid.com>
-
Sasha Goldshtein authored
To avoid a potential race in which zeroing a key modifies the next hash key retrieved by the loop in `Table.zero()`, retrieve all the keys in user space first, before starting the zeroing loop. See discussion on #780. Tested with `funccount 'SyS_*' -i 1` while running a heavy read/write test application (`dd`) in the background for several minutes, with no visible issues.
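A minimal sketch of the pattern, not the `Table.zero()` implementation itself; it assumes a bcc table object, whose ctypes value type is exposed as `Leaf` in the Python API. The keys are snapshotted in user space first, so overwriting one entry cannot perturb the kernel-side iteration that yields the next key.
```
def zero_all(table):
    keys = list(table.keys())     # user-space snapshot of every key
    for k in keys:
        table[k] = table.Leaf()   # overwrite each value with a zeroed leaf
```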
-
Sasha Goldshtein authored
Because we know the number of probes before attaching them, we can simply preinitialize a fixed-size array instead of using a BPF map. This avoids potential deadlocks, hangs, and race conditions both in the interaction with the Python program and internally in the kernel. See also #415, #665, #233 for more discussion.
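A minimal sketch of the approach; the probe names and the generated program text are illustrative, not funccount's exact code, and `BPF_ARRAY` is bcc's fixed-size array macro. Each probe gets its own index into an array sized up front, so no map entries have to be created at trace time.
```
from bcc import BPF

functions = ["vfs_read", "vfs_write"]            # assumed probe targets

# One array slot and one BPF function per probe, generated before attaching.
text = "BPF_ARRAY(counts, u64, %d);\n" % len(functions)
for i in range(len(functions)):
    text += """
int trace_%d(void *ctx) {
    int loc = %d;
    u64 *val = counts.lookup(&loc);
    if (val)
        (*val)++;       // non-atomic increment; good enough for a sketch
    return 0;
}
""" % (i, i)

b = BPF(text=text)
for i, fn in enumerate(functions):
    b.attach_kprobe(event=fn, fn_name="trace_%d" % i)
```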
-
- 21 Oct, 2016 3 commits
-
-
Brendan Gregg authored
* profile.py to use new perf support
* Minor adjustments to llcstat docs
-
Brendan Gregg authored
-
Brenden Blanco authored
Add basic support for BPF perf event
-
- 20 Oct, 2016 12 commits
-
-
Teng Qin authored
-
Teng Qin authored
-
Sasha Goldshtein authored
Avoiding the prepopulation of the location cache allows us to get rid of the `zero()` call at the end of each interval, which could hang the program while spinning at 100% CPU. The prepopulation is replaced with a `lookup_or_init`, and the `zero()` call with a call to `clear()`.
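A minimal sketch of the pattern; the map, function, and probe names are illustrative rather than taken from the tool. The BPF side creates entries on demand with `lookup_or_init()`, and the user-space side resets the table between intervals with `clear()` instead of zeroing every key.
```
from bcc import BPF

text = """
BPF_HASH(counts, u64, u64);

int do_count(struct pt_regs *ctx) {
    u64 key = PT_REGS_IP(ctx), zero = 0;
    u64 *val = counts.lookup_or_init(&key, &zero);
    if (val)
        (*val)++;
    return 0;
}
"""
b = BPF(text=text)
b.attach_kprobe(event="vfs_read", fn_name="do_count")

# ... after printing one interval's worth of counts:
b["counts"].clear()    # drop every entry instead of zeroing each one
```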
-
Sasha Goldshtein authored
`BPF.get_kprobe_functions` does not filter duplicates, and as a result may return the same function name more than once if it appears in /sys/kernel/debug/tracing/available_filter_functions more than once. Change the function's behavior to filter out duplicates before returning, so we don't end up attaching the same kprobe more than once.
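A minimal sketch of the deduplication; the real `get_kprobe_functions` also honors the kprobe blacklist and a few other exclusions, which are omitted here. Collecting names into a set means a function listed twice in available_filter_functions is returned only once.
```
import re

def kprobe_functions(event_re):
    fns = set()    # a set drops duplicate entries from the kernel's list
    with open("/sys/kernel/debug/tracing/available_filter_functions") as f:
        for line in f:
            parts = line.split()
            if parts and re.match(event_re, parts[0]):
                fns.add(parts[0])
    return list(fns)
```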
-
Teng Qin authored
-
Teng Qin authored
-
Teng Qin authored
-
Brendan Gregg authored
trace, argdist: STRCMP helper function
-
Brendan Gregg authored
-
Brendan Gregg authored
-
Brendan Gregg authored
-
Sasha Goldshtein authored
Because `funccount` doesn't use the direct regex attach infrastructure in the BPF module, it needs its own check against a sensible maximum probe limit. We use 1000 because that's the limit the BPF module uses as well. When trying to attach to more than 1000 probes, we bail out early.
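A minimal sketch of the guard; the limit comes from the text above, while the function name and message are illustrative.
```
max_probes = 1000    # same limit the BPF module enforces

def check_probe_quota(probes):
    if len(probes) > max_probes:
        raise ValueError("attempting to attach %d probes, exceeding the "
                         "limit of %d" % (len(probes), max_probes))
```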
-
- 19 Oct, 2016 6 commits
-
-
Brendan Gregg authored
funccount: Generalized for uprobes, tracepoints, and USDT
-
Sasha Goldshtein authored
As part of the funccount work, attaching multiple kprobes at once (with `event_re`) no longer fails early when the quota is exceeded, but only when the 1000th probe is being added. Revert to the old behavior, which fixes the `test_probe_quota` test. Add a similar test for uprobes, `test_uprobe_quota`, which exercises the recently-added uprobe regex support.
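A minimal sketch of what such a quota test can look like; the class and program text are assumptions, not the actual test-suite code. A catch-all regex matches far more functions than the quota allows, so the attach call should raise.
```
import unittest
from bcc import BPF

class TestProbeQuota(unittest.TestCase):
    def test_probe_quota(self):
        b = BPF(text="int stub(void *ctx) { return 0; }")
        # ".*" matches thousands of kernel functions, which should trip the
        # quota check before they are all attached.
        with self.assertRaises(Exception):
            b.attach_kprobe(event_re=".*", fn_name="stub")

if __name__ == "__main__":
    unittest.main()
```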
-
Sasha Goldshtein authored
This commit updates `funccount` to support attaching to a set of user functions, kernel tracepoints, or USDT probes using familiar syntax. Along the way, the implementation has been updated to use a separate BPF function for each target function, because using the instruction pointer to determine the function name doesn't work for anything other than kprobes. Even though the BPF program can now be potentially larger, testing with 40-50 attach points shows no significant overhead compared to the previous version.

Examples of what's now possible:
```
funccount t:block:*
funccount u:node:gc*
funccount -r 'c:(read|write)$'
funccount -p 142 u:ruby:object__create
```
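A minimal sketch of dispatching on the pattern prefix; it is illustrative only, treats every pattern as a regex, and omits USDT probes, which the real tool wires up through the `USDT` class before the program is loaded.
```
from bcc import BPF

def attach_one(b, spec, fn_name, pid=-1):
    if spec.startswith("t:"):
        # t:category:event -> kernel tracepoints
        b.attach_tracepoint(tp_re=spec[2:], fn_name=fn_name)
    elif ":" in spec:
        # library:pattern -> user functions via a uprobe regex
        lib, sym_re = spec.split(":", 1)
        b.attach_uprobe(name=lib, sym_re=sym_re, fn_name=fn_name, pid=pid)
    else:
        # bare pattern -> kernel functions via a kprobe regex
        b.attach_kprobe(event_re=spec, fn_name=fn_name)
```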
-
Sasha Goldshtein authored
Make the `get_user_functions`, `get_kprobe_functions`, and `get_tracepoints` methods publicly accessible from the BPF class. These can then be used by tools that need to do their own work before attaching programs to a set of functions or tracepoints.
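A minimal sketch of using the now-public helpers; the regexes are illustrative and assume the Python 2-era string API. A tool can enumerate what a pattern would match before building or attaching anything.
```
from bcc import BPF

kfuncs = BPF.get_kprobe_functions("^vfs_.*")
tpoints = BPF.get_tracepoints("^block:.*")
print("pattern matches %d kernel functions and %d tracepoints" %
      (len(kfuncs), len(tpoints)))
```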
-
Sasha Goldshtein authored
-
Brendan Gregg authored
-