- 03 Feb, 2017 2 commits
-
-
https://github.com/iovisor/bcc/pull/936
Kenny Yu authored
- Remove the dependency on networkx. I did this by copying only the parts I needed from networkx and adapting them to my needs. These include: `DiGraph`, `strongly_connected_components`, and `simple_cycles`.
- Symbolize global and static mutexes. To do this, I shell out to a subprocess. This isn't very efficient, but it only happens at the end of the program if a deadlock is found, so it's not too bad.
- Add a `--verbose` mode to print graph statistics.
- Make the `--binary` flag optional. It is not needed by default; however, it is needed on kernels without this recent kernel patch (https://lkml.org/lkml/2017/1/13/585, submitted 2 weeks ago): we can't attach a uprobe to a binary that has `:` in its path name. Instead, we can create a symlink without `:` in the path and pass that to the `--binary` argument.
-
Kenny Yu authored
`deadlock_detector` is a new tool to detect potential deadlocks (lock order inversions) in a running process. The program attaches uprobes on `pthread_mutex_lock` and `pthread_mutex_unlock` to build a directed mutex wait graph, and then looks for a cycle in this graph. The graph has the following properties:
- Nodes in the graph represent mutexes.
- Edge (A, B) exists if there exists some thread T where lock(A) was called and lock(B) was called before unlock(A) was called.
If there is a cycle in this graph, it indicates a lock order inversion (potential deadlock). If the program finds a lock order inversion, it dumps the cycle of mutexes, dumps the stack traces where each mutex was acquired, and then exits. The output format is similar to ThreadSanitizer's (see example: https://github.com/google/sanitizers/wiki/ThreadSanitizerDeadlockDetector).
This program can only find potential deadlocks that occur while it is tracing the process. It cannot find deadlocks that may have occurred before it was attached to the process. If the traced process has many mutexes and threads, this program will add a very large overhead because every mutex lock/unlock and clone call will be traced. This tool is meant for debugging only, and you should run it only on programs where the slowdown is acceptable.
Note: This tool adds a dependency on `networkx` for the graph libraries (building a directed graph and cycle detection).
Note: This tool does not work for shared mutexes or recursive mutexes. For shared (read-write) mutexes, a deadlock requires a cycle in the wait graph where at least one of the mutexes in the cycle is acquired with exclusive (write) ownership. For recursive mutexes, lock() is called multiple times on the same mutex; however, there is no way to determine whether a mutex is recursive after it has been created. As a result, this tool will not find potential deadlocks that involve only one mutex.
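For illustration only, here is a minimal Python sketch of the wait-graph idea described above (class and method names are made up, not the tool's actual code): each thread tracks the mutexes it currently holds, an edge is added from every held mutex to each newly acquired one, and a cycle in the resulting directed graph indicates a potential lock order inversion.
```python
from collections import defaultdict

class WaitGraph(object):
    """Directed graph: edge (A, B) means some thread acquired B while holding A."""
    def __init__(self):
        self.edges = defaultdict(set)   # mutex -> set of mutexes locked while it was held
        self.held = defaultdict(list)   # thread id -> stack of currently held mutexes

    def on_lock(self, tid, mutex):
        for held_mutex in self.held[tid]:
            self.edges[held_mutex].add(mutex)
        self.held[tid].append(mutex)

    def on_unlock(self, tid, mutex):
        if mutex in self.held[tid]:
            self.held[tid].remove(mutex)

    def find_cycle(self):
        """Return one cycle of mutexes if present (potential deadlock), else None."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)
        def dfs(node, path):
            color[node] = GRAY
            path.append(node)
            for nxt in self.edges[node]:
                if color[nxt] == GRAY:              # back edge -> cycle found
                    return path[path.index(nxt):]
                if color[nxt] == WHITE:
                    cycle = dfs(nxt, path)
                    if cycle:
                        return cycle
            path.pop()
            color[node] = BLACK
            return None
        for node in list(self.edges):
            if color[node] == WHITE:
                cycle = dfs(node, [])
                if cycle:
                    return cycle
        return None

# Thread 1 locks A then B; thread 2 locks B then A -> lock order inversion.
g = WaitGraph()
g.on_lock(1, "A"); g.on_lock(1, "B"); g.on_unlock(1, "B"); g.on_unlock(1, "A")
g.on_lock(2, "B"); g.on_lock(2, "A")
print(g.find_cycle())   # ['A', 'B']
```
In the example at the bottom, thread 1 acquires A then B while thread 2 acquires B then A, so the sketch reports the cycle of mutexes A and B.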
-
- 01 Feb, 2017 7 commits
-
-
4ast authored
Support for __data_loc tracepoint fields
-
Sasha Goldshtein authored
-
Sasha Goldshtein authored
-
Sasha Goldshtein authored
`__data_loc` fields are dynamically sized by the kernel at runtime. The field data follows the tracepoint structure entry, and needs to be extracted in a special way. The `__data_loc` field itself is a 32-bit value that consists of two 16-bit parts: the high 16 bits are the length of the data, and the low 16 bits are the offset of the data from the beginning of the tracepoint structure. From a cursory look, there are >200 tracepoints in recent kernels that have this kind of field.
This patch fixes `tp_frontend_action.cc` to recognize and emit `__data_loc` fields correctly, as 32-bit opaque fields. It then introduces two helper macros:
- `TP_DATA_LOC_READ(dst, field)` reads from `args->field` by finding the right offset and length and emitting the `bpf_probe_read` required to fetch the data. This will only work with new kernels.
- `TP_DATA_LOC_READ_CONST(dst, field, length)` takes a user-specified length rather than finding it from `args->field`. This will work on older kernels, where the BPF verifier doesn't allow non-constant sizes to be passed to `bpf_probe_read`.
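As a hedged sketch of how these macros might be used from a bcc Python program (the choice of tracepoint and field here, sched:sched_process_exec and its `__data_loc` filename field, is an assumption made for illustration):
```python
#!/usr/bin/env python
# Minimal sketch: read a __data_loc field with TP_DATA_LOC_READ_CONST.
# The tracepoint/field choice (sched:sched_process_exec, filename) is an
# assumption for illustration; adjust to a __data_loc tracepoint on your kernel.
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(sched, sched_process_exec) {
    char fn[128];
    // Copies the dynamically sized data referenced by the __data_loc
    // 'filename' field (low 16 bits: offset, high 16 bits: length),
    // using a constant size so older verifiers accept the bpf_probe_read.
    TP_DATA_LOC_READ_CONST(fn, filename, sizeof(fn));
    bpf_trace_printk("exec: %s\n", fn);
    return 0;
}
"""

b = BPF(text=prog)
b.trace_print()
```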
-
4ast authored
Handling multiple concurrent probe users.
-
Derek authored
-
Derek authored
-
- 31 Jan, 2017 10 commits
-
-
4ast authored
powerpc: update the build triplet
-
Naveen N. Rao authored
The more commonly used triplet on ppc64le happens to be powerpc64le-unknown-linux-gnu. The existing one causes problems in certain build environments. Change this. While at it, also include support for building on big endian.
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
-
https://github.com/derek0883/bcc
Derek authored
-
Derek authored
Use a static buffer size in libbpf.c; for uprobes, set the buffer size to PATH_MAX.
-
Derek authored
-
Derek authored
Remove event_desc from the front end and handle it inside libbpf. The event naming pattern changed to $(eventname)_bcc_$(pid).
-
Derek authored
-
Derek authored
-
Derek authored
-
Derek authored
The event naming pattern changed to $(eventname)_bcc_$(pid). bpf_attach_probe now detects /sys/kernel/debug/tracing/instances: if it exists, a per-instance event is created; if that fails, it falls back to creating a global event instance, as before.
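A minimal Python sketch of the decision described above, for illustration only (the actual implementation is C code in libbpf.c, and the function name here is made up):
```python
import os

TRACING = "/sys/kernel/debug/tracing"

def plan_probe_event(eventname, pid, tracing_dir=TRACING):
    """Sketch of the decision described above: pick the event name and
    whether to try a per-instance event or go straight to a global one."""
    # Naming pattern: $(eventname)_bcc_$(pid)
    name = "%s_bcc_%d" % (eventname, pid)
    per_instance = os.path.isdir(os.path.join(tracing_dir, "instances"))
    # If per-instance creation later fails, the code falls back to a
    # global event instance, as before.
    return name, per_instance

print(plan_probe_event("p_do_sys_open", 1234))
```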
-
- 30 Jan, 2017 4 commits
-
-
4ast authored
argdist: Fix -p behavior to filter tgid and not pid
-
4ast authored
cpudist: remove unused Tracepoint import
-
Sasha Goldshtein authored
argdist remained one of the last holdouts to use the `-p` switch inconsistently with other tools, filtering for kernel pid (thread id from user space perspective) and not kernel tgid (process id from user space perspective). This is now fixed. Additionally, minor nits around generating pid filters were fixed, and a potential collision with user-provided argument names was fixed too (in general, script-generated arguments/locals should probably stick to reserved identifiers, such as `__whatever` rather than `whatever`).
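For context, a hedged sketch of what a tgid-based filter in generated BPF C text can look like (this is not argdist's actual generated code; the `__`-prefixed names follow the reserved-identifier suggestion above):
```python
def build_pid_filter(target_pid):
    """Return a BPF C snippet that filters on tgid (the user-space 'process id'),
    not on the kernel pid (the user-space 'thread id')."""
    return """
    u64 __pid_tgid = bpf_get_current_pid_tgid();
    u32 __tgid = __pid_tgid >> 32;      /* process id as seen from user space */
    u32 __pid  = (u32)__pid_tgid;       /* thread id as seen from user space */
    if (__tgid != %d) { return 0; }     /* -p now filters on tgid */
    """ % target_pid

print(build_pid_filter(1234))
```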
-
Sasha Goldshtein authored
-
- 29 Jan, 2017 2 commits
- 26 Jan, 2017 1 commit
-
-
Derek authored
-
- 24 Jan, 2017 2 commits
- 23 Jan, 2017 1 commit
-
-
Derek authored
The event naming pattern changed to $(eventname)_bcc_$(pid). bpf_attach_probe now detects /sys/kernel/debug/tracing/instances: if it exists, a per-instance event is created; if that fails, it falls back to creating a global event instance, as before.
-
- 22 Jan, 2017 1 commit
-
-
4ast authored
Fix python2/3 incompatible percpu helpers
-
- 21 Jan, 2017 2 commits
-
-
Brenden Blanco authored
Signed-off-by: Brenden Blanco <bblanco@gmail.com>
-
Brenden Blanco authored
The kernel uses the number of possible CPUs to size the leaf, not the number of online CPUs. Fix up the Python side appropriately. Update: use the num_possible_cpus() helper instead.
Signed-off-by: Brenden Blanco <bblanco@gmail.com>
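For illustration, a small Python sketch of counting possible CPUs from sysfs, which exposes the possible-CPU mask at /sys/devices/system/cpu/possible as ranges such as `0-7` or `0-3,8-11` (the helper name is made up, not necessarily bcc's):
```python
def num_possible_cpus(path="/sys/devices/system/cpu/possible"):
    """Count CPUs listed in a sysfs range string such as '0-7' or '0-3,8-11'."""
    with open(path) as f:
        ranges = f.read().strip()
    count = 0
    for part in ranges.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            count += int(hi) - int(lo) + 1
        else:
            count += 1
    return count

print(num_possible_cpus())
```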
-
- 20 Jan, 2017 3 commits
-
-
Brenden Blanco authored
The Python 3 versions of the percpu helpers (average, sum, etc.) were using a Python 2 builtin that has since moved to functools (reduce). Worse, the test case for percpu functionality was not enabled in the CMake file. Better to turn that on and make it work.
Signed-off-by: Brenden Blanco <bblanco@gmail.com>
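A minimal example of the Python 2/3-compatible pattern (in Python 3, reduce lives in functools); the per-CPU values below are made up for illustration:
```python
from functools import reduce  # builtin in Python 2, moved to functools in Python 3

# Per-CPU counters as returned by a PERCPU map lookup (illustrative values).
per_cpu_values = [12, 0, 7, 3]

total = reduce(lambda acc, v: acc + v, per_cpu_values, 0)
average = total / len(per_cpu_values)
print(total, average)
```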
-
4ast authored
[tools][memleak.py] add parameter for specifying object to load malloc/free from
-
Maria Kacik authored
-
- 17 Jan, 2017 5 commits
-
-
4ast authored
trace: Allow function signatures in uprobes and kprobes
-
4ast authored
trace, tplist, argdist: UDST probe miscellaneous fixes
-
Sasha Goldshtein authored
`trace` now allows uprobes and kprobes to have function signatures, which means function parameters can be named and typed, rather than relying on the positional arg1, arg2, etc. arguments. This also enables structure field access, which is impossible with the unnamed arguments due to rewriter limitations. The example requested by @brendangregg, which now works, produces the following output:
```
PID    TID    COMM         FUNC             -
777    785    automount    SyS_nanosleep    sleep for 500000000 ns
777    785    automount    SyS_nanosleep    sleep for 500000000 ns
777    785    automount    SyS_nanosleep    sleep for 500000000 ns
777    785    automount    SyS_nanosleep    sleep for 500000000 ns
^C
```
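For illustration, an invocation using a function signature that would produce output of this shape might look like the following (the exact probe spec and format string are assumptions, not taken from the commit):
```
# hypothetical example; the probe spec is an assumption, adjust for your kernel
trace 'p::SyS_nanosleep(struct timespec *ts) "sleep for %lld ns", ts->tv_nsec'
```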
-
Sasha Goldshtein authored
-
Sasha Goldshtein authored
`trace` would use the incorrect argument index for USDT probes when filtering. For example, `trace u:lib:tp (arg1 != 0) ...` would actually use the type of the 2nd argument, not the 1st, for the type of the filter variable in the generated program. This could cause compilation errors or subtle bugs where the data would be either extended or truncated to fit the wrong argument's type. Additionally, `trace` would use the pid (thread id, `-L`) filter with the `attach_uprobe` API, which expects a tgid (process id). As a result, incorrect filtering would happen.
-