Commit 66fb4d29 authored by Kenny Yu

tools: add tool to detect potential deadlocks in running programs

`deadlock_detector` is a new tool to detect potential deadlocks (lock order
inversions) in a running process. The program attaches uprobes on
`pthread_mutex_lock` and `pthread_mutex_unlock` to build a mutex wait directed
graph, and then looks for a cycle in this graph. This graph has the following
properties:

- Nodes in the graph represent mutexes.
- Edge (A, B) exists if there exists some thread T where lock(A) was called
  and lock(B) was called before unlock(A) was called.

If there is a cycle in this graph, this indicates that there is a lock order
inversion (potential deadlock). If the program finds a lock order inversion, the
program will dump the cycle of mutexes, dump the stack traces where each mutex
was acquired, and then exit.
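
To make the cycle check concrete, here is a minimal sketch of the same idea
using networkx (the `observations` list is a hypothetical trace; the real tool
builds the graph from BPF map contents):

```python
import networkx

# Hypothetical lock-order observations: thread T1 took B while holding A,
# and thread T2 took A while holding B.
observations = [('T1', 'A', 'B'), ('T2', 'B', 'A')]

graph = networkx.DiGraph()
for _thread, held, acquired in observations:
    graph.add_edge(held, acquired)  # edge (held, acquired)

cycles = list(networkx.simple_cycles(graph))
if cycles:
    cycle = cycles[0] + [cycles[0][0]]  # repeat first node to close the loop
    print('Potential deadlock: %s' % ' => '.join(cycle))
```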

The output format is similar to that of ThreadSanitizer (see example:
https://github.com/google/sanitizers/wiki/ThreadSanitizerDeadlockDetector).

This program can only find potential deadlocks that occur while the program is
tracing the process. It cannot find deadlocks that may have occurred before the
program was attached to the process.

If the traced process has many mutexes and threads, this program will add a
very large overhead because every mutex lock/unlock and clone call will be
traced. This tool is meant for debugging only; run it only on programs where
the slowdown is acceptable.

Note: This tool adds a dependency on `networkx` for the graph libraries
(building a directed graph and cycle detection).

Note: This tool does not work for shared mutexes or recursive mutexes.

For shared (read-write) mutexes, a deadlock requires a cycle in the wait
graph where at least one of the mutexes in the cycle is acquiring exclusive
(write) ownership.

For recursive mutexes, lock() is called multiple times on the same mutex.
However, there is no way to determine if a mutex is a recursive mutex
after the mutex has been created. As a result, this tool will not find
potential deadlocks that involve only one mutex.
parent 4a57f4dd
Q: while running 'make test' I'm seeing:
'ImportError: No module named pyroute2' or 'ImportError: No module named networkx'
A: Install pyroute2 and networkx:
git clone https://github.com/svinota/pyroute2.git
cd pyroute2; sudo make install
git clone https://github.com/networkx/networkx.git
cd networkx; sudo make install
Q: hello_world.py fails with:
OSError: libbcc.so: cannot open shared object file: No such file or directory
@@ -99,6 +99,13 @@ cd pyroute2; sudo make install
sudo python /usr/share/bcc/examples/simple_tc.py
```
(Optional) Install networkx for additional deadlock detector features
```bash
git clone https://github.com/networkx/networkx.git
cd networkx; sudo make install
sudo python /usr/share/bcc/tools/deadlock_detector.py
```
## Fedora - Binary
Install a 4.2+ kernel from
@@ -200,6 +207,7 @@ sudo dnf install -y luajit luajit-devel # for Lua support
sudo dnf install -y \
http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo pip install networkx
```
### Install binary clang
@@ -89,6 +89,7 @@ Examples:
- tools/[cpuunclaimed](tools/cpuunclaimed.py): Sample CPU run queues and calculate unclaimed idle CPU. [Examples](tools/cpuunclaimed_example.txt)
- tools/[dcsnoop](tools/dcsnoop.py): Trace directory entry cache (dcache) lookups. [Examples](tools/dcsnoop_example.txt).
- tools/[dcstat](tools/dcstat.py): Directory entry cache (dcache) stats. [Examples](tools/dcstat_example.txt).
- tools/[deadlock_detector.py](tools/deadlock_detector.py): Detect potential deadlocks on a running process. [Examples](tools/deadlock_detector_example.txt)
- tools/[execsnoop](tools/execsnoop.py): Trace new processes via exec() syscalls. [Examples](tools/execsnoop_example.txt).
- tools/[ext4dist](tools/ext4dist.py): Summarize ext4 operation latency distribution as a histogram. [Examples](tools/ext4dist_example.txt).
- tools/[ext4slower](tools/ext4slower.py): Trace slow ext4 operations. [Examples](tools/ext4slower_example.txt).
@@ -3,7 +3,7 @@ Maintainer: Brenden Blanco <bblanco@plumgrid.com>
Section: misc
Priority: optional
Standards-Version: 3.9.5
Build-Depends: debhelper (>= 9), cmake, libllvm3.7 | libllvm3.8, llvm-3.7-dev | llvm-3.8-dev, libclang-3.7-dev | libclang-3.8-dev, libelf-dev, bison, flex, libedit-dev, clang-format | clang-format-3.7, python-netaddr, python-networkx, python-pyroute2, luajit, libluajit-5.1-dev
Homepage: https://github.com/iovisor/bcc
Package: libbcc
.TH deadlock_detector 8 "2017-02-01" "USER COMMANDS"
.SH NAME
deadlock_detector \- Find potential deadlocks (lock order inversions)
in a running program.
.SH SYNOPSIS
.B deadlock_detector [\-h] [\--dump-graph FILE]
[\--lock-symbols LOCK_SYMBOLS] [\--unlock-symbols UNLOCK_SYMBOLS] binary pid
.SH DESCRIPTION
deadlock_detector detects potential deadlocks on a running process. The program
attaches uprobes on `pthread_mutex_lock` and `pthread_mutex_unlock` by default
to build a mutex wait directed graph, and then looks for a cycle in this graph.
This graph has the following properties:
- Nodes in the graph represent mutexes.
- Edge (A, B) exists if there exists some thread T where lock(A) was called
and lock(B) was called before unlock(A) was called.
If there is a cycle in this graph, this indicates that there is a lock order
inversion (potential deadlock). If the program finds a lock order inversion, the
program will dump the cycle of mutexes, dump the stack traces where each mutex
was acquired, and then exit.
This program can only find potential deadlocks that occur while the program is
tracing the process. It cannot find deadlocks that may have occurred before the
program was attached to the process.
This tool does not work for shared mutexes or recursive mutexes.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF, bcc, and networkx
.SH OPTIONS
.TP
\--dump-graph DUMP_GRAPH
If set, this will dump the mutex graph to the specified file.
.TP
\--lock-symbols LOCK_SYMBOLS
Comma-separated list of lock symbols to trace. Default is pthread_mutex_lock
.TP
\--unlock-symbols UNLOCK_SYMBOLS
Comma-separated list of unlock symbols to trace. Default is pthread_mutex_unlock
.TP
binary
Absolute path to binary
.TP
pid
Pid to trace
.SH EXAMPLES
.TP
Find potential deadlocks in a process:
#
.B deadlock_detector /path/to/binary $(pidof binary)
.TP
Find potential deadlocks in a process and dump the mutex wait graph to a file:
#
.B deadlock_detector /path/to/binary $(pidof binary) --dump-graph graph.json
.TP
Find potential deadlocks in a process with custom mutexes:
#
.B deadlock_detector /path/to/binary $(pidof binary)
--lock-symbols custom_mutex1_lock,custom_mutex2_lock
--unlock-symbols custom_mutex1_unlock,custom_mutex2_unlock
.SH OUTPUT
This program does not output any fields. Rather, it will keep running until
it finds a potential deadlock, or the user hits Ctrl-C. If the program finds
a potential deadlock, it will output the stack traces and lock order inversion
in the following format and exit:
.TP
Potential Deadlock Detected!
.TP
Cycle in lock order graph: Mutex M0 => Mutex M1 => Mutex M0
.TP
Mutex M1 acquired here while holding Mutex M0 in Thread T:
.B [stack trace]
.TP
Mutex M0 previously acquired by the same Thread T here:
.B [stack trace]
.TP
Mutex M0 acquired here while holding Mutex M1 in Thread S:
.B [stack trace]
.TP
Mutex M1 previously acquired by the same Thread S here:
.B [stack trace]
.TP
Thread T created by Thread R here:
.B [stack trace]
.TP
Thread S created by Thread Q here:
.B [stack trace]
.SH OVERHEAD
This traces all mutex lock and unlock events and all thread creation events
on the traced process. The overhead of this can be high if the process has many
threads and mutexes. You should only run this on a process where the slowdown
is acceptable.
.SH SOURCE
This is from bcc.
.IP
https://github.com/iovisor/bcc
.PP
Also look in the bcc distribution for a companion _examples.txt file containing
example usage, output, and commentary for this tool.
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Kenny Yu
/*
* deadlock_detector.c Detects potential deadlocks in a running process.
* For Linux, uses BCC, eBPF. See .py file.
*
* Copyright 2017 Facebook, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 1-Feb-2017 Kenny Yu Created this.
*/
#include <linux/sched.h>
#include <uapi/linux/ptrace.h>
// Maximum number of mutexes a single thread can hold at once.
// If the number is too big, the unrolled loops will cause the stack
// to be too big, and the BPF verifier will fail.
#define MAX_HELD_MUTEXES 16
// Info about held mutexes. `mutex` will be 0 if not held.
struct held_mutex_t {
u64 mutex;
u64 stack_id;
};
// List of mutexes that a thread is holding. Whenever we loop over this array,
// we need to force the compiler to unroll the loop; otherwise, the BPF verifier
// will fail because the loop creates a backward edge.
struct thread_to_held_mutex_leaf_t {
struct held_mutex_t held_mutexes[MAX_HELD_MUTEXES];
};
// Map of thread ID -> array of (mutex addresses, stack id)
BPF_TABLE("hash", u32, struct thread_to_held_mutex_leaf_t,
thread_to_held_mutexes, 2097152);
// Key type for edges. Represents an edge from mutex1 => mutex2.
struct edges_key_t {
u64 mutex1;
u64 mutex2;
};
// Leaf type for edges. Holds information about where each mutex was acquired.
struct edges_leaf_t {
u64 mutex1_stack_id;
u64 mutex2_stack_id;
u32 thread_pid;
char comm[TASK_COMM_LEN];
};
// Represents all edges currently in the mutex wait graph.
BPF_TABLE("hash", struct edges_key_t, struct edges_leaf_t, edges, 2097152);
// Info about parent thread when a child thread is created.
struct thread_created_leaf_t {
u64 stack_id;
u32 parent_pid;
char comm[TASK_COMM_LEN];
};
// Map of child thread pid -> info about parent thread.
BPF_TABLE("hash", u32, struct thread_created_leaf_t, thread_to_parent, 10240);
// Stack traces when threads are created and when mutexes are locked/unlocked.
BPF_STACK_TRACE(stack_traces, 655360);
// The first argument to the user space function we are tracing
// is a pointer to the mutex M held by thread T.
//
// For all mutexes N held by mutexes_held[T]
// add edge N => M (held by T)
// mutexes_held[T].add(M)
int trace_mutex_acquire(struct pt_regs *ctx, void *mutex_addr) {
  // The upper 32 bits are the process ID; the lower 32 bits are the thread ID.
u32 pid = bpf_get_current_pid_tgid();
u64 mutex = (u64)mutex_addr;
struct thread_to_held_mutex_leaf_t empty_leaf = {};
struct thread_to_held_mutex_leaf_t *leaf =
thread_to_held_mutexes.lookup_or_init(&pid, &empty_leaf);
if (!leaf) {
bpf_trace_printk(
"could not add thread_to_held_mutex key, thread: %d, mutex: %p\n", pid,
mutex);
return 1; // Could not insert, no more memory
}
// Recursive mutexes lock the same mutex multiple times. We cannot tell if
// the mutex is recursive after the mutex is already created. To avoid noisy
// reports, disallow self edges. Do one pass to check if we are already
// holding the mutex, and if we are, do nothing.
#pragma unroll
for (int i = 0; i < MAX_HELD_MUTEXES; ++i) {
if (leaf->held_mutexes[i].mutex == mutex) {
return 1; // Disallow self edges
}
}
u64 stack_id =
stack_traces.get_stackid(ctx, BPF_F_USER_STACK | BPF_F_REUSE_STACKID);
int added_mutex = 0;
#pragma unroll
for (int i = 0; i < MAX_HELD_MUTEXES; ++i) {
// If this is a free slot, see if we can insert.
if (!leaf->held_mutexes[i].mutex) {
if (!added_mutex) {
leaf->held_mutexes[i].mutex = mutex;
leaf->held_mutexes[i].stack_id = stack_id;
added_mutex = 1;
}
continue; // Nothing to do for a free slot
}
// Add edges from held mutex => current mutex
struct edges_key_t edge_key = {};
edge_key.mutex1 = leaf->held_mutexes[i].mutex;
edge_key.mutex2 = mutex;
struct edges_leaf_t edge_leaf = {};
edge_leaf.mutex1_stack_id = leaf->held_mutexes[i].stack_id;
edge_leaf.mutex2_stack_id = stack_id;
edge_leaf.thread_pid = pid;
bpf_get_current_comm(&edge_leaf.comm, sizeof(edge_leaf.comm));
// Returns non-zero on error
int result = edges.update(&edge_key, &edge_leaf);
if (result) {
bpf_trace_printk("could not add edge key %p, %p, error: %d\n",
edge_key.mutex1, edge_key.mutex2, result);
continue; // Could not insert, no more memory
}
}
// There were no free slots for this mutex.
if (!added_mutex) {
bpf_trace_printk("could not add mutex %p, added_mutex: %d\n", mutex,
added_mutex);
return 1;
}
return 0;
}
// The first argument to the user space function we are tracing
// is a pointer to the mutex M held by thread T.
//
// mutexes_held[T].remove(M)
int trace_mutex_release(struct pt_regs *ctx, void *mutex_addr) {
  // The upper 32 bits are the process ID; the lower 32 bits are the thread ID.
u32 pid = bpf_get_current_pid_tgid();
u64 mutex = (u64)mutex_addr;
struct thread_to_held_mutex_leaf_t *leaf =
thread_to_held_mutexes.lookup(&pid);
if (!leaf) {
// If the leaf does not exist for the pid, then it means we either missed
// the acquire event, or we had no more memory and could not add it.
bpf_trace_printk(
"could not find thread_to_held_mutex, thread: %d, mutex: %p\n", pid,
mutex);
return 1;
}
  // For older kernels without "bpf: allow access into map value arrays"
// (https://lkml.org/lkml/2016/8/30/287) the bpf verifier will fail with an
// invalid memory access on `leaf->held_mutexes[i]` below. On newer kernels,
// we can avoid making this extra copy in `value` and use `leaf` directly.
struct thread_to_held_mutex_leaf_t value = {};
bpf_probe_read(&value, sizeof(struct thread_to_held_mutex_leaf_t), leaf);
#pragma unroll
for (int i = 0; i < MAX_HELD_MUTEXES; ++i) {
// Find the current mutex (if it exists), and clear it.
// Note: Can't use `leaf->` in this if condition, see comment above.
if (value.held_mutexes[i].mutex == mutex) {
leaf->held_mutexes[i].mutex = 0;
leaf->held_mutexes[i].stack_id = 0;
}
}
return 0;
}
// Trace return from clone() syscall in the child thread (return value > 0).
int trace_clone(struct pt_regs *ctx, unsigned long flags, void *child_stack,
void *ptid, void *ctid, struct pt_regs *regs) {
u32 child_pid = PT_REGS_RC(ctx);
if (child_pid <= 0) {
return 1;
}
struct thread_created_leaf_t thread_created_leaf = {};
thread_created_leaf.parent_pid = bpf_get_current_pid_tgid();
thread_created_leaf.stack_id =
stack_traces.get_stackid(ctx, BPF_F_USER_STACK | BPF_F_REUSE_STACKID);
bpf_get_current_comm(&thread_created_leaf.comm,
sizeof(thread_created_leaf.comm));
struct thread_created_leaf_t *insert_result =
thread_to_parent.lookup_or_init(&child_pid, &thread_created_leaf);
if (!insert_result) {
bpf_trace_printk(
"could not add thread_created_key, child: %d, parent: %d\n", child_pid,
thread_created_leaf.parent_pid);
return 1; // Could not insert, no more memory
}
return 0;
}
#!/usr/bin/env python
#
# deadlock_detector Detects potential deadlocks (lock order inversions)
# on a running process. For Linux, uses BCC, eBPF.
#
# USAGE: deadlock_detector.py [-h] [--dump-graph DUMP_GRAPH]
# [--lock-symbols LOCK_SYMBOLS]
# [--unlock-symbols UNLOCK_SYMBOLS]
# binary pid
#
# This traces pthread mutex lock and unlock calls to build a directed graph
# representing the mutex wait graph:
#
# - Nodes in the graph represent mutexes.
# - Edge (A, B) exists if there exists some thread T where lock(A) was called
# and lock(B) was called before unlock(A) was called.
#
# If the program finds a potential lock order inversion, the program will dump
# the cycle of mutexes and the stack traces where each mutex was acquired, and
# then exit.
#
# Note: This tool does not work for shared mutexes or recursive mutexes.
#
# For shared (read-write) mutexes, a deadlock requires a cycle in the wait
# graph where at least one of the mutexes in the cycle is acquiring exclusive
# (write) ownership.
#
# For recursive mutexes, lock() is called multiple times on the same mutex.
# However, there is no way to determine if a mutex is a recursive mutex
# after the mutex has been created. As a result, this tool will not find
# potential deadlocks that involve only one mutex.
#
# Copyright 2017 Facebook, Inc.
# Licensed under the Apache License, Version 2.0 (the "License")
#
# 01-Feb-2017 Kenny Yu Created this.
from __future__ import (
absolute_import, division, unicode_literals, print_function
)
from bcc import BPF
import argparse
import json
import networkx
from networkx.readwrite import json_graph
import sys
import time
def find_cycle(graph):
'''
Looks for a cycle in the graph. If found, returns the first cycle.
If nodes a1, a2, ..., an are in a cycle, then this returns:
[(a1,a2), (a2,a3), ... (an-1,an), (an, a1)]
Otherwise returns an empty list.
'''
cycles = list(networkx.simple_cycles(graph))
if cycles:
nodes = cycles[0]
nodes.append(nodes[0])
edges = []
prev = nodes[0]
for node in nodes[1:]:
edges.append((prev, node))
prev = node
return edges
else:
return []
def find_cycle_and_print(graph, thread_to_parent, print_stack_trace_fn):
'''
Detects if there is a cycle in the mutex graph. If there is, dump
the following information and return True. Otherwise returns False.
Potential Deadlock Detected!
Cycle in lock order graph: M0 => M1 => M2 => M0
for (m, n) in cycle:
Mutex n acquired here while holding Mutex m in thread T:
[ stack trace ]
Mutex m previously acquired by thread T here:
[ stack trace ]
for T in all threads:
Thread T was created here:
[ stack trace ]
'''
edges = find_cycle(graph)
if not edges:
return False
# List of mutexes in the cycle, first and last repeated
nodes_in_order = []
# Map mutex address -> readable alias
node_addr_to_name = {}
for counter, (m, n) in enumerate(edges):
nodes_in_order.append(m)
node_addr_to_name[m] = 'Mutex M%d (0x%016x)' % (counter, m)
nodes_in_order.append(nodes_in_order[0])
print('----------------\nPotential Deadlock Detected!\n')
print(
'Cycle in lock order graph: %s\n' %
(' => '.join([node_addr_to_name[n] for n in nodes_in_order]))
)
# Set of threads involved in the lock inversion
thread_pids = set()
# For each edge in the cycle, print where the two mutexes were held
for (m, n) in edges:
thread_pid = graph[m][n]['thread_pid']
thread_comm = graph[m][n]['thread_comm']
first_mutex_stack_id = graph[m][n]['first_mutex_stack_id']
second_mutex_stack_id = graph[m][n]['second_mutex_stack_id']
thread_pids.add(thread_pid)
print(
'%s acquired here while holding %s in Thread %d (%s):' % (
node_addr_to_name[n], node_addr_to_name[m], thread_pid,
thread_comm
)
)
print_stack_trace_fn(second_mutex_stack_id)
print('')
print(
'%s previously acquired by the same Thread %d (%s) here:' %
(node_addr_to_name[m], thread_pid, thread_comm)
)
print_stack_trace_fn(first_mutex_stack_id)
print('')
# Print where the threads were created, if available
for thread_pid in thread_pids:
parent_pid, stack_id, parent_comm = thread_to_parent.get(
thread_pid, (None, None, None)
)
if parent_pid:
print(
'Thread %d created by Thread %d (%s) here: ' %
(thread_pid, parent_pid, parent_comm)
)
print_stack_trace_fn(stack_id)
else:
print(
'Could not find stack trace where Thread %d was created' %
thread_pid
)
print('')
return True
def strlist(s):
'''
Given a comma-separated string, returns a list of substrings
'''
return s.strip().split(',')
def main():
parser = argparse.ArgumentParser(
description=(
'Detect potential deadlocks (lock inversions) in a running binary.'
' Must be run as root.'
)
)
parser.add_argument('binary', type=str, help='Absolute path to binary')
parser.add_argument('pid', type=int, help='Pid to trace')
parser.add_argument(
'--dump-graph',
type=str,
default='',
help='If set, this will dump the mutex graph to the specified file.',
)
parser.add_argument(
'--lock-symbols',
type=strlist,
default=['pthread_mutex_lock'],
help='Comma-separated list of lock symbols to trace. Default is '
'pthread_mutex_lock',
)
parser.add_argument(
'--unlock-symbols',
type=strlist,
default=['pthread_mutex_unlock'],
help='Comma-separated list of unlock symbols to trace. Default is '
'pthread_mutex_unlock',
)
args = parser.parse_args()
bpf = BPF(src_file='deadlock_detector.c')
# Trace where threads are created
bpf.attach_kretprobe(
event='sys_clone',
fn_name='trace_clone',
pid=args.pid
)
    # We must trace unlock first; otherwise, in the window after the probe on
    # lock() is attached but before the probe on unlock() is attached, a thread
    # can acquire and release multiple mutexes, and those release events will
    # not be traced, resulting in noisy reports.
for symbol in args.unlock_symbols:
try:
bpf.attach_uprobe(
name=args.binary,
sym=symbol,
fn_name='trace_mutex_release',
pid=args.pid,
)
except Exception as e:
print(e)
sys.exit(1)
for symbol in args.lock_symbols:
try:
bpf.attach_uprobe(
name=args.binary,
sym=symbol,
fn_name='trace_mutex_acquire',
pid=args.pid,
)
except Exception as e:
print(e)
sys.exit(1)
def print_stack_trace(stack_id):
'''
Closure that prints the symbolized stack trace. Captures: `bpf`, `args`
'''
for addr in bpf.get_table('stack_traces').walk(stack_id):
line = bpf.sym(addr, args.pid)
print('@ %016x %s' % (addr, line))
print('Tracing... Hit Ctrl-C to end.')
while True:
try:
# Mutex wait directed graph. Nodes are mutexes. Edge (A,B) exists
# if there exists some thread T where lock(A) was called and
# lock(B) was called before unlock(A) was called.
graph = networkx.DiGraph()
for key, leaf in bpf.get_table('edges').items():
graph.add_edge(
key.mutex1,
key.mutex2,
thread_pid=leaf.thread_pid,
thread_comm=leaf.comm.decode('utf-8'),
first_mutex_stack_id=leaf.mutex1_stack_id,
second_mutex_stack_id=leaf.mutex2_stack_id,
)
if args.dump_graph:
with open(args.dump_graph, 'w') as f:
data = json_graph.node_link_data(graph)
f.write(json.dumps(data, indent=2))
# Map of child thread pid -> parent info
thread_to_parent_dict = {
child.value: (parent.parent_pid, parent.stack_id, parent.comm)
for child, parent in bpf.get_table('thread_to_parent').items()
}
start = time.time()
has_cycle = find_cycle_and_print(
graph, thread_to_parent_dict, print_stack_trace
)
print(
'Nodes: %d, Edges: %d, Looking for cycle took %f seconds' %
(len(graph.nodes()), len(graph.edges()), (time.time() - start))
)
if has_cycle:
sys.exit(1)
time.sleep(1)
except KeyboardInterrupt:
break
if __name__ == '__main__':
main()
Demonstrations of deadlock_detector.
This program detects potential deadlocks on a running process. The program
attaches uprobes on `pthread_mutex_lock` and `pthread_mutex_unlock` to build
a mutex wait directed graph, and then looks for a cycle in this graph. This
graph has the following properties:
- Nodes in the graph represent mutexes.
- Edge (A, B) exists if there exists some thread T where lock(A) was called
and lock(B) was called before unlock(A) was called.
If there is a cycle in this graph, this indicates that there is a lock order
inversion (potential deadlock). If the program finds a lock order inversion, the
program will dump the cycle of mutexes, dump the stack traces where each mutex
was acquired, and then exit.
This program can only find potential deadlocks that occur while the program is
tracing the process. It cannot find deadlocks that may have occurred before the
program was attached to the process.
Note: This tool does not work for shared mutexes or recursive mutexes.
For shared (read-write) mutexes, a deadlock requires a cycle in the wait
graph where at least one of the mutexes in the cycle is acquiring exclusive
(write) ownership.
For recursive mutexes, lock() is called multiple times on the same mutex.
However, there is no way to determine if a mutex is a recursive mutex
after the mutex has been created. As a result, this tool will not find
potential deadlocks that involve only one mutex.
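
As an illustration of the single-mutex limitation, the following hypothetical
snippet self-deadlocks on one non-recursive lock; because only one mutex is
involved, the wait graph never gains an edge between two distinct mutexes, so
no cycle is reported (Python's threading.Lock stands in here for a
non-recursive pthread mutex):

```python
import threading

m = threading.Lock()  # non-reentrant, like a non-recursive pthread mutex
m.acquire()
m.acquire()  # blocks forever, but the wait graph never gains a cycle
```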
# ./deadlock_detector.py /path/to/program/with/lockinversion $(pidof lockinversion)
Tracing... Hit Ctrl-C to end.
Nodes: 0, Edges: 0, Looking for cycle took 0.000056 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000062 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000070 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000071 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000066 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000066 seconds
----------------
Potential Deadlock Detected!
Cycle in lock order graph: Mutex M0 (0x00007ffccd7ab140) => Mutex M1 (0x00007ffccd7ab0b0) => Mutex M2 (0x00007ffccd7ab0e0) => Mutex M3 (0x00007ffccd7ab110) => Mutex M0 (0x00007ffccd7ab140)
Mutex M1 (0x00007ffccd7ab0b0) acquired here while holding Mutex M0 (0x00007ffccd7ab140) in Thread 3120373 (lockinversion):
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402ecc main::{lambda()#4}::operator()() const
@ 0000000000406cc4 void std::_Bind_simple<main::{lambda()#4} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406aab std::_Bind_simple<main::{lambda()#4} ()>::operator()()
@ 000000000040689a std::thread::_Impl<std::_Bind_simple<main::{lambda()#4} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M0 (0x00007ffccd7ab140) previously acquired by the same Thread 3120373 (lockinversion) here:
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402eb6 main::{lambda()#4}::operator()() const
@ 0000000000406cc4 void std::_Bind_simple<main::{lambda()#4} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406aab std::_Bind_simple<main::{lambda()#4} ()>::operator()()
@ 000000000040689a std::thread::_Impl<std::_Bind_simple<main::{lambda()#4} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M2 (0x00007ffccd7ab0e0) acquired here while holding Mutex M1 (0x00007ffccd7ab0b0) in Thread 3120370 (lockinversion):
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402d6a main::{lambda()#1}::operator()() const
@ 0000000000406dea void std::_Bind_simple<main::{lambda()#1} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406b17 std::_Bind_simple<main::{lambda()#1} ()>::operator()()
@ 00000000004068f4 std::thread::_Impl<std::_Bind_simple<main::{lambda()#1} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M1 (0x00007ffccd7ab0b0) previously acquired by the same Thread 3120370 (lockinversion) here:
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402d53 main::{lambda()#1}::operator()() const
@ 0000000000406dea void std::_Bind_simple<main::{lambda()#1} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406b17 std::_Bind_simple<main::{lambda()#1} ()>::operator()()
@ 00000000004068f4 std::thread::_Impl<std::_Bind_simple<main::{lambda()#1} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M3 (0x00007ffccd7ab110) acquired here while holding Mutex M2 (0x00007ffccd7ab0e0) in Thread 3120371 (lockinversion):
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402de0 main::{lambda()#2}::operator()() const
@ 0000000000406d88 void std::_Bind_simple<main::{lambda()#2} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406af3 std::_Bind_simple<main::{lambda()#2} ()>::operator()()
@ 00000000004068d6 std::thread::_Impl<std::_Bind_simple<main::{lambda()#2} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M2 (0x00007ffccd7ab0e0) previously acquired by the same Thread 3120371 (lockinversion) here:
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402dc9 main::{lambda()#2}::operator()() const
@ 0000000000406d88 void std::_Bind_simple<main::{lambda()#2} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406af3 std::_Bind_simple<main::{lambda()#2} ()>::operator()()
@ 00000000004068d6 std::thread::_Impl<std::_Bind_simple<main::{lambda()#2} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M0 (0x00007ffccd7ab140) acquired here while holding Mutex M3 (0x00007ffccd7ab110) in Thread 3120372 (lockinversion):
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402e56 main::{lambda()#3}::operator()() const
@ 0000000000406d26 void std::_Bind_simple<main::{lambda()#3} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406acf std::_Bind_simple<main::{lambda()#3} ()>::operator()()
@ 00000000004068b8 std::thread::_Impl<std::_Bind_simple<main::{lambda()#3} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Mutex M3 (0x00007ffccd7ab110) previously acquired by the same Thread 3120372 (lockinversion) here:
@ 00000000004024d0 [unknown]
@ 0000000000406f4e std::mutex::lock()
@ 0000000000407250 std::lock_guard<std::mutex>::lock_guard(std::mutex&)
@ 0000000000402e3f main::{lambda()#3}::operator()() const
@ 0000000000406d26 void std::_Bind_simple<main::{lambda()#3} ()>::_M_invoke<>(std::_Index_tuple<>)
@ 0000000000406acf std::_Bind_simple<main::{lambda()#3} ()>::operator()()
@ 00000000004068b8 std::thread::_Impl<std::_Bind_simple<main::{lambda()#3} ()> >::_M_run()
@ 00007f9f9791f4e1 execute_native_thread_routine
@ 00007f9f9809e7f1 start_thread
@ 00007f9f9736046d __clone
Thread 3120370 created by Thread 3113530 (b'lockinversion') here:
@ 00007f9f97360431 __clone
@ 00007f9f9809eef5 pthread_create
@ 00007f9f97921440 std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>)
@ 00000000004033e0 std::thread::thread<main::{lambda()#1}>(main::{lambda()#1}&&)
@ 0000000000403167 main
@ 00007f9f972730f6 __libc_start_main
@ 0000000000402ad8 [unknown]
Thread 3120371 created by Thread 3113530 (b'lockinversion') here:
@ 00007f9f97360431 __clone
@ 00007f9f9809eef5 pthread_create
@ 00007f9f97921440 std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>)
@ 00000000004034e6 std::thread::thread<main::{lambda()#2}>(main::{lambda()#2}&&)
@ 000000000040319f main
@ 00007f9f972730f6 __libc_start_main
@ 0000000000402ad8 [unknown]
Thread 3120372 created by Thread 3113530 (b'lockinversion') here:
@ 00007f9f97360431 __clone
@ 00007f9f9809eef5 pthread_create
@ 00007f9f97921440 std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>)
@ 00000000004035ec std::thread::thread<main::{lambda()#3}>(main::{lambda()#3}&&)
@ 00000000004031da main
@ 00007f9f972730f6 __libc_start_main
@ 0000000000402ad8 [unknown]
Thread 3120373 created by Thread 3113530 (b'lockinversion') here:
@ 00007f9f97360431 __clone
@ 00007f9f9809eef5 pthread_create
@ 00007f9f97921440 std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>)
@ 00000000004036f2 std::thread::thread<main::{lambda()#4}>(main::{lambda()#4}&&)
@ 0000000000403215 main
@ 00007f9f972730f6 __libc_start_main
@ 0000000000402ad8 [unknown]
Nodes: 6, Edges: 5, Looking for cycle took 0.009499 seconds
This is output from a process that has a potential deadlock involving 4 mutexes
and 4 threads:
- Thread 3120373 acquired M1 while holding M0 (edge M0 -> M1)
- Thread 3120370 acquired M2 while holding M1 (edge M1 -> M2)
- Thread 3120371 acquired M3 while holding M2 (edge M2 -> M3)
- Thread 3120372 acquired M0 while holding M3 (edge M3 -> M0)
This is the C++ program that generated the output above:
```c++
#include <sys/types.h>
#include <unistd.h>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
int main(void) {
std::mutex m1;
std::mutex m2;
std::mutex m3;
std::mutex m4;
std::cout << "&m1: " << (void*)&m1 << std::endl;
std::cout << "&m2: " << (void*)&m2 << std::endl;
std::cout << "&m3: " << (void*)&m3 << std::endl;
std::cout << "&m4: " << (void*)&m4 << std::endl;
std::cout << "pid: " << getpid() << std::endl;
std::cout << "sleeping for a bit to allow trace to attach..." << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(10));
std::cout << "starting program..." << std::endl;
auto t1 = std::thread([&m1, &m2] {
std::lock_guard<std::mutex> g1(m1);
std::lock_guard<std::mutex> g2(m2);
});
t1.join();
auto t2 = std::thread([&m2, &m3] {
std::lock_guard<std::mutex> g2(m2);
std::lock_guard<std::mutex> g3(m3);
});
t2.join();
auto t3 = std::thread([&m3, &m4] {
std::lock_guard<std::mutex> g3(m3);
std::lock_guard<std::mutex> g4(m4);
});
t3.join();
auto t4 = std::thread([&m1, &m4] {
std::lock_guard<std::mutex> g4(m4);
std::lock_guard<std::mutex> g1(m1);
});
t4.join();
std::cout << "sleeping to allow trace to collect data..." << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(5));
std::cout << "done!" << std::endl;
}
```
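To reproduce this locally, the program above can be built with something like
`g++ -std=c++11 -pthread lockinversion.cpp -o lockinversion` (the flags and
file name here are assumptions, not part of the tool).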
Note that an actual deadlock did not occur here; however, this lock ordering
makes a deadlock possible, and that is a hint to the programmer to reconsider
the lock ordering.
# ./deadlock_detector.py /path/to/program $(pidof program) --dump-graph graph.json
Tracing... Hit Ctrl-C to end.
Nodes: 0, Edges: 0, Looking for cycle took 0.000062 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000066 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000065 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000053 seconds
Nodes: 102, Edges: 4936, Looking for cycle took 0.155751 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.141393 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.119585 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.118088 seconds
^C
If the program does not find a deadlock, it will keep running until you hit
Ctrl-C. It will also dump statistics about the number of nodes and edges in
the mutex wait graph. If you want to serialize the graph to analyze it later,
you can pass the `--dump-graph FILE` flag, and the program will serialize
the graph in JSON format.
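
As a sketch of offline analysis (assuming the graph was dumped with
`--dump-graph graph.json` as above), the serialized graph can be loaded back
with networkx:

```python
import json
from networkx.readwrite import json_graph

# Rebuild the mutex wait graph that deadlock_detector.py serialized.
with open('graph.json') as f:
    graph = json_graph.node_link_graph(json.load(f))
print('%d nodes, %d edges' % (graph.number_of_nodes(), graph.number_of_edges()))
```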
# ./deadlock_detector.py /path/to/program $(pidof program) --lock-symbols custom_mutex1_lock,custom_mutex2_lock --unlock-symbols custom_mutex1_unlock,custom_mutex2_unlock
Tracing... Hit Ctrl-C to end.
Nodes: 0, Edges: 0, Looking for cycle took 0.000062 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000066 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000065 seconds
Nodes: 0, Edges: 0, Looking for cycle took 0.000053 seconds
Nodes: 102, Edges: 4936, Looking for cycle took 0.155751 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.141393 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.119585 seconds
Nodes: 102, Edges: 4951, Looking for cycle took 0.118088 seconds
^C
If your program is using custom mutexes and not pthread mutexes, you can use
the `--lock-symbols` and `--unlock-symbols` flags to specify different mutex
symbols to trace. The flags take a comma-separated string of symbol names.
Note that if the symbols are inlined in the binary, then this program can result
in false positives.
USAGE message:
# ./deadlock_detector.py -h
usage: deadlock_detector.py [-h] [--dump-graph DUMP_GRAPH]
[--lock-symbols LOCK_SYMBOLS]
[--unlock-symbols UNLOCK_SYMBOLS]
binary pid
Detect potential deadlocks (lock inversions) in a running binary. Must be run
as root.
positional arguments:
binary Absolute path to binary
pid Pid to trace
optional arguments:
-h, --help show this help message and exit
--dump-graph DUMP_GRAPH
If set, this will dump the mutex graph to the
specified file.
--lock-symbols LOCK_SYMBOLS
Comma-separated list of lock symbols to trace. Default
is pthread_mutex_lock
--unlock-symbols UNLOCK_SYMBOLS
Comma-separated list of unlock symbols to trace.
Default is pthread_mutex_unlock