  03 Feb, 2017 · 6 commits
    • b83af356
      Kenny Yu authored
    • Fix a few small typos · d07b7597
      Kenny Yu authored
    • Address comments from https://github.com/iovisor/bcc/pull/936 · e7dff43a
      Kenny Yu authored
      - Remove the dependency on networkx. I did this by copying only the parts I
        needed from networkx and adapting them: `DiGraph`,
        `strongly_connected_components`, and `simple_cycles` (see the first sketch
        after this list).
      
      - Symbolize global and static mutexes. To do this, I shell out to a
        subprocess. This isn't very efficient, but it only happens at the end of
        the program if a deadlock is found, so it's not too bad.
      
      - Add a `--verbose` mode to print graph statistics.
      
      - Make the `--binary` flag optional. It is not needed by default; however, it
        is needed on kernels without this recent kernel patch
        (https://lkml.org/lkml/2017/1/13/585, submitted 2 weeks ago), because we
        can't attach a uprobe to a binary that has `:` in its path name. Instead, we
        can create a symlink without `:` in the path and pass that to the `--binary`
        argument (see the second sketch after this list).
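
      A minimal sketch (written for this log, not code from the commit) of the kind
      of stripped-down graph helpers the first bullet describes: a small
      adjacency-set `DiGraph` plus a DFS that returns one directed cycle if any
      exists. The real change ports networkx's `DiGraph`,
      `strongly_connected_components`, and `simple_cycles`; this only illustrates
      the idea.

      ```python
      from collections import defaultdict


      class DiGraph(object):
          """Minimal directed graph: node -> set of successor nodes."""

          def __init__(self):
              self.adj = defaultdict(set)

          def add_edge(self, u, v):
              self.adj[u].add(v)
              if v not in self.adj:
                  self.adj[v] = set()

          def nodes(self):
              return list(self.adj)

          def successors(self, u):
              return self.adj[u]


      def find_cycle(graph):
          """Return one directed cycle as a list of nodes, or None if acyclic."""
          WHITE, GRAY, BLACK = 0, 1, 2
          color = {n: WHITE for n in graph.nodes()}
          parent = {}

          for start in graph.nodes():
              if color[start] != WHITE:
                  continue
              color[start] = GRAY
              stack = [(start, iter(graph.successors(start)))]
              while stack:
                  node, succs = stack[-1]
                  for succ in succs:
                      if color[succ] == WHITE:
                          # Tree edge: descend into the unvisited successor.
                          color[succ] = GRAY
                          parent[succ] = node
                          stack.append((succ, iter(graph.successors(succ))))
                          break
                      if color[succ] == GRAY:
                          # Back edge to a node on the current DFS path: cycle.
                          cycle = [node]
                          while node != succ:
                              node = parent[node]
                              cycle.append(node)
                          cycle.reverse()
                          return cycle
                  else:
                      # All successors processed; this node is finished.
                      color[node] = BLACK
                      stack.pop()
          return None


      # Two mutexes locked in opposite orders produce a cycle:
      g = DiGraph()
      g.add_edge("mutex_a", "mutex_b")
      g.add_edge("mutex_b", "mutex_a")
      print(find_cycle(g))  # e.g. ['mutex_a', 'mutex_b']
      ```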
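
      A second, hypothetical illustration, this time of the workaround in the last
      bullet: symlink a binary whose path contains `:` to a `:`-free path and pass
      that to `--binary`. The paths, the pid, and the exact invocation below are
      made up for the example.

      ```python
      import os
      import subprocess

      binary = "/opt/service:1.2/bin/worker"  # hypothetical path containing ':'
      alias = "/tmp/worker_no_colon"          # ':'-free symlink for the uprobes

      if not os.path.islink(alias):
          os.symlink(binary, alias)

      # Hypothetical invocation; check the tool's --help for its real arguments.
      subprocess.call(["./deadlock_detector.py", "--binary", alias, "1234"])
      ```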
    • docs: keep track of when prealloc of map elements was introduced · e682846d
      Jesper Dangaard Brouer authored
      Kernel v4.6-rc1~91^2~108^2~6
       commit 6c9059817432 ("bpf: pre-allocate hash map elements")
      
      Introduced default preallocation of map elements to solve a deadlock
      (when kprobe'ing the memory allocator itself).
      
      This change is also a performance enhancement.
      
      The commit also introduced a map_flags attribute on BPF_MAP_CREATE, which can
      disable this preallocation again via BPF_F_NO_PREALLOC.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
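
      A minimal sketch (not part of the commit) of what this looks like from bcc,
      assuming a kernel >= 4.6 and a bcc version whose `BPF_F_TABLE` macro accepts
      a flags argument: the map below opts out of the default preallocation with
      BPF_F_NO_PREALLOC.

      ```python
      from bcc import BPF

      prog = r"""
      // BPF_TABLE(...) preallocates all elements by default on kernels >= 4.6;
      // the BPF_F_TABLE variant takes an extra flags argument to opt out of that.
      BPF_F_TABLE("hash", u32, u64, counts, 1024, BPF_F_NO_PREALLOC);

      int count_event(void *ctx) {
          u32 key = 0;
          u64 zero = 0, *val;

          val = counts.lookup(&key);
          if (val) {
              (*val)++;
          } else {
              counts.update(&key, &zero);
          }
          return 0;
      }
      """

      # Compiling the program is enough to create the map with
      # map_flags = BPF_F_NO_PREALLOC; attach count_event to a probe as needed.
      b = BPF(text=prog)
      ```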
    • tools: add tool to detect potential deadlocks in running programs · 66fb4d29
      Kenny Yu authored
      `deadlock_detector` is a new tool to detect potential deadlocks (lock order
      inversions) in a running process. The program attaches uprobes on
      `pthread_mutex_lock` and `pthread_mutex_unlock` to build a mutex wait directed
      graph, and then looks for a cycle in this graph. This graph has the following
      properties:
      
      - Nodes in the graph represent mutexes.
      - Edge (A, B) exists if some thread T called lock(A) and then called lock(B)
        before calling unlock(A).
      
      A cycle in this graph indicates a lock order inversion (a potential deadlock).
      If the program finds a lock order inversion, it dumps the cycle of mutexes,
      dumps the stack traces where each mutex was acquired, and then exits.
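
      As a purely illustrative, userspace-only sketch of this edge rule (it is
      hypothetical and not the tool's implementation, which builds the graph from
      uprobe events): each thread tracks the mutexes it currently holds, lock(B)
      while holding A adds the edge (A, B), and an inversion is reported when a new
      edge closes a cycle.

      ```python
      from collections import defaultdict

      edges = defaultdict(set)  # mutex -> mutexes locked while it was held
      held = defaultdict(list)  # thread id -> mutexes currently held, in order


      def reachable(graph, src, dst):
          """DFS: is there a directed path from src to dst?"""
          seen, stack = set(), [src]
          while stack:
              node = stack.pop()
              if node == dst:
                  return True
              if node in seen:
                  continue
              seen.add(node)
              stack.extend(graph[node])
          return False


      def on_lock(tid, mutex):
          for prior in held[tid]:
              if mutex not in edges[prior]:
                  # The new edge (prior, mutex) closes a cycle if mutex already
                  # reaches prior in the graph.
                  if reachable(edges, mutex, prior):
                      print("potential deadlock: edge (%s, %s) closes a cycle"
                            % (prior, mutex))
                  edges[prior].add(mutex)
          held[tid].append(mutex)


      def on_unlock(tid, mutex):
          if mutex in held[tid]:
              held[tid].remove(mutex)


      # Two threads acquire the same pair of mutexes in opposite orders:
      on_lock(1, "A"); on_lock(1, "B"); on_unlock(1, "B"); on_unlock(1, "A")
      on_lock(2, "B"); on_lock(2, "A")  # reports the A <-> B lock order inversion
      ```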
      
      The output format is similar to ThreadSanitizer's (see the example at
      https://github.com/google/sanitizers/wiki/ThreadSanitizerDeadlockDetector).
      
      This program can only find potential deadlocks that occur while the program is
      tracing the process. It cannot find deadlocks that may have occurred before the
      program was attached to the process.
      
      If the traced process has many mutexes and threads, this program will add a
      very large overhead because every mutex lock/unlock and clone call will be
      traced. This tool is meant for debugging only, and you should run this tool
      only on programs where the slowdown is acceptable.
      
      Note: This tool adds a dependency on `networkx` for the graph libraries
      (building a directed graph and cycle detection).
      
      Note: This tool does not work for shared mutexes or recursive mutexes.
      
      For shared (read-write) mutexes, a deadlock requires a cycle in the wait
      graph in which at least one of the mutexes in the cycle is acquired for
      exclusive (write) ownership.
      
      For recursive mutexes, lock() is called multiple times on the same mutex.
      However, there is no way to determine if a mutex is a recursive mutex
      after the mutex has been created. As a result, this tool will not find
      potential deadlocks that involve only one mutex.
    • Merge pull request #935 from wcohen/wcohen/lua_opt · e1f7462c
      Brenden Blanco authored
      Allow RPMS to be built on ppc64 and aarch64 by making luajit optional