1. 09 Feb, 2017 9 commits
  2. 08 Feb, 2017 4 commits
  3. 07 Feb, 2017 8 commits
  4. 06 Feb, 2017 3 commits
  5. 05 Feb, 2017 2 commits
  6. 04 Feb, 2017 1 commit
  7. 03 Feb, 2017 6 commits
    • b83af356
      Kenny Yu authored
    • Fix a few small typos · d07b7597
      Kenny Yu authored
    • Address comments from https://github.com/iovisor/bcc/pull/936 · e7dff43a
      Kenny Yu authored
      - Remove dependency on networkx. I did this by copying only the parts I
        needed from networkx and adapting them to this tool. These include:
        `DiGraph`, `strongly_connected_components`, `simple_cycles`
      
      - Symbolize global and static mutexes. To do this, I shell out to a
        subprocess. This isn't very efficient, but it only happens at the end of
        the program if a deadlock is found, so it's not too bad.
      
      - `--verbose` mode to print graph statistics
      
      - Make `--binary` flag optional. It is not needed by default; however, it
        is needed on kernels without this recent kernel patch
        (https://lkml.org/lkml/2017/1/13/585, submitted 2 weeks ago): we can't
        attach a uprobe on a binary that has `:` in the path name. Instead, we
        can create a symlink without `:` in the path and pass that to the
        `--binary` argument (see the sketch after this list).
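      A minimal sketch of the symlink workaround; both paths below are made-up
      assumptions for illustration, not taken from the commit:

      ```c
      /* Create a colon-free symlink to a binary whose path contains ':',
       * then pass the symlink to --binary. Paths are illustrative. */
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          const char *real  = "/opt/app:v2/bin/server"; /* path containing ':' */
          const char *alias = "/tmp/server_alias";      /* colon-free symlink */

          if (symlink(real, alias) != 0) {
              perror("symlink");
              return 1;
          }
          printf("now pass %s to --binary\n", alias);
          return 0;
      }
      ```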
    • docs: keep track of when prealloc of map elements was introduced · e682846d
      Jesper Dangaard Brouer authored
      Kernel v4.6-rc1~91^2~108^2~6
      commit 6c9059817432 ("bpf: pre-allocate hash map elements")
      
      Introduced default preallocation of map elements to solve a deadlock
      (when kprobe'ing the memory allocator itself).
      
      This change is also a performance enhancement.
      
      The commit also introduced a map_flags attribute on BPF_MAP_CREATE; passing
      BPF_F_NO_PREALLOC disables this preallocation again (see the sketch below).
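      A minimal sketch, assuming a v4.6+ kernel that exposes BPF_F_NO_PREALLOC
      in <linux/bpf.h> and sufficient privileges, of creating a hash map with
      preallocation disabled via the raw bpf(2) syscall:

      ```c
      #include <linux/bpf.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(void) {
          union bpf_attr attr;
          memset(&attr, 0, sizeof(attr));
          attr.map_type    = BPF_MAP_TYPE_HASH;
          attr.key_size    = sizeof(int);
          attr.value_size  = sizeof(long long);
          attr.max_entries = 1024;
          attr.map_flags   = BPF_F_NO_PREALLOC;  /* opt out of element prealloc */

          int fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
          if (fd < 0) {
              perror("bpf(BPF_MAP_CREATE)");
              return 1;
          }
          printf("created map, fd=%d\n", fd);
          return 0;
      }
      ```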
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    • tools: add tool to detect potential deadlocks in running programs · 66fb4d29
      Kenny Yu authored
      `deadlock_detector` is a new tool to detect potential deadlocks (lock order
      inversions) in a running process. The program attaches uprobes on
      `pthread_mutex_lock` and `pthread_mutex_unlock` to build a mutex wait directed
      graph, and then looks for a cycle in this graph. This graph has the following
      properties:
      
      - Nodes in the graph represent mutexes.
      - Edge (A, B) exists if some thread T called lock(A) and then called
        lock(B) before calling unlock(A).
      
      A cycle in this graph indicates a lock order inversion (a potential
      deadlock). If the program finds one, it dumps the cycle of mutexes and the
      stack traces where each mutex was acquired, and then exits.
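      A minimal sketch of the cycle check in C (not the tool's actual code,
      which uses the copied networkx routines); a fixed-size adjacency matrix
      and DFS coloring stand in for the real graph machinery:

      ```c
      #include <stdio.h>

      #define MAX_MUTEXES 128

      static int edge[MAX_MUTEXES][MAX_MUTEXES]; /* edge[a][b]: lock(b) held under lock(a) */
      static int color[MAX_MUTEXES];             /* 0=unvisited, 1=on DFS path, 2=done */

      static int dfs(int u, int n) {
          color[u] = 1;                          /* u is on the current DFS path */
          for (int v = 0; v < n; v++) {
              if (!edge[u][v])
                  continue;
              if (color[v] == 1)                 /* back edge: cycle found */
                  return 1;
              if (color[v] == 0 && dfs(v, n))
                  return 1;
          }
          color[u] = 2;
          return 0;
      }

      int main(void) {
          int n = 2;
          /* Thread T1: lock(0) then lock(1); thread T2: lock(1) then lock(0). */
          edge[0][1] = 1;
          edge[1][0] = 1;
          for (int u = 0; u < n; u++) {
              if (color[u] == 0 && dfs(u, n)) {
                  puts("lock order inversion (potential deadlock) detected");
                  return 0;
              }
          }
          puts("no cycle found");
          return 0;
      }
      ```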
      
      The output format is similar to ThreadSanitizer's (see this example:
      https://github.com/google/sanitizers/wiki/ThreadSanitizerDeadlockDetector).
      
      This program can only find potential deadlocks that occur while the program is
      tracing the process. It cannot find deadlocks that may have occurred before the
      program was attached to the process.
      
      If the traced process has many mutexes and threads, this program will add a
      very large overhead because every mutex lock/unlock and clone call will be
      traced. This tool is meant for debugging only, and you should run this tool
      only on programs where the slowdown is acceptable.
      
      Note: This tool adds a dependency on `networkx` for the graph libraries
      (building a directed graph and cycle detection).
      
      Note: This tool does not work for shared mutexes or recursive mutexes.
      
      For shared (read-write) mutexes, a deadlock requires a cycle in the wait
      graph where at least one of the lock acquisitions in the cycle is for
      exclusive (write) ownership.
      
      For recursive mutexes, lock() is called multiple times on the same mutex.
      However, there is no way to determine whether a mutex is recursive after
      it has been created. As a result, this tool will not find potential
      deadlocks that involve only one mutex (see the example below).
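      An example of why recursive mutexes evade the tool: the relock below is
      legal for PTHREAD_MUTEX_RECURSIVE, but the type is visible only at init
      time, not from the lock()/unlock() calls the uprobes observe:

      ```c
      #define _GNU_SOURCE
      #include <pthread.h>
      #include <stdio.h>

      int main(void) {
          pthread_mutexattr_t attr;
          pthread_mutex_t m;

          pthread_mutexattr_init(&attr);
          pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
          pthread_mutex_init(&m, &attr);

          pthread_mutex_lock(&m);
          pthread_mutex_lock(&m);   /* legal relock; uprobes just see two lock()s */
          pthread_mutex_unlock(&m);
          pthread_mutex_unlock(&m);

          puts("relocked the same mutex without deadlocking");
          pthread_mutex_destroy(&m);
          pthread_mutexattr_destroy(&attr);
          return 0;
      }
      ```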
    • Merge pull request #935 from wcohen/wcohen/lua_opt · e1f7462c
      Brenden Blanco authored
      Allow RPMS to be built on ppc64 and aarch64 by making luajit optional
  8. 02 Feb, 2017 1 commit
    • Allow RPMS to be built on ppc64 and aarch64 by making luajit optional · ef91b6ed
      William Cohen authored
      Not all architectures have luajit support.  The bcc configure and
      build were already set up to make the luajit-dependent parts
      optional.  The bcc.spec now makes the luajit-dependent parts optional
      too, allowing Fedora 25 builds on ppc64, ppc64le, and aarch64.  This
      change has been tested and allows the resulting srpm to build on the
      Fedora koji build system for the newly added architectures.
      Signed-off-by: William Cohen <wcohen@redhat.com>
  9. 01 Feb, 2017 6 commits
    • Merge pull request #928 from goldshtn/tp-data-loc · 4a57f4dd
      4ast authored
      Support for __data_loc tracepoint fields
    • tplist: Don't ignore __data_loc fields · 3ea6eee8
      Sasha Goldshtein authored
    • c6aaaed1
      Sasha Goldshtein authored
    • cc: Support for __data_loc tracepoint fields · b9545a5c
      Sasha Goldshtein authored
      `__data_loc` fields are dynamically sized by the kernel at
      runtime. The field data follows the tracepoint structure entry,
      and needs to be extracted in a special way. The `__data_loc` field
      itself is a 32-bit value that consists of two 16-bit parts: the
      high 16 bits are the length of the data, and the low 16 bits are
      the offset of the data from the beginning of the tracepoint
      structure. From a cursory look, there are >200 tracepoints in
      recent kernels that have this kind of field.
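      A small sketch of decoding a `__data_loc` word as described above (the
      example value is made up):

      ```c
      #include <stdio.h>

      int main(void) {
          unsigned int data_loc = (5u << 16) | 0x40;  /* example: 5 bytes at offset 0x40 */
          unsigned short offset = data_loc & 0xffff;  /* low 16 bits: offset */
          unsigned short length = data_loc >> 16;     /* high 16 bits: length */

          /* The payload lives at (char *)record + offset and is length bytes. */
          printf("offset=%u length=%u\n", offset, length);
          return 0;
      }
      ```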
      
      This patch fixes `tp_frontend_action.cc` to recognize and emit
      `__data_loc` fields correctly, as 32-bit opaque fields. Then, it
      introduces two helper macros:
      
      `TP_DATA_LOC_READ(dst, field)` reads from `args->field` by finding
      the right offset and length and emitting the `bpf_probe_read`
      required to fetch the data. This will only work with new kernels.
      
      `TP_DATA_LOC_READ_CONST(dst, field, length)` takes a user-specified
      length rather than finding it from `args->field`. This will work
      on older kernels, where the BPF verifier doesn't allow non-constant
      sizes to be passed to `bpf_probe_read`.
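      A hedged usage sketch of the new helper in a bcc tracepoint program; the
      irq:irq_handler_entry tracepoint and its `__data_loc char[]` field `name`
      are assumptions chosen for illustration, and the program would be loaded
      through bcc's usual Python frontend:

      ```c
      TRACEPOINT_PROBE(irq, irq_handler_entry) {
          char name[64] = {};
          /* Constant-length variant: works on older kernels whose verifier
           * rejects non-constant sizes in bpf_probe_read. */
          TP_DATA_LOC_READ_CONST(name, name, sizeof(name));
          bpf_trace_printk("irq handler: %s\n", name);
          return 0;
      }
      ```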
    • Merge pull request #918 from derek0883/mybcc · b77915df
      4ast authored
      Handling multiple concurrent probe users.
    • enum bpf_probe_attach_type to CAPITAL · 227b5b99
      Derek authored