Commit 217d6017 authored by Brendan Gregg, committed by GitHub

Merge pull request #70 from iovisor/tools

Tools
parents b0c64996 820c6453
# Contributing bpftrace/eBPF tools
If you want to contribute tools to bpftrace, please read this checklist first.
_(Written by Brendan Gregg. Adapted from the [bcc version](https://github.com/iovisor/bcc/blob/master/CONTRIBUTING-SCRIPTS.md))._
bpftrace tool development checklist:
1. **Research the topic landscape**. Learn the existing tools and metrics (incl. from /proc). Determine what real world problems exist and need solving. We have too many tools and metrics as it is; we don't need more "I guess that's useful" tools, we need more "ah-hah! I couldn't do this before!" tools. Consider asking other developers about your idea. Many of us can be found in IRC, in the #iovisor channel on irc.oftc.net. There's also the iovisor mailing list (see the README.md), and GitHub for issues.
1. **Create a known workload for testing**. This might involve writing a 10 line C program, using a micro-benchmark, or just improvising at the shell (a minimal sketch follows this checklist). If you don't know how to create a workload, learn! Figuring this out will provide invaluable context and details that you may have otherwise overlooked. Sometimes it's easy, and I'm able to just use dd(1) from /dev/urandom or a disk device to /dev/null. It lets me set the I/O size and count, and it provides throughput statistics for cross-checking my tool output. But other times I need a micro-benchmark, or some C.
1. **Write the tool to solve the problem and no more**. Unix philosophy: do one thing and do it well. netstat doesn't have an option to dump packets, tcpdump-style. They are two different tools.
1. **Consider bcc for custom output and options**. Need to really customize your output? Want to support a variety of command line options? It sounds like your tool may be better as a [bcc](https://github.com/iovisor/bcc) tool, which currently supports these using Python (and other) interfaces.
1. **Check your tool correctly measures your known workload**. If possible, run a prime number of events (eg, 23) and check that the numbers match. Try other workload variations.
1. **Use other observability tools to perform a cross-check or sanity check**. Eg, imagine you write a PCI bus tool that shows current throughput is 28 Gbytes/sec. How could you sanity test that? Well, what PCI devices are there? Disks and network cards? Measure their throughput (iostat, nicstat, sar), and check if it is in the ballpark of 28 Gbytes/sec (which would include PCI frame overheads). Ideally, your numbers match.
1. **Measure the overhead of the tool**. If you are running a micro-benchmark, how much slower is it with the tool running? Is more CPU consumed? Try to determine the worst case: run the micro-benchmark so that CPU headroom is exhausted, and then run the bpftrace tool. Can overhead be lowered?
1. **Test again, and stress test**. You want to discover and fix all the bad things before others hit them.
1. **Consider your own repository**. Your tool does not need to be here! bpftrace makes it very easy to create new tools, perhaps too easy. As the previous items described, it's possible to create tools that print metrics that are incorrect, or cause too high overhead. Tools here will likely be run on production servers as root, at many companies, and we want to err on the side of caution. You can always create your own repository of bpftrace tools, and once they have had some exposure, testing, and bug fixes, consider contributing them here.
1. **Concise, intuitive, self-explanatory output**. The default output should meet the common need concisely. Consider including a startup message that's self-explanatory, eg "Tracing block I/O. Output every 1 second. Ctrl-C to end.". Also, try hard to keep the output less than 80 characters wide, especially the default output of the tool. That way, the output not only fits on the smallest reasonable terminal, it also fits well in slide decks, blog posts, articles, and printed material, all of which help education and adoption. Publishers of technical books often have templates they require books to conform to: it may not be an option to shrink or narrow the font to fit your output.
1. **Check style**: Do you have a consistent convention for indentation, variable names, and comment style? You can follow the lead from the other tools.
1. **Write an _example.txt file**. Copy the style in tools/biolatency_example.txt: start with an intro sentence, then have examples, and finish with the USAGE message. Explain everything: the first example should explain what we are seeing, even if this seems obvious. For some people it won't be obvious. Also explain why we are running the tool: what problems it's solving. It can take a long time (hours) to come up with good examples, but it's worth it. These will get copied around (eg, presentations, articles).
1. **Read your example.txt file**. Does this sound too niche or convoluted? Are you spending too much time explaining caveats? These can be hints that perhaps you should fix your tool, or abandon it! I've abandoned many tools at this stage.
1. **Write a man page**. Either ROFF (.8), markdown (.md), or plain text (.txt): so long as it documents the important sections, particularly columns (fields) and caveats. These go under man/man8. See the other examples. Include a section on overhead, and pull no punches. It's better for end users to know about high overhead beforehand, than to discover it the hard way. Also explain caveats. Don't assume those will be obvious to tool users.
1. **Read your man page**. For ROFF: nroff -man filename. Like before, this exercise is like saying something out loud. Does it sound too niche or convoluted? Again, hints that you might need to go back and fix things, or abandon it.
1. **Spell check your documentation**. Use a spell checker like aspell to check your document quality before committing.
1. **Add an entry to README.md**.
1. If you made it this far, pull request!
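A minimal sketch of the known-workload cross-check from steps 2 and 5 (the dd(1) parameters and the tracepoint chosen here are only illustrative): run the one-liner in one terminal and the workload in another, then check that the count roughly matches the 23 reads requested (a few extra reads may appear from program startup).

```
# bpftrace -e 'tracepoint:syscalls:sys_enter_read /comm == "dd"/ { @reads = count(); }'
# dd if=/dev/urandom of=/dev/null bs=1M count=23
```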
@@ -141,6 +141,22 @@ verify_cpu+0
]: 150
```
## Tools
bpftrace contains various tools, which also serve as examples of programming in the bpftrace language.
- tools/[bashreadline.bt](tools/bashreadline.bt): Print entered bash commands system wide. [Examples](tools/bashreadline_example.txt).
- tools/[capable.bt](tools/capable.bt): Trace security capability checks. [Examples](tools/capable_example.txt).
- tools/[cpuwalk.bt](tools/cpuwalk.bt): Sample which CPUs are executing processes. [Examples](tools/cpuwalk_example.txt).
- tools/[gethostlatency.bt](tools/gethostlatency.bt): Show latency for getaddrinfo/gethostbyname[2] calls. [Examples](tools/gethostlatency_example.txt).
- tools/[loads.bt](tools/loads.bt): Print load averages. [Examples](tools/loads_example.txt).
- tools/[pidpersec.bt](tools/pidpersec.bt): Count new processes (via fork). [Examples](tools/pidpersec_example.txt).
- tools/[vfscount.bt](tools/vfscount.bt): Count VFS calls. [Examples](tools/vfscount_example.txt).
- tools/[vfsstat.bt](tools/vfsstat.bt): Count some VFS calls, with per-second summaries. [Examples](tools/vfsstat_example.txt).
- tools/[xfsdist.bt](tools/xfsdist.bt): Summarize XFS operation latency distribution as a histogram. [Examples](tools/xfsdist_example.txt).
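Each tool is a plain bpftrace program: run it as root by passing the file to bpftrace. For example:

```
# bpftrace tools/vfscount.bt
```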
For more eBPF observability tools, see [bcc tools](https://github.com/iovisor/bcc#tools).
## Probe types
<center><a href="images/bpftrace_probes_2018.png"><img src="images/bpftrace_probes_2018.png" border=0 width=700></a></center>
.TH bashreadline 8 "2018-09-06" "USER COMMANDS"
.SH NAME
bashreadline.bt \- Print bash commands system wide. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B bashreadline.bt
.SH DESCRIPTION
bashreadline traces the return of the readline() function using uretprobes, to
show the bash commands that were entered interactively, system wide. The
entered command may fail: this is just showing what was entered.
This program is also a basic example of bpftrace and uretprobes.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Trace bash commands system wide:
#
.B bashreadline.bt
.SH FIELDS
.TP
TIME
A timestamp on the output, in "HH:MM:SS" format.
.TP
PID
The process ID for bash.
.TP
COMMAND
Entered command.
.SH OVERHEAD
As the rate of interactive bash commands is expected to be very low (<<100/s),
the overhead of this program is expected to be negligible.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file
containing example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
may provide more options and customizations.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
opensnoop(8)
.TH capable 8 "2018-09-08" "USER COMMANDS"
.SH NAME
capable.bt \- Trace security capability checks (cap_capable()).
.SH SYNOPSIS
.B capable.bt
.SH DESCRIPTION
This traces security capability checks in the kernel, and prints details for
each call. This can be useful for general debugging, and also security
enforcement: determining a white list of capabilities an application needs.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF, bpftrace.
.SH EXAMPLES
.TP
Trace all capability checks system-wide:
#
.B capable.bt
.SH FIELDS
.TP
TIME
Time of capability check: HH:MM:SS.
.TP
UID
User ID.
.TP
PID
Process ID.
.TP
COMM
Process name.
.TP
CAP
Capability number.
.TP
NAME
Capability name. See capabilities(7) for descriptions.
.TP
AUDIT
Whether this was an audit event.
.SH OVERHEAD
This adds low-overhead instrumentation to capability checks, which are expected
to be low frequency; however, that depends on the application. Test in a lab
environment before use.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file containing
example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
provides options to customize the output.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
capabilities(7)
.TH cpuwalk 8 "2018-09-08" "USER COMMANDS"
.SH NAME
cpuwalk.bt \- Sample which CPUs are executing processes. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B cpuwalk.bt
.SH DESCRIPTION
This tool samples CPUs at 99 Hertz, then prints a histogram showing which
CPUs were active. 99 Hertz is used to avoid lockstep sampling that would
skew results. This tool can help identify if your application's workload is
evenly using the CPUs, or if there is an imbalance problem.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Sample CPUs and print a summary on Ctrl-C:
#
.B cpuwalk.bt
.SH FIELDS
.TP
1st, 2nd
The CPU is shown in the first field, after the "[". Disregard the second field.
.TP
3rd
A column showing the number of samples for this CPU.
.TP
4th
This is an ASCII histogram representing the count column.
.SH OVERHEAD
This should be negligible.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file containing
example usage, output, and commentary for this tool.
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
mpstat(1)
.TH gethostlatency 8 "2018-09-08" "USER COMMANDS"
.SH NAME
gethostlatency.bt \- Show latency for getaddrinfo/gethostbyname[2] calls. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B gethostlatency.bt
.SH DESCRIPTION
This traces and prints when getaddrinfo(), gethostbyname(), and gethostbyname2()
are called, system wide, and shows the responsible PID and command name,
latency of the call (duration) in milliseconds, and the host string.
This tool can be useful for identifying DNS latency, by identifying which
remote host name lookups were slow, and by how much.
This tool currently uses dynamic tracing of user-level functions and registers,
and may need modifications to match your software and processor architecture.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Trace host lookups (getaddrinfo/gethostbyname[2]) system wide:
#
.B gethostlatency.bt
.SH FIELDS
.TP
TIME
Time of the command (HH:MM:SS).
.TP
PID
Process ID of the client performing the call.
.TP
COMM
Process (command) name of the client performing the call.
.TP
LATms
Latency of the call, in milliseconds.
.TP
HOST
Host name string: the target of the lookup.
.SH OVERHEAD
The rate of lookups should be relatively low, so the overhead is not expected
to be a problem.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file containing
example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
provides command line options.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
tcpdump(8)
.TH loads 8 "2018-09-10" "USER COMMANDS"
.SH NAME
loads.bt \- Prints load averages. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B loads.bt
.SH DESCRIPTION
These are the same load averages printed by "uptime", but to three decimal
places instead of two (not that it really matters). This is really a
demonstration of fetching and processing a kernel structure from bpftrace.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Print system load averages every second:
#
.B loads.bt
.SH FIELDS
.TP
HH:MM:SS
Each output line includes time of printing in "HH:MM:SS" format.
.TP
load averages:
These are exponentially-damped moving sum averages of the system loads.
Load is a measurement of demand on system resources, which include CPUs and
other resources that are accessed with the kernel in an uninterruptible state
(TASK_UNINTERRUPTIBLE), which includes types of disk I/O and lock accesses.
Linux load averages originally reflected CPU demand only, as they do in other
OSes, but this was changed in Linux 0.99.14. This demand measurement reflects
not just the utilized resource, but also the queued demand (a saturation
measurement). Finally, the three numbers are called the "one-", "five-", and
"fifteen-minute" load averages; however, these times are constants used in the
exponential damping equation, and the load averages reflect load beyond these
times. Were you expecting an accurate description of load averages in
the man page of a bpftrace tool?
.SH OVERHEAD
Other than bpftrace startup time, negligible.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file containing
example usage, output, and commentary for this tool.
.SH REFERENCE
For more on load averages, see:
.PP
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
uptime(1)
.TH pidpersec 8 "2018-09-06" "USER COMMANDS"
.SH NAME
pidpersec.bt \- Count new processes (via fork()). Uses bpftrace/eBPF.
.SH SYNOPSIS
.B pidpersec.bt
.SH DESCRIPTION
pidpersec shows how many new processes were created each second. There
can be performance issues caused by many short-lived processes, which may not
be visible in sampling tools like top(1). pidpersec provides one way to
investigate this behavior.
This works by tracing the tracepoint:sched:sched_process_fork tracepoint.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Count new processes, printing per-second summaries until Ctrl-C is hit:
#
.B pidpersec.bt
.SH FIELDS
.TP
1st
Count of processes (after "@")
.SH OVERHEAD
This traces kernel forks, and maintains an in-kernel count which is
read asynchronously from user-space. As the rate of new processes is generally
expected to be low (<< 1000/s), the overhead is also expected to be negligible.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file
containing example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
may provide more options and customizations.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
top(1)
.TH vfscount 8 "2018-09-06" "USER COMMANDS"
.SH NAME
vfscount.bt \- Count VFS calls ("vfs_*"). Uses bpftrace/eBPF.
.SH SYNOPSIS
.B vfscount.bt
.SH DESCRIPTION
This counts VFS calls. This can be useful for general workload
characterization of these operations.
This works by tracing all kernel functions beginning with "vfs_" using dynamic
tracing. This may match more functions than you are interested in measuring;
edit the script to customize which functions to trace.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Count all VFS calls until Ctrl-C is hit:
#
.B vfscount.bt
.SH FIELDS
.TP
1st
Kernel function name (in @[])
.TP
2nd
Number of calls while tracing
.SH OVERHEAD
This traces kernel vfs functions and maintains in-kernel counts, which
are asynchronously copied to user-space. While the rate of VFS operations can
be very high (>1M/sec), this is a relatively efficient way to trace these
events, and so the overhead is expected to be small for normal workloads.
Measure in a test environment, and if overheads are an issue, edit the script
to reduce the types of vfs functions traced (currently all beginning with
"vfs_").
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file
containing example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
may provide more options and customizations.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
vfsstat.bt(8)
.TH vfsstat 8 "2018-09-06" "USER COMMANDS"
.SH NAME
vfsstat.bt \- Count key VFS calls. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B vfsstat.bt
.SH DESCRIPTION
This traces some common VFS calls and prints per-second summaries. This can
be useful for general workload characterization, and looking for patterns
in operation usage over time.
This works by tracing some kernel vfs functions using dynamic tracing, and will
need updating to match any changes to these functions. Edit the script to
customize which functions are traced. Also see vfscount, which is more
easily customized to trace multiple functions.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Count some VFS calls, printing per-second summaries until Ctrl-C is hit:
#
.B vfsstat.bt
.SH FIELDS
.TP
HH:MM:SS
Each output summary is prefixed by the time of printing in "HH:MM:SS" format.
.TP
1st
Kernel function name (in @[])
.TP
2nd
Number of calls while tracing
.SH OVERHEAD
This traces various kernel vfs functions and maintains in-kernel counts, which
are asynchronously copied to user-space. While the rate of VFS operations can
be very high (>1M/sec), this is a relatively efficient way to trace these
events, and so the overhead is expected to be small for normal workloads.
Measure in a test environment.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file
containing example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
may provide more options and customizations.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
vfscount.bt(8)
.TH xfsdist 8 "2018-09-08" "USER COMMANDS"
.SH NAME
xfsdist.bt \- Summarize XFS operation latency. Uses bpftrace/eBPF.
.SH SYNOPSIS
.B xfsdist.bt
.SH DESCRIPTION
This tool summarizes time (latency) spent in common XFS file operations: reads,
writes, opens, and syncs, and presents it as a power-of-2 histogram. It uses an
in-kernel eBPF map to store the histogram for efficiency.
Since this works by tracing the xfs_file_operations interface functions, it
will need updating to match any changes to these functions.
Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bpftrace.
.SH EXAMPLES
.TP
Trace XFS operation time, and print a summary on Ctrl-C:
#
.B xfsdist.bt
.SH FIELDS
.TP
0th
The operation name (shown in "@[...]") is printed before each I/O histogram.
.TP
1st, 2nd
This is a range of latency, in microseconds (shown in "[...)" set notation).
.TP
3rd
A column showing the count of operations in this range.
.TP
4th
This is an ASCII histogram representing the count column.
.SH OVERHEAD
This adds low-overhead instrumentation to these XFS operations,
including reads and writes from the file system cache. Such reads and writes
can be very frequent (depending on the workload; eg, 1M/sec), at which
point the overhead of this tool may become noticeable.
Measure and quantify before use.
.SH SOURCE
This is from bpftrace.
.IP
https://github.com/iovisor/bpftrace
.PP
Also look in the bpftrace distribution for a companion _example.txt file containing
example usage, output, and commentary for this tool.
This is a bpftrace version of the bcc tool of the same name. The bcc tool
may provide more options and customizations.
.IP
https://github.com/iovisor/bcc
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH SEE ALSO
iosnoop(1)
/*
* bashreadline Print entered bash commands from all running shells.
* For Linux, uses bpftrace and eBPF.
*
* This works by tracing the readline() function using a uretprobe (uprobes).
*
* USAGE: bashreadline.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 06-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing bash commands... Hit Ctrl-C to end.\n");
printf("%-9s %-6s %s\n", "TIME", "PID", "COMMAND");
}
uretprobe:/bin/bash:readline
{
time("%H:%M:%S ");
printf("%-6d %s\n", pid, str(retval));
}
Demonstrations of bashreadline, the Linux bpftrace/eBPF version.
This prints bash commands from all running bash shells on the system. For
example:
# bpftrace bashreadline.bt
Attaching 2 probes...
Tracing bash commands... Hit Ctrl-C to end.
TIME PID COMMAND
06:40:06 5526 df -h
06:40:09 5526 ls -l
06:40:18 5526 echo hello bpftrace
06:40:42 5526 echooo this is a failed command, but we can see it anyway
^C
The entered command may fail. This is just showing what command lines were
entered interactively for bash to process.
It works by tracing the return of the readline() function using uprobes
(specifically a uretprobe).
There is another version of this tool in bcc: https://github.com/iovisor/bcc
/*
 * capable Trace security capability checks (cap_capable()).
* For Linux, uses bpftrace and eBPF.
*
* USAGE: capable.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 08-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing cap_capable syscalls... Hit Ctrl-C to end.\n");
printf("%-9s %-6s %-6s %-16s %-4s %-20s AUDIT\n", "TIME", "UID", "PID",
"COMM", "CAP", "NAME");
@cap[0] = "CAP_CHOWN";
@cap[1] = "CAP_DAC_OVERRIDE";
@cap[2] = "CAP_DAC_READ_SEARCH";
@cap[3] = "CAP_FOWNER";
@cap[4] = "CAP_FSETID";
@cap[5] = "CAP_KILL";
@cap[6] = "CAP_SETGID";
@cap[7] = "CAP_SETUID";
@cap[8] = "CAP_SETPCAP";
@cap[9] = "CAP_LINUX_IMMUTABLE";
@cap[10] = "CAP_NET_BIND_SERVICE";
@cap[11] = "CAP_NET_BROADCAST";
@cap[12] = "CAP_NET_ADMIN";
@cap[13] = "CAP_NET_RAW";
@cap[14] = "CAP_IPC_LOCK";
@cap[15] = "CAP_IPC_OWNER";
@cap[16] = "CAP_SYS_MODULE";
@cap[17] = "CAP_SYS_RAWIO";
@cap[18] = "CAP_SYS_CHROOT";
@cap[19] = "CAP_SYS_PTRACE";
@cap[20] = "CAP_SYS_PACCT";
@cap[21] = "CAP_SYS_ADMIN";
@cap[22] = "CAP_SYS_BOOT";
@cap[23] = "CAP_SYS_NICE";
@cap[24] = "CAP_SYS_RESOURCE";
@cap[25] = "CAP_SYS_TIME";
@cap[26] = "CAP_SYS_TTY_CONFIG";
@cap[27] = "CAP_MKNOD";
@cap[28] = "CAP_LEASE";
@cap[29] = "CAP_AUDIT_WRITE";
@cap[30] = "CAP_AUDIT_CONTROL";
@cap[31] = "CAP_SETFCAP";
@cap[32] = "CAP_MAC_OVERRIDE";
@cap[33] = "CAP_MAC_ADMIN";
@cap[34] = "CAP_SYSLOG";
@cap[35] = "CAP_WAKE_ALARM";
@cap[36] = "CAP_BLOCK_SUSPEND";
@cap[37] = "CAP_AUDIT_READ";
}
kprobe:cap_capable
{
$cap = arg2;
$audit = arg3;
time("%H:%M:%S ");
printf("%-6d %-6d %-16s %-4d %-20s %d\n", uid, pid, comm, $cap,
@cap[$cap], $audit);
}
END
{
clear(@cap);
}
Demonstrations of capable, the Linux bpftrace/eBPF version.
capable traces calls to the kernel cap_capable() function, which does security
capability checks, and prints details for each call. For example:
# capable.bt
TIME UID PID COMM CAP NAME AUDIT
22:11:23 114 2676 snmpd 12 CAP_NET_ADMIN 1
22:11:23 0 6990 run 24 CAP_SYS_RESOURCE 1
22:11:23 0 7003 chmod 3 CAP_FOWNER 1
22:11:23 0 7003 chmod 4 CAP_FSETID 1
22:11:23 0 7005 chmod 4 CAP_FSETID 1
22:11:23 0 7005 chmod 4 CAP_FSETID 1
22:11:23 0 7006 chown 4 CAP_FSETID 1
22:11:23 0 7006 chown 4 CAP_FSETID 1
22:11:23 0 6990 setuidgid 6 CAP_SETGID 1
22:11:23 0 6990 setuidgid 6 CAP_SETGID 1
22:11:23 0 6990 setuidgid 7 CAP_SETUID 1
22:11:24 0 7013 run 24 CAP_SYS_RESOURCE 1
22:11:24 0 7026 chmod 3 CAP_FOWNER 1
22:11:24 0 7026 chmod 4 CAP_FSETID 1
22:11:24 0 7028 chmod 4 CAP_FSETID 1
22:11:24 0 7028 chmod 4 CAP_FSETID 1
22:11:24 0 7029 chown 4 CAP_FSETID 1
22:11:24 0 7029 chown 4 CAP_FSETID 1
22:11:24 0 7013 setuidgid 6 CAP_SETGID 1
22:11:24 0 7013 setuidgid 6 CAP_SETGID 1
22:11:24 0 7013 setuidgid 7 CAP_SETUID 1
22:11:25 0 7036 run 24 CAP_SYS_RESOURCE 1
22:11:25 0 7049 chmod 3 CAP_FOWNER 1
22:11:25 0 7049 chmod 4 CAP_FSETID 1
22:11:25 0 7051 chmod 4 CAP_FSETID 1
22:11:25 0 7051 chmod 4 CAP_FSETID 1
[...]
This can be useful for general debugging, and also security enforcement:
determining a whitelist of capabilities an application needs.
The output above includes various capability checks: snmpd checking
CAP_NET_ADMIN, run checking CAP_SYS_RESOURCE, then some short-lived processes
checking CAP_FOWNER, CAP_FSETID, etc.
To see what each of these capabilities does, check the capabilities(7) man
page and the kernel source.
There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides options to customize the output.
/*
* cpuwalk Sample which CPUs are executing processes.
* For Linux, uses bpftrace and eBPF.
*
* USAGE: cpuwalk.bt
*
* This is a bpftrace version of the DTraceToolkit tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 08-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Sampling CPU at 99hz... Hit Ctrl-C to end.\n");
}
profile:hz:99
/pid/
{
@cpu = lhist(cpu, 0, 1000, 1);
}
Demonstrations of cpuwalk, the Linux bpftrace/eBPF version.
cpuwalk samples which CPUs processes are running on, and prints a summary
histogram. For example, here is a Linux kernel build on a 36-CPU server:
# cpuwalk.bt
Attaching 2 probes...
Sampling CPU at 99hz... Hit Ctrl-C to end.
^C
@cpu:
[0, 1) 130 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[1, 2) 137 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[2, 3) 99 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[3, 4) 99 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[4, 5) 82 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[5, 6) 34 |@@@@@@@@@@@@ |
[6, 7) 67 |@@@@@@@@@@@@@@@@@@@@@@@@ |
[7, 8) 41 |@@@@@@@@@@@@@@@ |
[8, 9) 97 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[9, 10) 140 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[10, 11) 105 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[11, 12) 77 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[12, 13) 39 |@@@@@@@@@@@@@@ |
[13, 14) 58 |@@@@@@@@@@@@@@@@@@@@@ |
[14, 15) 64 |@@@@@@@@@@@@@@@@@@@@@@@ |
[15, 16) 57 |@@@@@@@@@@@@@@@@@@@@@ |
[16, 17) 99 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[17, 18) 56 |@@@@@@@@@@@@@@@@@@@@ |
[18, 19) 44 |@@@@@@@@@@@@@@@@ |
[19, 20) 80 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[20, 21) 64 |@@@@@@@@@@@@@@@@@@@@@@@ |
[21, 22) 59 |@@@@@@@@@@@@@@@@@@@@@ |
[22, 23) 88 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[23, 24) 84 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[24, 25) 29 |@@@@@@@@@@ |
[25, 26) 48 |@@@@@@@@@@@@@@@@@ |
[26, 27) 62 |@@@@@@@@@@@@@@@@@@@@@@@ |
[27, 28) 66 |@@@@@@@@@@@@@@@@@@@@@@@@ |
[28, 29) 57 |@@@@@@@@@@@@@@@@@@@@@ |
[29, 30) 59 |@@@@@@@@@@@@@@@@@@@@@ |
[30, 31) 56 |@@@@@@@@@@@@@@@@@@@@ |
[31, 32) 23 |@@@@@@@@ |
[32, 33) 90 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[33, 34) 62 |@@@@@@@@@@@@@@@@@@@@@@@ |
[34, 35) 39 |@@@@@@@@@@@@@@ |
[35, 36) 68 |@@@@@@@@@@@@@@@@@@@@@@@@@ |
This shows that all 36 CPUs were active, with some busier than others.
Compare that output to the following workload from an application:
# cpuwalk.bt
Attaching 2 probes...
Sampling CPU at 99hz... Hit Ctrl-C to end.
^C
@cpu:
[6, 7) 243 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[7, 8) 0 | |
[8, 9) 0 | |
[9, 10) 0 | |
[10, 11) 0 | |
[11, 12) 0 | |
[12, 13) 0 | |
[13, 14) 0 | |
[14, 15) 0 | |
[15, 16) 0 | |
[16, 17) 0 | |
[17, 18) 0 | |
[18, 19) 0 | |
[19, 20) 0 | |
[20, 21) 1 | |
In this case, only a single CPU (6) is really active doing work. Only a single
sample was taken of another CPU (20) running a process. If the workload was
supposed to be making use of multiple CPUs, it isn't, and that can be
investigated (application's configuration, number of threads, CPU binding, etc).
/*
* gethostlatency Trace getaddrinfo/gethostbyname[2] calls.
* For Linux, uses bpftrace and eBPF.
*
* This can be useful for identifying DNS latency, by identifying which
* remote host name lookups were slow, and by how much.
*
* This uses dynamic tracing of user-level functions and registers, and may
 * need modifications to match your software and processor architecture.
*
* USAGE: gethostlatency.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 08-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing getaddr/gethost calls... Hit Ctrl-C to end.\n");
printf("%-9s %-6s %-16s %6s %s\n", "TIME", "PID", "COMM", "LATms",
"HOST");
}
uprobe:/lib/x86_64-linux-gnu/libc.so.6:getaddrinfo,
uprobe:/lib/x86_64-linux-gnu/libc.so.6:gethostbyname,
uprobe:/lib/x86_64-linux-gnu/libc.so.6:gethostbyname2
{
@start[tid] = nsecs;
@name[tid] = arg0;
}
uretprobe:/lib/x86_64-linux-gnu/libc.so.6:getaddrinfo,
uretprobe:/lib/x86_64-linux-gnu/libc.so.6:gethostbyname,
uretprobe:/lib/x86_64-linux-gnu/libc.so.6:gethostbyname2
/@start[tid]/
{
$latms = (nsecs - @start[tid]) / 1000000;
time("%H:%M:%S ");
printf("%-6d %-16s %6d %s\n", pid, comm, $latms, str(@name[tid]));
delete(@start[tid]);
delete(@name[tid]);
}
Demonstrations of gethostlatency, the Linux bpftrace/eBPF version.
This traces host name lookup calls (getaddrinfo(), gethostbyname(), and
gethostbyname2()), and shows the PID and command performing the lookup, the
latency (duration) of the call in milliseconds, and the host string:
# bpftrace gethostlatency.bt
Attaching 7 probes...
Tracing getaddr/gethost calls... Hit Ctrl-C to end.
TIME PID COMM LATms HOST
02:52:05 19105 curl 81 www.netflix.com
02:52:12 19111 curl 17 www.netflix.com
02:52:19 19116 curl 9 www.facebook.com
02:52:23 19118 curl 3 www.facebook.com
In this example, the first call to lookup "www.netflix.com" took 81 ms, and
the second took 17 ms (sounds like some caching).
There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides options to customize the output.
/*
* loads Prints load averages.
* For Linux, uses bpftrace and eBPF.
*
* These are the same load averages printed by "uptime", but to three decimal
* places instead of two (not that it really matters). This is really a
* demonstration of fetching and processing a kernel structure from bpftrace.
*
* USAGE: loads.bt
*
* This is a bpftrace version of a DTraceToolkit tool.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 10-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Reading load averages... Hit Ctrl-C to end.\n");
}
interval:s:1
{
	/*
	 * See fs/proc/loadavg.c and include/linux/sched/loadavg.h for the
	 * following calculations. The avenrun[] values are fixed-point, with
	 * 11 fractional bits (FSHIFT), so 2048 represents 1.0: for example, a
	 * raw value of 4283 prints below as 4283 / 2048 = 2 and
	 * ((4283 & 2047) * 1000) / 2048 = 91, i.e. "2.091".
	 */
$avenrun = kaddr("avenrun");
$load1 = *$avenrun;
$load5 = *($avenrun + 8);
$load15 = *($avenrun + 16);
time("%H:%M:%S ");
printf("load averages: %d.%03d %d.%03d %d.%03d\n",
($load1 / 2048), (($load1 & 2047) * 1000) / 2048,
($load5 / 2048), (($load5 & 2047) * 1000) / 2048,
($load15 / 2048), (($load15 & 2047) * 1000) / 2048
);
}
Demonstrations of loads, the Linux bpftrace/eBPF version.
This is a simple tool that prints the system load averages, to three decimal
places each (not that it really matters), as a demonstration of fetching
kernel structures from bpftrace:
# bpftrace loads.bt
Attaching 2 probes...
Reading load averages... Hit Ctrl-C to end.
21:29:17 load averages: 2.091 2.048 1.947
21:29:18 load averages: 2.091 2.048 1.947
21:29:19 load averages: 2.091 2.048 1.947
21:29:20 load averages: 2.091 2.048 1.947
21:29:21 load averages: 2.164 2.064 1.953
21:29:22 load averages: 2.164 2.064 1.953
21:29:23 load averages: 2.164 2.064 1.953
^C
These are the same load averages printed by uptime:
# uptime
21:29:24 up 2 days, 18:57, 3 users, load average: 2.16, 2.06, 1.95
For more on load averages, see my post:
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
/*
 * pidpersec Count new processes (via fork).
* For Linux, uses bpftrace and eBPF.
*
* Written as a basic example of counting on an event.
*
* USAGE: pidpersec.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 06-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing new processes... Hit Ctrl-C to end.\n");
}
tracepoint:sched:sched_process_fork
{
@ = count();
}
interval:s:1
{
time("%H:%M:%S PIDs/sec: ");
print(@);
clear(@);
}
END
{
clear(@);
}
Demonstrations of pidpersec, the Linux bpftrace/eBPF version.
Tracing new processes:
# pidpersec.bt
Attaching 4 probes...
Tracing new processes... Hit Ctrl-C to end.
22:29:50 PIDs/sec: @: 121
22:29:51 PIDs/sec: @: 120
22:29:52 PIDs/sec: @: 122
22:29:53 PIDs/sec: @: 124
22:29:54 PIDs/sec: @: 123
22:29:55 PIDs/sec: @: 121
22:29:56 PIDs/sec: @: 121
22:29:57 PIDs/sec: @: 121
22:29:58 PIDs/sec: @: 49
22:29:59 PIDs/sec:
22:30:00 PIDs/sec:
22:30:01 PIDs/sec:
22:30:02 PIDs/sec:
^C
The output begins by showing a rate of new processes of over 120 per second.
That then ends at time 22:29:59, and for the next few seconds there are zero
new processes per second.
The following example shows a Linux build launched at 6:33:40, on a 36 CPU
server, with make -j36:
# pidpersec.bt
Attaching 4 probes...
Tracing new processes... Hit Ctrl-C to end.
06:33:38 PIDs/sec:
06:33:39 PIDs/sec:
06:33:40 PIDs/sec: @: 2314
06:33:41 PIDs/sec: @: 2517
06:33:42 PIDs/sec: @: 1345
06:33:43 PIDs/sec: @: 1752
06:33:44 PIDs/sec: @: 1744
06:33:45 PIDs/sec: @: 1549
06:33:46 PIDs/sec: @: 1643
06:33:47 PIDs/sec: @: 1487
06:33:48 PIDs/sec: @: 1534
06:33:49 PIDs/sec: @: 1279
06:33:50 PIDs/sec: @: 1392
06:33:51 PIDs/sec: @: 1556
06:33:52 PIDs/sec: @: 1580
06:33:53 PIDs/sec: @: 1944
A Linux kernel build involves launching many thousands of short-lived processes,
which can be seen in the above output: a rate of over 1,000 processes per
second.
There is another version of this tool in bcc: https://github.com/iovisor/bcc
/*
* vfscount Count VFS calls ("vfs_*").
* For Linux, uses bpftrace and eBPF.
*
* Written as a basic example of counting kernel functions.
*
* USAGE: vfscount.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 06-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing VFS calls... Hit Ctrl-C to end.\n");
}
kprobe:vfs_*
{
@[func] = count();
}
Demonstrations of vfscount, the Linux bpftrace/eBPF version.
Tracing all VFS calls:
# bpftrace vfscount.bt
Attaching 54 probes...
cannot attach kprobe, Invalid argument
Warning: could not attach probe kprobe:vfs_dedupe_get_page.isra.21, skipping.
Tracing VFS calls... Hit Ctrl-C to end.
^C
@[vfs_fsync_range]: 4
@[vfs_readlink]: 14
@[vfs_statfs]: 56
@[vfs_lock_file]: 60
@[vfs_write]: 276
@[vfs_statx]: 328
@[vfs_statx_fd]: 394
@[vfs_open]: 541
@[vfs_getattr]: 595
@[vfs_getattr_nosec]: 597
@[vfs_read]: 1113
While tracing, the vfs_read() call was the most frequent, occurring 1,113 times.
VFS is the Virtual File System: a kernel abstraction for file systems and other
resources that expose a file system interface. Much of VFS maps directly to the
syscall interface. Tracing VFS calls gives you a high level breakdown of the
kernel workload, and starting points for further investigation.
Note that a warning was printed: "Warning: could not attach probe
kprobe:vfs_dedupe_get_page.isra.21": such functions are not currently
instrumentable by bpftrace/kprobes, so a warning is printed to let you know
that they will be missed.
There is another version of this tool in bcc: https://github.com/iovisor/bcc
/*
* vfsstat Count some VFS calls, with per-second summaries.
* For Linux, uses bpftrace and eBPF.
*
* Written as a basic example of counting multiple events and printing a
* per-second summary.
*
* USAGE: vfsstat.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 06-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing key VFS calls... Hit Ctrl-C to end.\n");
}
kprobe:vfs_read,
kprobe:vfs_write,
kprobe:vfs_fsync,
kprobe:vfs_open,
kprobe:vfs_create
{
@[func] = count();
}
interval:s:1
{
time();
print(@);
clear(@);
}
END
{
clear(@);
}
Demonstrations of vfsstat, the Linux bpftrace/eBPF version.
This traces some common VFS calls (see the script for the list) and prints
per-second summaries.
# bpftrace vfsstat.bt
Attaching 8 probes...
Tracing key VFS calls... Hit Ctrl-C to end.
21:30:38
@[vfs_write]: 1274
@[vfs_open]: 8675
@[vfs_read]: 11515
21:30:39
@[vfs_write]: 1155
@[vfs_open]: 8077
@[vfs_read]: 10398
21:30:40
@[vfs_write]: 1222
@[vfs_open]: 8554
@[vfs_read]: 11011
21:30:41
@[vfs_write]: 1230
@[vfs_open]: 8605
@[vfs_read]: 11077
21:30:42
@[vfs_write]: 1229
@[vfs_open]: 8591
@[vfs_read]: 11061
^C
Each second, a timestamp is printed ("HH:MM:SS") followed by common VFS
functions and the number of calls for that second. While tracing, the vfs_read()
kernel function was most frequent, occurring over 10,000 times per second.
There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides command line options.
/*
* xfsdist Summarize XFS operation latency.
* For Linux, uses bpftrace and eBPF.
*
* This traces four common file system calls: read, write, open, and fsync.
* It can be customized to trace more if desired.
*
* USAGE: xfsdist.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 08-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
printf("Tracing XFS operation latency... Hit Ctrl-C to end.\n");
}
kprobe:xfs_file_read_iter,
kprobe:xfs_file_write_iter,
kprobe:xfs_file_open,
kprobe:xfs_file_fsync
{
@start[tid] = nsecs;
@name[tid] = func;
}
kretprobe:xfs_file_read_iter,
kretprobe:xfs_file_write_iter,
kretprobe:xfs_file_open,
kretprobe:xfs_file_fsync
/@start[tid]/
{
@us[@name[tid]] = hist((nsecs - @start[tid]) / 1000);
delete(@start[tid]);
delete(@name[tid]);
}
Demonstrations of xfsdist, the Linux bpftrace/eBPF version.
xfsdist traces XFS reads, writes, opens, and fsyncs, and summarizes their
latency as a power-of-2 histogram. For example:
# xfsdist.bt
Attaching 9 probes...
Tracing XFS operation latency... Hit Ctrl-C to end.
^C
@us[xfs_file_write_iter]:
[0, 1] 0 | |
[2, 4) 0 | |
[4, 8) 0 | |
[8, 16) 1 |@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[16, 32) 2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
@us[xfs_file_read_iter]:
[0, 1] 724 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2, 4) 137 |@@@@@@@@@ |
[4, 8) 143 |@@@@@@@@@@ |
[8, 16) 37 |@@ |
[16, 32) 11 | |
[32, 64) 22 |@ |
[64, 128) 7 | |
[128, 256) 0 | |
[256, 512) 485 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[512, 1K) 149 |@@@@@@@@@@ |
[1K, 2K) 98 |@@@@@@@ |
[2K, 4K) 85 |@@@@@@ |
[4K, 8K) 27 |@ |
[8K, 16K) 29 |@@ |
[16K, 32K) 25 |@ |
[32K, 64K) 1 | |
[64K, 128K) 0 | |
[128K, 256K) 6 | |
@us[xfs_file_open]:
[0, 1] 1819 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2, 4) 272 |@@@@@@@ |
[4, 8) 0 | |
[8, 16) 9 | |
[16, 32) 7 | |
This output shows a bi-modal distribution for read latency, with a faster
mode of 724 reads that took between 0 and 1 microseconds, and a slower
mode of over 485 reads that took between 256 and 512 microseconds. It's
likely that the faster mode was a hit from the in-memory file system cache,
and the slower mode is a read from a storage device (disk).
This "latency" is measured from when the operation was issued from the VFS
interface to the file system, to when it completed. This spans everything:
block device I/O (disk I/O), file system CPU cycles, file system locks, run
queue latency, etc. This is a better measure of the latency suffered by
applications reading from the file system than measuring this down at the
block device interface.
Note that this only traces the common file system operations previously
listed: other file system operations (eg, inode operations including
getattr()) are not traced.
There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides command line options to customize the output.