    net: filter: rework/optimize internal BPF interpreter's instruction set (commit bd4cf0ed)
    Authored by Alexei Starovoitov
    This patch replaces/reworks the kernel-internal BPF interpreter with
    an optimized BPF instruction set format that is modelled more closely
    on native instruction sets and is designed to be JITed with a
    one-to-one mapping. Thus, the new interpreter is noticeably faster
    than the current implementation of sk_run_filter(), mainly for two
    reasons:
    
    1. Fall-through jumps:
    
      Classic BPF jump instructions are forced to take either the
      'true' or the 'false' branch, which causes a branch-miss
      penalty. The new BPF jump instructions have only one branch
      target and fall through otherwise, which fits the CPU branch
      predictor logic better. `perf stat` shows a drastic difference
      in branch-misses between the old and the new code.
    
    2. Jump-threaded implementation of the interpreter vs. a switch
       statement:

      Instead of a single table-jump at the top of a 'switch' statement,
      gcc will now generate multiple table-jump instructions, which
      helps the CPU branch predictor logic. A standalone sketch of both
      points follows below.
    
    Note that the verification of filters is still being done through
    sk_chk_filter() in the classic BPF format, so filters from user or
    kernel space are verified in the same way as we do now, and the same
    restrictions/constraints hold as well.
    
    We reuse the current BPF JIT compilers in a way that this upgrade
    is fine even as is, but it nevertheless allows for a successive
    migration of the BPF JIT compilers to the new format.
    
    The internal instruction set migration is done after probing for
    JIT compilation: in case a JIT compiler is able to create a native
    opcode image, we are going to use that; in all other cases, we do
    a follow-up migration of the BPF program's instruction set so that
    it can be run transparently in the new interpreter.
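
    As a rough standalone illustration of that ordering (not the kernel
    code; prepare_filter(), jit_probe() and migrate_to_internal() are
    made-up stand-ins), the JIT is probed first and only a failed probe
    triggers the migration to the internal format:

      #include <stdbool.h>
      #include <stdio.h>

      struct filter {
              bool jited;             /* did the probe produce a native image? */
              const char *runner;
      };

      /* stand-in for the arch BPF JIT probe (e.g. bpf_jit_compile()) */
      static void jit_probe(struct filter *fp, bool jit_available)
      {
              if (jit_available) {
                      fp->jited = true;
                      fp->runner = "native JIT image";
              }
      }

      /* stand-in for migrating classic insns to the internal format */
      static void migrate_to_internal(struct filter *fp)
      {
              fp->runner = "new interpreter on migrated insns";
      }

      static void prepare_filter(struct filter *fp, bool jit_available)
      {
              /* classic-format verification (sk_chk_filter()) has
               * already happened before we get here */
              jit_probe(fp, jit_available);
              if (!fp->jited)
                      migrate_to_internal(fp);
      }

      int main(void)
      {
              struct filter with_jit = { 0 }, without_jit = { 0 };

              prepare_filter(&with_jit, true);
              prepare_filter(&without_jit, false);
              printf("%s / %s\n", with_jit.runner, without_jit.runner);
              return 0;
      }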
    
    In short, the *internal* format extends BPF in the following way
    (more details can be taken from the appended documentation; a small
    sketch of the resulting instruction encoding follows the list):
    
      - Number of registers increases from 2 to 10
      - Register width increases from 32-bit to 64-bit
      - Conditional jt/jf targets replaced with jt/fall-through
      - Adds signed > and >= insns
      - 16 4-byte stack slots for register spill-fill replaced
        with up to 512 bytes of multi-use stack space
      - Introduction of bpf_call insn and register passing convention
        for zero overhead calls from/to other kernel functions
      - Adds arithmetic right shift and endianness conversion insns
      - Adds atomic_add insn
      - Old tax/txa insns are replaced with 'mov dst,src' insn
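
    A small sketch of what such a fixed-size, 64-bit-register encoding
    looks like (field names are illustrative and may differ from the
    in-kernel struct; each insn is 8 bytes, which keeps the one-to-one
    JIT mapping simple):

      #include <stdint.h>
      #include <stdio.h>

      struct internal_insn {
              uint8_t code;           /* opcode                               */
              uint8_t dst_reg:4;      /* one of 10 registers, 64 bits wide    */
              uint8_t src_reg:4;
              int16_t off;            /* signed offset: single jump target    */
                                      /* (fall-through when not taken) or a   */
                                      /* memory/stack displacement            */
              int32_t imm;            /* signed immediate constant            */
      };

      int main(void)
      {
              printf("insn size: %zu bytes\n", sizeof(struct internal_insn));
              return 0;
      }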
    
    Performance of two BPF filters generated by libpcap and bpf_asm,
    respectively, was measured on x86_64, i386 and arm32 (other libpcap
    programs show similar performance differences):
    
    fprog #1 is taken from Documentation/networking/filter.txt:
    tcpdump -i eth0 port 22 -dd
    
    fprog #2 is taken from 'man tcpdump':
    tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
       ((tcp[12]&0xf0)>>2)) != 0)' -dd
    
    Raw performance data from the BPF micro-benchmark: SK_RUN_FILTER on
    the same SKB (cache-hit) or on 10k different SKBs (cache-miss);
    time in ns per call, smaller is better:
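
    The numbers were gathered with an in-kernel harness around
    SK_RUN_FILTER. Purely to illustrate the cache-hit vs. cache-miss
    methodology, a userspace analog (with a dummy filter function in
    place of the BPF program) could look like this:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define PKT_LEN  256
      #define NR_LOOPS 1000000

      /* dummy stand-in for running the BPF program over one packet */
      static unsigned int filter_stub(const unsigned char *pkt)
      {
              return pkt[12] == 0x08 && pkt[13] == 0x00;
      }

      static double ns_per_call(int nr_bufs)
      {
              unsigned char *bufs = calloc(nr_bufs, PKT_LEN);
              volatile unsigned int sink = 0;
              struct timespec t0, t1;
              int i;

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (i = 0; i < NR_LOOPS; i++)
                      /* nr_bufs == 1: same buffer every call (cache-hit);
                       * nr_bufs == 10000: cycle through distinct buffers
                       * (cache-miss) */
                      sink += filter_stub(bufs + (i % nr_bufs) * PKT_LEN);
              clock_gettime(CLOCK_MONOTONIC, &t1);

              free(bufs);
              return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                      (t1.tv_nsec - t0.tv_nsec)) / NR_LOOPS;
      }

      int main(void)
      {
              printf("cache-hit:  %.1f ns/call\n", ns_per_call(1));
              printf("cache-miss: %.1f ns/call\n", ns_per_call(10000));
              return 0;
      }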
    
    --x86_64--
             fprog #1  fprog #1   fprog #2  fprog #2
             cache-hit cache-miss cache-hit cache-miss
    old BPF      90       101        192       202
    new BPF      31        71         47        97
    old BPF jit  12        34         17        44
    new BPF jit TBD
    
    --i386--
             fprog #1  fprog #1   fprog #2  fprog #2
             cache-hit cache-miss cache-hit cache-miss
    old BPF     107       136        227       252
    new BPF      40       119         69       172
    
    --arm32--
             fprog #1  fprog #1   fprog #2  fprog #2
             cache-hit cache-miss cache-hit cache-miss
    old BPF     202       300        475       540
    new BPF     180       270        330       470
    old BPF jit  26       182         37       202
    new BPF jit TBD
    
    Thus, without changing any userland BPF filters, applications on
    top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
    classifier, netfilter's xt_bpf, team driver's load-balancing mode,
    and many more will have better interpreter filtering performance.
    
    While we are replacing the internal BPF interpreter, we also need
    to convert seccomp BPF in the same step to make use of the new
    internal structure, since seccomp makes use of lower-level API
    details without being decoupled through higher-level calls like
    sk_unattached_filter_{create,destroy}(), for example.
    
    Just as with normal socket filtering, seccomp BPF also experiences
    a time-to-verdict speedup:
    
    libseccomp's 05-sim-long_jumps.c was used as a micro-benchmark:
    
      seccomp_rule_add_exact(ctx,...
      seccomp_rule_add_exact(ctx,...
    
      rc = seccomp_load(ctx);
    
      for (i = 0; i < 10000000; i++)
         syscall(199, 100);
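
    For reference, a minimal self-contained version of such a benchmark
    using libseccomp's public API could look as follows. The rule set
    here is only a placeholder (the actual 05-sim-long_jumps.c rules and
    the 2 vs. 200 rule counts differ), and syscall 199 shows up as
    sys_getuid in the arm32 profiles below:

      /* build with: gcc -O2 bench.c -lseccomp */
      #define _GNU_SOURCE
      #include <seccomp.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      int main(void)
      {
              scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
              int i, rc;

              if (!ctx)
                      return 1;

              /* placeholder rule; the real test adds many more to
               * build a long, branchy filter */
              rc = seccomp_rule_add_exact(ctx, SCMP_ACT_KILL,
                                          SCMP_SYS(acct), 0);
              if (rc == 0)
                      rc = seccomp_load(ctx);

              /* time-to-verdict: every syscall walks the filter */
              for (i = 0; i < 10000000; i++)
                      syscall(199, 100);

              seccomp_release(ctx);
              return rc;
      }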
    
    'short filter' has 2 rules
    'large filter' has 200 rules
    
    'short filter' performance is slightly better on x86_64/i386/arm32
    'large filter' is much faster on x86_64 and i386 and shows no
                   difference on arm32
    
    --x86_64-- short filter
    old BPF: 2.7 sec
     39.12%  bench  libc-2.15.so       [.] syscall
      8.10%  bench  [kernel.kallsyms]  [k] sk_run_filter
      6.31%  bench  [kernel.kallsyms]  [k] system_call
      5.59%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
      4.37%  bench  [kernel.kallsyms]  [k] trace_hardirqs_off_caller
      3.70%  bench  [kernel.kallsyms]  [k] __secure_computing
      3.67%  bench  [kernel.kallsyms]  [k] lock_is_held
      3.03%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
    new BPF: 2.58 sec
     42.05%  bench  libc-2.15.so       [.] syscall
      6.91%  bench  [kernel.kallsyms]  [k] system_call
      6.25%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
      6.07%  bench  [kernel.kallsyms]  [k] __secure_computing
      5.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
    
    --arm32-- short filter
    old BPF: 4.0 sec
     39.92%  bench  [kernel.kallsyms]  [k] vector_swi
     16.60%  bench  [kernel.kallsyms]  [k] sk_run_filter
     14.66%  bench  libc-2.17.so       [.] syscall
      5.42%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
      5.10%  bench  [kernel.kallsyms]  [k] __secure_computing
    new BPF: 3.7 sec
     35.93%  bench  [kernel.kallsyms]  [k] vector_swi
     21.89%  bench  libc-2.17.so       [.] syscall
     13.45%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
      6.25%  bench  [kernel.kallsyms]  [k] __secure_computing
      3.96%  bench  [kernel.kallsyms]  [k] syscall_trace_exit
    
    --x86_64-- large filter
    old BPF: 8.6 seconds
        73.38%    bench  [kernel.kallsyms]  [k] sk_run_filter
        10.70%    bench  libc-2.15.so       [.] syscall
         5.09%    bench  [kernel.kallsyms]  [k] seccomp_bpf_load
         1.97%    bench  [kernel.kallsyms]  [k] system_call
    new BPF: 5.7 seconds
        66.20%    bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
        16.75%    bench  libc-2.15.so       [.] syscall
         3.31%    bench  [kernel.kallsyms]  [k] system_call
         2.88%    bench  [kernel.kallsyms]  [k] __secure_computing
    
    --i386-- large filter
    old BPF: 5.4 sec
    new BPF: 3.8 sec
    
    --arm32-- large filter
    old BPF: 13.5 sec
     73.88%  bench  [kernel.kallsyms]  [k] sk_run_filter
     10.29%  bench  [kernel.kallsyms]  [k] vector_swi
      6.46%  bench  libc-2.17.so       [.] syscall
      2.94%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
      1.19%  bench  [kernel.kallsyms]  [k] __secure_computing
      0.87%  bench  [kernel.kallsyms]  [k] sys_getuid
    new BPF: 13.5 sec
     76.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
     10.98%  bench  [kernel.kallsyms]  [k] vector_swi
      5.87%  bench  libc-2.17.so       [.] syscall
      1.77%  bench  [kernel.kallsyms]  [k] __secure_computing
      0.93%  bench  [kernel.kallsyms]  [k] sys_getuid
    
    BPF filters generated by seccomp are very branchy, so the new
    internal BPF's performance is better than the old one's. Performance
    gains will be even higher once a BPF JIT for the new structure is
    committed, which is planned as future work (as successive JIT
    migrations).
    
    BPF has also been stress-tested with trinity's BPF fuzzer.
    
    Joint work with Daniel Borkmann.
    Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
    Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
    Cc: Hagen Paul Pfeifer <hagen@jauu.net>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Paul Moore <pmoore@redhat.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: H. Peter Anvin <hpa@linux.intel.com>
    Cc: linux-kernel@vger.kernel.org
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>