  1. 25 Aug, 2020 1 commit
  2. 20 Aug, 2020 1 commit
    • bpf: Add kernel module with user mode driver that populates bpffs. · d71fa5c9
      Alexei Starovoitov authored
      Add kernel module with user mode driver that populates bpffs with
      BPF iterators.
      
      $ mount bpffs /my/bpffs/ -t bpf
      $ ls -la /my/bpffs/
      total 4
      drwxrwxrwt  2 root root    0 Jul  2 00:27 .
      drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
      -rw-------  1 root root    0 Jul  2 00:27 maps.debug
      -rw-------  1 root root    0 Jul  2 00:27 progs.debug
      
      The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
      maps, load two BPF programs, attach them to BPF iterators, and finally send two
      bpf_link IDs back to the kernel.
      The kernel will pin two bpf_links into newly mounted bpffs instance under
      names "progs.debug" and "maps.debug". These two files become human readable.
      
      $ cat /my/bpffs/progs.debug
        id name            attached
        11 dump_bpf_map    bpf_iter_bpf_map
        12 dump_bpf_prog   bpf_iter_bpf_prog
        27 test_pkt_access
        32 test_main       test_pkt_access test_pkt_access
        33 test_subprog1   test_pkt_access_subprog1 test_pkt_access
        34 test_subprog2   test_pkt_access_subprog2 test_pkt_access
        35 test_subprog3   test_pkt_access_subprog3 test_pkt_access
        36 new_get_skb_len get_skb_len test_pkt_access
        37 new_get_skb_ifindex get_skb_ifindex test_pkt_access
        38 new_get_constant get_constant test_pkt_access
      
      The BPF program dump_bpf_prog() in iterators.bpf.c prints this data about
      all BPF programs currently loaded in the system. This information is unstable
      and will change from kernel to kernel, as the ".debug" suffix conveys.
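
      For context, a minimal sketch of what such an iterator program can look like
      is shown below. This is not the exact iterators.bpf.c source; it assumes a
      vmlinux.h plus libbpf's bpf_helpers.h/bpf_tracing.h, and it only prints the
      id and name columns:

        /* Hedged sketch: an "iter/bpf_prog" program in the spirit of
         * iterators.bpf.c; BPF_SEQ_PRINTF is assumed to come from libbpf's
         * bpf_tracing.h. */
        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char LICENSE[] SEC("license") = "GPL";

        SEC("iter/bpf_prog")
        int dump_bpf_prog(struct bpf_iter__bpf_prog *ctx)
        {
                struct seq_file *seq = ctx->meta->seq;
                struct bpf_prog *prog = ctx->prog;

                if (!prog)      /* end of iteration */
                        return 0;
                /* One line per loaded program: id and name. */
                BPF_SEQ_PRINTF(seq, "%u %s\n", prog->aux->id, prog->aux->name);
                return 0;
        }
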
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200819042759.51280-4-alexei.starovoitov@gmail.com
  3. 26 Jul, 2020 1 commit
  4. 01 Jun, 2020 2 commits
    • flow_dissector: Move out netns_bpf prog callbacks · b27f7bb5
      Jakub Sitnicki authored
      Move functions to manage BPF programs attached to netns that are not
      specific to flow dissector to a dedicated module named
      bpf/net_namespace.c.
      
      The set of functions will grow with the addition of bpf_link support for
      netns attached programs. This patch prepares ground by creating a place
      for it.
      
      This is a code move with no functional changes intended.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200531082846.2117903-4-jakub@cloudflare.com
    • bpf: Implement BPF ring buffer and verifier support for it · 457f4436
      Andrii Nakryiko authored
      This commit adds a new MPSC ring buffer implementation to the BPF ecosystem,
      which allows multiple CPUs to submit data to a single shared ring buffer. On
      the consumption side, only a single consumer is assumed.
      
      Motivation
      ----------
      There are two distinct motivations for this work that are not satisfied by
      the existing perf buffer and that prompted the creation of a new ring buffer
      implementation:
        - more efficient memory utilization by sharing ring buffer across CPUs;
        - preserving ordering of events that happen sequentially in time, even
        across multiple CPUs (e.g., fork/exec/exit events for a task).
      
      These two problems are independent, but perf buffer fails to satisfy both.
      Both are a result of the choice to have a per-CPU perf ring buffer. Both can
      also be solved by having an MPSC implementation of the ring buffer. The
      ordering problem could technically be solved for perf buffer with some
      in-kernel counting, but given that the first one requires an MPSC buffer, the
      same solution would solve the second problem automatically.
      
      Semantics and APIs
      ------------------
      A single ring buffer is presented to BPF programs as an instance of a BPF map
      of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered, but
      ultimately rejected.
      
      One way would be to, similarly to BPF_MAP_TYPE_PERF_EVENT_ARRAY, make
      BPF_MAP_TYPE_RINGBUF represent an array of ring buffers, but not enforce the
      "same CPU only" rule. This would be a more familiar interface, compatible with
      existing perf buffer use in BPF, but it would fail if an application needed
      more advanced logic to look up a ring buffer by an arbitrary key.
      HASH_OF_MAPS addresses this with the current approach. Additionally, given
      the performance of BPF ringbuf, many use cases would just opt into a simple
      single ring buffer shared among all CPUs, for which the current approach
      would be overkill.
      
      Another approach could introduce a new concept, alongside the BPF map, to
      represent a generic "container" object that doesn't necessarily have a
      key/value interface with lookup/update/delete operations. This approach would
      add a lot of extra infrastructure that would have to be built for
      observability and verifier support. It would also add another concept that
      BPF developers would have to familiarize themselves with, new syntax in
      libbpf, etc., while providing no real additional benefits over using a map.
      BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but
      neither do a few other map types (e.g., queue and stack; array doesn't
      support delete, etc.).
      
      The chosen approach has the advantage of reusing existing BPF map
      infrastructure (introspection APIs in the kernel, libbpf support, etc.), being
      a familiar concept (no need to teach users a new type of object in a BPF
      program), and utilizing existing tooling (bpftool). For the common scenario of
      using a single ring buffer for all CPUs, it's as simple and straightforward as
      it would be with a dedicated "container" object. On the other hand, by being
      a map, it can be combined with ARRAY_OF_MAPS and HASH_OF_MAPS map-in-maps to
      implement a wide variety of topologies, from one ring buffer per CPU
      (e.g., as a replacement for perf buffer use cases), to complicated
      application-level hashing/sharding of ring buffers (e.g., having a small pool
      of ring buffers with the task's hashed tgid as a lookup key, to preserve
      order but reduce contention).
      
      Key and value sizes are enforced to be zero. max_entries is used to specify
      the size of ring buffer and has to be a power of 2 value.
      
      There are a bunch of similarities between perf buffer
      (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and new BPF ring buffer semantics:
        - variable-length records;
        - if there is no more space left in ring buffer, reservation fails, no
          blocking;
        - memory-mappable data area for user-space applications for ease of
          consumption and high performance;
        - epoll notifications for new incoming data;
        - but still the ability to do busy polling for new data to achieve the
          lowest latency, if necessary.
      
      BPF ringbuf provides two sets of APIs to BPF programs:
        - bpf_ringbuf_output() allows *copying* data from one place to a ring
          buffer, similarly to bpf_perf_event_output();
        - bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard() APIs
          split the whole process into two steps. First, a fixed amount of space is
          reserved. If successful, a pointer to data inside the ring buffer data
          area is returned, which BPF programs can use similarly to data inside
          array/hash maps. Once ready, this piece of memory is either committed or
          discarded. Discard is similar to commit, but makes the consumer ignore
          the record.
      
      bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy,
      because the record has to be prepared in some other place first. But it allows
      submitting records of a length that isn't known to the verifier beforehand. It
      also closely matches bpf_perf_event_output(), so it will simplify migration
      significantly.
      
      bpf_ringbuf_reserve() avoids the extra copy by returning a memory pointer
      directly into ring buffer memory. In a lot of cases records are larger than
      the BPF stack space allows, so many programs have to use an extra per-CPU
      array as a temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids
      this need completely. But in exchange, it only allows a known, constant size
      of memory to be reserved, so that the verifier can verify that the BPF program
      can't access memory outside its reserved record space. bpf_ringbuf_output(),
      while slightly slower due to the extra memory copy, covers some use cases that
      are not suitable for bpf_ringbuf_reserve().
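
      As a minimal sketch (assuming the libbpf BTF-style map definition, plus an
      illustrative event layout and attach point; the commit step is performed by
      the bpf_ringbuf_submit() helper), the two API styles look roughly like this
      on the BPF side:

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_RINGBUF);
                __uint(max_entries, 1 << 20);   /* power of 2; key/value sizes are 0 */
        } rb SEC(".maps");

        struct event { __u64 pid; __u64 ts; };

        SEC("tracepoint/sched/sched_switch")
        int via_reserve(void *ctx)
        {
                struct event *e;

                /* reserve/submit: zero-copy, size known to the verifier */
                e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
                if (!e)
                        return 0;       /* buffer full: reservation fails, no blocking */
                e->pid = bpf_get_current_pid_tgid() >> 32;
                e->ts = bpf_ktime_get_ns();
                if (!e->ts)             /* contrived condition, just to show discard */
                        bpf_ringbuf_discard(e, 0);
                else
                        bpf_ringbuf_submit(e, 0);
                return 0;
        }

        SEC("tracepoint/sched/sched_wakeup")
        int via_output(void *ctx)
        {
                /* output: one extra copy, record prepared elsewhere first */
                struct event e = {
                        .pid = bpf_get_current_pid_tgid() >> 32,
                        .ts  = bpf_ktime_get_ns(),
                };

                bpf_ringbuf_output(&rb, &e, sizeof(e), 0);
                return 0;
        }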
      
      The difference between commit and discard is very small. Discard just marks
      a record as discarded, and such records are supposed to be ignored by consumer
      code. Discard is useful for some advanced use-cases, such as ensuring
      all-or-nothing multi-record submission, or emulating temporary malloc()/free()
      within single BPF program invocation.
      
      Each reserved record is tracked by verifier through existing
      reference-tracking logic, similar to socket ref-tracking. It is thus
      impossible to reserve a record, but forget to submit (or discard) it.
      
      The bpf_ringbuf_query() helper allows querying various properties of the ring
      buffer. Currently 4 are supported:
        - BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring buffer;
        - BPF_RB_RING_SIZE returns the size of the ring buffer;
        - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of the
          consumer/producer, respectively.
      The returned values are momentary snapshots of the ring buffer state and could
      be off by the time the helper returns, so this should be used only for
      debugging/reporting reasons or for implementing various heuristics that take
      into account the highly changeable nature of some of those characteristics.
      
      One such heuristic might involve more fine-grained control over poll/epoll
      notifications about new data availability in the ring buffer. Together with
      the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
      helpers, it allows the BPF program a high degree of control and, e.g., more
      efficient batched notifications. The default self-balancing strategy, though,
      should be adequate for most applications and already works reliably and
      efficiently.
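
      A sketch of such a heuristic (reusing the rb map and event struct from the
      sketch above; the 64 KB threshold is an arbitrary, illustrative choice):

        static __always_inline void submit_batched(struct event *e)
        {
                /* How much data the consumer has not seen yet (momentary value). */
                __u64 avail = bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA);

                if (avail > 64 * 1024)
                        bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP); /* flush consumer now */
                else
                        bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);    /* defer notification */
        }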
      
      Design and implementation
      -------------------------
      This reserve/commit schema allows a natural way for multiple producers, either
      on different CPUs or even on the same CPU/in the same BPF program, to reserve
      independent records and work with them without blocking other producers. This
      means that if a BPF program is interrupted by another BPF program sharing the
      same ring buffer, they will both get a record reserved (provided there is
      enough space left) and can work with it and submit it independently. This
      applies to NMI context as well, except that due to using a spinlock during
      reservation, in NMI context bpf_ringbuf_reserve() might fail to get the lock,
      in which case the reservation will fail even if the ring buffer is not full.
      
      The ring buffer itself internally is implemented as a power-of-2 sized
      circular buffer, with two logical and ever-increasing counters (which might
      wrap around on 32-bit architectures, that's not a problem):
        - consumer counter shows up to which logical position consumer consumed the
          data;
        - producer counter denotes amount of data reserved by all producers.
      
      Each time a record is reserved, the producer that "owns" the record will
      successfully advance the producer counter. At that point, the data is still
      not yet ready to be consumed, though. Each record has an 8-byte header, which
      contains the length of the reserved record, as well as two extra bits: a busy
      bit to denote that the record is still being worked on, and a discard bit,
      which might be set at commit time if the record is discarded. In the latter
      case, the consumer is supposed to skip the record and move on to the next one.
      The record header also encodes the record's relative offset from the beginning
      of the ring buffer data area (in pages). This allows
      bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only the pointer to the
      record itself, without also requiring a pointer to the ring buffer itself. The
      ring buffer memory location will be restored from the record metadata header.
      This significantly simplifies the verifier, as well as improving API
      usability.
      
      Producer counter increments are serialized under a spinlock, so there is
      a strict ordering between reservations. Commits, on the other hand, are
      completely lockless and independent. All records become available to the
      consumer in the order of reservations, but only after all previous records
      were already committed. It is thus possible for slow producers to temporarily
      hold back records that were reserved later.
      
      Reservation/commit/consumer protocol is verified by litmus tests in
      Documentation/litmus-test/bpf-rb.
      
      One interesting implementation bit that significantly simplifies (and thus
      also speeds up) the implementation of both producers and consumers is that the
      data area is mapped twice, contiguously back-to-back, in virtual memory. This
      makes it possible to avoid any special measures for samples that have to wrap
      around at the end of the circular buffer data area, because the next page
      after the last data page is the first data page again, and thus the sample
      will still appear completely contiguous in virtual memory. See the comment and
      a simple ASCII diagram showing this visually in bpf_ringbuf_area_alloc().
      
      Another feature that distinguishes BPF ringbuf from the perf ring buffer is
      its self-pacing notification of new data availability.
      The bpf_ringbuf_commit() implementation will send a notification of a new
      record being available after commit only if the consumer has already caught up
      right up to the record being committed. If not, the consumer still has to
      catch up and thus will see the new data anyway, without needing an extra poll
      notification.
      Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show that
      this achieves very high throughput without having to resort to tricks like
      "notify only every Nth sample", which are necessary with perf buffer. For
      extreme cases, when a BPF program wants more manual control of notifications,
      the commit/discard/output helpers accept BPF_RB_NO_WAKEUP and
      BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of data
      availability, but require extra caution and diligence in using this API.
      
      Comparison to alternatives
      --------------------------
      Before implementing the BPF ring buffer from scratch, existing alternatives in
      the kernel were evaluated, but they didn't seem to meet the needs. They
      largely fell into a few categories:
        - per-CPU buffers (perf, ftrace, etc), which don't satisfy two motivations
          outlined above (ordering and memory consumption);
        - linked list-based implementations; while some were multi-producer designs,
          consuming these from user-space would be very complicated and most
          probably not performant; memory-mapping contiguous piece of memory is
          simpler and more performant for user-space consumers;
        - io_uring is SPSC, but also requires fixed-sized elements. Naively turning
          SPSC queue into MPSC w/ lock would have subpar performance compared to
          locked reserve + lockless commit, as with BPF ring buffer. Fixed sized
          elements would be too limiting for BPF programs, given existing BPF
          programs heavily rely on variable-sized perf buffer already;
        - specialized implementations (like a new printk ring buffer, [0]) with lots
          of printk-specific limitations and implications, that didn't seem to fit
          well for intended use with BPF programs.
      
        [0] https://lwn.net/Articles/779550/
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  5. 22 May, 2020 1 commit
  6. 10 May, 2020 3 commits
  7. 29 Mar, 2020 1 commit
  8. 09 Jan, 2020 1 commit
    • bpf: Introduce BPF_PROG_TYPE_STRUCT_OPS · 27ae7997
      Martin KaFai Lau authored
      This patch allows the kernel's struct ops (i.e. func ptr) to be
      implemented in BPF.  The first use case in this series is the
      "struct tcp_congestion_ops", which will be introduced in a
      later patch.
      
      This patch introduces a new prog type BPF_PROG_TYPE_STRUCT_OPS.
      The BPF_PROG_TYPE_STRUCT_OPS prog is verified against a particular
      func ptr of a kernel struct.  The attr->attach_btf_id is the btf id
      of a kernel struct.  The attr->expected_attach_type is the member
      "index" of that kernel struct.  The first member of a struct starts
      with member index 0.  That will avoid ambiguity when a kernel struct
      has multiple func ptrs with the same func signature.
      
      For example, a BPF_PROG_TYPE_STRUCT_OPS prog is written
      to implement the "init" func ptr of the "struct tcp_congestion_ops".
      The attr->attach_btf_id is the btf id of the "struct tcp_congestion_ops"
      of the _running_ kernel.  The attr->expected_attach_type is 3.
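
      With today's libbpf conventions (which postdate this patch and also rely on
      BPF_MAP_TYPE_STRUCT_OPS from the next patch), such a program is typically
      written roughly like the following sketch; a real congestion control would
      also have to fill in the other mandatory callbacks:

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char LICENSE[] SEC("license") = "GPL";

        SEC("struct_ops/sketch_init")
        void BPF_PROG(sketch_init, struct sock *sk)
        {
                /* Runs in place of the kernel's tcp_congestion_ops->init(). */
        }

        SEC(".struct_ops")
        struct tcp_congestion_ops sketch_cc = {
                .init = (void *)sketch_init,
                .name = "bpf_cc_sketch",        /* limited to TCP_CA_NAME_MAX */
        };
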
      
      The ctx of BPF_PROG_TYPE_STRUCT_OPS is an array of u64 args saved
      by arch_prepare_bpf_trampoline that will be done in the next
      patch when introducing BPF_MAP_TYPE_STRUCT_OPS.
      
      "struct bpf_struct_ops" is introduced as a common interface for the kernel
      struct that supports BPF_PROG_TYPE_STRUCT_OPS prog.  The supporting kernel
      struct will need to implement an instance of the "struct bpf_struct_ops".
      
      The supporting kernel struct also needs to implement a bpf_verifier_ops.
      During BPF_PROG_LOAD, bpf_struct_ops_find() will find the right
      bpf_verifier_ops by searching the attr->attach_btf_id.
      
      A new "btf_struct_access" is also added to the bpf_verifier_ops such
      that the supporting kernel struct can optionally provide its own specific
      check on accessing the func arg (e.g. provide limited write access).
      
      After btf_vmlinux is parsed, the new bpf_struct_ops_init() is called
      to initialize some values (e.g. the btf id of the supporting kernel
      struct) and it can only be done once the btf_vmlinux is available.
      
      The R0 check at BPF_EXIT is excluded for the BPF_PROG_TYPE_STRUCT_OPS prog
      if the return type of the prog->aux->attach_func_proto is "void".
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/20200109003503.3855825-1-kafai@fb.com
  9. 13 Dec, 2019 1 commit
    • bpf: Introduce BPF dispatcher · 75ccbef6
      Björn Töpel authored
      The BPF dispatcher is a multi-way branch code generator, mainly
      targeted at XDP programs. When an XDP program is executed via
      bpf_prog_run_xdp(), it is invoked via an indirect call. The indirect
      call has a substantial performance impact when retpolines are
      enabled. The dispatcher transforms indirect calls into direct calls, and
      therefore avoids the retpoline. The dispatcher is generated using the
      BPF JIT, and relies on text poking provided by bpf_arch_text_poke().
      
      The dispatcher hijacks a trampoline function via the trampoline's __fentry__
      nop. One dispatcher instance currently supports up to 64
      dispatch points. A user creates a dispatcher with its corresponding
      trampoline with the DEFINE_BPF_DISPATCHER macro.
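
      A hedged sketch of the intended usage from the kernel side (macro and helper
      names are from this series; exact signatures have shifted slightly in later
      kernels, and the subsystem name is illustrative):

        #include <linux/bpf.h>
        #include <linux/filter.h>

        DEFINE_BPF_DISPATCHER(my_subsys)        /* emits the dispatcher + trampoline */

        static unsigned int run_prog(const struct bpf_prog *prog, const void *ctx)
        {
                /* Direct call through the generated trampoline instead of a
                 * retpolined indirect call through prog->bpf_func. */
                return BPF_DISPATCHER_FUNC(my_subsys)(ctx, prog->insnsi, prog->bpf_func);
        }

        static void my_subsys_swap_prog(struct bpf_prog *from, struct bpf_prog *to)
        {
                /* Patch the dispatch table when a program is attached/detached. */
                bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(my_subsys), from, to);
        }
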
      Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20191213175112.30208-3-bjorn.topel@gmail.com
  10. 15 Nov, 2019 1 commit
    • bpf: Introduce BPF trampoline · fec56f58
      Alexei Starovoitov authored
      Introduce the BPF trampoline concept to allow kernel code to call into BPF
      programs with practically zero overhead.  The trampoline generation logic is
      architecture dependent.  It converts the native calling convention into the
      BPF calling convention. The BPF ISA is 64-bit (even on 32-bit architectures).
      The registers R1 to R5 are used to pass arguments into BPF functions. The main
      BPF program accepts only a single argument, "ctx", in R1, whereas the CPU's
      native calling convention is different. x86-64 passes the first 6 arguments in
      registers and the rest on the stack. x86-32 passes the first 3 arguments in
      registers. sparc64 passes the first 6 in registers. And so on.
      
      Trampolines between BPF and the kernel already exist.  The BPF_CALL_x macros
      in include/linux/filter.h statically compile trampolines from BPF into kernel
      helpers. They convert up to five u64 arguments into kernel C pointers and
      integers. On 64-bit architectures these BPF-to-kernel trampolines are nops. On
      32-bit architectures they're meaningful.
      
      The opposite job, kernel-to-BPF trampolines, is done by the CAST_TO_U64 macros
      and __bpf_trace_##call() shim functions in include/trace/bpf_probe.h. They
      convert kernel function arguments into an array of u64s that the BPF program
      consumes via the R1=ctx pointer.
      
      This patch set is doing the same job as the __bpf_trace_##call() static
      trampolines, but dynamically for any kernel function. There are ~22k global
      kernel functions that are attachable via a nop at function entry. The function
      arguments and types are described in BTF.  The job of the
      btf_distill_func_proto() function is to extract useful information from BTF
      into a "function model" that architecture-dependent trampoline generators will
      use to generate assembly code to cast kernel function arguments into an array
      of u64s.  For example, the kernel function eth_type_trans has two pointers.
      They will be cast to u64 and stored into the stack of the generated
      trampoline. The pointer to that stack space will be passed into the BPF
      program in R1. On x86-64 such a generated trampoline will consume 16 bytes of
      stack and perform two stores of %rdi and %rsi into the stack. The verifier
      will make sure that only two u64s are accessed read-only by the BPF program.
      The verifier will also recognize the precise type of the pointers being
      accessed and will not allow typecasting of the pointer to a different type
      within the BPF program.
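
      For illustration, this is roughly how the resulting fentry/fexit programs are
      written today with libbpf (BPF_PROG unpacks the u64 argument array; the
      section names and helpers shown here postdate this patch):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char LICENSE[] SEC("license") = "GPL";

        SEC("fentry/eth_type_trans")
        int BPF_PROG(on_entry, struct sk_buff *skb, struct net_device *dev)
        {
                /* Raw arguments with their precise BTF types, read-only. */
                bpf_printk("eth_type_trans on ifindex %d", dev->ifindex);
                return 0;
        }

        SEC("fexit/eth_type_trans")
        int BPF_PROG(on_exit, struct sk_buff *skb, struct net_device *dev,
                     __be16 ret)
        {
                /* fexit programs additionally see the return value. */
                bpf_printk("eth_type_trans returned 0x%x", ret);
                return 0;
        }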
      
      The tracing use case in the datacenter demonstrated that certain key kernel
      functions (like tcp_retransmit_skb) have 2 or more kprobes that are always
      active.  Other functions have both a kprobe and a kretprobe.  So it is
      essential to keep both kernel code and BPF programs executing at maximum
      speed. Hence the generated BPF trampoline is re-generated every time a program
      is attached or detached, to maintain maximum performance.
      
      To avoid the high cost of retpoline the attached BPF programs are called
      directly. __bpf_prog_enter/exit() are used to support per-program execution
      stats.  In the future this logic will be optimized further by adding support
      for bpf_stats_enabled_key inside generated assembly code. Introduction of
      preemptible and sleepable BPF programs will completely remove the need to call
      to __bpf_prog_enter/exit().
      
      Detach of a BPF program from the trampoline should not fail. To avoid memory
      allocation in the detach path, half of the page is used as a reserve and
      flipped after each attach/detach. 2k bytes is enough to call 40+ BPF programs
      directly, which is enough for BPF tracing use cases. This limit can be
      increased in the future.
      
      BPF_TRACE_FENTRY programs have access to raw kernel function arguments while
      BPF_TRACE_FEXIT programs have access to kernel return value as well. Often
      kprobe BPF program remembers function arguments in a map while kretprobe
      fetches arguments from a map and analyzes them together with return value.
      BPF_TRACE_FEXIT accelerates this typical use case.
      
      Recursion prevention for kprobe BPF programs is done via per-cpu
      bpf_prog_active counter. In practice that turned out to be a mistake. It
      caused programs to randomly skip execution. The tracing tools missed results
      they were looking for. Hence BPF trampoline doesn't provide builtin recursion
      prevention. It's a job of BPF program itself and will be addressed in the
      follow up patches.
      
      BPF trampoline is intended to be used beyond tracing and fentry/fexit use cases
      in the future. For example to remove retpoline cost from XDP programs.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20191114185720.1641606-5-ast@kernel.org
  11. 13 Aug, 2019 1 commit
    • btf: expose BTF info through sysfs · 341dfcf8
      Andrii Nakryiko authored
      Make .BTF section allocated and expose its contents through sysfs.
      
      The /sys/kernel/btf directory is created to contain all the BTFs present
      inside the kernel. Currently there is only the kernel's main BTF, represented
      as the /sys/kernel/btf/kernel file. Once kernel modules' BTFs are supported,
      each module will expose its BTF as a /sys/kernel/btf/<module-name> file.
      
      Current approach relies on a few pieces coming together:
      1. pahole is used to take the almost-final vmlinux image (modulo .BTF and
         kallsyms) and generate a .BTF section by converting DWARF info into
         BTF. This section is not allocated and not mapped to any segment,
         though, so it is not yet accessible from inside the kernel at runtime.
      2. objcopy dumps the .BTF contents into a binary file and subsequently
         converts the binary file into a linkable object file with automatically
         generated symbols _binary__btf_kernel_bin_start and
         _binary__btf_kernel_bin_end, pointing to the start and end, respectively,
         of the BTF raw data.
      3. the final vmlinux image is generated by linking this object file (and
         kallsyms, if necessary). sysfs_btf.c then creates the
         /sys/kernel/btf/kernel file and exposes the embedded BTF contents through
         it. This allows, e.g., libbpf and bpftool to access BTF info at a
         well-known location, without resorting to searching for the vmlinux image
         on disk (the location of which is not standardized, and the vmlinux image
         might not even be available in some scenarios, e.g., inside qemu
         during testing).
      
      Alternative approach using .incbin assembler directive to embed BTF
      contents directly was attempted but didn't work, because sysfs_proc.o is
      not re-compiled during link-vmlinux.sh stage. This is required, though,
      to update embedded BTF data (initially empty data is embedded, then
      pahole generates BTF info and we need to regenerate sysfs_btf.o with
      updated contents, but it's too late at that point).
      
      If BTF couldn't be generated due to missing or too old pahole,
      sysfs_btf.c handles that gracefully by detecting that
      _binary__btf_kernel_bin_start (weak symbol) is 0 and not creating
      /sys/kernel/btf at all.
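
      From user space, the embedded BTF can then be consumed straight from sysfs,
      e.g. with libbpf; a sketch using current libbpf function names, which postdate
      this patch (in later kernels the file is named /sys/kernel/btf/vmlinux):

        #include <stdio.h>
        #include <bpf/btf.h>

        int main(void)
        {
                /* Parse raw BTF from the well-known sysfs location. */
                struct btf *btf = btf__parse_raw("/sys/kernel/btf/kernel");

                if (!btf) {                     /* libbpf 1.0 error convention */
                        perror("btf__parse_raw");
                        return 1;
                }
                printf("kernel BTF carries %u types\n", btf__type_cnt(btf));
                btf__free(btf);
                return 0;
        }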
      
      v2->v3:
      - added Documentation/ABI/testing/sysfs-kernel-btf (Greg K-H);
      - created proper kobject (btf_kobj) for btf directory (Greg K-H);
      - undo v2 change of reusing vmlinux, as it causes extra kallsyms pass
        due to initially missing  __binary__btf_kernel_bin_{start/end} symbols;
      
      v1->v2:
      - allow kallsyms stage to re-use vmlinux generated by gen_btf();
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  12. 12 Jun, 2019 1 commit
    • bpf: silence warning messages in core · aee450cb
      Valdis Klētnieks authored
      Compiling kernel/bpf/core.c with W=1 causes a flood of warnings:
      
      kernel/bpf/core.c:1198:65: warning: initialized field overwritten [-Woverride-init]
       1198 | #define BPF_INSN_3_TBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = true
            |                                                                 ^~~~
      kernel/bpf/core.c:1087:2: note: in expansion of macro 'BPF_INSN_3_TBL'
       1087 |  INSN_3(ALU, ADD,  X),   \
            |  ^~~~~~
      kernel/bpf/core.c:1202:3: note: in expansion of macro 'BPF_INSN_MAP'
       1202 |   BPF_INSN_MAP(BPF_INSN_2_TBL, BPF_INSN_3_TBL),
            |   ^~~~~~~~~~~~
      kernel/bpf/core.c:1198:65: note: (near initialization for 'public_insntable[12]')
       1198 | #define BPF_INSN_3_TBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = true
            |                                                                 ^~~~
      kernel/bpf/core.c:1087:2: note: in expansion of macro 'BPF_INSN_3_TBL'
       1087 |  INSN_3(ALU, ADD,  X),   \
            |  ^~~~~~
      kernel/bpf/core.c:1202:3: note: in expansion of macro 'BPF_INSN_MAP'
       1202 |   BPF_INSN_MAP(BPF_INSN_2_TBL, BPF_INSN_3_TBL),
            |   ^~~~~~~~~~~~
      
      98 copies of the above.
      
      The attached patch silences the warnings, because we *know* we're overwriting
      the default initializer. That leaves bpf/core.c with only 6 other warnings,
      which become more visible in comparison.
      Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  13. 19 Oct, 2018 1 commit
    • bpf: add queue and stack maps · f1a2e44a
      Mauricio Vasquez B authored
      Queue/stack maps implement a FIFO/LIFO data storage for ebpf programs.
      These maps support peek, pop and push operations that are exposed to eBPF
      programs through the new bpf_map[peek/pop/push] helpers.  Those operations
      are exposed to userspace applications through the already existing
      syscalls in the following way:
      
      BPF_MAP_LOOKUP_ELEM            -> peek
      BPF_MAP_LOOKUP_AND_DELETE_ELEM -> pop
      BPF_MAP_UPDATE_ELEM            -> push
      
      Queue/stack maps are implemented using a buffer, tail and head indexes,
      hence BPF_F_NO_PREALLOC is not supported.
      
      In contrast to other maps, queue and stack maps do not use RCU for protecting
      map values; the bpf_map[peek/pop] helpers have an ARG_PTR_TO_UNINIT_MAP_VALUE
      argument that is a pointer to a memory zone where the value of the map is
      saved.  It is basically the same as ARG_PTR_TO_UNINIT_MEM, but the size does
      not have to be passed as an extra argument.
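
      On the BPF side, usage looks roughly like the following sketch (the BTF-style
      map definition and the XDP attach point are illustrative assumptions):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_QUEUE);
                __uint(max_entries, 64);
                __uint(key_size, 0);            /* queue/stack maps have no key */
                __uint(value_size, sizeof(__u32));
        } ports SEC(".maps");

        SEC("xdp")
        int use_port_pool(struct xdp_md *ctx)
        {
                __u32 port;

                /* FIFO pop; BPF_MAP_LOOKUP_AND_DELETE_ELEM does the same from userspace */
                if (bpf_map_pop_elem(&ports, &port) == 0) {
                        /* ... use the port, then return it to the pool ... */
                        bpf_map_push_elem(&ports, &port, 0 /* flags */);
                }
                return XDP_PASS;
        }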
      
      Our main motivation for implementing queue/stack maps was to keep track
      of a pool of elements, like network ports in a SNAT; however, we foresee
      other use cases, like, for example, saving the last N kernel events in a map
      and then analysing them from userspace.
      Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  14. 15 Oct, 2018 1 commit
    • bpf, sockmap: convert to generic sk_msg interface · 604326b4
      Daniel Borkmann authored
      Add a generic sk_msg layer, and convert current sockmap and later
      kTLS over to make use of it. While sk_buff handles network packet
      representation from netdevice up to socket, sk_msg handles data
      representation from application to socket layer.
      
      This means that sk_msg framework spans across ULP users in the
      kernel, and enables features such as introspection or filtering
      of data with the help of BPF programs that operate on this data
      structure.
      
      The latter becomes particularly useful for kTLS, where data encryption
      is deferred into the kernel, and as such enables the kernel to
      perform L7 introspection and BPF-based policy for TLS connections
      where the record is encrypted after BPF has run and come to
      a verdict. In order to get there, the first step is to transform the
      open coding of scatter-gather list handling into a common core framework
      that subsystems can use.
      
      The code itself has been split and refactored into three bigger
      pieces: i) the generic sk_msg API which deals with managing the
      scatter gather ring, providing helpers for walking and mangling,
      transferring application data from user space into it, and preparing
      it for BPF pre/post-processing, ii) the plain sock map itself
      where sockets can be attached to or detached from; these bits
      are independent of i) which can now be used also without sock
      map, and iii) the integration with plain TCP as one protocol
      to be used for processing L7 application data (later this could
      e.g. also be extended to other protocols like UDP). The semantics
      are the same as with the old sock map code, and therefore there is no change
      in user-facing behavior or APIs. While pursuing this work it
      also helped to find a number of bugs in the old sockmap code
      that we've already fixed in earlier commits. The test_sockmap
      kselftest suite passes fine as well.
      
      Joint work with John.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  15. 10 Aug, 2018 1 commit
    • bpf: Introduce BPF_MAP_TYPE_REUSEPORT_SOCKARRAY · 5dc4c4b7
      Martin KaFai Lau authored
      This patch introduces a new map type BPF_MAP_TYPE_REUSEPORT_SOCKARRAY.
      
      To unleash the full potential of a bpf prog, it is essential for the
      userspace to be capable of directly setting up a bpf map which can then
      be consumed by the bpf prog to make decisions.  In this case, that means
      deciding which SO_REUSEPORT sk should serve the incoming request.
      
      By adding BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, the userspace has total control
      and visibility on where a SO_REUSEPORT sk should be located in a bpf map.
      The later patch will introduce BPF_PROG_TYPE_SK_REUSEPORT such that
      the bpf prog can directly select a sk from the bpf map.  That will
      raise the programmability of the bpf prog attached to a reuseport
      group (a group of sk serving the same IP:PORT).
      
      For example, in UDP, the bpf prog can peek into the payload (e.g.
      through the "data" pointer introduced in a later patch) to learn
      the application level's connection information and then decide which sk
      to pick from a bpf map.  The userspace can tightly couple the sk's location
      in a bpf map with the application logic in generating the UDP payload's
      connection information.  This connection-info contract/API stays within the
      userspace.
      
      Also, when used with map-in-map, the userspace can switch the
      old server process's inner map to a new server process's inner map
      in one call, "bpf_map_update_elem(outer_map, &index, &new_reuseport_array)".
      The bpf prog will then direct incoming requests to the new process instead
      of the old process.  The old process can finish draining the pending
      requests (e.g. by "accept()") before closing the old fds.  [Note that
      deleting an fd from a bpf map does not necessarily mean the fd is closed]
      
      During map_update_elem(), only a SO_REUSEPORT sk (i.e. one which has already
      been added to a reuse->socks[]) can be used.  That means a SO_REUSEPORT sk
      that is "bind()"-ed for UDP or "bind()+listen()"-ed for TCP.  These conditions
      are ensured by "reuseport_array_update_check()".
      
      A SO_REUSEPORT sk can only be added once to a map (i.e. the
      same sk cannot be added twice even to the same map).  SO_REUSEPORT
      already allows another sk to be created for the same IP:PORT.
      There is no need to re-create a similar usage in the BPF side.
      
      When a SO_REUSEPORT is deleted from the "reuse->socks[]" (e.g. "close()"),
      it will notify the bpf map to remove it from the map also.  It is
      done through "bpf_sk_reuseport_detach()" and it will only be called
      if >=1 of the "reuse->sock[]" has ever been added to a bpf map.
      
      The map_update()/map_delete() has to be in sync with the
      "reuse->socks[]".  Hence, the same "reuseport_lock" used
      by "reuse->socks[]" has to be used here also. Care has
      been taken to ensure the lock is only acquired when the
      sk being added passes some strict tests, and that
      freeing the map does not require the reuseport_lock.
      
      The reuseport_array will also support lookup from the syscall
      side.  It will return a sock_gen_cookie().  The sock_gen_cookie()
      is on-demand (i.e. a sk's cookie is not generated until the very
      first map_lookup_elem()).
      
      The lookup cookie is 64 bits, but it goes against the logical userspace
      expectation of a 32-bit sizeof(fd) (which other fd-based bpf maps also use).
      It may catch users by surprise if we enforce value_size=8 while
      userspace still passes a 32-bit fd during update.  Supporting different
      value_size between lookup and update seems unintuitive as well.
      
      We also need to consider what happens if other existing fd-based maps want
      to return 64-bit values from the syscall's lookup in the future.
      Hence, reuseport_array supports both value_size 4 and 8, and
      assumes users will usually use value_size=4.  The syscall's lookup
      will return ENOSPC on value_size=4.  It will only
      return the 64-bit value from sock_gen_cookie() when the user consciously
      chooses value_size=8 (as a signal that lookup is desired), which then
      requires a 64-bit value in both lookup and update.
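
      A sketch of the userspace flow described above, using current libbpf wrappers
      (which postdate this patch); the fd variables and sizes are illustrative:

        #include <bpf/bpf.h>

        int add_listener(int reuseport_sk_fd)
        {
                /* value_size can be 4 or 8; 8 opts into cookie lookups (see above). */
                int map_fd = bpf_map_create(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
                                            "reuse_map", sizeof(__u32),
                                            sizeof(__u64), 16, NULL);
                __u32 index = 0;
                __u64 val = reuseport_sk_fd;    /* update takes the socket fd */

                if (map_fd < 0)
                        return map_fd;
                /* Only works for a sk already bound with SO_REUSEPORT. */
                return bpf_map_update_elem(map_fd, &index, &val, BPF_ANY);
        }
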
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  16. 02 Aug, 2018 1 commit
    • bpf: introduce cgroup storage maps · de9cbbaa
      Roman Gushchin authored
      This commit introduces BPF_MAP_TYPE_CGROUP_STORAGE maps:
      a special type of maps which are implementing the cgroup storage.
      
      From the userspace point of view it's almost a generic
      hash map with the (cgroup inode id, attachment type) pair
      used as a key.
      
      The only difference is that some operations are restricted:
        1) a user can't create new entries,
        2) a user can't remove existing entries.
      
      The lookup from userspace is O(log(n)).
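
      A sketch of how a cgroup-attached program uses this storage (the
      bpf_get_local_storage() helper and the BTF-style map definition arrive with
      later patches; byte counting is an illustrative use):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
                __type(key, struct bpf_cgroup_storage_key);
                __type(value, __u64);
        } bytes_seen SEC(".maps");

        SEC("cgroup_skb/egress")
        int count_egress(struct __sk_buff *skb)
        {
                /* One storage slot per (cgroup, attach type) pair. */
                __u64 *bytes = bpf_get_local_storage(&bytes_seen, 0);

                __sync_fetch_and_add(bytes, skb->len);
                return 1;       /* allow the packet */
        }
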
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  17. 03 May, 2018 1 commit
    • bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP · fbfc504a
      Björn Töpel authored
      The xskmap is yet another BPF map, very much inspired by
      dev/cpu/sockmap, and is a holder of AF_XDP sockets. A user application
      adds AF_XDP sockets into the map, and by using the bpf_redirect_map
      helper, an XDP program can redirect XDP frames to an AF_XDP socket.
      
      Note that a socket that is bound to certain ifindex/queue index will
      *only* accept XDP frames from that netdev/queue index. If an XDP
      program tries to redirect from a netdev/queue index other than what
      the socket is bound to, the frame will not be received on the socket.
      
      A socket can reside in multiple maps.
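
      A sketch of the XDP side (the BTF-style map definition is a libbpf convention;
      keying by rx queue index matches the bind-to-queue rule above):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_XSKMAP);
                __uint(max_entries, 64);
                __uint(key_size, sizeof(__u32));
                __uint(value_size, sizeof(__u32));  /* AF_XDP socket fd, set from userspace */
        } xsks SEC(".maps");

        SEC("xdp")
        int redirect_to_xsk(struct xdp_md *ctx)
        {
                /* Frames from queue N go to the socket stored at key N. */
                return bpf_redirect_map(&xsks, ctx->rx_queue_index, 0);
        }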
      
      v3: Fixed race and simplified code.
      v2: Removed one indirection in map lookup.
      Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  18. 19 Apr, 2018 1 commit
    • bpf: btf: Introduce BPF Type Format (BTF) · 69b693f0
      Martin KaFai Lau authored
      This patch introduces BPF type Format (BTF).
      
      BTF (BPF Type Format) is the metadata format which describes
      the data types of BPF programs/maps.  Hence, it basically focuses
      on the C programming language, which modern BPF is primarily
      using.  The first use case is to provide a generic pretty-print
      capability for a BPF map.
      
      BTF has its root from CTF (Compact C-Type format).  To simplify
      the handling of BTF data, BTF removes the differences between
      small and big type/struct-member.  Hence, BTF consistently uses u32
      instead of supporting both "one u16" and "two u32 (+padding)" in
      describing type and struct-member.
      
      It also raises the number of types (and functions) limit
      from 0x7fff to 0x7fffffff.
      
      Due to the above changes, the format is not compatible with CTF.
      Hence, BTF starts with a new BTF_MAGIC and version number.
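
      For reference, this is the header layout as it stands in today's
      include/uapi/linux/btf.h; the exact section layout has evolved since this
      initial patch, but the magic/version start has stayed:

        struct btf_header {
                __u16   magic;          /* BTF_MAGIC, 0xeB9F */
                __u8    version;
                __u8    flags;
                __u32   hdr_len;
                /* All offsets are in bytes relative to the end of this header */
                __u32   type_off;       /* offset of type section */
                __u32   type_len;       /* length of type section */
                __u32   str_off;        /* offset of string section */
                __u32   str_len;        /* length of string section */
        };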
      
      This patch does the first verification pass to the BTF.  The first
      pass checks:
      1. meta-data size (e.g. It does not go beyond the total btf's size)
      2. name_offset is valid
      3. Each BTF_KIND (e.g. int, enum, struct....) does its
         own check of its meta-data.
      
      Some other checks, like checking a struct's member is referring
      to a valid type, can only be done in the second pass.  The second
      verification pass will be implemented in the next patch.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  19. 04 Jan, 2018 1 commit
  20. 05 Nov, 2017 1 commit
    • bpf: offload: add infrastructure for loading programs for a specific netdev · ab3f0063
      Jakub Kicinski authored
      The fact that we don't know which device the program is going
      to be used on is quite limiting in the current eBPF infrastructure.
      We have to reverse or limit the changes which the kernel makes to
      the loaded bytecode if we want it to be offloaded to a networking
      device.  We also have to invent new APIs for debugging and
      troubleshooting support.
      
      Make it possible to load programs for a specific netdev.  This
      helps us to bring the debug information closer to the core
      eBPF infrastructure (e.g. we will be able to reuse the verifier
      log in the device JIT).  It allows device JITs to perform translation
      on the original bytecode.
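
      From userspace, the target device is selected at load time via the new
      ifindex attribute; a sketch with current libbpf wrappers (which postdate this
      patch), where the insns array and device name are illustrative:

        #include <net/if.h>
        #include <bpf/bpf.h>

        int load_for_netdev(const struct bpf_insn *insns, size_t insn_cnt)
        {
                LIBBPF_OPTS(bpf_prog_load_opts, opts,
                            .prog_ifindex = if_nametoindex("eth0"));

                /* The kernel hands the program to the netdev's driver for
                 * verification callbacks and (optionally) offload. */
                return bpf_prog_load(BPF_PROG_TYPE_XDP, "xdp_offload", "GPL",
                                     insns, insn_cnt, &opts);
        }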
      
      __bpf_prog_get(), when called to get a reference for an attachment
      point, will now refuse to give it if the program has a device assigned.
      Following patches will add a version of that function which passes
      the expected netdev in. The @type argument in __bpf_prog_get() is
      renamed to attach_type to make it clearer that it's only set on
      attachment.
      
      All calls to ndo_bpf are protected by rtnl, only verifier callbacks
      are not.  We need a wait queue to make sure netdev doesn't get
      destroyed while verifier is still running and calling its driver.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Simon Horman <simon.horman@netronome.com>
      Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 02 Nov, 2017 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side by side results from of the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were any new insights.  The
      Windriver scanner is based in part on an older version of FOSSology, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  22. 18 Oct, 2017 1 commit
    • bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP · 6710e112
      Jesper Dangaard Brouer authored
      The 'cpumap' is primarily used as a backend map for XDP BPF helper
      call bpf_redirect_map() and XDP_REDIRECT action, like 'devmap'.
      
      This patch implements the main part of the map.  It is not connected to
      the XDP redirect system yet, and no SKB allocations are done yet.
      
      The main concern in this patch is to ensure the datapath can run
      without any locking.  This adds complexity to the setup and tear-down
      procedure, whose assumptions are extra carefully documented in the
      code comments.
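
      For reference, the eventual usage (once wired into XDP_REDIRECT later in the
      series) looks much like the other redirect maps; a hedged sketch, with the
      CPU selection left as an illustrative constant:

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_CPUMAP);
                __uint(max_entries, 16);            /* bounded by NR_CPUS */
                __uint(key_size, sizeof(__u32));    /* CPU number */
                __uint(value_size, sizeof(__u32));  /* kthread queue size, set by userspace */
        } cpu_map SEC(".maps");

        SEC("xdp")
        int steer_to_cpu(struct xdp_md *ctx)
        {
                __u32 cpu = 2;  /* a real program would hash/parse to pick the CPU */

                return bpf_redirect_map(&cpu_map, cpu, 0);
        }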
      
      V2:
       - make sure array isn't larger than NR_CPUS
       - make sure CPUs added is a valid possible CPU
      
      V3: fix nitpicks from Jakub Kicinski <kubakici@wp.pl>
      
      V5:
       - Restrict map allocation to root / CAP_SYS_ADMIN
       - WARN_ON_ONCE if queue is not empty on tear-down
       - Return -EPERM on memlock limit instead of -ENOMEM
       - Error code in __cpu_map_entry_alloc() also handle ptr_ring_cleanup()
       - Moved cpu_map_enqueue() to next patch
      
      V6: all notice by Daniel Borkmann
       - Fix err return code in cpu_map_alloc() introduced in V5
       - Move cpu_possible() check after max_entries boundary check
       - Forbid usage initially in check_map_func_compatibility()
      
      V7:
       - Fix alloc error path spotted by Daniel Borkmann
       - Did stress test adding+removing CPUs from the map concurrently
       - Fixed refcnt issue on cpu_map_entry, kthread started too soon
       - Make sure packets are flushed during tear-down, involved use of
         rcu_barrier() and kthread_run only exit after queue is empty
       - Fix alloc error path in __cpu_map_entry_alloc() for ptr_ring
      
      V8:
       - Nitpicking comments and grammar by Edward Cree
       - Fix missing semi-colon introduced in V7 due to rebasing
       - Move struct bpf_cpu_map_entry members cpu+map_id to tracepoint patch
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 10 Oct, 2017 1 commit
  24. 16 Aug, 2017 2 commits
    • bpf: sock_map fixes for !CONFIG_BPF_SYSCALL and !STREAM_PARSER · 6bdc9c4c
      John Fastabend authored
      Resolve issues with !CONFIG_BPF_SYSCALL and !STREAM_PARSER
      
      net/core/filter.c: In function ‘do_sk_redirect_map’:
      net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
         ^
      net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
      
      Fixes: 174a79ff ("bpf: sockmap with sk redirect support")
      Reported-by: default avatarEric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: default avatarJohn Fastabend <john.fastabend@gmail.com>
      Acked-by: default avatarDaniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      6bdc9c4c
    • John Fastabend's avatar
      bpf: sockmap with sk redirect support · 174a79ff
      John Fastabend authored
      Recently we added a new map type called dev map used to forward XDP
      packets between ports (6093ec2d).  This patch introduces a
      similar notion for sockets.
      
      A sockmap allows users to add participating sockets to a map.  When
      sockets are added to the map, enough context is stored with the
      map entry to use the entry with a new helper
      
        bpf_sk_redirect_map(map, key, flags)
      
      This helper (analogous to bpf_redirect_map in XDP) is given the map
      and an entry in the map. When called from a sockmap program, discussed
      below, the skb will be sent on the socket using skb_send_sock().
      
      With the above in place, we need a bpf program from which to call the
      helper and implement the send logic.  The initial site implemented in
      this series is the recv_sock hook.  For this to work we implemented a
      map attach command to add attributes to a map.  In sockmap we add two
      programs, a parse program and a verdict program.  The parse program
      uses strparser to build messages and pass them to the verdict program,
      following the normal strparser semantics.  The verdict program is of
      type SK_SKB.
      
      The verdict program returns a verdict, SK_DROP or SK_REDIRECT, for
      now.  Additional actions may be added later.  When SK_REDIRECT is
      returned, as expected when the bpf program uses bpf_sk_redirect_map(),
      the sockmap logic will consult per-cpu variables set by the helper
      routine and pull the sock entry out of the sock map.  This pattern
      follows the existing redirect logic in cls and xdp programs.
      
      This gives the flow,
      
       recv_sock -> str_parser (parse_prog) -> verdict_prog -> skb_send_sock
                                                           \
                                                            -> kfree_skb
      
      As an example use case a message based load balancer may use specific
      logic in the verdict program to select the sock to send on.
      
      Sample programs that hopefully illustrate the user interfaces are
      provided in future patches.  Selftests are also in follow-on patches.
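      
      As a purely illustrative sketch, a minimal verdict program could look
      like the following.  It uses the modern libbpf map syntax; the map
      name, section name and fixed key are invented, and note that later
      kernels extended the helper to also take the skb as its first
      argument, which is the form used here.
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct {
              __uint(type, BPF_MAP_TYPE_SOCKMAP);
              __uint(key_size, sizeof(int));
              __uint(value_size, sizeof(int));
              __uint(max_entries, 16);
      } sock_map SEC(".maps");
      
      SEC("sk_skb")
      int verdict_prog(struct __sk_buff *skb)
      {
              int key = 0;    /* a real load balancer would derive the key from the message */
      
              /* redirect to the socket stored at 'key'; SK_DROP would discard instead */
              return bpf_sk_redirect_map(skb, &sock_map, key, 0);
      }
      
      char _license[] SEC("license") = "GPL";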
      Signed-off-by: default avatarJohn Fastabend <john.fastabend@gmail.com>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      174a79ff
  25. 09 Aug, 2017 1 commit
    • Edward Cree's avatar
      bpf/verifier: rework value tracking · f1174f77
      Edward Cree authored
      Unifies adjusted and unadjusted register value types (e.g. FRAME_POINTER is
       now just a PTR_TO_STACK with zero offset).
      Tracks value alignment by means of tracking known & unknown bits.  This
       also replaces the 'reg->imm' (leading zero bits) calculations for (what
       were) UNKNOWN_VALUEs.
      If pointer leaks are allowed, and adjust_ptr_min_max_vals returns -EACCES,
       treat the pointer as an unknown scalar and try again, because we might be
       able to conclude something about the result (e.g. pointer & 0x40 is either
       0 or 0x40).
      Verifier hooks in the netronome/nfp driver were changed to match the new
       data structures.
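      
      As a toy illustration (not kernel code) of the known/unknown-bits
      idea, and of why 'pointer & 0x40' can be concluded to be either 0 or
      0x40:
      
      #include <stdint.h>
      
      struct bits { uint64_t value; uint64_t mask; }; /* mask bit set => bit unknown */
      
      /* AND with a constant: a known result bit survives only where imm has
       * a 1, and a bit can remain unknown only where imm has a 1. */
      static struct bits bits_and_const(struct bits a, uint64_t imm)
      {
              struct bits r = { a.value & imm, a.mask & imm };
              return r;
      }
      
      /* A fully unknown scalar is {.value = 0, .mask = ~0ull};
       * bits_and_const(unknown, 0x40) yields {0, 0x40}, i.e. the value is
       * provably either 0 or 0x40, the example from the description above. */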
      Signed-off-by: default avatarEdward Cree <ecree@solarflare.com>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      f1174f77
  26. 17 Jul, 2017 1 commit
  27. 22 Mar, 2017 1 commit
    • Martin KaFai Lau's avatar
      bpf: Add array of maps support · 56f668df
      Martin KaFai Lau authored
      This patch adds a few helper funcs to enable map-in-map
      support (i.e. outer_map->inner_map).  The first outer_map type
      BPF_MAP_TYPE_ARRAY_OF_MAPS is also added in this patch.
      The next patch will introduce a hash of maps type.
      
      Any bpf map type can act as an inner_map.  The exception is
      BPF_MAP_TYPE_PROG_ARRAY, because the extra level of indirection
      makes it harder to verify the owner_prog_type and owner_jited.
      
      Multi-level map-in-map is not supported (i.e. map->map is ok
      but not map->map->map).
      
      When adding an inner_map to an outer_map, it currently checks the
      map_type, key_size, value_size, map_flags, max_entries and ops.
      The verifier also uses those map's properties to do static analysis.
      map_flags is needed because we need to ensure BPF_PROG_TYPE_PERF_EVENT
      is using a preallocated hashtab for the inner_hash also.  ops and
      max_entries are needed to generate inlined map-lookup instructions.
      For simplicity, a plain '==' test is used for both map_flags and
      max_entries.  The equality of ops is implied by the equality of
      map_type.
      
      At outer_map creation time, an inner_map_fd is needed to create the
      outer_map.  However, the inner_map_fd's lifetime does not depend on
      the outer_map; it is merely used to initialize the inner_map_meta of
      the outer_map.
      
      Also, for the outer_map:
      
      * It allows element update and delete from syscall
      * It allows element lookup from bpf_prog
      
      The above is similar to the current fd_array pattern.
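      
      A minimal userspace sketch of this flow using the raw bpf(2) syscall;
      the sizes are arbitrary and error handling is trimmed:
      
      #include <string.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/bpf.h>
      
      static int bpf_create(union bpf_attr *attr)
      {
              return syscall(__NR_bpf, BPF_MAP_CREATE, attr, sizeof(*attr));
      }
      
      int create_array_of_maps(void)
      {
              union bpf_attr attr;
              int inner_fd;
      
              memset(&attr, 0, sizeof(attr));
              attr.map_type    = BPF_MAP_TYPE_ARRAY;
              attr.key_size    = 4;
              attr.value_size  = 8;
              attr.max_entries = 16;
              inner_fd = bpf_create(&attr);           /* shape template for inner maps */
      
              memset(&attr, 0, sizeof(attr));
              attr.map_type     = BPF_MAP_TYPE_ARRAY_OF_MAPS;
              attr.key_size     = 4;
              attr.value_size   = 4;                  /* updates take a map fd as value */
              attr.max_entries  = 8;
              attr.inner_map_fd = inner_fd;           /* only used to set up inner_map_meta */
              return bpf_create(&attr);               /* inner_fd may be closed afterwards */
      }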
      Signed-off-by: default avatarMartin KaFai Lau <kafai@fb.com>
      Acked-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Acked-by: default avatarDaniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      56f668df
  28. 23 Jan, 2017 1 commit
  29. 25 Nov, 2016 1 commit
    • Daniel Mack's avatar
      cgroup: add support for eBPF programs · 30070984
      Daniel Mack authored
      This patch adds two sets of eBPF program pointers to struct cgroup:
      one for programs that are directly attached to a cgroup, and one for
      those that are effective for it.
      
      To illustrate the logic behind that, assume the following example
      cgroup hierarchy.
      
        A - B - C
              \ D - E
      
      If only B has a program attached, it will be effective for B, C, D
      and E. If D then attaches a program itself, that will be effective for
      both D and E, and the program in B will only affect B and C. Only one
      program of a given type is effective for a cgroup.
      
      Attaching and detaching programs will be done through the bpf(2)
      syscall. For now, ingress and egress inet socket filtering are the
      only supported use-cases.
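      
      A userspace sketch of the attach step, assuming a cgroup-skb program
      has already been loaded; the BPF_PROG_ATTACH command and the
      attach-type constant come from the companion syscall patches in this
      series, and error handling is trimmed:
      
      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/bpf.h>
      
      /* attach prog_fd to the cgroup at cgroup_path for ingress filtering */
      int attach_ingress(int prog_fd, const char *cgroup_path)
      {
              union bpf_attr attr;
              int cg_fd = open(cgroup_path, O_RDONLY);
      
              if (cg_fd < 0)
                      return -1;
      
              memset(&attr, 0, sizeof(attr));
              attr.target_fd     = cg_fd;             /* e.g. /sys/fs/cgroup/foo */
              attr.attach_bpf_fd = prog_fd;           /* already-loaded program */
              attr.attach_type   = BPF_CGROUP_INET_INGRESS;
      
              return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
      }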
      Signed-off-by: default avatarDaniel Mack <daniel@zonque.org>
      Acked-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      30070984
  30. 15 Nov, 2016 1 commit
    • Martin KaFai Lau's avatar
      bpf: LRU List · 3a08c2fd
      Martin KaFai Lau authored
      Introduce bpf_lru_list which will provide LRU capability to
      the bpf_htab in the later patch.
      
      * General Thoughts:
      1. Target use case.  Reads are more frequent than updates
         (i.e. bpf_lookup_elem() is called more often than bpf_update_elem()).
         If a bpf_prog does a bpf_lookup_elem() first and then an in-place
         update, it still counts as a read operation as far as the LRU list
         is concerned.
      2. It may be useful to think of it as an LRU cache.
      3. Optimize the read case:
         3.1 No lock in the read case.
         3.2 The LRU maintenance is only done during bpf_update_elem().
      4. A percpu LRU list would lose the system-wide LRU property.  A
         completely isolated percpu LRU list has the best performance, but
         the memory utilization is not ideal, considering the workload may
         be imbalanced.
      5. Hence, this patch starts the LRU implementation with a global LRU
         list, with operations batched before the global LRU list is
         accessed.  As an LRU cache where #reads >> #updates/#inserts, it
         will work well.
      6. There is a local list (for each cpu) named
         'struct bpf_lru_locallist'.  This local list is not used to
         maintain the LRU ordering.  Instead, the local list is used to
         batch enough operations before acquiring the lock of the global
         LRU list.  More details on this later.
      7. A later patch allows a percpu LRU list, selected via a map
         attribute, for scalability reasons and for use cases that need to
         prepare for the worst (and pathological) case, such as a DoS
         attack.  The percpu LRU lists are completely isolated from each
         other, and LRU nodes (including free nodes) cannot be moved across
         lists.  The following description is for the global LRU list but
         is mostly applicable to the percpu LRU list as well.
      
      * Global LRU List:
      1. It has three sub-lists: active-list, inactive-list and free-list.
      2. The two list idea, active and inactive, is borrowed from the
         page cache.
      3. All nodes are pre-allocated and all sit at the free-list (of the
         global LRU list) at the beginning.  The pre-allocation reasoning
         is similar to the existing BPF_MAP_TYPE_HASH.  However,
         opting-out prealloc (BPF_F_NO_PREALLOC) is not supported in
         the LRU map.
      
      * Active/Inactive List (of the global LRU list):
      1. The active list, as its name says, maintains the active set of
         nodes.  We can think of it as the working set, or the more
         frequently accessed nodes.  The access frequency is approximated
         by a ref-bit, which is set during bpf_lookup_elem().
      2. The inactive list, as its name also says, maintains a less active
         set of nodes.  They are the candidates to be removed from the
         bpf_htab when we are running out of free nodes.
      3. The ordering of these two lists acts as a rough clock.  The tail
         of the inactive list holds the older nodes, which should be
         released first if the bpf_htab needs free elements.
      
      * Rotating the Active/Inactive List (of the global LRU list):
      1. It is the basic operation to maintain the LRU property of
         the global list.
      2. The active list is only rotated when the inactive list is running
         low.  This idea is similar to the current page cache.
         Inactive running low is currently defined as
         "# of inactive < # of active".
      3. The active list rotation always starts from the tail.  It moves
         nodes without the ref-bit set to the head of the inactive list,
         and moves nodes with the ref-bit set back to the head of the
         active list, clearing their ref-bit.
      4. The inactive rotation is pretty simple.
         It walks the inactive list and moves a node back to the head of
         the active list if its ref-bit is set; the ref-bit is cleared
         after the move.
         If a node does not have the ref-bit set, it is left where it is,
         because it is already in the inactive list.
      
      * Shrinking the Inactive List (of the global LRU list):
      1. Shrinking is the operation to get free nodes when the bpf_htab is
         full.
      2. It usually only shrinks the inactive list to get free nodes.
      3. During shrinking, it will walk the inactive list from the tail and
         delete the nodes without the ref-bit set from the bpf_htab.
      4. If no free node is found after step (3), it will forcefully take
         one node from the tail of the inactive or active list; forcefully
         in the sense that it ignores the ref-bit.
      
      * Local List:
      1. Each CPU has a 'struct bpf_lru_locallist'.  The purpose is to
         batch enough operations before acquiring the lock of the
         global LRU.
      2. A local list has two sub-lists, free-list and pending-list.
      3. During bpf_update_elem(), it will first try to get a node from the
         free-list of the current CPU's local list.
      4. If the local free-list is empty, it will acquire nodes from the
         global LRU list.  The global LRU list can satisfy it either from
         its global free-list or by shrinking the global inactive list.
         Since we have acquired the global LRU list lock anyway, it will
         try to move at most LOCAL_FREE_TARGET elements to the local
         free-list.
      5. When a new element is added to the bpf_htab, it will first sit on
         the pending-list (of the local list).  The pending-list will be
         flushed to the global LRU list the next time free nodes need to be
         acquired from the global list.
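      
      A rough sketch of this update path follows.  Every helper name below
      is invented for illustration and is not the kernel's internal API;
      only the shape (local free-list, batched refill, pending-list)
      mirrors the description above.
      
      struct bpf_lru_node *example_alloc_node(struct bpf_lru *lru)
      {
              struct bpf_lru_locallist *loc = this_cpu_locallist(lru); /* hypothetical */
              struct bpf_lru_node *node;
      
              node = local_free_pop(loc);                     /* step 3: local free-list */
              if (!node) {
                      global_lru_lock(lru);                   /* lru_lock, see below */
                      flush_local_pending(lru, loc);          /* step 5, from last time */
                      /* step 4: refill from the global free-list or by
                       * shrinking the global inactive list */
                      refill_local_free(lru, loc, LOCAL_FREE_TARGET);
                      global_lru_unlock(lru);
                      node = local_free_pop(loc);
              }
              if (node)
                      local_pending_push(loc, node);          /* step 5: park on pending-list */
              return node;
      }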
      
      * Lock Consideration:
      The LRU list has a lock (lru_lock).  Each bucket of htab has a
      lock (buck_lock).  If both locks need to be acquired together,
      the lock order is always lru_lock -> buck_lock and this only
      happens in the bpf_lru_list.c logic.
      
      In hashtab.c, both locks are not acquired together (i.e. one
      lock is always released first before acquiring another lock).
      Signed-off-by: default avatarMartin KaFai Lau <kafai@fb.com>
      Acked-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      3a08c2fd
  31. 08 Mar, 2016 1 commit
    • Alexei Starovoitov's avatar
      bpf: introduce percpu_freelist · e19494ed
      Alexei Starovoitov authored
      Introduce a simple percpu_freelist to keep a single list of elements
      spread across per-cpu singly linked lists.
      
      /* push element into the list */
      void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);
      
      /* pop element from the list */
      struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);
      
      An object is pushed to the current cpu's list.
      Pop first tries to get an object from the current cpu's list;
      if that is empty, it goes to the neighbour cpu's list.
      
      For the bpf program usage pattern the collision rate is very low,
      since programs typically push and pop objects on the same cpu.
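      
      A usage sketch: embed the freelist node in an element and recover the
      element with container_of() after a pop.  The element struct and the
      functions around the API are illustrative, not part of this patch.
      
      #include <linux/kernel.h>       /* container_of() */
      #include "percpu_freelist.h"    /* within kernel/bpf/ */
      
      struct my_elem {
              struct pcpu_freelist_node fnode;        /* embedded in the element */
              char payload[64];
      };
      
      static struct pcpu_freelist freelist;
      
      static int seed_elems(struct my_elem *elems, int n)
      {
              int i, err = pcpu_freelist_init(&freelist);
      
              if (err)
                      return err;
              for (i = 0; i < n; i++)
                      pcpu_freelist_push(&freelist, &elems[i].fnode);
              return 0;
      }
      
      static struct my_elem *grab_elem(void)
      {
              /* pop tries the current cpu's list first, then neighbours */
              struct pcpu_freelist_node *n = pcpu_freelist_pop(&freelist);
      
              return n ? container_of(n, struct my_elem, fnode) : NULL;
      }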
      Signed-off-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      e19494ed
  32. 20 Feb, 2016 1 commit
    • Alexei Starovoitov's avatar
      bpf: introduce BPF_MAP_TYPE_STACK_TRACE · d5a3b1f6
      Alexei Starovoitov authored
      Add a new map type to store stack traces and a corresponding helper:
      bpf_get_stackid(ctx, map, flags) - walk user or kernel stack and return id
      @ctx: struct pt_regs*
      @map: pointer to stack_trace map
      @flags: bits 0-7 - number of stack frames to skip
              bit 8 - collect user stack instead of kernel
              bit 9 - compare stacks by hash only
              bit 10 - if two different stacks hash into the same stackid
                       discard old
              other bits - reserved
      Return: >= 0 stackid on success or negative error
      
      stackid is a 32-bit integer handle that can be further combined with
      other data (including other stackids) and used as a key into maps.
      
      Userspace will access the stackmap using the standard lookup/delete
      syscall commands to retrieve the full stack trace for a given stackid.
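      
      A sketch of a program-side user, written with today's libbpf
      conventions; the map names, sizes and the kprobe target are
      illustrative only.
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct {
              __uint(type, BPF_MAP_TYPE_STACK_TRACE);
              __uint(key_size, sizeof(__u32));
              __uint(value_size, 127 * sizeof(__u64));        /* up to 127 frames */
              __uint(max_entries, 10000);
      } stackmap SEC(".maps");
      
      struct {
              __uint(type, BPF_MAP_TYPE_HASH);
              __type(key, __u32);
              __type(value, __u64);
              __uint(max_entries, 10000);
      } counts SEC(".maps");
      
      SEC("kprobe/kfree_skb")
      int count_stacks(struct pt_regs *ctx)
      {
              long id = bpf_get_stackid(ctx, &stackmap, 0);   /* kernel stack, skip 0 frames */
              __u64 one = 1, *val;
              __u32 key;
      
              if (id < 0)
                      return 0;
              key = id;
              val = bpf_map_lookup_elem(&counts, &key);
              if (val)
                      __sync_fetch_and_add(val, 1);
              else
                      bpf_map_update_elem(&counts, &key, &one, BPF_ANY);
              return 0;
      }
      
      char _license[] SEC("license") = "GPL";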
      Signed-off-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      d5a3b1f6
  33. 03 Nov, 2015 1 commit
    • Daniel Borkmann's avatar
      bpf: add support for persistent maps/progs · b2197755
      Daniel Borkmann authored
      This work adds support for "persistent" eBPF maps/programs.  The term
      "persistent" means that maps/programs have a facility that lets them
      survive process termination.  This is desired by various eBPF
      subsystem users.
      
      Just to name one example: tc classifier/action.  Whenever tc parses
      the ELF object and extracts and loads maps/progs into the kernel,
      these file descriptors will be out of reach after the tc instance
      exits.  So a subsequent tc invocation won't be able to access or
      relocate against this resource, and therefore maps cannot easily be
      shared, e.g. between the ingress and egress networking data paths.
      
      The current workaround is that Unix domain sockets (UDS) need to be
      instrumented in order to pass the created eBPF map/program file
      descriptors to a third party management daemon through UDS' socket
      passing facility. This makes it a bit complicated to deploy shared
      eBPF maps or programs (programs f.e. for tail calls) among various
      processes.
      
      We've been brainstorming on how we could tackle this issue, and various
      approaches have been tried out so far, which can be read up on further
      in the reference below.
      
      The architecture we eventually ended up with is a minimal file system
      that can hold map/prog objects. The file system is a per mount namespace
      singleton, and the default mount point is /sys/fs/bpf/. Any subsequent
      mounts within a given namespace will point to the same instance. The
      file system allows for creating a user-defined directory structure.
      The objects for maps/progs are created/fetched through bpf(2) with
      two new commands (BPF_OBJ_PIN/BPF_OBJ_GET). I.e. a bpf file descriptor
      along with a pathname is being passed to bpf(2) that in turn creates
      (we call it eBPF object pinning) the file system nodes. Only the pathname
      is being passed to bpf(2) for getting a new BPF file descriptor to an
      existing node. The user can use that to access maps and progs later on,
      through bpf(2). Removal of file system nodes is being managed through
      normal VFS functions such as unlink(2), etc. The file system code is
      kept to a very minimum and can be further extended later on.
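      
      A userspace sketch of pinning and re-opening a map through the two new
      commands, using the raw bpf(2) syscall; the path name is illustrative
      and error handling is trimmed:
      
      #include <stdint.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/bpf.h>
      
      static int sys_bpf(int cmd, union bpf_attr *attr)
      {
              return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
      }
      
      /* pin map_fd under bpffs and re-open it through the pinned node */
      int pin_and_reopen(int map_fd)
      {
              union bpf_attr attr;
              const char *path = "/sys/fs/bpf/my_map";
      
              memset(&attr, 0, sizeof(attr));
              attr.pathname = (uint64_t)(unsigned long)path;
              attr.bpf_fd   = map_fd;
              if (sys_bpf(BPF_OBJ_PIN, &attr) < 0)    /* creates the fs node */
                      return -1;
      
              memset(&attr, 0, sizeof(attr));
              attr.pathname = (uint64_t)(unsigned long)path;
              return sys_bpf(BPF_OBJ_GET, &attr);     /* new fd to the same map */
      }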
      
      The next step I'm working on is to add dump eBPF map/prog commands
      to bpf(2), so that a specification from a given file descriptor can
      be retrieved. This can be used by things like CRIU but also applications
      can inspect the meta data after calling BPF_OBJ_GET.
      
      Big thanks also to Alexei and Hannes who significantly contributed
      in the design discussion that eventually let us end up with this
      architecture here.
      
      Reference: https://lkml.org/lkml/2015/10/15/925
      Signed-off-by: default avatarDaniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: default avatarAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: default avatarHannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      b2197755
  34. 01 Mar, 2015 1 commit
  35. 18 Nov, 2014 2 commits
    • Alexei Starovoitov's avatar
      bpf: allow eBPF programs to use maps · d0003ec0
      Alexei Starovoitov authored
      Expose the bpf_map_lookup_elem(), bpf_map_update_elem() and
      bpf_map_delete_elem() map accessors to eBPF programs.
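      
      A sketch of how a program might use the three accessors, written with
      today's libbpf conventions; the map, program name and kprobe target
      are illustrative only.
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct {
              __uint(type, BPF_MAP_TYPE_HASH);
              __type(key, __u32);
              __type(value, __u64);
              __uint(max_entries, 128);
      } flows SEC(".maps");
      
      SEC("kprobe/tcp_v4_connect")
      int touch_map(struct pt_regs *ctx)
      {
              __u32 key = 1;
              __u64 one = 1, *val;
      
              val = bpf_map_lookup_elem(&flows, &key);        /* NULL if absent */
              if (!val) {
                      bpf_map_update_elem(&flows, &key, &one, BPF_ANY);
              } else {
                      __sync_fetch_and_add(val, 1);
                      if (*val > 100)
                              bpf_map_delete_elem(&flows, &key); /* reset noisy entries */
              }
              return 0;
      }
      
      char _license[] SEC("license") = "GPL";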
      Signed-off-by: default avatarAlexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      d0003ec0
    • Alexei Starovoitov's avatar
      bpf: add array type of eBPF maps · 28fbcfa0
      Alexei Starovoitov authored
      Add a new map type, BPF_MAP_TYPE_ARRAY, and its implementation:
      
      - optimized for fastest possible lookup()
        . in the future the verifier/JIT may recognize a lookup() with a
          constant key and optimize it into a constant pointer.  It can
          optimize a non-constant key into direct pointer arithmetic as
          well, since pointers and value_size are constant for the life of
          the eBPF program.  In other words, array_map_lookup_elem() may be
          'inlined' by the verifier/JIT while preserving concurrent access
          to this map from user space
      
      - two main use cases for array type:
        . 'global' eBPF variables: an array of 1 element with key=0 whose
          value is a collection of 'global' variables that programs can use
          to keep state between events (see the sketch after this list)
        . aggregation of tracing events into a fixed set of buckets
      
      - all array elements pre-allocated and zero initialized at init time
      
      - the key is an index into the array and can only be 4 bytes
      
      - map_delete_elem() returns EINVAL, since elements cannot be deleted
      
      - map_update_elem() replaces elements in a non-atomic way
        (for atomic updates the hashtable type should be used instead)
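      
      As a sketch of the 'global variables' use case above, using modern
      libbpf syntax; the struct, map name and kprobe target are
      illustrative:
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct global_state {
              __u64 events;
              __u64 last_ts;
      };
      
      struct {
              __uint(type, BPF_MAP_TYPE_ARRAY);
              __type(key, __u32);
              __type(value, struct global_state);
              __uint(max_entries, 1);                 /* single slot at key 0 */
      } globals SEC(".maps");
      
      SEC("kprobe/do_sys_open")
      int count_event(struct pt_regs *ctx)
      {
              __u32 key = 0;
              struct global_state *st = bpf_map_lookup_elem(&globals, &key);
      
              if (!st)
                      return 0;                       /* array slots are pre-allocated */
              __sync_fetch_and_add(&st->events, 1);
              st->last_ts = bpf_ktime_get_ns();
              return 0;
      }
      
      char _license[] SEC("license") = "GPL";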
      Signed-off-by: default avatarAlexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      28fbcfa0