1. 02 Nov, 2021 2 commits
  2. 01 Nov, 2021 20 commits
  3. 29 Oct, 2021 1 commit
  4. 28 Oct, 2021 16 commits
• Merge branch 'Typeless/weak ksym for gen_loader + misc fixups' · b9989b59
      Alexei Starovoitov authored
      Kumar Kartikeya says:
      
      ====================
      
Patches (1,2,3,6) add typeless and weak ksym support to gen_loader. It is a
follow-up to the recent kfunc-from-modules series.
      
The later patches (7,8) are misc fixes for selftests, and patch 4 is a fix for
libbpf where we take care not to end up with fds == 0, as libbpf assumes in
various places that fds are greater than 0. Patch 5 adds the missing O_CLOEXEC
flag in libbpf.
      
      Changelog:
      ----------
      v4 -> v5
      v4: https://lore.kernel.org/bpf/20211020191526.2306852-1-memxor@gmail.com
      
       * Address feedback from Andrii
         * Drop use of ensure_good_fd in unneeded call sites
         * Add sys_bpf_fd
         * Add _lskel suffix to all light skeletons and change all current selftests
         * Drop early break in close loop for sk_lookup
         * Fix other nits
      
      v3 -> v4
      v3: https://lore.kernel.org/bpf/20211014205644.1837280-1-memxor@gmail.com
      
       * Remove gpl_only = true from bpf_kallsyms_lookup_name (Alexei)
       * Add bpf_dump_raw_ok check to ensure kptr_restrict isn't bypassed (Alexei)
      
      v2 -> v3
      v2: https://lore.kernel.org/bpf/20211013073348.1611155-1-memxor@gmail.com
      
       * Address feedback from Song
         * Move ksym logging to separate helper to avoid code duplication
         * Move src_reg mask stuff to separate helper
         * Fix various other nits, add acks
    * __builtin_expect is used instead of likely(), as skel_internal.h is
      included in isolation.
      
      v1 -> v2
      v1: https://lore.kernel.org/bpf/20211006002853.308945-1-memxor@gmail.com
      
       * Remove redundant OOM checks in emit_bpf_kallsyms_lookup_name
       * Use designated initializer for sk_lookup fd array (Jakub)
       * Do fd check for all fd returning low level APIs (Andrii, Alexei)
       * Make Fixes: tag quote commit message, use selftests/bpf prefix (Song, Andrii)
       * Split typeless and weak ksym support into separate patches, expand commit
         message (Song)
       * Fix duplication in selftests stemming from use of LSKELS_EXTRA (Song)
      ====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
• selftests/bpf: Fix memory leak in test_ima · efadf2ad
      Kumar Kartikeya Dwivedi authored
      The allocated ring buffer is never freed, do so in the cleanup path.
      
      Fixes: f446b570 ("bpf/selftests: Update the IMA test to use BPF ring buffer")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-9-memxor@gmail.com
• selftests/bpf: Fix fd cleanup in sk_lookup test · c3fc706e
      Kumar Kartikeya Dwivedi authored
      Similar to the fix in commit:
      e31eec77 ("bpf: selftests: Fix fd cleanup in get_branch_snapshot")
      
We use a designated initializer to set the fds to -1, without breaking on
future changes to the MAX_SERVER constant denoting the array size.
      
      The particular close(0) occurs on non-reuseport tests, so it can be seen
      with -n 115/{2,3} but not 115/4. This can cause problems with future
      tests if they depend on BTF fd never being acquired as fd 0, breaking
      internal libbpf assumptions.
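For illustration, a minimal standalone sketch of that initializer pattern
(the constant value and names are illustrative, not the test's actual code):

  #include <unistd.h>

  #define MAX_SERVER 5

  int main(void)
  {
          /* Range designated initializer (GCC/Clang extension): every
           * slot starts at -1, however MAX_SERVER changes later. */
          int server_fds[MAX_SERVER] = { [0 ... MAX_SERVER - 1] = -1 };
          int i;

          /* ... the test opens some subset of the fds ... */

          /* cleanup never close()s an fd that was not opened */
          for (i = 0; i < MAX_SERVER; i++) {
                  if (server_fds[i] != -1)
                          close(server_fds[i]);
          }
          return 0;
  }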
      
      Fixes: 0ab5539f ("selftests/bpf: Tests for BPF_SK_LOOKUP attach point")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-8-memxor@gmail.com
• selftests/bpf: Add weak/typeless ksym test for light skeleton · 087cba79
      Kumar Kartikeya Dwivedi authored
Also, avoid using CO-RE features, as lskel doesn't support CO-RE yet.
Include both the light and libbpf skeletons in the same file to test both
of them together.
      
In c48e51c8 ("bpf: selftests: Add selftests for module kfunc support"),
I added support for generating both lskel and libbpf skel for a BPF
object; however, the name parameter for bpftool caused collisions when
both were included in the same file. This meant that every test needed a
separate file for the libbpf/light skeleton split instead of using
subtests.
      
      Change that by appending a "_lskel" suffix to the name for files using
      light skeleton, and convert all existing users.
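With the suffix in place, both skeletons for one object can coexist in a
single test file; a sketch assuming the bpftool-generated naming from this
patch (the object name and generated identifiers are illustrative):

  #include "test_ksyms_weak.skel.h"    /* libbpf skeleton */
  #include "test_ksyms_weak.lskel.h"   /* light skeleton, *_lskel names */

  void run_subtests(void)
  {
          /* the two generated types no longer collide */
          struct test_ksyms_weak *skel = test_ksyms_weak__open_and_load();
          struct test_ksyms_weak_lskel *lskel =
                  test_ksyms_weak_lskel__open_and_load();

          /* ... run the same assertions against both ... */

          test_ksyms_weak__destroy(skel);
          test_ksyms_weak_lskel__destroy(lskel);
  }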
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-7-memxor@gmail.com
• libbpf: Use O_CLOEXEC uniformly when opening fds · 92274e24
      Kumar Kartikeya Dwivedi authored
There are some instances where we don't use O_CLOEXEC when opening an
fd; fix these up. Otherwise, it is possible that a parallel fork causes
these fds to leak into a child process on execve.
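A minimal sketch of the pattern (the path here is only an example):

  #include <fcntl.h>

  int open_vmlinux_btf(void)
  {
          /* O_CLOEXEC makes close-on-exec atomic with the open itself,
           * so a concurrent fork()+execve() cannot inherit the fd. */
          return open("/sys/kernel/btf/vmlinux", O_RDONLY | O_CLOEXEC);
  }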
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-6-memxor@gmail.com
• libbpf: Ensure that BPF syscall fds are never 0, 1, or 2 · 549a6323
      Kumar Kartikeya Dwivedi authored
Add a simple wrapper for passing an fd and getting a new one >= 3 if it
is one of 0, 1, or 2. There are two primary reasons to make this change:
First, libbpf relies on the assumption that a certain BPF fd is never 0
(e.g. most recently noticed in [0]). Second, Alexei pointed out in [1]
that some environments reset stdin, stdout, and stderr if they notice an
invalid fd at these numbers. To protect against both these cases, switch
all internal BPF syscall wrappers in libbpf to always return an fd >= 3.
We only need to modify the syscall wrappers, not other code that checks
for a valid fd with >= 0; this avoids pointless churn, and that remains a
valid assumption. The cost paid is two additional syscalls if the fd is
in range [0, 2].
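A sketch of such a wrapper using fcntl(F_DUPFD_CLOEXEC); the name matches
the ensure_good_fd mentioned in the changelog above, but the details here
are illustrative:

  #include <fcntl.h>
  #include <unistd.h>
  #include <errno.h>

  /* If fd is 0-2, dup it to the first free fd >= 3 and close the
   * original; otherwise return it unchanged. Costs two extra syscalls
   * only when fd is in [0, 2]. */
  static int ensure_good_fd(int fd)
  {
          int old_fd = fd, saved_errno;

          if (fd < 0 || fd >= 3)
                  return fd;
          fd = fcntl(old_fd, F_DUPFD_CLOEXEC, 3);
          saved_errno = errno;
          close(old_fd);
          errno = saved_errno;
          return fd;
  }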
      
        [0]: e31eec77 ("bpf: selftests: Fix fd cleanup in get_branch_snapshot")
  [1]: https://lore.kernel.org/bpf/CAADnVQKVKY8o_3aU8Gzke443+uHa-eGoM0h7W4srChMXU1S4Bg@mail.gmail.com

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-5-memxor@gmail.com
• libbpf: Add weak ksym support to gen_loader · 585a3571
      Kumar Kartikeya Dwivedi authored
This extends the existing ksym relocation code to also support relocating
weak ksyms. Care needs to be taken to zero out the src_reg (currently
BPF_PSEUDO_BTF_ID, always set for gen_loader by bpf_object__relocate_data)
when the BTF ID lookup fails at runtime. This is not a problem for
libbpf as it only sets ext->is_set when the BTF ID lookup succeeds (and only
proceeds in case of failure if ext->is_weak, leading to src_reg
remaining 0 for a weak unresolved ksym).
      
A pattern similar to emit_relo_kfunc_btf is followed: first store the
default values, then jump over the actual stores in case of an error.
The src_reg adjustment must also be performed when copying the
populated instruction, so depending on whether the copied insn[0].imm
is 0 or not, we decide whether to jump over the adjustment.

We cannot reach that point unless the ksym was weak and resolved and
zeroed out, as emit_check_err will cause us to jump to the cleanup
label otherwise, so we do not need to recheck whether the ksym is weak
before doing the adjustment after copying the BTF ID and BTF FD.

This is consistent with how libbpf relocates weak ksyms. Logging
statements are added to show the relocation result and aid debugging.
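For reference, a minimal sketch of what a weak ksym looks like on the
BPF C side (the symbol and section names are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* Weak ksym: the program still loads when the symbol cannot be
   * resolved at load time; its address then compares equal to NULL. */
  extern const struct rq runqueues __ksym __attribute__((weak));

  SEC("raw_tp/sys_enter")
  int weak_ksym_prog(void *ctx)
  {
          if (&runqueues)
                  bpf_printk("runqueues resolved");
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";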
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-4-memxor@gmail.com
• libbpf: Add typeless ksym support to gen_loader · c24941cd
      Kumar Kartikeya Dwivedi authored
This uses the bpf_kallsyms_lookup_name helper added in previous patches
to relocate typeless ksyms. The return value -ENOENT can be ignored, and
the value written to 'res' can be stored directly into the insn, as it is
overwritten with 0 on lookup failure. For repeated symbols, we can simply
copy the previously populated bpf_insn.

Also, we need to take care not to close fds for a typeless ksym_desc, so
reuse the 'off' member's space to add a marker for typeless ksyms and use
that to skip them in cleanup_relos.

We add an emit_ksym_relo_log helper that avoids duplicating common
logging instructions between typeless and weak ksyms (for a future commit).
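For reference, a typeless ksym is an extern declared without type
information; a minimal sketch on the BPF C side (symbol name illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* No BTF type information: the address is resolved via kallsyms
   * (bpf_kallsyms_lookup_name in the gen_loader case). */
  extern const void bpf_prog_active __ksym;

  SEC("raw_tp/sys_enter")
  int typeless_ksym_prog(void *ctx)
  {
          unsigned long addr = (unsigned long)&bpf_prog_active;

          bpf_printk("bpf_prog_active at %lx", addr);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";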
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-3-memxor@gmail.com
• bpf: Add bpf_kallsyms_lookup_name helper · d6aef08a
      Kumar Kartikeya Dwivedi authored
      This helper allows us to get the address of a kernel symbol from inside
      a BPF_PROG_TYPE_SYSCALL prog (used by gen_loader), so that we can
      relocate typeless ksym vars.
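A minimal sketch of how a BPF_PROG_TYPE_SYSCALL program might call the
helper (the symbol name is illustrative, and the snippet assumes helper
definitions from up-to-date headers):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  SEC("syscall")
  int lookup(void *ctx)
  {
          __u64 addr = 0;

          /* name_sz must count the NUL terminator; on failure the
           * helper returns -ENOENT and 'addr' is left as 0. */
          bpf_kallsyms_lookup_name("jiffies", sizeof("jiffies"), 0, &addr);
          return addr ? 0 : 1;
  }

  char LICENSE[] SEC("license") = "GPL";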
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-2-memxor@gmail.com
• Merge branch 'Implement bloom filter map' · 2895f48f
      Alexei Starovoitov authored
      Joanne Koong says:
      
      ====================
      
      This patchset adds a new kind of bpf map: the bloom filter map.
A bloom filter is a space-efficient probabilistic data structure
used to quickly test whether an element exists in a set.
For a brief overview of how bloom filters work,
https://en.wikipedia.org/wiki/Bloom_filter may be helpful.
      
      One example use-case is an application leveraging a bloom filter
      map to determine whether a computationally expensive hashmap
      lookup can be avoided. If the element was not found in the bloom
      filter map, the hashmap lookup can be skipped.
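A sketch of that pattern on the BPF side (map names, sizes, and the
attach point are illustrative; assumes kernel headers that define
BPF_MAP_TYPE_BLOOM_FILTER):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_BLOOM_FILTER);
          __uint(max_entries, 10000);
          __type(value, __u64);
  } bloom SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 10000);
          __type(key, __u64);
          __type(value, __u64);
  } hmap SEC(".maps");

  SEC("raw_tp/sys_enter")
  int bloom_first(void *ctx)
  {
          __u64 key = bpf_get_current_pid_tgid();
          __u64 *val;

          /* A non-zero return from peek means "definitely not
           * present": skip the costlier hashmap lookup entirely. */
          if (bpf_map_peek_elem(&bloom, &key) != 0)
                  return 0;
          val = bpf_map_lookup_elem(&hmap, &key);
          if (val)
                  (*val)++;
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";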
      
      This patchset includes benchmarks for testing the performance of
      the bloom filter for different entry sizes and different number of
      hash functions used, as well as comparisons for hashmap lookups
      with vs. without the bloom filter.
      
      A high level overview of this patchset is as follows:
      1/5 - kernel changes for adding bloom filter map
      2/5 - libbpf changes for adding map_extra flags
      3/5 - tests for the bloom filter map
      4/5 - benchmarks for bloom filter lookup/update throughput and false positive
      rate
      5/5 - benchmarks for how hashmap lookups perform with vs. without the bloom
      filter
      
      v5 -> v6:
      * in 1/5: remove "inline" from the hash function, add check in syscall to
      fail out in cases where map_extra is not 0 for non-bloom-filter maps,
      fix alignment matching issues, move "map_extra flags" comments to inside
      the bpf_attr struct, add bpf_map_info map_extra changes here, add map_extra
      assignment in bpf_map_get_info_by_fd, change hash value_size to u32 instead of
      a u64
      * in 2/5: remove bpf_map_info map_extra changes, remove TODO comment about
      extending BTF arrays to cover u64s, cast to unsigned long long for %llx when
      printing out map_extra flags
      * in 3/5: use __type(value, ...) instead of __uint(value_size, ...) for values
      and keys
      * in 4/5: fix wrong bounds for the index when iterating through random values,
update commit message to include update+lookup benchmark results for 8-byte
and 64-byte value sizes, remove explicit global bool initialization to false
      for hashmap_use_bloom and count_false_hits variables
      
      v4 -> v5:
      * Change the "bitset map with bloom filter capabilities" to a bloom filter map
      with max_entries signifying the number of unique entries expected in the bloom
      filter, remove bitset tests
      * Reduce verbiage by changing "bloom_filter" to "bloom", and renaming progs to
      more concise names.
      * in 2/5: remove "map_extra" from struct definitions that are frozen, create a
      "bpf_create_map_params" struct to propagate map_extra to the kernel at map
      creation time, change map_extra to __u64
      * in 4/5: check pthread condition variable in a loop when generating initial
      map data, remove "err" checks where not pragmatic, generate random values
      for the hashmap in the setup() instead of in the bpf program, add check_args()
      for checking that there aren't more requested entries than possible unique
      entries for the specified value size
      * in 5/5: Update commit message with updated benchmark data
      
      v3 -> v4:
      * Generalize the bloom filter map to be a bitset map with bloom filter
      capabilities
      * Add map_extra flags; pass in nr_hash_funcs through lower 4 bits of map_extra
      for the bitset map
      * Add tests for the bitset map (non-bloom filter) functionality
      * In the benchmarks, stats are computed only as monotonic increases, and place
      stats in a struct instead of as a percpu_array bpf map
      
      v2 -> v3:
      * Add libbpf changes for supporting nr_hash_funcs, instead of passing the
      number of hash functions through map_flags.
      * Separate the hashing logic in kernel/bpf/bloom_filter.c into a helper
      function
      
      v1 -> v2:
      * Remove libbpf changes, and pass the number of hash functions through
      map_flags instead.
      * Default to using 5 hash functions if no number of hash functions
      is specified.
      * Use set_bit instead of spinlocks in the bloom filter bitmap. This
      improved the speed significantly. For example, using 5 hash functions
      with 100k entries, there was roughly a 35% speed increase.
      * Use jhash2 (instead of jhash) for u32-aligned value sizes. This
      increased the speed by roughly 5 to 15%. When using jhash2 on value
      sizes non-u32 aligned (truncating any remainder bits), there was not
      a noticeable difference.
      * Add test for using the bloom filter as an inner map.
      * Reran the benchmarks, updated the commit messages to correspond to
      the new results.
      ====================
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
• bpf/benchs: Add benchmarks for comparing hashmap lookups w/ vs. w/out bloom filter · f44bc543
      Joanne Koong authored
      This patch adds benchmark tests for comparing the performance of hashmap
      lookups without the bloom filter vs. hashmap lookups with the bloom filter.
      
      Checking the bloom filter first for whether the element exists should
      overall enable a higher throughput for hashmap lookups, since if the
      element does not exist in the bloom filter, we can avoid a costly lookup in
      the hashmap.
      
      On average, using 5 hash functions in the bloom filter tended to perform
      the best across the widest range of different entry sizes. The benchmark
      results using 5 hash functions (running on 8 threads on a machine with one
      numa node, and taking the average of 3 runs) were roughly as follows:
      
      value_size = 4 bytes -
      	10k entries: 30% faster
      	50k entries: 40% faster
      	100k entries: 40% faster
	500k entries: 70% faster
      	1 million entries: 90% faster
      	5 million entries: 140% faster
      
      value_size = 8 bytes -
      	10k entries: 30% faster
      	50k entries: 40% faster
      	100k entries: 50% faster
	500k entries: 80% faster
      	1 million entries: 100% faster
      	5 million entries: 150% faster
      
      value_size = 16 bytes -
      	10k entries: 20% faster
      	50k entries: 30% faster
      	100k entries: 35% faster
	500k entries: 65% faster
      	1 million entries: 85% faster
      	5 million entries: 110% faster
      
      value_size = 40 bytes -
      	10k entries: 5% faster
      	50k entries: 15% faster
      	100k entries: 20% faster
	500k entries: 65% faster
      	1 million entries: 75% faster
      	5 million entries: 120% faster
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211027234504.30744-6-joannekoong@fb.com
• bpf/benchs: Add benchmark tests for bloom filter throughput + false positive · 57fd1c63
      Joanne Koong authored
      This patch adds benchmark tests for the throughput (for lookups + updates)
      and the false positive rate of bloom filter lookups, as well as some
      minor refactoring of the bash script for running the benchmarks.
      
These benchmarks show that as the number of hash functions increases,
the throughput and the false positive rate of the bloom filter decrease.
From the benchmark data, the approximate average false positive rates
are roughly as follows:
      
      1 hash function = ~30%
      2 hash functions = ~15%
      3 hash functions = ~5%
      4 hash functions = ~2.5%
      5 hash functions = ~1%
      6 hash functions = ~0.5%
      7 hash functions  = ~0.35%
      8 hash functions = ~0.15%
      9 hash functions = ~0.1%
      10 hash functions = ~0%
      
      For reference data, the benchmarks run on one thread on a machine
      with one numa node for 1 to 5 hash functions for 8-byte and 64-byte
      values are as follows:
      
      1 hash function:
        50k entries
      	8-byte value
      	    Lookups - 51.1 M/s operations
      	    Updates - 33.6 M/s operations
      	    False positive rate: 24.15%
      	64-byte value
      	    Lookups - 15.7 M/s operations
      	    Updates - 15.1 M/s operations
      	    False positive rate: 24.2%
        100k entries
      	8-byte value
      	    Lookups - 51.0 M/s operations
      	    Updates - 33.4 M/s operations
      	    False positive rate: 24.04%
      	64-byte value
      	    Lookups - 15.6 M/s operations
      	    Updates - 14.6 M/s operations
      	    False positive rate: 24.06%
        500k entries
      	8-byte value
      	    Lookups - 50.5 M/s operations
      	    Updates - 33.1 M/s operations
      	    False positive rate: 27.45%
      	64-byte value
      	    Lookups - 15.6 M/s operations
      	    Updates - 14.2 M/s operations
      	    False positive rate: 27.42%
        1 mil entries
      	8-byte value
      	    Lookups - 49.7 M/s operations
      	    Updates - 32.9 M/s operations
      	    False positive rate: 27.45%
      	64-byte value
      	    Lookups - 15.4 M/s operations
      	    Updates - 13.7 M/s operations
      	    False positive rate: 27.58%
        2.5 mil entries
      	8-byte value
      	    Lookups - 47.2 M/s operations
      	    Updates - 31.8 M/s operations
      	    False positive rate: 30.94%
      	64-byte value
      	    Lookups - 15.3 M/s operations
      	    Updates - 13.2 M/s operations
      	    False positive rate: 30.95%
        5 mil entries
      	8-byte value
      	    Lookups - 41.1 M/s operations
      	    Updates - 28.1 M/s operations
      	    False positive rate: 31.01%
      	64-byte value
      	    Lookups - 13.3 M/s operations
      	    Updates - 11.4 M/s operations
      	    False positive rate: 30.98%
      
      2 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 34.1 M/s operations
      	    Updates - 20.1 M/s operations
      	    False positive rate: 9.13%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.9 M/s operations
      	    False positive rate: 9.21%
        100k entries
      	8-byte value
      	    Lookups - 33.7 M/s operations
      	    Updates - 18.9 M/s operations
      	    False positive rate: 9.13%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.7 M/s operations
      	    False positive rate: 9.19%
        500k entries
      	8-byte value
      	    Lookups - 32.7 M/s operations
      	    Updates - 18.1 M/s operations
      	    False positive rate: 12.61%
      	64-byte value
      	    Lookups - 8.4 M/s operations
      	    Updates - 7.5 M/s operations
      	    False positive rate: 12.61%
        1 mil entries
      	8-byte value
      	    Lookups - 30.6 M/s operations
      	    Updates - 18.9 M/s operations
      	    False positive rate: 12.54%
      	64-byte value
      	    Lookups - 8.0 M/s operations
      	    Updates - 7.0 M/s operations
      	    False positive rate: 12.52%
        2.5 mil entries
      	8-byte value
      	    Lookups - 25.3 M/s operations
      	    Updates - 16.7 M/s operations
      	    False positive rate: 16.77%
      	64-byte value
      	    Lookups - 7.9 M/s operations
      	    Updates - 6.5 M/s operations
      	    False positive rate: 16.88%
        5 mil entries
      	8-byte value
      	    Lookups - 20.8 M/s operations
      	    Updates - 14.7 M/s operations
      	    False positive rate: 16.78%
      	64-byte value
      	    Lookups - 7.0 M/s operations
      	    Updates - 6.0 M/s operations
      	    False positive rate: 16.78%
      
      3 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 25.1 M/s operations
      	    Updates - 14.6 M/s operations
      	    False positive rate: 7.65%
      	64-byte value
      	    Lookups - 5.8 M/s operations
      	    Updates - 5.5 M/s operations
      	    False positive rate: 7.58%
        100k entries
      	8-byte value
      	    Lookups - 24.7 M/s operations
      	    Updates - 14.1 M/s operations
      	    False positive rate: 7.71%
      	64-byte value
      	    Lookups - 5.8 M/s operations
      	    Updates - 5.3 M/s operations
      	    False positive rate: 7.62%
        500k entries
      	8-byte value
      	    Lookups - 22.9 M/s operations
      	    Updates - 13.9 M/s operations
      	    False positive rate: 2.62%
      	64-byte value
      	    Lookups - 5.6 M/s operations
      	    Updates - 4.8 M/s operations
      	    False positive rate: 2.7%
        1 mil entries
      	8-byte value
      	    Lookups - 19.8 M/s operations
      	    Updates - 12.6 M/s operations
      	    False positive rate: 2.60%
      	64-byte value
      	    Lookups - 5.3 M/s operations
      	    Updates - 4.4 M/s operations
      	    False positive rate: 2.69%
        2.5 mil entries
      	8-byte value
      	    Lookups - 16.2 M/s operations
      	    Updates - 10.7 M/s operations
      	    False positive rate: 4.49%
      	64-byte value
      	    Lookups - 4.9 M/s operations
      	    Updates - 4.1 M/s operations
      	    False positive rate: 4.41%
        5 mil entries
      	8-byte value
      	    Lookups - 18.8 M/s operations
      	    Updates - 9.2 M/s operations
      	    False positive rate: 4.45%
      	64-byte value
      	    Lookups - 5.2 M/s operations
      	    Updates - 3.9 M/s operations
      	    False positive rate: 4.54%
      
      4 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 19.7 M/s operations
      	    Updates - 11.1 M/s operations
      	    False positive rate: 1.01%
      	64-byte value
      	    Lookups - 4.4 M/s operations
      	    Updates - 4.0 M/s operations
      	    False positive rate: 1.00%
        100k entries
      	8-byte value
      	    Lookups - 19.5 M/s operations
      	    Updates - 10.9 M/s operations
      	    False positive rate: 1.00%
      	64-byte value
      	    Lookups - 4.3 M/s operations
      	    Updates - 3.9 M/s operations
      	    False positive rate: 0.97%
        500k entries
      	8-byte value
      	    Lookups - 18.2 M/s operations
      	    Updates - 10.6 M/s operations
      	    False positive rate: 2.05%
      	64-byte value
      	    Lookups - 4.3 M/s operations
      	    Updates - 3.7 M/s operations
      	    False positive rate: 2.05%
        1 mil entries
      	8-byte value
      	    Lookups - 15.5 M/s operations
      	    Updates - 9.6 M/s operations
      	    False positive rate: 1.99%
      	64-byte value
      	    Lookups - 4.0 M/s operations
      	    Updates - 3.4 M/s operations
      	    False positive rate: 1.99%
        2.5 mil entries
      	8-byte value
      	    Lookups - 13.8 M/s operations
      	    Updates - 7.7 M/s operations
      	    False positive rate: 3.91%
      	64-byte value
      	    Lookups - 3.7 M/s operations
      	    Updates - 3.6 M/s operations
      	    False positive rate: 3.78%
        5 mil entries
      	8-byte value
      	    Lookups - 13.0 M/s operations
      	    Updates - 6.9 M/s operations
      	    False positive rate: 3.93%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.7 M/s operations
      	    False positive rate: 3.39%
      
      5 hash functions:
        50k entries
      	8-byte value
      	    Lookups - 16.4 M/s operations
      	    Updates - 9.1 M/s operations
      	    False positive rate: 0.78%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.2 M/s operations
      	    False positive rate: 0.77%
        100k entries
      	8-byte value
      	    Lookups - 16.3 M/s operations
      	    Updates - 9.0 M/s operations
      	    False positive rate: 0.79%
      	64-byte value
      	    Lookups - 3.5 M/s operations
      	    Updates - 3.2 M/s operations
      	    False positive rate: 0.78%
        500k entries
      	8-byte value
      	    Lookups - 15.1 M/s operations
      	    Updates - 8.8 M/s operations
      	    False positive rate: 1.82%
      	64-byte value
      	    Lookups - 3.4 M/s operations
      	    Updates - 3.0 M/s operations
      	    False positive rate: 1.78%
        1 mil entries
      	8-byte value
      	    Lookups - 13.2 M/s operations
      	    Updates - 7.8 M/s operations
      	    False positive rate: 1.81%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.8 M/s operations
      	    False positive rate: 1.80%
        2.5 mil entries
      	8-byte value
      	    Lookups - 10.5 M/s operations
      	    Updates - 5.9 M/s operations
      	    False positive rate: 0.29%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.4 M/s operations
      	    False positive rate: 0.28%
        5 mil entries
      	8-byte value
      	    Lookups - 9.6 M/s operations
      	    Updates - 5.7 M/s operations
      	    False positive rate: 0.30%
      	64-byte value
      	    Lookups - 3.2 M/s operations
      	    Updates - 2.7 M/s operations
      	    False positive rate: 0.30%
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211027234504.30744-5-joannekoong@fb.com
• selftests/bpf: Add bloom filter map test cases · ed9109ad
      Joanne Koong authored
      This patch adds test cases for bpf bloom filter maps. They include tests
      checking against invalid operations by userspace, tests for using the
      bloom filter map as an inner map, and a bpf program that queries the
      bloom filter map for values added by a userspace program.
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211027234504.30744-4-joannekoong@fb.com
• libbpf: Add "map_extra" as a per-map-type extra flag · 47512102
      Joanne Koong authored
      This patch adds the libbpf infrastructure for supporting a
      per-map-type "map_extra" field, whose definition will be
      idiosyncratic depending on map type.
      
      For example, for the bloom filter map, the lower 4 bits of
      map_extra is used to denote the number of hash functions.
      
      Please note that until libbpf 1.0 is here, the
      "bpf_create_map_params" struct is used as a temporary
      means for propagating the map_extra field to the kernel.
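A sketch of setting the field before load, assuming the
bpf_map__set_map_extra() setter from this patch (the object and map
names are illustrative):

  #include <bpf/libbpf.h>

  int load_with_three_hashes(void)
  {
          struct bpf_object *obj;
          struct bpf_map *map;

          obj = bpf_object__open("bloom.bpf.o");
          if (libbpf_get_error(obj))
                  return -1;
          map = bpf_object__find_map_by_name(obj, "bloom");
          if (!map)
                  return -1;
          /* bloom filter map: lower 4 bits of map_extra hold the
           * number of hash functions */
          bpf_map__set_map_extra(map, 3);
          return bpf_object__load(obj);
  }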
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211027234504.30744-3-joannekoong@fb.com
• bpf: Add bloom filter map implementation · 9330986c
      Joanne Koong authored
      This patch adds the kernel-side changes for the implementation of
      a bpf bloom filter map.
      
The bloom filter map supports peek (determining whether an element
is present in the map) and push (adding an element to the map)
operations. These operations are exposed to userspace applications
through the already existing syscalls in the following way:
      
      BPF_MAP_LOOKUP_ELEM -> peek
      BPF_MAP_UPDATE_ELEM -> push
      
The bloom filter map does not have keys, only values. In light of
this, the bloom filter map's API matches that of queue/stack maps:
      user applications use BPF_MAP_LOOKUP_ELEM/BPF_MAP_UPDATE_ELEM
      which correspond internally to bpf_map_peek_elem/bpf_map_push_elem,
      and bpf programs must use the bpf_map_peek_elem and bpf_map_push_elem
      APIs to query or add an element to the bloom filter map. When the
      bloom filter map is created, it must be created with a key_size of 0.
      
      For updates, the user will pass in the element to add to the map
      as the value, with a NULL key. For lookups, the user will pass in the
      element to query in the map as the value, with a NULL key. In the
      verifier layer, this requires us to modify the argument type of
      a bloom filter's BPF_FUNC_map_peek_elem call to ARG_PTR_TO_MAP_VALUE;
      as well, in the syscall layer, we need to copy over the user value
      so that in bpf_map_peek_elem, we know which specific value to query.
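From userspace, a sketch of push and peek against an already-created
bloom filter map fd, using the standard libbpf syscall wrappers (values
and error handling are illustrative):

  #include <bpf/bpf.h>

  int push_and_peek(int map_fd)
  {
          __u64 value = 1234;
          int err;

          /* push: BPF_MAP_UPDATE_ELEM with a NULL key */
          err = bpf_map_update_elem(map_fd, NULL, &value, BPF_ANY);
          if (err)
                  return err;

          /* peek: BPF_MAP_LOOKUP_ELEM with a NULL key; success means
           * the value may be present, ENOENT means definitely absent */
          return bpf_map_lookup_elem(map_fd, NULL, &value);
  }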
      
      A few things to please take note of:
       * If there are any concurrent lookups + updates, the user is
      responsible for synchronizing this to ensure no false negative lookups
      occur.
       * The number of hashes to use for the bloom filter is configurable from
      userspace. If no number is specified, the default used will be 5 hash
      functions. The benchmarks later in this patchset can help compare the
      performance of using different number of hashes on different entry
      sizes. In general, using more hashes decreases both the false positive
      rate and the speed of a lookup.
       * Deleting an element in the bloom filter map is not supported.
       * The bloom filter map may be used as an inner map.
       * The "max_entries" size that is specified at map creation time is used
      to approximate a reasonable bitmap size for the bloom filter, and is not
      otherwise strictly enforced. If the user wishes to insert more entries
      into the bloom filter than "max_entries", they may do so but they should
      be aware that this may lead to a higher false positive rate.
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211027234504.30744-2-joannekoong@fb.com
• bpf, tests: Add module parameter test_suite to test_bpf module · b066abba
      Tiezhu Yang authored
After commit 9298e63e ("bpf/tests: Add exhaustive tests of ALU
operand magnitudes"), modprobing test_bpf.ko with JIT enabled on
mips64 triggers a segmentation fault:
      
        [...]
        ALU64_MOV_X: all register value magnitudes jited:1
        Break instruction in kernel code[#1]
        [...]
      
It seems that the related JIT implementations of some test cases
in test_bpf() have problems. At this moment, I do not care about
the segmentation fault; I just want to verify the tail call test
cases.
      
      Based on the above background and motivation, add the following
      module parameter test_suite to the test_bpf.ko:
      
        test_suite=<string>: only the specified test suite will be run, the
        string can be "test_bpf", "test_tail_calls" or "test_skb_segment".
      
If test_suite is not specified, but test_id, test_name or test_range
is specified, 'test_bpf' is set as the default test suite. This makes
it possible to run specific test cases without having to pass
test_suite explicitly.
      
Any invalid test suite will result in -EINVAL being returned and no
tests being run. If test_suite is not specified or is specified as an
empty string, the current logic is unchanged: all of the test cases
will be run.
      
      Here are some test results:
      
       # dmesg -c
       # modprobe test_bpf
       # dmesg | grep Summary
       test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed]
       test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed]
       test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_bpf
       # dmesg | tail -1
       test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_tail_calls
       # dmesg
       test_bpf: #0 Tail call leaf jited:0 21 PASS
       [...]
       test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS
       test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_skb_segment
       # dmesg
       test_bpf: #0 gso_with_rx_frags PASS
       test_bpf: #1 gso_linear_no_head_frag PASS
       test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_id=1
       # dmesg
       test_bpf: test_bpf: set 'test_bpf' as the default test_suite.
       test_bpf: #1 TXA jited:0 54 51 50 PASS
       test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_bpf test_name=TXA
       # dmesg
       test_bpf: #1 TXA jited:0 54 50 51 PASS
       test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_tail_calls test_range=6,7
       # dmesg
       test_bpf: #6 Tail call error path, NULL target jited:0 41 PASS
       test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS
       test_bpf: test_tail_calls: Summary: 2 PASSED, 0 FAILED, [0/2 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_skb_segment test_id=1
       # dmesg
       test_bpf: #1 gso_linear_no_head_frag PASS
       test_bpf: test_skb_segment: Summary: 1 PASSED, 0 FAILED
      
By the way, the above segmentation fault has been fixed in the latest
bpf-next tree, which contains the mips64 JIT rework.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Link: https://lore.kernel.org/bpf/1635384321-28128-1-git-send-email-yangtiezhu@loongson.cn
  5. 27 Oct, 2021 1 commit