26 Feb, 2021 3 commits
    • tools, bpf_asm: Hard error on out of range jumps · 04883a07
      Ian Denhardt authored
      Per discussion at [0] this was originally introduced as a warning due
      to concerns about breaking existing code, but a hard error probably
      makes more sense, especially given that concerns about breakage were
      only speculation.
      
        [0] https://lore.kernel.org/bpf/c964892195a6b91d20a67691448567ef528ffa6d.camel@linux.ibm.com/T/#t

      Signed-off-by: Ian Denhardt <ian@zenhack.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/bpf/a6b6c7516f5d559049d669968e953b4a8d7adea3.1614201868.git.ian@zenhack.net
    • Merge branch 'bpf: add bpf_for_each_map_elem() helper' · cc0f8353
      Alexei Starovoitov authored
      Yonghong Song says:
      
      ====================
      
      This patch set introduces the bpf_for_each_map_elem() helper.
      The helper permits a bpf program to iterate through all elements
      of a particular map.
      
      The work was originally inspired by an internal discussion where
      firewall rules are kept in a map and a bpf prog wants to
      check a packet's 5-tuple against all rules in the map.
      A bounded loop can be used, but it has a few drawbacks:
      as the loop iteration count goes up, verification time goes up too,
      and for really large maps, verification may fail.
      A helper which abstracts out the loop itself does not have
      this verification-time issue.
      
      A recent discussion in [1] involves iterating over all hash map
      elements in a bpf program. Currently, iterating over all hashmap
      elements in a bpf program is not easy if the key space is really
      big, which makes a helper that abstracts out the loop itself
      even more meaningful.
      
      The proposed helper signature looks like:
        long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
      where callback_fn is a static function and callback_ctx is
      a piece of data allocated on the caller stack which can be
      accessed by the callback_fn. The callback_fn signature might be
      different for different maps. For example, for hash/array maps,
      the signature is
        long callback_fn(map, key, val, callback_ctx)
      
      In the rest of the series, Patches 1/2/3/4 did some refactoring. Patch 5
      implemented core kernel support for the helper. Patches 6 and 7
      added hashmap and arraymap support. Patches 8/9 added libbpf
      support. Patch 10 added bpftool support. Patches 11 and 12 added
      selftests for hashmap and arraymap.
      
      [1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/
      
      Changelogs:
        v4 -> v5:
          - rebase on top of bpf-next.
        v3 -> v4:
          - better refactoring of check_func_call(), calculate subprogno outside
            of __check_func_call() helper. (Andrii)
          - better documentation (like the list of supported maps and their
            callback signatures) in uapi header. (Andrii)
          - implement and use ASSERT_LT in selftests. (Andrii)
          - a few other minor changes.
        v2 -> v3:
          - add comments in retrieve_ptr_limit(), which is in sanitize_ptr_alu(),
            to clarify the code is not executed for PTR_TO_MAP_KEY handling,
            but code is manually tested. (Alexei)
          - require BTF for callback function. (Alexei)
          - simplify hashmap/arraymap callback return handling as return value
            [0, 1] has been enforced by the verifier. (Alexei)
          - also mark global subprog (if used in ld_imm64) as RELO_SUBPROG_ADDR. (Andrii)
          - handle the condition to mark RELO_SUBPROG_ADDR properly. (Andrii)
          - make bpftool subprog insn offset dumping consistent with pcrel calls. (Andrii)
        v1 -> v2:
          - setup callee frame in check_helper_call() and then proceed to verify
            helper return value as normal (Alexei)
          - use meta data to keep track of map/func pointer to avoid hard coding
            the register number (Alexei)
          - verify callback_fn return value range [0, 1]. (Alexei)
          - add migrate_{disable, enable} to ensure percpu value is the one
            bpf program expects to see. (Alexei)
          - change bpf_for_each_map_elem() return value to the number of iterated
            elements. (Andrii)
          - Change libbpf pseudo_func relo name to RELO_SUBPROG_ADDR and use
            more rigid checking for the relocation. (Andrii)
          - Better format to print out subprog address with bpftool. (Andrii)
          - Use bpf_prog_test_run to trigger bpf run, instead of bpf_iter. (Andrii)
          - Other misc changes.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>