19 Feb, 2020 8 commits
    • selftests/bpf: Change llvm flag -mcpu=probe to -mcpu=v3 · 83250f2b
      Yonghong Song authored
      The latest llvm supports cpu version v3, which is cpu version v1
      plus additional 64-bit jmp insns and 32-bit jmp insn support.
      
      In the selftests/bpf Makefile, the llvm flag -mcpu=probe performs a
      runtime probe of the host system. Depending on the compilation
      environment, the probe may fail, e.g., due to a memlock issue, in
      which case llvm falls back to generating code for cpu version v1.
      This can cause confusion, as the same compiler and the same C code
      then produce different byte code in different environments.
      
      Let us change the llvm flag -mcpu=probe to -mcpu=v3 so the
      generated code will be the same regardless of the compilation
      environment.
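      
      Concretely, the change pins the cpu version on the clang command
      line (illustrative invocation, not the exact Makefile rule; prog.c
      stands in for any selftest source file):
      
          # before: clang -target bpf -O2 -mcpu=probe ...  (host-dependent)
          # after:
          clang -target bpf -O2 -mcpu=v3 -c prog.c -o prog.o
      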
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200219004236.2291125-1-yhs@fb.com
    • Merge branch 'bpf_read_branch_records' · 03aa3955
      Alexei Starovoitov authored
      Daniel Xu says:
      
      ====================
      Branch records are a CPU feature that can be configured to record
      certain branches that are taken during code execution. This data is
      particularly interesting for profile-guided optimizations. perf has had
      branch record support for a while, but the data collection can be a bit
      coarse-grained.
      
      We (Facebook) have seen in experiments that associating metadata with
      branch records can improve results (after postprocessing). We generally
      use bpf_probe_read_*() to get metadata out of userspace. That's why bpf
      support for branch records is useful.
      
      Aside from this particular use case, having branch data available to bpf
      progs can be useful to get stack traces out of userspace applications
      that omit frame pointers.
      
      Changes in v8:
      - Use globals instead of perf buffer
      - Call test_perf_branches__detach() before destroying skeleton
      - Fix typo in docs
      
      Changes in v7:
      - Const-ify and static-ify local var
      - Documentation formatting
      
      Changes in v6:
      - Move #ifdef a little to avoid unused variable warnings on !x86
      - Test negative condition in selftest (-EINVAL on improperly configured
        perf event)
      - Skip positive condition selftest on setups that don't support branch
        records
      
      Changes in v5:
      - Rename bpf_perf_prog_read_branches() -> bpf_read_branch_records()
      - Rename BPF_F_GET_BR_SIZE -> BPF_F_GET_BRANCH_RECORDS_SIZE
      - Squash tools/ bpf.h sync into selftest commit
      
      Changes in v4:
      - Add BPF_F_GET_BR_SIZE flag
      - Return -ENOENT on unsupported architectures
      - Only accept initialized memory in helper
      - Check buffer size is multiple of sizeof(struct perf_branch_entry)
      - Use bpf skeleton in selftest
      - Add commit messages
      - Spelling and formatting
      
      Changes in v3:
      - Document filling unused buffer with zero
      - Formatting fixes
      - Rebase
      
      Changes in v2:
      - Change to a bpf helper instead of context access
      - Avoid mentioning Intel specific things
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Add bpf_read_branch_records() selftest · 67306f84
      Daniel Xu authored
      Add a selftest to test:
      
      * default bpf_read_branch_records() behavior
      * BPF_F_GET_BRANCH_RECORDS_SIZE flag behavior
      * error path on non branch record perf events
      * using helper to write to stack
      * using helper to write to global
      
      On host with hardware counter support:
      
          # ./test_progs -t perf_branches
          #27/1 perf_branches_hw:OK
          #27/2 perf_branches_no_hw:OK
          #27 perf_branches:OK
          Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED
      
      On host without hardware counter support (VM):
      
          # ./test_progs -t perf_branches
          #27/1 perf_branches_hw:OK
          #27/2 perf_branches_no_hw:OK
          #27 perf_branches:OK
          Summary: 1/2 PASSED, 1 SKIPPED, 0 FAILED
      
      Also sync tools/include/uapi/linux/bpf.h.
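      
      For reference, a minimal sketch of how the userspace side of such a
      test can open a branch-sampling perf event (attribute values are
      illustrative, not copied from the selftest):
      
          #include <linux/perf_event.h>
          #include <sys/syscall.h>
          #include <unistd.h>
          
          /* Opening this event is what fails, and gets the "hw" subtest
           * skipped, on hosts without branch record support (e.g. VMs). */
          static int open_branch_sampling_event(void)
          {
                  struct perf_event_attr attr = {
                          .type = PERF_TYPE_HARDWARE,
                          .config = PERF_COUNT_HW_CPU_CYCLES,
                          .freq = 1,
                          .sample_freq = 1000,
                          .sample_type = PERF_SAMPLE_BRANCH_STACK,
                          .branch_sample_type = PERF_SAMPLE_BRANCH_USER |
                                                PERF_SAMPLE_BRANCH_ANY,
                  };
          
                  return syscall(__NR_perf_event_open, &attr, -1 /* pid */,
                                 0 /* cpu */, -1 /* group_fd */,
                                 PERF_FLAG_FD_CLOEXEC);
          }
      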
      Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200218030432.4600-3-dxu@dxuuu.xyz
    • bpf: Add bpf_read_branch_records() helper · fff7b643
      Daniel Xu authored
      Branch records are a CPU feature that can be configured to record
      certain branches that are taken during code execution. This data is
      particularly interesting for profile-guided optimizations. perf has had
      branch record support for a while, but the data collection can be a bit
      coarse-grained.
      
      We (Facebook) have seen in experiments that associating metadata with
      branch records can improve results (after postprocessing). We generally
      use bpf_probe_read_*() to get metadata out of userspace. That's why bpf
      support for branch records is useful.
      
      Aside from this particular use case, having branch data available to bpf
      progs can be useful to get stack traces out of userspace applications
      that omit frame pointers.
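      
      As a rough usage sketch (not part of the patch; the program and
      variable names here are made up), a perf_event bpf program could
      use the new helper like this:
      
          #include <linux/bpf.h>
          #include <linux/perf_event.h>
          #include <linux/bpf_perf_event.h>
          #include <bpf/bpf_helpers.h>
          
          #define MAX_ENTRIES 16
          
          /* Globals that userspace can read back via a bpf skeleton. */
          struct perf_branch_entry entries[MAX_ENTRIES];
          long total_size; /* bytes available in this sample, or -errno */
          long written;    /* bytes actually copied, or -errno */
          
          SEC("perf_event")
          int dump_branches(struct bpf_perf_event_data *ctx)
          {
                  /* With BPF_F_GET_BRANCH_RECORDS_SIZE, buf may be NULL:
                   * only report how many bytes of records are available. */
                  total_size = bpf_read_branch_records(ctx, NULL, 0,
                                          BPF_F_GET_BRANCH_RECORDS_SIZE);
          
                  /* Copy the records; the buffer size must be a multiple
                   * of sizeof(struct perf_branch_entry). */
                  written = bpf_read_branch_records(ctx, entries,
                                                    sizeof(entries), 0);
                  return 0;
          }
          
          char _license[] SEC("license") = "GPL";
      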
      Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200218030432.4600-2-dxu@dxuuu.xyz
    • Merge branch 'bpf-skmsg-simplify-restore' · 2f14b2d9
      Daniel Borkmann authored
      Jakub Sitnicki says:
      
      ====================
      This series has been split out from "Extend SOCKMAP to store listening
      sockets" [0]. I think it stands on its own, and makes the latter series
      smaller, which will make the review easier, hopefully.
      
      The essence is that we don't need to do a complicated dance in
      sk_psock_restore_proto, if we agree that the contract with tcp_update_ulp
      is to restore callbacks even when the socket doesn't use ULP. This is what
      tcp_update_ulp currently does, and we just make use of it.
      
      Series is accompanied by a test for a particularly tricky case of restoring
      callbacks when we have both sockmap and tls callbacks configured in
      sk->sk_prot.
      
      [0] https://lore.kernel.org/bpf/20200127131057.150941-1-jakub@cloudflare.com/
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • selftests/bpf: Test unhashing kTLS socket after removing from map · d1ba1204
      Jakub Sitnicki authored
      When a TCP socket gets inserted into a sockmap, its sk_prot callbacks get
      replaced with tcp_bpf callbacks built from regular tcp callbacks. If TLS
      gets enabled on the same socket, sk_prot callbacks get replaced once again,
      this time with kTLS callbacks built from tcp_bpf callbacks.
      
      Now, we allow removing a socket from a sockmap that has kTLS enabled. After
      removal, the socket remains configured with kTLS. This is where things
      get tricky.
      
      Since the socket has a set of sk_prot callbacks that are a mix of kTLS and
      tcp_bpf callbacks, we need to restore just the tcp_bpf callbacks to the
      original ones. At the moment, it comes down to the unhash operation.
      
      We had a regression recently because tcp_bpf callbacks were not cleared in
      this particular scenario of removing a kTLS socket from a sockmap. It got
      fixed in commit 4da6a196 ("bpf: Sockmap/tls, during free we may call
      tcp_bpf_unhash() in loop").
      
      Add a test that triggers the regression so that we don't reintroduce it in
      the future.
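      
      The triggering sequence looks roughly like this (sketch only; socket
      setup and error handling omitted, variable names hypothetical):
      
          #include <sys/socket.h>
          #include <netinet/tcp.h>   /* TCP_ULP */
          #include <unistd.h>
          #include <bpf/bpf.h>       /* libbpf map helpers */
          
          /* c: a connected TCP socket, map_fd: a BPF_MAP_TYPE_SOCKMAP. */
          static void trigger_ktls_restore(int c, int map_fd)
          {
                  int zero = 0;
          
                  /* tcp -> tcp_bpf callbacks */
                  bpf_map_update_elem(map_fd, &zero, &c, BPF_NOEXIST);
                  /* tcp_bpf -> kTLS callbacks built from tcp_bpf ones */
                  setsockopt(c, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls"));
                  /* must unwind only the tcp_bpf layer, keeping kTLS */
                  bpf_map_delete_elem(map_fd, &zero);
                  /* before commit 4da6a196 this could loop in
                   * tcp_bpf_unhash() */
                  close(c);
          }
      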
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-4-jakub@cloudflare.com
    • bpf, sk_msg: Don't clear saved sock proto on restore · a178b458
      Jakub Sitnicki authored
      There is no need to clear psock->sk_proto when restoring socket protocol
      callbacks in sk->sk_prot. The psock is about to get detached from the sock
      and eventually destroyed. At worst we will restore the protocol callbacks
      and the write callback twice.
      
      This makes reasoning about psock state easier. Once psock is initialized,
      we can count on psock->sk_proto always being set.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-3-jakub@cloudflare.com
    • bpf, sk_msg: Let ULP restore sk_proto and write_space callback · a4393861
      Jakub Sitnicki authored
      We don't need a fallback for when the socket is not using ULP.
      tcp_update_ulp handles this case exactly the same as we do in
      sk_psock_restore_proto. Get rid of the duplicated code.
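      
      Approximate shape of the restore helper once the series is applied
      (a sketch, not a verbatim copy; see include/linux/skmsg.h for the
      authoritative version):
      
          static inline void sk_psock_restore_proto(struct sock *sk,
                                                    struct sk_psock *psock)
          {
                  /* tcp_update_ulp() restores sk_prot and sk_write_space
                   * itself, whether or not the socket uses a ULP. */
                  tcp_update_ulp(sk, psock->sk_proto,
                                 psock->saved_write_space);
          }
      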
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-2-jakub@cloudflare.com