  1. 02 Jul, 2020 2 commits
    • bpf: selftests: Restore netns after each test · 811d7e37
      Martin KaFai Lau authored
      It is common for a networking test to create its own netns and make its
      own settings under this new netns (e.g. changing a tcp sysctl).  If the
      test forgets to restore the original netns, it can affect the results of
      other tests.
      
      This patch saves the original netns at the beginning and then restores it
      after every test.  Since the restore "setns()" is not expensive, it is
      done for all tests without tracking whether a test has created a new
      netns or not.
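
      A minimal sketch of this save/restore pattern (restore_netns() is the
      helper named below; the other names and the error handling here are
      illustrative, not the exact test_progs.c implementation):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        static int saved_netns_fd = -1;

        /* Save the original netns once at startup. */
        static void save_netns(void)
        {
                saved_netns_fd = open("/proc/self/ns/net", O_RDONLY);
                if (saved_netns_fd < 0)
                        perror("open(/proc/self/ns/net)");
        }

        /* Switch back to the original netns after every test. */
        static void restore_netns(void)
        {
                if (saved_netns_fd >= 0 && setns(saved_netns_fd, CLONE_NEWNET) < 0)
                        perror("setns(CLONE_NEWNET)");
        }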
      
      The new restore_netns() could also be called from test__end_subtest() so
      that each subtest would get an automatic netns reset.  However, the
      individual test would then lose the flexibility of having total control
      over the netns for its own subtests.  In some cases, forcing a test to do
      an unnecessary netns re-configuration for each subtest is time consuming:
      e.g. in my vm, forcing a netns re-configuration on each subtest in
      sk_assign.c increased the runtime from 1s to 8s.  On top of that,
      test_progs.c already does per-test (instead of per-subtest) cleanup for
      the cgroup.  Thus, this patch also does a per-test restore_netns().

      The only existing per-subtest cleanup is reset_affinity(), and no test
      depends on it, so it is removed from test__end_subtest() to give a
      consistent expectation to the individual tests: test_progs.c only ensures
      that any affinity/netns/cgroup change made by an earlier test does not
      affect the following tests.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200702004858.2103728-1-kafai@fb.com
    • bpf: selftests: A few improvements to network_helpers.c · 99126abe
      Martin KaFai Lau authored
      This patch makes a few changes to network_helpers.c:
      
       1) Enforce SO_RCVTIMEO and SO_SNDTIMEO
          This patch enforces a timeout on the network fds through the
          setsockopt options SO_RCVTIMEO and SO_SNDTIMEO (see the sketch
          after this list).

          It removes the need for SOCK_NONBLOCK, which requires more demanding
          timeout logic with epoll/select, e.g. epoll_create, epoll_ctl, and
          then epoll_wait for the timeout.
      
          That also removes the need for connect_wait() in
          cgroup_skb_sk_lookup.c, and the corresponding change is made there.
      
      2) start_server():
         Add optional addr_str and port to start_server().
          That removes the need for start_server_with_port().  The caller
          can pass addr_str==NULL and/or port==0.
      
          I have a future tcp-hdr-opt test that will pass a non-NULL addr_str,
          and it is generally useful for other future tests.
      
         "int timeout_ms" is also added to control the timeout
         on the "accept(listen_fd)".
      
      3) connect_to_fd(): Fully use the server_fd.
         The server sock address has already been obtained from
         getsockname(server_fd).  The sockaddr includes the family,
         so the "int family" arg is redundant.
      
          Since the server address is obtained from server_fd, there is little
          reason not to get the server's socket type from server_fd as well.
          getsockopt(server_fd) with SO_TYPE can be used to do that, so the
          "int type" arg is also removed (see the sketch after this list).
      
         "int timeout_ms" is added.
      
      4) connect_fd_to_fd():
         "int timeout_ms" is added.
          Some code is also refactored into connect_fd_to_addr(), which is
          shared with connect_to_fd().
      
      5) Preserve errno:
         Some callers need to check errno, e.g. cgroup_skb_sk_lookup.c.
          Changes are made to preserve it more consistently in
          save_errno_close() and log_err().
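
      A minimal sketch of the ideas in 1), 3) and 5) above: enforcing the
      socket timeouts with setsockopt(), deriving the server's address family
      and socket type from the server_fd itself, and preserving errno across
      close().  The helper names and exact error handling are illustrative,
      not the final network_helpers.c API:

        #include <errno.h>
        #include <sys/socket.h>
        #include <sys/time.h>
        #include <unistd.h>

        static int set_io_timeouts(int fd, int timeout_ms)
        {
                struct timeval tv = {
                        .tv_sec = timeout_ms / 1000,
                        .tv_usec = (timeout_ms % 1000) * 1000,
                };

                /* 1) Timeouts via SO_RCVTIMEO/SO_SNDTIMEO instead of SOCK_NONBLOCK + epoll. */
                if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) ||
                    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)))
                        return -1;
                return 0;
        }

        static int connect_to_fd_sketch(int server_fd, int timeout_ms)
        {
                struct sockaddr_storage addr;
                socklen_t addrlen = sizeof(addr), optlen = sizeof(int);
                int type, fd, saved_errno;

                /* 3) The family comes from the server's bound address... */
                if (getsockname(server_fd, (struct sockaddr *)&addr, &addrlen))
                        return -1;
                /* ...and the socket type from the server_fd itself. */
                if (getsockopt(server_fd, SOL_SOCKET, SO_TYPE, &type, &optlen))
                        return -1;

                fd = socket(addr.ss_family, type, 0);
                if (fd < 0)
                        return -1;
                if (set_io_timeouts(fd, timeout_ms) ||
                    connect(fd, (struct sockaddr *)&addr, addrlen)) {
                        /* 5) Preserve errno for the caller across close(). */
                        saved_errno = errno;
                        close(fd);
                        errno = saved_errno;
                        return -1;
                }
                return fd;
        }
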
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/20200702004852.2103003-1-kafai@fb.com
  2. 01 Jul, 2020 15 commits
  3. 30 Jun, 2020 2 commits
  4. 28 Jun, 2020 3 commits
    • Merge branch 'libbpf_autoload_knob' · afa12644
      Alexei Starovoitov authored
      Andrii Nakryiko says:
      
      ====================
      Add the ability to turn off the default auto-loading of each BPF program
      by libbpf on BPF object load.  This feature allows BPF applications to
      have optional functionality, which is only exercised on kernels that
      support the necessary features, while falling back to reduced/less
      performant functionality if the kernel is outdated.
      ====================
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Test auto-load disabling logic for BPF programs · 5712174c
      Andrii Nakryiko authored
      Validate that a BPF object containing a (multiply) broken BPF program can
      still be loaded successfully if that broken BPF program is disabled.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200625232629.3444003-3-andriin@fb.com
    • libbpf: Support disabling auto-loading BPF programs · d9297581
      Andrii Nakryiko authored
      Currently, bpf_object__load() (and, by extension, the skeleton's load)
      will always attempt to prepare, relocate, and load into the kernel every
      single BPF program found inside the BPF object file.  This is often
      convenient, the right thing to do, and what users expect.
      
      But there are plenty of cases (especially with BPF development constantly
      picking up the pace) where a BPF application is intended to work with old
      kernels, with a potentially reduced set of features, yet would like to
      take full advantage of newer kernels by employing extra BPF programs.
      This could be a choice of using fentry/fexit over kprobe/kretprobe if the
      kernel is recent enough and built with BTF, or a BPF program providing an
      optimized bpf_iter-based solution that user-space might want to use
      whenever it is available, and so on.
      
      With libbpf and BPF CO-RE in particular, it's advantageous not to have to
      maintain two separate BPF object files to achieve this.  So, to enable
      such use cases, this patch adds the ability to request that chosen BPF
      programs not be auto-loaded.  In that case, libbpf won't attempt to
      perform relocations (which might fail due to an old kernel), won't try to
      resolve BTF types for BTF-aware (tp_btf/fentry/fexit/etc.) program types
      (because BTF might not be present), and so on.  The skeleton will also
      automatically skip the auto-attachment step for such non-loaded BPF
      programs.
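
      As a rough usage sketch (assuming the bpf_program__set_autoload() setter
      introduced by this patch; the program name "handle_fentry" and the
      kernel_supports_fentry() probe below are hypothetical):

        #include <stdbool.h>
        #include <bpf/libbpf.h>

        /* Hypothetical probe; a real application would detect fentry/BTF support here. */
        extern bool kernel_supports_fentry(void);

        static int load_with_optional_fentry(struct bpf_object *obj)
        {
                struct bpf_program *prog;

                /* "handle_fentry" is a hypothetical optional program needing fentry + BTF. */
                prog = bpf_object__find_program_by_name(obj, "handle_fentry");
                if (prog && !kernel_supports_fentry())
                        bpf_program__set_autoload(prog, false);

                /* Non-autoload programs are skipped during load and skeleton auto-attach. */
                return bpf_object__load(obj);
        }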
      
      Overall, this feature simplifies the development and deployment of
      real-world BPF applications with complicated compatibility requirements.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200625232629.3444003-2-andriin@fb.com
  5. 25 Jun, 2020 18 commits