1. 31 May, 2019 7 commits
    • Merge branch 'propagate-cn-to-tcp' · 576240cf
      Alexei Starovoitov authored
      Lawrence Brakmo says:
      
      ====================
      This patchset adds support for propagating congestion notifications (cn)
      to TCP from cgroup inet skb egress BPF programs.
      
      Current cgroup skb BPF programs cannot trigger TCP congestion window
      reductions, even when they drop a packet. This patchset adds support
      for cgroup skb BPF programs to send congestion notifications in the
      return value when the packets are TCP packets. Rather than the
      current 1 for keeping the packet and 0 for dropping it, programs can
      now return values in the range [0, 3] (low bit = keep packet, next
      bit = cn), which the egress hook converts into one of the following
      for its callers:
          NET_XMIT_SUCCESS    (0)    - continue with packet output
          NET_XMIT_DROP       (1)    - drop packet and do cn
          NET_XMIT_CN         (2)    - continue with packet output and do cn
          -EPERM                     - drop packet
      
      Finally, HBM programs are modified to collect and return more
      statistics.
      
      There has been some discussion regarding the best place to manage
      bandwidths. Some believe this should be done in the qdisc where it can
      also be managed with a BPF program. We believe there are advantages
      for doing it with a BPF program in the cgroup/skb callback. For example,
      it reduces overheads in the case where there is one primary workload and
      one or more secondary workloads, where each workload is running on its
      own cgroupv2. In this scenario, we only need to throttle the secondary
      workloads and there is no overhead for the primary workload since there
      will be no BPF program attached to its cgroup.
      
      Regardless, we agree that this mechanism should not penalize those that
      are not using it. We tested this by doing 1 byte req/reply RPCs over
      loopback. Each test consists of 30 sec of back-to-back 1 byte RPCs.
      Each test was repeated 50 times with a 1 minute delay between each set
      of 10. We then calculated the average RPCs/sec over the 50 tests. We
      compared plain upstream against upstream + patchset with no BPF
      program attached, and against upstream + patchset with a BPF program
      that just returns ALLOW_PKT.
      Here are the results:
      
      upstream                           80937 RPCs/sec
      upstream + patches, no BPF program 80894 RPCs/sec
      upstream + patches, BPF program    80634 RPCs/sec
      
      These numbers indicate that there is no penalty for these patches.
      
      The use of congestion notifications improves the performance of HBM when
      using Cubic. Without congestion notifications, Cubic will not decrease its
      cwnd and HBM will need to drop a large percentage of the packets.
      
      The following results were obtained with a rate limit of 1 Gbps
      between two servers using netperf, with a single flow. We also show how
      reducing the max delayed ACK timer can improve the performance when
      using Cubic.
      
      Command used was:
        ./do_hbm_test.sh -l -D --stats -N -r=<rate> [--no_cn] [dctcp] \
                         -s=<server running netserver>
        where:
           <rate>   is 1000
           --no_cn  specifies no cwr notifications
           dctcp    uses dctcp
      
                             Cubic                    DCTCP
      Lim, DA      Mbps cwnd cred drops  Mbps cwnd cred drops
      --------     ---- ---- ---- -----  ---- ---- ---- -----
        1G, 40       35  462 -320 67%     995    1 -212 0.05%
        1G, 40,cn   736    9  -78 0.07%   995    1 -212 0.05%
        1G,  5,cn   941    2 -189 0.13%   995    1 -212 0.05%
      
      Notes:
        --no_cn has no effect with DCTCP
        Lim = rate limit
        DA = maximum delay ack timer
        cred = credit in packets
        drops = % packets dropped
      
      v1->v2: Ensures that only BPF_CGROUP_INET_EGRESS can return values 2 and 3
              New egress values apply to all protocols, not just TCP
              Cleaned up patch 4, Update BPF_CGROUP_RUN_PROG_INET_EGRESS callers
              Removed changes to __tcp_transmit_skb (patch 5), no longer needed
              Removed sample use of EDT
      v2->v3: Removed the probe timer related changes
      v3->v4: Replaced preempt_enable_no_resched() by preempt_enable()
              in BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() macro
      ====================
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add more stats to HBM · d58c6f72
      brakmo authored
      Adds more stats to HBM, including the average cwnd and rtt of all TCP
      flows, the percentage of packets that are ECN CE marked, and the
      distribution of return values.
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add cn support to hbm_out_kern.c · ffd81558
      brakmo authored
      Update hbm_out_kern.c to support returning congestion notifications
      (cn). Also update the relevant files to allow disabling cn.
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Update BPF_CGROUP_RUN_PROG_INET_EGRESS calls · 956fe219
      brakmo authored
      Update BPF_CGROUP_RUN_PROG_INET_EGRESS() callers to support returning
      congestion notifications from the BPF programs.
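
      For illustration, the resulting handling in a caller such as
      ip_finish_output() has roughly the following shape (a simplified
      sketch, not a verbatim diff):

        static int ip_finish_output(struct net *net, struct sock *sk,
                                    struct sk_buff *skb)
        {
            int ret;

            ret = BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb);
            switch (ret) {
            case NET_XMIT_SUCCESS:
                return __ip_finish_output(net, sk, skb);
            case NET_XMIT_CN:
                /* transmit, but still report cn so TCP can call cwr */
                return __ip_finish_output(net, sk, skb) ? : ret;
            default:
                kfree_skb(skb);
                return ret;
            }
        }
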
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Update __cgroup_bpf_run_filter_skb with cn · e7a3160d
      brakmo authored
      For egress packets, __cgroup_bpf_run_filter_skb() will now call
      BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() instead of BPF_PROG_RUN_ARRAY()
      in order to propagate congestion notification (cn) requests to TCP
      callers.
      
      For egress packets, this function can return:
         NET_XMIT_SUCCESS    (0)    - continue with packet output
         NET_XMIT_DROP       (1)    - drop packet and notify TCP to call cwr
         NET_XMIT_CN         (2)    - continue with packet output and notify TCP
                                      to call cwr
         -EPERM                     - drop packet
      
      For ingress packets, this function will return -EPERM if any attached
      program was found and it returned something other than 1 during
      execution. Otherwise 0 is returned.
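
      For illustration, the egress dispatch inside this function has roughly
      the following shape (a simplified sketch):

        if (type == BPF_CGROUP_INET_EGRESS) {
            ret = BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY(
                cgrp->bpf.effective[type], skb, __bpf_prog_run_save_cb);
        } else {
            ret = BPF_PROG_RUN_ARRAY(cgrp->bpf.effective[type], skb,
                                     __bpf_prog_run_save_cb);
            /* ingress: 1 means allow, anything else becomes -EPERM */
            ret = (ret == 1 ? 0 : -EPERM);
        }
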
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: cgroup inet skb programs can return 0 to 3 · 5cf1e914
      brakmo authored
      Allows cgroup inet skb programs to return values in the range [0, 3].
      The second bit is used to determine whether congestion occurred and
      the higher level protocol should decrease its rate, e.g. TCP would
      call tcp_enter_cwr().
      
      The bpf_prog must set expected_attach_type to BPF_CGROUP_INET_EGRESS
      at load time if it uses the new return values (i.e. 2 or 3).
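
      For illustration (a sketch, not the HBM program from this series), a
      minimal egress program using the new convention (low bit = keep packet,
      next bit = cn) could look as follows. The byte budget and map are
      invented for the example; depending on the libbpf version,
      expected_attach_type may need to be set explicitly at load time, e.g.
      via bpf_program__set_expected_attach_type():

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        #define PKT_KEEP     1   /* keep packet */
        #define PKT_KEEP_CN  3   /* keep packet and call cwr */

        struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
        } bytes_sent SEC(".maps");

        SEC("cgroup_skb/egress")
        int egress_cn(struct __sk_buff *skb)
        {
            __u32 key = 0;
            __u64 *sent = bpf_map_lookup_elem(&bytes_sent, &key);

            if (!sent)
                return PKT_KEEP;

            __sync_fetch_and_add(sent, skb->len);

            /* past an (illustrative) 1 MB budget: keep the packet but ask
             * TCP to reduce its cwnd instead of dropping */
            if (*sent > 1000000)
                return PKT_KEEP_CN;

            return PKT_KEEP;
        }

        char _license[] SEC("license") = "GPL";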
      
      The expected_attach_type is currently not enforced for
      BPF_PROG_TYPE_CGROUP_SKB, meaning a bpf_prog with expected_attach_type
      set to BPF_CGROUP_INET_EGRESS can currently attach to
      BPF_CGROUP_INET_INGRESS. Blindly enforcing expected_attach_type would
      break backward compatibility.
      
      This patch adds an enforce_expected_attach_type bit that enforces
      expected_attach_type only when the program uses the new return values.
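
      For illustration, the attach-time check gated by this bit has roughly
      the following shape (a simplified sketch):

        static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
                                                     enum bpf_attach_type attach_type)
        {
            switch (prog->type) {
            case BPF_PROG_TYPE_CGROUP_SKB:
                /* reject mismatches only when the prog opted in to the
                 * new return values */
                return prog->enforce_expected_attach_type &&
                       prog->expected_attach_type != attach_type ?
                       -EINVAL : 0;
            default:
                return 0;
            }
        }
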
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Create BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY · 1f52f6c0
      brakmo authored
      Create new macro BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() to be used by
      __cgroup_bpf_run_filter_skb for EGRESS BPF progs so BPF programs can
      request cwr for TCP packets.
      
      Current cgroup skb programs can only return 0 or 1 (0 to drop the
      packet, 1 to keep it). This macro changes the behavior so the low
      order bit indicates whether the packet should be dropped (0) or not (1)
      and the next bit is used for congestion notification (cn).
      
      Hence, new allowed return values of CGROUP EGRESS BPF programs are:
        0: drop packet
        1: keep packet
        2: drop packet and call cwr
        3: keep packet and call cwr
      
      This macro then converts the result to one of the NET_XMIT values or
      to -EPERM, which has the effect of dropping the packet with no cn:
        0: NET_XMIT_SUCCESS  skb should be transmitted (no cn)
        1: NET_XMIT_DROP     skb should be dropped and cwr called
        2: NET_XMIT_CN       skb should be transmitted and cwr called
        3: -EPERM            skb should be dropped (no cn)
      
      Note that when more than one BPF program is called, the packet is
      dropped if at least one of the programs requests it be dropped, and
      there is cn if at least one program returns cn.
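
      As a standalone model of this folding (not the kernel macro itself,
      which also runs the programs), assuming per-program return values in
      [0, 3]:

        #include <errno.h>

        #define NET_XMIT_SUCCESS 0
        #define NET_XMIT_DROP    1
        #define NET_XMIT_CN      2

        /* Fold the return values of all attached egress programs: the keep
         * bit is ANDed (any drop request wins), the cn bit is ORed (any cn
         * request wins); the pair then maps to NET_XMIT values or -EPERM. */
        static int fold_egress_verdicts(const unsigned int *rets, int n)
        {
            unsigned int allow = 1, cn = 0;
            int i;

            for (i = 0; i < n; i++) {
                allow &= rets[i] & 1;
                cn |= rets[i] & 2;
            }

            if (allow)
                return cn ? NET_XMIT_CN : NET_XMIT_SUCCESS;
            return cn ? NET_XMIT_DROP : -EPERM;
        }
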
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 29 May, 2019 15 commits
  3. 28 May, 2019 15 commits
  4. 25 May, 2019 3 commits
    • samples: bpf: add ibumad sample to .gitignore · d9a6f413
      Matteo Croce authored
      Add the ibumad sample binary to .gitignore, from which it is
      currently omitted.
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'optimize-zext' · 198ae936
      Alexei Starovoitov authored
      Jiong Wang says:
      
      ====================
      v9:
        - Split patch 5 in v8: make the bpf uapi header file sync a
          separate patch. (Alexei)
      
      v8:
        - For stack slot reads, mark them as REG_LIVE_READ64. (Alexei)
        - Change DEF_NOT_SUBREG from -1 to 0. (Alexei)
        - Rebased on top of latest bpf-next.
      
      v7:
        - Drop the first patch in v6, the one adding 32-bit return value and
          argument type. (Alexei)
        - Rename bpf_jit_hardware_zext to bpf_jit_needs_zext. (Alexei)
        - Use mov32 with imm == 1 to indicate it is zext. (Alexei)
        - JIT back-ends peephole next insn to optimize out unnecessary zext
          inserted by verifier. (Alexei)
        - Testing:
          + patch set tested (bpf selftest) on x64 host with llvm 9.0
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on x32 host.
            By Yanqing Wang, thanks!
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on RV64 host with llvm 9.0,
            By Björn Töpel, thanks!
            no regression observed before and after this set with JIT_ALWAYS_ON.
            test_progs_32 also enabled as LLVM 9.0 is used by Björn.
          + cross-compiled the other affected targets: arm, PowerPC, SPARC,
            S390.
      
      v6:
        - Fixed s390 kbuild test robot error. (kbuild)
        - Make comment style in backends patches more consistent.
      
      v5:
        - Adjusted several test_verifier helpers to make them work on hosts
          with and without hardware zext. (Naveen)
        - Make sure the zext flag is not set when the verifier is bypassed,
          for example, for lib/test_bpf.ko. (Naveen)
        - Conservatively mark bpf main return value as 64-bit. (Alexei)
        - Make sure read flag is either READ64 or READ32, not the mix of both.
          (Alexei)
        - Merged patch 1 and 2 in v4. (Alexei)
        - Fixed kbuild test robot warning on NFP. (kbuild)
        - Proposed new BPF_ZEXT insn to have optimal code-gen for various JIT
          back-ends.
        - Conservatively set zext flags for patched insns.
        - Fixed return value zext for helper function calls.
        - Also adjusted the test_verifier scalability unit test to avoid
          triggering too many insn patches, which would hang the computer.
        - re-tested on x86 host with llvm 9.0, no regression on test_verifier,
          test_progs, test_progs_32.
        - re-tested offload target (nfp), no regression on local testsuite.
      
      v4:
        - added the two missing fixes which address two of Jakub's reviews
          of v3.
        - rebase on top of bpf-next.
      
      v3:
        - remove redundant check in "propagate_liveness_reg". (Jakub)
        - add extra check in "mark_reg_read" to prune more search. (Jakub)
        - re-implemented "prog_flags" passing mechanism, removed use of
          global switch inside libbpf.
        - enabled high 32-bit randomization beyond "test_verifier" and
          "test_progs". Now it should have been enabled for all possible
          tests. Re-run all tests, haven't noticed regression.
        - remove RFC tag.
      
      v2:
        - rebased on top of bpf-next master.
        - added comments for what is sub-register def index. (Edward, Alexei)
        - removed patch 1 which turns bit mask from enum to macro. (Alexei)
        - removed sysctl/bpf_jit_32bit_opt. (Alexei)
        - merged sub-register def insn index into reg state. (Alexei)
        - change test methodology (Alexei):
            + instead of simple unit tests on x86_64, for which this
              optimization isn't enabled since the hardware already zero
              extends, poison the high 32 bits of those defs identified as
              safe to leave unextended. This lets the correctness of this
              patch set be checked whenever the daily bpf selftests run,
              which deliver very stressful testing on host machines like
              x86_64.
            + hi32 poisoning is gated by a new BPF_F_TEST_RND_HI32 prog flags.
            + BPF_F_TEST_RND_HI32 is enabled for all tests of "test_progs" and
              "test_verifier", the latter needs minor tweak on two unit tests,
              please see the patch for the change.
            + introduced a new global variable "libbpf_test_mode" into libbpf.
              Once it is set to true, BPF_F_TEST_RND_HI32 is set for all later
              PROG_LOAD syscalls; the goal is to ease enabling hi32 poisoning
              on the existing testsuite.
              We could also introduce new APIs, for example "bpf_prog_test_load",
              then use -Dbpf_prog_load=bpf_prog_test_load to migrate tests under
              test_progs, but there are several load APIs, and such a new API
              would need some changes to structures like "struct bpf_prog_load_attr".
            + removed the old unit tests. They were based on insn scanning and
              required quite a few generic test_verifier code changes. Given
              that hi32 randomization offers good test coverage, the unit
              tests didn't add much extra test value.
        - enhanced the register width check ("is_reg64") when recording
          sub-register writes; it now returns a more accurate width.
        - Re-run all tests under "test_progs" and "test_verifier" on x86_64, no
          regression. Fixed a couple of bugs exposed:
            1. ctx field size transformation was not taken into account.
            2. insn patching could cause loss of the original aux data, which
               is important for ctx field conversion.
            3. the return value of propagate_liveness was wrong and caused a
               regression in the processed insn number.
            4. helper call args weren't handled properly, so path pruning
               could lose 64-bit read info from the pruned path.
        - Re-run Cilium bpf prog for processed-insn-number benchmarking, no
          regression.
      
      v1:
        - Fixed the missing handling on callee-saved for bpf-to-bpf call,
          sub-register defs therefore moved to frame state. (Jakub Kicinski)
        - Removed redundant "cross_reg". (Jakub Kicinski)
        - Various coding styles & grammar fixes. (Jakub Kicinski, Quentin Monnet)
      
      The eBPF ISA specification requires the high 32 bits to be cleared
      whenever a low 32-bit sub-register is written. This applies to the
      destination register of ALU32 etc. JIT back-ends must guarantee this
      semantic when doing code-gen. The x86_64 and AArch64 ISAs have the
      same semantics, so the corresponding JIT back-ends don't need to do
      extra work.
      
      However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches
      (PowerPC, SPARC etc) need to do explicit zero extension to meet this
      requirement, otherwise code like the following will fail.
      
        u64_value = (u64) u32_value
        ... other uses of u64_value
      
      This is because the compiler can exploit the semantic described above
      and omit the zero extensions when extending u32_value to u64_value.
      These JIT back-ends are therefore expected to guarantee the semantic
      by inserting extra zero extensions, which however can be a significant
      increase in code size. Some benchmarks show there can be ~40%
      sub-register writes out of total insns, meaning at least ~40% extra
      code-gen.
      
      One observation is that these extra zero extensions are not always
      necessary. Take the above code snippet for example: it is possible
      that u32_value is never cast into a u64, in which case the high 32
      bits of u32_value can be ignored and the extra zero extension
      eliminated.
      
      This patch set implements this idea: insns defining sub-registers are
      marked when the high 32 bits of the defined sub-register matter. For
      unmarked insns, it is safe to eliminate the high 32-bit clearance.
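
      For example, following the v7 note above ("mov32 with imm == 1"), the
      explicit zero extension inserted by the verifier can be encoded as a
      mov32 of a register onto itself, tagged via imm (a sketch of the
      encoding, assuming that convention):

        #include <linux/bpf.h>   /* struct bpf_insn, BPF_ALU, BPF_MOV, BPF_X */

        /* mov32 r2, r2: writing the low 32 bits clears the high 32 bits;
         * imm == 1 (instead of the usual 0) marks the insn as a
         * verifier-inserted zext so JIT back-ends can recognize it. */
        static const struct bpf_insn zext_r2 = {
            .code    = BPF_ALU | BPF_MOV | BPF_X,
            .dst_reg = BPF_REG_2,
            .src_reg = BPF_REG_2,
            .off     = 0,
            .imm     = 1,
        };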
      
      Algo
      ====
      We could use insn-scan based static analysis to tell whether a
      sub-register def needs zero extension. However, with such static
      analysis we must make conservative assumptions at branching points,
      where multiple uses could be introduced. So, any sub-register def
      that is active at a branching point needs to be marked as needing
      zero extension. This could introduce quite a few false alarms, for
      example ~25% on Cilium bpf_lxc.
      
      It is far better to use the dynamic data-flow tracking which the
      verifier fortunately already has and which could easily be extended
      to serve the purpose of this patch set.
      
       - Split read flags into READ32 and READ64.
      
       - Record index of insn that does sub-register write. Keep the index inside
         reg state and update it during verifier insn walking.
      
       - A full register read on a sub-register marks its definition insn as
         needing zero extension on dst register.
      
         A new sub-register write overrides the old one.
      
       - When propagating read64 during path pruning, also mark any insn
         defining a sub-register that is read in the pruned path as needing
         zero extension (see the sketch below).
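
      A sketch of the read-triggered marking (simplified; names such as
      subreg_def, DEF_NOT_SUBREG and zext_dst follow this set's conventions,
      details are illustrative):

        /* Called when a full 64-bit read reaches a register: if the last
         * def of that register was a sub-register write, flag the defining
         * insn so an explicit zero extension is patched in after it. */
        static void mark_insn_zext(struct bpf_verifier_env *env,
                                   struct bpf_reg_state *reg)
        {
            s32 def_idx = reg->subreg_def;   /* insn idx + 1, or 0 */

            if (def_idx == DEF_NOT_SUBREG)   /* DEF_NOT_SUBREG == 0 */
                return;

            env->insn_aux_data[def_idx - 1].zext_dst = true;
            /* dst will be explicitly zero extended, so it is no longer
             * considered a sub-register def */
            reg->subreg_def = DEF_NOT_SUBREG;
        }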
      
      Benchmark
      =========
       - I estimate the JITed image could be 10% ~ 30% smaller on these
         affected arches (nfp, arm, x32, riscv, ppc, sparc, s390), depending
         on the prog.
      
       - For Cilium bpf_lxc, there are ~11500 insns in the compiled binary
         (using the latest LLVM snapshot, with -mcpu=v3 -mattr=+alu32 enabled),
         4460 of which have sub-register writes (~40%). Calculated by:
      
          cat dump | grep -P "\tw" | wc -l       (ALU32)
          cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
          cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
          cat dump | grep -P "r.*=.*u8" | wc -l  (READ_B)
      
         With this patch set enabled, more than 25% of those 4460 can be
         identified as not needing zero extension on the destination, and the
         percentage could go further up to more than 50% with some follow-up
         optimizations based on the infrastructure offered by this set. This
         leads to significant savings in JITed image size.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • nfp: bpf: eliminate zero extension code-gen · 0b4de1ff
      Jiong Wang authored
      This patch eliminates zero extension code-gen for instructions,
      including both alu and load/store. The only exception is ctx loads,
      because the offload target doesn't go through the host ctx conversion
      logic, so we do a customized load and ignore the zext flag set by the
      verifier.
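
      For illustration, the shape of such a change in a JIT back-end (a
      generic sketch with hypothetical emit helpers, not the nfp code; only
      the verifier_zext flag mirrors the real bpf_prog_aux field):

        #include <stdbool.h>
        #include <stdint.h>

        struct jit_ctx {
            bool verifier_zext;   /* verifier already inserted zext insns */
        };

        void emit_alu32(struct jit_ctx *ctx, uint8_t dst);      /* hypothetical */
        void emit_clear_hi32(struct jit_ctx *ctx, uint8_t dst); /* hypothetical */

        /* Emit a 32-bit ALU result; clear the high 32 bits ourselves only
         * when the verifier has not already inserted an explicit zext. */
        static void emit_alu32_result(struct jit_ctx *ctx, uint8_t dst)
        {
            emit_alu32(ctx, dst);
            if (!ctx->verifier_zext)
                emit_clear_hi32(ctx, dst);
        }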
      
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>