  30 Nov, 2020 5 commits
    • Merge branch 'xdp-preferred-busy-polling' · df542285
      Daniel Borkmann authored
      Björn Töpel says:
      
      ====================
      This series introduces three new features:
      
      1. A new "heavy traffic" busy-polling variant that works in concert
         with the existing napi_defer_hard_irqs and gro_flush_timeout knobs.
      
       2. A new socket option that lets a user change the busy-polling NAPI
         budget.
      
      3. Allow busy-polling to be performed on XDP sockets.
      
       The existing busy-polling mode, enabled by the SO_BUSY_POLL socket
       option or system-wide using the /proc/sys/net/core/busy_read knob, is
       opportunistic: if the NAPI context is not already scheduled, it is
       polled directly. If, after busy-polling, the budget is exceeded, the
       busy-polling logic will schedule the NAPI context onto the regular
       softirq handling.
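
       For reference, a minimal sketch (not part of this series) of enabling
       the existing opportunistic mode on a single UDP socket; the 50 usec
       value is an arbitrary example:

         #include <stdio.h>
         #include <sys/socket.h>

         /* Fallback for old headers; 46 is the asm-generic value. */
         #ifndef SO_BUSY_POLL
         #define SO_BUSY_POLL 46
         #endif

         int main(void)
         {
                 int fd = socket(AF_INET, SOCK_DGRAM, 0);
                 /* Microseconds to busy-poll, same unit as busy_read. */
                 int busy_poll_usecs = 50;

                 if (fd < 0)
                         return 1;
                 if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                                &busy_poll_usecs, sizeof(busy_poll_usecs)))
                         perror("SO_BUSY_POLL");
                 return 0;
         }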
      
       One implication of the behavior above is that a heavily loaded NAPI
       context will never enter/allow busy-polling. Some applications prefer
       that most NAPI processing be done by busy-polling.
      
       This series adds a new socket option, SO_PREFER_BUSY_POLL, that works
       in concert with the napi_defer_hard_irqs and gro_flush_timeout
       knobs. These knobs were introduced in commit 6f8b12d6 ("net: napi: add
       hard irqs deferral feature") and allow a user to defer enabling of
       hardware interrupts and instead schedule the NAPI context from a
       watchdog timer. When a user enables SO_PREFER_BUSY_POLL, with the other
       knobs set, and the NAPI context is being processed by a softirq, the
       softirq NAPI processing will exit early to allow the busy-polling to be
       performed.
      
      If the application stops performing busy-polling via a system call,
       the watchdog timer defined by gro_flush_timeout will time out, and
      regular softirq handling will resume.
      
       In summary: heavy-traffic applications that prefer busy-polling over
      softirq processing should use this option.
      
      Patch 6 touches a lot of drivers, so the Cc: list is grossly long.
      
      Example usage:
      
        $ echo 2 | sudo tee /sys/class/net/ens785f1/napi_defer_hard_irqs
        $ echo 200000 | sudo tee /sys/class/net/ens785f1/gro_flush_timeout
      
      Note that the timeout should be larger than the userspace processing
       window, otherwise the watchdog will time out and fall back to regular
      softirq processing.
      
      Enable the SO_BUSY_POLL/SO_PREFER_BUSY_POLL options on your socket.
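
       A minimal sketch of what that can look like in C is shown below. The
       numeric fallback definitions are an assumption for headers that predate
       this series, and the budget value is just an example:

         #include <stdio.h>
         #include <sys/socket.h>

         /* Fallbacks for older uapi headers; prefer your system headers. */
         #ifndef SO_PREFER_BUSY_POLL
         #define SO_PREFER_BUSY_POLL 69
         #endif
         #ifndef SO_BUSY_POLL_BUDGET
         #define SO_BUSY_POLL_BUDGET 70
         #endif

         int main(void)
         {
                 int fd = socket(AF_INET, SOCK_DGRAM, 0);
                 int busy_poll_usecs = 20;   /* as with busy_read above */
                 int prefer = 1;             /* prefer busy-poll over softirq */
                 int budget = 64;            /* example NAPI busy-poll budget */

                 if (fd < 0)
                         return 1;
                 if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                                &busy_poll_usecs, sizeof(busy_poll_usecs)) ||
                     setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
                                &prefer, sizeof(prefer)) ||
                     setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
                                &budget, sizeof(budget)))
                         perror("setsockopt");
                 return 0;
         }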
      
      Performance simple UDP ping-pong:
      
       A packet generator blasts UDP packets to a certain {src,dst} IP/port
       pair, so a dedicated ksoftirqd thread will be busy handling the packets
       on a certain core.
      
       A simple UDP test program that just does recvfrom/sendto is running at
       the host end. Throughput in pps and RTT latency are measured at the
       packet generator.
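
       That test program is not part of the series; a rough, hypothetical
       sketch of such a recvfrom/sendto echo loop (socket setup and binding
       omitted) could look like:

         #include <sys/types.h>
         #include <sys/socket.h>
         #include <netinet/in.h>

         /* Hypothetical ping-pong loop: echo each datagram back to its
          * sender. With busy_read set, the blocking recvfrom() busy-polls. */
         static void udp_pingpong(int fd)
         {
                 char buf[2048];
                 struct sockaddr_in peer;
                 socklen_t peer_len;
                 ssize_t n;

                 for (;;) {
                         peer_len = sizeof(peer);
                         n = recvfrom(fd, buf, sizeof(buf), 0,
                                      (struct sockaddr *)&peer, &peer_len);
                         if (n < 0)
                                 continue; /* e.g. EAGAIN when non-blocking */
                         sendto(fd, buf, n, 0,
                                (struct sockaddr *)&peer, peer_len);
                 }
         }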
      
       /proc/sys/net/core/busy_read is set to 20.
      
       Scenario                                     Throughput  Min        Max        Avg (usec)
       1. Blocking, 2-cores:                        490 Kpps    1218.192   1335.427   1271.083
       2. Blocking, 1-core:                         155 Kpps    1327.195   17294.855  4761.367
       3. Non-blocking, 2-cores:                    475 Kpps    1221.197   1330.465   1270.740
       4. Non-blocking, 1-core:                       3 Kpps    29006.482  37260.465  33128.367
       5. Non-blocking, prefer busy-poll, 1-core:   420 Kpps    1202.535   5494.052   4885.443
      
       Scenarios 2 and 5 show where the new option should be used. Throughput
       goes from 155 to 420 Kpps, the average latency is similar, but the tail
       latencies are much better for the latter.
      
      Performance XDP sockets:
      
       Again, the packet generator blasts UDP packets to a certain {src,dst}
       IP/port pair.
      
       Today, running the XDP sockets sample on the same core as the softirq
       handling, performance tanks, mainly because we do not yield to
       user space when the XDP socket Rx queue is full.
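
       The sample's rxdrop path boils down to something like the sketch below
       (using libbpf's xsk helpers; umem/ring setup omitted and the frame
       addresses handled naively, so treat the helper 'rx_drop' as
       illustrative only). Under preferred busy-polling, the recvfrom() call
       is what drives the driver's NAPI processing from the application core:

         #include <sys/socket.h>
         #include <linux/types.h>
         #include <bpf/xsk.h>

         /* Hedged sketch of an AF_XDP rxdrop loop; assumes an already
          * configured xsk socket with rx ring 'rx' and fill ring 'fq'. */
         static void rx_drop(struct xsk_socket *xsk, struct xsk_ring_cons *rx,
                             struct xsk_ring_prod *fq, __u32 batch)
         {
                 __u32 idx_rx, idx_fq, i, rcvd;

                 for (;;) {
                         rcvd = xsk_ring_cons__peek(rx, batch, &idx_rx);
                         if (!rcvd) {
                                 /* Nothing yet: kick the kernel, i.e. busy-poll. */
                                 recvfrom(xsk_socket__fd(xsk), NULL, 0,
                                          MSG_DONTWAIT, NULL, NULL);
                                 continue;
                         }

                         /* Recycle the received frames back to the fill ring. */
                         while (xsk_ring_prod__reserve(fq, rcvd, &idx_fq) != rcvd)
                                 recvfrom(xsk_socket__fd(xsk), NULL, 0,
                                          MSG_DONTWAIT, NULL, NULL);
                         for (i = 0; i < rcvd; i++)
                                 *xsk_ring_prod__fill_addr(fq, idx_fq++) =
                                         xsk_ring_cons__rx_desc(rx, idx_rx++)->addr;

                         xsk_ring_prod__submit(fq, rcvd);
                         xsk_ring_cons__release(rx, rcvd);
                 }
         }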
      
        # taskset -c 5 ./xdpsock -i ens785f1 -q 5 -n 1 -r
        Rx: 64Kpps
      
        # # preferred busy-polling, budget 8
        # taskset -c 5 ./xdpsock -i ens785f1 -q 5 -n 1 -r -B -b 8
         Rx: 9.9Mpps
        # # preferred busy-polling, budget 64
        # taskset -c 5 ./xdpsock -i ens785f1 -q 5 -n 1 -r -B -b 64
        Rx: 19.3Mpps
        # # preferred busy-polling, budget 256
        # taskset -c 5 ./xdpsock -i ens785f1 -q 5 -n 1 -r -B -b 256
        Rx: 21.4Mpps
        # # preferred busy-polling, budget 512
        # taskset -c 5 ./xdpsock -i ens785f1 -q 5 -n 1 -r -B -b 512
        Rx: 21.7Mpps
      
      Compared to the two-core case:
        # taskset -c 4 ./xdpsock -i ens785f1 -q 20 -n 1 -r
        Rx: 20.7Mpps
      
       We're getting better single-core performance than two-core for this
       naïve drop scenario.
      
      Performance netperf UDP_RR:
      
       Note that netperf UDP_RR is not a heavy-traffic test, and preferred
       busy-polling is not typically something we would use here.
      
        $ echo 20 | sudo tee /proc/sys/net/core/busy_read
        $ netperf -H 192.168.1.1 -l 30 -t UDP_RR -v 2 -- \
            -o min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
      
      busy-polling blocking sockets:            12,13.33,224,0.63,74731.177
      
      I hacked netperf to use non-blocking sockets and re-ran:
      
      busy-polling non-blocking sockets:        12,13.46,218,0.72,73991.172
      prefer busy-polling non-blocking sockets: 12,13.62,221,0.59,73138.448
      
      Using the preferred busy-polling mode does not impact performance.
      
       The above tests were done with the 'ice' driver.
      
      Thanks to Jakub for suggesting this busy-polling addition [1], and
      Eric for all input/review!
      
      Changes:
      
      rfc-v1 [2] -> rfc-v2:
        * Changed name from bias to prefer.
        * Base the work on Eric's/Luigi's defer irq/gro timeout work.
        * Proper GRO flushing.
        * Build issues for some XDP drivers.
      
      rfc-v2 [3] -> v1:
        * Fixed broken qlogic build.
        * Do not trigger an IPI (XDP socket wakeup) when busy-polling is
          enabled.
      
      v1 [4] -> v2:
         * Added napi_id to the socionext driver, and added Ilias' Acked-by. (Ilias)
        * Added a samples patch to improve busy-polling for xdpsock/l2fwd.
        * Correctly mark atomic operations with {WRITE,READ}_ONCE, to make
          KCSAN and the code readers happy. (Eric)
        * Check NAPI budget not to exceed U16_MAX. (Eric)
        * Added kdoc.
      
      v2 [5] -> v3:
        * Collected Acked-by.
         * Check NAPI disable state prior to prefer busy-polling. (Jakub)
        * Added napi_id registration for virtio-net. (Michael)
        * Added napi_id registration for veth.
      
      v3 [6] -> v4:
        * Collected Acked-by/Reviewed-by.
      
      [1] https://lore.kernel.org/netdev/20200925120652.10b8d7c5@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/
      [2] https://lore.kernel.org/bpf/20201028133437.212503-1-bjorn.topel@gmail.com/
      [3] https://lore.kernel.org/bpf/20201105102812.152836-1-bjorn.topel@gmail.com/
      [4] https://lore.kernel.org/bpf/20201112114041.131998-1-bjorn.topel@gmail.com/
      [5] https://lore.kernel.org/bpf/20201116110416.10719-1-bjorn.topel@gmail.com/
      [6] https://lore.kernel.org/bpf/20201119083024.119566-1-bjorn.topel@gmail.com/
      ====================
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      df542285
    • samples/bpf: Add option to set the busy-poll budget · 41bf900f
      Björn Töpel authored
       Add support for the SO_BUSY_POLL_BUDGET setsockopt, set via the
       batching option ('b').
       Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
       Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Link: https://lore.kernel.org/bpf/20201130185205.196029-11-bjorn.topel@gmail.com
      41bf900f
    • samples/bpf: Add busy-poll support to xdpsock · b35fc148
      Björn Töpel authored
       Add a new option, 'B', to xdpsock for busy-polling. This option also
       sets the batch size ('b' option) as the busy-poll budget.
       Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
       Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Link: https://lore.kernel.org/bpf/20201130185205.196029-10-bjorn.topel@gmail.com
      b35fc148
    • samples/bpf: Use recvfrom() in xdpsock/l2fwd · 284cbc61
      Björn Töpel authored
       Start using recvfrom() in the l2fwd scenario, instead of poll(), which
       is more expensive and needs additional knobs for busy-polling.
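
       Roughly, as a sketch (not the exact sample code), the idea is to
       replace sleeping in poll() with cheap wakeup syscalls on the xsk file
       descriptor; the helper names 'kick_rx'/'kick_tx' are illustrative:

         #include <sys/socket.h>
         #include <bpf/xsk.h>

         /* Hedged sketch: kick rx/tx processing with recvfrom()/sendto()
          * instead of blocking in poll(). Assumes a configured xsk socket. */
         static void kick_rx(struct xsk_socket *xsk)
         {
                 recvfrom(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, NULL);
         }

         static void kick_tx(struct xsk_socket *xsk)
         {
                 sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
         }
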
       Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
       Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Link: https://lore.kernel.org/bpf/20201130185205.196029-9-bjorn.topel@gmail.com
      284cbc61