  28 Jun, 2018 (2 commits)
    •
      samples/bpf: xdp_rxq_info action XDP_TX must adjust MAC-addrs · 509fda10
      Jesper Dangaard Brouer authored
      XDP_TX also requires changing the MAC addresses, else some hardware
      may drop the TX packet before it reaches the wire.  This was
      observed with the mlx5 driver.
      
      If xdp_rxq_info is invoked with --action XDP_TX, the swapmac
      functionality is activated.  It can also be enabled manually via
      the cmdline option --swapmac, which is practical when measuring
      the overhead of writing/updating the payload for other action types.
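      
      The MAC-swap step amounts to exchanging the Ethernet source and
      destination addresses before returning XDP_TX.  A minimal,
      illustrative sketch (section and function names here are
      hypothetical, not the exact code added by this commit):
      
        /* Illustrative swapmac sketch: swap the Ethernet src/dst MAC
         * addresses and bounce the frame back out with XDP_TX.
         */
        #include <linux/bpf.h>
        #include <linux/if_ether.h>
        
        #ifndef SEC
        #define SEC(name) __attribute__((section(name), used))
        #endif
        
        static inline __attribute__((always_inline))
        void swap_src_dst_mac(void *data)
        {
                unsigned short *p = data;
                unsigned short dst[3];
        
                dst[0] = p[0]; dst[1] = p[1]; dst[2] = p[2];
                p[0] = p[3];   p[1] = p[4];   p[2] = p[5];
                p[3] = dst[0]; p[4] = dst[1]; p[5] = dst[2];
        }
        
        SEC("xdp_swapmac")
        int xdp_tx_swapmac(struct xdp_md *ctx)
        {
                void *data_end = (void *)(long)ctx->data_end;
                void *data     = (void *)(long)ctx->data;
        
                /* Verifier requires proving the Ethernet header is in bounds */
                if (data + sizeof(struct ethhdr) > data_end)
                        return XDP_ABORTED;
        
                swap_src_dst_mac(data);
                return XDP_TX;
        }
      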
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    •
      samples/bpf: extend xdp_rxq_info to read packet payload · 0d25c43a
      Jesper Dangaard Brouer authored
      There is a cost associated with reading the packet data payload,
      which this test previously ignored.  Add the option --read to
      enable reading part of the payload.
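      
      For illustration, touching part of the payload from an XDP program
      looks roughly like the sketch below; the byte count, names and the
      unroll pragma are assumptions for this sketch, not the sample's
      actual code:
      
        /* Illustrative --read sketch: load the first READ_BYTES of the
         * packet and fold them into a sum so the loads are not
         * optimized away.
         */
        #include <linux/bpf.h>
        #include <linux/types.h>
        
        #define READ_BYTES 64	/* illustrative amount of payload to touch */
        
        static inline __attribute__((always_inline))
        __u32 read_payload(struct xdp_md *ctx)
        {
                void *data_end = (void *)(long)ctx->data_end;
                unsigned char *data = (void *)(long)ctx->data;
                __u32 sum = 0;
                int i;
        
                /* Bounds check the verifier demands before reading */
                if ((void *)(data + READ_BYTES) > data_end)
                        return 0;
        
                #pragma unroll	/* kernels of this era need fully unrolled BPF loops */
                for (i = 0; i < READ_BYTES; i++)
                        sum += data[i];
        
                return sum;
        }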
      
      This sample/tool helps us analyse an issue observed with an mlx5
      NIC (ConnectX-5 Ex) and an Intel(R) Xeon(R) CPU E5-1650 v4.
      
      With no_touch of data:
      
      Running XDP on dev:mlx5p1 (ifindex:8) action:XDP_DROP options:no_touch
      XDP stats       CPU     pps         issue-pps
      XDP-RX CPU      0       14,465,157  0
      XDP-RX CPU      1       14,464,728  0
      XDP-RX CPU      2       14,465,283  0
      XDP-RX CPU      3       14,465,282  0
      XDP-RX CPU      4       14,464,159  0
      XDP-RX CPU      5       14,465,379  0
      XDP-RX CPU      total   86,789,992
      
      When not touching the data, we observe that the CPUs have idle cycles.
      When reading the data, the CPUs are 100% busy in softirq.
      
      With reading data:
      
      Running XDP on dev:mlx5p1 (ifindex:8) action:XDP_DROP options:read
      XDP stats       CPU     pps         issue-pps
      XDP-RX CPU      0       9,620,639   0
      XDP-RX CPU      1       9,489,843   0
      XDP-RX CPU      2       9,407,854   0
      XDP-RX CPU      3       9,422,289   0
      XDP-RX CPU      4       9,321,959   0
      XDP-RX CPU      5       9,395,242   0
      XDP-RX CPU      total   56,657,828
      
      The effect seen above is a result of cache misses occurring when
      more RXQs are in use.  Based on perf-event observations, our
      conclusion is that the CPU's DDIO (Direct Data I/O) chooses to
      deliver packets into main memory instead of the L3 cache.  We also
      found that this can be mitigated either by using fewer RXQs or by
      reducing the NIC's RX-ring size.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>