  13 Jan, 2023 (3 commits)
    • ixgbe: Filter out spurious link up indication · 6f8179c1
      Sebastian Czapla authored
      Add a delayed link state recheck to filter out false link-up
      indications caused by a transceiver with no fiber cable attached.
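
      A minimal sketch of the idea, assuming a workqueue-based service
      task and illustrative example_* helper names, not the driver's
      exact functions:

      static void example_watchdog_update_link(struct example_adapter *adapter)
      {
              bool link_up;

              example_check_mac_link(adapter->hw, &link_up);
              if (!link_up) {
                      netif_carrier_off(adapter->netdev);
                      return;
              }

              /* A transceiver with no fiber attached can transiently
               * report link up; re-sample after a short delay and only
               * trust the indication if it is still up. */
              msleep(100);
              example_check_mac_link(adapter->hw, &link_up);
              if (link_up)
                      netif_carrier_on(adapter->netdev);
      }
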
      Signed-off-by: Sebastian Czapla <sebastianx.czapla@intel.com>
      Tested-by: Sunitha Mekala <sunithax.d.mekala@intel.com> (A Contingent worker at Intel)
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ixgbe: XDP: fix checker warning from rcu pointer · 3fe1d0a4
      Jesse Brandeburg authored
      The ixgbe driver uses an older-style failure mode when initializing
      the XDP program and the queues. It causes warnings when running C=2
      (sparse) checking builds (and it's the last one in the ethernet/intel
      tree).
      
      $ make W=1 C=2 M=`pwd`/drivers/net/ethernet/intel modules
      .../ixgbe_main.c:10301:25: error: incompatible types in comparison expression (different address spaces):
      .../ixgbe_main.c:10301:25:    struct bpf_prog [noderef] __rcu *
      .../ixgbe_main.c:10301:25:    struct bpf_prog *
      
      Fix the problem by removing the line that tried to re-xchg the
      old_prog pointer if there was an error, so this driver acts like the
      other drivers, which return the error code without restoring the
      pointer.

      Also, update the copy-the-pointer logic to use WRITE_ONCE, as many/all
      of the other drivers do, which required changing the two separate
      functions that write the xdp_prog variable in the ring.
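
      A minimal sketch of that pattern, assuming simplified example_*
      structures rather than the real ixgbe types:

      static int example_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
      {
              struct example_adapter *adapter = netdev_priv(dev);
              struct bpf_prog *old_prog;
              int i;

              /* Publish the new program; xchg() hands back the old one. */
              old_prog = xchg(&adapter->xdp_prog, prog);

              /* Copy the pointer into each ring with WRITE_ONCE so the
               * hot path's READ_ONCE()/rcu_dereference() pairs with it. */
              for (i = 0; i < adapter->num_rx_queues; i++)
                      WRITE_ONCE(adapter->rx_ring[i]->xdp_prog, prog);

              if (old_prog)
                      bpf_prog_put(old_prog);

              /* On a later setup failure, just return the error code;
               * there is no re-xchg to restore old_prog. */
              return 0;
      }
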
      
      The code here was modeled after the code in i40e/i40e_xdp_setup().
      
      NOTE: Compile-tested only.
      
      CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      CC: Magnus Karlsson <magnus.karlsson@intel.com>
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> (A Contingent Worker at Intel)
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • sock: add tracepoint for send recv length · 6e6eda44
      Yunhui Cui authored
      Add two tracepoints to monitor the tcp/udp traffic per process and
      per cgroup.
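
      A minimal sketch of what such a tracepoint definition can look like,
      assuming an event named sock_send_length with illustrative fields
      (the actual definition may differ):

      #include <linux/tracepoint.h>

      TRACE_EVENT(sock_send_length,
              TP_PROTO(struct sock *sk, int ret, int flags),
              TP_ARGS(sk, ret, flags),
              TP_STRUCT__entry(
                      __field(void *, sk)
                      __field(__u16, family)
                      __field(int, ret)
                      __field(int, flags)
              ),
              TP_fast_assign(
                      __entry->sk = sk;
                      __entry->family = sk->sk_family;
                      __entry->ret = ret;
                      __entry->flags = flags;
              ),
              TP_printk("sk=%p family=%u length=%d flags=%d",
                        __entry->sk, __entry->family, __entry->ret,
                        __entry->flags)
      );

      Once merged, such an event can be enabled and read through tracefs
      (e.g. under /sys/kernel/tracing/events/sock/), and each record is
      attributed to the emitting task, which gives the per-process view.
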
      
      Regarding monitoring the tcp/udp traffic of each process, there are
      two existing solutions: the first is netatop
      (https://www.atoptool.nl/netatop.php); the second is via
      kprobe/kretprobe.
      
      The netatop solution is implemented by registering hook functions at
      the hook points provided by the netfilter framework.
      
      These hook functions may run in soft interrupt context and cannot
      directly obtain the pid, so extra data structures are added to bind
      packets to processes, for example struct taskinfobucket,
      struct taskinfo ...
      
      Every time the process sends or receives packets, it needs multiple
      hashmap lookups, resulting in low performance, and the tcp/udp
      traffic statistics can be inaccurate (for example, when multiple
      threads share sockets).
      
      We can obtain the information with kretprobe, but kprobe gets its
      result by trapping into an exception, which loses performance
      compared to a tracepoint.
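
      For context, a minimal sketch of how such a tracepoint could be
      hooked into the socket send path; the call site and arguments here
      are illustrative, not necessarily the patch's exact ones:

      int sock_sendmsg(struct socket *sock, struct msghdr *msg)
      {
              int ret = sock_sendmsg_nosec(sock, msg);

              /* A disabled tracepoint compiles down to a patched-out
               * static branch, which is why the overhead stays close to
               * the no-trace baseline in the numbers below. */
              trace_sock_send_length(sock->sk, ret, 0);
              return ret;
      }
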
      
      We compared the performance of tracepoints with the above two methods, and
      the results are as follows:
      
      ab -n 1000000 -c 1000 -r http://127.0.0.1/index.html
      without trace:
      Time per request: 39.660 [ms] (mean)
      Time per request: 0.040 [ms] (mean, across all concurrent requests)
      
      netatop:
      Time per request: 50.717 [ms] (mean)
      Time per request: 0.051 [ms] (mean, across all concurrent requests)
      
      kretprobe:
      Time per request: 43.168 [ms] (mean)
      Time per request: 0.043 [ms] (mean, across all concurrent requests)
      
      tracepoint:
      Time per request: 41.004 [ms] (mean)
      Time per request: 0.041 [ms] (mean, across all concurrent requests)
      
      It can be seen that the tracepoint approach has the lowest overhead
      of the instrumented methods.
      Signed-off-by: Yunhui Cui <cuiyunhui@bytedance.com>
      Signed-off-by: Xiongchun Duan <duanxiongchun@bytedance.com>
      Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>