1. 29 Nov, 2010 5 commits
  2. 28 Nov, 2010 29 commits
  3. 24 Nov, 2010 6 commits
    • xps: Transmit Packet Steering · 1d24eb48
      Tom Herbert authored
      This patch implements transmit packet steering (XPS) for multiqueue
      devices.  XPS selects a transmit queue during packet transmission
      based on configuration, by mapping the CPU transmitting the packet
      to a queue.  This is the transmit-side analogue to RPS: where RPS
      selects a CPU based on the receive queue, XPS selects a queue based
      on the CPU.  (There was previously an XPS patch from Eric Dumazet,
      but that might more appropriately be called transmit completion
      steering.)
      
      Each transmit queue can be associated with a number of CPUs which
      will use the queue to send packets.  This is configured as a CPU
      mask on a per-queue basis in:
      
      /sys/class/net/eth<n>/queues/tx-<n>/xps_cpus
      
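      For example, a minimal userspace sketch (the device name, mask
      value, and the program itself are illustrative, not part of this
      patch) that lets CPUs 0-3 use eth0's tx-0 queue by writing a hex
      CPU mask:

          #include <stdio.h>

          int main(void)
          {
                  /* Hypothetical example path; adjust device/queue. */
                  FILE *f = fopen("/sys/class/net/eth0/queues/tx-0/xps_cpus", "w");

                  if (!f) {
                          perror("fopen");
                          return 1;
                  }
                  fprintf(f, "f\n");      /* mask 0xf = CPUs 0,1,2,3 */
                  return fclose(f) != 0;
          }
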
      The mappings are stored per device in an inverted data structure
      that maps CPUs to queues.  In the netdevice structure this is an
      array with one entry per possible CPU, where each entry holds an
      array of queue indexes for the queues that CPU can use.
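
      A sketch of that inverted mapping, modeled on the structures this
      patch adds (field names here are approximate, not copied from the
      final code):

          struct xps_map {
                  unsigned int len;        /* valid entries in queues[] */
                  unsigned int alloc_len;  /* allocated size of queues[] */
                  struct rcu_head rcu;
                  u16 queues[0];           /* queues this CPU may use */
          };

          struct xps_dev_maps {
                  struct rcu_head rcu;
                  struct xps_map __rcu *cpu_map[0]; /* one per possible CPU */
          };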
      
      The benefit of XPS is improved locality in the per-queue data
      structures.  Also, transmit completions are more likely to be done
      near the sending thread, so this should promote locality back to
      the socket on free (e.g. UDP).  The benefits of XPS depend on the
      cache hierarchy, application load, and other factors.  XPS would
      nominally be configured so that a queue is shared only by CPUs
      which share a cache; the degenerate configuration would be one
      queue per CPU.
      
      Below are some benchmark results which show the potential benefit
      of this patch.  The netperf test runs 500 instances of the netperf
      TCP_RR test with 1-byte requests and responses.
      
      bnx2x on 16 core AMD
         XPS (16 queues, 1 TX queue per CPU)  1234K at 100% CPU
         No XPS (16 queues)                   996K at 100% CPU
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xps: Improvements in TX queue selection · 3853b584
      Tom Herbert authored
      In dev_pick_tx, don't do the work of calculating the queue index
      or setting the index in the sock unless the device has more than
      one queue.  This allows the sock to be set only with the queue
      index of a multi-queue device, which is desirable when devices are
      stacked, as in a tunnel.
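
      A simplified sketch of the idea (pick_tx_queue is a stand-in name,
      and the hash fallback stands in for the full selection logic):

          static u16 pick_tx_queue(struct net_device *dev,
                                   struct sk_buff *skb)
          {
                  /* Single-queue device: nothing to compute or cache,
                   * so a stacked device underneath cannot clobber the
                   * sock's cached queue index. */
                  if (dev->real_num_tx_queues == 1)
                          return 0;

                  /* Multi-queue device: compute an index; the caller
                   * may cache it in the sock. */
                  return skb_tx_hash(dev, skb);
          }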
      
      We also allow the mapping of a socket to a queue to be changed.
      To maintain in-order packet transmission, a flag (ooo_okay) has
      been added to the sk_buff structure.  If a transport layer sets
      this flag on a packet, the transmit queue can be changed for the
      socket.  Presumably, the transport would set this if there were no
      possibility of creating OOO packets (for instance, when there are
      no packets in flight for the socket).  This patch includes the
      modification in TCP output for setting this flag.
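
      A hedged sketch of the TCP-side test (the exact condition used by
      the patch may differ; sk_wmem_alloc_get() == 0 stands in for
      "nothing in flight"):

          static void tcp_set_ooo_okay(struct sock *sk,
                                       struct sk_buff *skb)
          {
                  /* Switching queues is safe only when no packets are
                   * outstanding, so reordering cannot occur. */
                  skb->ooo_okay = sk_wmem_alloc_get(sk) == 0;
          }
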
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • infiniband: remove dev_base_lock use · 22f4fbd9
      Eric Dumazet authored
      dev_base_lock is the legacy way to lock the device list and is
      planned to disappear (writers hold RTNL, readers hold the RCU read
      lock).
      
      Convert rdma_translate_ip() and update_ipv6_gids() to RCU locking.
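
      The read side follows the usual device-list RCU pattern (a generic
      sketch, not the exact hunk from this commit):

          static void inspect_devices(struct net *net)
          {
                  struct net_device *dev;

                  rcu_read_lock();
                  for_each_netdev_rcu(net, dev) {
                          /* read-only use of dev; no sleeping here */
                  }
                  rcu_read_unlock();
          }
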
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • scm: lower SCM_MAX_FD · bba14de9
      Eric Dumazet authored
      Lower SCM_MAX_FD from 255 to 253 so that allocations for
      scm_fp_list are halved.  (Commit f8d570a4 added two pointers to
      this structure.)
      
      scm_fp_dup() should not copy the whole structure (and trigger
      kmemcheck warnings), but only the used part.  While we are at it,
      allocate only the needed size.
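
      A hedged sketch of the scm_fp_dup() shape after this change (see
      net/core/scm.c for the real code):

          struct scm_fp_list *scm_fp_dup(struct scm_fp_list *fpl)
          {
                  struct scm_fp_list *new_fpl;
                  int i;

                  if (!fpl)
                          return NULL;

                  /* Copy only the header plus the fpl->count used file
                   * pointers, not the whole fixed-size array. */
                  new_fpl = kmemdup(fpl,
                                    offsetof(struct scm_fp_list,
                                             fp[fpl->count]),
                                    GFP_KERNEL);
                  if (new_fpl) {
                          for (i = 0; i < fpl->count; i++)
                                  get_file(new_fpl->fp[i]);
                  }
                  return new_fpl;
          }
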
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: mcast: RCU conversion · 456b61bc
      Eric Dumazet authored
      The ipv6_sk_mc_lock rwlock becomes a spinlock.
      
      Readers (inet6_mc_check()) now take rcu_read_lock() instead of the
      read lock.  Writers don't need to disable BH anymore.
      
      struct ipv6_mc_socklist objects are reclaimed after one RCU grace
      period.
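
      A sketch of the new read side (the lookup loop approximates
      inet6_mc_check(); field and helper names are from memory):

          static bool sock_in_group(struct ipv6_pinfo *np,
                                    const struct in6_addr *mc_addr)
          {
                  struct ipv6_mc_socklist *mc;
                  bool found = false;

                  rcu_read_lock();
                  for (mc = rcu_dereference(np->ipv6_mc_list); mc;
                       mc = rcu_dereference(mc->next)) {
                          if (ipv6_addr_equal(&mc->addr, mc_addr)) {
                                  found = true;
                                  break;
                          }
                  }
                  rcu_read_unlock();
                  return found;
          }
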
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Giuseppe CAVALLARO authored · 2757a15f