1. 03 Jan, 2015 7 commits
    • rhashtable: Per bucket locks & deferred expansion/shrinking · 97defe1e
      Thomas Graf authored
      Introduces an array of spinlocks to protect bucket mutations. The number
      of spinlocks per CPU is configurable; the lock protecting a bucket is
      selected based on the hash of that bucket. This allows parallel
      insertions and removals of entries which do not share a lock (see the
      lock-selection sketch after this entry).
      
      The patch also defers expansion and shrinking to a worker queue, which
      allows insertion and removal from atomic context. Insertions and
      deletions may occur in parallel to the resize and are only held up
      briefly while the particular bucket is linked or unzipped.
      
      Mutations of the bucket table pointer are protected by a new mutex;
      read access is RCU protected.
      
      In the event of an expansion or shrinking, the newly allocated bucket
      table is exposed as a so-called future table as soon as the resize
      process starts. Lookups, deletions, and insertions will briefly use
      both tables. The future table becomes the main table after an RCU grace
      period and after the initial linking of the old table to the new one
      has been performed. Optimization of the chains to make use of the new
      number of buckets follows only once the new table is in use.
      
      The side effect of this is that during that RCU grace period, a bucket
      traversal using any rht_for_each() variant on the main table will not
      see insertions performed during the grace period, as those land in the
      future table. Lookups will see them, since they search both tables if
      needed.
      
      Having multiple insertions and removals occur in parallel requires nelems
      to become an atomic counter.
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
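
      Below is a minimal sketch of the per-bucket lock selection described
      above. The struct layout and names (bucket_table_sketch, locks,
      locks_mask) are illustrative assumptions, not the exact rhashtable
      internals.

        #include <linux/spinlock.h>
        #include <linux/types.h>

        /* Sketch: a power-of-two sized array of spinlocks shared by all buckets. */
        struct bucket_table_sketch {
                unsigned int    locks_mask;     /* number of locks - 1 */
                spinlock_t      *locks;         /* one lock covers many buckets */
        };

        static inline spinlock_t *sketch_bucket_lock(const struct bucket_table_sketch *tbl,
                                                     u32 hash)
        {
                /* Buckets whose hashes collide modulo the lock-array size share
                 * a lock; all other insertions and removals run in parallel. */
                return &tbl->locks[hash & tbl->locks_mask];
        }
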
    • spinlock: Add spin_lock_bh_nested() · 113948d8
      Thomas Graf authored
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nft_hash: Remove rhashtable_remove_pprev() · 897362e4
      Thomas Graf authored
      The removal function of nft_hash currently stores a reference to the
      previous element during lookup, which is used to optimize removal later
      on. This was possible because a lock is held across the calls to
      rhashtable_lookup() and rhashtable_remove().
      
      With the introduction of deferred table resizing in parallel to lookups
      and insertions, the nftables lock will no longer synchronize all
      table mutations and the stored pprev may become invalid.
      
      Removing this optimization makes removal slightly more expensive on
      average but allows the resize cost to be taken out of the insert and
      remove paths (see the sketch after this entry).
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Cc: netfilter-devel@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
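
      A sketch of the lookup-then-remove pattern this leaves behind, with no
      cached pprev. The element type is hypothetical and the rhashtable
      signatures are assumed to match the API of this period.

        #include <linux/rhashtable.h>

        struct nft_hash_elem_sketch {           /* hypothetical element */
                struct rhash_head       node;   /* hash table linkage */
                /* key and data would follow here */
        };

        static void sketch_remove(struct rhashtable *ht, const void *key)
        {
                struct nft_hash_elem_sketch *he;

                he = rhashtable_lookup(ht, key);
                if (he)
                        /* The remove path walks the bucket again instead of
                         * reusing a pprev captured under the old table layout,
                         * which stays correct across deferred resizes. */
                        rhashtable_remove(ht, &he->node);
        }
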
    • rhashtable: Factor out bucket_tail() function · b8e1943e
      Thomas Graf authored
      Subsequent patches will require access to the bucket tail. Access
      to the tail is relatively cheap, as the automatic resizing of the
      table should keep the average number of entries per bucket at no
      more than 0.75 (see the sketch after this entry).
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
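
      A sketch of the tail access being factored out: walk the bucket's
      chain and return the location of its final NULL next pointer so a new
      element can be appended there. The names mirror the rhashtable
      internals of this period, but treat the details as illustrative.

        #include <linux/rhashtable.h>

        static struct rhash_head __rcu **sketch_bucket_tail(struct bucket_table *tbl,
                                                            u32 n)
        {
                struct rhash_head __rcu **pprev;

                /* Walking the whole chain is cheap because resizing keeps the
                 * average chain length at or below 0.75 entries. */
                for (pprev = &tbl->buckets[n];
                     rht_dereference_bucket(*pprev, tbl, n);
                     pprev = &rht_dereference_bucket(*pprev, tbl, n)->next)
                        ;

                return pprev;
        }
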
    • rhashtable: Convert bucket iterators to take table and index · 88d6ed15
      Thomas Graf authored
      This patch is in preparation for introducing per-bucket spinlocks. It
      extends all iterator macros to take the bucket table and bucket
      index. It also introduces a new rht_dereference_bucket() to
      handle protected accesses to buckets.
      
      It adds a barrier() to the RCU iterators to prevent the compiler
      from caching the first element.
      
      The lockdep verifier is introduced as a stub which always succeeds; it
      is properly implemented in the next patch, when the locks are
      introduced (see the sketch after this entry).
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
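
      A sketch of the bucket-aware dereference described above, with the
      lockdep check still stubbed out. The sketch_ names are assumptions;
      only the (tbl, hash) plumbing is the point being illustrated.

        #include <linux/rhashtable.h>

        /* Stub: report the bucket lock as held until per-bucket locks exist
         * (they arrive in the next patch of the series). */
        static inline int sketch_lockdep_rht_bucket_is_held(const struct bucket_table *tbl,
                                                            u32 hash)
        {
                return 1;
        }

        /* The (tbl, hash) pair identifies which bucket lock protects the
         * pointer being dereferenced; iterators of the form
         * rht_for_each(pos, tbl, hash) pass both values through to here. */
        #define sketch_rht_dereference_bucket(p, tbl, hash) \
                rcu_dereference_protected(p, sketch_lockdep_rht_bucket_is_held(tbl, hash))
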
    • rhashtable: Do hashing inside of rhashtable_lookup_compare() · 8d24c0b4
      Thomas Graf authored
      Hash the key inside of rhashtable_lookup_compare(), like
      rhashtable_lookup() does. This allows the hashing functions to be
      simplified and kept private (see the sketch after this entry).
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Cc: netfilter-devel@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
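
      A sketch of a caller after this change: the raw key and a compare
      callback are passed in, and the hashing happens inside the library.
      The element type, key length, and exact signatures here are
      assumptions based on the rhashtable API of this period.

        #include <linux/rhashtable.h>
        #include <linux/string.h>

        #define SKETCH_KEYLEN   16                      /* hypothetical key length */

        struct sketch_elem {                            /* hypothetical element */
                struct rhash_head       node;
                u8                      key[SKETCH_KEYLEN];
        };

        /* Compare callback: 'ptr' is the candidate element, 'arg' the key. */
        static bool sketch_key_eq(void *ptr, void *arg)
        {
                const struct sketch_elem *he = ptr;

                return !memcmp(he->key, arg, SKETCH_KEYLEN);
        }

        static struct sketch_elem *sketch_lookup(struct rhashtable *ht,
                                                 const void *key)
        {
                /* No caller-side hashing: the key is hashed internally, so the
                 * hashing helpers can stay private to lib/rhashtable.c. */
                return rhashtable_lookup_compare(ht, key, sketch_key_eq,
                                                 (void *)key);
        }
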
  2. 02 Jan, 2015 21 commits
  3. 01 Jan, 2015 7 commits
  4. 31 Dec, 2014 5 commits
    • Merge branch 'fib_trie-next' · e495f78d
      David S. Miller authored
      Alexander Duyck says:
      
      ====================
      fib_trie: Reduce time spent in fib_table_lookup by 35 to 75%
      
      These patches are meant to address several performance issues I have
      seen in the fib_trie implementation, and in fib_table_lookup
      specifically.  With these changes in place I have seen a reduction of
      35 to 75% in the total time spent in fib_table_lookup, depending on the
      type of search being performed.
      
      On a VM running on my Core i7-4930K system, with a trie of maximum
      depth 7, this resulted in a reduction of over 370ns per packet in the
      total time to process packets received from an ixgbe interface and
      route them to a dummy interface.  This represents a failed lookup in
      the local trie followed by a successful search in the main trie.
      
                                      Baseline           Refactor
        ixgbe->dummy routing            1.20 Mpps          2.21 Mpps
        -----------------------------------------------------------
        processing time per packet         835 ns             453 ns
        fib_table_lookup            50.1%  418 ns      25.0%  113 ns
        check_leaf.isra.9            7.9%   66 ns        --      --
        ixgbe_clean_rx_irq           5.3%   44 ns       9.8%   44 ns
        ip_route_input_noref         2.9%   25 ns       4.6%   21 ns
        pvclock_clocksource_read     2.6%   21 ns       4.6%   21 ns
        ip_rcv                       2.6%   22 ns       4.0%   18 ns
      
      In the simple case of receiving a frame and dropping it before it can
      reach the socket layer I saw a reduction of 40ns per packet.  This
      represents a trip through the local trie in which the correct leaf is
      found without any need for backtracing.
      
                                      Baseline           Refactor
        ixgbe->local receive            2.65 Mpps          2.96 Mpps
        -----------------------------------------------------------
        processing time per packet         377 ns             337 ns
        fib_table_lookup            25.1%   95 ns      25.8%   87 ns
        ixgbe_clean_rx_irq           8.7%   33 ns       9.0%   30 ns
        check_leaf.isra.9            7.2%   27 ns        --      --
        ip_rcv                       5.7%   21 ns       6.5%   22 ns
      
      These changes have resulted in several functions, such as check_leaf
      and fib_find_node, being inlined, but due to the code simplification
      the overall size of the code has been reduced.
      
         text	   data	    bss	    dec	    hex	filename
        16932	    376	     16	  17324	   43ac	net/ipv4/fib_trie.o - before
        15259	    376	      8	  15643	   3d1b	net/ipv4/fib_trie.o - after
      
      Changes since RFC:
        Replaced this_cpu_ptr with correct call to this_cpu_inc in patch 1
        Changed test for leaf_info mismatch to (key ^ n->key) & li->mask_plen in patch 10
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • fib_trie: Add tracking value for suffix length · 5405afd1
      Alexander Duyck authored
      This change adds a tracking value for the maximum suffix length of all
      prefixes stored in any given tnode.  With this value we can determine
      whether we need to backtrace based on whether the suffix length is
      greater than the pos value (see the sketch after this entry).
      
      By doing this we can reduce the CPU overhead for lookups in the local
      table, as many of the prefixes there are 32 bits long and have a suffix
      length of 0, meaning we can immediately backtrace to the root node
      without needing to test any of the nodes between it and where we ended
      up.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
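
      A sketch of the test the new tracking value enables: if no prefix below
      a tnode has a suffix longer than the bits already consumed at that
      node, nothing below it can provide a better match and we backtrace
      immediately.  The field names (slen, pos) follow the patch, but the
      struct here is a stand-in.

        #include <linux/types.h>

        struct tnode_sketch {
                unsigned char   pos;    /* bit offset this node checks */
                unsigned char   slen;   /* longest suffix among prefixes below */
        };

        static inline bool sketch_may_need_backtrace(const struct tnode_sketch *tn)
        {
                /* Backtracing through tn is only worthwhile if some prefix
                 * below it has a suffix longer than tn's pos.  The 32-bit
                 * prefixes in the local table have slen == 0, so this is
                 * false and we skip straight back toward the root. */
                return tn->slen > tn->pos;
        }
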
    • fib_trie: Remove checks for index >= tnode_child_length from tnode_get_child · 21d1f11d
      Alexander Duyck authored
      For some reason the compiler doesn't seem to understand that when we
      are in a loop that runs from tnode_child_length - 1 to 0 we don't
      expect the value of tn->bits to change.  As such, every call to
      tnode_get_child was rerunning tnode_child_length, which ended up
      consuming quite a bit of space in the resultant assembly code.
      
      I have gone through and verified that in all cases where tnode_get_child
      is used we are either winding through a fixed loop from
      tnode_child_length - 1 to 0, or are in a fastpath case where the value
      has already been validated, either by checking for any remaining bits
      after shifting the index by bits and testing for a leaf, or by using
      tnode_child_length (see the sketch after this entry).
      
      size net/ipv4/fib_trie.o
      Before:
         text	   data	    bss	    dec	    hex	filename
        15506	    376	      8	  15890	   3e12	net/ipv4/fib_trie.o
      
      After:
         text	   data	    bss	    dec	    hex	filename
        14827	    376	      8	  15211	   3b6b	net/ipv4/fib_trie.o
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
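
      A sketch (with stand-in names) of the pattern this relies on: the loop
      bound is computed once by the caller and the child accessor drops its
      per-call bounds check, since every caller either runs a fixed loop over
      all children or has already validated the index.

        #include <linux/rtnetlink.h>

        struct trie_node_sketch {               /* stand-in for the fib_trie tnode */
                unsigned long           bits;   /* log2 of the child array size */
                void __rcu              *child[];
        };

        static inline unsigned long sketch_child_length(const struct trie_node_sketch *tn)
        {
                return 1UL << tn->bits;
        }

        static inline void *sketch_get_child(struct trie_node_sketch *tn,
                                             unsigned long i)
        {
                /* No "i >= child length" test here any more. */
                return rtnl_dereference(tn->child[i]);
        }

        static void sketch_walk_children(struct trie_node_sketch *tn)
        {
                /* Evaluate the bound once so the compiler does not re-read
                 * tn->bits and recompute the length on every iteration. */
                unsigned long i = sketch_child_length(tn);

                while (i--) {
                        void *n = sketch_get_child(tn, i);

                        if (n) {
                                /* process child n */
                        }
                }
        }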