  3. 13 Apr, 2019 9 commits
    • David S. Miller's avatar
      Merge branch 'rhashtable-bit-locking-m68k' · 5fa7d3f9
      David S. Miller authored
      NeilBrown says:
      
      ====================
      Fix rhashtable bit-locking for m68k
      
      As reported by Guenter Roeck, the new rhashtable bit-locking
      doesn't work on m68k, which requires only 2-byte alignment, so
      BIT(1) in addresses is not unused.
      
      We currently use BIT(0) to identify a NULLS marker, but that is only
      needed in ->next pointers.  The bucket head does not need a NULLS
      marker, so the lsb there can be used for locking.
      
      The first 4 patches make some small improvements and re-arrange some
      code.  The final patch converts to using only BIT(0) for these two
      different special purposes.
      
      I had previously suggested dropping the series until I could fix it.
      Given that the fix was fairly easy, I retract that; I think it best
      simply to add these patches to fix the code.
      ====================
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5fa7d3f9
    • NeilBrown's avatar
      rhashtable: use BIT(0) for locking. · ca0b709d
      NeilBrown authored
      As reported by Guenter Roeck, the new bit-locking using
      BIT(1) doesn't work on the m68k architecture.  m68k only requires
      2-byte alignment for words and longwords, so there is only one
      unused bit in pointers to structs.  We currently use two: one for
      the NULLS marker at the end of the linked list, and one for the
      bit-lock in the head of the list.
      
      The two uses don't need to conflict as we never need the head of the
      list to be a NULLS marker - the marker is only needed to check if an
      object has moved to a different table, and the bucket head cannot
      move.  The NULLS marker is only needed in a ->next pointer.
      
      As we already have different types for the bucket head pointer (struct
      rhash_lock_head) and the ->next pointers (struct rhash_head), it is
      fairly easy to treat the lsb differently in each.
      
      So: Initialize buckets heads to NULL, and use the lsb for locking.
      When loading the pointer from the bucket head, if it is NULL (ignoring
      the lock bit), report it as the expected NULLS marker.
      When storing a value into a bucket head, if it is a NULLS marker,
      store NULL instead.
      
      And convert all places that used bit 1 for locking, to use bit 0.
      
      Fixes: 8f0db018 ("rhashtable: use bit_spin_locks to protect hash bucket.")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ca0b709d
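      The NULL-to-NULLS translation and single-bit locking described above
      can be sketched in plain C.  This is a hypothetical userspace sketch
      with illustrative names, not the kernel's rhashtable API:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit 0 of a bucket-head pointer is the lock bit; an odd value with
 * bit 0 set (a NULLS marker) terminates ->next chains.  The two uses
 * never conflict because a bucket head is never a NULLS marker. */

#define LOCK_BIT        ((uintptr_t)1)
#define NULLS_MARKER(v) ((void *)(((uintptr_t)(v) << 1) | 1))

static int is_nulls(const void *p)
{
	return ((uintptr_t)p & 1) != 0;
}

/* Loading from a bucket head: NULL (ignoring the lock bit) is
 * reported as the expected NULLS marker. */
static void *bucket_load(void *head)
{
	uintptr_t v = (uintptr_t)head & ~LOCK_BIT;

	return v ? (void *)v : NULLS_MARKER(0);
}

/* Storing into a bucket head: a NULLS marker becomes NULL; any lock
 * bit already present in *headp is preserved. */
static void bucket_store(void **headp, void *obj)
{
	uintptr_t lock = (uintptr_t)*headp & LOCK_BIT;

	if (is_nulls(obj))
		obj = NULL;
	*headp = (void *)((uintptr_t)obj | lock);
}
```

      With this scheme only one low pointer bit is consumed, which is all
      that m68k's 2-byte alignment guarantees.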
    • NeilBrown's avatar
      rhashtable: replace rht_ptr_locked() with rht_assign_locked() · f4712b46
      NeilBrown authored
      The only time rht_ptr_locked() is used is to store a new
      value in a bucket head, and that is the only use that makes
      sense.  So replace it with a function which does the
      whole task: set the lock bit and assign to a bucket head.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4712b46
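      The shape of such a combined helper might be sketched as follows.
      The name and body are illustrative only; the real rht_assign_locked()
      also uses rcu_assign_pointer() for safe publication:

```c
#include <stdint.h>

/* Hypothetical sketch: store a new object into a bucket head while
 * keeping the bucket locked, i.e. set the lock bit as part of the
 * assignment rather than asking callers to pre-tag the pointer. */
static void assign_locked(void **bkt, void *obj)
{
	*bkt = (void *)((uintptr_t)obj | 1);
}
```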
    • NeilBrown's avatar
      rhashtable: move dereference inside rht_ptr() · adc6a3ab
      NeilBrown authored
      Rather than dereferencing a pointer to a bucket and then passing the
      result to rht_ptr(), we now pass in the pointer and do the dereference
      in rht_ptr().
      
      This requires that we pass in the tbl and hash as well to support RCU
      checks, and means that the various rht_for_each functions can expect a
      pointer that can be dereferenced without further care.
      
      There are two places where we dereference a bucket pointer
      with no testable protection - in each case we know
      that we must have exclusive access without having taken a lock.
      The previous code used rht_dereference() to pretend that holding
      the mutex provided protection, but holding the mutex never provides
      protection for accessing buckets.
      
      So instead introduce rht_ptr_exclusive() that can be used when
      there is known to be exclusive access without holding any locks.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      adc6a3ab
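      The change in calling convention might be sketched like this
      (hypothetical userspace names; the real rht_ptr() also takes tbl
      and hash so RCU protection can be checked):

```c
#include <stdint.h>

/* Before: callers dereferenced the bucket themselves and then passed
 * the result in:          p = rht_ptr(*bkt);
 * After: the dereference happens inside the helper, so callers get a
 * pointer they can use without further care:  p = rht_ptr(bkt);
 * This sketch just masks off the low lock bit after loading. */
static void *rht_ptr_sketch(void *const *bkt)
{
	return (void *)((uintptr_t)*bkt & ~(uintptr_t)1);
}
```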
    • NeilBrown's avatar
      rhashtable: reorder some inline functions and macros. · c5783311
      NeilBrown authored
      This patch only moves some code around; it doesn't
      change the code at all.
      A subsequent patch will benefit from this as it needs
      to add calls to functions which are now defined before the
      call-site, but weren't before.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c5783311
    • NeilBrown's avatar
      rhashtable: fix some __rcu annotation errors · e4edbe3c
      NeilBrown authored
      With these annotations, the rhashtable code now produces no
      warnings when compiled with "C=1" for sparse checking.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e4edbe3c
    • Gustavo A. R. Silva's avatar
      rhashtable: use struct_size() in kvzalloc() · c252aa3e
      Gustavo A. R. Silva authored
      One of the more common cases of allocation size calculations is finding
      the size of a structure that has a zero-sized array at the end, along with
      memory for some number of elements for that array.  For example:
      
      struct foo {
          int stuff;
          struct boo entry[];
      };
      
      size = sizeof(struct foo) + count * sizeof(struct boo);
      instance = kvzalloc(size, GFP_KERNEL);
      
      Instead of leaving these open-coded and prone to type mistakes, we can
      now use the new struct_size() helper:
      
      instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);
      
      This code was detected with the help of Coccinelle.
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c252aa3e
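      The idea behind struct_size() can be sketched in userspace with
      offsetof().  This is a simplified stand-in: the kernel macro takes
      a pointer instance rather than a type, and saturates on arithmetic
      overflow, which this sketch omits:

```c
#include <stddef.h>
#include <stdlib.h>

struct boo { int x; };

struct foo {
	int stuff;
	struct boo entry[];	/* flexible array member */
};

/* Simplified stand-in for the kernel's struct_size(): the size of the
 * struct up to the flexible array, plus count elements of that array.
 * Unlike the real macro, this does not guard against overflow. */
#define STRUCT_SIZE(type, member, count) \
	(offsetof(type, member) + sizeof(((type *)0)->member[0]) * (size_t)(count))

static struct foo *alloc_foo(size_t count)
{
	/* calloc() zeroes the allocation, like kvzalloc() */
	return calloc(1, STRUCT_SIZE(struct foo, entry, count));
}
```

      Using offsetof() rather than sizeof(struct foo) also avoids counting
      any padding after the last fixed member twice.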
    • David S. Miller's avatar
      Merge branch 'nfp-update-to-control-structures' · 9d60f0ea
      David S. Miller authored
      Jakub Kicinski says:
      
      ====================
      nfp: update to control structures
      
      This series prepares NFP control structures for crypto offloads.
      So far we mostly dealt with configuration requests under rtnl lock.
      This will no longer be the case with crypto.  Additionally we will
      try to reuse the BPF control message format, so we move common code
      out of BPF.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9d60f0ea
    • Jakub Kicinski's avatar
      nfp: split out common control message handling code · bcf0cafa
      Jakub Kicinski authored
      BPF's control message handler seems like a good base to build
      on for request-reply control messages.  Split it out to allow
      for reuse.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bcf0cafa