1. 08 Apr, 2019 8 commits
    • nl80211/cfg80211: Specify band specific min RSSI thresholds with sched scan · 1e1b11b6
      vamsi krishna authored
      This commit adds support for specifying per-band RSSI thresholds
      for each match set. This enhances the current behavior, where a
      single rssi_thold applies across all bands, by introducing
      per-band RSSI thresholds. These thresholds are referenced through
      NL80211_BAND_* (enum nl80211_band) values as attribute types, and
      such attributes/values for each band are nested through
      NL80211_ATTR_SCHED_SCAN_MIN_RSSI.
      These band-specific RSSI thresholds take precedence over
      the current rssi_thold per match set.
      Drivers indicate this support through
      %NL80211_EXT_FEATURE_SCHED_SCAN_BAND_SPECIFIC_RSSI_THOLD.
      To stay backward compatible, these per-band RSSI attributes/values
      do not act as a "default RSSI filter" the way
      NL80211_SCHED_SCAN_MATCH_ATTR_RSSI does; instead, they have to be
      specified explicitly for each corresponding match set.
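
      As a minimal sketch (not from the commit itself), userspace might
      build such a nested attribute with libnl-3 roughly like this; the
      helper name and threshold values are hypothetical, and where the
      nest sits in the overall message (e.g. inside a match set) is
      elided:

        #include <errno.h>
        #include <netlink/attr.h>
        #include <netlink/msg.h>
        #include <linux/nl80211.h>

        /* Nest per-band minimum RSSI values (in dBm) under
         * NL80211_ATTR_SCHED_SCAN_MIN_RSSI, keyed by enum nl80211_band. */
        static int put_per_band_rssi(struct nl_msg *msg)
        {
                struct nlattr *nest;

                nest = nla_nest_start(msg, NL80211_ATTR_SCHED_SCAN_MIN_RSSI);
                if (!nest)
                        return -ENOMEM;

                /* hypothetical per-band thresholds */
                nla_put_s32(msg, NL80211_BAND_2GHZ, -70);
                nla_put_s32(msg, NL80211_BAND_5GHZ, -75);

                nla_nest_end(msg, nest);
                return 0;
        }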
      Signed-off-by: vamsi krishna <vamsin@codeaurora.org>
      Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
      [rebase on refactoring, add policy]
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • nl80211: reindent some sched scan code · d39f3b4f
      Johannes Berg authored
      The sched scan code here is really deep - avoid one level
      of indentation by short-circuiting the loop instead of
      putting everything into the if block.
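
      A toy illustration of the pattern (not the nl80211 code itself):

        #include <stdio.h>

        static int wanted(int v)
        {
                return v % 2 == 0;
        }

        int main(void)
        {
                int i;

                for (i = 0; i < 10; i++) {
                        /* short-circuit: skip uninteresting entries up
                         * front instead of wrapping the whole loop body
                         * in an if block, saving one indentation level */
                        if (!wanted(i))
                                continue;
                        printf("%d\n", i);
                }
                return 0;
        }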
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • net: sched: flower: insert filter to ht before offloading it to hw · 1f17f774
      Vlad Buslov authored
      John reports:
      
      Recent refactoring of fl_change aims to use the classifier spinlock to
      avoid the need for rtnl lock. In doing so, the fl_hw_replace_filter()
      function was moved to before the lock is taken. This can create problems
      for drivers if duplicate filters are created (common in ovs tc offload
      due to filters being triggered by user-space matches).
      
      Drivers registered for such filters will now receive multiple copies of
      the same rule, each with a different cookie value. This means that the
      drivers would need to do a full match field lookup to determine
      duplicates, repeating work that will happen in flower __fl_lookup().
      Currently, drivers do not expect to receive duplicate filters.
      
      To fix this, verify that a filter with the same key is not present in
      the flower classifier hash table and insert the new filter into the
      flower hash table before offloading it to hardware. Implement helper
      function fl_ht_insert_unique() to atomically verify/insert a filter.
      
      This change makes the filter visible to the fast path at the beginning
      of the fl_change() function, which means it can no longer be freed
      directly in case of error. Refactor the fl_change() error handling code
      to deallocate the filter after an RCU grace period.
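
      A sketch of the verify/insert idea (the kernel's
      rhashtable_lookup_insert_fast() atomically inserts a node only if
      no entry with the same key exists; the field names here only
      approximate the flower code):

        err = rhashtable_lookup_insert_fast(&mask->filter_ht,
                                            &fnew->ht_node,
                                            mask->filter_ht_params);
        if (err)        /* -EEXIST: duplicate key, reject before any
                         * hardware offload happens */
                goto errout;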
      
      Fixes: 620da486 ("net: sched: flower: refactor fl_change")
      Reported-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'rhashtable-bitlocks' · 9186c90b
      David S. Miller authored
      NeilBrown says:
      
      ====================
      Convert rhashtable to use bitlocks
      
      This series converts rhashtable to use a per-bucket bitlock
      rather than a separate array of spinlocks.
      This:
        reduces memory usage
        results in slightly fewer memory accesses
        slightly improves parallelism
        makes a configuration option unnecessary
      
      The main change from the previous version is to use a distinct type for
      the pointer in the bucket which has a bit-lock in it.  This
      helped find two places where rht_ptr() was missed: one
      in rhashtable_free_and_destroy(), and one in print_ht() in the test code.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rhashtable: add lockdep tracking to bucket bit-spin-locks. · 149212f0
      NeilBrown authored
      Native bit_spin_locks are not tracked by lockdep.
      
      The bit_spin_locks used for rhashtable buckets are local
      to the rhashtable implementation, so there is little opportunity
      for the sort of misuse that lockdep might detect.
      However, locks are held while a hash function or compare
      function is called, and if one of those took a lock,
      a misbehaviour would be possible.
      
      As it is quite easy to add lockdep support, this unlikely
      possibility seems to be justification enough.
      
      So create a lockdep class for the bucket bit_spin_lock and attach
      it through a lockdep_map in each bucket_table.
      
      Without the 'nested' annotation in rhashtable_rehash_one(), lockdep
      correctly reports a possible problem as this lock is taken
      while another bucket lock (in another table) is held.  This
      confirms that the added support works.
      With the correct nested annotation in place, lockdep reports
      no problems.
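
      A sketch of the approach under stated assumptions (the struct and
      helper names below are illustrative; the real code lives in
      lib/rhashtable.c): embed a lockdep_map in the bucket table and
      wrap the bit-lock operations in lock_map_acquire()/
      lock_map_release() so lockdep sees them:

        #include <linux/bit_spinlock.h>
        #include <linux/lockdep.h>

        struct bucket_table_sketch {
                struct lockdep_map dep_map;     /* one class per table */
                unsigned long buckets[];
        };

        static inline void bucket_lock(struct bucket_table_sketch *tbl,
                                       unsigned long *bkt)
        {
                bit_spin_lock(1, bkt);          /* BIT(1) of the bucket */
                lock_map_acquire(&tbl->dep_map);
        }

        static inline void bucket_unlock(struct bucket_table_sketch *tbl,
                                         unsigned long *bkt)
        {
                lock_map_release(&tbl->dep_map);
                bit_spin_unlock(1, bkt);
        }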
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rhashtable: use bit_spin_locks to protect hash bucket. · 8f0db018
      NeilBrown authored
      This patch changes rhashtables to use a bit_spin_lock on BIT(1) of the
      bucket pointer to lock the hash chain for that bucket.
      
      The benefits of a bit spin_lock are:
       - no need to allocate a separate array of locks.
       - no need to have a configuration option to guide the
         choice of the size of this array
       - locking cost is often a single test-and-set in a cache line
         that will have to be loaded anyway.  When inserting at, or removing
         from, the head of the chain, the unlock is free - writing the new
         address in the bucket head implicitly clears the lock bit.
         For __rhashtable_insert_fast() we ensure this always happens
         when adding a new key.
       - even when locking costs 2 updates (lock and unlock), they are
         in a cacheline that needs to be read anyway.
      
      The cost of using a bit spin_lock is a little bit of code complexity,
      which I think is quite manageable.
      
      Bit spin_locks are sometimes inappropriate because they are not fair -
      if multiple CPUs repeatedly contend on the same lock, one CPU can
      easily be starved.  This is not a credible situation with rhashtable.
      Multiple CPUs may want to repeatedly add or remove objects, but they
      will typically do so at different buckets, so they will attempt to
      acquire different locks.
      
      As we have more bit-locks than we previously had spinlocks (by at
      least a factor of two) we can expect slightly less contention to
      go with the slightly better cache behavior and reduced memory
      consumption.
      
      To enhance type checking, a new struct is introduced to represent the
      pointer plus lock-bit that is stored in the bucket-table.  This is
      "struct rhash_lock_head" and is empty.  A pointer to this needs to be
      cast to either an unsigned long, or a "struct rhash_head *", to be
      useful.  Variables of this type are most often called "bkt".
      
      Previously "pprev" would sometimes point to a bucket, and sometimes a
      ->next pointer in an rhash_head.  As these are now different types,
      pprev is NULL when it would have pointed to the bucket. In that case,
      'blk' is used, together with correct locking protocol.
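
      A sketch of the "free unlock" on insert-at-head described above
      (illustrative fragment; the lock-bit masking is open-coded here
      where the real code uses rht_ptr() and friends):

        static void insert_head_locked(unsigned long *bkt,
                                       struct rhash_head *new)
        {
                bit_spin_lock(1, bkt);  /* lock this bucket's chain */
                /* link the old head, masking off the lock bit */
                RCU_INIT_POINTER(new->next,
                                 (struct rhash_head *)(*bkt & ~BIT(1)));
                /* publishing the new head leaves BIT(1) clear, so this
                 * single store is also the unlock */
                smp_store_release(bkt, (unsigned long)new);
        }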
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rhashtable: allow rht_bucket_var to return NULL. · ff302db9
      NeilBrown authored
      Rather than returning a pointer to a static nulls, rht_bucket_var()
      now returns NULL if the bucket doesn't exist.
      This will make the next patch, which stores a bitlock in the
      bucket pointer, somewhat cleaner.
      
      This change involves introducing __rht_bucket_nested(), which is
      like rht_bucket_nested() but doesn't provide the static nulls,
      and changing rht_bucket_nested() to call this and possibly
      provide a static nulls - as is still needed for the non-var case.
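
      A sketch of what a caller of rht_bucket_var() now has to handle
      (hypothetical caller; the return type matches the description
      above):

        struct rhash_head __rcu **pprev;

        pprev = rht_bucket_var(tbl, hash);
        if (!pprev)
                return;  /* bucket was never allocated: nothing to do */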
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rhashtable: use cmpxchg() in nested_table_alloc() · 7a41c294
      NeilBrown authored
      nested_table_alloc() relies on the fact that there is
      at most one spinlock allocated for every slot in the top
      level nested table, so it is not possible for two threads
      to try to allocate the same table at the same time.
      
      This assumption is a little fragile (it is not explicit) and is
      unnecessary as cmpxchg() can be used instead.
      
      A future patch will replace the spinlocks by per-bucket bitlocks,
      and then we won't be able to protect the slot pointer with a spinlock.
      
      So replace rcu_assign_pointer() with cmpxchg() - which has equivalent
      barrier properties.
      If the cmpxchg() fails, free the table that was just allocated.
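
      A sketch of the allocate-or-lose pattern this describes (the slot
      and table names only approximate those in lib/rhashtable.c):

        ntbl = kzalloc(PAGE_SIZE, GFP_ATOMIC);
        if (!ntbl)
                return NULL;

        /* publish the new table only if the slot is still empty;
         * cmpxchg() returns the old value, so non-NULL means another
         * thread won the race */
        old = cmpxchg(&slot->table, NULL, ntbl);
        if (old) {
                kfree(ntbl);    /* lost the race: free our allocation */
                ntbl = old;     /* and use the winner's table */
        }
        return ntbl;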
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 07 Apr, 2019 28 commits
  3. 06 Apr, 2019 1 commit
    • r8169: disable tx interrupt coalescing on RTL8168 · eb94dc9a
      Heiner Kallweit authored
      In contrast to switching rx irq coalescing off, which fixed an issue,
      switching tx irq coalescing off is merely a latency optimization,
      therefore it goes through net-next. As part of this change:
      
      - Remove INTT_0 .. INTT_3 constants, they aren't used.
      
      - Remove comment in rtl_hw_start_8169(), we now have a detailed
        description by the code in rtl_set_coalesce().
      
      - Due to switching irq coalescing off by default we don't need the
        initialization in rtl_hw_start_8168(). If ethtool is used to switch
        coalescing on, then rtl_set_coalesce() will configure this register.
      
      For the sake of completeness: This patch just changes the default.
      Users still have the option to configure irq coalescing via ethtool.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 05 Apr, 2019 3 commits
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · f83f7151
      David S. Miller authored
      Minor comment merge conflict in mlx5.
      
      A staging driver has a fixup due to the skb->xmit_more changes
      in 'net-next', but the driver was removed in 'net'.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge tag 'mm-compaction-5.1-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux · 7f46774c
      Linus Torvalds authored
      Pull mm/compaction fixes from Mel Gorman:
       "The merge window for 5.1 introduced a number of compaction-related
        patches. with intermittent reports of corruption and functional
        issues. The bugs are due to sloopy checking of zone boundaries and a
        corner case where invalid indexes are used to access the free lists.
      
        Reports are not common but at least two users and 0-day have tripped
        over them. There is a chance that one of the syzbot reports is
        related but it has not been confirmed properly.
      
        The normal submission path is with Andrew but there have been some
        delays and I consider them urgent enough that they should be picked up
        before RC4 to avoid duplicate reports.
      
        All of these have been successfully tested on older RC windows. This
        will make this branch look like a rebase but in fact, they've simply
        been lifted again from Andrew's tree and placed on a fresh branch.
        I've no reason to believe that this has invalidated the testing given
        the lack of change in compaction and the nature of the fixes"
      
      * tag 'mm-compaction-5.1-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux:
        mm/compaction.c: abort search if isolation fails
        mm/compaction.c: correct zone boundary handling when resetting pageblock skip hints
    • tty: mark Siemens R3964 line discipline as BROKEN · c7084edc
      Greg Kroah-Hartman authored
      The n_r3964 line discipline driver was written in a different time, when
      SMP machines were rare, and users were trusted to do the right thing.
      Since then, the world has moved on, but not this code; it has stayed
      rooted in the past with its lovely hand-crafted list structures and
      loads of "interesting" race conditions all over the place.
      
      After attempting to clean up most of the issues, I just gave up and am
      now marking the driver as BROKEN so that hopefully someone who has this
      hardware will show up out of the woodwork (I know you are out there!)
      and will help with debugging a raft of changes that I had lying around
      for the code, but was too afraid to commit as odds are they would break
      things.
      
      Many thanks to Jann and Linus for pointing out the initial problems in
      this codebase, as well as many reviews of my attempts to fix the issues.
      It was a case of whack-a-mole, and as you can see, the mole won.
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>