13 Aug, 2019 (40 commits)
    • rcu/nocb: Reduce contention at no-CBs invocation-done time · 523bddd5
      Paul E. McKenney authored
      Currently, nocb_cb_wait() unconditionally acquires the leaf rcu_node
      ->lock to advance callbacks when done invoking the previous batch.
      It does this while holding ->nocb_lock, which means that contention on
      the leaf rcu_node ->lock visits itself on the ->nocb_lock.  This commit
      therefore makes this lock acquisition conditional, forgoing callback
      advancement when the leaf rcu_node ->lock is not immediately available.
      (In this case, the no-CBs grace-period kthread will eventually do any
      needed callback advancement.)
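
      In rough outline, the conditional acquisition looks as follows (a
      sketch using this series' helpers and local variable names, not the
      exact diff):

          if (rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) &&
              rcu_seq_done(&rnp->gp_seq, cur_gp_seq) &&
              raw_spin_trylock_rcu_node(rnp)) { /* irqs already disabled */
                  needwake_gp = rcu_advance_cbs(rnp, rdp);
                  raw_spin_unlock_rcu_node(rnp);
          }
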
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Reduce contention at no-CBs registry-time CB advancement · 6608c3a0
      Paul E. McKenney authored
      Currently, __call_rcu_nocb_wake() conditionally acquires the leaf rcu_node
      structure's ->lock, and only afterwards does rcu_advance_cbs_nowake()
      check to see if it is possible to advance callbacks without potentially
      needing to awaken the grace-period kthread.  Given that the no-awaken
      check can be done locklessly, this commit reverses the order, so that
      rcu_advance_cbs_nowake() is invoked without holding the leaf rcu_node
      structure's ->lock and rcu_advance_cbs_nowake() checks the grace-period
      state before conditionally acquiring that lock, thus reducing the number
      of needless acquisitions of the leaf rcu_node structure's ->lock.
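
      A sketch of the reordered helper (modulo any assertions in the real
      function):

          static void rcu_advance_cbs_nowake(struct rcu_node *rnp,
                                             struct rcu_data *rdp)
          {
                  /* Lockless check: bail unless a grace period is in progress. */
                  if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) ||
                      !raw_spin_trylock_rcu_node(rnp))
                          return;
                  /* A GP is in progress, so advancing needs no wakeup. */
                  WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
                  raw_spin_unlock_rcu_node(rnp);
          }
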
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Round down for number of no-CBs grace-period kthreads · 9fcb09bd
      Paul E. McKenney authored
      Currently, when the square root of the number of CPUs is rounded down
      by int_sqrt(), this round-down is applied to the number of callback
      kthreads per grace-period kthread.  This makes almost no difference
      for large systems, but results in oddities such as three no-CBs
      grace-period kthreads for a five-CPU system, which is a bit excessive.
      This commit therefore causes the round-down to apply to the number of
      no-CBs grace-period kthreads, so that systems with four to eight
      CPUs have only two no-CBs grace-period kthreads.
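
      One way to express this round-down (the exact expression is an
      assumption; the arithmetic is from the commit message):

          /*
           * Rounding the group size up rounds the rcuog kthread count
           * down to int_sqrt(nr_cpu_ids): nr_cpu_ids == 5 gives two
           * rcuog kthreads leading groups of three and two CPUs.
           */
          ls = DIV_ROUND_UP(nr_cpu_ids, int_sqrt(nr_cpu_ids));
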
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU · 81c0b3d7
      Paul E. McKenney authored
      A given rcu_data structure's ->nocb_lock can be acquired very frequently
      by the corresponding CPU and occasionally by the corresponding no-CBs
      grace-period and callbacks kthreads.  In particular, these two kthreads
      will have frequent gaps between ->nocb_lock acquisitions that are roughly
      a grace period in duration.  This means that any excessive ->nocb_lock
      contention will be due to the CPU's acquisitions, and this in turn
      enables a very naive contention-avoidance strategy to be quite effective.
      
      This commit therefore modifies rcu_nocb_lock() to first
      attempt a raw_spin_trylock(), and to atomically increment a
      separate ->nocb_lock_contended across a raw_spin_lock().  This new
      ->nocb_lock_contended field is checked in __call_rcu_nocb_wake() when
      interrupts are enabled, with a spin-wait for contending acquisitions
      to complete, thus allowing the kthreads a chance to acquire the lock.
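
      A sketch of the two pieces (field names from the commit; other
      details are assumptions):

          static void rcu_nocb_lock(struct rcu_data *rdp)
          {
                  if (raw_spin_trylock(&rdp->nocb_lock))
                          return;
                  atomic_inc(&rdp->nocb_lock_contended); /* Publicize contention. */
                  raw_spin_lock(&rdp->nocb_lock);
                  atomic_dec(&rdp->nocb_lock_contended);
          }

          /* Called from __call_rcu_nocb_wake() with interrupts enabled. */
          static void rcu_nocb_wait_contended(struct rcu_data *rdp)
          {
                  while (atomic_read(&rdp->nocb_lock_contended))
                          cpu_relax(); /* Let the kthreads get the lock. */
          }
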
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Avoid needless wakeups of no-CBs grace-period kthread · 7f36ef82
      Paul E. McKenney authored
      Currently, the code provides an extra wakeup for the no-CBs grace-period
      kthread if one of its CPUs is generating excessive numbers of callbacks.
      But satisfying though it is to wake something up when things are going
      south, unless the thing being awakened can actually help solve the
      problem, that extra wakeup does nothing but consume additional CPU time,
      which is exactly what you don't want during a call_rcu() flood.
      
      This commit therefore avoids doing anything if the corresponding
      no-CBs callback kthread is going full tilt.  Otherwise, if advancing
      callbacks immediately might help and if the leaf rcu_node structure's
      lock is immediately available, this commit invokes a new variant of
      rcu_advance_cbs() that advances callbacks only if doing so won't require
      awakening the grace-period kthread (not to be confused with any of the
      no-CBs grace-period kthreads).
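
      A sketch of the policy (the exact conditions are assumptions):

          if (!rdp->nocb_cb_sleep && rcu_segcblist_ready_cbs(&rdp->cblist)) {
                  /* CB kthread is going full tilt; don't waste a wakeup. */
          } else if (rcu_segcblist_pend_cbs(&rdp->cblist)) {
                  /* Advance only if no GP-kthread wakeup would be needed. */
                  rcu_advance_cbs_nowake(rdp->mynode, rdp);
          }
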
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Make __call_rcu_nocb_wake() safe for many callbacks · ce0a825e
      Paul E. McKenney authored
      It might be hard to imagine having more than two billion callbacks
      queued on a single CPU's ->cblist, but someone will do it sometime.
      This commit therefore makes __call_rcu_nocb_wake() handle this situation
      by upgrading local variable "len" from "int" to "long".
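
      The fix is a one-line type change; rcu_segcblist_n_cbs() returns
      long, so in sketch form:

          long len = rcu_segcblist_n_cbs(&rdp->cblist); /* Was int. */
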
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Never downgrade ->nocb_defer_wakeup in wake_nocb_gp_defer() · 383e1332
      Paul E. McKenney authored
      Currently, wake_nocb_gp_defer() simply stores whatever waketype was
      passed in, which can result in a RCU_NOCB_WAKE_FORCE being downgraded
      to RCU_NOCB_WAKE, which could in turn delay callback processing.
      This commit therefore adds a check so that wake_nocb_gp_defer() only
      updates ->nocb_defer_wakeup when the update increases the forcefulness,
      thus avoiding downgrades.
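
      The check itself is tiny; in sketch form:

          /* Escalate only; never downgrade a pending deferred wakeup. */
          if (rdp->nocb_defer_wakeup < waketype)
                  WRITE_ONCE(rdp->nocb_defer_wakeup, waketype);
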
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Enable re-awakening under high callback load · aeeacd9d
      Paul E. McKenney authored
      The __call_rcu_nocb_wake() function and its predecessors set
      ->qlen_last_fqs_check to zero for the first callback and to LONG_MAX / 2
      for forced reawakenings.  The former can result in a too-quick reawakening
      when there are many callbacks ready to invoke and the latter prevents a
      second reawakening.  This commit therefore sets ->qlen_last_fqs_check
      to the current number of callbacks in both cases.  While in the area,
      this commit also moves both assignments under ->nocb_lock.
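
      In sketch form, with len holding the current callback count, both
      paths now do the following while holding ->nocb_lock:

          rdp->qlen_last_fqs_check = len; /* Instead of 0 or LONG_MAX / 2. */
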
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nohz: Turn off tick for offloaded CPUs · 0bd55c69
      Paul E. McKenney authored
      Historically, no-CBs CPUs allowed the scheduler-clock tick to be
      unconditionally disabled on any transition to idle or nohz_full userspace
      execution (see the rcu_needs_cpu() implementations).  Unfortunately,
      the checks used by rcu_needs_cpu() are defeated now that no-CBs CPUs
      use ->cblist, which might make users of battery-powered devices rather
      unhappy.  This commit therefore adds explicit rcu_segcblist_is_offloaded()
      checks to return to the historical energy-efficient semantics.
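
      The resulting check, in sketch form (the CONFIG_RCU_FAST_NO_HZ=n
      variant, assuming the rcu_data layout of the day):

          int rcu_needs_cpu(u64 basemono, u64 *nextevt)
          {
                  *nextevt = KTIME_MAX;
                  return !rcu_segcblist_empty(&this_cpu_ptr(&rcu_data)->cblist) &&
                         !rcu_segcblist_is_offloaded(&this_cpu_ptr(&rcu_data)->cblist);
          }
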
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Suppress uninitialized false-positive in nocb_gp_wait() · 969974e5
      Paul E. McKenney authored
      Some compilers complain that wait_gp_seq might be used uninitialized
      in nocb_gp_wait().  This cannot actually happen because when wait_gp_seq
      is uninitialized, needwait_gp must be false, which prevents wait_gp_seq
      from being used.  But this analysis is apparently beyond some compilers,
      so this commit adds a bogus initialization of wait_gp_seq for the sole
      purpose of suppressing the false-positive warning.
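
      The workaround, in sketch form:

          unsigned long wait_gp_seq = 0; /* Suppress "use uninitialized" warning. */
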
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use build-time no-CBs check in rcu_pending() · 921bb5fa
      Paul E. McKenney authored
      Currently, rcu_pending() invokes rcu_segcblist_is_offloaded() even
      in CONFIG_RCU_NOCB_CPU=n kernels, which cannot possibly be offloaded.
      Given that rcu_pending() is on a fastpath, it makes sense to check for
      CONFIG_RCU_NOCB_CPU=y before invoking rcu_segcblist_is_offloaded().
      This commit therefore makes this change.
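
      The pattern, in sketch form (its placement within rcu_pending() is
      simplified here):

          /* IS_ENABLED() folds to a constant, so CONFIG_RCU_NOCB_CPU=n
           * kernels never execute the runtime check. */
          if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
              rcu_segcblist_is_offloaded(&rdp->cblist))
                  return 0; /* Offloaded callbacks are not handled here. */
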
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use build-time no-CBs check in rcu_core() · c1ab99d6
      Paul E. McKenney authored
      Currently, rcu_core() invokes rcu_segcblist_is_offloaded() each time it
      needs to know whether the current CPU is a no-CBs CPU.  Given that it is
      not possible to change the no-CBs status of a CPU after boot, and given
      that it is not possible to even have no-CBs CPUs in CONFIG_RCU_NOCB_CPU=n
      kernels, this repeated runtime invocation wastes CPU.  This commit
      therefore creates a const on-stack variable to allow this check to be
      done only once per rcu_core() invocation.
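
      In sketch form (the rcu_do_batch() commit below uses the same idiom):

          const bool offloaded = IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
                                 rcu_segcblist_is_offloaded(&rdp->cblist);
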
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use build-time no-CBs check in rcu_do_batch() · ec5ef87b
      Paul E. McKenney authored
      Currently, rcu_do_batch() invokes rcu_segcblist_is_offloaded() each time
      it needs to know whether the current CPU is a no-CBs CPU.  Given that it
      is not possible to change the no-CBs status of a CPU after boot, and given
      that it is not possible to even have no-CBs CPUs in CONFIG_RCU_NOCB_CPU=n
      kernels, this per-callback invocation wastes CPU.  This commit therefore
      creates a const on-stack variable to allow this check to be done only
      once per rcu_do_batch() invocation.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Remove obsolete nocb_q_count and nocb_q_count_lazy fields · c035280f
      Paul E. McKenney authored
      This commit removes the obsolete nocb_q_count and nocb_q_count_lazy
      fields, also removing rcu_get_n_cbs_nocb_cpu(), adjusting
      rcu_get_n_cbs_cpu(), and making rcutree_migrate_callbacks() once again
      disable the ->cblist fields of offline CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use rcu_segcblist for no-CBs CPUs · 5d6742b3
      Paul E. McKenney authored
      Currently the RCU callbacks for no-CBs CPUs are queued on a series of
      ad-hoc linked lists, which means that these callbacks cannot benefit
      from "drive-by" grace periods, thus suffering needless delays prior
      to invocation.  In addition, the no-CBs grace-period kthreads first
      wait for callbacks to appear and later wait for a new grace period,
      which means that callbacks appearing during a grace-period wait can
      be delayed.  These delays increase memory footprint, and could even
      result in an out-of-memory condition.
      
      This commit therefore enqueues RCU callbacks from no-CBs CPUs on the
      rcu_segcblist structure that is already used by non-no-CBs CPUs.  It also
      restructures the no-CBs grace-period kthread so that it checks for
      incoming callbacks while waiting for grace periods.  Also, instead of
      waiting
      for a new grace period, it waits for the closest grace period that will
      cause some of the callbacks to be safe to invoke.  All of these changes
      reduce callback latency and thus the number of outstanding callbacks,
      in turn reducing the probability of an out-of-memory condition.
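
      A sketch of the grace-period kthread's new wait computation (local
      variable names are assumptions):

          /* Wait for the soonest GP that makes queued CBs invocable. */
          if (rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq)) {
                  wait_gp_seq = cur_gp_seq;
                  needwait_gp = true;
          }
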
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Leave ->cblist enabled for no-CBs CPUs · e83e73f5
      Paul E. McKenney authored
      As a first step towards making no-CBs CPUs use the ->cblist, this commit
      leaves the ->cblist enabled for these CPUs.  The main reason to make
      no-CBs CPUs use ->cblist is to take advantage of callback numbering,
      which will reduce the effects of missed grace periods which in turn will
      reduce forward-progress problems for no-CBs CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Allow lockless use of rcu_segcblist_empty() · e6060b41
      Paul E. McKenney authored
      Currently, rcu_segcblist_empty() assumes that the callback list is not
      being changed by other CPUs, but upcoming changes will require it to
      operate locklessly.  This commit therefore adds the needed READ_ONCE()
      call, along with the WRITE_ONCE() calls when updating the callback list's
      ->head field.
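
      The lockless-safe form, in sketch:

          static inline bool rcu_segcblist_empty(struct rcu_segcblist *rsclp)
          {
                  return !READ_ONCE(rsclp->head);
          }
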
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Allow lockless use of rcu_segcblist_restempty() · 76c6927c
      Paul E. McKenney authored
      Currently, rcu_segcblist_restempty() assumes that the callback list
      is not being changed by other CPUs, but upcoming changes will require
      it to operate locklessly.  This commit therefore adds the needed
      READ_ONCE() calls, along with the WRITE_ONCE() calls when updating
      the callback list.
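
      In sketch form, both the tail pointer and its target are now read
      exactly once:

          static inline bool rcu_segcblist_restempty(struct rcu_segcblist *rsclp,
                                                     int seg)
          {
                  return !READ_ONCE(*READ_ONCE(rsclp->tails[seg]));
          }
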
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Remove deferred wakeup checks for extended quiescent states · ca5c8258
      Paul E. McKenney authored
      The idea behind the checks for extended quiescent states at the end of
      __call_rcu_nocb() is to handle cases where call_rcu() is invoked directly
      from within an extended quiescent state, for example, from the idle loop.
      However, this will result in a timer-mediated deferred wakeup, which
      will cause the needed wakeup to happen within a jiffy or thereabouts.
      There should be no forward-progress concerns, and if there are, the proper
      response is to exit the extended quiescent state while executing the
      endless blast of call_rcu() invocations, for example, using RCU_NONIDLE().
      Given the more realistic case of an isolated call_rcu() invocation, there
      should be no problem.
      
      This commit therefore removes the checks for invoking call_rcu() within
      an extended quiescent state on no-CBs CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Check for deferred nocb wakeups before nohz_full early exit · 85f69b32
      Paul E. McKenney authored
      In theory, a timer is used to defer wakeups of no-CBs grace-period
      kthreads when the wakeup cannot be done safely directly from
      call_rcu().  In practice, the one-jiffy delay is not always consistent
      with timely callback invocation under heavy call_rcu() loads.  Therefore,
      there are a number of checks for a pending deferred wakeup, including
      from the scheduling-clock interrupt.  Unfortunately, this check follows
      the rcu_nohz_full_cpu() early exit, which renders it useless on such CPUs.
      
      This commit therefore moves the check for the pending deferred no-CB
      wakeup to precede the rcu_nohz_full_cpu() early exit.
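
      The reordered checks, in sketch form:

          if (rcu_nocb_need_deferred_wakeup(rdp))
                  return 1; /* Must precede the early exit below... */
          if (rcu_nohz_full_cpu())
                  return 0; /* ...which otherwise hides it on nohz_full CPUs. */
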
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Make rcutree_migrate_callbacks() start at leaf rcu_node structure · c00045be
      Paul E. McKenney authored
      Because rcutree_migrate_callbacks() is invoked infrequently and because
      an exact snapshot of the grace-period state might save some callbacks a
      second trip through a grace period, this function has used the root
      rcu_node structure.  However, this safe-second-trip optimization
      happens only if rcutree_migrate_callbacks() races with grace-period
      initialization, so it is not worth the added mental load.  This commit
      therefore makes rcutree_migrate_callbacks() start with the leaf rcu_node
      structures, as is done elsewhere.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Add checks for offloaded callback processing · 750d7f6a
      Paul E. McKenney authored
      This commit is a preparatory patch for offloaded callbacks using the
      same ->cblist structure used by non-offloaded callbacks.  It therefore
      adds rcu_segcblist_is_offloaded() calls where they will be needed when
      !rcu_segcblist_is_enabled() no longer flags the offloaded case.  It also
      adds checks in rcu_do_batch() to ensure that there are no missed checks:
      Currently, it should not be possible for offloaded execution to reach
      rcu_do_batch(), though this will change later in this series.
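
      One of the added checks, in sketch form:

          /* Offloaded execution must not (yet) reach rcu_do_batch(). */
          WARN_ON_ONCE(rcu_segcblist_is_offloaded(&rdp->cblist));
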
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use separate flag to indicate offloaded ->cblist · ce5215c1
      Paul E. McKenney authored
      RCU callback processing currently uses rcu_is_nocb_cpu() to determine
      whether or not the current CPU's callbacks are to be offloaded.
      This works, but it is not so good for cache locality.  Plus use of
      ->cblist for offloaded callbacks will greatly increase the frequency
      of these checks.  This commit therefore adds a ->offloaded flag to the
      rcu_segcblist structure to provide a more flexible and cache-friendly
      means of checking for callback offloading.
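
      The flag and its accessor, in sketch form:

          struct rcu_segcblist {
                  /* head, tails[], gp_seq[], len, len_lazy, ... */
                  u8 enabled;
                  u8 offloaded; /* Callbacks handled by rcuo kthreads. */
          };

          static inline bool rcu_segcblist_is_offloaded(struct rcu_segcblist *rsclp)
          {
                  return rsclp->offloaded;
          }
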
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Use separate flag to indicate disabled ->cblist · 1bb5f9b9
      Paul E. McKenney authored
      NULLing the RCU_NEXT_TAIL pointer was a clever way to save a byte, but
      forward-progress considerations would require that this pointer be both
      NULL and non-NULL, which, absent a quantum-computer port of the Linux
      kernel, simply won't happen.  This commit therefore creates a separate
      ->enabled flag to replace the current NULL checks.
      
      [ paulmck: Add include files per 0day test robot and -next. ]
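
      In sketch form:

          static inline bool rcu_segcblist_is_enabled(struct rcu_segcblist *rsclp)
          {
                  return rsclp->enabled; /* Rather than a NULL RCU_NEXT_TAIL. */
          }
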
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Print gp/cb kthread hierarchy if dump_tree · 18cd8c93
      Paul E. McKenney authored
      This commit causes the no-CBs grace-period/callback hierarchy to be
      printed to the console when the dump_tree kernel boot parameter is set.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename rcu_nocb_leader_stride kernel boot parameter · f7c612b0
      Paul E. McKenney authored
      This commit changes the name of the rcu_nocb_leader_stride kernel
      boot parameter to rcu_nocb_gp_stride in order to account for the new
      distinction between callback and grace-period no-CBs kthreads.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename and document no-CB CB kthread sleep trace event · f7c9a9b6
      Paul E. McKenney authored
      The nocb_cb_wait() function traces a "FollowerSleep" trace_rcu_nocb_wake()
      event, which never was documented and is now misleading.  This commit
      therefore changes "FollowerSleep" to "CBSleep", documents this, and
      updates the documentation for "Sleep" as well.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename rcu_organize_nocb_kthreads() local variable · 0bdc33da
      Paul E. McKenney authored
      This commit renames rdp_leader to rdp_gp in order to account for the
      new distinction between callback and grace-period no-CBs kthreads.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename wake_nocb_leader_defer() to wake_nocb_gp_defer() · 0d52a665
      Paul E. McKenney authored
      This commit adjusts naming to account for the new distinction between
      callback and grace-period no-CBs kthreads.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename __wake_nocb_leader() to __wake_nocb_gp() · 5f675ba6
      Paul E. McKenney authored
      This commit adjusts naming to account for the new distinction between
      callback and grace-period no-CBs kthreads.  While in the area, it also
      updates local variables.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename wake_nocb_leader() to wake_nocb_gp() · 5d62c08c
      Paul E. McKenney authored
      This commit adjusts naming to account for the new distinction between
      callback and grace-period no-CBs kthreads.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename nocb_follower_wait() to nocb_cb_wait() · 9fa471a8
      Paul E. McKenney authored
      This commit adjusts naming to account for the new distinction between
      callback and grace-period no-CBs kthreads.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Provide separate no-CBs grace-period kthreads · 12f54c3a
      Paul E. McKenney authored
      Currently, there is one no-CBs rcuo kthread per CPU, and these kthreads
      are divided into groups.  The first rcuo kthread to come online in a
      given group is that group's leader, and the leader both waits for grace
      periods and invokes its CPU's callbacks.  The non-leader rcuo kthreads
      only invoke callbacks.
      
      This works well in the real-time/embedded environments for which it was
      intended because such environments tend not to generate all that many
      callbacks.  However, given huge floods of callbacks, it is possible for
      the leader kthread to be stuck invoking callbacks while its followers
      wait helplessly while their callbacks pile up.  This is a good recipe
      for an OOM, and rcutorture's new callback-flood capability does generate
      such OOMs.
      
      One strategy would be to wait until such OOMs start happening in
      production, but similar OOMs have in fact happened starting in 2018.
      It would therefore be wise to take a more proactive approach.
      
      This commit therefore features per-CPU rcuo kthreads that do nothing
      but invoke callbacks.  Instead of having one of these kthreads act as
      leader, each group has a separate rcuog kthread that handles grace
      periods for its group.  Because these rcuog kthreads do not invoke
      callbacks, callback floods on one CPU no longer block callbacks from
      reaching the rcuo callback-invocation kthreads on other CPUs.
      
      This change does introduce additional kthreads, however:
      
      1.	The number of additional kthreads is about the square root of
      	the number of CPUs, so that a 4096-CPU system would have only
      	about 64 additional kthreads.  Note that recent changes
      	decreased the number of rcuo kthreads by a factor of two
      	(CONFIG_PREEMPT=n) or even three (CONFIG_PREEMPT=y), so
      	this still represents a significant improvement on most systems.
      
      2.	The leading "rcuo" of the rcuog kthreads should allow existing
      	scripting to affinity these additional kthreads as needed, the
      	same as for the rcuop and rcuos kthreads.  (There are no longer
      	any rcuob kthreads.)
      
      3.	A state-machine approach was considered and rejected.  Although
      	this would allow the rcuo kthreads to continue their dual
      	leader/follower roles, it complicates callback invocation
      	and makes it more difficult to consolidate rcuo callback
      	invocation with existing softirq callback invocation.
      
      The introduction of rcuog kthreads should thus be acceptable.
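
      A sketch of the spawn-time split (function and field names assumed
      from this series):

          struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
          struct task_struct *t;

          /* One rcuog kthread per group, created along with the group's
           * first callback kthread... */
          if (!rdp_gp->nocb_gp_kthread) {
                  t = kthread_run(rcu_nocb_gp_kthread, rdp_gp,
                                  "rcuog/%d", rdp_gp->cpu);
                  if (!WARN_ON_ONCE(IS_ERR(t)))
                          WRITE_ONCE(rdp_gp->nocb_gp_kthread, t);
          }

          /* ...plus one callback-only rcuo kthread per CPU. */
          t = kthread_run(rcu_nocb_cb_kthread, rdp,
                          "rcuo%c/%d", rcu_state.abbr, cpu);
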
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Update comments to prepare for forward-progress work · 6484fe54
      Paul E. McKenney authored
      This commit simply rewords comments to prepare for leader nocb kthreads
      doing only grace-period work and callback shuffling.  This will mean
      the addition of replacement kthreads to invoke callbacks.  The "leader"
      and "follower" thus become less meaningful, so the commit changes no-CB
      comments with these strings to "GP" and "CB", respectively.  (Give or
      take the usual grammatical transformations.)
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename rcu_data fields to prepare for forward-progress work · 58bf6f77
      Paul E. McKenney authored
      This commit simply renames rcu_data fields to prepare for leader
      nocb kthreads doing only grace-period work and callback shuffling.
      This will mean the addition of replacement kthreads to invoke callbacks.
      The "leader" and "follower" thus become less meaningful, so the commit
      changes no-CB fields with these strings to "gp" and "cb", respectively.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • Merge branches 'consolidate.2019.08.01b', 'fixes.2019.08.12a',... · 31da0670
      Paul E. McKenney authored
      Merge branches 'consolidate.2019.08.01b', 'fixes.2019.08.12a', 'lists.2019.08.13a' and 'torture.2019.08.01b' into HEAD
      
      consolidate.2019.08.01b: Further consolidation cleanups
      fixes.2019.08.12a: Miscellaneous fixes
      lists.2019.08.13a: Optional lockdep arguments for RCU list macros
      torture.2019.08.01b: Torture-test updates
    • acpi: Use built-in RCU list checking for acpi_ioremaps list · bee6f871
      Joel Fernandes (Google) authored
      This commit applies the consolidated list_for_each_entry_rcu() support
      for lockdep conditions.
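
      In sketch form (the loop body and surrounding function are
      illustrative only):

          list_for_each_entry_rcu(map, &acpi_ioremaps, list,
                                  lockdep_is_held(&acpi_ioremap_lock))
                  if (map->phys <= phys && phys + size <= map->phys + map->size)
                          return map;
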
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>