Commit a9fefdb2 authored by Paul E. McKenney

rcu: Update NOCB comments

This commit updates a few obsolete comments in the RCU callback-offload
code.
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
parent b2c1955b
@@ -1857,22 +1857,24 @@ static void zero_cpu_stall_ticks(struct rcu_data *rdp)
 /*
  * Offload callback processing from the boot-time-specified set of CPUs
- * specified by rcu_nocb_mask. For each CPU in the set, there is a
- * kthread created that pulls the callbacks from the corresponding CPU,
- * waits for a grace period to elapse, and invokes the callbacks.
- * The no-CBs CPUs do a wake_up() on their kthread when they insert
- * a callback into any empty list, unless the rcu_nocb_poll boot parameter
- * has been specified, in which case each kthread actively polls its
- * CPU. (Which isn't so great for energy efficiency, but which does
- * reduce RCU's overhead on that CPU.)
+ * specified by rcu_nocb_mask. For the CPUs in the set, there are kthreads
+ * created that pull the callbacks from the corresponding CPU, wait for
+ * a grace period to elapse, and invoke the callbacks. These kthreads
+ * are organized into leaders, which manage incoming callbacks, wait for
+ * grace periods, and awaken followers, and the followers, which only
+ * invoke callbacks. Each leader is its own follower. The no-CBs CPUs
+ * do a wake_up() on their kthread when they insert a callback into any
+ * empty list, unless the rcu_nocb_poll boot parameter has been specified,
+ * in which case each kthread actively polls its CPU. (Which isn't so great
+ * for energy efficiency, but which does reduce RCU's overhead on that CPU.)
  *
  * This is intended to be used in conjunction with Frederic Weisbecker's
  * adaptive-idle work, which would seriously reduce OS jitter on CPUs
  * running CPU-bound user-mode computations.
  *
- * Offloading of callback processing could also in theory be used as
- * an energy-efficiency measure because CPUs with no RCU callbacks
- * queued are more aggressive about entering dyntick-idle mode.
+ * Offloading of callbacks can also be used as an energy-efficiency
+ * measure because CPUs with no RCU callbacks queued are more aggressive
+ * about entering dyntick-idle mode.
  */
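As background for the updated comment above, here is a minimal userspace sketch of the enqueue/offload pattern it describes, using pthreads rather than kernel primitives. All names here (queue_cb, offload_thread, nocb_poll) are hypothetical and the leader/follower split is omitted for brevity; the sketch only shows a callback being queued with a wakeup when the list was previously empty (skipped when polling is enabled), and an offload thread that pulls the list, waits out a stand-in grace period, and invokes the callbacks.

/* Hypothetical userspace sketch of the NOCB enqueue/offload pattern;
 * names and structure are illustrative, not the kernel implementation. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *);
};

static struct cb *cb_list;	/* callbacks queued by this "CPU" */
static bool nocb_poll;		/* stand-in for the rcu_nocb_poll boot parameter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

/* Queue a callback, waking the offload thread only when the list was
 * empty and polling is not in effect (analogous to the wake_up() above). */
static void queue_cb(struct cb *c)
{
	pthread_mutex_lock(&lock);
	bool was_empty = (cb_list == NULL);
	c->next = cb_list;
	cb_list = c;
	if (was_empty && !nocb_poll)
		pthread_cond_signal(&cv);
	pthread_mutex_unlock(&lock);
}

/* Offload thread: pull the queued callbacks, wait out a stand-in
 * "grace period", then invoke them. */
static void *offload_thread(void *arg)
{
	(void)arg;
	for (;;) {
		struct cb *batch;

		pthread_mutex_lock(&lock);
		while (cb_list == NULL && !nocb_poll)
			pthread_cond_wait(&cv, &lock);
		batch = cb_list;
		cb_list = NULL;
		pthread_mutex_unlock(&lock);

		if (batch == NULL) {	/* polling mode, nothing queued yet */
			usleep(1000);
			continue;
		}
		usleep(1000);		/* placeholder for a real grace-period wait */
		while (batch) {
			struct cb *next = batch->next;
			batch->func(batch);
			batch = next;
		}
	}
	return NULL;
}

static void print_cb(struct cb *c)
{
	(void)c;
	printf("callback invoked\n");
}

int main(void)
{
	pthread_t tid;
	struct cb c1 = { .func = print_cb };

	pthread_create(&tid, NULL, offload_thread, NULL);
	queue_cb(&c1);
	sleep(1);	/* give the offload thread time to run the callback */
	return 0;
}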
@@ -1976,10 +1978,7 @@ static void wake_nocb_leader_defer(struct rcu_data *rdp, int waketype,
 	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
 }
 
-/*
- * Does the specified CPU need an RCU callback for this invocation
- * of rcu_barrier()?
- */
+/* Does rcu_barrier need to queue an RCU callback on the specified CPU? */
 static bool rcu_nocb_cpu_needs_barrier(int cpu)
 {
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
@@ -1995,8 +1994,8 @@ static bool rcu_nocb_cpu_needs_barrier(int cpu)
 	 * callbacks would be posted. In the worst case, the first
 	 * barrier in rcu_barrier() suffices (but the caller cannot
 	 * necessarily rely on this, not a substitute for the caller
-	 * getting the concurrency design right!). There must also be
-	 * a barrier between the following load an posting of a callback
+	 * getting the concurrency design right!). There must also be a
+	 * barrier between the following load and posting of a callback
 	 * (if a callback is in fact needed). This is associated with an
 	 * atomic_inc() in the caller.
 	 */
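The ordering requirement described in this hunk can be illustrated with a self-contained C11 sketch. Everything below (fake_cpu, barrier_cpu_count, post_barrier_callback) is a hypothetical stand-in rather than the kernel's actual data structures; the point is only that the load deciding whether a barrier callback is needed must be ordered before that callback is posted, with a sequentially consistent fetch_add playing the role of the atomic_inc() mentioned in the comment.

/* Hypothetical C11-atomics sketch of the ordering described above. */
#include <stdatomic.h>
#include <stdbool.h>

struct fake_cpu {
	atomic_long nocb_cbs;		/* count of offloaded callbacks queued */
};

static atomic_int barrier_cpu_count;	/* callbacks the barrier still waits for */

/* Illustrative stand-in for posting a per-CPU barrier callback. */
static void post_barrier_callback(struct fake_cpu *cpu)
{
	/* ... enqueue a callback that later decrements barrier_cpu_count ... */
	(void)cpu;
}

/* Does this CPU need a barrier callback?  The caller must ensure the
 * load below is ordered before the posting of the barrier callback. */
static bool cpu_needs_barrier(struct fake_cpu *cpu)
{
	return atomic_load(&cpu->nocb_cbs) != 0;
}

static void barrier_one_cpu(struct fake_cpu *cpu)
{
	if (!cpu_needs_barrier(cpu))
		return;
	/* Sequentially consistent RMW: orders the load above before the
	 * posting below, playing the role of the caller's atomic_inc(). */
	atomic_fetch_add(&barrier_cpu_count, 1);
	post_barrier_callback(cpu);
}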