Commit 26ece8ef authored by Paul E. McKenney

rcu: Fix synchronize_rcu_expedited() header comment

This commit brings the synchronize_rcu_expedited() function's header
comment into line with the new implementation.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
parent a1e12248
@@ -722,13 +722,19 @@ static void sync_rcu_exp_handler(void *info)
  * synchronize_rcu_expedited - Brute-force RCU grace period
  *
  * Wait for an RCU-preempt grace period, but expedite it.  The basic
- * idea is to invoke synchronize_sched_expedited() to push all the tasks to
- * the ->blkd_tasks lists and wait for this list to drain.  This consumes
- * significant time on all CPUs and is unfriendly to real-time workloads,
- * so is thus not recommended for any sort of common-case code.
- * In fact, if you are using synchronize_rcu_expedited() in a loop,
- * please restructure your code to batch your updates, and then Use a
- * single synchronize_rcu() instead.
+ * idea is to IPI all non-idle non-nohz online CPUs.  The IPI handler
+ * checks whether the CPU is in an RCU-preempt critical section, and
+ * if so, it sets a flag that causes the outermost rcu_read_unlock()
+ * to report the quiescent state.  On the other hand, if the CPU is
+ * not in an RCU read-side critical section, the IPI handler reports
+ * the quiescent state immediately.
+ *
+ * Although this is a greate improvement over previous expedited
+ * implementations, it is still unfriendly to real-time workloads, so is
+ * thus not recommended for any sort of common-case code.  In fact, if
+ * you are using synchronize_rcu_expedited() in a loop, please restructure
+ * your code to batch your updates, and then Use a single synchronize_rcu()
+ * instead.
  */
 void synchronize_rcu_expedited(void)
 {
...