Commit 27fdb35f authored by Paul E. McKenney, committed by Linus Torvalds

doc: Fix various RCU docbook comment-header problems

Because many of RCU's files have not been included into docbook, a
number of errors have accumulated.  This commit fixes them.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 533966c8
...
@@ -276,7 +276,7 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
 #define list_entry_rcu(ptr, type, member) \
 	container_of(lockless_dereference(ptr), type, member)
 
-/**
+/*
  * Where are list_empty_rcu() and list_first_entry_rcu()?
  *
  * Implementing those functions following their counterparts list_empty() and
...
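
For illustration, a reader might use list_entry_rcu() along these lines. This is only a sketch; struct foo, first_foo_data(), and the list layout are hypothetical names, not part of this commit:

#include <linux/rculist.h>

struct foo {
	struct list_head list;
	int data;
};

/* Return the data of the first entry, or -1 if the list is empty. */
int first_foo_data(struct list_head *head)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock();
	p = list_entry_rcu(head->next, struct foo, list);
	if (&p->list != head)	/* Did we get a real entry? */
		ret = p->data;
	rcu_read_unlock();
	return ret;
}
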
...
@@ -523,7 +523,7 @@ static inline void rcu_preempt_sleep_check(void) { }
  * Return the value of the specified RCU-protected pointer, but omit
  * both the smp_read_barrier_depends() and the READ_ONCE().  This
  * is useful in cases where update-side locks prevent the value of the
- * pointer from changing.  Please note that this primitive does -not-
+ * pointer from changing.  Please note that this primitive does *not*
  * prevent the compiler from repeating this reference or combining it
  * with other references, so it should not be used without protection
  * of appropriate locks.
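
The primitive documented in this hunk is an update-side accessor; assuming it is rcu_dereference_protected(), which matches this description, a sketch could read as follows (mylock, gp, struct foo, and update_foo() are hypothetical):

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct foo {
	int data;
};

static DEFINE_SPINLOCK(mylock);		/* Update-side lock protecting gp. */
static struct foo __rcu *gp;

void update_foo(int v)
{
	struct foo *p;

	spin_lock(&mylock);
	/* mylock prevents gp from changing, so no read barriers are needed. */
	p = rcu_dereference_protected(gp, lockdep_is_held(&mylock));
	if (p)
		p->data = v;
	spin_unlock(&mylock);
}
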
...
@@ -568,7 +568,7 @@ static inline void rcu_preempt_sleep_check(void) { }
  * is handed off from RCU to some other synchronization mechanism, for
  * example, reference counting or locking.  In C11, it would map to
  * kill_dependency().  It could be used as follows:
- *
+ * ``
  *	rcu_read_lock();
  *	p = rcu_dereference(gp);
  *	long_lived = is_long_lived(p);
...
@@ -579,6 +579,7 @@ static inline void rcu_preempt_sleep_check(void) { }
  *		p = rcu_pointer_handoff(p);
  *	}
  *	rcu_read_unlock();
+ *``
  */
 #define rcu_pointer_handoff(p) (p)
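
Fleshing out the comment's example, a complete handoff might look like the sketch below, using reference counting as the receiving mechanism. Here struct foo, its kref member, gp, and the is_long_lived() predicate from the comment's own example are all hypothetical:

#include <linux/kref.h>
#include <linux/rcupdate.h>

struct foo {
	struct kref ref;
	int data;
};

static struct foo __rcu *gp;

struct foo *get_foo_long_lived(void)
{
	struct foo *p;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p && is_long_lived(p)) {
		kref_get(&p->ref);		/* "do_not_need_rcu()": pin p with a refcount... */
		p = rcu_pointer_handoff(p);	/* ...then end reliance on RCU protection. */
	} else {
		p = NULL;
	}
	rcu_read_unlock();
	return p;		/* Caller drops the reference with kref_put(). */
}
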
...
@@ -778,18 +779,21 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 
 /**
  * RCU_INIT_POINTER() - initialize an RCU protected pointer
+ * @p: The pointer to be initialized.
+ * @v: The value to initialized the pointer to.
  *
  * Initialize an RCU-protected pointer in special cases where readers
  * do not need ordering constraints on the CPU or the compiler.  These
  * special cases are:
  *
- * 1.	This use of RCU_INIT_POINTER() is NULLing out the pointer -or-
+ * 1.	This use of RCU_INIT_POINTER() is NULLing out the pointer *or*
  * 2.	The caller has taken whatever steps are required to prevent
- *	RCU readers from concurrently accessing this pointer -or-
+ *	RCU readers from concurrently accessing this pointer *or*
  * 3.	The referenced data structure has already been exposed to
- *	readers either at compile time or via rcu_assign_pointer() -and-
- *	a.	You have not made -any- reader-visible changes to
- *		this structure since then -or-
+ *	readers either at compile time or via rcu_assign_pointer() *and*
+ *
+ *	a.	You have not made *any* reader-visible changes to
+ *		this structure since then *or*
  *	b.	It is OK for readers accessing this structure from its
  *		new location to see the old state of the structure.  (For
  *		example, the changes were to statistical counters or to
...
@@ -805,7 +809,7 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
  * by a single external-to-structure RCU-protected pointer, then you may
  * use RCU_INIT_POINTER() to initialize the internal RCU-protected
  * pointers, but you must use rcu_assign_pointer() to initialize the
- * external-to-structure pointer -after- you have completely initialized
+ * external-to-structure pointer *after* you have completely initialized
  * the reader-accessible portions of the linked structure.
  *
  * Note that unlike rcu_assign_pointer(), RCU_INIT_POINTER() provides no
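
To make the contrast concrete, a sketch (gp, struct foo, and both functions are hypothetical) showing special case 1 next to ordinary publication:

#include <linux/rcupdate.h>

struct foo {
	int data;
};

static struct foo __rcu *gp;

void publish_foo(struct foo *p)
{
	p->data = 42;			/* Fully initialize first... */
	rcu_assign_pointer(gp, p);	/* ...then publish with ordering. */
}

void unpublish_foo(void)
{
	RCU_INIT_POINTER(gp, NULL);	/* Special case 1: NULLing needs no ordering. */
}
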
...
@@ -819,6 +823,8 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 
 /**
  * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
+ * @p: The pointer to be initialized.
+ * @v: The value to initialized the pointer to.
  *
  * GCC-style initialization for an RCU-protected pointer in a structure field.
  */
...
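
A possible use of RCU_POINTER_INITIALIZER() in a static structure initializer, with all names hypothetical:

#include <linux/rcupdate.h>

struct foo {
	int data;
};

static struct foo default_foo;

struct foo_holder {
	int id;
	struct foo __rcu *fp;
};

static struct foo_holder holder = {
	.id = 0,
	RCU_POINTER_INITIALIZER(fp, &default_foo),
};
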
...
@@ -78,6 +78,7 @@ void synchronize_srcu(struct srcu_struct *sp);
 
 /**
  * srcu_read_lock_held - might we be in SRCU read-side critical section?
+ * @sp: The srcu_struct structure to check
  *
  * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an SRCU
  * read-side critical section.  In absence of CONFIG_DEBUG_LOCK_ALLOC,
...
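
srcu_read_lock_held() typically serves as the lockdep condition in a checked dereference, along these lines (my_srcu, gp, and get_foo() are hypothetical):

#include <linux/srcu.h>

struct foo;

DEFINE_STATIC_SRCU(my_srcu);
static struct foo __rcu *gp;

struct foo *get_foo(void)
{
	/* Complains (under CONFIG_PROVE_RCU) unless in my_srcu's read side. */
	return rcu_dereference_check(gp, srcu_read_lock_held(&my_srcu));
}
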
...
@@ -854,7 +854,7 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp,
 /**
  * call_srcu() - Queue a callback for invocation after an SRCU grace period
  * @sp: srcu_struct in queue the callback
- * @head: structure to be used for queueing the SRCU callback.
+ * @rhp: structure to be used for queueing the SRCU callback.
  * @func: function to be invoked after the SRCU grace period
  *
  * The callback function will be invoked some time after a full SRCU
...
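
A sketch of a call_srcu() user that frees a structure once all pre-existing SRCU readers are done; struct foo, my_srcu, and both functions are hypothetical:

#include <linux/slab.h>
#include <linux/srcu.h>

struct foo {
	struct rcu_head rh;
	int data;
};

DEFINE_STATIC_SRCU(my_srcu);

static void free_foo_cb(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct foo, rh));
}

void retire_foo(struct foo *p)
{
	call_srcu(&my_srcu, &p->rh, free_foo_cb);	/* p freed after an SRCU grace period. */
}
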
...
@@ -85,6 +85,9 @@ void rcu_sync_init(struct rcu_sync *rsp, enum rcu_sync_type type)
 }
 
 /**
+ * rcu_sync_enter_start - Force readers onto slow path for multiple updates
+ * @rsp: Pointer to rcu_sync structure to use for synchronization
+ *
  * Must be called after rcu_sync_init() and before first use.
  *
  * Ensures rcu_sync_is_idle() returns false and rcu_sync_{enter,exit}()
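
A sketch of the intended initialization sequence; my_rss and my_early_init() are hypothetical, and the sched flavor is just one choice:

#include <linux/rcu_sync.h>

static struct rcu_sync my_rss;

void my_early_init(void)
{
	rcu_sync_init(&my_rss, RCU_SCHED_SYNC);	/* Flavor choice is illustrative. */
	rcu_sync_enter_start(&my_rss);		/* rcu_sync_is_idle() now returns false. */
}
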
...
@@ -142,7 +145,7 @@ void rcu_sync_enter(struct rcu_sync *rsp)
 
 /**
  * rcu_sync_func() - Callback function managing reader access to fastpath
- * @rsp: Pointer to rcu_sync structure to use for synchronization
+ * @rhp: Pointer to rcu_head in rcu_sync structure to use for synchronization
  *
  * This function is passed to one of the call_rcu() functions by
  * rcu_sync_exit(), so that it is invoked after a grace period following the
...
@@ -158,9 +161,9 @@ void rcu_sync_enter(struct rcu_sync *rsp)
  * rcu_sync_exit().  Otherwise, set all state back to idle so that readers
  * can again use their fastpaths.
  */
-static void rcu_sync_func(struct rcu_head *rcu)
+static void rcu_sync_func(struct rcu_head *rhp)
 {
-	struct rcu_sync *rsp = container_of(rcu, struct rcu_sync, cb_head);
+	struct rcu_sync *rsp = container_of(rhp, struct rcu_sync, cb_head);
 	unsigned long flags;
 
 	BUG_ON(rsp->gp_state != GP_PASSED);
...
...
@@ -3097,9 +3097,10 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func,
  * read-side critical sections have completed.  call_rcu_sched() assumes
  * that the read-side critical sections end on enabling of preemption
  * or on voluntary preemption.
- * RCU read-side critical sections are delimited by :
- *  - rcu_read_lock_sched() and rcu_read_unlock_sched(), OR
- *  - anything that disables preemption.
+ * RCU read-side critical sections are delimited by:
+ *
+ * - rcu_read_lock_sched() and rcu_read_unlock_sched(), OR
+ * - anything that disables preemption.
  *
  *  These may be nested.
  *
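
As a sketch, a matched sched-flavor reader and updater might look like this; struct foo, gp, and all three functions are hypothetical:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rh;
	int data;
};

static struct foo __rcu *gp;

int read_foo(void)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock_sched();		/* Delimits the section by disabling preemption. */
	p = rcu_dereference_sched(gp);
	if (p)
		ret = p->data;
	rcu_read_unlock_sched();
	return ret;
}

static void free_foo_cb(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct foo, rh));
}

void retire_foo(struct foo *p)
{
	call_rcu_sched(&p->rh, free_foo_cb);	/* Invoked once such readers finish. */
}
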
...
@@ -3124,11 +3125,12 @@ EXPORT_SYMBOL_GPL(call_rcu_sched);
  * handler.  This means that read-side critical sections in process
  * context must not be interrupted by softirqs.  This interface is to be
  * used when most of the read-side critical sections are in softirq context.
- * RCU read-side critical sections are delimited by :
- *  - rcu_read_lock() and rcu_read_unlock(), if in interrupt context.
- *  OR
- *  - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
- *  These may be nested.
+ * RCU read-side critical sections are delimited by:
+ *
+ * - rcu_read_lock() and rcu_read_unlock(), if in interrupt context, OR
+ * - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
+ *
+ * These may be nested.
  *
  * See the description of call_rcu() for more detailed information on
  * memory ordering guarantees.
...
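
For completeness, a bh-flavor read side, reusing the hypothetical struct foo and gp from the previous sketch; an updater would pair it with call_rcu_bh():

int read_foo_bh(void)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock_bh();		/* In process context, this also disables softirqs. */
	p = rcu_dereference_bh(gp);
	if (p)
		ret = p->data;
	rcu_read_unlock_bh();
	return ret;
}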